It’s Time to Standardize RDF Messages
This poster paper proposes RDF Messages as an explicit interoperability concept for grouping RDF statements into atomic communicative units across streams, logs, serializations, and tooling.
-
The Vocabulary Hub as a Catalog for Semantic Artifacts for Discovery and Alignment of Datasets
This paper shows how dataset profiles and profile alignments can extend vocabulary hubs in data spaces, enabling discovery and alignment of datasets toward consumer data models.
-
Context Associations: an Application-Independent Annotation Method for RDF Knowledge Graphs
This paper presents Context Associations as a uniform representation for contextual information associated with RDF statements, supporting discovery, exchange, storage, and processing of heterogeneous annotations.
-
Incentivizing sustainable data exchanges through unique contextualization of history and destiny
This paper motivates Trust Envelopes as uniquely contextualized wrappers around data, provenance, and usage conditions to support more sustainable and compliant data exchanges.
-
Miravi: a linked data viewer
Miravi is a configurable Linked Data viewer that demonstrates how decentralized and permissioned Linked Data sources can still be queried and visualized in a user-friendly Web interface.
-
May the FORCE be with you? A framework for ODRL rule compliance through evaluation
This paper presents FORCE, a framework and Web playground built around ODRL evaluation to make policy interpretation more uniform, testable, and easier to understand.
-
Extending RML to support permissioned data sharing with multiple views
The usage of Semantic Web technologies for data integration has extended from open data sharing to permissioned data sharing, as exemplified by standardization efforts from, for example, the Solid project, the Fedora Repository, and more broadly the deployment of data spaces. This leads to more applications of Semantic Web technologies and the Resource Description Framework (RDF) in particular.
-
It’s about time: time functions for comparing partial and floating time literals in SPARQL
This paper introduces Time Functions, a set of SPARQL extensions that compare partial and floating temporal literals consistently by treating them as time intervals.
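The core idea can be sketched in a few lines: map each partial literal to the interval it covers, then compare intervals rather than points. This is an illustrative Python sketch, not the paper's SPARQL functions; the names `interval` and `definitely_before` are made up for this example.

```python
from datetime import datetime, timedelta

def interval(partial: str):
    """Map a partial date literal (e.g. "2024", "2024-05") to the
    [start, end) interval of instants it covers."""
    formats = {4: "%Y", 7: "%Y-%m", 10: "%Y-%m-%d"}
    start = datetime.strptime(partial, formats[len(partial)])
    if len(partial) == 4:        # a year covers until the next year
        end = start.replace(year=start.year + 1)
    elif len(partial) == 7:      # a month covers until the next month
        end = (start.replace(day=28) + timedelta(days=4)).replace(day=1)
    else:                        # a day covers until the next day
        end = start + timedelta(days=1)
    return start, end

def definitely_before(a: str, b: str):
    """True/False when the order is determinable; None when the
    intervals overlap and the comparison is undecided."""
    a_start, a_end = interval(a)
    b_start, b_end = interval(b)
    if a_end <= b_start:
        return True
    if b_end <= a_start:
        return False
    return None

print(definitely_before("2024-05", "2025"))  # True
print(definitely_before("2024", "2024-05"))  # None: "2024" contains May 2024
```

Treating partials as intervals makes three-valued answers explicit instead of silently coercing "2024" to an arbitrary instant.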
-
Towards queryable verifiable credentials
This paper proposes a SPARQL-based approach to request semantically aligned claims from verifiable credentials within OID4VP flows.
-
The semantic web language server: enhancing the developer experience for semantic web practitioners
The Semantic Web has introduced a variety of syntaxes for serializing, querying, and validating linked data, such as Turtle, SPARQL, and SHACL. While these formats enable powerful interactions with data, they are highly sensitive to human error; even minor typos can disrupt the semantics of a document, rendering it invalid or non-interoperable.
-
Interoperable interpretation and evaluation of ODRL policies
On the Web, consent banners (cookies) are the prevailing response to legislation such as GDPR for handling protected data. These banners are meant to inform users about how their personal data will be managed by services and third parties.
-
Bringing modern IDE features to Semantic Web formats with the Semantic Web Language Server
Authoring Semantic Web documents such as ontologies or SPARQL queries is error-prone, often leading to interoperability issues, validation failures, or incorrect reasoning.
-
Demonstrating a pragmatic solution to context associations in RDF using blank node graphs
This paper proposes blank node graphs as a pragmatic way to associate provenance, policy, and other context information with exact RDF statement sets without accidental graph merging.
-
Enforcing usage control policies in solid using rule-based web agents
A core feature of the Solid ecosystem is enabling the sharing of data resources with other agents using access control policies. However, the decision of allowing access (or not) to a given data resource may not be final and might need to change over time.
-
Linking application and semantic data with RDF Lens
Linked Data is commonly regarded as an unfriendly data structure to be directly used by application developers. The (often unknown) triple-based structure of RDF graphs causes developers to struggle to extract the triples of interest and translate them into the object-like structure needed for their application.
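The lens idea can be illustrated with a toy projection: declare which predicates map to which object fields, then project the matching triples into a plain dict. This is a made-up sketch of the concept, not the actual RDF Lens syntax or API.

```python
# Hypothetical lens: a mapping from RDF predicates to object field names.
def apply_lens(triples, subject, lens):
    """Project the triples about `subject` into an application object,
    keeping only the predicates the lens declares."""
    obj = {}
    for s, p, o in triples:
        if s == subject and p in lens:
            obj[lens[p]] = o
    return obj

triples = [
    ("ex:alice", "foaf:name", "Alice"),
    ("ex:alice", "foaf:mbox", "alice@example.org"),
    ("ex:alice", "ex:internal", "ignored"),  # not declared, so dropped
]
lens = {"foaf:name": "name", "foaf:mbox": "email"}
print(apply_lens(triples, "ex:alice", lens))
# {'name': 'Alice', 'email': 'alice@example.org'}
```

The developer works with the resulting object shape and never touches the triple structure directly.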
-
Interoperable and continuous usage control enforcement in dataspaces
In many use cases, policies governing data access need to take time into account: for example, in logistics, the location of a delivery vehicle may only be exposed by a recipient for the duration of their delivery. The Open Digital Rights Language (ODRL) standard, a commonly used policy expressing language, does not support this type of dynamicity.
-
Semantic and technically interoperable data exchange in the Flanders Smart Data Space
This paper introduces the Flanders Smart Data Space and shows, through the traffic measurements data space, how semantic and technical interoperability reduce integration effort across organizational boundaries.
-
RDF-Connect: a declarative framework for streaming and cross-environment data processing pipelines
Data processing pipelines are a crucial component of any data-centric system today. Machine learning, data integration, and knowledge graph publishing are examples where data processing pipelines are needed.
-
Optimizing traversal queries of sensor data using a rule-based reachability approach
Link Traversal queries face challenges in completeness and long execution time due to the size of the web. Reachability criteria define completeness by restricting the links followed by engines.
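The role of a reachability criterion can be sketched as a filter on which links a traversal engine follows. The link graph and the criterion below are made-up examples for illustration, not the paper's rules.

```python
from collections import deque

def traverse(start, links, follow):
    """BFS over documents, following only the links whose predicate
    satisfies the reachability criterion `follow`."""
    seen, queue, visited = {start}, deque([start]), []
    while queue:
        doc = queue.popleft()
        visited.append(doc)
        for predicate, target in links.get(doc, []):
            if follow(predicate) and target not in seen:
                seen.add(target)
                queue.append(target)
    return visited

links = {
    "a": [("rdfs:seeAlso", "b"), ("ex:other", "c")],
    "b": [("rdfs:seeAlso", "d")],
}
# Follow only rdfs:seeAlso links, a simple example of a criterion
# that bounds the traversal:
print(traverse("a", links, lambda p: p == "rdfs:seeAlso"))  # ['a', 'b', 'd']
```

A stricter criterion visits fewer documents (shorter execution) at the cost of potentially missing results; rule-based criteria aim to tighten the filter without losing answers.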
-
The open circularity platform: a decentralized data sharing platform for circular value networks
A Circular Value Network (CVN) is established when connections between actors (e.g. companies and consumers) are exploited to maximally use resources (e.g. by enabling reuse or recycling). Increasing the potential of a CVN requires information flowing between actors, regardless of the domain they operate in, the systems they use, and the data models they adhere to.
-
RML-view-to-CSV: a proof-of-concept implementation for RML Logical Views
Although the work of the W3C Community Group on Knowledge Graph Construction (KGC) on the modular RDF Mapping Language (RML) specification has taken great strides, open issues and corresponding solution proposals remain. These issues include (i) the inability to handle hierarchy in nested data, (ii) limited join functionality, and (iii) the inability to handle mixed data formats.
-
Opportunities for shape-based optimization of link traversal queries
Data on the web is naturally unindexed and decentralized. Centralizing web data, especially personal data, raises ethical and legal concerns.
-
Publishing public transport data on the Web with the Linked Connections framework
Publishing transport data on the Web for consumption by others poses several challenges for data publishers. In addition to planned schedules, access to live schedule updates (e.g. delays or cancellations) and historical data is fundamental to enable reliable applications and to support machine learning use cases.
-
YARRRML + LDES: simultaneously lowering the complexity of knowledge graph generation and publication
Linked Data Event Streams (LDES) is an advanced Knowledge Graph (KG) publication specification aimed at continuous data source replication and synchronization, with benefits such as entity versioning and history retention, while providing a self-descriptive API. However, building an LDES requires a high level of expertise in the Semantic Web ecosystem.
-
Towards algebraic mapping operators for knowledge graph construction
Declarative knowledge graph construction has matured to the point where state-of-the-art techniques focus on optimizing the mapping processes. However, these optimization techniques use the syntax of the mapping language without considering the impact of its semantics.
-
LDESTS: enabling efficient storage and querying of large volumes of time series data on Solid pods
The Solid ecosystem provides a good foundation for the decentralised Web. However, the current document-based implementations of the Solid specification lack support to efficiently interact with high volumes of time series data.
-
Bringing IDE support to JSON-LD with the language server protocol
JSON-LD is a popular data format used to describe and share semantic data on the web. However, creating and editing JSON-LD documents can be a challenging task, especially when dealing with complex contexts that include many properties.
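One concrete editing error such tooling can catch is a property that is not defined in the `@context`. This toy sketch (not the language server's implementation) flags such terms in a flat document using only the standard library:

```python
def undefined_terms(doc: dict):
    """Return the non-keyword keys of a flat JSON-LD document that the
    @context does not define (a common source of silently dropped data)."""
    context = doc.get("@context", {})
    return [k for k in doc if not k.startswith("@") and k not in context]

doc = {
    "@context": {"name": "http://xmlns.com/foaf/0.1/name"},
    "name": "Alice",
    "emial": "alice@example.org",  # typo: undefined, so it would be ignored
}
print(undefined_terms(doc))  # ['emial']
```

In JSON-LD processing, undefined terms are simply dropped during expansion, which is why editor-time feedback is valuable.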
-
Linked data event streams in solid LDP containers
The Solid Project – at the time of writing – uses containers with resources in them, as defined in the LDP specification, as a way to give developers the flexibility to write to storage in the way they see fit.
-
Reference conditions: relating mapping rules without joining
Existing knowledge graph construction mapping languages have a legacy of mapping over relational databases. As such, the join constructs of current mapping languages conflate securing referential integrity with relating concepts across data sources.
-
A policy-oriented architecture for enforcing consent in solid
The Solid project aims to restore end-users’ control over their data by decoupling services and applications from data storage. To realize data governance by the user, the Solid Protocol 0.9 relies on Web Access Control, which has limited expressivity and interpretability.
-
Publishing cultural heritage collections of Ghent with linked data event streams
Cultural heritage institutions maintain digital artefacts of their collections using Collection Management Software (CMS). In order to attract new audiences, these data should be interoperable with and reusable within other Web APIs.
-
Continuous generation of versioned collections’ members with RML and LDES
When evolving datasets are used to generate a knowledge graph, it is usually challenging to keep the graph synchronized in a timely manner when changes occur in the source data. Current approaches fully regenerate a knowledge graph in such cases, which may be time consuming depending on the data type, size, and update frequency.
-
What’s in a Pod? A knowledge graph interpretation for the Solid ecosystem
The Solid vision aims to make data independent of applications through technical specifications, which detail how to publish and consume permissioned data across multiple autonomous locations called “pods”. The current document-centric interpretation of Solid, wherein a pod is a single hierarchy of Linked Data documents, cannot fully realize this independence.
-
Describing a network of live datasets with the SDS vocabulary
Data publishers can provide multiple interfaces per dataset, each with its own merits and drawbacks: SPARQL endpoints are expensive to host, while clients find it difficult to work with static data dumps.
-
Integrating OSLO semantics in word processors
Documents issued by the government, such as public tenders or policy documents, often lack consistent semantics, which leads to ambiguities and misinterpretations. Take, for example, granting subsidies to companies.
-
Using ANPR data to create an anonymized linked open dataset on urban bustle
ANPR cameras allow the automatic detection of vehicle license plates and are increasingly used for law enforcement. However, the statistical data generated by ANPR cameras is also a potential source of urban insights.
-
Third-party payment specification for MaaS
Mobility as a service (MaaS) allows intelligent transportation across multiple mobility providers, such as calculating the least expensive route. However, the current standards do not tackle third-party payments (TPPs), where a third party compensates part of a traveller’s trip cost when certain criteria are met.
-
Leveraging semantic technologies for digital interoperability in the European railway domain
The European Union Agency for Railways is a European authority tasked with providing a legal and technical framework to support harmonized and safe cross-border railway operations throughout the EU. So far, the agency has relied on traditional application-centric approaches to support data exchange among the multiple actors interacting within the railway domain.
-
A file-based linked data fragments approach to prefix search
Text fields that need to look up specific entities in a dataset can be equipped with autocompletion functionality. When a dataset becomes too large to be embedded in the page, setting up a full-text search API is not the only alternative.
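The essence of serving prefix queries from static, sorted data can be sketched in a few lines: once labels are sorted (as they would be in a precomputed file fragment), a prefix query reduces to two binary searches. This is an illustrative sketch of the idea, not the paper's fragment format.

```python
import bisect

def prefix_search(sorted_labels, prefix):
    """Return all labels starting with `prefix`, found via two binary
    searches over an already-sorted list."""
    lo = bisect.bisect_left(sorted_labels, prefix)
    hi = bisect.bisect_left(sorted_labels, prefix + "\uffff")
    return sorted_labels[lo:hi]

labels = sorted(["Aalst", "Aalter", "Antwerp", "Brussels", "Ghent"])
print(prefix_search(labels, "Aal"))  # ['Aalst', 'Aalter']
```

Because the data is static and sorted, the server (or a static file host) does no per-query computation; the client fetches the fragment covering the prefix range.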
-
Geospatially partitioning public transit networks for open data publishing
Public transit operators often publish their open data in a data dump, but developers with limited computational resources may not have the means to process all this data efficiently. In our prior work we have shown that geospatially partitioning an operator’s network can improve query times for client-side route planning applications by a factor of 2.4.
-
Facilitating the analysis of COVID-19 literature through a knowledge graph
At the end of 2019, Chinese authorities alerted the World Health Organization (WHO) to the outbreak of a new strain of coronavirus, SARS-CoV-2, which struck humanity as an unprecedented disaster a few months later. In response to this pandemic, a publicly available dataset containing information on over 63,000 papers was released on Kaggle.
-
Velopark: a linked open data platform for bicycle parkings
Cycling as a means of urban transportation is positively correlated with cleaner, healthier, and happier cities. By providing more infrastructure, such as secure parking facilities, cities aim to attract more cyclists.
-
Interactive route personalization using regions of interest
While there is an abundance of services and applications that find the most efficient route between two places, people are not always interested in efficiency; sometimes we just want a pleasant route. Such routes are subjective, though, and may depend on contextual factors that route planners are oblivious to.
-
How to prototype a client-side route planner for Helsinki with routable tiles and linked connections
Route planning is key in application domains such as delivery services, tourism advice, and ride sharing. Today’s route-planning-as-a-service solutions do not cover all requirements of each use case, forcing application developers to build their own self-hosted route planners.
-
Geospatial partitioning of open transit data
Public transit operators often publish their open data as a single data dump, but developers with limited computational resources may not be able to process all this data. Existing work has already focused on fragmenting the data by departure time, so that data consumers can be more selective in the data they process.
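The core of a geospatial partition can be illustrated with a fixed-size grid: each stop is assigned to the cell containing it, so clients fetch only the tiles covering their query area. The cell size and stop coordinates below are made-up example values, not the paper's partitioning scheme.

```python
import math

def tile(lat, lon, cell_deg=0.5):
    """Assign a coordinate to a fixed-size grid cell (cell_deg degrees)."""
    return (math.floor(lat / cell_deg), math.floor(lon / cell_deg))

stops = {"Ghent": (51.04, 3.72), "Antwerp": (51.22, 4.40)}
partitions = {}
for name, (lat, lon) in stops.items():
    partitions.setdefault(tile(lat, lon), []).append(name)
print(partitions)
```

Real transit partitioning typically follows the network's shape rather than a uniform grid, but the consumer-side benefit is the same: selective retrieval by area.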
-
Efficient live public transport data sharing for route planning on the Web
Web-based information services transformed how we interact with public transport. Discovering alternatives to reach destinations and obtaining live updates about them is necessary to optimize journeys and improve the quality of travellers’ experience.
-
Comparing a polling and push-based approach for live open data interfaces
There are two mechanisms for publishing live changing resources on the Web: a client can pull the latest state of a resource, or the server pushes updates to the client. In the state of the art, it is clear that pushing delivers lower latency than pulling; however, this has not been tested for an Open Data usage scenario where 15,000 clients are not an exception.
-
Open traffic lights: a strategy for publishing and preserving traffic lights data
To improve traffic flow and support better policy decisions, the city of Antwerp is connecting traffic lights to the Internet. The live “time to green” tells only part of the story: the historical values also need to be preserved and made accessible to everyone.
-
Client-side route planning: preprocessing the OpenStreetMap road network for Routable Tiles
This paper presents work on client-side route planning, preprocessing the OpenStreetMap road network for Routable Tiles, by Delva and colleagues.
-
Decentralized publication and consumption of transfer footpaths
Users expect route planners that combine all modes of transportation to propose good journeys to their destination. These route planners use data from several sources such as road networks and schedule-based public transit.
-
Using an existing website as a queryable low-cost LOD publishing interface
Maintaining an Open Dataset comes at an extra recurring cost when it is published in a dedicated Web interface. As there is not often a direct financial return from publishing a dataset publicly, these extra costs need to be minimized.
-
Republishing OpenStreetMap’s roads as linked routable tiles
Route planning providers manually integrate different geo-spatial datasets before offering a Web service to developers, thus creating a closed world view. In contrast, combining open datasets at runtime can provide more information for user-specific route planning needs.
-
Supporting sustainable publishing and consuming of live linked time series streams
The road to publishing public streaming data on the Web is paved with trade-offs that determine its viability. The cost of unrestricted query answering on top of data streams may not be affordable for all data publishers.
-
Predicting train occupancies based on query logs and external data sources
A machine learning paper on predicting how crowded trains will be by combining user query logs with external transport data sources.
-
Constraints for a large-scale ITS data-sharing system: a use case in the city of Ghent
This paper applies Web architecture constraints to large-scale ITS data sharing and shows how those constraints make transport datasets easier to cache, query, and reuse.
-
Open Transport Data for maximising reuse in multimodal route planners: a study in Flanders
Open transport data only becomes reusable when legal, syntactic, semantic, and querying concerns are aligned. This paper distils 10 practical challenges from interviews and workshops in Flanders.
-
Public Transit Route Planning through Lightweight Linked Data Interfaces
Linked Connections shows how a lightweight Linked Data interface can make public transit route planning cheaper to publish and more scalable to reuse than a traditional origin-destination API.
-
Continuously updating query results over real-time linked data
Existing solutions to query dynamic Linked Data sources extend the SPARQL language, and require continuous server processing for each query. Traditional SPARQL endpoints accept highly expressive queries, contributing to high server cost.
-
Continuous client-side query evaluation over dynamic linked data
Existing solutions to query dynamic Linked Data sources extend the SPARQL language, and require continuous server processing for each query. Traditional SPARQL endpoints already accept highly expressive queries, so extending these endpoints for time-sensitive queries increases the server cost even further.
-
Triple pattern fragments: a low-cost knowledge graph interface for the web
Billions of Linked Data triples exist in thousands of RDF knowledge graphs on the Web, but few of those graphs can be queried live from Web applications. Only a limited number of knowledge graphs are available in a queryable interface, and existing interfaces can be expensive to host at high availability.
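The server-side core of such an interface is deliberately simple: answer one triple pattern (with wildcards) per request, paged, with a result-count estimate, and leave full query evaluation to the client. This is a toy in-memory sketch of that contract, not the actual Triple Pattern Fragments server implementation; the response keys are illustrative.

```python
def fragment(triples, s=None, p=None, o=None, page=0, page_size=2):
    """Match one (s, p, o) pattern (None = wildcard) over a triple list,
    returning one page of matches plus a count estimate."""
    matches = [t for t in triples
               if (s is None or t[0] == s)
               and (p is None or t[1] == p)
               and (o is None or t[2] == o)]
    start = page * page_size
    return {"estimate": len(matches),
            "data": matches[start:start + page_size]}

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:bob", "foaf:knows", "ex:carol"),
]
print(fragment(triples, p="foaf:knows"))
```

Because every request is a cheap, cacheable pattern lookup, the interface stays affordable to host at high availability; the client combines fragments to evaluate full SPARQL queries.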
-
Intermodal public transit routing using Linked Connections
Ever since public transit agencies have found their way to the Web, they inform travelers using route planning software made available on their website. These travelers also need to be informed about other modes of transport, for which they have to consult other websites, or for which they have to ask the transit agency’s server maintainer to implement new functionalities.
-
Quantifying the interoperability of open government datasets
A proposed technique quantifies the semantic interoperability of open government datasets with three metrics calculated using a set of statements that indicate for each pair of identifiers in the system whether or not they represent the same concept.
-
The 5 stars of open data portals
This paper presents work on the 5 stars of open data portals by Colpaert and colleagues.