1

Reasoning about quality in the Web of Linked Data

Baillie, Chris January 2015
In recent years the Web has evolved from a collection of hyperlinked documents into a vast ecosystem of interconnected documents, devices, services, and agents. However, the open nature of the Web enables anyone or anything to publish any content they choose, so poor-quality data can propagate quickly; an appropriate mechanism for assessing the quality of such data is therefore essential if agents are to identify reliable information for use in decision-making. Existing assessment frameworks investigate the context around data (additional information that describes the situation in which a datum was created). Such metadata can be made available by publishing information to the Web of Linked Data. However, there are situations in which examining context alone is not sufficient, such as when one must identify the agent responsible for creating the data or the transformational processes applied to it. In these situations, examining data provenance is critical to identifying quality issues. Moreover, there will be situations in which an agent is unable to perform a quality assessment of its own, for example when the original contextual metadata is no longer available. Here, it may be possible for agents to explore the provenance of previous quality assessments and make decisions about re-using their results. This thesis explores these issues around quality assessment and provenance in the Web of Linked Data. It contributes a formal model of quality assessment designed to align with emerging standards for provenance on the Web. This model is then realised as an OWL ontology, which can be used as part of a software framework to perform data quality assessment. Through a number of real-world examples, spanning the environmental sensing, invasive species monitoring, and passenger information domains, the thesis establishes the importance of examining provenance as part of quality assessment. Moreover, it demonstrates that by examining quality assessment provenance, agents can make re-use decisions about existing quality assessment results. These implementations include sets of example quality metrics that demonstrate how metrics can be encoded using the SPARQL Inferencing Notation (SPIN).
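
The abstract mentions example quality metrics encoded in SPIN. As a rough illustration of the idea, the sketch below expresses one such metric as a plain SPARQL query over hypothetical sensor data; the ex: namespace, the ex:celsius property, and the plausibility bounds are invented for the example and are not taken from the thesis.

```python
# A SPIN-style quality metric expressed as a SPARQL query: readings whose
# temperature falls outside a plausible range are returned as violations.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import XSD

EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)
g.add((EX.reading1, EX.celsius, Literal(21.5, datatype=XSD.decimal)))
g.add((EX.reading2, EX.celsius, Literal(480.0, datatype=XSD.decimal)))  # implausible

metric = """
PREFIX ex: <http://example.org/>
SELECT ?reading ?value
WHERE {
  ?reading ex:celsius ?value .
  FILTER (?value < -60 || ?value > 60)
}
"""

for row in g.query(metric):
    print(f"quality violation: {row.reading} = {row.value}")
```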
2

Experiments with Linked Data

Nohejl, Pavel January 2011
The goal of this master thesis is to create a "manual" for the Linked Data technology. The first part of the thesis describes the Semantic Web and its relationship to Linked Data, followed by a detailed explanation of Linked Data and the so-called "Linked Data principles", including the technologies and tools involved. The second part presents practical experience with creating and using Linked Data. It first describes obtaining data on public procurement with a web crawler developed for this purpose, followed by a description of transforming the obtained (relational) data into Linked Data and interlinking it with external Linked Data sources. The thesis also includes an application consuming the created Linked Data, which is compared with the traditional approach of consuming data from a relational database; the comparison is supplemented by a benchmark. Finally, a manual for the beginning developer is presented, summarising our experience, together with a list of the problems that, from our point of view, must be solved for Linked Data to develop further.
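
A minimal sketch of the relational-to-Linked-Data step described above: each row of a (hypothetical) public-procurement table becomes an RDF resource with typed properties and a link into an external dataset. The ex: vocabulary, the sample row, and the DBpedia URI are illustrative assumptions, not the thesis's actual data.

```python
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/procurement/")  # hypothetical vocabulary

rows = [  # stand-in for a relational query result
    {"id": 42, "title": "Road maintenance", "price": 125000.0,
     "supplier": "http://dbpedia.org/resource/Example_Company"},
]

g = Graph()
g.bind("ex", EX)
for row in rows:
    contract = EX[f"contract/{row['id']}"]
    g.add((contract, RDF.type, EX.Contract))
    g.add((contract, EX.title, Literal(row["title"])))
    g.add((contract, EX.price, Literal(row["price"], datatype=XSD.decimal)))
    # Interlinking: point at a resource in an external Linked Data source.
    g.add((contract, EX.supplier, URIRef(row["supplier"])))

print(g.serialize(format="turtle"))
```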
3

Privacy-aware Linked Widgets

Fernandez Garcia, Javier D., Ekaputra, Fajar J., Aryan, Peb Ruswono, Azzam, Amr, Kiesling, Elmar January 2019
The European General Data Protection Regulation (GDPR) brings new challenges for companies, who must demonstrate that their systems and business processes comply with usage constraints specified by data subjects. However, due to the lack of standards, tools, and best practices, many organizations struggle to adapt their infrastructure and processes to ensure, and demonstrate, that all data processing complies with users' given consent. The SPECIAL EU H2020 project has developed vocabularies that can formally describe data subjects' given consent, as well as methods that use this description to automatically determine whether processing of the data according to a given policy is compliant with that consent. Whereas this makes it possible to determine whether processing was compliant or not, integration of the approach into existing line-of-business applications and ex-ante compliance checking remain an open challenge. In this short paper, we demonstrate how the SPECIAL consent and compliance framework can be integrated into Linked Widgets, a mashup platform, in order to support privacy-aware ad-hoc integration of personal data. The resulting environment makes it possible to create data integration and processing workflows out of components that inherently respect the usage policies of the data being processed and are able to demonstrate compliance. We provide an overview of the necessary metadata and orchestration towards a privacy-aware linked data mashup platform that automatically respects subjects' given consent. The evaluation results show the potential of our approach for ex-ante usage policy compliance checking within the Linked Widgets Platform and beyond.
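
As a rough illustration of ex-ante compliance checking, the sketch below models the check as subsumption: a processing request is permitted only if its purposes, data categories, and recipients all fall within the data subject's consent. The Policy class and its three facets are a simplified, hypothetical stand-in for the SPECIAL vocabularies, not the project's actual model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    purposes: frozenset        # e.g. {"analytics"}
    data_categories: frozenset # e.g. {"location"}
    recipients: frozenset      # e.g. {"controller"}

def compliant(request: Policy, consent: Policy) -> bool:
    """True if the requested processing is subsumed by the given consent."""
    return (request.purposes <= consent.purposes
            and request.data_categories <= consent.data_categories
            and request.recipients <= consent.recipients)

consent = Policy(frozenset({"analytics", "billing"}),
                 frozenset({"location"}), frozenset({"controller"}))
request = Policy(frozenset({"analytics"}),
                 frozenset({"location"}), frozenset({"controller"}))
print(compliant(request, consent))  # True: the widget may process the data
```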
4

Analysing and Visualizing Statistical Linked Data

Helmich, Jiří January 2013
The thesis describes several means of processing statistical data in the Linked Data environment, focusing in particular on the use of the Data Cube Vocabulary (DCV) metaformat. It describes tools for analysing and visualizing RDF data, not only from the statistical point of view. An integral part of this work is a description of the Payola tool, on whose development the author is still working. The main outcome of the thesis is the design and subsequent implementation of a system that converts RDF data in compliance with the DCV vocabularies. The designed system was implemented and integrated into the Payola application, and the author also implemented several other extensions. The limitations arising from the integration with Payola are discussed as part of the implementation process. In conclusion, the author describes a few experiments in which selected datasets were applied to the implemented system.
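
A minimal sketch of what a statistical observation looks like in the RDF Data Cube Vocabulary that the thesis converts data into. The qb: terms are the vocabulary's real ones; the ex: dataset, dimensions, measure, and figures are invented for the example.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/stats/")  # hypothetical dataset namespace

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

g.add((EX.unemployment, RDF.type, QB.DataSet))

obs = EX["obs/cz-2013"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX.unemployment))
g.add((obs, EX.refArea, Literal("CZ")))                           # dimension
g.add((obs, EX.refPeriod, Literal("2013", datatype=XSD.gYear)))   # dimension
g.add((obs, EX.rate, Literal(7.0, datatype=XSD.decimal)))         # measure

print(g.serialize(format="turtle"))
```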
5

Towards Trustworthy Linked Data Integration and Consumption

Knap, Tomáš January 2013
Title: Towards Trustworthy Linked Data Integration and Consumption Author: RNDr. Tomáš Knap Department: Department of Software Engineering Supervisor: RNDr. Irena Holubová, PhD., Department of Software Engineering Abstract: We are now finally at a point when datasets based upon open standards are being published on an increasing basis by a variety of Web communities, governmental initiatives, and various companies. Linked Data offers information consumers a level of information integration and aggregation agility that has up to now not been possible. Consumers can now "mashup" and readily integrate information for use in a myriad of alternative end uses. Indiscriminate addition of information can, however, come with inherent problems, such as the provision of poor quality, inaccurate, irrelevant or fraudulent information, all with associated costs that negatively affect the data consumer's benefit and the usage and uptake of Linked Data applications. In this thesis, we address these issues by proposing ODCleanStore, a Linked Data management and querying tool able to provide data consumers with Linked Data that is cleansed, properly linked, integrated, and trustworthy according to the consumer's subjective requirements. Trustworthiness of data means that the data has associated...
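
As a rough illustration of what trust-aware integration involves, the sketch below fuses conflicting values from sources carrying different trust weights. This is a generic trust-weighted voting scheme assumed for illustration, not ODCleanStore's actual conflict-resolution algorithm.

```python
from collections import defaultdict

def fuse(claims):
    """claims: list of (value, source_trust in [0,1]). Returns the value
    with the highest accumulated trust, plus a confidence score."""
    scores = defaultdict(float)
    for value, trust in claims:
        scores[value] += trust
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())

# Three sources disagree about a city's population.
claims = [(1_300_000, 0.9), (1_300_000, 0.6), (1_250_000, 0.4)]
print(fuse(claims))  # (1300000, ~0.79): the better-trusted value wins
```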
6

Hybrid Question Answering over Linked Data

Bahmid, Rawan 13 August 2018
The emergence of Linked Data in the form of knowledge graphs in RDF has been one of the most recent evolutions of the Semantic Web. This has led to the development of question answering systems based on RDF and SPARQL that allow end users to access and benefit from these knowledge graphs. However, much of the information on the Web is still unstructured, which restricts the ability to answer questions whose answers do not exist in a knowledge base. To tackle this issue, hybrid question answering has emerged as an important challenge: it entails answering questions by combining both structured (RDF) and unstructured (text) knowledge sources into one answer. This thesis tackles hybrid question answering based on natural language questions. It focuses on the analysis and improvement of an open source system called HAWK, identifies its limitations, and provides solutions and recommendations in the form of a generic question-answering pipeline called HAWK_R. Our system mostly uses heuristic methods, patterns, and the ontological schema and knowledge base, and provides three main additions: question classification, annotation, and answer verification and ranking based on query content. Our results show a clear improvement over the original HAWK on several Question Answering over Linked Data (QALD) competitions. Our methods are not limited to HAWK and can also help increase the performance of other question answering systems.
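
A minimal sketch of a heuristic question-classification step of the kind the abstract lists among the additions. The categories and patterns below are illustrative guesses, not HAWK_R's actual rules.

```python
import re

# Ordered (pattern, answer-type) rules; the first match wins.
PATTERNS = [
    (re.compile(r"^(is|are|was|were|does|do|did)\b", re.I), "BOOLEAN"),
    (re.compile(r"^how (many|much)\b", re.I), "COUNT"),
    (re.compile(r"^(who|whom)\b", re.I), "RESOURCE:Person"),
    (re.compile(r"^where\b", re.I), "RESOURCE:Place"),
    (re.compile(r"^when\b", re.I), "LITERAL:Date"),
]

def classify(question: str) -> str:
    for pattern, label in PATTERNS:
        if pattern.search(question):
            return label
    return "RESOURCE"  # default answer type

print(classify("How many children did Marie Curie have?"))  # COUNT
print(classify("Was Margaret Thatcher a chemist?"))          # BOOLEAN
```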
7

HDT crypt: Compression and Encryption of RDF Datasets

Fernandez Garcia, Javier David, Kirrane, Sabrina, Polleres, Axel, Steyskal, Simon January 2018
The publication and interchange of RDF datasets online has experienced significant growth in recent years, promoted by different but complementary efforts, such as Linked Open Data, the Web of Things and RDF stream processing systems. However, the current Linked Data infrastructure does not cater for the storage and exchange of sensitive or private data. On the one hand, data publishers need means to limit access to confidential data (e.g. health, financial, personal, or other sensitive data). On the other hand, the infrastructure needs to compress RDF graphs in a manner that minimises the amount of data that is both stored and transferred over the wire. In this paper, we demonstrate how HDT - a compressed serialization format for RDF - can be extended to cater for encryption. We propose a number of different graph partitioning strategies and discuss the benefits and tradeoffs of each approach.
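
As a rough illustration of one possible partitioning strategy, the sketch below splits a graph by predicate so that each partition could be compressed and encrypted under its own key, giving consumers access only to the predicates they are entitled to. The criterion and the sample data are assumptions for illustration; the paper proposes and compares several strategies, and plain N-Triples stands in for HDT here.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:name "Alice" ; ex:diagnosis "flu" .
ex:bob   ex:name "Bob"   ; ex:diagnosis "none" .
""", format="turtle")

# Group the triples of the graph by predicate.
partitions = {}
for s, p, o in g:
    partitions.setdefault(p, Graph()).add((s, p, o))

for predicate, part in partitions.items():
    # Each serialized partition would then be compressed and encrypted
    # under a separate key before publication.
    blob = part.serialize(format="nt")
    print(predicate, "->", len(blob), "bytes")
```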
8

Framework for geolocation-based Android applications using Linked Data

Snoha, Matej January 2018
Title: Framework for geolocation-based Android applications using Linked Data Author: Bc. Matej Snoha The aim of this thesis is to design and implement a framework for geolocation-based mobile applications using Linked Data. It introduces Linked Data technologies in the context of mobile application development, data modeling, and geographical queries. The work follows the software development lifecycle from requirement gathering, software analysis, and design of the application framework and its individual components, up to the implementation of the required functionality and the subsequent deployment and evaluation of the functional application framework. The resulting implementation of the framework consists of a mobile application that displays nearby places from Linked Data datasets on a map, and a cloud service with a repository of the required definitions. It demonstrates the functionality of the theoretical part of the work in real-life scenarios.
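
A minimal sketch of the kind of geographical query such a framework might issue: find places within a bounding box around the user, using the W3C wgs84_pos vocabulary. A real deployment would query a remote SPARQL endpoint and likely use proper distance functions; the sample data, coordinates, and box size are invented for the example.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
@prefix ex:  <http://example.org/> .
ex:cafe    geo:lat 50.0880 ; geo:long 14.4200 .
ex:airport geo:lat 50.1008 ; geo:long 14.2600 .
""", format="turtle")

user_lat, user_long, box = 50.0875, 14.4213, 0.01  # roughly 1 km in Prague

nearby = g.query("""
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?place WHERE {
  ?place geo:lat ?lat ; geo:long ?long .
  FILTER (ABS(?lat - %f) < %f && ABS(?long - %f) < %f)
}
""" % (user_lat, box, user_long, box))

for row in nearby:
    print(row.place)  # only ex:cafe falls inside the box
```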
9

Visual knowledge graph management tool

Woska, Aleš January 2016
Linked data is usually visualized as a graph structure, which is useful for browsing resources but less useful for viewing structured data. This thesis proposes a solution for visualizing linked data in a tabular structure, with the goal of making it easier for users to orient themselves in the data. A tabular structure for specific data can be designed and managed in a graphic editor. Visualized data can be checked for errors and edited to produce a script that fixes them.
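
As a rough illustration of the underlying idea, a tabular view over linked data can be seen as a SPARQL SELECT whose variables become the columns. The schema and data below are invented for the example and are not from the thesis.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:p1 a ex:Person ; ex:name "Ada"  ; ex:born 1815 .
ex:p2 a ex:Person ; ex:name "Alan" ; ex:born 1912 .
""", format="turtle")

# One table definition, as a graphic editor might persist it:
# columns (name, born) for rows of type ex:Person.
rows = g.query("""
PREFIX ex: <http://example.org/>
SELECT ?name ?born WHERE { ?p a ex:Person ; ex:name ?name ; ex:born ?born }
ORDER BY ?born
""")

print(f"{'name':<8}{'born'}")
for r in rows:
    print(f"{str(r.name):<8}{r.born}")
```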
10

On techniques for pay-as-you-go data integration of linked data

Christodoulou, Klitos January 2015
It is recognised that nowadays users interact with large amounts of data that exist in disparate forms and are stored under different settings. Moreover, the amount of structured and unstructured data outside a single well-organised data management system is expanding rapidly. To address the recent challenges of managing large amounts of potentially distributed data, the vision of a dataspace was introduced. This data management paradigm aims at reducing the complexity behind the challenges of integrating heterogeneous data sources. Recently, efforts by the Linked Data (LD) community gave rise to a Web of Data (WoD) that interweaves with the current Web of documents in a way that is useful for data consumption by both humans and computational agents. On the WoD, datasets are structured under a common data model and published as Web resources following a simple set of guidelines that enables them to be linked with other pieces of data, as well as annotated with useful metadata that helps determine their semantics. The WoD is an evolving open ecosystem including specialist publishers as well as community efforts aiming at re-publishing isolated databases as LD on the WoD and annotating them with metadata. The WoD raises new opportunities and challenges; however, it currently relies mostly on manual effort for integrating its many heterogeneous data sources. This dissertation makes the case that several techniques from the dataspaces research area (aiming at on-demand integration of data sources in a pay-as-you-go fashion) can support the integration of heterogeneous WoD sources. In so doing, this dissertation explores the opportunities and identifies the challenges of adapting existing pay-as-you-go data integration techniques to the context of LD. More specifically, it makes the following contributions: (1) a case study identifying the challenges that arise when existing pay-as-you-go data integration techniques are applied in a setting where the data sources are LD; (2) a methodology that deals with the 'schema-less' nature of LD sources by automatically inferring a conceptual structure from a given RDF graph, thus enabling downstream tasks such as the identification of matches and the derivation of mappings, both of which are essential for the automatic bootstrapping of a dataspace; and (3) a well-defined, principled methodology that builds on a Bayesian inference technique for reasoning under uncertainty to improve pay-as-you-go integration. Although the developed methodology is generic in being able to reason with different hypotheses, its effectiveness has only been explored in reducing the uncertain decisions made by string-based matchers during the matching stage of a dataspace system.
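
A minimal sketch of the intuition behind contribution (2): derive candidate entity types and their attributes from a schema-less RDF graph by grouping each subject's properties under its rdf:type. The thesis's methodology is considerably richer; this only illustrates the grouping idea, with invented sample data.

```python
from collections import defaultdict
from rdflib import Graph
from rdflib.namespace import RDF

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:p1 a ex:Person ; ex:name "Ada" ; ex:born 1815 .
ex:b1 a ex:Book   ; ex:title "Notes" ; ex:author ex:p1 .
""", format="turtle")

# Candidate conceptual structure: class -> set of observed properties.
schema = defaultdict(set)
for s, cls in g.subject_objects(RDF.type):
    for p in g.predicates(subject=s):
        if p != RDF.type:
            schema[cls].add(p)

for cls, props in schema.items():
    print(cls, "->", sorted(str(p) for p in props))
```

Such an inferred structure can then feed downstream matching: the per-class property sets give string-based matchers something schema-like to compare across sources.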
