  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Question Answering on RDF Data Cubes

Höffner, Konrad 26 March 2021
The Semantic Web, a Web of Data, is an extension of the World Wide Web (WWW), a Web of Documents. A large amount of such data is freely available as Linked Open Data (LOD) for many areas of knowledge, forming the LOD Cloud. While this data conforms to the Resource Description Framework (RDF) and can thus be processed by machines, users need to master a formal query language and learn a specific vocabulary. Semantic Question Answering (SQA) systems remove those access barriers by letting the user ask natural language questions that the systems translate into formal queries. Thus, the research area of SQA plays an important role in the acceptance and benefit of the Semantic Web. The original contributions of this thesis to SQA are as follows. First, we survey the current state of the art of SQA. We complement existing surveys by systematically identifying SQA publications in the chosen timeframe: 72 publications describing 62 different systems are systematically and manually selected, using predefined inclusion and exclusion criteria, out of 1960 candidates from the end of 2010 to July 2015. The survey identifies common challenges, structured solutions, and recommendations on research opportunities for future systems. From that point on, we focus on multidimensional numerical data, which is immensely valuable as it influences decisions in health care, policy and finance, among others. With the growth of the open data movement, more and more of it is becoming freely available. A large amount of such data is included in the LOD cloud using the RDF Data Cube (RDC) vocabulary. However, consuming multidimensional numerical data requires experts and specialized tools. Traditional SQA systems cannot process RDCs because their meta-structure is opaque to applications that expect facts to be encoded in single triples. This motivates our second contribution: the design and implementation of the first SQA algorithm on RDF Data Cubes.
We kick-start this new research subfield by creating a user question corpus and a benchmark over multiple data sets. The evaluation of our system on the benchmark, which is included in the public Question Answering over Linked Data (QALD) challenge of 2016, shows the feasibility of the approach, but also highlights challenges, which we discuss in detail as a starting point for future work in the field. The benchmark is based on our final contribution: the addition of 955 financial government spending data sets to the LOD cloud by transforming data sets of the OpenSpending project to RDF Data Cubes. Open spending data has the power to reduce corruption by increasing accountability, and it strengthens democracy because voters can make better informed decisions. An informed and trusting public also strengthens the government itself, because such a public is more likely to commit to large projects. OpenSpending.org is an open platform that provides public finance data from governments around the world. The transformation result, called LinkedSpending, consists of more than five million planned and carried out financial transactions in 955 data sets from all over the world as Linked Open Data, and it is freely available and openly licensed.

Table of contents:
1 Introduction: 1.1 Motivation; 1.2 Research Questions and Contributions; 1.3 Thesis Structure
2 Preliminaries: 2.1 Semantic Web (2.1.1 URIs and URLs; 2.1.2 Linked Data; 2.1.3 Resource Description Framework; 2.1.4 Ontologies); 2.2 Question Answering (2.2.1 History; 2.2.2 Definitions; 2.2.3 Evaluation; 2.2.4 SPARQL; 2.2.5 Controlled Vocabulary; 2.2.6 Faceted Search; 2.2.7 Keyword Search); 2.3 Data Cubes
3 Related Work: 3.1 Semantic Question Answering (3.1.1 Surveys; 3.1.2 Evaluation Campaigns; 3.1.3 System Frameworks); 3.2 Question Answering on RDF Data Cubes; 3.3 RDF Data Cube Data Sets
4 Systematic Survey of Semantic Question Answering: 4.1 Methodology (4.1.1 Inclusion Criteria; 4.1.2 Exclusion Criteria; 4.1.3 Result); 4.2 Systems (4.2.1 Implementation; 4.2.2 Examples; 4.2.3 Answer Presentation); 4.3 Challenges (4.3.1 Lexical Gap; 4.3.2 Ambiguity; 4.3.3 Multilingualism; 4.3.4 Complex Queries; 4.3.5 Distributed Knowledge; 4.3.6 Procedural, Temporal and Spatial Questions; 4.3.7 Templates)
5 Question Answering on RDF Data Cubes: 5.1 Question Corpus; 5.2 Corpus Analysis; 5.3 Data Cube Operations; 5.4 Algorithm (5.4.1 Preprocessing; 5.4.2 Matching; 5.4.3 Combining Matches to Constraints; 5.4.4 Execution)
6 LinkedSpending: 6.1 Choice of Source Data (6.1.1 Government Spending; 6.1.2 OpenSpending); 6.2 OpenSpending Source Data; 6.3 Conversion of OpenSpending to RDF; 6.4 Publishing; 6.5 Overview over the Data Sets; 6.6 Data Set Quality Analysis (6.6.1 Intrinsic Dimensions; 6.6.2 Representational Dimensions); 6.7 Evaluation (6.7.1 Experimental Setup and Benchmark; 6.7.2 Discussion)
7 Conclusion: 7.1 Research Question Summary; 7.2 SQA Survey (7.2.1 Lexical Gap; 7.2.2 Ambiguity; 7.2.3 Multilingualism; 7.2.4 Complex Operators; 7.2.5 Distributed Knowledge; 7.2.6 Procedural, Temporal and Spatial Data; 7.2.7 Templates; 7.2.8 Future Research); 7.3 CubeQA; 7.4 LinkedSpending (7.4.1 Shortcomings; 7.4.2 Future Work)
Bibliography
Appendix A The CubeQA Question Corpus
Appendix B The QALD-6 Task 3 Benchmark Questions: B.1 Training Data; B.2 Testing Data
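The opacity of the cube meta-structure described in the abstract can be illustrated with a small sketch: a fact that ordinary RDF states in one triple is spread by an RDF Data Cube over several triples about an observation resource. The property names, prefixes, and figures below are invented for illustration and are not taken from the thesis or the actual vocabulary, apart from qb:Observation.

```python
# Ordinary RDF: one fact, one (subject, predicate, object) triple.
plain_triple = ("ex:Uganda", "ex:healthBudget2009", 14.3)

# RDF Data Cube: the same fact as a set of triples about one observation node,
# with each dimension (area, year, sector) and the measure as its own triple.
cube_triples = [
    ("ex:obs42", "rdf:type",   "qb:Observation"),
    ("ex:obs42", "ex:refArea", "ex:Uganda"),
    ("ex:obs42", "ex:refYear", 2009),
    ("ex:obs42", "ex:sector",  "ex:Health"),
    ("ex:obs42", "ex:amount",  14.3),
]

def cube_lookup(triples, constraints, measure):
    """Answer a question by finding an observation whose properties satisfy
    all dimension constraints, then reading off the measure value."""
    for subj in {s for s, _, _ in triples}:
        props = {p: o for s, p, o in triples if s == subj}
        if all(props.get(p) == v for p, v in constraints.items()):
            return props.get(measure)
    return None

# "How much did Uganda spend on health in 2009?" requires joining several
# triples, which a single-triple SQA system cannot do.
answer = cube_lookup(
    cube_triples,
    {"ex:refArea": "ex:Uganda", "ex:refYear": 2009, "ex:sector": "ex:Health"},
    "ex:amount",
)
print(answer)  # 14.3
```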
242

A Natural Language Interface for Querying Linked Data

Akrin, Christoffer, Tham, Simon January 2020
The thesis introduces a proof-of-concept idea that could spark interest from many industries: a remote Natural Language Interface (NLI) for querying Knowledge Bases (KBs). The system applies natural language technology tools provided by Stanford CoreNLP and queries KBs using the SPARQL query language. Natural Language Processing (NLP) is used to analyze the semantics of a question written in natural language and to generate relational information about the question. With correctly defined relations, the question can be posed against KBs containing relevant Linked Data. The Linked Data follows the Resource Description Framework (RDF) model, expressing relations in the form of semantic triples: subject-predicate-object. With our NLI, any KB can be understood semantically: by providing correct training data, the AI can learn to understand the semantics of the RDF data stored in the KB. This understanding makes it possible to extract relational information from questions about the KB, after which the questions can be translated to SPARQL and run against the KB.
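The question-to-SPARQL translation step described above can be sketched in miniature. This toy version replaces the NLP pipeline with a single regular expression over one question form; the property mappings and the dbr:/dbo: prefixes are assumptions for illustration only, not the thesis's actual approach.

```python
import re

# Hypothetical mapping from question predicates to RDF properties.
PROPERTY_MAP = {
    "capital": "dbo:capital",
    "population": "dbo:populationTotal",
}

def question_to_sparql(question):
    """Translate 'What is the <predicate> of <subject>?' into a SPARQL query
    whose triple pattern mirrors the subject-predicate-object relation."""
    m = re.match(r"What is the (\w+) of ([\w ]+)\?", question)
    if not m:
        raise ValueError("unsupported question form")
    predicate, subject = m.group(1).lower(), m.group(2).replace(" ", "_")
    prop = PROPERTY_MAP[predicate]
    return f"SELECT ?answer WHERE {{ dbr:{subject} {prop} ?answer . }}"

query = question_to_sparql("What is the capital of France?")
print(query)  # SELECT ?answer WHERE { dbr:France dbo:capital ?answer . }
```

A real system would derive the relation from a dependency parse rather than a regex, and would resolve "France" to a resource URI via entity linking.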
243

Genealogická sémantická wiki / Genealogic Semantic Wiki

Brychová, Jana Unknown Date
This thesis project examines the possibilities of storing genealogical data in different formats and, based on the results, proposes a data format that can subsequently serve as a source for visualization on the Semantic Web. Within the scope of the project, a genealogical application was implemented for the KiWi platform. This application enables visualization of the proposed format using the Prefuse technology. The document also provides basic and further useful information about the core technologies of the Semantic Web, such as RDF, XML, ontologies, and the OWL language.
244

Investigating Electrical Properties of Polycrystalline Silver Sulfide from Structure-Property Relation of Ag2S Paramorph

Shaulin, Tahrina Tanjim 24 July 2023
No description available.
245

Linked Data in VRA Core 4.0: Converting VRA XML Records into RDF/XML

Mixter, Jeffrey 08 May 2013
No description available.
246

Semantic Web Foundations for Representing, Reasoning, and Traversing Contextualized Knowledge Graphs

Nguyen, Vinh Thi Kim January 2017
No description available.
247

Getting Graphical with Knowledge Graphs : A proof-of-concept for extending and modifying knowledge graphs

Granberg, Roberth, Hellman, Anton January 2022
Knowledge Graph (KG) is an emerging topic of research. The promise of KGs is to turn data into knowledge by supplying the data with context at the source. This could in turn allow machines to make sense of data by inference: looking at the context of the data and deriving knowledge from its context and relations, thus allowing for new ways of finding value in the sea of data that the world produces today. Working with KGs today involves many steps that are open to simplification and improvement, especially with regard to usability. In this thesis, we have aimed to design and produce an application that can be used to modify, extend and build KGs. The work involves the front-end library VueJS, the Scalable Vector Graphics (SVG) library D3 and the graph database Stardog. The project made use of Scrum methodology to distribute and plan the work, which took place over a span of six months with two developers working half-time (20 hours/week). The result of the project is a working application that can be used by developers within the KG domain who want to test and modify their graphs in a visual manner.
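The inference idea in the abstract above, machines deriving knowledge from relations rather than from explicitly stated facts, can be sketched as follows. The facts and the transitive locatedIn relation are invented for illustration and are not taken from the thesis.

```python
# Explicit facts stored in a toy knowledge graph as (subject, relation, object).
facts = {
    ("Karlstad", "locatedIn", "Sweden"),
    ("Sweden", "locatedIn", "Europe"),
}

def infer_transitive(facts, relation):
    """Compute the transitive closure of one relation: whenever (a, rel, b)
    and (b, rel, c) hold, add the implicit fact (a, rel, c)."""
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for a, r1, b in list(closed):
            for b2, r2, c in list(closed):
                if r1 == r2 == relation and b == b2 and (a, relation, c) not in closed:
                    closed.add((a, relation, c))
                    changed = True
    return closed

kg = infer_transitive(facts, "locatedIn")
# "Karlstad is in Europe" was never stated, yet it can now be answered.
print(("Karlstad", "locatedIn", "Europe") in kg)  # True
```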
248

Vers une nouvelle architecture de l'information historique : L'impact du Web sémantique sur l'organisation du Répertoire du patrimoine culturel du Québec

Michon, Philippe January 2016
The Plan culturel numérique du Québec (PCNQ) underlines how important it is for Québec's cultural sector, in which historians are closely involved, to take an interest in the possibilities of the Semantic Web. With this in mind, this thesis studies the advantages and disadvantages of joining the Semantic Web with history. On one side stands a new configuration of the Web, in the form of linked data, that strives to fit into a practical framework; on the other, a discipline that seeks to understand and preserve past events. Bringing the two concepts together requires interdisciplinary collaboration among programmers, information science professionals, and historians. Given this interdisciplinary work, what are the stakes, and what is the historian's role, in the development of a semantic platform on Québec heritage? To answer this question, the thesis explains the close ties between the historical discipline and linked data. After defining a set of foundational concepts such as the Resource Description Framework (RDF), the Uniform Resource Identifier (URI), authority files, and ontologies, it links a corpus of persons from the Répertoire du patrimoine culturel du Québec (RPCQ) with DBpedia, a major player in the Semantic Web. This demonstration explains how Québec heritage fits into the linked data cloud. Two findings follow from this experiment and demonstrate the importance of historians' involvement in a semantic structure: Québec has no authority over its own data, and at present only the broad history of Québec is traced, without entering into its particulars.
249

Contrôle d'accès et présentation contextuelle pour le Web des données

Costabello, Luca 29 November 2013
This thesis examines the role played by context in accessing the Web of Data from mobile devices. The work analyzes this problem from two distinct angles: adapting the presentation of triples to the context, and protecting access to RDF data stores from mobile devices. The first contribution is PRISSMA, an RDF rendering engine that extends Fresnel with the selection of the best representation for the current physical context. This selection is performed by an error-tolerant subgraph-matching algorithm based on the notion of graph edit distance. The algorithm accounts for the differences between context descriptions and the context detected by sensors, supports heterogeneous context dimensions, and runs on the client so as not to disclose private information. The second contribution is the Shi3ld access control system. Shi3ld supports all triple stores and does not require modifying them. It relies exclusively on Semantic Web languages and adds no new access-rule definition languages, parsers, or validation procedures. Shi3ld provides protection down to the triple level. The thesis describes the models, algorithms, and prototypes of PRISSMA and Shi3ld. Experiments show the validity of PRISSMA's results as well as its memory and response-time performance. The Shi3ld access control module was tested with different triple stores, with and without a SPARQL engine. The results show the impact on response time and demonstrate the feasibility of the approach.
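The triple-level protection described for Shi3ld can be illustrated with a minimal sketch: before answering a query, filter the graph down to the triples the requesting context may see. This is not the actual system, which expresses policies in Semantic Web languages; the example triples, policies, and context attributes are assumptions for illustration.

```python
# Toy RDF graph as (subject, predicate, object) triples.
triples = [
    ("ex:alice", "foaf:name", "Alice"),
    ("ex:alice", "ex:salary", 50000),
    ("ex:report", "ex:status", "draft"),
]

# Each policy guards one predicate and tests the client's context attributes.
policies = [
    {"predicate": "ex:salary", "allow": lambda ctx: ctx.get("role") == "hr"},
]

def visible_triples(triples, policies, context):
    """Return only the triples that every applicable policy permits,
    so protection applies at the level of individual triples."""
    out = []
    for t in triples:
        guards = [p for p in policies if p["predicate"] == t[1]]
        if all(p["allow"](context) for p in guards):
            out.append(t)
    return out

print(len(visible_triples(triples, policies, {"role": "guest"})))  # 2
print(len(visible_triples(triples, policies, {"role": "hr"})))     # 3
```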
250

A semantic content based methodology framework for e-government development / Jean Vincent Fonou Dombeu

Fonou Dombeu, Jean Vincent January 2011
The integration and interoperability of autonomous and heterogeneous electronic government (e-government) systems of government departments and agencies, for seamless service delivery to citizens through one-stop e-government portals, remain challenging issues in e-government development. In recent years, Semantic Web technologies have emerged as promising solutions to these problems. Semantic Web technologies based on ontologies allow the description and specification of electronic services (e-services), making it easy to compose, match, map and merge e-services and facilitating their semantic integration and interoperability. However, a unified and comprehensive methodology that provides structured guidelines for the semantic-driven planning and implementation of e-government systems does not yet exist. This study presents a methodology framework for the semantic-driven development of future e-government systems. The features of the maturity model, software engineering and Semantic Web domains are investigated and employed to draw up and specify the methodology framework. Thereafter, the semantic content of the methodology framework is further specified using an ontology building methodology and Semantic Web ontology languages and platforms. The study would be useful to e-government developers, particularly those in developing countries where there is little or no practice of semantic content development in e-government processes and where little progress has been made towards the development of one-stop e-government portals for seamless service delivery to citizens. Part of the study would also be of interest to novice Semantic Web developers, who might use it as a starting point for further investigations. / Thesis (Ph.D. (Computer Science))--North-West University, Potchefstroom Campus, 2012
