
Managing and Consuming Completeness Information for RDF Data Sources

Darari, Fariz (20 June 2017)
The ever increasing amount of Semantic Web data gives rise to the question: how complete is the data? Although data on the Semantic Web is generally incomplete, many parts of it are indeed complete, such as the children of Barack Obama or the crew of Apollo 11. This thesis studies how to manage and consume completeness information about Semantic Web data. In particular, we first discuss how completeness information can guarantee the completeness of query answering. Next, we propose optimization techniques for completeness reasoning and conduct experimental evaluations to show the feasibility of our approaches. We also provide a technique to check the soundness of queries with negation via reduction to query completeness checking. We further enrich completeness information with timestamps, making it possible to determine up to which point in time query answers are complete. We then introduce two demonstrators, CORNER and COOL-WD, to show how our completeness framework can be realized. Finally, we investigate an automated method to generate completeness statements from text on the Web via relation cardinality extraction.
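As a rough illustration of what a completeness statement expresses, the following is a deliberately simplified Python sketch: the pattern representation, the naive coverage check, and the dbr:/dbo:-style names are assumptions made for the example, not the reasoning framework developed in the thesis.

```python
# Much-simplified sketch of completeness statements over RDF (illustration only; real
# completeness reasoning handles full basic graph patterns, variable bindings, and more).
from dataclasses import dataclass

Triple = tuple[str, str, str]  # (subject, predicate, object); names starting with "?" are variables

@dataclass(frozen=True)
class CompletenessStatement:
    """Asserts that the data source contains *all* triples matching this pattern."""
    pattern: Triple

def subsumes(stmt: Triple, query: Triple) -> bool:
    # A statement pattern covers a query pattern if each position is either a
    # variable in the statement or an exact match with the query pattern.
    return all(s.startswith("?") or s == q for s, q in zip(stmt, query))

def query_is_complete(query_patterns: list[Triple],
                      statements: list[CompletenessStatement]) -> bool:
    # Naive check: every triple pattern of the query is covered by some statement.
    return all(any(subsumes(st.pattern, qp) for st in statements)
               for qp in query_patterns)

# Example: the children of Barack Obama are declared complete in the source.
stmts = [CompletenessStatement(("dbr:Barack_Obama", "dbo:child", "?x"))]
query = [("dbr:Barack_Obama", "dbo:child", "?child")]
print(query_is_complete(query, stmts))  # True under this simplified check
```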

Exploration et interrogation de données RDF intégrant de la connaissance métier / Integrating domain knowledge for RDF dataset exploration and interrogation

Ouksili, Hanane (21 October 2016)
An increasing number of datasets is published on the Web, expressed in the languages proposed by the W3C for describing Web data, such as RDF, RDF(S) and OWL. The Web has thus become an unprecedented source of information available to users and applications, but exploiting this information meaningfully is still a challenge.
Querying these data sources requires knowledge of a formal query language such as SPARQL, but it mainly suffers from the lack of knowledge about the source itself, which is needed in order to target the resources and properties relevant to the specific needs of the application. The work described in this thesis addresses the exploration of RDF data sources along two complementary directions: discovering the themes, or topics, that represent the content of the data source, and supporting an alternative way of querying the data source using keywords instead of a query formulated in SPARQL. The proposed exploration approach therefore combines two complementary strategies: theme-based exploration and keyword search. Theme discovery from an RDF dataset consists in identifying a set of sub-graphs, not necessarily disjoint, each representing a set of semantically related resources that form a theme from the user's point of view. These themes can be used to enable a thematic exploration of the data source, where users target the relevant themes and limit their exploration to the resources composing them. Keyword search is a simple and intuitive way of querying data sources. In the case of RDF datasets, this search raises several problems, such as indexing graph elements, identifying the graph fragments relevant to a specific query, aggregating these fragments to build the query results, and ranking the results. In this work we address these problems and propose an approach which, given a keyword query, produces a list of sub-graphs, each representing a candidate result, ordered according to their relevance to the query. For both keyword search and theme identification in RDF data sources, we take external knowledge into account in order to better capture the user's needs and to bridge the gap between the concepts invoked in a query and those of the data source. This external knowledge may be domain knowledge that refines the need expressed by a query or the definition of themes. We formalize this external knowledge through the notion of pattern. Patterns represent equivalences between properties and paths in the graph representing the source; they are evaluated and integrated into the exploration process to improve the quality of the results.
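To make the notion of a pattern more concrete, the sketch below shows how an equivalence between a direct property and a path could be folded into a SPARQL query as a path alternative. It is purely illustrative: the tiny ex: vocabulary and the rewrite are assumptions for the example, not the thesis's formalism.

```python
# Illustrative sketch: a "pattern" equating ex:locatedIn with the path ex:address/ex:city
# is applied by rewriting the query with a SPARQL 1.1 path alternative.
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/> .
ex:shop1 ex:locatedIn ex:Paris .
ex:shop2 ex:address ex:addr2 . ex:addr2 ex:city ex:Paris .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Without the pattern, only shop1 matches; with the path alternative, shop2 matches too.
query = """
PREFIX ex: <http://example.org/>
SELECT ?shop WHERE { ?shop (ex:locatedIn | ex:address/ex:city) ex:Paris . }
"""
for row in g.query(query):
    print(row.shop)
```

Keeping the original property as one branch of the alternative means answers that already matched the plain query are preserved while the path equivalence only adds results.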

A Keyword-Based Query Processing Method for Datasets with Schemas / Método para o Processamento de Consultas por Palavras-Chaves para Bases de Dados com Esquemas

Grettel Monteagudo García (23 June 2020)
Users currently expect to query data in a Google-like style, by simply typing some terms, called keywords, and leaving it to the system to retrieve the data that best match the set of keywords. The scenario is quite different in database management systems, where users need to know sophisticated query languages to retrieve data, and in database applications, where the user interfaces are designed as a stack of pages with numerous boxes that the user must fill in with search parameters. This thesis describes an algorithm and a framework designed to support keyword-based queries over datasets with schemas, specifically RDF datasets and relational databases. The algorithm first translates a keyword-based query into an abstract query, and then compiles the abstract query into a SPARQL or SQL query such that each result of the SPARQL (resp. SQL) query is an answer to the keyword-based query. It explores the schema to avoid user intervention during the translation process and offers a feedback mechanism to generate new answers. The thesis concludes with experiments over the Mondial, IMDb, and MusicBrainz databases. The proposed translation algorithm achieves satisfactory results and good performance on these benchmarks, and the experiments also compare the RDF and relational alternatives.
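The sketch below illustrates the general idea of compiling keywords into SPARQL by matching them against schema labels. The toy schema dictionaries, the single-variable query shape, and the handling of unmatched keywords are assumptions for illustration; the thesis's algorithm is far richer (it also handles joins over the schema graph, ranking, and SQL generation).

```python
# Toy keyword-to-SPARQL compilation: keywords that name a class become a type pattern,
# keywords that name a property become a property pattern with a fresh variable.
CLASSES = {"movie": "ex:Movie", "actor": "ex:Actor"}
PROPERTIES = {"title": "ex:title", "name": "ex:name"}

def compile_keywords(keywords: list[str]) -> str:
    patterns, selected = [], []
    var = "?s"
    for kw in keywords:
        kw = kw.lower()
        if kw in CLASSES:
            patterns.append(f"{var} a {CLASSES[kw]} .")
        elif kw in PROPERTIES:
            obj = f"?{kw.replace(' ', '_')}"
            patterns.append(f"{var} {PROPERTIES[kw]} {obj} .")
            selected.append(obj)
        # unmatched keywords could be tried as literal values in a fuller version
    body = "\n  ".join(patterns)
    return f"SELECT {' '.join(selected) or '*'} WHERE {{\n  {body}\n}}"

print(compile_keywords(["movie", "title"]))
# prints a query like: SELECT ?title WHERE { ?s a ex:Movie . ?s ex:title ?title . }
```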

Grafdatabaser: en komparativ analys / Graph databases: a comparative analysis

Lindström, Kasper; Lindqvist, Jonathan (January 2023)
In today's society, energy efficiency is an important parameter when discussing sustainability. Many buildings lack technical solutions to effectively monitor and manage energy consumption. To address this need, companies like iquest strive to digitize and automate energy monitoring. Currently, iquest faces issues of inefficiency and bottlenecks when uploading large amounts of data into its current graph database. Through a thorough evaluation, this thesis project has identified suitable alternatives for iquest to consider. During the investigation, the graph databases Neo4j, Stardog, Allegrograph, Amazon Neptune, GraphDB, BlazingGraph, and OrientDB were surveyed. Based on their characteristics and features, it was determined that Neo4j, Stardog, Allegrograph, Amazon Neptune, and GraphDB meet the requirements for a suitable graph database. The implementation of the graph databases was limited by time constraints, and only Neo4j, Stardog, Allegrograph, and GraphDB could be implemented and subjected to testing. Despite conducting the tests with reduced data volumes and the free versions of the databases, the results showed that two of the implemented databases passed all the tests.
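As a rough illustration of the kind of load test described here, the sketch below times batched uploads into Neo4j with the official Python driver. The connection details, batch size, and data shape are placeholders; the thesis's actual test setup, datasets, and the loaders for the other databases are not reproduced.

```python
# Minimal load-timing sketch: upload sensor readings to Neo4j in UNWIND batches and
# measure the elapsed wall-clock time.
import time
from neo4j import GraphDatabase

URI, AUTH = "bolt://localhost:7687", ("neo4j", "password")  # placeholder connection details
BATCH_SIZE = 5_000

def load_readings(driver, readings):
    """Upload (sensor_id, value) pairs in batches to limit client-server round trips."""
    query = """
    UNWIND $rows AS row
    MERGE (s:Sensor {id: row.sensor_id})
    CREATE (s)-[:MEASURED]->(:Reading {value: row.value})
    """
    with driver.session() as session:
        for i in range(0, len(readings), BATCH_SIZE):
            session.run(query, rows=readings[i:i + BATCH_SIZE])

if __name__ == "__main__":
    data = [{"sensor_id": i % 100, "value": float(i)} for i in range(50_000)]
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        start = time.perf_counter()
        load_readings(driver, data)
        print(f"Loaded {len(data)} readings in {time.perf_counter() - start:.1f}s")
```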

Improving the Quality of the User Experience by Query Answer Modification / Melhorando a Qualidade da Experiência do Usuário Através da Modificação da Resposta da Consulta

Joao Pedro Valladao Pinheiro (30 June 2021)
The answer to a query, submitted to a database or a knowledge base, is often long and may contain redundant data. The user is frequently forced to browse through a long answer, or to refine and repeat the query until the answer reaches a manageable size. Without proper treatment, consuming the query answer may become a tedious task. This study proposes a process that modifies the presentation of a query answer to improve the quality of the user's experience, in the context of an RDF knowledge base. The process reorganizes the original query answer by applying heuristics to summarize the results. The original SPARQL query is modified, and an exploration over the result set starts through a guided navigation over predicates and their facets. The study also includes experiments based on RDF versions of MusicBrainz, enriched with DBpedia data, and IMDb, each with over 200 million RDF triples. The experiments use sample queries from well-known benchmarks.
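A minimal way to picture the guided navigation over facets is sketched below: answer rows are grouped by one predicate and only the group counts are shown first. The rows, the chosen facet, and the grouping rule are made up for the example; the study's own heuristics and SPARQL rewriting are not shown.

```python
# Compress a long answer into facet counts; a UI would show the facets first and
# expand a group only when the user selects it.
from collections import defaultdict

def facet_summary(rows: list[dict], facet: str) -> dict[str, int]:
    """Group answer rows by the value they bind for `facet` and count each group."""
    summary = defaultdict(int)
    for row in rows:
        summary[row.get(facet, "(unbound)")] += 1
    return dict(sorted(summary.items(), key=lambda kv: -kv[1]))

rows = [
    {"album": "Abbey Road", "genre": "rock"},
    {"album": "Kind of Blue", "genre": "jazz"},
    {"album": "Revolver", "genre": "rock"},
]
print(facet_summary(rows, "genre"))  # {'rock': 2, 'jazz': 1}
```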

Busca por Palavras-Chave sobre Grafos RDF Federados Explorando seus Esquemas / Keyword Search over Federated RDF Graphs by Exploring Their Schemas

Yenier Torres Izquierdo (28 July 2017)
The Resource Description Framework (RDF) was adopted as a W3C recommendation in 1999 and is today a standard for exchanging data on the Web. Indeed, a large amount of data has been converted to RDF, often as multiple datasets physically distributed over different locations. The SPARQL Protocol and RDF Query Language (SPARQL) was officially introduced in 2008 to retrieve RDF data and provide endpoints for querying distributed sources. An alternative way to access RDF datasets is to use keyword-based queries, an area that has been extensively researched, with a recent focus on Web content. This dissertation describes a strategy to compile keyword-based queries into federated SPARQL queries over distributed RDF datasets, under the assumption that each RDF dataset has a schema and that the federation has a mediated schema. The compilation process of the federated SPARQL query is explained in detail, including how to compute the set of external joins between the generated local subqueries, how to combine, with the help of UNION clauses, the results of local queries that have no joins between them, and how to construct the TARGET clause according to the structure of the WHERE clause. Finally, the dissertation covers experiments with real-world data to validate the implementation.
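The sketch below shows only the general shape of the federated query that such a compilation produces: two local subqueries, each wrapped in a SERVICE clause against its own endpoint, joined on a shared variable. The endpoints, vocabulary, and mediator URL are placeholders, and the compilation algorithm itself is not reproduced.

```python
# Shape of a federated SPARQL query with two SERVICE blocks joined on ?person.
from SPARQLWrapper import SPARQLWrapper, JSON

FEDERATED_QUERY = """
PREFIX ex: <http://example.org/>
SELECT ?person ?film WHERE {
  SERVICE <http://endpoint-a.example.org/sparql> {   # local subquery over dataset A
    ?person a ex:Director ;
            ex:name ?name .
  }
  SERVICE <http://endpoint-b.example.org/sparql> {   # local subquery over dataset B
    ?film ex:directedBy ?person .                    # external join on ?person
  }
}
"""

endpoint = SPARQLWrapper("http://mediator.example.org/sparql")  # placeholder endpoint
endpoint.setQuery(FEDERATED_QUERY)
endpoint.setReturnFormat(JSON)
# results = endpoint.query().convert()  # uncomment to run against a real federation
```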

Verification, Validation and Completeness Support for Metadata Traceability

Darr, Timothy; Fernandes, Ronald; Hamilton, John; Jones, Charles (October 2010)
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
The complexity of modern test and evaluation (T&E) processes has resulted in an explosion of the quantity and diversity of metadata used to describe end-to-end T&E processes. Ideally, it would be possible to integrate metadata in such a way that disparate systems can seamlessly access the metadata and easily interoperate with other systems. Unfortunately, there are several barriers to achieving this goal: metadata is often designed for use with specific tools or specific purposes; metadata exists in a variety of formats (legacy, non-legacy, structured and unstructured metadata); and the same information is represented in multiple ways across different metadata formats.

Semantic Web Technologies for T&E Metadata Verification and Validation

Darr, Timothy; Fernandes, Ronald; Hamilton, John; Jones, Charles; Weisenseel, Annette (October 2009)
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
The vision of the semantic web is to unleash the next generation of information sharing and interoperability by encoding meaning into the symbols that are used to describe various computational capabilities within the World Wide Web or other networks. This paper describes the application of semantic web technologies to Test and Evaluation (T&E) metadata verification and validation. Verification is a quality process that is used to evaluate whether or not a product, service, or system complies with a regulation, specification, or conditions imposed at the start of a development phase or which exist in the organization. Validation is the process of establishing documented evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements. While this often involves acceptance and suitability with external customers, automation provides significant assistance to the customers.
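As a loose, modern illustration of rule-based metadata validation with semantic web tooling, the sketch below checks RDF-encoded T&E metadata against a SHACL shape with pyshacl. SHACL postdates this 2009 paper, so this is a stand-in for the kind of automated checking it describes rather than the paper's own approach, and the ex: vocabulary is invented for the example.

```python
# Validate a small RDF metadata graph against a SHACL shape requiring a sample rate.
from rdflib import Graph
from pyshacl import validate

METADATA = """
@prefix ex: <http://example.org/tm#> .
ex:measurement42 a ex:Measurement ;
    ex:units "feet" .        # missing the required ex:sampleRate property
"""

SHAPES = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/tm#> .
ex:MeasurementShape a sh:NodeShape ;
    sh:targetClass ex:Measurement ;
    sh:property [ sh:path ex:sampleRate ; sh:minCount 1 ] .
"""

data_graph = Graph().parse(data=METADATA, format="turtle")
shapes_graph = Graph().parse(data=SHAPES, format="turtle")

conforms, _, report = validate(data_graph, shacl_graph=shapes_graph)
print(conforms)   # False: the measurement lacks a sample rate
print(report)     # human-readable validation report
```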

Meeting report: Identifying practical applications of ontologies for biodiversity informatics

Deck, John; Guralnick, Robert; Walls, Ramona; Blum, Stanley; Haendel, Melissa; Matsunaga, Andréa; Wieczorek, John (January 2015)
This report describes the outcomes of a recent workshop, building on a series of workshops over the last three years aimed at integrating genomics and biodiversity research, with the more specific goal here of expressing terms from Darwin Core and Audubon Core, whose class constructs have historically been underspecified, within a Biological Collections Ontology (BCO) framework. For the purposes of this workshop, the BCO provided the context for fully defining classes as well as object and data properties, including domain and range information, for both Darwin Core and Audubon Core. In addition, the workshop participants reviewed technical specifications and approaches for annotating instance data with BCO terms. Finally, we laid out proposed activities for the next 3 to 18 months to continue this work.

Formalisms on semi-structured and unstructured data schema computations

Lee, Yau-tat, Thomas (李猷達) (January 2009)
Doctor of Philosophy, Computer Science (published or final version)
