About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Dotazování RDF dat uložených v relačních databázích pomocí jazyků SPARQL a R2RML / Querying RDF graphs stored in a relational database using SPARQL and R2RML

Chaloupka, Miloš January 2014 (has links)
The RDF framework is becoming a popular framework for representing data. It makes data easily accessible and queryable. The most common way to store structured data, however, is to use a relational database system. Relational databases benefit from their long theoretical and practical history, but they do not offer a convenient way to publish their data in the RDF format. It is therefore essential to create a mapping between these two worlds. In the presented work we study the SPARQL algebra and create a transformation algorithm that enables us to expose a virtual SPARQL endpoint over relational data. We apply the acquired knowledge in the implementation of a tool that uses the algorithm as a proof of concept.
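To make the mapping idea concrete, the sketch below shows one way a single SPARQL triple pattern could be rewritten into SQL under a hand-written, R2RML-style mapping. It is a minimal illustration of the virtual-endpoint concept, not the thesis's algorithm; the table, column, and predicate names are hypothetical.

```python
# Minimal sketch: rewriting one SPARQL triple pattern into SQL under an
# R2RML-style mapping. Table, column, and predicate names are hypothetical.

# A hand-written mapping: RDF classes/predicates -> relational tables/columns.
MAPPING = {
    "class": {"http://example.org/Person": "person"},
    "predicate": {
        "http://example.org/name": ("person", "name"),
        "http://example.org/age": ("person", "age"),
    },
}

def triple_pattern_to_sql(predicate_iri: str) -> str:
    """Translate the pattern  ?s <predicate> ?o  into a SQL projection."""
    table, column = MAPPING["predicate"][predicate_iri]
    # The subject IRI is minted from the table's primary key, the object
    # comes from the mapped column -- the core idea behind R2RML views.
    return f"SELECT id AS s, {column} AS o FROM {table}"

print(triple_pattern_to_sql("http://example.org/name"))
# -> SELECT id AS s, name AS o FROM person
```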
2

Semantic Integration of Coastal Buoys Data using SPARQL

Gourineni, Rakesh Kumar 12 May 2012 (has links)
Currently, the data provided by heterogeneous buoy sensors and networks (e.g., the National Data Buoy Center (NDBC), the Gulf of Maine Ocean Observing System (GoMoos), etc.) is not amenable to the development of integrated systems due to conflicts in data representation at the syntactic and structural levels. With the rapid increase in the amount of information, the integration of heterogeneous resources is an important issue and calls for integrative technologies such as the Semantic Web. In a distributed data dissemination system, querying a single database will normally not provide all the relevant information; retrieving a holistic picture requires querying across interrelated data sources. In this thesis we develop a system for integrating two different Resource Description Framework (RDF) data sources through intelligent querying using the Simple Protocol and RDF Query Language (SPARQL). We use the Semantic Web application framework from AllegroGraph, which provides functionality for building triple stores for the ontological representations, forming federated stores, and querying them through SPARQL.
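As a rough illustration of this kind of integration (using rdflib rather than AllegroGraph, with hypothetical file names and vocabulary), the sketch below merges two RDF sources into one graph and answers a single SPARQL query over both:

```python
from rdflib import Graph

# Hypothetical RDF exports of two buoy networks, already lifted to a
# shared vocabulary (e.g., ex:stationId, ex:waveHeight).
g = Graph()
g.parse("ndbc_buoys.ttl", format="turtle")
g.parse("gomoos_buoys.ttl", format="turtle")

# One SPARQL query now spans both sources.
QUERY = """
PREFIX ex: <http://example.org/buoy#>
SELECT ?station ?waveHeight WHERE {
    ?obs ex:stationId ?station ;
         ex:waveHeight ?waveHeight .
}
"""
for station, height in g.query(QUERY):
    print(station, height)
```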
3

Efficient Source Selection For SPARQL Endpoint Query Federation

Saleem, Muhammad 28 October 2016 (has links) (PDF)
The Web of Data has grown enormously over the last years. Currently, it comprises a large compendium of linked and distributed datasets from multiple domains. Due to the decentralised architecture of the Web of Data, several of these datasets contain complementary data. Running complex queries on this compendium thus often requires accessing data from different data sources within one query. The abundance of datasets and the need to run complex queries have motivated a considerable body of work on SPARQL query federation systems, the dedicated means to access data distributed over the Web of Data. This thesis addresses two key areas of federated SPARQL query processing: (1) efficient source selection, and (2) comprehensive SPARQL benchmarks to test and rank federated SPARQL engines as well as triple stores. Efficient Source Selection: Efficient source selection is one of the most important optimization steps in federated SPARQL query processing. An overestimation of query-relevant data sources increases network traffic, results in irrelevant intermediate results, and can significantly affect the overall query processing time. Previous works have focused on generating optimized query execution plans for fast result retrieval. However, devising source selection approaches that go beyond triple-pattern-wise source selection has not received much attention. Similarly, little attention has been paid to the effect of duplicated data on federated querying. This thesis presents HiBISCuS and TBSS, novel hypergraph-based source selection approaches, and DAW, a duplicate-aware source selection approach for federated querying over the Web of Data. Each of these approaches can be combined directly with existing SPARQL query federation engines to achieve the same recall while querying fewer data sources. We combined the three source selection approaches (HiBISCuS, DAW, and TBSS) with query rewriting to form a complete SPARQL query federation engine named Quetsal. Furthermore, we present TopFed, a federated query processing engine tailored to The Cancer Genome Atlas (TCGA) that exploits the data distribution to perform intelligent source selection while querying large TCGA SPARQL endpoints. Finally, we address the issue of rights management and privacy when accessing sensitive resources. To this end, we present SAFE, a global source selection approach that enables decentralised, policy-aware access to sensitive clinical information represented as distributed RDF Data Cubes. Comprehensive SPARQL Benchmarks: Benchmarking is indispensable when aiming to assess technologies with respect to their suitability for given tasks. While several benchmarks and benchmark generation frameworks have been developed to evaluate federated SPARQL engines and triple stores, they mostly provide a one-size-fits-all solution to the benchmarking problem. This approach is, however, unsuitable for evaluating the performance of a triple store for a given application with particular requirements. The fitness of current SPARQL query federation approaches for real applications is difficult to evaluate with current benchmarks, as these are either synthetic or too small in size and complexity. Furthermore, state-of-the-art federated SPARQL benchmarks mostly focus on a single performance criterion, i.e., the overall query runtime, and thus cannot provide a fine-grained evaluation of the systems.
We address these drawbacks by presenting FEASIBLE, an automatic approach for generating benchmarks from the query history of applications, i.e., query logs, and LargeRDFBench, a billion-triple benchmark for SPARQL query federation that encompasses real data as well as real queries pertaining to real bio-medical use cases. Our evaluation results show that HiBISCuS, TBSS, TopFed, DAW, and SAFE can all significantly reduce the total number of sources selected and thus improve overall query performance. In particular, TBSS is the first source selection approach to keep the overall overestimation of relevant sources below 5%. Quetsal reduces the number of sources selected (without losing recall), the source selection time, and the overall query runtime compared to state-of-the-art federation engines. The LargeRDFBench evaluation results suggest that the performance of current SPARQL query federation systems on simple queries does not reflect the systems' performance on more complex queries. Moreover, current federation systems seem unable to deal with many of the challenges that await them in the age of Big Data. Finally, FEASIBLE's evaluation results show that it generates better sample queries than the state of the art. In addition, the better query selection and the larger set of query types used lead to triple store rankings that partly differ from the rankings generated by previous works.
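As a concrete picture of what triple-pattern-wise source selection means (the baseline the thesis improves on), the sketch below probes each federation member with one SPARQL ASK query per triple pattern and keeps only the endpoints that can answer it. The endpoint URLs and patterns are hypothetical; this is not HiBISCuS or TBSS themselves.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical federation members and triple patterns.
ENDPOINTS = [
    "http://example.org/endpoint1/sparql",
    "http://example.org/endpoint2/sparql",
]
TRIPLE_PATTERNS = [
    "?drug <http://example.org/interactsWith> ?target .",
    "?target <http://www.w3.org/2000/01/rdf-schema#label> ?label .",
]

def relevant_sources(pattern: str) -> list[str]:
    """Baseline triple-pattern-wise selection: ASK every endpoint."""
    sources = []
    for url in ENDPOINTS:
        endpoint = SPARQLWrapper(url)
        endpoint.setQuery(f"ASK {{ {pattern} }}")
        endpoint.setReturnFormat(JSON)
        if endpoint.query().convert()["boolean"]:
            sources.append(url)
    return sources

for tp in TRIPLE_PATTERNS:
    print(tp, "->", relevant_sources(tp))
```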
4

Un intergiciel gérant des événements pour permettre l’émergence d’interactions dynamiques et ubiquitaires dans l’Internet des services / Pushing dynamic and ubiquitous event-based interactions in the Internet of services : a middleware for event clouds

Pellegrino, Laurent 03 April 2014 (has links)
The Resource Description Framework (RDF) has become a relevant data model for describing and modelling information on the Web, but providing scalable solutions to store and retrieve RDF data in a responsive manner is still challenging. Within the context of this thesis we propose a middleware devoted to storing and synchronously retrieving RDF data, but also to disseminating it selectively and asynchronously in near real time, in a fully distributed environment. Its purpose is to leverage historical information as well as data filtered in near real time. To this end we have built our system atop a slightly modified version of a four-dimensional Content Addressable Network (CAN) overlay network reflecting the structure of an RDF tuple. Unlike many existing solutions, we chose to avoid hashing for indexing data, which allows efficient resolution of range queries. Near-real-time filtering is enabled by expressing information preferences in advance through content-based subscriptions handled by a publish/subscribe layer designed atop the CAN architecture. We have proposed two algorithms to check whether RDF data or events satisfy subscriptions and to forward matching solutions to interested parties. Both algorithms have been experimentally evaluated for throughput and scalability. Although one performs better than the other, they remain complementary to ensure correctness. Along with information retrieval and dissemination, we have proposed a solution to improve the distribution of RDF data on our revised CAN network, since RDF information suffers from skewness. Finally, to improve maintainability and reusability, some effort was also dedicated to providing a modular middleware that reduces the coupling between its underlying software artifacts.
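To illustrate the content-based subscription idea in miniature (this is not one of the thesis's two matching algorithms, and the vocabulary is hypothetical), the sketch below matches an incoming RDF event against a subscription expressed as a conjunction of triple patterns with consistent variable bindings:

```python
# Minimal sketch of content-based matching of an RDF event against a
# subscription. Terms starting with "?" are variables; vocabulary is
# hypothetical and the real middleware uses far richer, distributed algorithms.

def match_pattern(triple, pattern, bindings):
    """Try to unify one triple with one pattern under the current bindings."""
    new = dict(bindings)
    for term, pat in zip(triple, pattern):
        if pat.startswith("?"):               # variable
            if pat in new and new[pat] != term:
                return None
            new[pat] = term
        elif pat != term:                     # constant mismatch
            return None
    return new

def satisfied(event, patterns, bindings=None):
    """Check whether the event contains a consistent match for all patterns."""
    bindings = bindings or {}
    if not patterns:
        return True
    head, *rest = patterns
    for triple in event:
        b = match_pattern(triple, head, bindings)
        if b is not None and satisfied(event, rest, b):
            return True
    return False

event = {
    ("ex:sensor42", "ex:reports", "ex:obs1"),
    ("ex:obs1", "ex:temperature", "21.5"),
}
subscription = [
    ("?s", "ex:reports", "?o"),
    ("?o", "ex:temperature", "?v"),
]
print(satisfied(event, subscription))  # True
```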
5

Utilização de ontologias para busca em base de dados de acórdãos do STF / Using an ontology for searching the decisions of the Brazilian Supreme Court

Oliveira, Rafael Brito de 30 November 2017 (has links)
The Brazilian Supreme Federal Court (STF) keeps a database of documents describing the decisions taken in all of its past judgments. These documents, called acórdãos (decisions), make up the STF's case law, since they deal with matters concerning the Federal Constitution. They are publicly available, but finding relevant information is an arduous task that often requires a high level of knowledge of the legal domain. The STF offers a search mechanism for the acórdãos, but the current mechanism relies on a traditional form with numerous fields to be filled in and selected, resembling a questionnaire in which each question filters certain information from the data persisted in a relational database. From the user's perspective, this approach is unintuitive and in some cases inaccurate. Given this difficulty, this work presents a search mechanism that uses an ontology to represent the knowledge contained in the STF's acórdãos. It is built with the help of OBDA (Ontology Based Data Access) technology, which allows the creation of a semantic layer over a relational database and thus makes it possible to query the data with SPARQL.
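A rough sketch of what querying such a semantic layer could look like from the outside; the endpoint URL, vocabulary, and property names are hypothetical, not the system's actual ontology:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical OBDA endpoint exposing the relational acórdão database as RDF.
sparql = SPARQLWrapper("http://example.org/stf/sparql")
sparql.setReturnFormat(JSON)

# Find decisions (acórdãos) about a given legal subject, with their rapporteur.
sparql.setQuery("""
PREFIX stf: <http://example.org/stf#>
SELECT ?decision ?rapporteur WHERE {
    ?decision a stf:Acordao ;
              stf:subject "habeas corpus" ;
              stf:rapporteur ?rapporteur .
} LIMIT 10
""")
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["decision"]["value"], binding["rapporteur"]["value"])
```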
6

Consulta a ontologias utilizando linguagem natural controlada / Querying ontologies using controlled natural language

Luz, Fabiano Ferreira 31 October 2013 (has links)
This research explores areas of Natural Language Processing (NLP), such as parsers, grammars, and ontologies, in the development of a model for mapping queries written in controlled Portuguese into SPARQL queries. SPARQL is a query language for retrieving and manipulating data stored as RDF, which forms the basis for building ontologies. The project investigates the use of these techniques to mitigate the problem of querying ontologies using controlled natural language. The main motivation for this work is to research techniques and models that can provide better human-computer interaction; ease of interaction translates into productivity, efficiency, and convenience, among other implicit benefits. We focus on measuring the effectiveness of the proposed model and on finding a good combination of all the techniques in question.
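A toy illustration of the controlled-language idea (using a hypothetical English template rather than the thesis's controlled Portuguese grammar): a single sentence pattern is parsed and filled into a SPARQL template.

```python
import re

# One controlled sentence pattern: "list every <CLASS> that <PROPERTY> <VALUE>"
PATTERN = re.compile(r"list every (\w+) that (\w+) (\w+)", re.IGNORECASE)

SPARQL_TEMPLATE = """
PREFIX ex: <http://example.org/onto#>
SELECT ?x WHERE {{
    ?x a ex:{cls} ;
       ex:{prop} ex:{value} .
}}
"""

def to_sparql(sentence: str) -> str:
    """Map one controlled-language sentence to a SPARQL query."""
    m = PATTERN.fullmatch(sentence.strip())
    if m is None:
        raise ValueError("sentence is outside the controlled language")
    cls, prop, value = m.groups()
    return SPARQL_TEMPLATE.format(cls=cls.capitalize(), prop=prop, value=value)

print(to_sparql("list every professor that teaches Logic"))
```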
7

A Framework Supporting Development of Ontology-Based Web Applications

Tankashala, Shireesha 17 December 2010 (has links)
We have developed a framework to support the development of ontology-based Web applications. The framework is composed of a tree-view browser, an attribute selector, an ontology persistence module, an ontology query module, and a utility class that allows users to plug in their own customized functions. It supports the SPARQL-DL query language. The purpose of this framework is to shield users from the complexity of ontologies and thereby ease the development of ontology-based Web applications. Given a high-quality ontology, end users can use the framework to develop Web applications in many domains. For example, a professor can create highly customized study guides, a domain expert can generate Web forms for data collection, and a geologist can create a Google Maps mashup. We also report three ontology-based Web applications in education, meteorology, and geographic information systems.
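A minimal sketch of the kind of query module plus plug-in hook such a framework might expose; rdflib and plain SPARQL stand in for the framework's SPARQL-DL support, and all class, function, and file names are hypothetical:

```python
from rdflib import Graph

class OntologyQueryModule:
    """Tiny stand-in for a framework query module with user plug-ins."""

    def __init__(self, ontology_path):
        self.graph = Graph()
        self.graph.parse(ontology_path)   # e.g. a Turtle or RDF/XML ontology file
        self.plugins = {}                 # name -> callable

    def register_plugin(self, name, func):
        """Let users plug in their own post-processing functions."""
        self.plugins[name] = func

    def query(self, sparql, plugin=None):
        rows = list(self.graph.query(sparql))
        return self.plugins[plugin](rows) if plugin else rows

# Usage sketch (file name and query are hypothetical):
# module = OntologyQueryModule("course_ontology.ttl")
# module.register_plugin("count", len)
# print(module.query("SELECT ?c WHERE { ?c a <http://example.org/Course> }",
#                    plugin="count"))
```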
8

Integrating SciSPARQL and MATLAB

He, Xueming January 2014 (has links)
Nowadays many scientific experiment results involve multi-dimensional arrays. It is desirable to store these results in a persistent way and to query not only well-structured data objects such as arrays but also the metadata that describe the experiments. SPARQL is a Semantic Web standard query language for data and metadata stored as RDF. SciSPARQL is an extended version of SPARQL designed for scientific applications; it adds numeric multi-dimensional array operations and user-defined functions. The SciSPARQL Database Manager (SSDM) is a query processing engine for SciSPARQL. MATLAB is a popular and powerful programming language for scientific computing. We implemented an interface between MATLAB and SciSPARQL called MATLAB SciSPARQL Link (MSL). MSL makes SciSPARQL queries available in MATLAB through a client/server interface and optionally also enables calls to MATLAB from within SciSPARQL queries. With MSL, MATLAB users can populate, update, and query SSDM databases in terms of SciSPARQL queries. For the implementation we use the C interfaces of MATLAB and SSDM, and the networking capabilities of SSDM. The resulting DLL extends MATLAB with the MSL interface functions.
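As a purely illustrative sketch of the client/server query pattern described above (not MSL's actual wire protocol; the host, port, query syntax, and function name are all assumptions), a client could send a query string to a query-engine server and collect the raw reply like this:

```python
import socket

def send_query(host: str, port: int, query: str) -> bytes:
    """Send a query string to a query-engine server and return the raw reply.

    Illustrative only: MSL's real protocol and endpoints are not shown here.
    """
    with socket.create_connection((host, port)) as sock:
        sock.sendall(query.encode("utf-8"))
        sock.shutdown(socket.SHUT_WR)         # signal end of the request
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# Hypothetical usage against a locally running SSDM-like server:
# reply = send_query("localhost", 1234,
#                    "SELECT ?a WHERE { ?exp <http://example.org/result> ?a }")
# print(reply.decode("utf-8"))
```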
