About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Diamond: a Rete-match linked data SPARQL environment

Depena, Rodolfo Kaplan, 14 February 2011
Diamond is a SPARQL query engine for linked data. Linked data is a sub-topic of the Semantic Web in which data is represented as a labeled directed graph using the Resource Description Framework (RDF), a conceptual data model for web resources, to effect a web-wide, interconnected, distributed labeled graph. SPARQL graph patterns entail portions of this distributed graph. Diamond compiles SPARQL queries into a physical query plan based on a set of newly defined operators that implement a new variant of the Rete match, a well-known artificial intelligence (AI) algorithm used for complex pattern-matching problems.
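The Rete idea is easiest to see in miniature. Below is a hedged Python sketch of Rete-style incremental matching for a SPARQL basic graph pattern — an illustration of the general technique, not Diamond's actual operators; the "?x" variable convention and the toy triples are invented for the example.

```python
# Hedged sketch of Rete-style incremental matching for a SPARQL basic graph
# pattern (BGP). Not Diamond's operators -- variables are "?x" strings and
# the data is invented for illustration.

def match_pattern(pattern, triple):
    """Return a variable binding if `triple` satisfies `pattern`, else None."""
    binding = {}
    for term, value in zip(pattern, triple):
        if isinstance(term, str) and term.startswith("?"):
            if binding.get(term, value) != value:
                return None          # same variable bound to two values
            binding[term] = value
        elif term != value:
            return None              # constant term mismatch
    return binding

def join(b1, b2):
    """Merge two bindings; None if they disagree on any shared variable."""
    merged = dict(b1)
    for var, val in b2.items():
        if merged.get(var, val) != val:
            return None
        merged[var] = val
    return merged

class ReteBGP:
    """Alpha memories store per-pattern matches; each arriving triple is
    joined only against what is already stored (incremental match)."""

    def __init__(self, patterns):
        self.patterns = patterns
        self.alpha = [[] for _ in patterns]  # one memory per triple pattern
        self.results = []                    # complete BGP solutions

    def add_triple(self, triple):
        for i, pattern in enumerate(self.patterns):
            binding = match_pattern(pattern, triple)
            if binding is None:
                continue
            self.alpha[i].append(binding)
            partials = [binding]
            for j, memory in enumerate(self.alpha):
                if j == i:
                    continue         # join against every *other* memory
                partials = [m for p in partials for b in memory
                            if (m := join(p, b)) is not None]
            self.results.extend(partials)

# Pattern for: SELECT ?name WHERE { ?p a :Person . ?p :name ?name }
bgp = ReteBGP([("?p", "rdf:type", ":Person"), ("?p", ":name", "?name")])
bgp.add_triple(("ex:alice", "rdf:type", ":Person"))
bgp.add_triple(("ex:alice", ":name", "Alice"))
print(bgp.results)                   # [{'?p': 'ex:alice', '?name': 'Alice'}]
```

Because each arriving triple is joined only against already-stored partial matches, the network never re-scans the whole graph — the property that makes a Rete-style match attractive over growing linked data.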
32

Evaluating Query and Storage Strategies for RDF Archives

Fernandez Garcia, Javier David, Umbrich, Jürgen, Polleres, Axel, Knuth, Magnus, January 2018
There is an emerging demand for efficiently archiving and (temporally) querying different versions of evolving semantic Web data. As novel archiving systems begin to address this challenge, foundations and standards for benchmarking RDF archives are needed to evaluate their storage-space efficiency and the performance of different retrieval operations. To this end, we provide theoretical foundations on the design of data and queries for evaluating emerging RDF archiving systems. We then instantiate these foundations as a concrete set of queries on the basis of a real-world evolving dataset. Finally, we perform an empirical evaluation of various current archiving techniques and querying strategies on this data, which is meant to serve as a baseline for future work on querying archives of evolving RDF data.
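For intuition, here is a hedged sketch of one archiving policy such an evaluation covers — independent copies, with each version materialized as its own named graph — using rdflib; the version IRIs and data are invented, and this is not the paper's benchmark code.

```python
# Hedged sketch of the "independent copies" archiving policy: each version is
# materialized as a named graph. Uses rdflib; data and IRIs are invented.
from rdflib import Dataset, Namespace, URIRef

EX = Namespace("http://example.org/")
ds = Dataset()

v1 = ds.graph(URIRef("http://example.org/version/1"))
v1.add((EX.alice, EX.worksFor, EX.acme))

v2 = ds.graph(URIRef("http://example.org/version/2"))
v2.add((EX.alice, EX.worksFor, EX.globex))      # fact changed in version 2

# A "version materialization" query: what held in one specific version?
q = """
SELECT ?o WHERE {
    GRAPH <http://example.org/version/2> {
        <http://example.org/alice> <http://example.org/worksFor> ?o
    }
}"""
for row in ds.query(q):
    print(row.o)                                # http://example.org/globex
```

Independent copies make such single-version queries cheap but duplicate unchanged triples across versions — exactly the storage/retrieval trade-off the paper's evaluation is designed to measure against delta-based and hybrid strategies.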
33

Modelos de Base de Datos de Grafo y RDF / Graph Database Models and RDF

Angles Rojas, Renzo, January 2009
In 2004 the World Wide Web Consortium (W3C) standardized a metadata language for describing resources on the Web, called the Resource Description Framework (RDF). The motivation was to define a language that could serve as the basis for extensibly modeling highly interconnected domains and information networks. From a database point of view, the RDF specification can be seen as a database model; indeed, underlying RDF is a model that brings to mind the notion of graph-structured data. Moreover, the growing amount of information represented in RDF has been accompanied by several proposals for storing and querying RDF data. It is therefore natural to study RDF from a database perspective. The main objective of this thesis is the study of RDF from a database perspective. Our study concentrates on database modeling, focusing on the particular models called graph database models, which appear to be the most closely related to RDF from a theoretical point of view. The thesis also closely follows the W3C's efforts to give RDF the support of a database model, in particular the development of its data-querying aspects. The main contribution of the thesis is to have clarified and developed the relationship between RDF and graph database models. To this end, we first studied RDF as a database model. Second, we produced a conceptualization of the area of graph database models. Third, we proposed a desirable set of graph properties that an ideal query language for RDF should support. Fourth, we showed that the standard query language defined by the W3C, SPARQL, does not support the essential graph queries one would expect from a query language for graph databases; this was achieved by studying and determining the expressive power of SPARQL. Finally, building on these results, we turned to the application of graph database models to data visualization, formally defining and characterizing a graph visualization model for RDF.
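A small example of the gap the thesis identifies: reachability over a graph is a basic graph-database query, but it was not expressible in the SPARQL of that era. The hedged sketch below, with invented data, uses rdflib and the property-path operator that SPARQL 1.1 later introduced to close part of this gap.

```python
# The kind of graph query the thesis found missing: transitive reachability.
# SPARQL 1.1 later added property paths; rdflib evaluates them as shown.
# The data below is invented for illustration.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.a, EX.knows, EX.b))
g.add((EX.b, EX.knows, EX.c))

# '+' walks one or more :knows edges -- a reachability query
q = "SELECT ?x WHERE { <http://example.org/a> <http://example.org/knows>+ ?x }"
print(sorted(str(row.x) for row in g.query(q)))
# ['http://example.org/b', 'http://example.org/c']
```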
34

Možnosti zpracování a využití otevřených dat / Utilization of Open Data

Ferdan, Ondřej, January 2016
The main goal of this diploma thesis is to characterize open data and its standards and to analyze the adoption and utilization of open-data principles in the public sector of the Czech Republic, with a comparison to the European Union and selected other countries. It identifies the technologies and tools for linked data that are used to reach the highest rating of data openness. It defines geographical data, its standards, and the INSPIRE directive for spatial information in Europe. The goal of the practical part of the thesis is to analyze the adoption of open-data principles for geographical data among Czech institutions, focusing on what data are available, whether open principles are applied, and under what conditions the data are available. Foreign countries are also covered for comparison.
35

Un framework para el chequeo de consistencia en modelos UML / A framework for consistency checking in UML models

Rivas Chiessa, Sebastián Rodrigo, January 2007
There is no doubt today that software development has become an activity of great importance, mainly because it can affect many everyday human activities. Given the diversity of activities such developments cover, development teams are in turn often made up of people working in very different areas. It is therefore vital for these teams to have a common language that lets them communicate effectively. It was precisely with this intent that UML (Unified Modeling Language) was born, and it has become the most widely used modeling language for software systems today. UML is a graphical language for visualizing, specifying, constructing, and documenting software systems. Essentially, it offers a family of diagrams for describing different aspects of a system, including conceptual aspects such as business processes and system functions, and concrete aspects such as programming-language expressions, database schemas, and reusable software components. UML was adopted by the OMG (Object Management Group) in 1997 as the de facto standard for object-oriented modeling. It has since gone through several revisions and refinements, leading to the current version (UML 2.0), approved in October 2004 [11]. Although, as mentioned above, UML 2.0 is the industry standard, this does not mean it is definitive, as it suffers from a number of difficulties. In fact, it neither defines a clear relationship between the semantics of the different diagrams nor offers versioning policies for the evolution of a model. These difficulties are justified by arguing that not every inconsistency is accidental: for example, when a design proceeds from the global to the particular, the design process starts with an incomplete, and therefore inconsistent, model. The use of CASE tools undoubtedly eases the designer's work considerably, especially in large and complex developments. However, given the position of UML's creators on the validity of inconsistencies, UML users must deal with inconsistencies manually.
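As a toy illustration of what inter-diagram consistency checking means — a hedged sketch, not the thesis's framework — consider one classic rule: every message in a sequence diagram must name an operation declared on the receiving class in the class diagram.

```python
# Toy illustration (not the thesis's framework) of one inter-diagram UML
# consistency rule: every message in a sequence diagram must name an
# operation declared on the receiver's class in the class diagram.

class_operations = {              # class diagram: class -> declared operations
    "Account": {"deposit", "withdraw"},
    "Bank": {"openAccount"},
}
sequence_messages = [             # sequence diagram: (receiver class, message)
    ("Account", "deposit"),
    ("Account", "close"),         # inconsistent: 'close' is not declared
]

for cls, msg in sequence_messages:
    if msg not in class_operations.get(cls, set()):
        print(f"inconsistency: message '{msg}' has no operation in '{cls}'")
```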
36

Porovnání přístupů k ukládání otevřených propojených dat / Comparison of approaches to storing linked open data

Hanuš, Jiří, January 2015
The aim of this diploma thesis is a detailed description of the current possibilities and ways of storing open data. It focuses on tools and database systems used for storing linked open data, as well as on the selection of such systems for subsequent analysis and comparison. The practical part of the thesis then compares the selected systems on a chosen use case. The thesis introduces the fundamental terms and concepts concerning linked open data. Besides that, various approaches and formats for storing linked open data (namely file-oriented approaches and database approaches) are analyzed. The thesis also focuses on the RDF format and database systems. Ten triplestore database solutions (solutions for storing data in the RDF format) are introduced and described briefly. Out of these, three are chosen for a detailed analysis in which they are compared with one another and with a relational database system. The core of the detailed analysis lies in performance benchmarks. Existing performance-oriented benchmarks of triplestore systems are described and analyzed. In addition, the thesis introduces a newly developed benchmark in the form of a collection of database queries, which is then used for the performance testing. The following systems were tested: Apache Jena TDB/Fuseki, OpenLink Virtuoso, Oracle Spatial and Graph, and Microsoft SQL Server. The main contribution of this thesis is a comprehensive presentation of the current possibilities for storing linked open data.
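The benchmarking approach can be sketched as follows: issue the same query repeatedly against each system's SPARQL-over-HTTP endpoint and record latencies. The endpoint URL and query below are illustrative assumptions (a Fuseki-style dataset path), not the thesis's actual benchmark; Virtuoso, Oracle, and similar stores expose analogous endpoints.

```python
# Hedged sketch of the benchmarking idea: time one query against a store's
# SPARQL-over-HTTP endpoint. The Fuseki-style URL is an assumed example.
import time
import requests

ENDPOINT = "http://localhost:3030/benchmark/sparql"   # assumed dataset name
QUERY = "SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }"

def run_once():
    r = requests.get(ENDPOINT, params={"query": QUERY},
                     headers={"Accept": "application/sparql-results+json"},
                     timeout=60)
    r.raise_for_status()
    return r.json()

runs = 10
start = time.perf_counter()
for _ in range(runs):                # repeat to smooth out noise
    run_once()
print(f"mean latency: {(time.perf_counter() - start) / runs:.3f} s")
```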
37

Webové aplikace s využitím Linked Open Data / Web applications using Linked Open Data

Le Xuan, Dung, January 2014
This thesis deals with open data. Its aim is to introduce the reader to this currently very popular topic. Linking open datasets together brings additional advantages and opportunities; however, a large number of open-data datasets are published in formats that cannot be linked together. The author therefore puts great emphasis on Linked Data. Emphasis is placed not only on its emergence, current status, and future development, but also on the technical side. First, readers are familiarized with the theoretical concepts and principles of Linked Open Data and with the expansion of open government data in the Czech Republic and abroad. The next chapter covers the RDF data format, the SPARQL language, and related technologies. In the last section, the author introduces tools for working with Linked Open Data and designs a sample application using Linked Open Data. The benefit of the work is a comprehensive view of Linked Open Data from both a theoretical and a practical standpoint; its main goal is to provide readers with a solid introduction to the topic.
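A minimal sketch of the kind of sample application described — a script that pulls Linked Open Data from a public SPARQL endpoint (DBpedia here) — assuming the endpoint is reachable; the query is illustrative, not the thesis's application.

```python
# Hedged sketch of a Linked Open Data consumer: query the public DBpedia
# SPARQL endpoint over HTTP. The query is illustrative only.
import requests

q = """
SELECT ?capital WHERE {
    <http://dbpedia.org/resource/Czech_Republic>
        <http://dbpedia.org/ontology/capital> ?capital .
}"""
r = requests.get("https://dbpedia.org/sparql",
                 params={"query": q},
                 headers={"Accept": "application/sparql-results+json"},
                 timeout=30)
r.raise_for_status()
for b in r.json()["results"]["bindings"]:
    print(b["capital"]["value"])     # e.g. http://dbpedia.org/resource/Prague
```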
38

Bezpečné nalezení zdrojů REST architektury na základě jejich sémantického popisu / Secure Semantically-Based Discovery of Resources in RESTful Architecture

Koudelka, Jiří, January 2018
This thesis looks into the problem of secure, semantically based discovery of resources in RESTful architecture. Its subject is the implementation and potential extension of the mRDP protocol for secure discovery of resources in RESTful architecture, including the implementation of a corresponding open-source library compatible with the Android operating system. A further topic is the implementation of simple example applications using this library.
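For orientation, the underlying pattern of multicast-based discovery can be sketched with Python's standard sockets. This is emphatically not the mRDP implementation from the thesis — just the generic announce/listen skeleton, with an arbitrary example group and port, omitting the semantic resource descriptions and the security layer that are the thesis's actual subject.

```python
# Generic announce/listen skeleton for multicast discovery using standard
# sockets. NOT the thesis's mRDP: no semantic descriptions and no security;
# the group address and port are arbitrary examples.
import socket
import struct

GROUP, PORT = "239.255.42.42", 5042

def announce(resource_uri: str):
    """Send a resource identifier to the local multicast group."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    s.sendto(resource_uri.encode(), (GROUP, PORT))

def listen_once():
    """Block until one announcement arrives on the multicast group."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = s.recvfrom(4096)
    print(f"discovered {data.decode()} from {addr[0]}")
```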
39

Accelerating SPARQL Queries and Analytics on RDF Data

Al-Harbi, Razen, 09 November 2016
The complexity of SPARQL queries and RDF applications poses great challenges for distributed RDF management systems. SPARQL workloads are dynamic and consist of queries of variable complexity. Hence, systems that use static partitioning suffer from communication overhead on workloads that generate excessive communication. Concurrently, RDF applications are becoming more sophisticated, mandating analytical operations that extend beyond SPARQL queries. Being primarily designed and optimized to execute SPARQL queries, which lack procedural capabilities, existing systems are not suitable for rich RDF analytics.

This dissertation tackles the problem of accelerating SPARQL queries and RDF analytics on distributed shared-nothing RDF systems. First, a distributed RDF engine, coined AdPart, is introduced. AdPart uses lightweight hash partitioning to shard triples by their subject values, rendering its startup overhead very low. The locality-aware query optimizer of AdPart takes full advantage of the partitioning to (i) support the fully parallel processing of join patterns on subjects and (ii) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. By exploiting hash-based locality, AdPart achieves performance better than or comparable to systems that employ sophisticated partitioning schemes.

To cope with workload dynamism, AdPart is extended to adapt dynamically to workload changes. AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent patterns among workers. Consequently, the communication cost for future queries is drastically reduced or even eliminated. Experiments with synthetic and real data verify that AdPart starts faster than all existing systems and gracefully adapts to the query load.

Finally, to support and accelerate rich RDF analytical tasks, a vertex-centric RDF analytics framework is proposed. The framework, named SPARTex, bridges the gap between RDF and graph processing. To do so, SPARTex (i) implements a generic SPARQL operator as a vertex-centric program, coupled with an optimizer that generates efficient execution plans; (ii) allows SPARQL to invoke vertex-centric programs as stored procedures; and (iii) provides a unified in-memory data store that allows the persistence of intermediate results. Consequently, SPARTex can efficiently support RDF analytical tasks consisting of complex pipelines of operators.
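AdPart's starting point — lightweight hash partitioning of triples by subject — is simple enough to sketch. The following is a toy illustration, not AdPart's code; the worker count and triples are invented.

```python
# Toy illustration of subject-hash sharding (not AdPart's code): a triple is
# routed by a deterministic hash of its subject, so all triples that share a
# subject land on the same worker.
import zlib

NUM_WORKERS = 4                       # invented worker count

def worker_for(triple):
    subject = triple[0]
    return zlib.crc32(subject.encode()) % NUM_WORKERS

partitions = [[] for _ in range(NUM_WORKERS)]
for t in [("ex:alice", "ex:knows", "ex:bob"),
          ("ex:alice", "ex:age", "30"),
          ("ex:bob", "ex:knows", "ex:carol")]:
    partitions[worker_for(t)].append(t)

# both ex:alice triples are now co-located, so a star join on ?s is local
```

Because all triples sharing a subject are co-located, star-shaped join patterns on the subject can be evaluated entirely locally — the locality the abstract's query optimizer exploits.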
40

Algorithms and Frameworks for Graph Analytics at Scale

Jamour, Fuad Tarek, 28 February 2019
Graph queries typically involve retrieving entities with certain properties and connectivity patterns. One popular property is betweenness centrality, a quantitative measure of importance used in many applications such as identifying influential users in social networks. Solving graph queries that retrieve important entities with user-defined connectivity patterns in large graphs requires both efficient computation of betweenness centrality and efficient graph query engines. The first part of this thesis studies the betweenness centrality problem, while the second part presents a framework for building efficient graph query engines.

Computing betweenness centrality entails computing all-pairs shortest paths; thus, exact computation is costly. The performance of existing approximation algorithms is not well understood due to the lack of an established benchmark. Since graphs in many applications are inherently evolving, several incremental algorithms have been proposed; however, they cannot scale to large graphs, as they either require excessive memory or perform unnecessary computations, rendering them prohibitively slow. Existing graph query engines, meanwhile, rely on exhaustive indices for accelerating query evaluation, and the time and memory required to build these indices can be prohibitively high for large graphs.

This thesis attempts to solve these limitations as follows. First, we present a benchmark for evaluating betweenness centrality approximation algorithms. Our benchmark includes ground-truth data for large graphs in addition to a systematic evaluation methodology. This benchmark is the first attempt to standardize the evaluation of betweenness centrality approximation algorithms, and it is currently being used by several research groups working on approximate betweenness in large graphs. Then, we present a linear-space parallel incremental algorithm for updating betweenness centrality in large evolving graphs. Our algorithm uses biconnected-components decomposition to localize the processing of graph updates, and it performs incremental computation even within affected components; it is up to an order of magnitude faster than the state-of-the-art parallel incremental algorithm. Finally, we present a framework for building graph query engines with a low memory footprint. Our framework avoids building exhaustive indices and uses highly optimized matrix algebra operations instead; it loads datasets and evaluates data-intensive queries up to an order of magnitude faster than existing engines.
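For reference, betweenness centrality measures how often a node lies on shortest paths between other pairs of nodes. The snippet below computes it exactly on a toy graph with networkx, which implements Brandes' algorithm; it is this exact all-pairs computation that becomes prohibitive at scale, motivating the benchmark and incremental algorithm above.

```python
# Betweenness centrality on a toy graph, computed exactly with networkx
# (Brandes' algorithm). Exact all-pairs computation like this is what
# becomes prohibitive on large graphs.
import networkx as nx

G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("b", "d")])
print(nx.betweenness_centrality(G, normalized=False))
# 'b' scores highest: it lies on the shortest paths from 'a' to every other node
```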
