71

Grafdatabaser: en komparativ analys / Graph databases: a comparative analysis

Lindström, Kasper, Lindqvist, Jonathan January 2023 (has links)
In today's society, energy efficiency is an important parameter when discussing sustainability. Many buildings lack technical solutions to effectively monitor and manage energy consumption. To address this need, companies like iquest strive to digitize and automate energy monitoring. Currently, iquest faces issues of inefficiency and bottlenecks when uploading large amounts of data into its current graph database. Through a thorough evaluation, the thesis project has identified suitable alternatives for iquest to consider. During the investigation, the graph databases Neo4j, Stardog, AllegroGraph, Amazon Neptune, GraphDB, BlazingGraph, and OrientDB were surveyed. Based on the characteristics and features of these graph databases, it was determined that Neo4j, Stardog, AllegroGraph, Amazon Neptune, and GraphDB meet the requirements for a suitable graph database. The implementation of the graph databases was limited by time constraints, and only Neo4j, Stardog, AllegroGraph, and GraphDB could be implemented and subjected to testing. Despite the tests being conducted with reduced data volumes and the free versions of the databases, the results showed that two of the implemented databases successfully passed all the tests.
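The upload bottleneck described above is commonly attacked by batching writes instead of sending one statement per record. The thesis does not publish its load scripts, so the following is only a minimal sketch of batched loading into Neo4j (one of the databases that passed testing) using the official Python driver; the Meter label, its properties, and the connection details are hypothetical.

```python
from neo4j import GraphDatabase  # official Neo4j Python driver

URI = "bolt://localhost:7687"  # assumed local instance
AUTH = ("neo4j", "password")   # placeholder credentials

def load_batch(tx, rows):
    # UNWIND expands one parameterized statement into many writes,
    # so each batch costs one round trip and one transaction.
    tx.run(
        "UNWIND $rows AS row "
        "MERGE (m:Meter {id: row.id}) "       # hypothetical node label
        "SET m.kwh = row.kwh, m.ts = row.ts",
        rows=rows,
    )

def load_all(records, batch_size=10_000):
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        with driver.session() as session:
            for i in range(0, len(records), batch_size):
                session.execute_write(load_batch, records[i:i + batch_size])

if __name__ == "__main__":
    demo = [{"id": n, "kwh": 1.5 * n, "ts": "2023-01-01"} for n in range(100)]
    load_all(demo)
```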
72

[pt] BUSCA POR PALAVRAS-CHAVE SOBRE GRAFOS RDF FEDERADOS EXPLORANDO SEUS ESQUEMAS / [en] KEYWORD SEARCH OVER FEDERATED RDF GRAPHS BY EXPLORING THEIR SCHEMAS

YENIER TORRES IZQUIERDO 28 July 2017 (has links)
The Resource Description Framework (RDF) was adopted as a W3C recommendation in 1999 and is today a standard for exchanging data on the Web. Indeed, a large amount of data has been converted to RDF, often as multiple datasets physically distributed over different locations. The SPARQL Protocol and RDF Query Language (SPARQL) was officially introduced in 2008 to retrieve RDF data and provide endpoints for querying distributed sources. An alternative way to access RDF datasets is to use keyword-based queries, an area that has been extensively researched, with a recent focus on Web content. This dissertation describes a strategy to compile keyword-based queries into federated SPARQL queries over distributed RDF datasets, under the assumption that each RDF dataset has a schema and that the federation has a mediated schema. The compilation process of the federated SPARQL query is explained in detail, including how to compute the set of external joins between the generated local subqueries, how to combine, with the help of UNION clauses, the results of local queries that have no joins between them, and how to construct the TARGET clause according to the structure of the WHERE clause. Finally, the dissertation covers experiments with real-world data to validate the implementation.
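As a rough illustration of the compilation target, the sketch below assembles a federated SPARQL query from per-dataset graph patterns, placing them side by side when they share a join variable and falling back to UNION otherwise; the endpoints and patterns are invented, and the abstract's TARGET clause corresponds to the SELECT clause here.

```python
def compile_federated(subqueries, joined=True):
    """Wrap per-dataset graph patterns in SERVICE blocks.

    `subqueries` maps an endpoint URL to the local pattern that keyword
    matching produced for that dataset.  When the patterns share a join
    variable they are placed side by side (an implicit join); otherwise
    they are combined with UNION, mirroring the two cases in the abstract.
    """
    blocks = [f"  SERVICE <{ep}> {{ {pattern} }}" for ep, pattern in subqueries.items()]
    if joined:
        body = "\n".join(blocks)
    else:
        body = "\n  UNION\n".join("  {\n" + b + "\n  }" for b in blocks)
    return ("PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n"
            "SELECT * WHERE {\n" + body + "\n}")

# Example: two invented endpoints whose subqueries join on ?city.
print(compile_federated({
    "http://example.org/geo/sparql": '?city rdfs:label "Rio de Janeiro"@pt .',
    "http://example.org/stats/sparql": "?city <http://example.org/population> ?pop .",
}))
```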
73

Verification, Validation and Completeness Support for Metadata Traceability

Darr, Timothy, Fernandes, Ronald, Hamilton, John, Jones, Charles October 2010 (has links)
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / The complexity of modern test and evaluation (T&E) processes has resulted in an explosion of the quantity and diversity of metadata used to describe end-to-end T&E processes. Ideally, it would be possible to integrate metadata in such a way that disparate systems can seamlessly access the metadata and easily interoperate with other systems. Unfortunately, there are several barriers to achieving this goal: metadata is often designed for use with specific tools or specific purposes; metadata exists in a variety of formats (legacy, non-legacy, structured and unstructured metadata); and the same information is represented in multiple ways across different metadata formats.
74

Semantic Web Technologies for T&E Metadata Verification and Validation

Darr, Timothy, Fernandes, Ronald, Hamilton, John, Jones, Charles, Weisenseel, Annette October 2009 (has links)
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / The vision of the semantic web is to unleash the next generation of information sharing and interoperability by encoding meaning into the symbols that are used to describe various computational capabilities within the World Wide Web or other networks. This paper describes the application of semantic web technologies to Test and Evaluation (T&E) metadata verification and validation. Verification is a quality process used to evaluate whether a product, service, or system complies with a regulation, specification, or conditions imposed at the start of a development phase or existing in the organization. Validation is the process of establishing documented evidence providing a high degree of assurance that a product, service, or system accomplishes its intended requirements. While validation often involves acceptance and suitability testing with external customers, automating it provides significant assistance to those customers.
75

Optimizing Analytical Queries over Semantic Web Sources / Optimisation de Requêtes Analytiques sur le Web Sémantique

Ibragimov, Dilshod 15 November 2017 (has links) (PDF)
Data have always been a key asset for many industries and companies; recently, however, data owners have enjoyed a real competitive advantage over everyone else. Nowadays, companies collect large volumes of data and store them in large multidimensional databases called data warehouses. A data warehouse presents aggregated data in the form of a cube whose cells contain facts and contextual information such as dates, locations, customer and supplier information, etc. Data warehousing solutions successfully use OLAP (Online Analytical Processing) to analyze these large data sets; for example, sales information can be aggregated along the location and/or time dimension. Recent trends in technology and the Web now pose new challenges. A good amount of the information available on the Web is in a form that lends itself to machine processing (the Semantic Web); Business Intelligence (BI) tools must be able to discover and retrieve the relevant information and present it to users to assist them in a sound analysis of the situation. Many governments and organizations make their data publicly accessible, identifiable with URIs (Uniform Resource Identifiers), and link them to other data. This collection of interconnected datasets on the Web is called Linked Data [1]. These datasets are based on the RDF (Resource Description Framework) model, a standard format for exchanging data on the Web [2]. SPARQL, a protocol and query language for RDF [4], is used to query and manipulate RDF datasets stored in SPARQL triplestores. SPARQL 1.1 Federated Query [6] also defines an extension for executing distributed queries over several triplestores. The current standard therefore allows complex analytical queries over multiple data sources, and integrating these data into the analysis process becomes a necessity for BI tools. However, because of the quantity and complexity of the data available on the Web, incorporating and using them is not always straightforward. Consequently, an efficient OLAP solution over Semantic Web sources is needed to improve BI tools. This PhD thesis focuses on the challenges of optimizing analytical queries that use data from several SPARQL triplestores. First, the thesis proposes a framework for the discovery, integration, and analytical querying of Linked Data; this type of OLAP has been named Exploratory OLAP [21]. The framework is designed to use a multidimensional schema of the OLAP cube expressed in RDF vocabularies, in order to query data sources, extract and aggregate data, and build a data cube. We also propose a computer-assisted process for discovering previously unknown data sources and constructing the multidimensional schema of the cube.
Second, given the current inefficiency of SPARQL triplestores in executing federated analytical queries, the thesis proposes a set of strategies for processing such queries, together with a module (called the Cost-based Optimizer for Distributed Aggregate, or CoDA) to optimize their execution. Third, to overcome the challenges of processing aggregate SPARQL queries on a single triplestore, we propose MARVEL (MAterialized Rdf Views with Entailment and incompLeteness), an approach that uses RDF-specific materialized-view techniques to process complex aggregate queries. Our approach consists of a view selection algorithm driven by an associated RDF-specific cost model, a syntax for view definitions, and an algorithm for rewriting SPARQL queries using the materialized RDF views. Finally, we focus on techniques for supporting analytical SPARQL queries over linked data spread across multiple triplestores, which lead to interesting large-scale analyses and findings. In particular, the proposed technique is able to integrate the diverse schemas of SPARQL endpoints, giving access to the data through OLAP-style hierarchies to enable uniform, efficient, and powerful analyses. Lastly, the thesis argues for greater attention to analytical query processing within distributed RDF systems. / Doctorate in Engineering Sciences and Technology / info:eu-repo/semantics/nonPublished
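To make the setting concrete, the sketch below shows the shape of federated analytical query at stake: an aggregation whose dimension data sit behind a remote SERVICE endpoint. The endpoints and the ex: vocabulary are invented, and the snippet uses the SPARQLWrapper Python library rather than anything from the thesis; executed naively, a SERVICE join inside an aggregation is precisely the distributed-aggregate pattern that CoDA targets.

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

# Hypothetical cube: ?amount is the measure; country and year are dimensions.
QUERY = """
PREFIX ex: <http://example.org/sales#>
SELECT ?country ?year (SUM(?amount) AS ?total)
WHERE {
  ?sale ex:amount ?amount ; ex:year ?year ; ex:customer ?cust .
  SERVICE <http://example.org/customers/sparql> {   # remote dimension data
    ?cust ex:country ?country .
  }
}
GROUP BY ?country ?year
ORDER BY DESC(?total)
"""

endpoint = SPARQLWrapper("http://example.org/sales/sparql")  # invented endpoint
endpoint.setQuery(QUERY)
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["country"]["value"], row["year"]["value"], row["total"]["value"])
```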
76

Mise en oeuvre de politiques de protection de données à caractère personnel : une approche reposant sur la réécriture de requêtes SPARQL / Enforcement of personal data protection policies: an approach based on SPARQL query rewriting

Oulmakhzoune, Said 29 April 2013 (has links) (PDF)
With the constant proliferation of information systems around the globe, the need for decentralized and scalable data-sharing mechanisms has become a major factor of integration in a wide range of applications. The literature on information integration across autonomous entities has tacitly assumed that each party's data can be revealed and shared with other parties. Much research on managing heterogeneous sources and on database integration has been proposed, for example based on centralized or distributed mediators that control access to data managed by different parties. On the other hand, real-life data-sharing scenarios in application domains like healthcare, e-commerce, and e-government show that data integration and sharing are often hampered by legitimate and widespread data privacy and security concerns. Thus, protecting individual data may be a prerequisite for organizations to share their data in open environments such as the Internet. The work undertaken in this thesis aims to enforce the security and privacy requirements of software systems, which take the form of web services, using query-rewriting principles. The user query (a SPARQL query) is rewritten in such a way that only authorized data are returned, with respect to a confidentiality and privacy-preferences policy. Moreover, the rewriting algorithm is instrumented by an access control model (OrBAC) for confidentiality constraints and a privacy-aware model (PrivOrBAC) for privacy constraints. A secure and privacy-preserving execution model for data services is then defined. Our model exploits the services' semantics to allow service providers to enforce their privacy and security policies locally without changing the implementation of their data services, i.e., data services are treated as black boxes. We integrate our model into the Axis 2.0 architecture and evaluate its efficiency in the healthcare application domain.
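A toy version of the rewriting step, assuming the policy has already been evaluated (deriving patterns from OrBAC/PrivOrBAC rules is the part this sketch omits): the user's SPARQL query is extended so that only authorized data can match. All predicate names below are invented.

```python
def rewrite_query(sparql: str, policy_patterns: list[str]) -> str:
    """Inject policy-derived graph patterns before the closing brace of the
    WHERE clause, so that only triples satisfying the policy can match.

    A real enforcement point would derive `policy_patterns` from OrBAC /
    PrivOrBAC rules and the requester's identity; here they are passed in
    directly, and only queries with one top-level block are handled.
    """
    cut = sparql.rfind("}")
    return sparql[:cut] + "  " + "\n  ".join(policy_patterns) + "\n" + sparql[cut:]

# Example with invented predicates: restrict diagnoses to consented records.
user_query = """SELECT ?patient ?diagnosis WHERE {
  ?patient <http://example.org/diagnosis> ?diagnosis .
}"""
policy = ['?patient <http://example.org/consent> "public" .']
print(rewrite_query(user_query, policy))
```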
77

SWI-Prolog as a Semantic Web Tool for semantic querying in Bioclipse: Integration and performance benchmarking

Lampa, Samuel January 2010 (has links)
The huge amounts of data produced by high-throughput techniques in the life sciences, and the need to integrate heterogeneous data from disparate sources in new fields such as Systems Biology and translational drug development, require better approaches to data integration. The semantic web is anticipated to provide solutions through new formats for knowledge representation and management. Software libraries for semantic web formats are becoming mature, but multiple tools exist that are based on foundationally different technologies. SWI-Prolog, a tool with semantic web support, was integrated into the Bioclipse bio- and cheminformatics workbench and evaluated, in terms of performance, against the non-Prolog-based semantic web tools in Bioclipse, Jena and Pellet, for querying a dataset consisting of mostly numerical NMR shift values in the semantic web format RDF. The integration gives access to the convenience of the Prolog language for working with semantic data and for defining data-management workflows in Bioclipse. The performance comparison shows that SWI-Prolog is superior to Jena and Pellet for this specific dataset, and suggests Prolog-based tools are interesting for further evaluation.
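For readers who want a feel for the workload, the sketch below reproduces the flavor of the benchmark query — numeric filtering over RDF shift values — using Python's rdflib, which is not one of the tools compared in the thesis (those are SWI-Prolog, Jena, and Pellet); the vocabulary and values are invented.

```python
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/nmr#")  # invented vocabulary

g = Graph()
# Hypothetical miniature of the dataset: peaks carrying numerical shift values.
for i, shift in enumerate([7.26, 2.17, 0.9, 128.5]):
    g.add((URIRef(f"http://example.org/nmr#peak{i}"), EX.shiftValue, Literal(shift)))

# The benchmarked workload boils down to numeric filtering of this kind.
results = g.query("""
    PREFIX ex: <http://example.org/nmr#>
    SELECT ?peak ?v WHERE { ?peak ex:shiftValue ?v . FILTER (?v > 5.0) }
""")
for peak, v in results:
    print(peak, v.toPython())
```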
78

Natural Language Query Processing In Ontology Based Multimedia Databases

Alaca Aygul, Filiz 01 May 2010 (has links) (PDF)
In this thesis a natural language query interface is developed for semantic and spatio-temporal querying of MPEG-7-based domain ontologies. The underlying ontology is created by attaching domain ontologies to the core Rhizomik MPEG-7 ontology. The user can pose concept, complex concept (objects connected with an "AND" or "OR" connector), spatial (left, right, ...), temporal (before, after, at least 10 minutes before, 5 minutes after, ...), object trajectory, and directional trajectory (east, west, southeast, ..., left, right, upwards, ...) queries to the system. Furthermore, the system handles negative meaning in the user input. When the user enters a natural language (NL) input, it is parsed with the link parser. According to the query type, the objects, attributes, spatial relation, temporal relation, trajectory relation, time filter, and time information are extracted from the parser output using predefined rules. After the information extraction, SPARQL queries are generated and executed against the ontology using an RDF API. Results are retrieved and used to calculate the spatial, temporal, and trajectory relations between objects. The results satisfying the required relations are displayed in a tabular format, and the user can navigate through the multimedia content.
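The pipeline above (parse, rule-based extraction, SPARQL generation) can be miniaturized as follows; this sketch substitutes a single regex for the link parser, handles only one spatial rule, and uses invented MPEG-7-style predicates.

```python
import re

# One rule from a hypothetical rule set: "X left of Y" / "X right of Y".
SPATIAL = {"left of": "ex:leftOf", "right of": "ex:rightOf"}

def nl_to_sparql(question: str) -> str:
    """Toy NL-to-SPARQL step: a regex stands in for the link parser, and
    only a single spatial rule is handled; predicates are invented."""
    m = re.search(r"(\w+) (?:is )?(left of|right of) (?:the )?(\w+)", question.lower())
    if m is None:
        raise ValueError("unsupported query (this sketch knows one rule)")
    subj, rel, obj = m.group(1), SPATIAL[m.group(2)], m.group(3)
    return f"""PREFIX ex: <http://example.org/mpeg7#>
SELECT ?video WHERE {{
  ?video ex:contains ?a , ?b .
  ?a ex:label "{subj}" . ?b ex:label "{obj}" .
  ?a {rel} ?b .
}}"""

print(nl_to_sparql("Find videos where the ball is left of the goalkeeper"))
```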
79

Evaluation of relational database implementation of triple-stores

Funes, Diego Leonardo 25 July 2011 (has links)
The Resource Description Framework (RDF) is the logical data model of the Semantic Web. RDF encodes information as a directed graph using a set of labeled edges known formally as resource-property-value statements or, in common usage, as RDF triples or simply triples. Values recorded in RDF triple form are either Uniform Resource Identifiers (URIs) or literals. The use of URIs allows links between distributed data sources, which enables a logical model of data as a graph spanning the Internet. SPARQL is a standard SQL-like query language on RDF triples. This report describes the translation of SPARQL queries to equivalent SQL queries operating on a relational representation of RDF triples, and the physical optimization of that representation using the IBM DB2 relational database management system. Performance was evaluated using the Berlin SPARQL Benchmark. The results show that the implementation can perform well on certain queries, but more work is required to improve overall performance and scalability. / text
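The core of such a translation can be sketched in a few lines. Assume a single TRIPLES(s, p, o) table — the simplest relational layout for triples; the report's actual DB2 schema and physical optimizations are not reproduced here. Each triple pattern becomes a table alias, shared variables become join predicates, and constants become selections.

```python
def bgp_to_sql(patterns):
    """Translate a SPARQL basic graph pattern into SQL over one
    TRIPLES(s, p, o) table: each triple pattern becomes a table alias,
    shared variables become join predicates, constants become selections."""
    select, where, seen = [], [], {}
    for i, (s, p, o) in enumerate(patterns):
        for col, term in (("s", s), ("p", p), ("o", o)):
            ref = f"t{i}.{col}"
            if term.startswith("?"):          # variable
                if term in seen:
                    where.append(f"{ref} = {seen[term]}")  # join on reuse
                else:
                    seen[term] = ref
                    select.append(f"{ref} AS {term[1:]}")
            else:                             # constant URI or literal
                where.append(f"{ref} = '{term}'")
    tables = ", ".join(f"TRIPLES t{i}" for i in range(len(patterns)))
    return f"SELECT {', '.join(select)} FROM {tables} WHERE {' AND '.join(where)}"

# ?product rdfs:label ?label . ?product ex:price ?price .
print(bgp_to_sql([("?product", "rdfs:label", "?label"),
                  ("?product", "ex:price", "?price")]))
```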
80

Linked-OWL: A new approach for dynamic linked data service workflow composition

Ahmad, Hussien, Dowaji, Salah 01 June 2013 (has links)
The shift from the Web of Documents to the Web of Data, based on the Linked Data principles defined by Tim Berners-Lee, posed a big challenge for building and developing applications that work in the Web of Data environment. There have been several attempts to build service and application models for the Linked Data Cloud. In this paper, we propose a new service model for linked data, "Linked-OWL", which is based on RESTful services and OWL-S and complies with linked data principles. This new model shifts the service concept from functions to linked data things and opens the way for a Linked Oriented Architecture (LOA) and a Web of Services as part of, and on top of, the Web of Data. The model also provides a high level of dynamic service composition capability for more accurate dynamic composition and execution of complex business processes in the Web of Data environment.
