  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Efektivní paralelní zpracování dat na moderním hardware / Towards Efficient Parallel Data Processing on Modern Hardware

Falt, Zbyněk January 2014 (has links)
Parallel data processing is a highly active research topic, since the amount of data and the complexity of the operations performed on it have increased significantly in the past few years. In this thesis, we focus on a specific domain of this research -- the design and implementation of parallel algorithms used mainly in database systems. First, we introduce important enhancements to the Bobox system, a framework for the development of parallel data processing applications. Then, we introduce a new domain-specific language called Bobolang, which makes the implementation of such applications easier. Next, we propose parallel and scalable algorithms used in the database domain, namely sort and merge join, and present their efficient implementation using the combination of Bobox and Bobolang. Finally, we introduce a parallel runtime for a SPARQL engine as an example of a parallel data processing application that demonstrates the main contributions of this thesis in complex, real-life situations.
12

Hybrid Question Answering over Linked Data

Bahmid, Rawan 13 August 2018 (has links)
The emergence of Linked Data in the form of RDF knowledge graphs has been one of the most recent evolutions of the Semantic Web. This has led to the development of question answering systems based on RDF and SPARQL that allow end users to access and benefit from these knowledge graphs. However, much information on the Web is still unstructured, which limits the ability to answer questions whose answers do not exist in a knowledge base. To tackle this issue, hybrid question answering has emerged as an important challenge: it combines structured (RDF) and unstructured (text) knowledge sources to produce a single answer. This thesis tackles hybrid question answering over natural language questions. It focuses on the analysis and improvement of an open-source system called HAWK, identifies its limitations, and provides solutions and recommendations in the form of a generic question-answering pipeline called HAWK_R. Our system mostly uses heuristic methods, patterns, the ontological schema and the knowledge base, and provides three main additions: question classification, annotation, and answer verification and ranking based on query content. Our results show a clear improvement over the original HAWK on several Question Answering over Linked Data (QALD) competitions. Moreover, our methods are not limited to HAWK and can also help increase the performance of other question answering systems.
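As a hedged illustration of the hybrid setting (the example question, prefixes and properties below are assumptions made for this listing, not HAWK_R's actual vocabulary), the structured half of a question such as "Which Nobel laureates in Literature studied law?" could be answered by a SPARQL query over the knowledge graph, while the unstructured half is resolved against free text and intersected with the returned bindings:

    # Structured sub-query: candidate entities from the RDF knowledge graph
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT DISTINCT ?laureate WHERE {
      ?laureate dbo:award dbr:Nobel_Prize_in_Literature .
    }
    # The remaining, unstructured part of the question ("... studied law")
    # is matched against text and used to filter these candidates.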
13

Updating RDFS ABoxes and TBoxes in SPARQL

Ahmeti, Albin, Calvanese, Diego, Polleres, Axel January 2014 (has links) (PDF)
Updates in RDF stores have recently been standardised in the SPARQL 1.1 Update specification. However, computing answers entailed by ontologies in triple stores is usually treated as orthogonal to updates. Even the W3C's recent SPARQL 1.1 Update and SPARQL 1.1 Entailment Regimes specifications explicitly refrain from standardising how SPARQL endpoints should treat entailment regimes other than simple entailment in the context of updates. In this paper, we take a first step to close this gap. We define a fragment of SPARQL basic graph patterns corresponding to (the RDFS fragment of) DL-Lite and the corresponding SPARQL update language, dealing with updates of both ABox and TBox statements. We discuss possible semantics along with potential strategies for implementing them. We treat both (i) materialised RDF stores, which store all entailed triples explicitly, and (ii) reduced RDF stores, that is, redundancy-free RDF stores that do not store any RDF triples (corresponding to DL-Lite ABox statements) that are already entailed by other triples. / Series: Working Papers on Information Systems, Information Business and Operations
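As a small, hedged sketch of that gap (the IRIs and TBox triple are assumptions, not taken from the paper), suppose the store's TBox states that every Student is a Person, and consider two separate SPARQL 1.1 Update requests:

    # TBox (RDFS), already in the store:  ex:Student rdfs:subClassOf ex:Person .
    PREFIX ex: <http://example.org/>
    INSERT DATA { ex:alice a ex:Student . }
    # A materialised store must now also add the entailed triple  ex:alice a ex:Person ,
    # while a reduced store leaves it implicit.

    PREFIX ex: <http://example.org/>
    DELETE DATA { ex:alice a ex:Person . }
    # This deletion only has a lasting effect if entailed copies of the triple are also
    # dealt with -- precisely the behaviour that the standards leave open and that the
    # semantics proposed in the paper are meant to pin down.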
14

Towards Making Distributed RDF Processing FLINKer

Azzam, Amr, Kirrane, Sabrina, Polleres, Axel January 2018 (has links) (PDF)
In the last decade, the Resource Description Framework (RDF) has become the de facto standard for publishing semantic data on the Web. This steady adoption has led to a significant increase in the number and volume of available RDF datasets, exceeding the capabilities of traditional RDF stores. This scenario has introduced severe big semantic data challenges when it comes to managing and querying RDF data at Web scale. Despite the existence of various off-the-shelf Big Data platforms, processing RDF in a distributed environment remains a significant challenge. In this position paper, based on an in-depth analysis of the state of the art, we propose to manage large RDF datasets in Flink, a well-known scalable distributed Big Data processing framework. Our approach, which we refer to as FLINKer, extends the native graph abstraction of Flink, called Gelly, with RDF graph and SPARQL query processing capabilities.
15

Canonicalisation of SPARQL Queries

Salas Trejo, Jaime Osvaldo January 2018 (has links)
Magíster en Ciencias, Mención Computación. Ingeniero Civil en Computación / SPARQL is the standard query language for RDF, defined by the World Wide Web Consortium. There is currently a large number of SPARQL query services on the Web, and these services face heavy daily demand. Given the volume of queries they must process, query processors suffer an overhead that could be reduced if we were able to detect equivalent queries. Our proposal consists of the design and implementation of an efficient canonicalisation algorithm that computes a canonical form for queries. Equivalent queries must have the same canonical form, which makes it possible to detect a greater number of duplicate queries. Our work covers a significant part of SPARQL 1.0, mainly queries in the form of unions of conjunctive queries. The algorithm we have developed performs a complete canonicalisation of queries containing these operations. For the remaining operations we perform a partial canonicalisation, since they are widely used in real queries. We designed experiments to test the correctness and performance of our algorithm against other syntactic methods. We ran the experiments on real queries extracted from RDF database logs, as well as on synthetic queries designed to force worst-case behaviour of the algorithm. The results are encouraging: most real queries are processed in a short time, under one second, and the number of duplicate queries found is considerably higher than that found by the baseline algorithm. The algorithm only fails on the highly complex synthetic queries we designed ourselves, which do not occur in practice. This document presents the work carried out.
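For example (queries invented for illustration, not taken from the thesis), the two unions of conjunctive queries below are syntactically different -- the UNION branches are swapped, the triple patterns reordered and a variable renamed -- yet equivalent, so a canonicalisation algorithm should map both to the same canonical form:

    # Query A
    PREFIX ex: <http://example.org/>
    SELECT ?name WHERE {
      { ?p a ex:Student . ?p ex:name ?name . }
      UNION
      { ?p a ex:Teacher . ?p ex:name ?name . }
    }

    # Query B -- equivalent: branches swapped, patterns reordered, ?p renamed to ?x
    PREFIX ex: <http://example.org/>
    SELECT ?name WHERE {
      { ?x ex:name ?name . ?x a ex:Teacher . }
      UNION
      { ?x ex:name ?name . ?x a ex:Student . }
    }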
16

Semantinės informacijos išrinkimo iš reliacinių duomenų bazių metodas taikant ontologijas / Method of Semantic Information Retrieval from Relational Databases Using Ontologies

Šukys, Algirdas 26 August 2010 (has links)
Ontologies are becoming increasingly popular because they allow organizations to describe their problem domains in a more flexible manner, to search information from multiple sources, and to give semantically more precise results to users. However, the increasing amount of information in an ontology makes storing it in a text file ineffective. The aim of this research is to improve the possibilities of querying large ontologies when they are kept in relational databases. A method was created for executing SPARQL queries over an ontology stored in a relational database produced by the OWL2RDB algorithm. Experiments have shown that the method improves query execution time in comparison with an existing query engine, especially for large ontologies with many individuals.
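For instance (the class and property IRIs are illustrative assumptions, not the ontology used in the experiments), the cost of a typical instance-level query like the one below grows with the number of individuals in the ontology, which is exactly the situation the relational storage produced by OWL2RDB is meant to handle better than a text file:

    PREFIX ex: <http://example.org/university#>
    SELECT ?student ?name WHERE {
      ?student a ex:Student ;        # instance lookup over a potentially large set of individuals
               ex:hasName ?name .
    }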
17

Razvoj ontološki baziranog informacionog sistema državnih kreditno-garancijskih fondova / Development of the ontological based information system of the state credit-guarantee funds

Arsovski Saša 26 November 2015 (has links)
The research in this doctoral dissertation has three main objectives. The first objective is to study the possibilities of modelling business information systems using an ontological approach. The second objective is to develop a model of an information system for state guarantee funds based on the object-oriented and ontological approaches to modelling. The third objective is to implement a prototype of an information system for state credit-guarantee funds and to verify it on the case of the Guarantee Fund of APV. Methodology: An ontological approach was used to model the system, applying the METHONTOLOGY methodology together with tools for ontology modelling and analysis (Protege). The Microsoft .NET platform was used for the implementation of the prototype. The research results were verified and tested with data from the operating activities of the Guarantee Fund of APV.

Results: Based on research in the field of modelling information systems using the ontological approach and on the identified models of how state guarantee funds operate, a conceptual model of state guarantee funds was created, which served as the basis for developing the ontological model of the Guarantee Fund of APV. The ontological model OMGFAPV was created using the METHONTOLOGY methodology. OMGFAPV semantically describes the position and hierarchy of the Guarantee Fund of APV within the provincial administration, as well as the contents used for the further development of part of the fund's information system. Two scientific contributions were realised within these studies. The first contribution is a proposed methodology for transforming the semantic contents described in the ontological model into the user interface, a standard component of the information system. The second contribution is the use of ontological models for modelling and implementing business logic. These studies covered the specifics of decision support in the process of issuing guarantees in state guarantee funds. The result of this research is the SCORE ontology, which constitutes a separate component of the information system of the Guarantee Fund of APV.

Research limitations / implications: The limitations of the proposed OMGFAPV model concern the need to extend it for specific needs, that is, for more complex operating procedures in the guarantee-issuing process. The proposed methodology for transforming the semantic content of the ontological model into the user interface is limited to creating user interface components for entering the input data of business procedures. The key limitation of the proposed SCORE ontology is its strict specialisation: it implements social and economic characteristics that are important for the development of one particular region and one concrete model for evaluating guarantee issuance based on priority indices. The ontology does not contain generalisations that would allow its direct use for other regions or other models for evaluating proposed decisions on issuing guarantees.

Practical implications: The created model of the information system supports the creation and monitoring of data and documents at all stages of the guarantee-issuing process, with special emphasis on assessing the creditworthiness of applicants and on the distribution and exchange of data with all participants in the process. The system provides at least syntactic and, as far as possible, semantic interoperability with external participants in the process (other financial institutions, administrative authorities), as well as ease of use for users (primarily applicants, but also administrative officers) who do not possess specific expertise in the work of a guarantee fund.

Originality / value: The ontological approach to system modelling, and in particular the creation of ontologies representing knowledge about administrative processes, creates the prerequisites for technical and organisational interoperability of various government bodies. An especially important aspect is a flexible and cost-effective mechanism for creating a user interface that supports the interaction of different types of users (administration employees, employees of companies, and citizens) with an eGovernment system. The ontology-based user interface generation model described in this dissertation presents the idea of standardising the representation of the user interface: generating and designing the user interface is reduced to developing a formalised ontology of the administrative process that includes a description of user interaction with the system via annotations of operating procedures. The ontology in which the social and economic characteristics relevant to the operation of state development funds and to the development of the analysed region are semantically represented supports a decision-making process for placing state funds in accordance with national development strategies.
18

Dinaminio semantinių užklausų formavimo sąsaja / Dynamic user interface for semantic queries

Spudys, Kęstutis 07 August 2012 (has links)
The increasing popularity of the Semantic Web raises the question of how to build a simple user interface for constructing semantic queries while keeping the precision of the returned results high. This thesis presents a method that helps users create SPARQL queries by dynamically adding components to the user interface. The goal of the work is to improve the user interface model for semantic queries by allowing users to construct and change it dynamically until they obtain the desired results. The model was created on the basis of an analysis of Semantic Web languages, tools and existing portals, their functions and user interfaces. Algorithms for dynamic user interface generation based on user actions were developed that allow queries of various complexities to be created with a minimal number of user interface components. Implementing and testing a prototype of the system using movie and wine ontologies has shown that dynamic construction and generation of the query interface provides the desired functionality and is easily applicable to various ontologies. An experimental comparison with existing semantic search portals has shown that the proposed dynamic user interface generation method can improve the precision and recall of semantic queries and may be applied in semantic search portals.
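A sketch of the kind of stepwise construction such an interface supports, using a wine ontology as in the experiments (the namespace and terms are assumptions for illustration only):

    # Step 1: the user selects a class            ->  SELECT ?wine WHERE { ?wine a vin:Wine }
    # Step 2: a property component is added       ->  ... ; vin:hasColor ?color
    # Step 3: a value constraint narrows the results, yielding the query below
    PREFIX vin: <http://example.org/wine#>   # illustrative namespace
    SELECT ?wine WHERE {
      ?wine a vin:Wine ;
            vin:hasColor vin:Red .
    }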
19

Ontologija grįsta kompiuterinių gedimų diagnostikos sistema / Ontology-based system for dealing with computer faults

Sakalauskas, Rimantas 03 September 2010 (has links)
This work analyses Semantic Web technologies for representing and managing domain knowledge. In the course of the work, the domain ontology was extended and a new system ontology for generating the user interface, with support for multilingualism, was created. Queries and automation processes working with these ontologies were implemented and validated. The knowledge accumulated while carrying out the work was collected, systematised and presented as a methodology. Based on this methodology, an experimental system was built to help identify and resolve problems related to BIOS errors and/or the causes of operating system (OS) failures. A talk on the developed system was given at the conference "Mokslas ir studijos 2010: teorija ir praktika" ("Science and Studies 2010: Theory and Practice"), held at Šiaurės Lietuvos kolegija (Northern Lithuania College). The methodology can be applied to any subject domain, both for implementing problem-based learning principles in e-learning systems and for providing support services to consumers of a product or good.
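As a purely hypothetical sketch (the vocabulary below is invented and is not the ontology developed in this work), a diagnosis lookup in such a system could amount to a SPARQL query mapping an observed BIOS symptom to suggested solutions:

    PREFIX ex: <http://example.org/diagnostics#>   # hypothetical fault-diagnosis vocabulary
    SELECT ?fault ?solution WHERE {
      ?fault a ex:BiosFault ;
             ex:hasSymptom ex:ThreeShortBeeps ;    # the symptom reported by the user
             ex:suggestedSolution ?solution .
    }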
20

Static analysis of semantic web queries with ShEx schema constraints / Analyse statique de requêtes au web sémantique avec des contraintes de schéma ShEx

Abbas, Abdullah 06 November 2017 (has links)
Data structured according to the Resource Description Framework (RDF) are increasingly available in large volumes. This leads to a major need for, and research interest in, novel methods for query analysis and compilation that make the most of RDF data extraction. SPARQL is the widely used and well-supported standard query language for RDF data. In parallel to the evolution of the query language, schema languages for expressing constraints on RDF datasets are also evolving; Shape Expressions (ShEx) are increasingly used to validate RDF data and to communicate expected graph patterns. Schemas in general are important for static analysis tasks such as query optimisation and containment. Our purpose is to investigate the means and methodologies for SPARQL query static analysis and optimisation in the presence of ShEx schema constraints. Our contribution is divided into two main parts. In the first part we consider the problem of SPARQL query containment in the presence of ShEx constraints. We propose a sound and complete procedure for the containment problem with ShEx, covering several SPARQL fragments. In particular, our procedure handles OPTIONAL query patterns, which turn out to be an important feature to study together with schemas. We provide complexity bounds for the containment problem with respect to the language fragments considered. We also propose an alternative method for SPARQL query containment with ShEx, by reduction to First Order Logic satisfiability, which allows us to consider an extension of the SPARQL fragment handled by the first method. This is the first work addressing SPARQL query containment in the presence of ShEx constraints. In the second part of our contribution we propose an analysis method to optimise the evaluation of conjunctive SPARQL queries on RDF graphs by taking advantage of ShEx constraints. The optimisation is based on computing and assigning ranks to the triple patterns of a query, dictating their order of execution. The presence of intermediate joins between the triple patterns is the reason why this ordering matters for efficiency. We define a set of well-formed ShEx schemas that possess characteristics useful for SPARQL query optimisation, and we develop our optimisation method by exploiting information extracted from a ShEx schema. We finally report on evaluation results showing the advantages of applying our optimisation on top of an existing state-of-the-art query evaluation system.
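A hedged sketch of the second idea (the shape and query are invented; the actual ranking criteria are those developed in the thesis): cardinality and type information declared in a ShEx shape can be used to rank the triple patterns of a conjunctive query so that more selective patterns are evaluated first.

    # Assumed ShEx shape, shown here as a comment (ShEx compact syntax):
    #   ex:PersonShape { ex:email xsd:string ; ex:worksFor IRI ? }
    # i.e. every conforming node has exactly one email and at most one employer.
    PREFIX ex: <http://example.org/>
    SELECT ?p ?email ?company WHERE {
      ?p ex:worksFor ?company .   # one plausible ranking evaluates the at-most-one pattern first
      ?p ex:email    ?email .     # the always-present pattern is joined afterwards
    }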
