21

Analyse statique de requête pour le Web sémantique / Static Analysis of Semantic Web Queries

Chekol, Melisachew Wudage 19 December 2012
Query containment is a well-studied problem spanning several decades of research. Generally, it is defined as the problem of determining whether the result of one query is included in the result of another query for any given dataset. It has major applications in query optimization and knowledge-base verification. The main objective of this thesis is to provide sound and complete procedures to determine containment of SPARQL queries under expressive description logic axioms, and to support the theoretical results by experimentation. To date, query containment has been addressed using different techniques: containment mappings, canonical databases, automata-theoretic techniques, and reduction to the validity problem in logic. In this thesis, we use the latter technique, testing containment with an expressive logic called the μ-calculus. RDF graphs are encoded as transition systems, and queries and schema axioms are encoded as μ-calculus formulae; query containment is thereby reduced to a validity test in the logic. The focus of this thesis is to identify fragments of SPARQL (and PSPARQL) and description logic schema languages for which containment is decidable, and to provide theoretically and experimentally proven procedures for checking containment of those decidable fragments. Last but not least, this thesis proposes a benchmark for containment solvers, which is used to test and compare the current state-of-the-art solvers.
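As a hedged illustration of the containment problem studied here (both queries and the ex: namespace are invented for this example, not taken from the thesis): under a schema axiom stating that ex:Professor is a subclass of ex:Person, every answer to Q1 is also an answer to Q2 on any dataset, so Q1 is contained in Q2.

    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX ex:  <http://example.org/>

    # Q1: every professor
    SELECT ?x WHERE { ?x rdf:type ex:Professor }

    # Q2: every person. Under the axiom ex:Professor rdfs:subClassOf
    # ex:Person, Q1 is contained in Q2 on all datasets.
    SELECT ?x WHERE { ?x rdf:type ex:Person }

Deciding such containment automatically, for far richer query fragments and schema languages, is what the thesis reduces to validity testing in the μ-calculus.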
22

Castor : a constraint-based SPARQL engine with active filter processing / Castor : un moteur SPARQL basé sur les contraintes avec exploitation actif de filtres

Le Clement de Saint-Marcq, Vianney 16 December 2013
SPARQL is the standard query language for graphs of data in the Semantic Web. Evaluating queries is closely related to graph matching problems and has been shown to be NP-hard. State-of-the-art SPARQL engines solve queries with traditional relational database technology. Such an approach works well for simple queries that provide a clearly defined starting point in the graph. However, queries encompassing the whole graph and involving complex filtering conditions do not scale well. In this thesis we propose to solve SPARQL queries with Constraint Programming (CP). CP solves a combinatorial problem by exploiting the constraints of the problem to prune the search tree when looking for solutions, a technique that has been shown to work well for graph matching problems. We reformulate the SPARQL semantics by means of constraint satisfaction problems (CSPs). Based on this denotational semantics, we propose an operational semantics that can be used by off-the-shelf CP solvers. Off-the-shelf CP solvers are not designed to handle the huge domains that come with Semantic Web databases, though. To handle large databases, we introduce Castor, a new SPARQL engine embedding a specialized lightweight CP solver. Special care has been taken to avoid, as much as possible, data structures and algorithms whose time or space complexity is proportional to the database size. Experimental evaluations on well-known benchmarks show the feasibility and efficiency of the approach. Castor is competitive with state-of-the-art SPARQL engines on simple queries, and outperforms them on complex queries where filters can be actively exploited during the search.
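A minimal sketch of the kind of query where this matters (the pattern is illustrative, not taken from the thesis's benchmarks): the query spans the whole graph, offers no selective starting point, and its filter can prune large parts of the search tree if it is propagated during search rather than applied after matching.

    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    # Find distinct pairs of resources sharing a label. A CP solver can
    # propagate the inequality constraint while enumerating candidates,
    # instead of generating all pairs and filtering afterwards.
    SELECT ?a ?b WHERE {
      ?a rdfs:label ?l .
      ?b rdfs:label ?l .
      FILTER (?a != ?b)
    }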
23

Prédire les performances des requêtes et expliquer les résultats pour assister la consommation de données liées / Predicting query performance and explaining results to assist Linked Data consumption

Hasan, Rakebul 04 November 2014
Our goal is to assist users in understanding SPARQL query performance, query results, and derivations on Linked Data. To help users understand query performance, we provide predictions based on the query execution history, using a machine learning approach. We do not use statistics about the underlying data for our predictions, which makes our approach suitable for the Linked Data scenario, where such statistics are often missing, for example when the data is controlled by external parties. To help users understand query results, we provide provenance-based explanations. We present a non-annotation-based approach to generate why-provenance for SPARQL query results. Our approach does not require any re-engineering of the query processor, the data model, or the query language: we use the existing SPARQL 1.1 constructs to generate provenance by querying the data, which makes the approach suitable for Linked Data. We also present a user study examining the impact of query result explanations. Finally, to help users understand derivations on Linked Data, we introduce the concept of Linked Explanations. We publish explanation metadata as Linked Data, which allows explaining derived data by following the links of the data used in the derivation and the links of their explanation metadata. We present an extension of the W3C PROV ontology to describe explanation metadata, as well as an approach to summarize these explanations, helping users filter the information in an explanation and understand which important information was used in the derivation.
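A hedged sketch of the non-annotation idea (the vocabulary and this particular rewriting are assumptions for illustration; the thesis defines its own rewriting): to explain why a given answer appears, the query can be re-issued as a SPARQL 1.1 CONSTRUCT that returns the triples justifying exactly that answer.

    PREFIX foaf: <http://xmlns.com/foaf/0.1/>

    # Original query: SELECT ?name WHERE { ?p foaf:name ?name .
    #                                      ?p foaf:knows ?q }
    # Why-provenance for the answer ?name = "Alice": the source triples
    # that produced it, obtained by querying the data itself.
    CONSTRUCT { ?p foaf:name ?name . ?p foaf:knows ?q }
    WHERE {
      ?p foaf:name ?name .
      ?p foaf:knows ?q .
      FILTER (?name = "Alice")
    }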
24

Efficient Source Selection For SPARQL Endpoint Query Federation

Saleem, Muhammad 13 May 2016
The Web of Data has grown enormously over the last years. Currently, it comprises a large compendium of linked and distributed datasets from multiple domains. Due to the decentralised architecture of the Web of Data, several of these datasets contain complementary data. Running complex queries on this compendium thus often requires accessing data from different data sources within one query. The abundance of datasets and the need to run complex queries have thus motivated a considerable body of work on SPARQL query federation systems, the dedicated means to access data distributed over the Web of Data. This thesis addresses two key areas of federated SPARQL query processing: (1) efficient source selection, and (2) comprehensive SPARQL benchmarks to test and rank federated SPARQL engines as well as triple stores. Efficient Source Selection: Efficient source selection is one of the most important optimization steps in federated SPARQL query processing. An overestimation of the query-relevant data sources increases network traffic, results in irrelevant intermediate results, and can significantly affect the overall query processing time. Previous works have focused on generating optimized query execution plans for fast result retrieval. However, devising source selection approaches beyond triple-pattern-wise source selection has not received much attention. Similarly, only little attention has been paid to the effect of duplicated data on federated querying. This thesis presents HiBISCuS and TBSS, novel hypergraph-based source selection approaches, and DAW, a duplicate-aware source selection approach for federated querying over the Web of Data. Each of these approaches can be combined directly with existing SPARQL query federation engines to achieve the same recall while querying fewer data sources. We combined the three source selection approaches (HiBISCuS, DAW, and TBSS) with query rewriting to form a complete SPARQL query federation engine named Quetsal. Furthermore, we present TopFed, a Cancer Genome Atlas (TCGA) tailored federated query processing engine that exploits the data distribution to perform intelligent source selection while querying over large TCGA SPARQL endpoints. Finally, we address the issue of rights management and privacy while accessing sensitive resources. To this end, we present SAFE: a global source selection approach that enables decentralised, policy-aware access to sensitive clinical information represented as distributed RDF Data Cubes. Comprehensive SPARQL Benchmarks: Benchmarking is indispensable when aiming to assess technologies with respect to their suitability for given tasks. While several benchmarks and benchmark generation frameworks have been developed to evaluate federated SPARQL engines and triple stores, they mostly provide a one-size-fits-all solution to the benchmarking problem. This approach to benchmarking is, however, unsuitable for evaluating the performance of a triple store for a given application with particular requirements. The fitness of current SPARQL query federation approaches for real applications is difficult to evaluate with current benchmarks, as these are either synthetic or too small in size and complexity. Furthermore, state-of-the-art federated SPARQL benchmarks mostly focus on a single performance criterion, namely the overall query runtime, and thus cannot provide a fine-grained evaluation of the systems.
We address these drawbacks by presenting FEASIBLE, an automatic approach for generating benchmarks out of the query history of applications, i.e., query logs, and LargeRDFBench, a billion-triple benchmark for SPARQL query federation which encompasses real data as well as real queries pertaining to real biomedical use cases. Our evaluation results show that HiBISCuS, TBSS, TopFed, DAW, and SAFE can all significantly reduce the total number of sources selected and thus improve the overall query performance. In particular, TBSS is the first source selection approach to remain under 5% overestimation of relevant sources. Quetsal reduces the number of sources selected (without losing recall), the source selection time, and the overall query runtime compared to state-of-the-art federation engines. The LargeRDFBench evaluation results suggest that the performance of current SPARQL query federation systems on simple queries does not reflect the systems' performance on more complex queries. Moreover, current federation systems seem unable to deal with many of the challenges that await them in the age of Big Data. Finally, FEASIBLE's evaluation results show that it generates better sample queries than the state of the art. In addition, the better query selection and the larger set of query types used lead to triple store rankings which partly differ from the rankings generated by previous works.
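As a hedged illustration of what source selection decides (the endpoint URLs and predicates are hypothetical), a federation engine that has pruned the irrelevant endpoints can route each group of triple patterns only to the sources that actually contribute answers:

    PREFIX ex: <http://example.org/>

    # After source selection, each exclusive group of triple patterns
    # is shipped only to the endpoint that can answer it.
    SELECT ?drug ?pathway WHERE {
      SERVICE <http://drugs.example.org/sparql> {
        ?drug ex:target ?protein .
      }
      SERVICE <http://pathways.example.org/sparql> {
        ?protein ex:pathway ?pathway .
      }
    }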
25

[pt] INFERÊNCIA DE TUNING ATRAVÉS DA ONDBTUNING / [en] TUNING INFERENCE THROUGH ONDBTUNING

Luciana de Sa Silva Perciliano 11 April 2022
OnDBTuning is a relational database (semi-automatic) tuning ontology. Ontologies are artifacts that represent the knowledge of a specific domain and can be used to infer new knowledge. In general, however, most applications include only a formal, static description of concepts. Moreover, as database tuning involves many rules of thumb and black-box algorithms, describing these inference procedures is challenging. This research work first presents the OnDBTuning ontology, focusing on the inference of tuning actions. Next, it proposes an implementation of the OnDBTuning rules using SPARQL Inferencing Notation (SPIN). Finally, it shows a practical evaluation of the solution concerning index and materialized view recommendations.
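For context, SPIN expresses inference rules as SPARQL CONSTRUCT templates attached to ontology classes. A minimal sketch of such a rule (the tuning: vocabulary and the thresholds are invented here; the thesis's actual rules differ):

    PREFIX tuning: <http://example.org/ondbtuning#>

    # SPIN-style rule: infer an index recommendation for columns that
    # are highly selective and frequently filtered in the workload.
    CONSTRUCT {
      ?col tuning:hasRecommendation tuning:CreateIndex .
    }
    WHERE {
      ?col a tuning:Column ;
           tuning:selectivity ?s ;
           tuning:workloadFrequency ?f .
      FILTER (?s < 0.1 && ?f > 100)
    }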
26

Semantic Web Queries over Scientific Data

Andrejev, Andrej January 2016
Semantic Web and Linked Open Data provide a potential platform for interoperability of scientific data, offering a flexible model for providing machine-readable and queryable metadata. However, RDF and SPARQL have gained only limited adoption within the scientific community, mainly due to the lack of support for managing massive numeric data, along with certain other important features, such as extensibility with user-defined functions, query modularity, and integration with existing environments and workflows. We present the design, implementation, and evaluation of Scientific SPARQL, a language for querying data and metadata combined, represented using the RDF graph model extended with numeric multidimensional arrays as node values: RDF with Arrays. The techniques used to store RDF with Arrays in a scalable way and to process Scientific SPARQL queries and updates are implemented in our prototype software, the Scientific SPARQL Database Manager (SSDM), and its integrations with data storage systems and computational frameworks. This includes scalable storage solutions for numeric multidimensional arrays and an efficient implementation of array operations. The arrays can be physically stored in a variety of external storage systems, including files, relational databases, and specialized array data stores, using our Array Storage Extensibility Interface. Whenever possible, SSDM accumulates array operations and accesses array contents lazily. In scientific applications, numeric computations are often used for filtering or post-processing the retrieved data, which can be expressed in a functional way. Scientific SPARQL allows expressing common query sub-tasks as functions defined by parameterized queries. This becomes especially useful along with functional language abstractions such as lexical closures and second-order functions, e.g. array mappers. Existing computational libraries can be interfaced and invoked from Scientific SPARQL queries as foreign functions, and cost estimates and alternative evaluation directions may be specified, aiding the construction of better execution plans. Costly array processing, e.g. filtering and aggregation, is thus performed on the server, reducing the amount of communication. Furthermore, common supported operations are delegated to the array storage back-ends, according to their capabilities. Both the expressivity and the performance of Scientific SPARQL are evaluated on a real-world example, and further performance tests are run using our mini-benchmark for array queries.
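A hedged sketch in the spirit of Scientific SPARQL (the ex: vocabulary is invented, and ex:array_avg stands in for an array function; it is written here as a standard SPARQL extension-function call, not as the language's literal syntax):

    PREFIX ex: <http://example.org/>

    # Query metadata and array contents together: retrieve experiments
    # whose readings array has a high average. Filtering on the array
    # value lets the server perform the costly computation.
    SELECT ?exp ?avg WHERE {
      ?exp a ex:Experiment ;
           ex:readings ?a .
      BIND (ex:array_avg(?a) AS ?avg)
      FILTER (?avg > 0.5)
    }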
27

Scalable Discovery and Analytics on Web Linked Data

Abdelaziz, Ibrahim 07 1900
Resource Description Framework (RDF) provides a simple way of expressing facts across the web, leading to Web linked data. Several distributed and federated RDF systems have emerged to handle the massive amounts of RDF data available nowadays. Distributed systems are optimized to query massive datasets that appear as a single graph, while federated systems are designed to query hundreds of decentralized and interlinked graphs. This thesis starts with a comprehensive experimental study of the state-of-the-art RDF systems. It identifies a set of research problems for improving the state of the art, including: supporting the emerging RDF analytics required by many modern applications, querying linked data at scale, and enabling discovery on linked data. Addressing these problems is the focus of this thesis. First, we propose Spartex, a versatile framework for complex RDF analytics. Spartex extends SPARQL to seamlessly combine generic graph algorithms with SPARQL queries. Spartex implements a generic SPARQL operator as a vertex-centric program that interprets SPARQL queries and executes them efficiently using a built-in optimizer. We demonstrate that Spartex scales to datasets with billions of edges and is at least as fast as the state-of-the-art specialized RDF engines. For analytical tasks, Spartex is an order of magnitude faster than existing alternatives. To address the scalability limitation of federated RDF engines, we propose Lusail, a scalable system for querying geo-distributed RDF graphs. Lusail follows a two-tier strategy: (i) locality-aware decomposition of the query into subqueries, to maximize the computations at the endpoints and minimize intermediary results, and (ii) selectivity-aware execution, to reduce network latency and increase parallelism. Our experiments on billions of triples show that Lusail outperforms existing systems by orders of magnitude in scalability and response time. Finally, enabling discovery on linked data is challenging due to the prior knowledge required to formulate SPARQL queries. To address this challenge, we develop novel techniques to (i) predict semantically equivalent SPARQL queries from a set of keywords by leveraging word embeddings, and (ii) generate fine-grained and non-blocking query plans to get fast and early results.
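A hedged sketch of locality-aware decomposition (the graph layout and predicates are invented for illustration): the global query is split so that each subquery evaluates entirely at one endpoint, and only the join variable crosses the network.

    PREFIX ex: <http://example.org/>

    # Global intent: papers and their authors citing post-2010 work,
    # with citation data at endpoint A and publication years at B.

    # Subquery shipped to endpoint A:
    SELECT ?paper ?author ?cited WHERE {
      ?paper ex:cites ?cited ;
             ex:author ?author .
    }

    # Subquery shipped to endpoint B:
    SELECT ?cited WHERE {
      ?cited ex:year ?y .
      FILTER (?y > 2010)
    }

    # The engine then joins the two result sets on ?cited.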
28

Indexing RDF data using materialized SPARQL queries

Espinola, Roger Humberto Castillo 10 September 2012
In this thesis, we propose to use materialized queries as a special index structure for RDF data. We strive to reduce the query processing time by minimizing the number of comparisons between the query and the RDF dataset. We also emphasize the role of cost models in the selection of execution plans, as well as of index sets for a given workload. We provide an overview of the materialized view selection problem in relational databases and discuss its application to the optimization of query processing. We introduce RDFMatView, a framework for answering SPARQL queries using materialized views as indexes. We provide algorithms to discover the indexes that can be used to process a given query, and we develop different strategies to integrate these views into query execution plans. The selection of an efficient execution plan is the topic of our second major contribution: we introduce three different cost models designed for SPARQL query processing with materialized views. A detailed comparison of these models reveals that a model based on index and predicate statistics provides the most accurate cost estimation, and we show that selecting an execution plan using this cost model reduces processing time by several orders of magnitude compared to standard SPARQL query processing. Finally, we propose a simple yet effective strategy for the materialized view selection problem applied to RDF data. Based on a given workload of SPARQL queries, we provide algorithms for selecting a set of indexes that minimizes the workload processing time. We create candidate indexes by retrieving connected components from query patterns. Our evaluation shows that the set of suggested indexes usually achieves larger runtime savings than other index sets for the given workload.
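A minimal sketch of the idea (both queries invented for illustration): a materialized query V is computed once and stored; an incoming query whose pattern is subsumed by V can then be answered from the much smaller view instead of the full dataset.

    PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    PREFIX ex:   <http://example.org/>

    # Materialized view V, precomputed and stored:
    SELECT ?s ?name WHERE {
      ?s rdf:type ex:Person ;
         foaf:name ?name .
    }

    # Incoming query Q: subsumed by V, so it can be answered by
    # filtering V's stored results rather than scanning the dataset.
    SELECT ?s WHERE {
      ?s rdf:type ex:Person ;
         foaf:name "Alice" .
    }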
29

Podpora sémantiky v CMS Drupal / Semantic support in CMS Drupal

Ivančo, Daniel January 2012
The aim of this diploma thesis is to map the semantic features of CMS Drupal version 7. The first part of the work theoretically describes the semantic web domain and CMS Drupal. The second, practical part maps in detail all the features of the semantic web that are supported by Drupal. These semantic features are mapped from two different points of view: implementation and functionality. The main contribution of this work is the method used to map these features: it is based on modifying and reviewing the code of Drupal plugins in order to demonstrate features that are not necessarily completely documented or functional. All of these features are demonstrated on examples created as part of this thesis. The last part of the work compares the mapped features to similar CMS systems.
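For context, Drupal 7 core ships an RDF module that annotates rendered pages with RDFa using vocabularies such as SIOC and Dublin Core. A hedged sketch of a query over triples extracted from such markup (the default mappings can vary by site configuration):

    PREFIX sioc: <http://rdfs.org/sioc/ns#>
    PREFIX dc:   <http://purl.org/dc/terms/>

    # List content items and their titles as exposed by Drupal's RDFa.
    SELECT ?item ?title WHERE {
      ?item a sioc:Item ;
            dc:title ?title .
    }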
30

Lenguaje de especificación para la delegación de tareas en Servidores Web mediante agentes / A specification language for task delegation in Web servers through agents

Chambilla Aquino, Teófilo January 2016
Magíster en Ciencias, Mención Computación / Agent technology has become the foundation of a large number of applications, as it allows incorporating knowledge bases of actions and tasks to solve complex problems. Web servers, on the other hand, rely on the HTTP protocol, which only supports requests and responses between client and server and does not allow delegating functions to other, geographically separated servers. This research is an exploratory study of the concept of delegation in the context of the Web, where agents residing on different Web servers cooperate to solve complex tasks. To this end, we propose a specification language for delegating tasks to Web servers through agents, with the properties required for their autonomy, so that they can be used flexibly in distributed environments under the constraints of the HTTP communication protocol. First, we present the abstract model of delegation in the Web environment and the components needed to build the proposed specification language, by defining the basic and optional actions implemented by the agents participating in the delegation process. Second, as a case study, we develop a distributed, agent-based implementation of NautiLOD. NautiLOD is a declarative expression language designed to specify navigation patterns in the Linked Open Data network, whose earlier implementation proposals followed a centralized approach. Third, we present Agent Server, a flexible and scalable platform for Web-based multi-agent systems, built on REST principles, which manages distributed agents. The main conclusion of the thesis is the validation of the specification language on a homogeneous platform, Linked Data, whose semantics allow agents to process its content, reason about it, and draw logical inferences. This was done with queries on SPARQL endpoints expressed in NautiLOD.
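A hedged illustration of the building block being delegated (the resource and predicate come from DBpedia's public vocabulary; the enclosing NautiLOD expression is omitted): during distributed evaluation, an agent forwards to a remote SPARQL endpoint exactly the navigation step it is responsible for.

    PREFIX dbo: <http://dbpedia.org/ontology/>

    # Step delegated to the DBpedia endpoint by an agent evaluating a
    # navigation expression starting at the resource for Chile.
    SELECT ?city WHERE {
      <http://dbpedia.org/resource/Chile> dbo:capital ?city .
    }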
