91 |
Contrôle d'accès et présentation contextuelle pour le Web des données / Context-aware access control and presentation of linked data. Costabello, Luca. 29 November 2013
This thesis discusses the influence of mobile context awareness on accessing the Web of Data from handheld devices. The work dissects this issue into two research questions: how to enable context-aware adaptation for Linked Data consumption, and how to protect access to RDF stores from context-aware devices. The contribution to the first research question is PRISSMA, an RDF rendering engine that extends Fresnel with context-aware selection of the presentation best suited to the mobile context. This selection is performed by an error-tolerant subgraph matching algorithm based on the notion of graph edit distance. The algorithm takes into account the discrepancies between context descriptions and the sensed context, supports heterogeneous context dimensions, and runs on the client side to avoid disclosing sensitive context information. The second research activity presented in the thesis is the Shi3ld access control framework for Linked Data servers. Shi3ld is a pluggable filter for generic triple stores, with no need to modify the endpoint itself. It adopts exclusively Semantic Web languages and adds no new policy definition languages, parsers, or validation procedures. Shi3ld provides protection down to the triple level. The thesis describes both the PRISSMA and Shi3ld prototypes. Test campaigns show the validity of PRISSMA results, along with its memory and response time performance. The Shi3ld access control module has been tested on different triple stores, with and without SPARQL engines. Results show the impact on response time and demonstrate the feasibility of the approach.
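The error-tolerant matching step described in this abstract can be illustrated with a small sketch. The Python fragment below, a minimal illustration and not PRISSMA's actual implementation, encodes a declared context description and a sensed context as labeled directed graphs and uses networkx's graph edit distance to score how well they match; all node and edge names are invented for the example.

import networkx as nx

def context_graph(triples):
    """Build a labeled digraph from (subject, property, object) triples."""
    g = nx.DiGraph()
    for s, p, o in triples:
        g.add_edge(s, o, label=p)
    return g

# Context description attached to a presentation by its designer.
declared = context_graph([
    ("ctx", "user", "Tourist"),
    ("ctx", "environment", "env"),
    ("env", "motion", "Walking"),
])

# Context assembled at runtime from on-device sensors.
sensed = context_graph([
    ("ctx", "user", "Tourist"),
    ("ctx", "environment", "env"),
    ("env", "motion", "Driving"),
    ("env", "light", "Daylight"),
])

# Lower cost = better fit; a renderer would rank candidate
# presentations by this score and pick the cheapest match.
cost = nx.graph_edit_distance(
    declared, sensed,
    edge_match=lambda a, b: a["label"] == b["label"],
)
print(cost)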
|
92 |
Resource Centered Store. Heese, Ralf. 04 January 2016
The Resource Description Framework (RDF) is the conceptual foundation for representing properties of real-world or virtual resources and for describing the relationships between them. Standards based on RDF allow machines to access and process information automatically, locate additional data about resources, and discover relationships between concepts. The smallest information units in RDF are triples, which form a directed labeled multigraph. The query language SPARQL is also based on a graph model, which makes it difficult for relational DBMS to store and query RDF data efficiently. The most performant DBMS for managing and querying RDF data implement an RDF-specific storage model based on a set of B+ tree indexes. The key disadvantages of these systems are the increased use of secondary storage caused by redundantly stored triples, and the need for expensive join operations to compute the solutions of a SPARQL query. In this work we develop and describe the Resource Centered Store (RCS), which exploits RDF-inherent characteristics to avoid storing triples redundantly while improving the performance of larger queries. In the RCS storage model, triples are grouped by their first component (the subject), and these star-shaped subgraphs are stored on database pages, similar to relational DBMS. As a result, the RCS can benefit from principles and algorithms that have been developed in the context of relational databases. Additionally, we defined transformation rules and heuristics to optimize SPARQL queries and generate an efficient query execution plan. In this context we also defined graph-pattern-based indexes and investigated their benefit for computing the solutions of queries. We implemented the RCS storage model prototypically and compared it to the native RDF DBMS Jena TDB. Our experiments show that the storage model is especially suited to speeding up queries with large star-shaped graph patterns.
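The core of the storage model, grouping triples into star-shaped subgraphs keyed by subject and packing whole stars onto fixed-size pages, can be sketched in a few lines. This is a simplified illustration of the idea, not the RCS implementation; the page capacity and the first-fit packing policy are assumptions.

from collections import defaultdict

PAGE_CAPACITY = 32  # assumed: max (predicate, object) pairs per page

def build_stars(triples):
    """Group triples by subject into star-shaped subgraphs."""
    stars = defaultdict(list)
    for s, p, o in triples:
        stars[s].append((p, o))
    return stars

def pack_pages(stars, capacity=PAGE_CAPACITY):
    """First-fit packing of whole stars onto pages, so a subject's
    star can be read with a single page access; an oversized star
    simply gets a page of its own in this sketch."""
    pages, current, used = [], {}, 0
    for subject, edges in stars.items():
        if used + len(edges) > capacity and current:
            pages.append(current)
            current, used = {}, 0
        current[subject] = edges
        used += len(edges)
    if current:
        pages.append(current)
    return pages

triples = [
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:name", '"Bob"'),
]
print(pack_pages(build_stars(triples)))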
|
93 |
Γραμματειακή υποστήριξη σχολών πανεπιστημίων: Ανάπτυξη ιστοσελίδας με χρήση τεχνολογιών Σημασιολογικού Ιστού (Semantic Web) / Secretarial support for university schools: Developing a website using Semantic Web technologies. Φωτεινός, Γεώργιος. 30 April 2014
A subset of the vast amount of information on the Web concerns open data: information, public or otherwise, to which everyone can have access and which can be reused for any purpose in order to add value to it. The potential of open data becomes apparent when datasets of public bodies are transformed into truly open data, i.e. without legal, financial, or technological restrictions on further use by third parties. The open data of a university department or school can add value and have a positive impact on many different areas, such as participation, innovation, improvement of the efficiency and effectiveness of university services, and the generation of new knowledge by combining data. The ultimate goal is to turn open data into Linked Open Data. Linked Data becomes meaningful and processable by machines because it is semantically described using ontologies; the data thus become more "intelligent" and more useful through the structure they acquire. In this thesis, a prototype web portal is implemented using the Drupal content management system (CMS), which incorporates Semantic Web technologies in its core, in order to convert the data of a university department or school into Linked Open Data available on the third-generation Web, the Semantic Web.
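As a minimal sketch of what "semantically described using ontologies" means in practice, the following fragment uses rdflib to describe a university course as Linked Data. The department namespace, the course identifier, and the taughtBy property are invented for the example, and the use of the AIISO academic vocabulary is an assumption; a real deployment would choose its vocabularies deliberately.

from rdflib import Graph, Literal, Namespace, RDF, RDFS

DEPT = Namespace("http://example.org/cs-department/")      # hypothetical
AIISO = Namespace("http://purl.org/vocab/aiiso/schema#")   # assumed vocabulary

g = Graph()
g.bind("dept", DEPT)
g.bind("aiiso", AIISO)

course = DEPT["course/semantic-web"]
g.add((course, RDF.type, AIISO.Course))
g.add((course, RDFS.label, Literal("Semantic Web Technologies", lang="en")))
g.add((course, DEPT.taughtBy, DEPT["staff/instructor-1"]))  # hypothetical property

# Serialized as Turtle, this record is ready to publish as Linked Open Data.
print(g.serialize(format="turtle"))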
|
94 |
Learning OWL Class Expressions. Lehmann, Jens. 24 June 2010
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems.
However, recent progress in the field faces a lack of well-structured ontologies with large amounts of instance data, since engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web.
In order to leverage machine-learning approaches for solving these tasks, it is required to develop methods and tools for learning concepts in description logics or, equivalently, class expressions in OWL. In this thesis, it is shown that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular in cases when extensional information (i.e. facts, instance data) is abundantly available, while corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. It can be argued that being able to learn OWL class expressions is a step towards enriching OWL knowledge bases in order to enable powerful reasoning, consistency checking, and improved querying possibilities. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work.
The developed algorithms are not restricted to ontology engineering and can handle other learning problems. Indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference, however, is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, as such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can be understood as a broadening of the scope of research and applications of ILP methods. This goal is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further in the future.
The thesis starts by establishing the necessary theoretical basis and continues with the specification of algorithms. It also contains their evaluation and, finally, presents a number of application scenarios. The research contributions of this work are threefold:
The first contribution is a complete analysis of desirable properties of refinement operators in description logics. Refinement operators are used to traverse the target search space and are, therefore, a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for being employed in a learning algorithm. The key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of other new results for different property combinations are also proven. The need for these investigations has already been expressed in several articles prior to this PhD work. The theoretical limitations, which were shown as a result of these investigations, provide clear criteria for the design of refinement operators. In the analysis, as few assumptions as possible were made regarding the used description language.
The second contribution is the development of two refinement operators. The first operator supports a wide range of concept constructors and it is shown that it is complete and can be extended to a proper operator. It is the most expressive operator designed for a description language so far. The second operator uses the light-weight language EL and is weakly complete, proper, and finite. It is straightforward to extend it to an ideal operator, if required. It is the first published ideal refinement operator in description logics. While the two operators differ a lot in their technical details, they both use background knowledge efficiently.
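To make the notion of a refinement operator concrete, here is a toy downward operator over a miniature EL-style expression language. It is only a sketch: the expression encoding, the class hierarchy, and the role set are invented for illustration, and the operator makes no claim to the completeness or properness properties analysed in the thesis.

# Class expressions: "Thing", an atomic class name,
# ("some", role, filler), or ("and", e1, e2).
SUBCLASSES = {                     # assumed toy hierarchy
    "Thing": ["Person", "Place"],
    "Person": ["Scientist"],
}
ROLES = ["knows", "locatedIn"]

def refine(expr):
    """Yield downward refinements (more specific expressions) of expr."""
    if isinstance(expr, str):                 # "Thing" or an atomic class
        for sub in SUBCLASSES.get(expr, []):
            yield sub                          # specialise to a subclass
        for r in ROLES:
            yield ("and", expr, ("some", r, "Thing"))  # add a restriction
    elif expr[0] == "some":
        _, r, filler = expr
        for f in refine(filler):
            yield ("some", r, f)               # specialise the filler
    elif expr[0] == "and":
        _, a, b = expr
        for ra in refine(a):
            yield ("and", ra, b)               # specialise either conjunct
        for rb in refine(b):
            yield ("and", a, rb)

# One refinement step starting from the top concept:
for candidate in refine("Thing"):
    print(candidate)

A learning algorithm traverses the space spanned by such a generator, scoring each candidate against the examples, which is where the properties above (completeness, properness, redundancy, and so on) determine what the search can and cannot reach.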
The third contribution is the actual learning algorithms using the introduced operators. New redundancy elimination and infinity-handling techniques are introduced in these algorithms. According to the evaluation, the algorithms produce very readable solutions, while their accuracy is competitive with the state-of-the-art in machine learning. Several optimisations for achieving scalability of the introduced algorithms are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach.
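The scoring of candidate expressions during that search can likewise be sketched. Assuming a covers_one(expr, individual) predicate that decides whether an expression describes an individual (in practice delegated to a reasoner), predictive accuracy and a sampled, stochastic approximation of coverage, in the spirit of the scalability optimisations mentioned above, look roughly like this; the sample size is an assumption.

import random

def accuracy(covered, positives, negatives):
    """Fraction of correctly classified examples."""
    tp = len(covered & positives)        # covered positives
    tn = len(negatives - covered)        # rejected negatives
    return (tp + tn) / (len(positives) + len(negatives))

def stochastic_coverage(expr, individuals, covers_one, sample_size=100):
    """Estimate coverage from a random sample instead of all individuals."""
    sample = random.sample(individuals, min(sample_size, len(individuals)))
    hits = sum(1 for i in sample if covers_one(expr, i))
    return hits / len(sample)

pos, neg = {"a", "b"}, {"c"}
print(accuracy({"a", "c"}, pos, neg))    # 1 TP + 0 TN out of 3 -> 0.33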
The research contributions are evaluated on benchmark problems and in use cases. Standard statistical measurements such as cross validation and significance tests show that the approaches are very competitive. Furthermore, the ontology engineering case study provides evidence that the described algorithms can solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework. It provides the source code for all algorithms and examples as open-source and has been incorporated in other projects.
|
95 |
Accès et utilisation de documents multimédia complexes dans une bibliothèque numérique / Accessing and using complex multimedia documents in a digital library. Ly, Anh Tuan. 09 July 2013
In the context of three European projects, our research team has developed a data model and query language for digital libraries supporting identification, structuring, metadata, and the discovery and reuse of digital resources. The model is inspired by the Web and formalized as a first-order theory, certain models of which correspond to the notion of a digital library. In addition, a full translation of the model to RDF and of the query language to SPARQL has been proposed to demonstrate the feasibility of the model and its suitability for practical applications. RDF was chosen because it is a generally accepted representation language in the context of digital libraries and the Semantic Web. One of the major aims of the thesis was to design and implement a simplified form of digital library management system based on this theoretical model. To this end, we developed a prototype based on RDF and SPARQL, which uses an RDF store to facilitate the internal management of metadata. The prototype allows users to manage and query metadata of digital or non-digital resources in the system, using URIs as resource identifiers, a set of predicates to model descriptions of resources, and simple conjunctive queries to discover knowledge in the system. The prototype is implemented using Java technologies and the Google Web Toolkit framework; its architecture consists of a storage layer, a business logic layer, a service layer, and a user interface. During the thesis work, the prototype was built, tested, and debugged locally and then deployed on Google App Engine. In the future, it can be extended into a full-fledged digital library management system. Moreover, the thesis also presents our contribution to content generation by reuse. This is mostly theoretical work whose purpose is to enrich the model and query language with an important service, namely the ability to create new resources from those already stored in the system. The incorporation of this service into the implemented system is left to future work.
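The prototype's pattern of URI-identified resources, descriptive predicates, and simple conjunctive queries can be mimicked in a few lines with an in-memory RDF store. This sketch uses rdflib rather than the thesis's Java/GWT stack, and all URIs and predicates are invented for illustration.

from rdflib import Graph, Literal, Namespace, URIRef

LIB = Namespace("http://example.org/library/")  # hypothetical namespace
g = Graph()

# Register a resource and its description.
doc = URIRef("http://example.org/library/doc/42")
g.add((doc, LIB.title, Literal("Complex multimedia documents")))
g.add((doc, LIB.creator, LIB["person/author-1"]))
g.add((doc, LIB.partOf, URIRef("http://example.org/library/collection/theses")))

# A simple conjunctive query: all documents in the theses collection, with titles.
q = """
PREFIX lib: <http://example.org/library/>
SELECT ?doc ?title WHERE {
  ?doc lib:partOf <http://example.org/library/collection/theses> ;
       lib:title ?title .
}
"""
for row in g.query(q):
    print(row.doc, row.title)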
|
96 |
Traitement de requêtes SPARQL sur des données liées / SPARQL distributed query processing over linked data. Macina, Abdoul. 17 December 2018
Driven by the Semantic Web standards, an increasing number of RDF data sources are published and connected over the Web by data providers, forming a large distributed network of linked data. However, exploiting the wealth of these data sources is very challenging for data consumers, given their distribution, growing volume, and autonomy. In the Linked Data context, federation engines allow querying these distributed data sources by relying on Distributed Query Processing (DQP) techniques. Nevertheless, a naive implementation of the DQP approach may generate a tremendous number of remote requests towards the data sources and numerous intermediate results, leading to costly network communication. Furthermore, the semantics of distributed queries is often overlooked. Query expressiveness, data partitioning, and data replication are further challenges that query engines must face. To address these challenges, this thesis first proposes a SPARQL- and RDF-compliant Distributed Query Processing semantics that preserves the expressiveness of the SPARQL language. It then presents several optimization strategies for a federated query engine that transparently addresses distributed data sources while managing data partitioning, query result completeness, data replication, and query processing performance. The approach and the optimization strategies were implemented and evaluated in a federated SPARQL query engine to demonstrate their effectiveness.
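A naive flavour of DQP, together with the source-selection step that prunes irrelevant endpoints, can be sketched over the standard SPARQL 1.1 protocol as follows. The endpoint URLs are placeholders, and a real federation engine would add join ordering, result streaming, and the other strategies the thesis develops.

from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINTS = [  # hypothetical federation members
    "http://example.org/source-a/sparql",
    "http://example.org/source-b/sparql",
]

PATTERN = "?person <http://xmlns.com/foaf/0.1/name> ?name"

def ask(endpoint, pattern):
    """Source selection: does this endpoint hold any matching triple?"""
    s = SPARQLWrapper(endpoint)
    s.setQuery(f"ASK {{ {pattern} }}")
    s.setReturnFormat(JSON)
    return s.query().convert()["boolean"]

def select(endpoint, pattern):
    """Evaluate one triple pattern remotely and return its bindings."""
    s = SPARQLWrapper(endpoint)
    s.setQuery(f"SELECT * WHERE {{ {pattern} }} LIMIT 1000")
    s.setReturnFormat(JSON)
    return s.query().convert()["results"]["bindings"]

# Query only the relevant sources, then merge the bindings locally;
# multi-pattern queries would additionally join bindings on shared variables.
bindings = []
for ep in ENDPOINTS:
    if ask(ep, PATTERN):
        bindings.extend(select(ep, PATTERN))
print(len(bindings))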
|
97 |
Distributed Collaboration on Versioned Decentralized RDF Knowledge Bases. Arndt, Natanael. 30 June 2021
The aim of this thesis is to support the development of RDF knowledge bases in a distributed collaborative setup. A new methodology for distributed collaborative knowledge engineering, called Quit, is presented. It follows the premise that it is necessary to express dissent throughout a collaboration process and to provide individual workspaces for each collaborator. The approach is inspired by and based on the Git methodology for collaboration in software engineering. The state-of-the-art analysis shows that no existing system consistently transfers the Git methodology to knowledge engineering. The key features of the Quit methodology are independent workspaces for each user and a shared distributed workspace for the collaboration. Throughout the whole collaboration process, data provenance plays an important role. To support the methodology, the Quit Stack is implemented as a collection of microservices that integrate the Semantic Web data structures and standard interfaces with the distributed collaborative process. To complement distributed data authoring, appropriate methods to support the data management process are researched, in particular the creation and authoring of data as well as its publication and exploration. The application of the methodology is shown in various use cases for distributed collaboration on organizational data and on research data. Further, the implementation is quantitatively compared to related work. Finally, it can be concluded that the consistent approach followed by the Quit methodology enables a wide range of distributed Semantic Web knowledge engineering scenarios.
Table of contents:
Preface by Thomas Riechert
Preface by Cesare Pautasso
1 Introduction
2 Preliminaries
3 State of the Art
4 The Quit Methodology
5 The Quit Stack
6 Data Creation and Authoring
7 Publication and Exploration
8 Application and Evaluation
9 Conclusion and Future Work
Bibliography
Web References
List of Figures
List of Tables
List of Listings
List of Definitions and Acronyms
List of Namespace Prefixes
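The basic Quit idea of keeping an RDF dataset under Git version control can be imitated in a few lines: serialize the graph to a line-based text format, write it into a repository, and commit. This is a drastically simplified sketch of the workflow, not the Quit Store itself; the repository path, file name, and branch name are placeholders, and a configured git identity (user.name/user.email) is assumed.

import subprocess
from rdflib import Graph, Literal, Namespace

REPO = "/tmp/quit-demo"  # placeholder working directory
EX = Namespace("http://example.org/")

subprocess.run(["git", "init", REPO], check=True)

g = Graph()
g.add((EX.alice, EX.claims, Literal("The sky is blue")))
# N-Triples is line-based, which makes Git diffs and merges meaningful.
g.serialize(destination=f"{REPO}/graph.nt", format="nt")

subprocess.run(["git", "-C", REPO, "add", "graph.nt"], check=True)
subprocess.run(["git", "-C", REPO, "commit", "-m", "Add Alice's claim"],
               check=True)

# Each collaborator works in an independent branch; dissent lives in
# diverging commits until a merge reconciles the datasets.
subprocess.run(["git", "-C", REPO, "branch", "bob-workspace"], check=True)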
|
98 |
Le Linked Data à l'université : la plateforme LinkedWiki / Linked Data at university: the LinkedWiki platform. Rafes, Karima. 25 January 2019
Le Center for Data Science de l’Université Paris-Saclay a déployé une plateforme compatible avec le Linked Data en 2016. Or, les chercheurs rencontrent face à ces technologies de nombreuses difficultés. Pour surmonter celles-ci, une approche et une plateforme appelée LinkedWiki, ont été conçues et expérimentées au-dessus du cloud de l’université (IAAS) pour permettre la création d’environnements virtuels de recherche (VRE) modulaires et compatibles avec le Linked Data. Nous avons ainsi pu proposer aux chercheurs une solution pour découvrir, produire et réutiliser les données de la recherche disponibles au sein du Linked Open Data, c’est-à-dire du système global d’information en train d’émerger à l’échelle du Web. Cette expérience nous a permis de montrer que l’utilisation opérationnelle du Linked Data au sein d’une université est parfaitement envisageable avec cette approche. Cependant, certains problèmes persistent, comme (i) le respect des protocoles du Linked Data et (ii) le manque d’outils adaptés pour interroger le Linked Open Data avec SPARQL. Nous proposons des solutions à ces deux problèmes. Afin de pouvoir vérifier le respect d’un protocole SPARQL au sein du Linked Data d’une université, nous avons créé l’indicateur SPARQL Score qui évalue la conformité des services SPARQL avant leur déploiement dans le système d’information de l’université. De plus, pour aider les chercheurs à interroger le LOD, nous avons implémenté le démonstrateur SPARQLets-Finder qui démontre qu’il est possible de faciliter la conception de requêtes SPARQL à l’aide d’outils d’autocomplétion sans connaissance préalable des schémas RDF au sein du LOD. / The Center for Data Science of the University of Paris-Saclay deployed a platform compatible with Linked Data in 2016. Because researchers face many difficulties utilizing these technologies, an approach and then a platform we call LinkedWiki were designed and tested over the university’s cloud (IAAS) to enable the creation of modular virtual search environments (VREs) compatible with Linked Data. We are thus able to offer researchers a means to discover, produce and reuse the research data available within the Linked Open Data, i.e., the global information system emerging at the scale of the internet. This experience enabled us to demonstrate that the operational use of Linked Data within a university is perfectly possible with this approach. However, some problems persist, such as (i) the respect of protocols and (ii) the lack of adapted tools to interrogate the Linked Open Data with SPARQL. We propose solutions to both these problems. In order to be able to verify the respect of a SPARQL protocol within the Linked Data of a university, we have created the SPARQL Score indicator which evaluates the compliance of the SPARQL services before their deployments in a university’s information system. In addition, to help researchers interrogate the LOD, we implemented a SPARQLets-Finder, a demonstrator which shows that it is possible to facilitate the design of SPARQL queries using autocompletion tools without prior knowledge of the RDF schemas within the LOD.
|
100 |
[pt] Contribuições ao problema de busca por palavras-chave em conjuntos de dados e trajetórias semânticas baseados no Resource Description Framework / [en] Contributions to the problem of keyword search over datasets and semantic trajectories based on the Resource Description Framework. Torres Izquierdo, Yenier. 18 May 2021
Keyword search provides an easy-to-use interface for retrieving information. This thesis contributes to the problems of keyword search over schema-less datasets and over semantic trajectories based on RDF.
To address the problem of keyword search over schema-less RDF datasets, the thesis introduces an algorithm to automatically translate a user-specified keyword-based query K into a SPARQL query Q so that the answers Q returns are also answers for K. The algorithm does not rely on an RDF schema, but synthesizes SPARQL queries by exploring the similarity between the property domains and ranges and the class instance sets observed in the RDF dataset. It estimates set similarity based on set synopses, which can be efficiently precomputed in a single pass over the RDF dataset. The thesis includes two sets of experiments with an implementation of the algorithm. The first set shows that the implementation outperforms a baseline RDF keyword search tool that explores the RDF schema to synthesize the SPARQL queries, while the second set indicates that the implementation performs better than the state-of-the-art TSA+BM25 and TSA+VDP keyword search systems over RDF datasets based on the virtual documents approach. Finally, the thesis also computes the effectiveness of the proposed algorithm using a metric based on the concept of graph relevance.
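The single-pass synopsis idea can be illustrated with MinHash signatures: while streaming the triples once, maintain a running signature for each class's instance set and each property's subject set, then estimate Jaccard similarity by comparing signatures. This is a generic sketch of set synopses, not the thesis's exact synopsis structure; the signature width and the choice of which sets to summarize are assumptions.

import hashlib
from collections import defaultdict

K = 64  # assumed signature width
RDF_TYPE = "rdf:type"

def _h(seed, value):
    """One of K pseudo-independent hash functions."""
    return int(hashlib.sha1(f"{seed}:{value}".encode()).hexdigest(), 16)

signatures = defaultdict(lambda: [float("inf")] * K)

def update(sig, value):
    """Fold one set member into a running MinHash signature."""
    for seed in range(K):
        sig[seed] = min(sig[seed], _h(seed, value))

triples = [
    ("ex:b1", RDF_TYPE, "ex:Book"),
    ("ex:b1", "ex:author", "ex:a1"),
    ("ex:b2", RDF_TYPE, "ex:Book"),
    ("ex:b2", "ex:author", "ex:a2"),
]

# Single pass over the dataset: every triple updates at most one signature.
for s, p, o in triples:
    if p == RDF_TYPE:
        update(signatures[("instances", o)], s)   # class instance set
    else:
        update(signatures[("subjects", p)], s)    # property domain set

def jaccard(sig_a, sig_b):
    """Estimated Jaccard similarity of the two underlying sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / K

# E.g.: does the domain of ex:author overlap the instances of ex:Book?
print(jaccard(signatures[("subjects", "ex:author")],
              signatures[("instances", "ex:Book")]))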
The second problem addressed in this thesis is keyword search over RDF semantic trajectories. Stop-and-move semantic trajectories are segmented trajectories in which the stops and moves are semantically enriched with additional data. A query language for semantic trajectory datasets has to include selectors for stops or moves based on their enrichments, and sequence expressions that define how to match the results of the selectors against the sequence the semantic trajectory defines. The thesis first proposes a formal framework to define semantic trajectories and introduces stop-and-move sequence expressions, with well-defined syntax and semantics, which act as an expressive query language for semantic trajectories. It then describes a concrete semantic trajectory model in RDF, defines SPARQL stop-and-move sequence expressions, and discusses strategies to compile such expressions into SPARQL queries. Next, the thesis specifies user-friendly keyword search expressions over semantic trajectories, based on the use of keywords to specify stop and move queries and the adoption of terms with predefined semantics to compose sequence expressions, and shows how to compile such keyword search expressions into SPARQL queries. Finally, it provides a proof-of-concept experiment over a semantic trajectory dataset constructed with user-generated content from Flickr, combined with Wikipedia data.
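To give a flavour of compiling a stop-and-move sequence expression into SPARQL, the sketch below turns an ordered list of keyword-selected stops into a graph pattern over a hypothetical trajectory vocabulary (st:hasStop, st:order, st:annotation). Both the vocabulary and the compilation scheme are invented for illustration and differ from the thesis's concrete model.

def compile_stop_sequence(keywords):
    """Compile stops matching the keywords, in order, into one SPARQL query."""
    lines = [
        "PREFIX st: <http://example.org/trajectory#>",  # hypothetical vocabulary
        "SELECT ?traj WHERE {",
    ]
    for i, kw in enumerate(keywords):
        lines += [
            f"  ?traj st:hasStop ?s{i} .",
            f"  ?s{i} st:order ?o{i} ; st:annotation ?a{i} .",
            f'  FILTER (CONTAINS(LCASE(?a{i}), "{kw.lower()}"))',
        ]
    # Enforce the sequence: each selected stop precedes the next one.
    for i in range(len(keywords) - 1):
        lines.append(f"  FILTER (?o{i} < ?o{i + 1})")
    lines.append("}")
    return "\n".join(lines)

# A trajectory that stops at a museum and later at a restaurant:
print(compile_stop_sequence(["museum", "restaurant"]))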
|