About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Consulta a ontologias utilizando linguagem natural controlada / Querying ontologies using controlled natural language

Fabiano Ferreira Luz 31 October 2013
This research explores areas of Natural Language Processing (NLP), such as parsers, grammars, and ontologies, in the development of a model for mapping queries in controlled Portuguese into SPARQL queries. SPARQL is a query language for retrieving and manipulating data stored as RDF, which forms the basis for building ontologies. This project investigates the use of these techniques to mitigate the problem of querying ontologies with controlled natural language. The main motivation for this work is to research techniques and models that can provide better human-computer interaction; ease of interaction translates into productivity, efficiency, and convenience, among other implicit benefits. We focus on measuring the effectiveness of the proposed model and on finding a good combination of all the techniques in question.
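As a minimal sketch of the kind of mapping this abstract describes, the following combines one controlled sentence pattern with a small lexicon and uses rdflib to execute the generated SPARQL. It is illustrative only: the pattern, the lexicon, and the ex: vocabulary are hypothetical, not the thesis's actual grammar or ontology.

```python
# Toy controlled-language-to-SPARQL mapper; grammar, lexicon, and ontology
# terms are hypothetical placeholders.
import re
from rdflib import Graph

DATA = """
@prefix ex: <http://example.org/univ#> .
ex:ana a ex:Professor ; ex:teaches ex:logic .
"""

# One controlled sentence shape: "Quais <classe> <verbo> <recurso>?"
PATTERN = re.compile(r"Quais (\w+) (\w+) (\w+)\?")
# Lexicon mapping controlled-Portuguese words to ontology terms (assumed).
LEXICON = {"professores": "Professor", "ensinam": "teaches", "logica": "logic"}

def to_sparql(question: str) -> str:
    """Map one controlled-Portuguese question to a SPARQL SELECT query."""
    cls, prop, res = (LEXICON[w] for w in PATTERN.match(question).groups())
    return f"""
        PREFIX ex: <http://example.org/univ#>
        SELECT ?x WHERE {{ ?x a ex:{cls} ; ex:{prop} ex:{res} . }}
    """

g = Graph().parse(data=DATA, format="turtle")
for row in g.query(to_sparql("Quais professores ensinam logica?")):
    print(row.x)  # -> http://example.org/univ#ana
```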
12

Extraction des règles d'association dans des bases de connaissances / Rule mining in knowledge bases

Galarraga Del Prado, Luis 29 September 2016
The continuous progress of information extraction (IE) techniques has led to the construction of large general-purpose knowledge bases (KBs). These KBs contain millions of computer-readable facts about real-world entities such as people, organizations, and places. KBs are important nowadays because they allow computers to "understand" the real world. They are used in multiple applications in Information Retrieval, Query Answering, and Automatic Reasoning, among other fields. Furthermore, the plethora of information available in today's KBs allows for the discovery of frequent patterns in the data, a task known as rule mining. Such patterns or rules convey useful insights about the data and can be used in several applications ranging from data analytics and prediction to data maintenance tasks. The contribution of this thesis is twofold: first, it proposes a method to mine rules on KBs. The method relies on a mining model tailored for potentially incomplete web-extracted KBs. Second, the thesis shows the applicability of rule mining to several data-oriented tasks in KBs, namely fact prediction, schema alignment, canonicalization of (open) KBs, and prediction of completeness.
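To make the mining model concrete, here is a toy, self-contained sketch of scoring a single Horn rule over a handful of invented facts. It is not the thesis's actual mining system; the PCA-style confidence in the last step reflects the abstract's point about measures tailored to incomplete KBs.

```python
# Toy scoring of one Horn rule, wasBornIn(x, y) => livesIn(x, y), over an
# invented fact set; not the thesis's actual mining system.
FACTS = {
    ("alice", "wasBornIn", "lima"),
    ("alice", "livesIn", "lima"),
    ("bob", "wasBornIn", "paris"),
    ("bob", "livesIn", "lyon"),
    ("carol", "wasBornIn", "rome"),
}

def pairs(predicate):
    """All (subject, object) pairs asserted for a predicate."""
    return {(s, o) for s, p, o in FACTS if p == predicate}

body, head = pairs("wasBornIn"), pairs("livesIn")
support = len(body & head)        # pairs where body and head both hold
std_conf = support / len(body)    # classical confidence (closed world)

# PCA-style confidence: count only body pairs whose subject has *some*
# livesIn fact, a heuristic suited to incomplete, web-extracted KBs.
known = {s for s, _ in head}
pca_body = {(s, o) for (s, o) in body if s in known}
pca_conf = support / len(pca_body)

print(f"support={support} std-conf={std_conf:.2f} pca-conf={pca_conf:.2f}")
# -> support=1 std-conf=0.33 pca-conf=0.50
```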
13

Towards RDF normalization / Vers une normalisation RDF

Ticona Herrera, Regina Paola 06 July 2016
Over the past three decades, millions of people have been producing and sharing information on the Web. This information can be structured, semi-structured, and/or unstructured, such as blogs, comments, Web pages, and multimedia data, all of which require a formal description to support their publication and exchange on the Web. To help address this problem, the World Wide Web Consortium (W3C) introduced the RDF standard in 1999 as a data model designed to standardize the definition and use of metadata, in order to better describe and handle data semantics, thus improving interoperability and scalability and promoting the deployment of new Web applications. Currently, billions of RDF descriptions are available on the Web through Linked Open Data cloud projects (e.g., DBpedia and LinkedGeoData). Several data providers have also adopted the principles and practices of Linked Data to share, connect, enrich, and publish their information using the RDF standard, including governments (e.g., the Canadian government), universities (e.g., the Open University), and companies (e.g., the BBC and CNN). As a result, both individuals and organizations are increasingly producing huge collections of RDF descriptions and exchanging them through different serialization formats (e.g., RDF/XML, Turtle, N-Triples). However, many available RDF descriptions (i.e., graphs and serializations) are noisy in terms of structure, syntax, and semantics, and thus may cause problems when exploited (e.g., more storage, processing time, and loading time). In this study, we propose to clean RDF descriptions of redundancies and unused information, which we consider an essential and required stepping stone toward advanced RDF processing as well as the development of RDF databases and related applications (e.g., similarity computation, mapping, alignment, integration, versioning, clustering, and classification). For that purpose, we have defined a framework entitled R2NR which normalizes different RDF descriptions pertaining to the same information into one normalized representation, which can then be tuned both at the graph level and at the serialization level, depending on the target application and user requirements. We illustrate this approach with use cases (real and synthetic) that need normalization. The contributions of the thesis can be summarized as follows:
i. Producing a normalized (output) RDF representation that preserves all the information in the source (input) RDF descriptions;
ii. Eliminating redundancies and disparities in the normalized RDF descriptions, both at the logical (graph) and physical (serialization) levels;
iii. Computing an RDF serialization output adapted to the target application requirements (faster loading, better storage, etc.);
iv. Providing a mathematical formalization of the normalization process with dedicated normalization functions, operators, and rules with provable properties; and
v. Providing a prototype tool called RDF2NormRDF (desktop and online versions) to test and evaluate the approach's efficiency.
To validate our framework, the RDF2NormRDF prototype has been tested through extensive experimentation. Experimental results are satisfactory and show significant improvements over existing approaches, namely regarding loading time and file size, while preserving all the information from the original description.
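As a minimal sketch of the normalization idea (not the RDF2NormRDF tool itself), the following rdflib snippet, assuming rdflib 6 or later, parses two syntactically different descriptions of the same information, confirms they carry the same content, and emits one canonical, redundancy-free serialization.

```python
# Two different serializations of the same information, reduced to one
# canonical, redundancy-free form (requires rdflib >= 6).
from rdflib import Graph
from rdflib.compare import to_isomorphic

DOC_A = """
@prefix ex: <http://example.org/> .
ex:book ex:title "RDF" .
ex:book ex:title "RDF" .  # duplicated triple (redundancy)
ex:book ex:year 2016 .
"""

DOC_B = """
@prefix ex: <http://example.org/> .
ex:book ex:year 2016 ; ex:title "RDF" .
"""

g_a = Graph().parse(data=DOC_A, format="turtle")
g_b = Graph().parse(data=DOC_B, format="turtle")

# Parsing collapses duplicate triples; isomorphism comparison confirms both
# descriptions preserve exactly the same information.
assert to_isomorphic(g_a) == to_isomorphic(g_b)

# One possible canonical output: lexicographically sorted N-Triples.
lines = [l for l in g_a.serialize(format="nt").splitlines() if l]
print("\n".join(sorted(lines)))
```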
14

Přizpůsobitelný prohlížeč pro Linked Data / Customizable Linked Data Browser

Klíma, Karel January 2015
The aim of this thesis is to identify key requirements for exploring Linked Data and to design and implement a web application that serves as a Linked Data browser, including search and customization features. In contrast to existing approaches, it enables users to provide templates that define a visual style for presenting particular types of Linked Data resources. Alternatively, the application can provide other means of altering the presentation of data or the appearance of the application.
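One way to realize the template mechanism described above is to key rendering on a resource's rdf:type. The sketch below is a hypothetical miniature, not the thesis's actual API: users register a template per type, and a generic fallback view handles everything else.

```python
# Type-driven templates for a Linked Data browser; names are illustrative.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:turing a ex:Person ; ex:name "Alan Turing" .
""", format="turtle")

TEMPLATES = {}  # rdf:type -> render function

def template(rdf_type):
    """Decorator registering a user-supplied template for a resource type."""
    def register(fn):
        TEMPLATES[rdf_type] = fn
        return fn
    return register

@template(EX.Person)
def person_card(graph, res):
    name = graph.value(res, EX.name)
    return f"<div class='person'><h1>{name}</h1></div>"

def render(graph, res):
    """Pick a registered template by rdf:type, else a generic fallback."""
    for t in graph.objects(res, RDF.type):
        if t in TEMPLATES:
            return TEMPLATES[t](graph, res)
    return f"<pre>{res}</pre>"

print(render(g, EX.turing))  # -> <div class='person'><h1>Alan Turing</h1></div>
```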
15

Δημιουργία μηχανισμού επερώτησης και διατήρηση κατανεμημένου αποθέματος εγγράφων RDF στον παγκόσμιο ιστό / Development of a query mechanism and maintenance of a distributed repository of RDF documents on the World Wide Web

Σολωμού, Γεωργία 12 February 2008
RDF (Resource Description Framework), a W3C recommendation, is a data model for representing information on the World Wide Web and constitutes the foundation of a set of technologies for modeling distributed knowledge on the Semantic Web. This thesis includes a study of the RDF technology and of its semantic extension, RDF Schema. A comparative evaluation was also made of existing frameworks for the storage and management of RDF data, assessing their behavior in the case of distributed repositories. Moreover, the possibilities that SPARQL, an RDF query language which was soon to become a W3C recommendation, offers to such mechanisms were evaluated. Finally, two very important characteristics of this technology were investigated in the setting of distributed repositories: the ability to combine data and draw conclusions (inferencing), and reification. In the last part, and based on this study, an application was developed in Java which allows connecting to one or more remote or local RDF repositories and provides the necessary mechanism for sending queries to them. This application successfully combines distributed knowledge and performs inferencing, a fundamental objective in the field of the Semantic Web. To evaluate the application's characteristics, simple examples were used that confirm its proper functioning and reveal the breadth of its capabilities. Scalability and reliability were the main goals during the application's development, keeping in mind that distributed RDF repositories are more complicated and have special characteristics.
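The core behavior can be sketched compactly, here in Python with rdflib and owlrl rather than the thesis's Java implementation: facts from two separate repositories are merged, and RDFS inferencing derives a conclusion that neither repository holds on its own. The example data is invented.

```python
# Merging two RDF repositories and inferring across them (rdflib + owlrl).
from rdflib import Graph
import owlrl

REPO_1 = """
@prefix ex: <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Dog rdfs:subClassOf ex:Animal .
"""

REPO_2 = """
@prefix ex: <http://example.org/> .
ex:rex a ex:Dog .
"""

merged = Graph()
merged.parse(data=REPO_1, format="turtle")
merged.parse(data=REPO_2, format="turtle")

# Compute the RDFS deductive closure over the combined knowledge.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(merged)

# Neither repository states that rex is an Animal; inference derives it.
result = merged.query(
    "ASK { <http://example.org/rex> a <http://example.org/Animal> }")
print(result.askAnswer)  # -> True
```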
16

RDF vocabulary : Translation of policies with RDF / RDF vokabulär : Översättning av policy med RDF

Garcia Bernabeu, Sergio, Bergdahl, Lukas January 2023
Throughout this thesis, we have worked on translating policies into RDF formats and testing RDF vocabularies. Our goal is to create policies that can be applied to future industries within a circular economy. While Onto-Deside is the primary source of motivation for this work, we do not focus on it in this thesis. Instead, we focus on experimenting with policies and their potential translation into an RDF format. We also tested RDF vocabularies and made necessary edits. Cybersecurity is a significant concern, and our policies were constructed with security in mind. Our documentation includes our struggles, failures, and successes in realizing these policies into RDF and the changes made between each iteration. Despite the time constraints, we achieved many iterations of policies and translations and compiled our conclusions about the translations and RDF vocabularies.
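As a hedged illustration of what translating a policy into RDF can look like: the abstract names no specific vocabulary, so the sketch below assumes W3C's ODRL vocabulary and an invented circular-economy scenario, then checks the resulting policy with a SPARQL query.

```python
# One access policy expressed in RDF using the (assumed) ODRL vocabulary.
from rdflib import Graph

POLICY = """
@prefix odrl: <http://www.w3.org/ns/odrl/2/> .
@prefix ex:   <http://example.org/> .

ex:policy1 a odrl:Policy ;
    odrl:permission [
        odrl:target   ex:materialsDataset ;
        odrl:action   odrl:read ;
        odrl:assignee ex:recyclingPartner
    ] .
"""

g = Graph().parse(data=POLICY, format="turtle")

# Sanity-check the translation: who is permitted to read the dataset?
QUERY = """
PREFIX odrl: <http://www.w3.org/ns/odrl/2/>
SELECT ?who WHERE {
    ?policy odrl:permission ?perm .
    ?perm odrl:action odrl:read ; odrl:assignee ?who .
}
"""
for row in g.query(QUERY):
    print(row.who)  # -> http://example.org/recyclingPartner
```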
17

Nástroj pro práci s NDL / Tools for NDL Elaboration

Myazina, Elena January 2014
Title: Tools for NDL Elaboration Author: Bc. Elena Myazina Department: Department of Software Engineering Supervisor of the master thesis: doc. Ing. Karel Richta, CSc., Dept. of Software Engineering, Faculty of Mathematics and Physics, Charles University in Prague Abstract: The current state of network research enables end users to create custom connections (lightpaths) for a given application, as well as optical private networks (OPNs). Such adjustments require clear communication between the requesting application and the desired network. The Network Description Language (NDL), which is based on the Resource Description Framework (RDF), is used to describe such optical networks. This thesis analyzes NDL and tools for working with it. The current status of a network can be loaded from its NDL description and displayed as a graphical representation, which makes it possible to find paths in the network or to edit it; the result can then be transformed back into NDL format. Keywords: NDL, RDF, optical technology.
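A small sketch of the path-finding task on an NDL-style description: parse the RDF, then breadth-first search along connectedTo links. The ndl: namespace follows the UvA NDL schema and, like the toy topology, is an assumption rather than the thesis's actual data.

```python
# Path finding over an NDL-style description (assumed namespace and topology).
from collections import deque
from rdflib import Graph, Namespace

NDL = Namespace("http://www.science.uva.nl/research/sne/ndl#")
EX = Namespace("http://example.org/net#")

DESC = """
@prefix ndl: <http://www.science.uva.nl/research/sne/ndl#> .
@prefix ex:  <http://example.org/net#> .
ex:if1 ndl:connectedTo ex:if2 .
ex:if2 ndl:connectedTo ex:if3 .
"""

g = Graph().parse(data=DESC, format="turtle")

def find_path(graph, start, goal):
    """Breadth-first search along ndl:connectedTo links."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.objects(path[-1], NDL.connectedTo):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path(g, EX.if1, EX.if3))  # -> [ex:if1, ex:if2, ex:if3]
```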
19

A Framework Supporting Development of Ontology-Based Web Applications

Tankashala, Shireesha 17 December 2010
We have developed a framework to support the development of ontology-based Web applications. The framework is composed of a tree-view browser, an attribute selector, an ontology persistence module, an ontology query module, and a utility class that allows users to plug in their own customized functions. The framework supports the SPARQL-DL query language. Its purpose is to shield users from the complexity of ontologies and thereby ease the development of ontology-based Web applications. Given a high-quality ontology, end users can employ this framework to develop Web applications in many domains: for example, a professor can create highly customized study guides, a domain expert can generate Web forms for data collection, and a geologist can create a Google Maps mashup. We also report three ontology-based Web applications, in education, meteorology, and geographic information systems.
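For instance, a tree-view browser can be driven by the ontology alone. The minimal sketch below uses rdflib and a toy RDFS hierarchy in place of the framework's SPARQL-DL machinery, recovering the class tree from rdfs:subClassOf triples.

```python
# Printing a class tree from rdfs:subClassOf triples (toy ontology).
from rdflib import Graph, Namespace, RDFS

EX = Namespace("http://example.org/edu#")

ONTO = """
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/edu#> .
ex:Course  rdfs:subClassOf ex:Resource .
ex:Quiz    rdfs:subClassOf ex:Course .
ex:Lecture rdfs:subClassOf ex:Course .
"""

g = Graph().parse(data=ONTO, format="turtle")

def print_tree(graph, root, depth=0):
    """Recursively print the subclasses of `root` as an indented tree."""
    print("  " * depth + root.split("#")[-1])
    for child in sorted(graph.subjects(RDFS.subClassOf, root)):
        print_tree(graph, child, depth + 1)

print_tree(g, EX.Resource)
# Resource
#   Course
#     Lecture
#     Quiz
```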
20

An ontology-based approach to Automatic Generation of GUI for Data Entry

Liu, Fangfang 20 December 2009
This thesis reports an ontology-based approach to the automatic generation of highly tailored GUI components that let end users make customized data requests. Using this GUI generator, a domain expert without any programming skills can browse the data schema through the ontology file of his or her own field, choose attribute fields according to business needs, and build a highly customized GUI for end users' data-request input. The interface for the domain expert is a tree-view structure that shows not only the domain taxonomy categories but also the relationships between classes. By clicking the checkbox associated with each class, the expert indicates his or her choice of the needed information. These choices are stored in a metadata document in XML. From the viewpoint of programmers, the metadata contains no ambiguity, since every class in an ontology is unique. The metadata can be put to various uses; here it drives GUI generation. Since every class and every attribute has been formally specified in the ontology, generating the GUI is automatic. This approach has been applied to a use-case scenario in the meteorological and oceanographic (METOC) area, and the resulting features of the prototype are reported in this thesis.
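A condensed sketch of that pipeline, with illustrative names (the thesis targets richer OWL ontologies in the METOC domain): properties whose rdfs:domain is the expert's chosen class are first recorded as unambiguous XML metadata, then turned into form fields.

```python
# From ontology to XML metadata to form fields (illustrative names).
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, RDFS

EX = Namespace("http://example.org/metoc#")

ONTO = """
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/metoc#> .
ex:windSpeed   rdfs:domain ex:Observation .
ex:temperature rdfs:domain ex:Observation .
"""

g = Graph().parse(data=ONTO, format="turtle")
chosen = EX.Observation  # the class the domain expert ticked in the tree view

# Step 1: record the selection as unambiguous XML metadata.
meta = ET.Element("form", {"class": str(chosen)})
for prop in sorted(g.subjects(RDFS.domain, chosen)):
    ET.SubElement(meta, "field", {"property": str(prop)})

# Step 2: generate the GUI (a plain HTML fragment here) from the metadata.
for field in meta:
    name = field.get("property").split("#")[-1]
    print(f'<label>{name}</label> <input name="{name}">')
```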
