  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Frameworks for Personalized Privacy and Privacy Auditing

Samavi, M. Reza 13 August 2013
As individuals increasingly benefit from the use of online services, there are growing concerns about the treatment of personal information. Society’s ongoing response to these concerns often gives rise to privacy policies expressed in legislation and regulation. These policies are written in natural language (or legalese) as privacy agreements that users must accept, or presented as a set of privacy settings and options that users must opt in or out of in order to receive the service they want. But comprehending privacy policies and settings is becoming increasingly challenging as agreements grow longer and the number of privacy options multiplies. Additionally, organizations face the challenge of assuring compliance with policies that govern the collection, use, and sharing of personal data. This thesis proposes frameworks for personalized privacy and privacy auditing to address these two problems. We focus our investigation of the comprehensibility issues of personalized privacy on the concrete application domain of personal health data as recorded in systems known as personal health records (PHR). We develop the Privacy Goals and Settings Mediator (PGSM) model, based on i* multi-agent modelling techniques, as a way to help users comprehend privacy settings when employing multiple services over a web platform. Additionally, the PGSM model helps privacy experts contribute their privacy knowledge to users’ privacy decision-making. To address the privacy auditing problem, we propose two lightweight ontologies, L2TAP and SCIP, designed for deployment as Linked Data, an emerging standard for representing and publishing web data. L2TAP (Linked Data Log to Transparency, Accountability and Privacy) provides flexible and extensible provenance-enabled logging of privacy events. SCIP (Simple Contextual Integrity Privacy) provides a simple target for mapping the key concepts of Contextual Integrity and enables SPARQL query-based solutions for two important privacy processes: compliance checking and obligation derivation. This thesis validates the premise of PHR users’ privacy concerns, attitudes and behaviour through an empirical study. The usefulness of the PGSM model for privacy experts is evaluated through interviews with experts. Finally, the scalability and practical benefits of L2TAP+SCIP for log-based privacy auditing are validated experimentally.
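A minimal sketch of the query-based compliance checking that L2TAP+SCIP enables, in Python with rdflib; the ex: vocabulary below is hypothetical and only stands in for the actual L2TAP/SCIP terms:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/privacy#")  # hypothetical vocabulary

log = Graph()
log.bind("ex", EX)

# A consent event allowing the Research purpose, and two access events.
log.add((EX.consent1, RDF.type, EX.ConsentEvent))
log.add((EX.consent1, EX.allowsPurpose, EX.Research))
log.add((EX.access1, RDF.type, EX.AccessEvent))
log.add((EX.access1, EX.hasPurpose, EX.Research))
log.add((EX.access2, RDF.type, EX.AccessEvent))
log.add((EX.access2, EX.hasPurpose, EX.Marketing))

# Compliance check: flag accesses whose purpose no consent event covers.
violations = log.query("""
    PREFIX ex: <http://example.org/privacy#>
    SELECT ?access ?purpose WHERE {
        ?access a ex:AccessEvent ; ex:hasPurpose ?purpose .
        FILTER NOT EXISTS {
            ?consent a ex:ConsentEvent ; ex:allowsPurpose ?purpose .
        }
    }
""")
for row in violations:
    print(f"non-compliant: {row.access} used purpose {row.purpose}")
```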
82

Adaptable metadata creation for the Web of Data

Enoksson, Fredrik January 2014
One approach to managing collections is to create data about the things in them. This descriptive data is called metadata; in this thesis the term is used as a collective noun, i.e. no plural form is used. A library is a typical example of an organization that uses metadata to manage a collection of books. The metadata about a book describes certain attributes of it, for example who the author is. Metadata also makes it possible for a person to judge whether a book is interesting without having to deal with the book itself. The metadata of the things in a collection is a representation of the collection that is easier to deal with than the collection itself. Nowadays metadata is often managed in computer-based systems that enable searching and the sorting of search results according to different principles. Metadata can be created both by computers and by humans. This thesis deals with certain aspects of the human activity of creating metadata and includes an explorative study of this activity. The growing amount of publicly produced information must also be easily accessible, so the situation where metadata is part of the Semantic Web (also referred to as the Web of Data or Linked Data) is an important part of this thesis. With the Web of Data, metadata records that previously lived in isolation from each other can be linked together over the web. This will probably change not only what kind of metadata is created but also how it is created. This thesis describes the construction and use of a framework called Annotation Profiles, a set of artifacts developed to enable a metadata creation environment that is adaptable with respect to what metadata can be created. The main artifact is the Annotation Profile Model (APM), a model that holds enough information for a software application to generate a customized metadata editor from it. An instance of this model is called an annotation profile and can be seen as a configuration for metadata editors; what metadata can be edited in an editor can thus be changed without modifying the application's code. Two code libraries that implement the APM have been developed and evaluated, both internally within the research group where they were developed and externally through interviews with software developers who have used one of them. Another artifact presented is a protocol for remotely updating RDF metadata when it is edited through a metadata editor. It is also described how the APM opens up possibilities for end-user development, which is one of the avenues of pursuit in future research related to the APM.
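A minimal sketch of the annotation-profile idea, in Python with rdflib: a declarative configuration determines which metadata fields an editor offers, so what can be edited changes without modifying application code. The field structure below is invented for illustration and is not the actual APM:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

# A profile is pure configuration: which fields the generated editor shows
# and which RDF property each field maps to.
profile = [
    {"label": "Title",    "property": DCTERMS.title},
    {"label": "Creator",  "property": DCTERMS.creator},
    {"label": "Language", "property": DCTERMS.language},
]

def editor_to_rdf(resource_uri, form_values):
    """Turn values entered in a profile-generated form into RDF triples."""
    g = Graph()
    subject = URIRef(resource_uri)
    for field in profile:
        value = form_values.get(field["label"])
        if value is not None:
            g.add((subject, field["property"], Literal(value)))
    return g

g = editor_to_rdf("http://example.org/book/1",
                  {"Title": "Adaptable metadata creation",
                   "Creator": "F. Enoksson"})
print(g.serialize(format="turtle"))
```

Adding or removing a field is an edit to `profile`, not to `editor_to_rdf`, which is the adaptability the APM aims for.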
83

Supporting loose forms of collaboration : Using Linked Data to realize an architecture for collective knowledge construction

Ebner, Hannes January 2014
This thesis is driven by the motivation to explore a way of working collaboratively that closely reflects the World Wide Web (WWW), more specifically the potential of the Web architecture built on Semantic Web technologies and Linked Data. The goal is to describe a generic approach and architecture that satisfies the needs for loose collaboration and collective knowledge construction, as exemplified by the applications described in this thesis. The thesis focuses on a contribution-centric architecture that allows for flexible applications supporting loose forms of collaboration. The first research question deals with how Web-based collective knowledge construction can be supported. The second research question explores the characteristics of collective knowledge construction with respect to the Open World Assumption (OWA). The OWA implies that complete knowledge about a subject cannot be assumed at any time, which is one of the most fundamental properties of the WWW. The third research question investigates how Semantic Web technologies can be used to support such a contribution-centric architecture. The thesis and its underlying publications are of a technical character and are always grounded in theoretical models and considerations that have led to functional implementations. The research evolved through iterative development processes and was explicitly directed at building applications that can be used in collaborative settings and that are based on standardized Web technologies. One of the main outcomes, an information model, was developed together with such an application and provides a number of novel approaches in the context in which it was designed. The validity of the presented research is supported by evaluations from different perspectives: a list of implemented applications and showcases, results from structured interviews investigating the suitability for various resource annotation processes, and scalability aspects. The thesis concludes that it is ultimately up to the application how "loose" the collaboration should be and to what extent the OWA is incorporated. The presented architecture provides a toolkit to support the development of loosely collaborative applications. The showcased applications allow the construction of collaborative conceptual models and the collaborative annotation of educational resources; they show the potential of the technology stack used and of the contribution-centric architecture that sits on top of it.
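A small sketch of the Open World Assumption in a contribution-centric setting, in Python with rdflib; the vocabulary is illustrative:

```python
from rdflib import Graph

alice = Graph().parse(data="""
    @prefix ex: <http://example.org/> .
    ex:topic1 ex:label "Linked Data" .
""", format="turtle")

bob = Graph().parse(data="""
    @prefix ex: <http://example.org/> .
    ex:topic1 ex:relatedTo ex:topic2 .
""", format="turtle")

# Contributions merge by simple graph union; no contributor needs
# complete knowledge of the subject at any point.
merged = alice + bob

# Under the OWA, a failed query means "unknown", not "false".
has_author = merged.query(
    "ASK { <http://example.org/topic1> <http://example.org/author> ?a }")
print(has_author.askAnswer)  # False: no statement yet, not "has no author"
```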
84

Μελέτη τεχνολογιών σημασιολογικού ιστού και ανάπτυξη συστήματος διαχείρισης πολιτισμικών δεδομένων / A study of Semantic Web technologies and the development of a cultural data management system

Μερτής, Αριστοτέλης 07 April 2011
The digital age has penetrated every aspect of human activity and is transforming it in a revolutionary, previously unseen way. Cultural heritage, a domain of special sensitivity for Greece, could not remain unaffected by this wave. The digital age has transformed cultural heritage both in how culture is created and in how it is preserved: where once only physical objects such as paintings, books, and statues were collected, digital representations of cultural objects are now preserved as well. Through modern information and communication technologies these digital assets can be created, authenticated, and retrieved. Cultural heritage has attracted great interest in recent years: the scientific community is investigating technologies for integrated access to cultural heritage collections, while heritage organizations are increasingly willing to cooperate and to provide the best possible access to their collections through personalized presentation and navigation. The Semantic Web stands at the center of this effort. The Semantic Web is the next stage of today's Web, in which data are annotated with metadata that allow applications to offer better search services to the user. This thesis examines the use of Semantic Web technologies to improve access to cultural data; its goals are an in-depth study of these technologies, the development of a novel application, and the demonstration of the resulting advantages. The second chapter presents how the Semantic Web solves the problem of syntactic interoperability, covering XML and the technologies around it. The third and fourth chapters address semantic interoperability: the third studies the RDF data model, its various syntaxes, and how RDF graphs are queried with SPARQL; the fourth presents the concept of an ontology, surveys ontology description languages, and studies OWL in depth. The fifth chapter presents the thesauri and ontologies most widely used by cultural heritage organizations, including the SKOS ontology and methods for migrating legacy thesauri to the Semantic Web through SKOS, and finally CIDOC-CRM as a solution for integrating thesauri from a variety of domains. The sixth chapter reviews selected projects from recent years that apply Semantic Web technologies to culture and cultural heritage. Finally, the seventh chapter presents an application for managing cultural events, introduces the Linked Data initiative, and shows how the application becomes part of the Semantic Web through it.
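The migration of legacy thesauri via SKOS discussed in chapter five can be pictured with a minimal sketch along the following lines (Python with rdflib; the concepts and labels are invented for illustration):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/thesaurus/")
g = Graph()
g.bind("skos", SKOS)

# A legacy thesaurus entry becomes a SKOS concept with multilingual
# labels and explicit broader/narrower links.
g.add((EX.sculpture, RDF.type, SKOS.Concept))
g.add((EX.sculpture, SKOS.prefLabel, Literal("sculpture", lang="en")))
g.add((EX.sculpture, SKOS.prefLabel, Literal("γλυπτική", lang="el")))
g.add((EX.sculpture, SKOS.broader, EX.visualArts))
g.add((EX.visualArts, RDF.type, SKOS.Concept))
g.add((EX.visualArts, SKOS.narrower, EX.sculpture))

print(g.serialize(format="turtle"))
```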
85

A framework to support developers in the integration and application of linked and open data

Heuss, Timm January 2016
In recent years, the number of freely available Linked and Open Data datasets has multiplied into the tens of thousands. The number of applications taking advantage of them, however, has not kept pace. Thus, large portions of potentially valuable data remain unexploited and inaccessible to lay users, making the upfront investment in releasing data in the first place hard to justify. The lack of applications needs to be addressed in order not to undermine the efforts put into Linked and Open Data. In existing research there are strong indicators that the dearth of applications is due to a lack of pragmatic, working architectures that support these applications and guide developers. In this thesis, a new architecture for the integration and application of Linked and Open Data is presented. Fundamental design decisions are backed by two studies. Firstly, characteristic properties are identified from real-world Linked and Open Data samples; a key finding is that large amounts of structured data exhibit tabular structures, lack clear licensing, and involve multiple different file formats. Secondly, following on from that study, storage choices are compared in relevant query scenarios, covering the de-facto standard storage choice in this domain, triple stores, as well as relational and NoSQL approaches. The results show significant performance deficiencies of some technologies in certain scenarios. Consequently, when integrating Linked and Open Data in scenarios with application-specific entities, the first choice of storage is relational databases. Combining these findings with related best practices from existing research, a prototype framework is implemented using Java 8 and Hibernate. As a proof of concept it is employed in an existing Linked and Open Data integration project, showing that a best-practice architectural component can be introduced successfully while the development effort for specific program code is simplified. The present work thus provides an important foundation for the development of semantic applications based on Linked and Open Data and potentially leads to a broader adoption of such applications.
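The storage conclusion can be illustrated with a minimal sketch of the pattern it recommends: flattening RDF descriptions into an application-specific relational table and querying it with plain SQL. The schema and data below are invented; Python with rdflib and sqlite3 stands in for the thesis's Java 8/Hibernate stack:

```python
import sqlite3
from rdflib import Graph, URIRef

# Tabular-looking Linked Data, invented for the example.
g = Graph().parse(data="""
    @prefix ex: <http://example.org/> .
    ex:item1 ex:title "Open dataset A" ; ex:format "CSV" .
    ex:item2 ex:title "Open dataset B" ; ex:format "JSON" .
""", format="turtle")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (uri TEXT PRIMARY KEY, title TEXT, format TEXT)")

# Flatten each resource into one application-specific row.
for s in set(g.subjects()):
    title = g.value(s, URIRef("http://example.org/title"))
    fmt = g.value(s, URIRef("http://example.org/format"))
    conn.execute("INSERT INTO item VALUES (?, ?, ?)",
                 (str(s), str(title), str(fmt)))

# Application queries then run over plain SQL rather than a triple store.
for (title,) in conn.execute("SELECT title FROM item WHERE format = 'CSV'"):
    print(title)
```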
86

Évaluation de la véracité des données : améliorer la découverte de la vérité en utilisant des connaissances a priori / Data veracity assessment: enhancing truth discovery using a priori knowledge

Beretta, Valentina 30 October 2018
The notion of data veracity is attracting increasing attention due to the problems of misinformation and the proliferation of fake news on the Web. With more and more information published online, it is becoming essential to develop models that automatically evaluate information veracity. Indeed, this evaluation is very difficult even for humans, who are affected by confirmation bias, which prevents them from objectively assessing the reliability of information; moreover, the sheer amount of information available nowadays makes the task practically impossible without substantial computational power and automated methods. In this thesis we focus on truth discovery models. These approaches address the data veracity problem when conflicting values about the same properties of real-world entities are provided by multiple sources, and they aim to identify the true claims among the conflicting ones. This step is crucial in a knowledge extraction process, for example when building high-quality knowledge bases on which later processing (decision support, recommendation, reasoning, etc.) can rely. More precisely, these are unsupervised models based on the rationale that true information is mainly provided by reliable sources and that reliable sources provide true information. Existing approaches have so far ignored the a priori knowledge of a domain. The main contribution of this thesis is to show how knowledge models (domain ontologies) can be exploited to improve truth discovery. Two aspects of ontologies are considered. First, we explore the semantic dependencies that may exist among different values, i.e. the ordering of values through conceptual relationships: two different values are not necessarily conflicting, since they may represent the same concept at different levels of detail. To integrate this kind of knowledge into existing approaches, we rely on the mathematical models of partial orders. Second, we consider recurrent patterns (modelled as association rules) that can be derived from ontologies and existing knowledge bases. This additional information reinforces the confidence in certain values when certain recurrent patterns are observed. Experiments conducted on both synthetic and real-world datasets show that a priori knowledge enhances existing models and paves the way towards a more reliable information world. Each approach is validated on different datasets, which are made freely available to the community, as is the source code for both approaches.
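The first contribution, ordering values through a partial order so that different levels of detail do not count as conflicts, can be sketched minimally as follows (plain Python; the hierarchy is invented):

```python
# A partial order derived from a domain ontology: each value maps to its
# immediate generalization.
broader = {
    "Paris": "France",
    "Lyon": "France",
    "France": "Europe",
}

def generalizations(value):
    """All ancestors of a value under the partial order, value included."""
    chain = [value]
    while value in broader:
        value = broader[value]
        chain.append(value)
    return chain

def conflicting(a, b):
    """Two claims conflict only if neither generalizes the other."""
    return a not in generalizations(b) and b not in generalizations(a)

print(conflicting("Paris", "France"))  # False: same fact, coarser detail
print(conflicting("Paris", "Lyon"))    # True: genuinely contradictory
```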
87

Everything you always wanted to know about blank nodes (but were afraid to ask)

Hogan, Aidan, Arenas, Marcelo, Mallea, Alejandro, Polleres, Axel 06 May 2014
In this paper we thoroughly cover the issue of blank nodes, which are defined in RDF as "existential variables". We first introduce the theoretical precedent for existential blank nodes from first-order logic and incomplete information in database theory. We then cover the different (and sometimes incompatible) treatment of blank nodes across the W3C stack of RDF-related standards. We present an empirical survey of the blank nodes present in a large sample of RDF data published on the Web (the BTC-2012 dataset), where we find that 25.7% of unique RDF terms are blank nodes, that 44.9% of documents and 66.2% of domains feature use of at least one blank node, and that, aside from one Linked Data domain whose RDF data contains many "blank node cycles", the vast majority of blank nodes form tree structures over which simple entailment is efficient to compute. With respect to the RDF merge of the full data, we show that 6.1% of blank nodes are redundant under simple entailment. The vast majority of non-lean cases are isomorphisms resulting from multiple blank nodes with no discriminating information being given within an RDF document, or from documents being duplicated in multiple Web locations. Although simple entailment is NP-complete and leanness checking is coNP-complete, in computing this latter result we demonstrate that, in practice, real-world RDF graphs are sufficiently "rich" in ground information for problematic cases to be avoided by non-naive algorithms.
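The existential reading of blank nodes, and the isomorphism-based redundancy the survey measures, can be seen in a small rdflib sketch:

```python
from rdflib import Graph
from rdflib.compare import isomorphic

g1 = Graph().parse(data="""
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    _:a foaf:knows _:b .
""", format="turtle")

g2 = Graph().parse(data="""
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    _:x foaf:knows _:y .
""", format="turtle")

# Both graphs say only "someone knows someone": same meaning, different
# blank-node labels, so they are isomorphic.
print(isomorphic(g1, g2))  # True

# Merging them yields two indistinguishable existential claims, one of
# which is redundant under simple entailment (the merge is non-lean).
merged = g1 + g2
print(len(merged))  # 2 triples, yet equivalent to either input graph
```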
88

Uma abordagem para publicação de visões RDF de dados relacionais / An approach to publishing RDF views of relational data

Teixeira Neto, Luis Eufrasio January 2014
TEIXEIRA NETO, Luis Eufrasio. Uma abordagem para publicação de visões RDF de dados relacionais. 2014. 97 f. Dissertação (Mestrado em Ciência da Computação) - Universidade Federal do Ceará, Fortaleza-CE, 2014. / The Linked Data initiative brought new opportunities for building the next generation of Web applications. However, the full potential of Linked Data depends on how easy it is to transform data stored in conventional relational databases into RDF triples. Recently, the W3C RDB2RDF Working Group proposed a standard mapping language, called R2RML, to specify customized mappings between relational schemas and target RDF vocabularies. However, generating customized R2RML mappings is not an easy task. It is therefore necessary to define: (a) a solution that maps concepts from a relational schema to terms of an RDF schema; (b) a process to support the publication of relational data as RDF; and (c) a tool that implements this process. Correspondence assertions are proposed to formalize the mappings between relational schemas and RDF schemas. Views are created to publish data from a database in a new structure or schema. Defining RDF views over relational data makes it possible to expose the data in terms of an OWL ontology structure without having to change the database schema. In this work, we propose a three-tier architecture (database, SQL views, and RDF views) in which the SQL-view layer maps the database concepts into RDF terms. This intermediate layer simplifies the generation of R2RML mappings and prevents changes in the data layer from propagating into the R2RML mappings. Additionally, we define a three-step process to generate the RDF views of relational data. First, the user defines the schema of the relational database and the target OWL ontology, and creates correspondence assertions that formally specify the relational database in terms of the target ontology; from these assertions, an exported ontology is generated automatically. The second step automatically produces the SQL views that perform the mapping defined by the assertions, together with an R2RML mapping between these views and the exported ontology. Finally, in the third step, the RDF views are published in a SPARQL endpoint. This dissertation describes the formalization of the correspondence assertions, the three-tier architecture, the steps of the publishing process, the required algorithms, a tool that supports the entire process, and a case study that validates the results obtained.
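A hedged sketch of the kind of R2RML mapping the intermediate SQL-view layer is meant to simplify: the logical table is one of the generated SQL views rather than a base table. The view and vocabulary names below are invented:

```python
from rdflib import Graph

r2rml = """
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix ex: <http://example.org/onto#> .

<#PersonMap> a rr:TriplesMap ;
    rr:logicalTable [ rr:tableName "V_PERSON" ] ;  # a generated SQL view
    rr:subjectMap [
        rr:template "http://example.org/person/{ID}" ;
        rr:class ex:Person
    ] ;
    rr:predicateObjectMap [
        rr:predicate ex:name ;
        rr:objectMap [ rr:column "NAME" ]
    ] .
"""

# The mapping itself is RDF, so it can be validated and queried like any graph.
g = Graph().parse(data=r2rml, format="turtle")
print(len(g), "triples in the mapping document")
```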
89

Um ambiente para processamento de consultas federadas em Linked Data Mashups / An environment for federated query processing in Linked Data Mashups

Magalhães, Regis Pires January 2012
MAGALHÃES, Regis Pires. Um ambiente para processamento de consultas federadas em linked data Mashups. 2012. 117 f. Dissertação (Mestrado em Ciência da Computação) - Universidade Federal do Ceará, Fortaleza-CE, 2012. / Semantic Web technologies such as the RDF model, URIs, and the SPARQL query language can reduce the complexity of data integration by making use of properly established and described links between sources. However, the difficulty of formulating distributed queries has been an obstacle to harnessing the potential of these technologies, owing to the autonomy, distribution, and heterogeneous vocabularies of data sources. This scenario demands effective mechanisms for integrating data on Linked Data. Linked Data Mashups allow users to query and integrate structured and linked data on the web. This work proposes two Linked Data Mashup architectures: one based on mediators and the other based on Linked Data Mashup Services (LIDMS). A module for the efficient execution of federated query plans over Linked Data has been developed as a component common to both proposed architectures, and its feasibility has been demonstrated through experiments. Furthermore, a Web environment for executing LIDMS has also been defined and implemented as a contribution of this work.
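The kind of federated query plan such an execution module evaluates can be illustrated with SPARQL 1.1's SERVICE keyword, which ships a sub-pattern to a remote endpoint. A minimal sketch with rdflib as the local coordinator follows; rdflib's basic SERVICE support, network access, and the DBpedia endpoint are all assumptions of the example:

```python
from rdflib import Graph

# An empty local graph acts as the coordinator; the SERVICE clause
# delegates the enclosed pattern to the remote endpoint, as a mashup's
# federated query plan would.
federated = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label WHERE {
  SERVICE <https://dbpedia.org/sparql> {
    <http://dbpedia.org/resource/Fortaleza> rdfs:label ?label .
    FILTER (lang(?label) = "en")
  }
}
"""

for row in Graph().query(federated):
    print(row.label)
```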
90

Um Modelo de Apresentação e Navegação de Linked Data para o Usuário Final / A model for presentation of and navigation over Linked Data for the end user

Rocha, André Luiz Carlomagno 31 March 2014
Linked Data enables the linking of data from different sources to create a single global data space. Data published according to the Linked Data principles has explicitly defined meaning and can be handled and processed by machines; people, too, can benefit directly from the explicit semantics contained in the structure of the data. This work addresses the lack of methods, processes, and models concerned with the presentation of and navigation over structured data that follows the Linked Data principles. The research focuses on the ordinary user, one without experience of the techniques surrounding the Semantic Web. To tackle this problem, the work pursues three lines of investigation: strategies for presenting and navigating structured data on the Semantic Web; the modelling of hypertext systems; and the retrieval of structured data embedded in Web pages. The contributions of this research include: (i) a presentation and navigation model for Linked Data, focused on the ordinary user, that can be implemented by systems interested in providing such capabilities; (ii) a Web service and a JavaScript library implementing the model, which can be used on the client side of Web applications that aim to provide presentation and navigation for Linked Data; and (iii) a prototype developed to demonstrate the use of the service and the library and, consequently, the feasibility of the proposed model, providing presentation and navigation of structured data embedded in Web pages.
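A minimal sketch of the presentation idea (not the thesis's actual JavaScript library): dereference a Linked Data URI and render its properties as human-readable label/value pairs, the sort of view the proposed model would generate for an end user. Python with rdflib; network access and the example URI are assumptions:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

def present(resource_uri):
    g = Graph()
    g.parse(resource_uri)  # content negotiation fetches an RDF description
    subject = URIRef(resource_uri)
    for p, o in g.predicate_objects(subject):
        # Prefer a human-readable label for the predicate when one exists;
        # otherwise fall back to the URI's local name.
        label = g.value(p, RDFS.label) or p.split("/")[-1].split("#")[-1]
        print(f"{label}: {o}")

present("http://dbpedia.org/resource/Salvador,_Bahia")
```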
