191

Managing and Consuming Completeness Information for RDF Data Sources

Darari, Fariz 04 July 2017 (has links) (PDF)
The ever-increasing amount of Semantic Web data gives rise to the question: How complete is the data? Though data on the Semantic Web is generally incomplete, many parts of it are indeed complete, such as the children of Barack Obama and the crew of Apollo 11. This thesis aims to study how to manage and consume completeness information about Semantic Web data. In particular, we first discuss how completeness information can guarantee the completeness of query answering. Next, we propose optimization techniques for completeness reasoning and conduct experimental evaluations to show the feasibility of our approaches. We also provide a technique to check the soundness of queries with negation via reduction to query completeness checking. We further enrich completeness information with timestamps, making it possible to check up to which point in time query answers are complete. We then introduce two demonstrators, CORNER and COOL-WD, to show how our completeness framework can be realized. Finally, we investigate an automated method to generate completeness statements from text on the Web via relation cardinality extraction.
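As an aside on what such completeness statements buy in practice, a minimal Python sketch follows (not taken from the thesis; the tuple-based data model, the wildcard convention, and all names are invented for illustration). A statement declares a pattern for which the source is complete; a query covered by such a pattern is guaranteed complete answers even under the open-world assumption:

```python
# Minimal sketch of completeness reasoning over RDF-like data. Triples are
# plain (subject, predicate, object) tuples and '?' marks a wildcard; both
# conventions are illustrative, not the thesis' formal model.
WILDCARD = "?"

graph = {
    ("BarackObama", "hasChild", "Malia"),
    ("BarackObama", "hasChild", "Sasha"),
    ("Apollo11", "crew", "NeilArmstrong"),
}

# Completeness statements: patterns for which the source claims completeness,
# e.g. "all children of Barack Obama are present".
completeness_statements = [
    ("BarackObama", "hasChild", WILDCARD),
    ("Apollo11", "crew", WILDCARD),
]

def covers(pattern, query):
    """True if `pattern` is at least as general as `query` in every position."""
    return all(p == WILDCARD or p == q for p, q in zip(pattern, query))

def answer(query):
    """Answer a triple-pattern query and report whether the answer is complete."""
    matches = {t for t in graph if covers(query, t)}
    complete = any(covers(s, query) for s in completeness_statements)
    return matches, complete

print(answer(("BarackObama", "hasChild", WILDCARD)))  # complete: True
print(answer(("BarackObama", "spouse", WILDCARD)))    # complete: False (unknown)
```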
192

Methodology for Conflict Detection and Resolution in Semantic Revision Control Systems

Hensel, Stephan, Graube, Markus, Urbas, Leon 11 November 2016 (has links) (PDF)
Revision control mechanisms are a crucial part of information systems to keep track of changes. Revision control is one of the key requirements for the industrial application of technologies like Linked Data, which provides the possibility to integrate data from different systems and domains in a semantic information space. A corresponding semantic revision control system must offer the same functionality as established systems (e.g. Git or Subversion). There is also a need for branching to enable parallel work on the same data or concurrent access to it, which directly introduces the requirement of supporting merges. This paper presents an approach that makes it possible to merge branches and to detect inconsistencies before creating the merged revision. We use a structural analysis of triple differences as the smallest comparison unit between the branches. The detected differences can be accumulated into high-level changes, which is an essential step towards semantic merging. We implemented our approach as a prototypical extension of the revision control system R43ples to show proof of concept.
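To make the triple-difference idea concrete, here is a small illustrative sketch, independent of the actual R43ples implementation (branch contents and the conflict criterion are simplified assumptions): branches are modeled as sets of triples, differences are computed against a common ancestor, and a candidate conflict is flagged when both branches change the same subject-predicate pair in divergent ways:

```python
# Sketch of three-way merging over RDF triples, with the triple difference as
# the smallest comparison unit. A real system would also aggregate these
# low-level differences into high-level changes before semantic merging.

def diff(ancestor, branch):
    """Triple-level difference of a branch against the common ancestor."""
    return branch - ancestor, ancestor - branch  # (added, removed)

def changed_keys(ancestor, branch):
    added, removed = diff(ancestor, branch)
    return {(s, p) for (s, p, o) in added | removed}

def detect_conflicts(ancestor, branch_a, branch_b):
    """Flag (subject, predicate) pairs both branches changed divergently."""
    suspects = changed_keys(ancestor, branch_a) & changed_keys(ancestor, branch_b)
    conflicts = []
    for s, p in suspects:
        objs_a = {o for (s2, p2, o) in branch_a if (s2, p2) == (s, p)}
        objs_b = {o for (s2, p2, o) in branch_b if (s2, p2) == (s, p)}
        if objs_a != objs_b:  # the branches disagree on the resulting triples
            conflicts.append(((s, p), objs_a, objs_b))
    return conflicts

base = {("ex:Pump1", "ex:status", "running")}
a = {("ex:Pump1", "ex:status", "stopped")}      # branch A: stopped
b = {("ex:Pump1", "ex:status", "maintenance")}  # branch B: maintenance
print(detect_conflicts(base, a, b))
# [(('ex:Pump1', 'ex:status'), {'stopped'}, {'maintenance'})]
```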
193

Statistical Extraction of Multilingual Natural Language Patterns for RDF Predicates: Algorithms and Applications

Gerber, Daniel 07 June 2016 (has links)
The Data Web has undergone a tremendous growth period. It currently consists of more than 3,300 publicly available knowledge bases describing millions of resources from various domains, such as life sciences, government or geography, with over 89 billion facts. The Document Web has likewise grown to a state where approximately 4.55 billion websites exist, 300 million photos are uploaded to Facebook, and 3.5 billion Google searches are performed on average every day. However, there is a gap between the Document Web and the Data Web: knowledge bases on the Data Web are most commonly extracted from structured or semi-structured sources, while the majority of information on the Web is contained in unstructured sources such as news articles, blog posts, photos, and forum discussions. As a result, data on the Data Web not only misses a significant fragment of the information available but also suffers from a lack of timeliness, since typical extraction methods are time-consuming and can only be carried out periodically. Furthermore, provenance information is rarely taken into consideration and therefore gets lost in the transformation process. In addition, users are accustomed to entering keyword queries to satisfy their information needs; with the availability of machine-readable knowledge bases, lay users could be empowered to issue more specific questions and get more precise answers. In this thesis, we address the problem of Relation Extraction, one of the key challenges in closing the gap between the Document Web and the Data Web, by four means. First, we present a distant supervision approach that finds multilingual natural language representations of formal relations already contained in the Data Web. We use these natural language representations to find sentences on the Document Web that contain unseen instances of those relations between two entities. Second, we address the problem of data timeliness by presenting a real-time RDF extraction framework for data streams and use this framework to extract RDF from RSS news feeds. Third, we present a novel fact validation algorithm, based on natural language representations, that can not only verify or falsify a given triple but also find trustworthy sources for it on the Web and estimate a time scope in which the triple holds true. The features this algorithm uses to determine whether a website is trustworthy serve as provenance information and thereby help to create metadata for facts in the Data Web. Finally, we present a question answering system that uses the natural language representations to map natural language questions to formal SPARQL queries, allowing lay users to exploit the large amounts of data available on the Data Web to satisfy their information needs.
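The first of these contributions, distant supervision for pattern extraction, can be illustrated with a toy sketch (all sentences, entity pairs, and the "?S ... ?O" pattern format are invented; the actual multilingual pipeline is far more elaborate). Known (subject, object) pairs of a formal relation are located in text, and the words between the two mentions are collected as candidate natural language representations of that relation:

```python
# Toy distant-supervision sketch: harvest infix patterns between entity pairs
# that are known (from the Data Web) to stand in a given relation, here a
# birth-place-like relation. Frequent patterns can later be used to spot
# unseen instances of the relation in text; rare ones (e.g. "?S visited ?O")
# show the noise this approach must filter out.
from collections import Counter

known_pairs = [("Barack Obama", "Hawaii"), ("Angela Merkel", "Hamburg")]

sentences = [
    "Barack Obama was born in Hawaii in 1961.",
    "Angela Merkel was born in Hamburg as Angela Kasner.",
    "Barack Obama visited Hawaii last year.",
]

def extract_patterns(pairs, sentences):
    counts = Counter()
    for subj, obj in pairs:
        for sent in sentences:
            i, j = sent.find(subj), sent.find(obj)
            if i != -1 and j != -1 and i < j:
                infix = sent[i + len(subj):j].strip()
                counts["?S " + infix + " ?O"] += 1
    return counts

print(extract_patterns(known_pairs, sentences).most_common())
# [('?S was born in ?O', 2), ('?S visited ?O', 1)]
```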
194

The Regensburger Verbundklassifikation (RVK), "a wide field": the challenge of the Semantic Web, ontologies, and entities for the dynamics of a classification

Werr, Naoka 28 January 2011 (has links)
Buzzwords such as "information overload", "digital natives" and "digital immigrants" characterize today's information and knowledge society. Numerous scientific studies also demonstrate emphatically that technological development will advance even more rapidly in the coming years than anyone could have anticipated. Internet-based communication services are already of extraordinary importance, with a rising tendency, and communication services such as Web 2.0 applications are highlighted as an increasingly important factor in Internet use, with the current trend towards personal networking via the Internet constantly emphasized. The importance of the Internet's core uses as a source of content and a form of communication will therefore continue to grow. Classification systems, too, must face this trend. With the web portal launched in October 2009, the RVK has taken a first step towards such networking. The information about the RVK previously scattered across various websites, as well as the RVK databases, are now united under one interface, interlinked, and enriched with elements of social software (an RVK wiki for greater transparency in coordination processes). In the context of the Semantic Web, currently another popular buzzword, the RVK portal represents a paradigm shift in the long history of the RVK: the entire body of knowledge about the RVK is conceptually connected according to its meaning and already offered in largely machine-readable form (for example, with respect to the search function of the RVK-Online database). Knowledge management and the quality of the extensive information about the RVK have been considerably improved at the semantic level; together with the RVK wiki, one could even speak of a first impulse towards Web 3.0 for the RVK. The hierarchical structure of the RVK also contributes substantially to the Semantic Web, since in a classification it is precisely hierarchical structures that help to "order" the abundant implicit knowledge. What is essential, then, is the definition of relations on the Web (and thus of the corresponding ontologies and entities) in order to counter the sheer quantity of offerings on the World Wide Web with correspondingly high-quality services offering added value for libraries. For the data model of the Semantic Web, the provision of sustainable authority data, as is planned, indeed almost implemented, for the RVK, is therefore necessary.
195

Linked Open Projects: Reusing Results in the Semantic Web

Pfeffer, Magnus, Eckert, Kai 28 January 2011 (has links)
Semantic Web and Linked Data are on everyone's lips. After almost a decade of developing the technologies and exploring the possibilities of the Semantic Web, the data itself is now moving into focus, for without it the Semantic Web would be no more than a theoretical construct, almost like the World Wide Web without websites. With authority files (PND, SWD) and catalogue records, libraries possess a wealth of data suitable for populating the Semantic Web, some of which has already been prepared for the Semantic Web and released for use. The Mannheim University Library has dealt with the use of such data in two different projects, although at the time this data was not yet available as Linked Data. One project concerned the automatic subject indexing of publications based on abstracts, the other the automatic classification of publications based on title data. In this contribution we briefly present the results of both projects, but focus on a side aspect that only emerged in their course: how can the results obtained be made available permanently and meaningfully for reuse by third parties? To say this much upfront: neither method can or intends to replace a librarian. The possible uses of the generated data are manifold, but concrete applications, such as importing them into a union catalogue, are controversial given the quality and the lack of curation of the data. Publishing this data as Linked Data in the Semantic Web is an obvious solution: anyone who wants to reuse the results can do so without any existing data holdings being compromised. This approach, however, raises new questions, not least how the source data can be identified via URIs when it is not (yet) available as Linked Data itself. Beyond that, providing result data also requires measures that go beyond current Linked Data practice: supplying additional information that describes the source and genesis of the data (provenance information), as well as information that usually exceeds the underlying metadata schema, such as confidence values in the case of automatic data generation. To this end, we present approaches based on RDF reification and named graphs, and we outline current developments in this area as discussed, for example, in the W3C Provenance Incubator Group and in working groups of the Dublin Core Metadata Initiative.
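As a hint of what the reification-based provenance approach mentioned at the end can look like, here is a small sketch using the rdflib Python library; the namespace URI and the property names (ex:generatedBy, ex:confidence) are illustrative placeholders, not the authors' vocabulary:

```python
# Attach provenance and a confidence value to an automatically generated
# triple via RDF reification: a statement resource describes the triple.
from rdflib import Graph, Literal, Namespace, BNode
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")
g = Graph()

# The automatically generated triple itself.
triple = (EX.Publication42, EX.subjectHeading, EX.SemanticWeb)
g.add(triple)

# Reification: a resource standing for the statement above.
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, triple[0]))
g.add((stmt, RDF.predicate, triple[1]))
g.add((stmt, RDF.object, triple[2]))

# Provenance and confidence attached to the statement resource.
g.add((stmt, EX.generatedBy, EX.AbstractIndexer))  # hypothetical tool name
g.add((stmt, EX.confidence, Literal(0.83, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```

A named-graph variant would instead place the generated triples in their own graph and attach the same metadata to the graph's URI.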
198

Enriching and Analyzing Semantic Trajectories with Linked Open Data

Livia Couto Ruback Rodrigues 26 February 2018 (has links)
The last years have witnessed a growing number of devices that track moving objects: personal GPS-equipped devices and GSM mobile phones, vehicles and other sensors from the Internet of Things, as well as location data derived from social network check-ins. These mobility data are represented as trajectories, recording the sequence of locations of the moving object. However, such sequences only represent the raw location data; they need to be semantically enriched to be meaningful in analysis tasks and to support a deep understanding of movement behavior. Another unprecedented global data space that is also growing at a fast pace is the Web of Data, thanks to the emergence of the Linked Data initiative. These freely available, semantically rich datasets provide a novel way to enhance trajectory data. This thesis contributes to the challenges that arise from this scenario. First, it investigates how trajectory data may benefit from the Linked Data initiative by guiding the whole trajectory enrichment process with the use of external datasets. Then, it addresses the pivotal topic of similarity computation between Linked Data entities, with the final objective of computing the similarity between semantically enriched trajectories. The novelty of our approach is to consider the relevant entity features as ranked lists. Finally, the thesis targets the computation of the similarity between enriched trajectories by comparing the Linked Data entities that represent them.
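The ranked-list idea can be illustrated with a small sketch (made-up feature lists and a simplified rank-discounted overlap in the spirit of rank-biased overlap; the thesis' actual measure and feature model may differ):

```python
# Similarity of Linked Data entities whose relevant features are ranked lists,
# aggregated over the aligned stops of two enriched trajectories.

def ranked_overlap(a, b, p=0.9):
    """Overlap of two ranked feature lists at each depth, geometrically
    discounted so that agreement on top-ranked features weighs more."""
    depth = min(len(a), len(b))
    if depth == 0:
        return 0.0
    score = weight_sum = 0.0
    for d in range(1, depth + 1):
        overlap = len(set(a[:d]) & set(b[:d])) / d
        weight = p ** (d - 1)
        score += weight * overlap
        weight_sum += weight
    return score / weight_sum

def trajectory_similarity(traj_a, traj_b):
    """Average entity similarity over aligned stops of two trajectories."""
    n = min(len(traj_a), len(traj_b))
    return sum(ranked_overlap(x, y) for x, y in zip(traj_a, traj_b)) / n

louvre  = ["museum", "art", "tourism", "paris"]
orsay   = ["museum", "art", "paris", "impressionism"]
airport = ["transport", "aviation", "paris"]

print(round(ranked_overlap(louvre, orsay), 3))   # high: similar entities
print(round(ranked_overlap(louvre, airport), 3)) # low: dissimilar entities
print(round(trajectory_similarity([louvre, airport], [orsay, airport]), 3))
```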
199

Towards a comprehensive functional layered architecture for the Semantic Web

Gerber, Aurona J. 30 November 2006 (has links)
The Semantic Web, as the foreseen successor of the current Web, is envisioned to be a semantically enriched information space usable by machines or agents that perform sophisticated tasks on behalf of their users. The realisation of the Semantic Web prescribes the development of a comprehensive and functional layered architecture for the increasingly semantically expressive languages it comprises. A functional architecture is a model specified at an appropriate level of abstraction, identifying system components based on required system functionality, whilst a comprehensive architecture is one founded on established design principles within Software Engineering. Within this study, an argument is formulated for the development of such an architecture through the development of a Semantic Web status model, the extraction of the functions of established Semantic Web technologies, and the development of an evaluation mechanism for layered architectures compiled from design principles and the fundamental features of layered architectures. In addition, an initial version of such a comprehensive and functional layered architecture for the Semantic Web is constructed from these building blocks, and this architecture is applied to several scenarios to establish its usefulness. In conclusion, based on the evidence collected as a result of the research in this study, it is possible to justify the development of an architectural model, or more specifically, a comprehensive and functional layered architecture, for the languages of the Semantic Web. / Computing / PhD (Computer Science)
200

Towards a security framework for the semantic web

Mbaya, Ibrahim Rajab 30 November 2007 (has links)
With the increasing use of the Web and the need to automate, interoperate, and reason about resources and services on the Web, the Semantic Web aims to provide solutions for the future needs of World Wide Web computing. However, the autonomous, dynamic, open, distributed and heterogeneous nature of the Semantic Web introduces new security challenges. Various security standards and mechanisms exist that address different security aspects of the current Web and Internet, but these have not been integrated to address the security aspects of the Semantic Web specifically. Hence, there is a need for a security framework that integrates these disparate security tools to provide a holistic, secure environment for the Semantic Web. This study proposes a security framework that provides various security functionalities to Semantic Web entities, namely agents, Web services and Web resources. The study commences with a literature survey carried out to establish the security aspects related to the Semantic Web; in addition, requirements for a security framework for the Semantic Web are extracted from the literature. This is followed by a model-building study used to compile a security framework for the Semantic Web. To prove its feasibility, the framework is then applied to different application scenarios as a proof of concept. Following the results of the evaluation, it is possible to argue that the proposed security framework allows for the description of security concepts and service workflows, reasoning about security concepts and policies, as well as the specification of security policies, security services and security mechanisms. The security framework is therefore useful in addressing the identified security requirements of the Semantic Web. / School of Computing / M.Sc. (Computer Science)
