The Data Web has undergone a tremendous growth period.
It currently consists of more than 3,300 publicly available knowledge bases describing millions of resources from various domains, such as the life sciences, government, and geography, with over 89 billion facts.
The Document Web has grown similarly: approximately 4.55 billion websites exist, and on an average day 300 million photos are uploaded to Facebook and 3.5 billion Google searches are performed.
However, there is a gap between the Document Web and the Data Web: the knowledge bases available on the Data Web are most commonly extracted from structured or semi-structured sources, whereas the majority of the information available on the Web is contained in unstructured sources such as news articles, blog posts, photos, and forum discussions.
As a result, the Data Web not only misses a significant portion of the available information but also lacks timeliness, since typical extraction methods are time-consuming and can only be carried out periodically.
Furthermore, provenance information is rarely taken into account and is therefore lost in the transformation process.
In addition, users are accustomed to entering keyword queries to satisfy their information needs.
With the availability of machine-readable knowledge bases, lay users could be empowered to issue more specific questions and get more precise answers.
In this thesis, we address the problem of Relation Extraction, one of the key challenges in closing the gap between the Document Web and the Data Web, by four means.
First, we present a distant supervision approach that allows finding multilingual natural language representations of formal relations already contained in the Data Web.
We then use these natural language representations to find sentences on the Document Web that contain previously unseen instances of these relations between two entities.
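To make the distant supervision idea concrete, the following minimal sketch extracts candidate natural language representations from sentences that mention both entities of a known relation instance. The relation instances, sentences, and the between-entities pattern heuristic are simplified assumptions for illustration, not the approach as implemented in this thesis.

```python
import re

# Known instances of a formal relation such as dbo:birthPlace (assumed examples).
known_pairs = [
    ("Albert Einstein", "Ulm"),
    ("Marie Curie", "Warsaw"),
]

# Stand-ins for sentences found on the Document Web.
sentences = [
    "Albert Einstein was born in Ulm in 1879.",
    "Marie Curie was born in Warsaw, then part of the Russian Empire.",
]

def extract_patterns(pairs, sentences):
    """Collect the token span between the two entity mentions as a
    candidate natural language representation of the relation."""
    patterns = set()
    for subj, obj in pairs:
        for sentence in sentences:
            if subj in sentence and obj in sentence:
                # Keep the text between the mentions, e.g. "was born in".
                match = re.search(
                    re.escape(subj) + r"\s+(.*?)\s+" + re.escape(obj), sentence
                )
                if match:
                    patterns.add(match.group(1))
    return patterns

print(extract_patterns(known_pairs, sentences))  # {'was born in'}
```

The harvested patterns can then be matched against new sentences to propose relation instances that are not yet in the knowledge base.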
Second, we address the problem of data timeliness by presenting a framework for real-time RDF extraction from data streams, which we apply to extract RDF from RSS news feeds.
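As a rough illustration of RSS-to-RDF extraction (not the framework itself), the sketch below maps feed items to triples, assuming the feedparser and rdflib libraries, a hypothetical feed URL, and an illustrative choice of vocabulary; the continuous stream-processing aspect is omitted.

```python
import feedparser
from rdflib import Graph, Literal, URIRef, Namespace
from rdflib.namespace import RDF, DCTERMS

FEED_URL = "https://example.org/news/rss"  # hypothetical feed
SIOC = Namespace("http://rdfs.org/sioc/ns#")

def feed_to_rdf(url):
    """Convert each feed item into RDF, assuming it carries a link and a title."""
    graph = Graph()
    for entry in feedparser.parse(url).entries:
        item = URIRef(entry.link)
        graph.add((item, RDF.type, SIOC.Post))
        graph.add((item, DCTERMS.title, Literal(entry.title)))
    return graph

print(feed_to_rdf(FEED_URL).serialize(format="turtle"))
```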
Third, we present a novel fact validation algorithm based on natural language representations that is able not only to verify or falsify a given triple, but also to find trustworthy sources for it on the Web and to estimate the time scope in which the triple holds true.
The features this algorithm uses to determine whether a website is trustworthy serve as provenance information and thereby help to create metadata for facts in the Data Web.
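The following sketch illustrates how such per-website features could be combined into a confidence score for a triple; the feature names, weights, and the averaging step are assumptions for illustration only, not the algorithm developed in the thesis.

```python
def trustworthiness(features, weights):
    """Weighted sum of normalized per-website features, e.g. link-based
    authority, topic coverage of the triple's entities, and the strength
    of the matched natural language pattern."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical feature vectors for two websites that mention the triple.
evidence = [
    {"authority": 0.9, "topic_coverage": 0.8, "pattern_match": 1.0},
    {"authority": 0.4, "topic_coverage": 0.6, "pattern_match": 0.7},
]
weights = {"authority": 0.3, "topic_coverage": 0.3, "pattern_match": 0.4}

# Aggregate the per-source scores into a confidence for the triple;
# the per-source features themselves double as provenance metadata.
score = sum(trustworthiness(f, weights) for f in evidence) / len(evidence)
print(f"confidence: {score:.2f}")
```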
Finally, we present a question answering system that uses the natural language representations to map natural language questions to formal SPARQL queries, allowing lay users to exploit the large amounts of data available on the Data Web to satisfy their information needs.
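As a simplified illustration of pattern-based query generation (not the system's actual pipeline), the sketch below maps a question to a SPARQL query via a small pattern table; the table, the naive entity extraction, and the DBpedia resource naming are assumptions for illustration.

```python
# Hypothetical mapping from natural language patterns to formal relations.
PATTERNS = {
    "was born in": "http://dbpedia.org/ontology/birthPlace",
}

def question_to_sparql(question):
    """Match a known relation pattern and build a query; a real system
    would also disambiguate the entity mention against the knowledge base."""
    for phrase, predicate in PATTERNS.items():
        if phrase in question:
            # Naively take the text after the pattern as the entity mention.
            entity = question.split(phrase, 1)[1].strip(" ?").replace(" ", "_")
            return (
                "SELECT ?subject WHERE { "
                f"?subject <{predicate}> <http://dbpedia.org/resource/{entity}> . }}"
            )
    return None

print(question_to_sparql("Who was born in Ulm?"))
```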
Identifier | oai:union.ndltd.org:DRESDEN/oai:qucosa:de:qucosa:13878
Date | 07 June 2016
Creators | Gerber, Daniel
Contributors | Fähnrich, Klaus-Peter; Ngonga Ngomo, Axel-Cyrille; Polleres, Axel; Universität Leipzig
Source Sets | Hochschulschriftenserver (HSSS) der SLUB Dresden
Language | English
Detected Language | English
Type | doc-type:doctoralThesis, info:eu-repo/semantics/doctoralThesis, doc-type:Text
Rights | info:eu-repo/semantics/openAccess