1

Generelle Queryverarbeitung für das Semantic Web: effiziente Suche in strukturierten Daten

Badertscher, Guido. January 2004
Also: Zürich, Univ., diploma thesis, 2004.
2

Unsupervised duplicate detection using sample non-duplicates

Lehti, Patrick. Unknown date
Technical University Darmstadt, dissertation, 2006.
3

Ein Standard-File für 3D-Gebietsbeschreibungen. - Datenbasis und Programmschnittstelle data_read

Lohse, Dag. 13 September 2005
Building on the documentation of the file format given in Preprint 98-11 of this series, a program interface is described here that enables such domain data files to be read in. The structure of the internal data base is also described.
4

Unscharfe Validierung strukturierter Daten: ein Modell auf der Basis unscharfer Logik

Schlarb, Sven. January 2007
Also: Köln, Univ., dissertation, 2007.
5

TopX: efficient and versatile top-k query processing for text, structured, and semistructured data

Theobald, Martin. January 2006
Also: Saarbrücken, Univ., dissertation, 2006 / Printed on demand.
6

Ein Standard-File für 3D-Gebietsbeschreibungen

Lohse, Dag. 12 September 2005
This is the documentation of a file format for describing three-dimensional FEM domains in boundary representation. An internal data base serves as the link between the external file format and the various programs that process it.
7

Unscharfe Validierung strukturierter Daten: ein Modell auf der Basis unscharfer Logik

Schlarb, Sven. January 2008
Universität Köln, dissertation, 2007.
8

Recovering the Semantics of Tabular Web Data

Braunschweig, Katrin. 26 October 2015
The Web provides a platform for people to share their data, leading to an abundance of accessible information. In recent years, significant research effort has been directed especially at tables on the Web, which form a rich resource for factual and relational data. Applications such as fact search and knowledge base construction benefit from this data, as it is often less ambiguous than unstructured text. However, many traditional information extraction and retrieval techniques are not well suited for Web tables, as they generally do not consider the role of the table structure in reflecting the semantics of the content. Tables provide a compact representation of similarly structured data. Yet, on the Web, tables are very heterogeneous, often with ambiguous semantics and inconsistencies in the quality of the data. Consequently, recognizing the structure and inferring the semantics of these tables is a challenging task that requires a designated table recovery and understanding process. In the literature, many important contributions have been made to implement such a table understanding process that specifically targets Web tables, addressing tasks such as table detection or header recovery. However, the precision and coverage of the data extracted from Web tables is often still quite limited. Due to the complexity of Web table understanding, many techniques developed so far make simplifying assumptions about the table layout or content to limit the amount of contributing factors that must be considered. Thanks to these assumptions, many sub-tasks become manageable. However, the resulting algorithms and techniques often have a limited scope, leading to imprecise or inaccurate results when applied to tables that do not conform to these assumptions. In this thesis, our objective is to extend the Web table understanding process with techniques that enable some of these assumptions to be relaxed, thus improving the scope and accuracy. 
We have conducted a comprehensive analysis of tables available on the Web to examine the characteristic features of these tables, but also identify unique challenges that arise from these characteristics in the table understanding process. To extend the scope of the table understanding process, we introduce extensions to the sub-tasks of table classification and conceptualization. First, we review various table layouts and evaluate alternative approaches to incorporate layout classification into the process. Instead of assuming a single, uniform layout across all tables, recognizing different table layouts enables a wide range of tables to be analyzed in a more accurate and systematic fashion. In addition to the layout, we also consider the conceptual level. To relax the single concept assumption, which expects all attributes in a table to describe the same semantic concept, we propose a semantic normalization approach. By decomposing multi-concept tables into several single-concept tables, we further extend the range of Web tables that can be processed correctly, enabling existing techniques to be applied without significant changes. Furthermore, we address the quality of data extracted from Web tables by studying the role of context information. Supplementary information from the context is often required to correctly understand the table content; however, the verbosity of the surrounding text can also mislead table relevance decisions. We first propose a selection algorithm to evaluate the relevance of context information with respect to the table content in order to reduce the noise. Then, we introduce a set of extraction techniques to recover attribute-specific information from the relevant context in order to provide a richer description of the table content. With the extensions proposed in this thesis, we increase the scope and accuracy of Web table understanding, leading to a better utilization of the information contained in tables on the Web.
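The semantic normalization idea in the abstract above (decomposing a multi-concept table into several single-concept tables) can be illustrated with a minimal sketch. This is not the thesis's actual algorithm: it assumes the column-to-concept assignment (`concept_map`) is already known, and the column names and data are invented for the example.

```python
# Toy decomposition of a multi-concept table into single-concept tables.
# Assumes the first column is a shared key that is copied into every
# output table, and that every non-key column is already assigned to a
# concept (a hypothetical input; inferring it is the hard part).

def decompose(rows, header, concept_map):
    """Split a table (list of row tuples) into one table per concept."""
    key = header[0]
    tables = {}
    for concept in set(concept_map.values()):
        # Key column plus the columns assigned to this concept.
        cols = [key] + [h for h in header[1:] if concept_map[h] == concept]
        idx = [header.index(c) for c in cols]
        tables[concept] = {
            "header": cols,
            "rows": [tuple(row[i] for i in idx) for row in rows],
        }
    return tables

# Example: one table mixing facts about countries and about rivers.
header = ["country", "capital", "population", "longest_river", "river_km"]
rows = [
    ("Germany", "Berlin", 83_000_000, "Rhine", 1233),
    ("France", "Paris", 68_000_000, "Loire", 1006),
]
concept_map = {
    "capital": "country",
    "population": "country",
    "longest_river": "river",
    "river_km": "river",
}
tables = decompose(rows, header, concept_map)
```

After decomposition, each output table describes a single concept, so downstream techniques that rely on the single-concept assumption can be applied to each part unchanged.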
