91

Extracting structured information from Wikipedia articles to populate infoboxes

Lange, Dustin, Böhm, Christoph, Naumann, Felix January 2010 (has links)
Roughly every third Wikipedia article contains an infobox - a table that displays important facts about the subject in attribute-value form. The schema of an infobox, i.e., the attributes that can be expressed for a concept, is defined by an infobox template. Often, authors do not specify all template attributes, resulting in incomplete infoboxes. With iPopulator, we introduce a system that automatically populates infoboxes of Wikipedia articles by extracting attribute values from the article's text. In contrast to prior work, iPopulator detects and exploits the structure of attribute values for independently extracting value parts. We have tested iPopulator on the entire set of infobox templates and provide a detailed analysis of its effectiveness. For instance, we achieve an average extraction precision of 91% for 1,727 distinct infobox template attributes.
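As a rough illustration of the extraction task only (a minimal Python sketch, not iPopulator's learned, structure-aware extractors described in the thesis; the patterns and attribute names below are hypothetical):

```python
import re

# Hypothetical hand-written patterns for two infobox attributes.
# iPopulator instead learns an extractor per attribute and splits
# attribute values into parts; this sketch only conveys the task setup.
PATTERNS = {
    "founded": re.compile(r"founded in (\d{4})"),
    "headquarters": re.compile(r"headquartered in ([A-Z]\w+(?:, [A-Z]\w+)*)"),
}

def populate_infobox(article_text, infobox):
    """Fill attributes missing from an infobox using the article text."""
    filled = dict(infobox)
    for attribute, pattern in PATTERNS.items():
        if attribute in filled:
            continue  # keep values the author already specified
        match = pattern.search(article_text)
        if match:
            filled[attribute] = match.group(1)
    return filled

text = "Acme Corp, founded in 1947, is headquartered in Portland, Oregon."
print(populate_infobox(text, {"name": "Acme Corp"}))
# {'name': 'Acme Corp', 'founded': '1947', 'headquarters': 'Portland, Oregon'}
```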
92

Linked-OWL: A new approach for dynamic linked data service workflow composition

Ahmad, Hussien, Dowaji, Salah 01 June 2013 (has links)
The shift from the Web of Documents to the Web of Data, based on the Linked Data principles defined by Tim Berners-Lee, poses a significant challenge for building and developing applications that work in a Web of Data environment. There have been several attempts to build service and application models for the Linked Data Cloud. In this paper, we propose a new service model for linked data, "Linked-OWL", which is based on RESTful services and OWL-S and complies with linked data principles. This new model shifts the service concept from functions to linked data things and opens the way for a Linked Oriented Architecture (LOA) and a Web of Services as part of, and on top of, the Web of Data. The model also provides a high degree of dynamic service composition capability, enabling more accurate dynamic composition and execution of complex business processes in a Web of Data environment.
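To make the idea of data-driven composition concrete, a minimal sketch (assuming hypothetical JSON-LD service endpoints; this is not the Linked-OWL implementation) could chain two RESTful services by feeding the resource link one service returns into the next:

```python
import requests  # the endpoints below are hypothetical placeholders

def invoke(service_url, **params):
    """Call a RESTful linked-data service and return its JSON-LD body."""
    response = requests.get(
        service_url, params=params,
        headers={"Accept": "application/ld+json"},
    )
    response.raise_for_status()
    return response.json()

def compose(city_name):
    # Step 1: resolve a real-world thing (a city) to a linked data URI.
    city = invoke("https://example.org/services/geocode", q=city_name)
    # Step 2: composition is driven by the data itself: the city's
    # '@id' link is what makes the weather service applicable next.
    return invoke("https://example.org/services/weather",
                  location=city["@id"])
```

In this style, services are matched and chained by the linked resources they consume and produce rather than by fixed function signatures, which is the shift from functions to "linked data things" that the paper describes.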
93

Linked Data Quality Assessment and its Application to Societal Progress Measurement

Zaveri, Amrapali 19 May 2015 (has links) (PDF)
In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration in which both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows information to be published and exchanged in an interoperable and reusable fashion. Many different communities on the Internet, such as geographic, media, life sciences, and government, have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented. With the emergence of the Web of Linked Data, several use cases become possible thanks to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore the existing data and uncover meaningful outcomes that they might not have been aware of previously.

In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent, or inaccurate data affects the end results gravely, making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case; datasets that contain quality problems may still be useful for certain applications, depending on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality may be caused by the LD publication process or may be intrinsic to the data source itself. A key challenge is to assess the quality of datasets published on the Web and make this quality information explicit. Assessing data quality is a particular challenge in LD because the underlying data stems from a set of multiple, autonomous, and evolving data sources. Moreover, the dynamic nature of LD makes assessing quality crucial for measuring how accurately the data represents the real world. On the document Web, data quality can only be indirectly or vaguely defined, but LD requires more concrete and measurable data quality metrics. Such metrics include correctness of facts with respect to the real world, adequacy of semantic representation, quality of interlinks, interoperability, timeliness, and consistency with regard to implicit information.

Even though data quality is an important concept in LD, few methodologies have been proposed to assess the quality of these datasets. Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology employs LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process in which the first phase involves the detection of common quality problems through the automatic creation of an extended schema for DBpedia. The second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowd, i.e., workers on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology. Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is provided not only with the results of the assessment but also with the specific entities that cause the errors, which helps users understand and fix the quality issues. Finally, we consider a domain-specific use case that consumes LD and depends on data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool, and then utilize them in building the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
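For a flavour of what a single automated metric looks like, here is a simplified completeness-style check in Python with rdflib (an illustrative sketch, not the R2RLint implementation or any of the thesis's 69 metrics verbatim):

```python
from rdflib import Graph, RDFS  # assumes the rdflib package is installed

def labeling_completeness(path):
    """Fraction of distinct subjects carrying an rdfs:label, a simple
    completeness-style quality metric for a Linked Data dataset."""
    g = Graph()
    g.parse(path)  # serialization format is guessed from the file suffix
    subjects = set(g.subjects())
    if not subjects:
        return 1.0  # an empty dataset has nothing unlabeled
    labeled = sum(1 for s in subjects if g.value(s, RDFS.label) is not None)
    return labeled / len(subjects)

print(f"labeling completeness: {labeling_completeness('dataset.ttl'):.2%}")
```

A full assessment pipeline, like the one the thesis describes, would combine many such metrics across dimensions and, crucially, report the offending entities rather than only a score.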
94

Comparison of two methods for estimating antibody avidity : a thesis submitted in partial fulfillment ... Master of Science in Periodontics ... /

Cilla, Brian. January 1989 (has links)
Thesis (M.S.)--University of Michigan, 1989.
95

Detection of antibodies to microorganisms associated with periodontal disease activity by enzyme-linked immunosorbent assays : a thesis submitted in partial fulfillment ... periodontics ... /

Marquez, Christian. January 1983 (has links)
Thesis (M.S.)--University of Michigan, 1983.
96

Fibronectin degradation by oral microbes detected by microELISA : a thesis submitted in partial fulfillment ... periodontics ... /

Van Raaphorst, Paul A. January 1989 (has links)
Thesis (M.S.)--University of Michigan, 1989.
100

Enzyme immunoassay in combination with liquid chromatography for sensitive and selective determination of drugs in biosamples

Lövgren, Ulf. January 1997 (has links)
Thesis (doctoral)--Lund University, 1997. / Added t.p. with thesis statement inserted.
