About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Entity extraction, animal disease-related event recognition and classification from web

Volkova, Svitlana January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / William H. Hsu / Global epidemic surveillance is an essential task for national biosecurity management and bioterrorism prevention. The main goal is to protect the public from major health threats. To perform this task effectively, one requires reliable, timely and accurate medical information from a wide range of sources. Towards this goal, we present a framework for epidemiological analytics that can be used to automatically extract and visualize infectious disease outbreaks from a variety of unstructured web sources. More precisely, in this thesis, we consider several research tasks including document relevance classification, entity extraction and animal disease-related event recognition in the veterinary epidemiology domain. First, we crawl web sources and classify collected documents by topical relevance using supervised learning algorithms. Next, we propose a novel approach for automated ontology construction in the veterinary medicine domain. Our approach is based on semantic relationship discovery using syntactic patterns. We then apply our automatically constructed ontology to the domain-specific entity extraction task. Moreover, we compare our ontology-based entity extraction results with an alternative sequence labeling approach. We introduce a sequence labeling method for entity tagging that relies on syntactic feature extraction using a sliding window. Finally, we present our novel sentence-based event recognition approach, which includes three main steps: extraction of animal disease, species, location and date entities and confirmation-status n-grams; classification of event-related sentences into two categories, suspected or confirmed; and automated event tuple generation and aggregation. We show that our document relevance classification results, as well as our entity extraction and disease-related event recognition results, are significantly better than the results reported by other animal disease surveillance systems.
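
The sliding-window feature extraction mentioned in this abstract can be pictured with a small sketch. The window size, feature names and example sentence below are illustrative assumptions, not details taken from the thesis; a sequence labeler (for example a CRF or a per-token classifier) would be trained on feature dicts of this shape.

```python
# Minimal sketch: sliding-window feature extraction for entity tagging.
# Window size and feature names are illustrative assumptions.

def window_features(tokens, pos_tags, i, window=2):
    """Build a feature dict for token i from a +/-window context."""
    feats = {"word": tokens[i].lower(), "pos": pos_tags[i]}
    for offset in range(-window, window + 1):
        if offset == 0:
            continue
        j = i + offset
        if 0 <= j < len(tokens):
            feats[f"word[{offset:+d}]"] = tokens[j].lower()
            feats[f"pos[{offset:+d}]"] = pos_tags[j]
        else:
            feats[f"word[{offset:+d}]"] = "<PAD>"
    return feats

# Hypothetical sentence and POS tags; the tagger would assign each token an
# entity label such as disease, species, location, date, or other.
tokens = ["Avian", "influenza", "confirmed", "in", "Nigeria"]
pos = ["JJ", "NN", "VBN", "IN", "NNP"]
features = [window_features(tokens, pos, i) for i in range(len(tokens))]
print(features[1])
```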
2

High-Performance Knowledge-Based Entity Extraction

Middleton, Anthony M. 01 January 2009 (has links)
Human language records most of the information and knowledge produced by organizations and individuals. The machine-based process of analyzing information in natural language form is called natural language processing (NLP). Information extraction (IE) is the process of analyzing machine-readable text and identifying and collecting information about specified types of entities, events, and relationships. Named entity extraction is an area of IE concerned specifically with recognizing and classifying proper names for persons, organizations, and locations from natural language. Extant approaches to the design and implementation of named entity extraction systems include: (a) knowledge-engineering approaches, which utilize domain experts to hand-craft NLP rules to recognize and classify named entities; (b) supervised machine-learning approaches, in which a previously tagged corpus of named entities is used to train algorithms which incorporate statistical and probabilistic methods for NLP; or (c) hybrid approaches, which incorporate aspects of both methods described in (a) and (b). Performance for IE systems is evaluated using the metrics of precision and recall, which measure the accuracy and completeness of the IE task. Previous research has shown that utilizing a large knowledge base of known entities has the potential to improve overall entity extraction precision and recall performance. Although existing methods typically incorporate dictionary-based features, these dictionaries have been limited in size and scope. The problem addressed by this research was the design, implementation, and evaluation of a new high-performance knowledge-based hybrid processing approach and associated algorithms for named entity extraction, combining rule-based natural language parsing and memory-based machine learning classification facilitated by an extensive knowledge base of existing named entities. The hybrid approach implemented by this research resulted in precision and recall performance approaching human-level capability, improving on existing methods as measured on a standard test corpus. The system design incorporated a parallel processing system architecture with capabilities for managing a large knowledge base and providing high throughput potential for processing large collections of natural language text documents.
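
Precision and recall, the evaluation metrics named in this abstract, can be computed for an extraction run as in the sketch below. The (start, end, type) representation of an entity is an assumption made here for illustration, not a detail of the cited system.

```python
# Minimal sketch: precision and recall for an entity extraction run.
# Entities are represented as (start, end, type) tuples; this representation
# is an illustrative assumption.

def precision_recall(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

gold = {(0, 12, "PERSON"), (20, 31, "ORG"), (40, 48, "LOC")}
pred = {(0, 12, "PERSON"), (20, 31, "LOC"), (40, 48, "LOC")}
p, r = precision_recall(pred, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```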
3

Extraktion von semantischen Relationen aus natürlichsprachlichem Text mit Hilfe von maschinellem Lernen / Extraction of Semantic Relations from Natural-Language Text Using Machine Learning

Biemann, Christian 20 October 2017 (has links)
This thesis develops a learning procedure that automatically extracts semantic relations from large text corpora. The core of the procedure is the iteration of a search step and a verification step, in which words standing in the target relation are found and then checked. In this way, a small set of known seed words can be used to obtain a large number of words standing in the same relation, so that large lists of semantically related words can be built with little effort. After the algorithm is sketched, theoretical predictions are made about which relations the procedure is suited for, and its behaviour is modelled. Results obtained with an implementation of the procedure are presented, evaluated, and discussed for several semantic relations, and possible extensions and improvements are outlined. Finally, an application of the procedure is presented which, within the Projekt Deutscher Wortschatz, marks person names together with their associated job titles in newspaper articles.
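
The search-and-verify iteration described above resembles a pattern-based bootstrapping loop. The sketch below shows the general shape of such a loop under simplifying assumptions: the textual patterns, the frequency-based verification rule, and the toy corpus are illustrative stand-ins, not the procedure developed in the thesis.

```python
# Minimal bootstrapping sketch: grow a list of words in a target semantic
# relation by iterating a search step and a verification step.
import re

def search_step(corpus, seeds, patterns):
    """Find candidate words that co-occur with known seeds via textual patterns."""
    candidates = set()
    for sentence in corpus:
        for seed in seeds:
            for pattern in patterns:
                for match in re.finditer(pattern.format(seed=seed), sentence):
                    candidates.add(match.group(1).lower())
    return candidates - seeds

def verify_step(corpus, candidates, min_count=2):
    """Keep only candidates that occur often enough in the corpus."""
    return {c for c in candidates
            if sum(sentence.lower().count(c) for sentence in corpus) >= min_count}

corpus = [
    "Diseases such as rabies and anthrax are notifiable.",
    "Outbreaks of anthrax and rabies were reported.",
    "Diseases such as influenza spread quickly.",
]
seeds = {"rabies"}
patterns = [r"such as {seed} and (\w+)", r"of {seed} and (\w+)", r"of (\w+) and {seed}"]

for _ in range(3):  # a few search/verify iterations
    seeds |= verify_step(corpus, search_step(corpus, seeds, patterns))
print(seeds)  # the seed list grows to include corroborated candidates
```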
4

Using Concept Maps as a Tool for Cross-Language Relevance Determination

Richardson, W. Ryan 02 August 2007 (has links)
Concept maps, introduced by Novak, aid learners' understanding. I hypothesize that concept maps can also function as a summary of large documents, e.g., electronic theses and dissertations (ETDs). I have built a system that automatically generates concept maps from English-language ETDs in the computing field. The system also will provide Spanish translations of these concept maps for native Spanish speakers. Using machine translation techniques, my approach leads to concept maps that could allow researchers to discover pertinent dissertations in languages they cannot read, helping them to decide if they want a potentially relevant dissertation translated. I am using a state-of-the-art natural language processing system, called Relex, to extract noun phrases and noun-verb-noun relations from ETDs, and then produce concept maps automatically. I also have incorporated information from the table of contents of ETDs to create novel styles of concept maps. I have conducted five user studies to evaluate user perceptions of these different map styles. I am using several methods to translate node and link text in concept maps from English to Spanish. Nodes labeled with single words from a given technical area can be translated using wordlists, but phrases in specific technical fields can be difficult to translate. Thus I have amassed a collection of about 580 Spanish-language ETDs from Scirus and two Mexican universities, and I am using this corpus to mine phrase translations that I could not find otherwise. The usefulness of the automatically-generated and translated concept maps has been assessed in an experiment at Universidad de las Americas (UDLA) in Puebla, Mexico. This experiment demonstrated that concept maps can augment abstracts (translated using a standard machine translation package) in helping Spanish speaking users find ETDs of interest. / Ph. D.
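
The noun-verb-noun relations mentioned above map naturally onto concept-map nodes and labeled edges. The sketch below builds a small concept map from pre-extracted triples; it does not use Relex (the parser named in the abstract), and the hand-written triples and data structure are assumptions made for illustration.

```python
# Minimal sketch: build a concept map (nodes plus labeled edges) from
# noun-verb-noun triples. The triples here are hand-written examples;
# in the thesis they come from parses of ETD text.
from collections import defaultdict

def build_concept_map(triples):
    """Return nodes and an adjacency map {source: [(verb, target), ...]}."""
    nodes = set()
    edges = defaultdict(list)
    for subject, verb, obj in triples:
        nodes.update([subject, obj])
        edges[subject].append((verb, obj))
    return nodes, edges

triples = [
    ("concept map", "summarizes", "dissertation"),
    ("system", "generates", "concept map"),
    ("system", "translates", "concept map"),
]
nodes, edges = build_concept_map(triples)
for source, links in edges.items():
    for verb, target in links:
        print(f"{source} --{verb}--> {target}")
```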
5

Second-Level Recommendation System to Support News Editing

DEMETRIUS COSTA RAPELLO 10 April 2014 (has links)
Recommendation systems are widely used by major Web portals due to the increase in the volume of data available on the Web. Such systems are basically used to suggest information relevant to their users. This dissertation presents a second-level recommendation system that assists the team of journalists of a news Web portal in the process of recommending related news to the portal's users. The system is called second level because it creates recommendations for the journalists who, in turn, generate recommendations for the users. The system follows a model in which related news are recommended based on features extracted from the text of the original news item itself. The extracted features are used to build queries against a database of previously published news. The query result is a list of candidate news, sorted by similarity to the original item and by date of publication, which the news editor manually processes to generate the final list of related news.
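
A rough sketch of this pipeline is shown below, using TF-IDF cosine similarity as a stand-in for the feature-based queries described in the abstract. The use of scikit-learn, the field names, and the ranking rule are assumptions for illustration, not the method of the dissertation.

```python
# Minimal sketch: rank previously published news as candidate recommendations
# for an editor, by text similarity to the original article and recency.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archive = [
    {"id": 1, "date": "2014-03-01", "text": "Election results announced in the capital."},
    {"id": 2, "date": "2014-03-20", "text": "New election polls released ahead of the vote."},
    {"id": 3, "date": "2014-02-10", "text": "Football championship final ends in a draw."},
]
original = "Final election results confirmed after recount in the capital."

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([original] + [a["text"] for a in archive])
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Rank candidates by similarity, breaking ties by publication date (newest first);
# the editor then curates this list manually into the final related-news list.
ranked = sorted(zip(archive, scores), key=lambda x: (x[1], x[0]["date"]), reverse=True)
for article, score in ranked:
    print(article["id"], f"{score:.2f}", article["text"])
```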
6

WebKnox: Web Knowledge Extraction

Urbansky, David 21 August 2009 (has links) (PDF)
This thesis focuses on entity and fact extraction from the web. Different knowledge representations and techniques for information extraction are discussed before the design of a knowledge extraction system, called WebKnox, is introduced. The main contribution of this thesis is the trust ranking of extracted facts with a self-supervised learning loop, together with the extraction system and its composition of known and refined extraction algorithms. The techniques used show an improvement in precision and recall for most entity and fact extraction tasks compared to the chosen baseline approaches.
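
Trust ranking of extracted facts can be pictured as an iterative scoring loop in which sources and facts reinforce each other. The sketch below shows one simple way such a loop might look; it is a generic illustration under assumed data, not the actual WebKnox algorithm.

```python
# Minimal sketch: iterative trust scoring for extracted facts.
# A fact extracted by many trusted sources gains trust; a source whose facts
# are widely corroborated gains trust. Data and update rules are illustrative.

# fact -> set of sources that extracted it (hypothetical extraction results)
extractions = {
    ("Jim Carrey", "birthplace", "Newmarket"): {"siteA", "siteB", "siteC"},
    ("Jim Carrey", "birthplace", "Toronto"): {"siteD"},
    ("Jim Carrey", "birth year", "1962"): {"siteB", "siteD"},
}

sources = {s for srcs in extractions.values() for s in srcs}
source_trust = {s: 1.0 for s in sources}

for _ in range(10):  # iterate until the scores stabilize
    fact_trust = {fact: sum(source_trust[s] for s in srcs)
                  for fact, srcs in extractions.items()}
    top = max(fact_trust.values())
    fact_trust = {f: t / top for f, t in fact_trust.items()}  # normalize to [0, 1]
    # a source's trust is the average trust of the facts it extracted
    for s in sources:
        supported = [fact_trust[f] for f, srcs in extractions.items() if s in srcs]
        source_trust[s] = sum(supported) / len(supported)

for fact, trust in sorted(fact_trust.items(), key=lambda x: -x[1]):
    print(f"{trust:.2f}", fact)
```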
7

Automatic Extraction and Assessment of Entities from the Web

Urbansky, David 23 October 2012 (has links) (PDF)
The search for information about entities, such as people or movies, plays an increasingly important role on the Web. This information is still scattered across many Web pages, making it time-consuming for a user to find all relevant information about an entity. This thesis describes techniques to extract entities and information about these entities from the Web, such as facts, opinions, questions and answers, interactive multimedia objects, and events. The findings of this thesis are that it is possible to create a large knowledge base automatically using a manually-crafted ontology. The precision of the extracted information was found to be between 75 % and 90 % (for facts and entities, respectively) after using assessment algorithms. The algorithms from this thesis can be used to create such a knowledge base, which can be used in various research fields, such as question answering, named entity recognition, and information retrieval.
8

Dealing with unstructured data: A study about information quality and measurement

Vikholm, Oskar January 2015 (has links)
Many organizations have realized that the growing amount of unstructured text may contain information that can be used for different purposes, such as making decisions. By using so-called text mining tools, organizations can extract information from text documents. Within military and intelligence activities, for example, it is important to go through reports and look for entities such as names of people, events, and the relationships in-between them when criminal or other interesting activities are being investigated and mapped. This study explores how information quality can be measured and what challenges it involves. This is done on the basis of Wang and Strong's (1996) theory about how information quality can be measured. The theory is tested and discussed against empirical material consisting of interviews from two case organizations. The study observed two important aspects to take into consideration when measuring information quality: context dependency and source criticism. Context dependency means that the context in which information quality is measured must be defined based on the consumer's needs. Source criticism means that it is important to consider the original source of the information and how reliable it is. Further, data quality and information quality are often used interchangeably, which means that organizations need to decide what they really want to measure. One of the major challenges in developing software for entity extraction is that the system needs to understand the structure of natural language, which is very complicated.
9

Serviceorientiertes Text Mining am Beispiel von Entitätsextrahierenden Diensten / Service-Oriented Text Mining Using the Example of Entity-Extracting Services

Pfeifer, Katja 08 September 2014 (has links) (PDF)
Most business-relevant knowledge today exists as unstructured information in the form of text data on web pages, in office documents, or in forum posts. A large number of text mining solutions have been developed to extract and exploit this unstructured information. Many of these systems have recently been made accessible as web services in order to simplify their use and integration. Combining several such text mining services to solve concrete extraction tasks appears promising, since existing strengths can be exploited, weaknesses of the individual systems can be minimized, and the use of text mining solutions can be simplified. This thesis addresses the flexible combination of text mining services in a service-oriented system and extends the state of the art with dedicated methods for selecting text mining services, aggregating their results, and mapping between the classification schemes they employ. First, the currently existing service landscape is analyzed and, building on that analysis, an ontology for the functional description of the services is provided, enabling function-driven selection and combination of text mining services. Furthermore, using entity-extracting services as an example, algorithms for the quality-improving combination of extraction results are developed and extensively evaluated. The work is complemented by additional mapping and integration processes that ensure applicability in heterogeneous service landscapes in which different classification schemes are used. Finally, possibilities for transferring the approach to other text mining methods are discussed.
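
One simple form of quality-improving result aggregation across several extraction services is majority voting over the entity annotations they return. The sketch below illustrates that general idea; the voting rule and annotation format are assumptions, not the specific algorithms developed in the thesis.

```python
# Minimal sketch: aggregate entity annotations from several extraction services
# by majority voting over identical (text, type) annotations.
from collections import Counter

def aggregate(service_results, min_votes=2):
    """Keep annotations reported by at least min_votes services."""
    votes = Counter()
    for annotations in service_results:
        for annotation in set(annotations):  # each service votes once per annotation
            votes[annotation] += 1
    return {a for a, count in votes.items() if count >= min_votes}

service_a = [("Angela Merkel", "PERSON"), ("Berlin", "LOCATION")]
service_b = [("Angela Merkel", "PERSON"), ("Berlin", "ORGANIZATION")]
service_c = [("Angela Merkel", "PERSON"), ("Berlin", "LOCATION")]

print(aggregate([service_a, service_b, service_c]))
# {('Angela Merkel', 'PERSON'), ('Berlin', 'LOCATION')}
```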
