101

High-Performance Knowledge-Based Entity Extraction

Middleton, Anthony M. 01 January 2009
Human language records most of the information and knowledge produced by organizations and individuals. The machine-based process of analyzing information in natural language form is called natural language processing (NLP). Information extraction (IE) is the process of analyzing machine-readable text and identifying and collecting information about specified types of entities, events, and relationships. Named entity extraction is an area of IE concerned specifically with recognizing and classifying proper names for persons, organizations, and locations from natural language. Extant approaches to the design and implementation of named entity extraction systems include: (a) knowledge-engineering approaches, which utilize domain experts to hand-craft NLP rules to recognize and classify named entities; (b) supervised machine-learning approaches, in which a previously tagged corpus of named entities is used to train algorithms that incorporate statistical and probabilistic methods for NLP; or (c) hybrid approaches, which incorporate aspects of both (a) and (b). Performance of IE systems is evaluated using the metrics of precision and recall, which measure the accuracy and completeness of the IE task. Previous research has shown that utilizing a large knowledge base of known entities has the potential to improve overall entity extraction precision and recall. Although existing methods typically incorporate dictionary-based features, these dictionaries have been limited in size and scope. The problem addressed by this research was the design, implementation, and evaluation of a new high-performance knowledge-based hybrid processing approach and associated algorithms for named entity extraction, combining rule-based natural language parsing and memory-based machine-learning classification facilitated by an extensive knowledge base of existing named entities. The hybrid approach implemented by this research resulted in precision and recall approaching human-level capability compared to existing methods, measured using a standard test corpus. The system design incorporated a parallel processing architecture with capabilities for managing a large knowledge base and providing high throughput for processing large collections of natural-language text documents.
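The abstract evaluates extraction quality with precision and recall. As an editorial aside (not taken from the thesis), a minimal sketch of how these metrics are typically computed for an entity-extraction run, assuming gold and predicted entities are represented as (span, type) tuples:

```python
def precision_recall_f1(gold, predicted):
    """Score an entity-extraction run.

    `gold` and `predicted` are sets of (start, end, entity_type)
    tuples; an extraction counts as correct only on exact match.
    """
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(0, 6, "PERSON"), (10, 16, "ORG"), (20, 26, "LOC")}
pred = {(0, 6, "PERSON"), (10, 16, "PERSON"), (20, 26, "LOC")}
print(precision_recall_f1(gold, pred))  # each value is 2/3, about 0.667
```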
102

A Semi-Supervised Information Extraction Framework for Large Redundant Corpora

Normand, Eric 19 December 2008
The vast majority of text freely available on the Internet is not available in a form that computers can understand. There have been numerous approaches to automatically extract information from human-readable sources. The most successful attempts rely on vast training sets of data. Others have succeeded in extracting restricted subsets of the available information. These approaches have limited use and require domain knowledge to be coded into the application. The current thesis proposes a novel framework for Information Extraction. From large sets of documents, the system develops statistical models of the data the user wishes to query, generally avoiding the limitations and complexity of most Information Extraction systems. The framework uses a semi-supervised approach to minimize human input. It also eliminates the need for external Named Entity Recognition systems by relying on freely available databases. The final result is a query-answering system which extracts information from large corpora with a high degree of accuracy.
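The redundancy idea behind frameworks of this kind can be pictured with a toy sketch (mine, not the thesis's): when a corpus states the same fact many times, collecting candidate fills for an extraction pattern and ranking them by frequency tends to surface the correct answer.

```python
import re
from collections import Counter

def redundant_extract(corpus, pattern):
    """Collect candidate answers for one extraction pattern and
    rank them by frequency: in a large redundant corpus the
    correct fill tends to dominate the counts."""
    counts = Counter()
    for doc in corpus:
        for match in re.finditer(pattern, doc):
            counts[match.group(1)] += 1
    return counts.most_common()

corpus = [
    "Mozart was born in Salzburg in 1756.",
    "It is known that Mozart was born in Salzburg.",
    "Some sources claim Mozart was born in Vienna.",
]
print(redundant_extract(corpus, r"Mozart was born in (\w+)"))
# [('Salzburg', 2), ('Vienna', 1)]
```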
103

Extracting Dimensions of Interpersonal Interactions and Relationships

Rashid, Farzana 08 1900
People interact with each other through natural language to express feelings, thoughts, intentions, instructions, etc. These interactions, in turn, form relationships. Besides names of relationships such as siblings, spouse, or friends, a number of dimensions (e.g., cooperative vs. competitive, temporary vs. enduring, equal vs. hierarchical) can also be used to capture the underlying properties of interpersonal interactions and relationships. More fine-grained descriptors (e.g., angry, rude, nice, supportive) can also be used to indicate the reasons or social acts behind the dimension cooperative vs. competitive. The way people interact with others may also tell us about their personal traits, which in turn may be indicative of their probable future success. The work presented in this dissertation involves creating corpora with fine-grained descriptors of interactions and relationships. We also describe experiments and results indicating that the process of identifying the dimensions can be automated.
104

Extração de informação de artigos científicos: uma abordagem baseada em indução de regras de etiquetagem / Information extraction from scientific articles: an approach based on induction of tagging rules

Álvarez, Alberto Cáceres 08 May 2007
This dissertation is part of the project of a tool named FIP (Ferramenta Inteligente de Apoio à Pesquisa, an intelligent tool for research support). FIP is a tool for the retrieval, organization, and mining of large document collections. In the context of FIP, diverse techniques from Information Retrieval, Data Mining, Information Visualization, and, in particular, Information Extraction, the focus of this work, are used. Information Extraction systems deal with unstructured data, looking for specific information in a document or document collection and extracting and structuring it in order to facilitate its use. The specific objective of this dissertation is to automatically induce a set of rules for information extraction from scientific articles. The proposed extraction system initially analyzes and extracts information from the body of the articles (title, authors, affiliation, abstract, and keywords) and then extracts information from each entry in their bibliographical references. The proposed approach for extracting information from references is a new technique based on mapping the problem of information extraction onto the problem of part-of-speech tagging. As the outcome of the extraction process, a database with the extracted information, structured in XML format, is made available to FIP or any other application. The results were evaluated in terms of precision, recall, and F-measure, reaching good results compared with similar systems.
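The mapping onto tagging can be illustrated with a toy sketch (my own, not the dissertation's induced rules): each token of a bibliographic reference receives a field label, just as a part-of-speech tagger assigns word classes.

```python
import re

def tag_reference(reference):
    """Toy field tagger for a bibliographic reference: label each
    token the way a part-of-speech tagger labels words. A real
    system would induce these rules from annotated references."""
    tokens = reference.split()
    tags, state = [], "AUTHOR"
    for tok in tokens:
        if re.fullmatch(r"\(?(19|20)\d{2}\)?\.?,?", tok):
            tags.append((tok, "YEAR"))
            state = "TITLE"  # after the year, assume title tokens follow
        else:
            tags.append((tok, state))
    return tags

print(tag_reference("Smith, J. (2005). Learning to parse references."))
# [('Smith,', 'AUTHOR'), ('J.', 'AUTHOR'), ('(2005).', 'YEAR'),
#  ('Learning', 'TITLE'), ..., ('references.', 'TITLE')]
```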
105

Resolução de correferência nominal usando semântica em língua portuguesa / Nominal coreference resolution using semantics in Portuguese

Fonseca, Evandro Brasil 19 March 2018
The Coreference Resolution task is challenging for Natural Language Processing, considering the linguistic knowledge required and the sophistication of the language processing techniques involved. Even though it is a demanding task, a motivating factor in the study of this phenomenon is its usefulness: several Natural Language Processing tasks may benefit from its results, such as named entity recognition, relation extraction between named entities, summarization, and sentiment analysis, among others. Coreference Resolution is the process of identifying terms and expressions that refer to the same entity. For example, in the sentence "France is refusing. The country is one of the first in the ranking..." we can say that [the country] is a coreference of [France]. By grouping these referential terms, we form coreference groups, more commonly known as coreference chains. This thesis proposes a process for coreference resolution between noun phrases for Portuguese, focusing on the use of semantic knowledge. The proposed approach is based on syntactic-semantic linguistic rules; that is, we combine different levels of linguistic processing, using semantic relations as support, in order to infer referential relations between mentions. Models based on linguistic rules have been applied efficiently to other languages, such as English, Spanish, and Galician. In short, these models are more efficient than machine-learning approaches when dealing with less-resourced languages, since the lack of sample-rich corpora may result in poor training. The proposed approach is the first model for Portuguese coreference resolution that uses semantic knowledge, which we consider the main contribution of this thesis.
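A hypothetical illustration of a syntactic-semantic rule of this kind (not taken from the thesis; the hypernym table is invented): a definite noun phrase is linked to an earlier named entity when its head noun is a hypernym of that entity, as in "France ... the country".

```python
# Toy semantic lookup: in the thesis this role is played by a
# lexical-semantic resource; the entries here are made up.
HYPERNYMS = {
    "France": {"country", "nation", "state"},
    "PUCRS": {"university", "institution"},
}

def semantic_match(mentions):
    """Link a definite noun phrase ("the country") to the nearest
    previous named-entity mention whose hypernym set contains its
    head noun."""
    chains = []
    for i, mention in enumerate(mentions):
        if mention.lower().startswith("the "):
            head = mention.split()[-1].lower()
            for earlier in reversed(mentions[:i]):
                if head in HYPERNYMS.get(earlier, set()):
                    chains.append((earlier, mention))
                    break
    return chains

print(semantic_match(["France", "the country"]))
# [('France', 'the country')]
```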
106

Extração de informações de conferências em páginas web / Extraction of conference information from web pages

Garcia, Cássio Alan January 2017
Choosing the most suitable conference for submitting a paper is a task that depends on several factors: (i) the topic of the paper needs to be among the topics of interest of the conference; (ii) the submission deadline needs to be compatible with the time necessary for writing the paper; (iii) conference location and registration costs; and (iv) the quality or impact of the conference. These factors, allied to the existence of thousands of conferences, make the search for the right event very time-consuming, especially when researching in a new area. To help researchers find conferences, this work presents a method developed to retrieve and extract data from conference web sites. This is a challenging task, as each conference has its own web site with its own layout. We propose CONFTRACKER, which combines the identification of the URLs of conferences listed in the Qualis Table with the extraction of their deadlines. Information extraction is carried out independently of the conference, of the page's layout, and of how the dates are presented (formatting and labels). To evaluate the proposed method, we carried out experiments with real web data from Computer Science conferences. The results show that CONFTRACKER significantly outperformed a baseline method based on the position of labels and dates. Finally, the extraction process is run for all conferences in the Qualis Table, and the collected data populate a database that can be searched through an online interface.
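Layout-independent deadline extraction can be pictured with a small sketch (illustrative only, not CONFTRACKER's actual rules): scan the page text for date expressions in several common shapes and pair each one with the nearest preceding label.

```python
import re

# A few common date shapes; a real extractor would cover many more.
DATE_RE = re.compile(
    r"(\d{1,2}\s+\w+\s+\d{4}|\w+\s+\d{1,2},\s*\d{4}|\d{4}-\d{2}-\d{2})"
)

def find_deadlines(page_text):
    """Pair each date expression with the nearest label to its
    left, regardless of the page's layout."""
    deadlines = []
    for match in DATE_RE.finditer(page_text):
        label = page_text[:match.start()].strip().split("\n")[-1]
        deadlines.append((label.strip(": "), match.group()))
    return deadlines

page = "Abstract submission: 15 June 2017\nNotification: July 30, 2017"
print(find_deadlines(page))
# [('Abstract submission', '15 June 2017'),
#  ('Notification', 'July 30, 2017')]
```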
107

Découverte et réconciliation de données numeriques relatives aux personnes pour la gestion des ressources humaines / Digital Identity Discovery and Reconciliation for Human Resources Management

Ghufran, Mohammad 27 November 2017
Finding the appropriate individual to hire is a crucial task for any organization. With the number of applications increasing due to the introduction of online job portals, it is desirable to match applicants with job offers automatically. Existing approaches that match applicants with job offers take resumes as they are and do not attempt to complete the information on a resume by looking for more information on the Internet. The objective of this thesis is to fill this gap by discovering online resources pertinent to an applicant. To this end, a novel method for the extraction of key information from resumes is proposed. This is a challenging task, since resumes can have diverse structures and formats, and the entities present within them are ambiguous. Identifying Web results using the key information and reconciling them is another challenge. We propose an algorithm to generate queries and rank the results to obtain the most pertinent online resources. In addition, we specifically tackle the reconciliation of social network profiles through a method that is able to identify profiles of the same individual across different networks. Moreover, a method to resolve ambiguity in locations, or to predict a location when it is absent, is also presented. Experiments on real data sets are conducted for all the algorithms proposed in this thesis, and they show good results.
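A hypothetical sketch of the query-and-rank step (the field names and scoring are my own, not the thesis's): build search queries from extracted resume fields, from most to least specific, and score each returned resource by how many of those fields it mentions.

```python
def generate_queries(key_info):
    """Combine extracted resume fields into search queries,
    ordered from most to least specific."""
    name = key_info["name"]
    return [f'"{name}" {key_info["employer"]} {key_info["skill"]}',
            f'"{name}" {key_info["employer"]}',
            f'"{name}"']

def rank_results(results, key_info):
    """Score each candidate online resource by the number of
    resume fields its text mentions."""
    def score(text):
        return sum(v.lower() in text.lower() for v in key_info.values())
    return sorted(results, key=score, reverse=True)

resume = {"name": "Jane Roe", "employer": "Acme Corp", "skill": "Java"}
pages = ["Jane Roe, software engineer at Acme Corp (Java, Scala)",
         "Jane Roe's cooking blog"]
print(generate_queries(resume)[0])  # '"Jane Roe" Acme Corp Java'
print(rank_results(pages, resume)[0])  # the engineering profile wins
```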
108

Topic Segmentation and Medical Named Entities Recognition for Pictorially Visualizing Health Record Summary System

Ruan, Wei 03 April 2019
Medical Information Visualization makes optimized use of digitized medical-record data, e.g., the Electronic Medical Record. This thesis extends the Pictorial Information Visualization System (PIVS) developed by Yongji Jin (Jin, 2016) and Jiaren Suo (Suo, 2017), a graphical visualization system that picturizes a summary of the patient's medical history, depicting the patient's medical information so that patients and doctors can easily grasp past and present conditions. Previously, the summary information had to be entered into the interface manually, taken from clinical notes. This study proposes a methodology for automatically extracting medical information from patients' clinical notes using Natural Language Processing techniques in order to produce a medical history summary from past medical records. We develop a Named Entity Recognition system to extract information about medical imaging procedures (performance date, human body location, imaging results, and so on) and medications (medication names, frequencies, and quantities) by applying a conditional random field model with three main feature groups, among others: word-based, part-of-speech, and MetaMap semantic features. Adding MetaMap semantic features is a novel idea which raised accuracy compared to previous studies, and our evaluation shows that our model is more accurate than others on medication extraction as a case study. To further improve entity extraction accuracy, we also propose a Topic Segmentation methodology for clinical notes that detects boundaries from differences in the classification probabilities of subsequent sequences, which differs from traditional Topic Segmentation approaches such as TextTiling, TopicTiling, and the Beeferman statistical model. With Topic Segmentation combined with Named Entity Extraction, we observed higher accuracy for medication extraction than without segmentation. Finally, we present a prototype that integrates our information extraction system with PIVS by simply building a database of interface coordinates and terms for human body parts.
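Feature extraction for such a CRF tagger might look like the sketch below (my own assumptions; the semantic-type table is an invented stand-in for MetaMap lookups). The dict-of-features format shown is the one common to CRF toolkits such as sklearn-crfsuite.

```python
# Stand-in for a MetaMap lookup: maps a token to a coarse
# semantic type. The entries here are invented for illustration.
SEMANTIC_TYPE = {"aspirin": "pharmacologic_substance",
                 "mg": "unit", "daily": "temporal_concept"}

def token_features(tokens, pos_tags, i):
    """Word-based, part-of-speech, and semantic features for
    token i of a clinical-note sentence."""
    word = tokens[i]
    feats = {
        "word.lower": word.lower(),
        "word.isdigit": word.isdigit(),
        "pos": pos_tags[i],
        "semtype": SEMANTIC_TYPE.get(word.lower(), "none"),
    }
    if i > 0:
        feats["prev.word.lower"] = tokens[i - 1].lower()
    return feats

tokens = ["Aspirin", "81", "mg", "daily"]
pos = ["NNP", "CD", "NN", "RB"]
print(token_features(tokens, pos, 0))
# {'word.lower': 'aspirin', 'word.isdigit': False, 'pos': 'NNP',
#  'semtype': 'pharmacologic_substance'}
```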
109

Ontology Learning and Information Extraction for the Semantic Web

Kavalec, Martin January 2006
This work gives an overview of its three main topics: the Semantic Web, information extraction, and ontology learning. A method for identifying relevant information on web pages is described and experimentally tested on the pages of companies offering products and services. The method is based on the analysis of sample web pages and their position in the Open Directory catalogue. Furthermore, a modification of an association rule mining algorithm is proposed and experimentally tested: in addition to identifying a relation between ontology concepts, it suggests a possible naming of the relation.
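A toy version of the relation-naming idea (my reconstruction, not the thesis algorithm): mine sentences for co-occurrences of concept pairs and keep the word that most often connects them as a candidate relation label.

```python
from collections import Counter
from itertools import combinations

def suggest_relation_names(sentences, concepts):
    """For each pair of ontology concepts, count the connecting
    words between them; the most frequent one becomes a candidate
    name for the relation."""
    candidates = {}
    for subject, obj in combinations(concepts, 2):
        verbs = Counter()
        for tokens in sentences:
            if subject in tokens and obj in tokens:
                lo, hi = sorted((tokens.index(subject), tokens.index(obj)))
                # crude verb filter: third-person forms ending in "s"
                verbs.update(t for t in tokens[lo + 1:hi] if t.endswith("s"))
        if verbs:
            candidates[(subject, obj)] = verbs.most_common(1)[0][0]
    return candidates

sents = [["company", "offers", "product"],
         ["company", "sells", "product"],
         ["company", "offers", "product"]]
print(suggest_relation_names(sents, ["company", "product"]))
# {('company', 'product'): 'offers'}
```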
110

Toponym resolution in text

Leidner, Jochen Lothar January 2007
Background. In the area of Geographic Information Systems (GIS), a discipline shared between informatics and geography, the term geo-parsing describes the process of identifying names in text, which in computational linguistics is known as named entity recognition and classification (NERC). The term geo-coding is used for the task of mapping from implicitly geo-referenced datasets (such as structured address records) to explicitly geo-referenced representations (e.g., using latitude and longitude). However, present-day GIS systems provide no automatic geo-coding functionality for unstructured text. In Information Extraction (IE), processing of named entities in text has traditionally been seen as a two-step process comprising a flat text span recognition sub-task and an atomic classification sub-task; relating the text span to a model of the world has been ignored by evaluations such as MUC or ACE (Chinchor (1998); U.S. NIST (2003)). However, spatial and temporal expressions refer to events in space-time, and the grounding of events is a precondition for accurate reasoning. Thus, automatic grounding can improve many applications, such as automatic map drawing (e.g., for choosing a focus) and question answering (e.g., for questions like How far is London from Edinburgh?, given a story in which both occur and can be resolved). Whereas temporal grounding has received considerable attention in the recent past (Mani and Wilson (2000); Setzer (2001)), robust spatial grounding has long been neglected. Concentrating on geographic names for populated places, I define the task of automatic Toponym Resolution (TR) as computing the mapping from occurrences of names for places as found in a text to a representation of the extensional semantics of the location referred to (its referent), such as a geographic latitude/longitude footprint. The task of mapping from names to locations is hard due to insufficient and noisy databases and a large degree of ambiguity: common words need to be distinguished from proper names (geo/non-geo ambiguity), and the mapping between names and locations is ambiguous (London can refer to the capital of the UK, to London, Ontario, Canada, or to about forty other Londons on Earth). In addition, names of places and the boundaries referred to change over time, and databases are incomplete.

Objective. I investigate how referentially ambiguous spatial named entities can be grounded, or resolved, with respect to an extensional coordinate model robustly on open-domain news text. I begin by comparing the few algorithms proposed in the literature and, comparing semi-formal, reconstructed descriptions of them, I factor out a shared repertoire of linguistic heuristics (e.g., rules, patterns) and extra-linguistic knowledge sources (e.g., population sizes). I then investigate how to combine these sources of evidence to obtain a superior method. I also investigate the noise effect introduced by the named entity tagging step on which toponym resolution relies in a sequential system pipeline architecture.

Scope. In this thesis, I investigate a present-day snapshot of terrestrial geography as represented in the gazetteer defined and, accordingly, a collection of present-day news text. I limit the investigation to populated places; geo-coding of artifact names (e.g., airports or bridges) and compositional geographic descriptions (e.g., 40 miles SW of London, near Berlin), for instance, is not attempted. Historic change is a major factor affecting gazetteer construction and ultimately toponym resolution; however, this is beyond the scope of this thesis.

Method. While a small number of previous attempts have been made to solve the toponym resolution problem, these were either not evaluated, or evaluation was done by manual inspection of system output instead of curating a reusable reference corpus. Since the relevant literature is scattered across several disciplines (GIS, digital libraries, information retrieval, natural language processing) and descriptions of algorithms are mostly given in informal prose, I attempt to describe them systematically and aim at a reconstruction in a uniform, semi-formal pseudo-code notation for easier re-implementation. A systematic comparison leads to an inventory of heuristics and other sources of evidence. In order to carry out a comparative evaluation procedure, an evaluation resource is required; unfortunately, to date no gold standard has been curated in the research community. To this end, a reference gazetteer and an associated novel reference corpus with human-labeled referent annotation are created. These are subsequently used to benchmark a selection of the reconstructed algorithms and a novel re-combination of the heuristics catalogued in the inventory. I then compare the performance of the same TR algorithms under three different conditions, namely applying them to (i) the output of human named entity annotation, (ii) automatic annotation using an existing Maximum Entropy sequence tagging model, and (iii) a naïve toponym lookup procedure in a gazetteer.

Evaluation. The algorithms implemented in this thesis are evaluated in an intrinsic, or component, evaluation. To this end, we define a task-specific matching criterion to be used with traditional Precision (P) and Recall (R) evaluation metrics. This matching criterion is lenient with respect to numerical gazetteer imprecision in situations where one toponym instance is marked up with different gazetteer entries in the gold standard and the test set, respectively, but where these refer to the same candidate referent, caused by multiple near-duplicate entries in the reference gazetteer.

Main Contributions. The major contributions of this thesis are as follows:
• A new reference corpus in which instances of location named entities have been manually annotated with spatial grounding information for populated places, and an associated reference gazetteer from which the assigned candidate referents are chosen. This reference gazetteer provides numerical latitude/longitude coordinates (such as 51°32′ North, 0°5′ West) as well as hierarchical path descriptions (such as London > UK) with respect to a geographic taxonomy with worldwide coverage, constructed by combining several large but noisy gazetteers. The corpus contains news stories and comprises two sub-corpora: a subset of the REUTERS RCV1 news corpus used for the CoNLL shared task (Tjong Kim Sang and De Meulder (2003)) and a subset of the Fourth Message Understanding Contest (MUC-4; Chinchor (1995)), both available pre-annotated with a gold standard. This corpus will be made available as a reference evaluation resource;
• a new method and implemented system to resolve toponyms that is capable of robustly processing unseen text (open-domain online newswire text) and grounding toponym instances in an extensional model using longitude and latitude coordinates and hierarchical path descriptions, using internal (textual) and external (gazetteer) evidence;
• an empirical analysis of the relative utility of various heuristic biases and other sources of evidence with respect to the toponym resolution task when analysing free news-genre text;
• a comparison between a replicated method as described in the literature, which functions as a baseline, and a novel algorithm based on minimality heuristics; and
• several exemplary prototypical applications showing how the resulting toponym resolution methods can be used to create visual surrogates for news stories, a geographic exploration tool for news browsing, geographically-aware document retrieval, and answers to spatial questions (How far...?) in an open-domain question answering system. These applications have only demonstrative character, as a thorough quantitative, task-based (extrinsic) evaluation of the utility of automatic toponym resolution is beyond the scope of this thesis and left for future work.
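One of the heuristic biases the thesis inventories, preferring the most populous candidate referent, fits in a compact sketch (mine; the gazetteer entries and population figures are invented stand-ins):

```python
# Invented stand-in for a gazetteer: name -> candidate referents
# with coordinates and population. Figures are illustrative only.
GAZETTEER = {
    "London": [
        {"path": "London > UK", "lat": 51.51, "lon": -0.13,
         "population": 8_900_000},
        {"path": "London > Ontario > Canada", "lat": 42.98,
         "lon": -81.25, "population": 400_000},
    ],
}

def resolve_toponym(name, gazetteer):
    """'Largest population' bias: among a toponym's candidate
    referents, ground the mention in the most populous one."""
    candidates = gazetteer.get(name, [])
    if not candidates:
        return None
    return max(candidates, key=lambda c: c["population"])

print(resolve_toponym("London", GAZETTEER)["path"])  # London > UK
```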
