  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Robust relationship extraction in the biomedical domain

Thomas, Philippe 25 November 2015 (has links)
For several centuries, a great wealth of human knowledge has been communicated in natural language, often recorded in written documents. In the life sciences, an exponential increase in scientific articles has been observed, hindering the effective and fast reconciliation of previous findings into current research projects. This thesis studies the automatic extraction of relationships between named entities, focusing on increasing the robustness of relationship extraction. First, we evaluate the use of ensemble methods to improve performance, using data provided by the drug-drug-interaction challenge 2013. Ensemble methods aggregate several classifiers into one model, increasing robustness by reducing the risk of choosing an inappropriate single classifier. Second, this work discusses the problem of applying relationship extraction to documents with unknown text characteristics. Robustness of a text-mining component is assessed by cross-learning, where a model is evaluated on a corpus different from the training corpus. We apply self-training, a semi-supervised learning technique, to increase cross-learning performance, and show that it is more robust than a classifier trained on manually annotated text only. Third, we investigate the use of distant supervision to overcome the need for manually annotated training instances. Corpora derived by distant supervision are inherently noisy and thus benefit from robust relationship extraction methods. We compare two different methods and show that both approaches achieve performance similar to that of fully supervised classifiers evaluated in the cross-learning scenario. To facilitate the use of information extraction results, including those developed within this thesis, we develop the semantic search engine GeneView. We discuss the computational requirements for building this resource and present applications utilizing the data extracted by different text-mining components.
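The ensemble aggregation described in this abstract can be illustrated by a minimal majority-vote sketch. The classifier functions and the sentence below are hypothetical stand-ins for illustration only, not the systems submitted to the challenge:

```python
from collections import Counter

def majority_vote(classifiers, instance):
    """Aggregate several independent classifiers by majority vote.

    A single classifier may be badly suited to the data at hand;
    voting over several reduces the risk of that single bad choice.
    """
    votes = [clf(instance) for clf in classifiers]
    label, _ = Counter(votes).most_common(1)[0]
    return label

# Hypothetical stand-in classifiers for a binary interaction decision.
clf_a = lambda x: "interaction" if "inhibits" in x else "none"
clf_b = lambda x: "interaction" if "increases" in x or "inhibits" in x else "none"
clf_c = lambda x: "none"  # an overly conservative model

sentence = "Drug A inhibits the metabolism of drug B."
print(majority_vote([clf_a, clf_b, clf_c], sentence))  # interaction
```

Two of the three stand-in classifiers fire on the example, so the vote yields "interaction" even though one model abstains entirely.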
42

Extraction d'arguments de relations n-aires dans les textes guidée par une RTO de domaine / Extraction of arguments in N-ary relations in texts guided by a domain OTR

Berrahou, Soumia Lilia 29 September 2015 (has links)
Today, a huge amount of data is made available to the research community through several web-based libraries. Enhancing the data collected from scientific documents is a major challenge for analyzing and reusing domain knowledge efficiently. To be enhanced, data need to be extracted from documents and structured in a common representation using a controlled vocabulary, as in ontologies. Our research deals with the knowledge engineering of experimental data extracted from scientific articles, in order to reuse them in decision support systems. Experimental data can be represented by n-ary relations which link a studied object (e.g., food packaging, a transformation process) with its features (e.g., oxygen permeability in packaging, biomass grinding) and capitalized in an Ontological and Terminological Resource (OTR). An OTR associates an ontology with a terminological and/or linguistic part in order to establish a clear distinction between the term and the notion it denotes (the concept). Our work focuses on n-ary relation extraction from scientific documents in order to populate a domain OTR with new instances. Our contributions combine Natural Language Processing (NLP) and data mining approaches guided by the domain OTR. More precisely, we first focus on the extraction of units of measure, which are known to be difficult to identify because of their typographic variations.
We propose to rely on automatic classification of texts, using supervised learning methods, to reduce the search space of unit variants, and we then propose a new similarity measure that identifies them, taking their syntactic properties into account. Secondly, we adapt and combine data mining methods (sequential pattern and rule mining) and syntactic analysis in order to tackle the challenging task of identifying and extracting n-ary relation instances buried in unstructured texts.
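Matching typographic variants of a unit against a reference list can be sketched with a plain Levenshtein distance after normalization. This is an illustrative simplification, not the original edit measures proposed in the thesis; the unit list is hypothetical:

```python
def levenshtein(a, b):
    """Classic edit distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def normalize(unit):
    """Collapse common typographic variation before comparing."""
    return unit.lower().replace(" ", "").replace(".", "").replace("-", "")

def closest_unit(candidate, reference_units, max_dist=1):
    """Return the reference unit closest to a candidate variant, if close enough."""
    best = min(reference_units,
               key=lambda u: levenshtein(normalize(candidate), normalize(u)))
    return best if levenshtein(normalize(candidate), normalize(best)) <= max_dist else None

units = ["ml", "mol/l", "g/cm3"]
print(closest_unit("mL.", units))       # ml
print(closest_unit("g / cm-3", units))  # g/cm3
print(closest_unit("kg", units))        # None
```

Normalization alone resolves the two variants above; the distance threshold keeps genuinely different units (like "kg") from being forced onto the list.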
43

Concept-based and relation-based corpus navigation : applications of natural language processing in digital humanities / Navigation en corpus fondée sur les concepts et les relations : applications du traitement automatique des langues aux humanités numériques

Ruiz Fabo, Pablo 23 June 2017 (has links)
Social sciences and humanities research is often based on large textual corpora that would be unfeasible to read in detail. Natural Language Processing (NLP) can identify important concepts and actors mentioned in a corpus, as well as the relations between them. Such information can provide an overview of the corpus useful for domain experts and help identify corpus areas relevant to a given research question. To automatically annotate corpora relevant for Digital Humanities (DH), the NLP technologies we applied are, first, Entity Linking, to identify corpus actors and concepts; second, the relations between actors and concepts were determined with an NLP pipeline which provides semantic role labeling and syntactic dependencies, among other information. Part I outlines the state of the art, paying attention to how these technologies have been applied in DH. Generic NLP tools were used. As the efficacy of NLP methods depends on the corpus, some technological development was undertaken, described in Part II, in order to better adapt the methods to the corpora in our case studies. Part II also presents an intrinsic evaluation of the technology developed, with satisfactory results. The technologies were applied to three very different corpora, as described in Part III. First, the manuscripts of Jeremy Bentham, an 18th-19th century corpus in political philosophy. Second, the PoliInformatics corpus, with heterogeneous materials about the American financial crisis of 2007-2008. Finally, the Earth Negotiations Bulletin (ENB), which covers international climate summits since 1995, where treaties like the Kyoto Protocol or the Paris Agreement were negotiated. For each corpus, navigation interfaces were developed.
These user interfaces (UIs) combine networks, full-text search, and structured search based on NLP annotations. As an example, in the ENB corpus interface, which covers climate policy negotiations, searches can be performed on the basis of relational information identified in the corpus: the negotiation actors having discussed a given issue using verbs indicating support or opposition can be searched for, as well as all statements in which a given actor has expressed support or opposition. Relational information is thus employed, beyond simple co-occurrence between corpus terms. The UIs were evaluated qualitatively with domain experts to assess their potential usefulness for research in the experts' domains. First, we paid attention to whether the corpus representations we created correspond to the experts' knowledge of the corpus, as an indication of the sanity of the outputs we produced. Second, we tried to determine whether experts could gain new insight into the corpus by using the applications, e.g., whether they found evidence previously unknown to them or new research ideas. Examples of insight gain were attested with the ENB interface; this constitutes a good validation of the work carried out in the thesis. Overall, the applications' strengths and weaknesses were pointed out, outlining possible improvements as future work.
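The kind of structured search over relational annotations described for the ENB interface can be sketched as filtering (actor, stance, issue) triples. The schema and the data below are hypothetical stand-ins for whatever an NLP pipeline would actually produce:

```python
# Hypothetical relation annotations of the form (actor, stance, issue),
# as an NLP pipeline might extract from negotiation reports.
RELATIONS = [
    ("EU", "support", "emissions trading"),
    ("EU", "oppose", "border tariffs"),
    ("G77", "support", "adaptation fund"),
    ("G77", "oppose", "emissions trading"),
]

def search(relations, actor=None, stance=None, issue=None):
    """Structured search: any combination of the three fields may be constrained."""
    return [r for r in relations
            if (actor is None or r[0] == actor)
            and (stance is None or r[1] == stance)
            and (issue is None or r[2] == issue)]

# Who discussed emissions trading, and with what stance?
print(search(RELATIONS, issue="emissions trading"))
# All statements of support by the EU:
print(search(RELATIONS, actor="EU", stance="support"))
```

The point, as in the abstract, is that the relation type itself is queryable, which plain term co-occurrence cannot offer.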
44

Toward Robust Information Extraction Models for Multimedia Documents

Ebadat, Ali-Reza 17 October 2012 (has links) (PDF)
Over the last decade, enormous quantities of multimedia documents have been generated. It is therefore important to find ways of managing these data, notably from a semantic point of view, which requires fine-grained knowledge of their content. Two families of approaches exist for this: extracting information from the document itself (e.g., audio, image), or using textual data extracted from the document or from external sources (e.g., the Web). Our work falls into the second family; the information extracted from texts can then be used to annotate multimedia documents and facilitate their management. The objective of this thesis is thus to develop such information extraction models. Since the texts extracted from multimedia documents are generally short and noisy, this work also attends to their necessary robustness. We therefore favored simple techniques requiring little external knowledge as a guarantee of robustness, drawing on work in information retrieval and statistical text analysis. We concentrated on three tasks in particular: supervised extraction of relations between entities, relation discovery, and discovery of entity classes. For relation extraction, we propose a supervised approach based on language models and the k-nearest-neighbors learning algorithm. Experimental results show the effectiveness and robustness of our models, which outperform state-of-the-art systems while using linguistic information that is simpler to obtain. In the second task, we move to an unsupervised model in order to discover relations rather than extract predefined ones. We model this problem as a clustering task with a similarity function again based on language models.
Performance, evaluated on a corpus of soccer match videos, shows the benefit of our approach over classical models. Finally, in the last task, we focus no longer on relations but on entities, an essential source of information in documents. We propose an entity clustering technique to let semantic classes emerge among them without prior assumptions, adopting a new data representation that better accounts for each occurrence of the entities. In conclusion, we have shown experimentally that simple techniques, requiring little a priori knowledge and using easily accessible linguistic information, can suffice to extract precise information from text effectively. In our case, these good results are obtained by choosing a representation suited to the data, based on statistical analysis or information retrieval models. There is still a long way to go before multimedia documents can be processed directly, but we hope our proposals can serve as a springboard for future research in this field.
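The combination of a language-model-style text representation with k-nearest-neighbor voting can be sketched as follows. The bigram bag and the labeled sentences are illustrative simplifications, not the thesis's actual models or data:

```python
from collections import Counter
from math import sqrt

def ngrams(text, n=2):
    """Bag of word bigrams as a crude language-model representation of a sentence."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def cosine(c1, c2):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c1[k] * c2[k] for k in c1)
    norm = sqrt(sum(v * v for v in c1.values())) * sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0

def knn_classify(train, text, k=3):
    """Vote among the k training sentences most similar to `text`."""
    query = ngrams(text)
    ranked = sorted(train, key=lambda ex: cosine(ngrams(ex[0]), query), reverse=True)
    return Counter(label for _, label in ranked[:k]).most_common(1)[0][0]

# Hypothetical training sentences labeled with a relation type.
TRAIN = [
    ("X interacts with Y", "interaction"),
    ("X binds to Y", "interaction"),
    ("X was measured in Y", "none"),
    ("X is unrelated to Y", "none"),
]
print(knn_classify(TRAIN, "A interacts with B"))  # interaction
```

The same similarity function can serve both the supervised kNN setting and, as in the second task of the abstract, a clustering objective.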
45

Uncovering and Managing the Impact of Methodological Choices for the Computational Construction of Socio-Technical Networks from Texts

Diesner, Jana 01 September 2012 (has links)
This thesis is motivated by the need for scalable and reliable methods and technologies that support the construction of network data from information in text data. Ultimately, the resulting data can be used to answer substantive and graph-theoretical questions about socio-technical networks. One main limitation in constructing network data from text data is that validating the resulting network data can be hard or even infeasible, e.g., in the case of covert, historical, or large-scale networks. This thesis addresses this problem by identifying the impact that coding choices made when extracting network data from text data have on the structure of networks and on network analysis results. My findings suggest that conducting reference resolution on text data can alter the identity and weight of 76% of the nodes and 23% of the links, and can cause major changes in the value of commonly used network metrics. Also, performing reference resolution prior to relation extraction leads to the retrieval of completely different sets of key entities compared to not applying this pre-processing technique. Based on the outcome of the presented experiments, I recommend strategies for avoiding or mitigating the identified issues in practical applications. When extracting socio-technical networks from texts, the set of relevant node classes might go beyond the classes typically supported by tools for named entity extraction. I address this lack of technology by developing an entity extractor that combines an ontology for socio-technical networks (originating from the social sciences, theoretically grounded, and empirically validated in prior work) with a supervised machine learning technique based on probabilistic graphical models.
This thesis does not stop at showing that the resulting prediction models achieve state-of-the-art accuracy rates; I also describe the process of integrating these models into an existing, publicly available end-user product. As a result, users can conveniently apply these models to new text data. While a plethora of methods exists for building network data from information explicitly or implicitly contained in text data, there is a lack of research on how the resulting networks compare with respect to their structure and properties. This also applies to networks that can be extracted by using the aforementioned entity extractor as part of the relation extraction process. I address this knowledge gap by comparing the networks extracted with this process to network data built with three alternative methods: text coding based on thesauri that associate text terms with node classes, the construction of network data from metadata on texts (such as keywords and index terms), and building network data in collaboration with subject matter experts. The outcomes of these comparative analyses suggest that thesauri generated with the entity extractor developed for this thesis need adjustments with respect to particular categories and types of errors. I provide tools and strategies to assist with these refinements. My results also show that once these changes have been made, and in contrast to manually constructed thesauri, the prediction models generalize with acceptable accuracy to other domains (newswire data, scientific writing, emails) and writing styles (formal, casual). The comparisons of networks constructed with different methods show that ground-truth data built by subject matter experts are hardly resembled by any automated method that analyzes text bodies, and even less so by methods exploiting existing metadata from text corpora. Thus, attempts to reconstruct social networks from text data lead to largely incomplete networks.
Synthesizing the findings from this work, I outline which types of information on socio-technical networks are best captured by which network data construction method, and how best to combine these methods in order to gain a more comprehensive view of a network. When both text data and relational data are available as sources of information on a network, previous work has integrated these data by enhancing social networks with content nodes that represent salient terms from the text data. I present a methodological advancement of this technique and test its performance on the datasets used for the aforementioned evaluation studies. With this approach, multiple types of behavioral data, namely interactions between people as well as their language use, can be taken into account. I conclude that extracting content nodes from groups of structurally equivalent agents can be an appropriate strategy for enabling the comparison of the content that people produce, perceive, or disseminate. These equivalence classes can represent a variety of social roles and social positions that network members occupy. At the same time, extracting content nodes from groups of structurally coherent agents can be suitable for enhancing social networks with content nodes. The results of applying the latter approach to text data include a comparison of the outcome of topic modeling, an efficient unsupervised information extraction technique, with the outcomes of alternative methods, including entity extraction based on supervised machine learning. My findings suggest that key entities from metadata knowledge networks may serve as proper labels for unlabeled topics. Also, unsupervised and supervised learning lead to the retrieval of similar entities, as highly likely members of highly likely topics and as key nodes from text-based knowledge networks, respectively.
In summary, the contributions made in this thesis help people collect, manage, and analyze rich network data at any scale. This is a precondition for asking substantive and graph-theoretical questions, testing hypotheses, and advancing theories about networks. This thesis takes an interdisciplinary and computationally rigorous approach toward this goal, thereby advancing the intersection of network analysis, natural language processing, and computing.
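The finding that reference resolution changes node identities and network metrics can be made concrete with a small co-occurrence sketch. The documents and the alias table below are hypothetical; the point is only the mechanism, not the thesis's data or numbers:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_network(documents):
    """Build an undirected co-occurrence network; return each node's degree."""
    neighbors = defaultdict(set)
    for names in documents:
        for a, b in combinations(sorted(set(names)), 2):
            neighbors[a].add(b)
            neighbors[b].add(a)
    return {node: len(nbrs) for node, nbrs in neighbors.items()}

# Hypothetical documents in which entity mentions co-occur.
docs = [["Dr. Smith", "ACME Corp"], ["Smith", "J. Doe"], ["ACME Corp", "J. Doe"]]

# Without reference resolution, "Dr. Smith" and "Smith" are separate nodes.
unresolved = cooccurrence_network(docs)

# With a (hypothetical) alias table standing in for reference resolution,
# the merged node's identity, degree, and the overall metrics change.
ALIASES = {"Dr. Smith": "Smith"}
resolved = cooccurrence_network([[ALIASES.get(n, n) for n in names] for names in docs])

print(unresolved["Dr. Smith"], resolved["Smith"])  # 1 2
```

Merging two aliases here changes both the node set and a centrality-style metric (degree), which is exactly the kind of shift the thesis quantifies at corpus scale.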
46

Towards deep content extraction from specialized discourse : the case of verbal relations in patent claims

Ferraro, Gabriela 20 July 2012 (has links)
This thesis addresses the development of Natural Language Processing techniques for the extraction and generalization of compositional and functional relations from specialized written texts, in particular from patent claims. One of the most demanding tasks tackled in the thesis is, according to the state of the art, the semantic generalization of the linguistic denominations of relations between the object components and processes described in the texts. These denominations are usually verbal expressions or nominalizations that are too concrete to be used as standard labels in knowledge representation, for example, "A leads to B" and "C provokes D", where "leads to" and "provokes" both express, in abstract terms, a cause, so that in both cases "A CAUSE B" and "C CAUSE D" would be more appropriate. A semantic generalization of the relations allows us to achieve a higher degree of abstraction of the relationships between the objects and processes described in the claims, and reduces their number to a limited set oriented towards the relations commonly used in the generic field of knowledge representation.
47

Relation Classification using Semantically-Enhanced Syntactic Dependency Paths : Combining Semantic and Syntactic Dependencies for Relation Classification using Long Short-Term Memory Networks

Capshaw, Riley January 2018 (has links)
Many approaches to solving tasks in the field of Natural Language Processing (NLP) use syntactic dependency trees (SDTs) as a feature to represent the latent nonlinear structure within sentences. Recently, work in parsing sentences to graph-based structures which encode semantic relationships between words—called semantic dependency graphs (SDGs)—has gained interest. This thesis seeks to explore the use of SDGs in place of and alongside SDTs within a relation classification system based on long short-term memory (LSTM) neural networks. Two methods for handling the information in these graphs are presented and compared between two SDG formalisms. Three new relation extraction system architectures have been created based on these methods and are compared to a recent state-of-the-art LSTM-based system, showing comparable results when semantic dependencies are used to enhance syntactic dependencies, but with significantly fewer training parameters.
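Relation classifiers of the kind discussed here commonly take the path between the two entities through a dependency structure as their input sequence. A minimal sketch of extracting that path by breadth-first search follows; the hand-built edge list stands in for a parser's output and the sentence is invented:

```python
from collections import deque

def shortest_path(edges, start, goal):
    """BFS over an undirected view of (head, dependent) dependency edges."""
    adj = {}
    for head, dep in edges:
        adj.setdefault(head, set()).add(dep)
        adj.setdefault(dep, set()).add(head)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hand-built dependency edges (head, dependent) for the invented sentence
# "The burst caused by the explosion damaged the pipeline."
edges = [("damaged", "burst"), ("burst", "caused"), ("caused", "explosion"),
         ("damaged", "pipeline"), ("explosion", "the")]
print(shortest_path(edges, "burst", "pipeline"))  # ['burst', 'damaged', 'pipeline']
```

In a full system this token path (here over a syntactic tree, and analogously over a semantic dependency graph) would be fed to the LSTM rather than printed.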
48

Ontoilper: an ontology- and inductive logic programming-based method to extract instances of entities and relations from texts

Lima, Rinaldo José de, Freitas, Frederico Luiz Gonçalves de 31 January 2014 (has links)
Information Extraction (IE) is the task of discovering and structuring information found in a semi-structured or unstructured textual corpus. Named Entity Recognition (NER) and Relation Extraction (RE) are two important subtasks in IE. The former aims at finding named entities, including the names of people and locations, among others, whereas the latter consists in detecting and characterizing relations involving such named entities in text. Since manually creating extraction rules for performing NER and RE is a labor-intensive and time-consuming task, researchers have turned their attention to how machine learning techniques can be applied to IE in order to make IE systems more adaptive to domain changes. As a result, a myriad of state-of-the-art methods for NER and RE relying on statistical machine learning techniques have been proposed in the literature. Such systems typically use a propositional hypothesis space for representing examples, i.e., an attribute-value representation.
In machine learning, the propositional representation of examples has some limitations, particularly in the extraction of binary relations, which demands not only contextual and relational information about the instances involved, but also more expressive semantic resources as background knowledge. This thesis attempts to mitigate these limitations based on the hypothesis that, to be efficient and more adaptable to domain changes, an IE system should exploit ontologies and semantic resources in a framework that enables the automatic induction of extraction rules through machine learning. In this context, this thesis proposes a supervised method to extract both entity and relation instances from textual corpora based on Inductive Logic Programming (ILP), a symbolic machine learning technique. The proposed method, called OntoILPER, benefits not only from ontologies and semantic resources, but also relies on a highly expressive relational hypothesis space, in the form of logical predicates, for representing examples whose structure is relevant to the information extraction task. OntoILPER automatically induces symbolic extraction rules that subsume examples of entity and relation instances from a tailored graph-based model of sentence representation, another contribution of this thesis. Moreover, this graph-based model for representing sentences also enables the exploitation of domain ontologies and additional background knowledge in the form of a condensed set of features, including lexical, syntactic, semantic, and relational ones. Differently from most IE methods (a comprehensive survey is presented in this thesis, including methods that also apply ILP), OntoILPER takes advantage of a rich text pre-processing stage encompassing various shallow and deep natural language processing subtasks, including dependency parsing, coreference resolution, word sense disambiguation, and semantic role labeling.
Further mappings of nouns and verbs to (formal) semantic resources are also considered. The OntoILPER Framework, the implementation of OntoILPER, was experimentally evaluated on both NER and RE tasks. This thesis reports the results of several assessments conducted using six standard evaluation corpora from two distinct domains: news and biomedical. The obtained results demonstrate the effectiveness of OntoILPER on both NER and RE tasks. Indeed, the proposed framework outperforms some of the state-of-the-art IE systems it is compared with in this thesis. / The field of Information Extraction (IE) aims to discover and structure information found in semi-structured or unstructured documents. Named Entity Recognition (NER) and Relation Extraction (RE) are two important subtasks of IE. The former aims to find named entities, including names of people and places, among others, while the latter consists in detecting and characterizing relations involving the named entities present in the text. Since manually creating extraction rules for NER and RE is a laborious and costly task, researchers have turned their attention to how machine learning techniques can be applied to IE in order to make IE systems more adaptable to domain changes. As a result, many state-of-the-art NER and RE methods based on statistical machine learning techniques have been proposed in the literature. Such systems typically employ a hypothesis space of propositional expressiveness to represent examples, i.e., they are based on the traditional attribute-value representation. 
In machine learning, the propositional representation has some limiting factors, particularly in the extraction of binary relations, which require not only contextual and structural (relational) information about the instances, but also other ways of adding prior knowledge of the problem during the learning process. This thesis aims to mitigate the aforementioned limitations, under the working hypothesis that, to be efficient and more easily adaptable to domain changes, IE systems should exploit ontologies and semantic resources within an IE framework that allows the automatic induction of extraction rules through machine learning techniques. In this context, this thesis proposes a supervised method capable of extracting instances of entities (or ontology classes) and of relations from texts, relying on Inductive Logic Programming (ILP), a supervised machine learning technique capable of inducing symbolic classification rules. The proposed method, called OntoILPER, not only benefits from ontologies and semantic resources, but also relies on an expressive hypothesis space, in the form of logical predicates, capable of representing examples whose structure is relevant to the IE tasks considered in this thesis. OntoILPER automatically induces symbolic rules to classify examples of entity and relation instances from a graph-based sentence representation model. This representation model is one of the contributions of this thesis. Moreover, the graph-based model for representing sentences and examples (instances of classes and relations) favors the integration of prior knowledge of the problem in the form of a reduced set of lexical, syntactic, semantic, and structural features. 
Unlike most IE methods (a comprehensive survey is presented in this thesis, including those that also apply ILP), OntoILPER makes use of several Natural Language Processing subtasks.
49

Prerequisites for Extracting Entity Relations from Swedish Texts

Lenas, Erik January 2020 (has links)
Natural language processing (NLP) is a vibrant area of research with many practical applications today, like sentiment analysis, text labeling, question answering, machine translation and automatic text summarization. At the moment, research is mainly focused on the English language, although many other languages are trying to catch up. This work focuses on an area within NLP called information extraction, and more specifically on relation extraction, that is, to extract relations between entities in a text. What this work aims at is to use machine learning techniques to build a Swedish language processing pipeline with part-of-speech tagging, dependency parsing, named entity recognition and coreference resolution to use as a base for later relation extraction from archival texts. The obvious difficulty lies in the scarcity of Swedish annotated datasets. For example, no large enough Swedish dataset for coreference resolution exists today. An important part of this work, therefore, is to create a Swedish coreference solver using distantly supervised machine learning, which means creating a Swedish dataset by applying an English coreference solver on an unannotated bilingual corpus, and then using a word-aligner to translate this machine-annotated English dataset to a Swedish dataset, and then training a Swedish model on this dataset. Using AllenNLP's end-to-end coreference resolution model, both for creating the Swedish dataset and training the Swedish model, this work achieves an F1-score of 0.5. For named entity recognition this work uses the Swedish BERT models released by the Royal Library of Sweden in February 2020 and achieves an overall F1-score of 0.95. To put all of these NLP models within a single language processing pipeline, spaCy is used as a unifying framework. 
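The annotation-projection step described above (running an English coreference solver over a parallel corpus and transferring its spans through a word alignment) can be sketched roughly as follows. The function name, span format, and alignment format are assumptions for illustration, not the thesis's actual code:

```python
def project_spans(en_spans, alignment):
    """Project English token spans onto target-language tokens via a
    word alignment given as (en_index, sv_index) pairs.

    en_spans: list of (start, end) inclusive English token spans.
    Returns the corresponding Swedish spans; a span with no aligned
    target tokens becomes None (unaligned mentions cannot be
    transferred and are dropped from the projected dataset).
    """
    align = {}
    for en_i, sv_i in alignment:
        align.setdefault(en_i, []).append(sv_i)
    projected = []
    for start, end in en_spans:
        targets = [t for i in range(start, end + 1) for t in align.get(i, [])]
        projected.append((min(targets), max(targets)) if targets else None)
    return projected

# "the old house" (English tokens 0-2) aligned to Swedish tokens 0-2;
# a second mention at token 4 has no alignment and is dropped.
alignment = [(0, 0), (1, 1), (2, 2), (3, 3)]
spans = project_spans([(0, 2), (4, 4)], alignment)
```

In practice the alignment would come from a statistical word-aligner run on the bilingual corpus, and the projected spans would form the distantly supervised Swedish training data; the lossy cases (None) are one source of the noise that distant supervision inevitably introduces.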
/ Natural Language Processing (NLP) is a large and active research field today with many practical applications such as sentiment analysis, text categorization, machine translation and automatic text summarization. Research is currently mostly focused on the English language, but many other language communities are trying to catch up. This work focuses on an area within NLP called information extraction, and more specifically relation extraction, that is, extracting relations between named entities in a text. What this work attempts to do is to use various machine learning techniques to create a Swedish language processing pipeline consisting of part-of-speech tagging, dependency parsing, named entity recognition and coreference resolution. This pipeline is then intended to be used as a base for later relation extraction from Swedish archival material. The obvious difficulty lies in the scarcity of large, annotated Swedish datasets. For example, there is no sufficiently large Swedish dataset for coreference resolution. A large part of this work therefore consists of creating a Swedish coreference solver by implementing distantly supervised machine learning, meaning that an English coreference solver is applied to an unannotated English-Swedish corpus, a word-aligner is then used to translate this machine-annotated English dataset into a Swedish one, and a Swedish coreference solver is finally trained on this dataset. This work uses AllenNLP's end-to-end coreference solver, both to create the Swedish dataset and to train the Swedish model, and achieves an F1-score of 0.5. For named entity recognition, this work uses the Royal Library of Sweden's BERT models as a base, and thereby achieves an F1-score of 0.95. spaCy is used as a unifying framework to gather all these NLP components within a single pipeline.
