211 |
Natural language processing techniques for the purpose of sentinel event information extraction. Barrett, Neil (23 November 2012)
An approach to biomedical language processing is to apply existing natural language processing (NLP) solutions to biomedical texts. These existing solutions are often less successful in the biomedical domain than in the domains they were developed for (e.g., newspaper text). Biomedical NLP is likely best served by methods, information and tools that account for its particular challenges. In this thesis, I describe an NLP system specifically engineered for sentinel event extraction from clinical documents. The system's design accounts for several biomedical NLP challenges. The specific contributions are as follows.
- Biomedical tokenizers differ from one another, lack consensus on output tokens and are difficult to extend. I developed an extensible tokenizer, providing a tokenizer design pattern and implementation guidelines. In evaluation, it performed on par with a leading biomedical tokenizer (MedPost).
- Biomedical part-of-speech (POS) taggers are often trained on non-biomedical corpora and then applied to biomedical corpora, which lowers tagging accuracy. I built a token-centric POS tagger, TcT, that is more accurate than three existing POS taggers (mxpost, TnT and Brill) when trained on a non-biomedical corpus and evaluated on biomedical corpora. TcT achieves this increase in tagging accuracy by ignoring previously assigned POS tags and restricting its scope to the current, previous and following tokens (see the sketch after this list).
- Two parsers, MST and Malt, had previously been evaluated on perfect POS tag input. Given that perfect input is unlikely in biomedical NLP tasks, I evaluated the two parsers on imperfect POS tag input and compared their results. MST was the more affected by imperfectly tagged biomedical text; I attributed its drop in performance to verbs and adjectives, where MST had more potential for performance loss than Malt. I attributed Malt's resilience to POS tagging errors to its rich feature set and the local scope of its decision making.
- Previous automated clinical coding (ACC) research focuses on mapping narrative phrases to terminological descriptions (e.g., concept descriptions). These methods make little or no use of the additional semantic information available through the terminology's topology. I developed a token-based ACC approach that encodes tokens and manipulates token-level encodings by mapping linguistic structures to topological operations in SNOMED CT. My ACC method recalled most concepts given their descriptions and performed significantly better than MetaMap.
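To make the window restriction concrete, here is a minimal sketch in the spirit of TcT. It is not the thesis's implementation: the class name, back-off scheme and boundary markers are invented for illustration. The key property is that each decision consults only the raw previous, current and following tokens, never a previously predicted tag.

```python
from collections import Counter, defaultdict

class WindowTagger:
    """Tags tokens from a fixed window of raw tokens only; no
    previously predicted tag is ever consulted."""

    def __init__(self, default_tag="NN"):
        self.context_tags = defaultdict(Counter)  # (prev, cur, next) -> tag counts
        self.token_tags = defaultdict(Counter)    # cur -> tag counts (back-off)
        self.default_tag = default_tag

    def train(self, tagged_sentences):
        for sent in tagged_sentences:
            tokens = [tok for tok, _ in sent]
            for i, (tok, tag) in enumerate(sent):
                prev_tok = tokens[i - 1] if i > 0 else "<S>"
                next_tok = tokens[i + 1] if i + 1 < len(tokens) else "</S>"
                self.context_tags[(prev_tok, tok, next_tok)][tag] += 1
                self.token_tags[tok][tag] += 1

    def tag(self, tokens):
        tagged = []
        for i, tok in enumerate(tokens):
            prev_tok = tokens[i - 1] if i > 0 else "<S>"
            next_tok = tokens[i + 1] if i + 1 < len(tokens) else "</S>"
            ctx = self.context_tags.get((prev_tok, tok, next_tok))
            if ctx:                       # exact window seen in training
                tag = ctx.most_common(1)[0][0]
            elif tok in self.token_tags:  # back off to the token alone
                tag = self.token_tags[tok].most_common(1)[0][0]
            else:
                tag = self.default_tag
            tagged.append((tok, tag))
        return tagged

tagger = WindowTagger()
tagger.train([[("the", "DT"), ("wound", "NN"), ("healed", "VBD")]])
print(tagger.tag(["the", "wound", "healed"]))
```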
I extended these contributions for the purpose of sentinel event extraction from clinical letters. The extensions account for negation in text, use medication brand names during ACC and model (coarse) temporal information. My software system's performance is comparable to state-of-the-art results. Taken together, this thesis is a blueprint for building a biomedical NLP system, and its contributions likely apply to NLP systems in general.
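As an illustration of the negation extension, below is a crude NegEx-style check. The trigger list and window size are invented simplifications; the actual NegEx algorithm (Chapman et al.) and the thesis's negation handling are considerably richer.

```python
import re

# Hypothetical trigger list; real trigger sets are much larger and
# distinguish pre- and post-negation scopes.
NEGATION_TRIGGERS = ["no", "denies", "without", "not", "negative for"]

def is_negated(sentence, concept):
    """Flag a concept as negated if a trigger occurs within a small
    window of tokens before its mention. A crude sketch only."""
    tokens = sentence.lower().split()
    concept_tokens = concept.lower().split()
    for i in range(len(tokens) - len(concept_tokens) + 1):
        if tokens[i:i + len(concept_tokens)] == concept_tokens:
            window = " ".join(tokens[max(0, i - 5):i])
            if any(trig in window for trig in NEGATION_TRIGGERS):
                return True
    return False

print(is_negated("Patient denies chest pain on exertion.", "chest pain"))  # True
```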
|
212 |
RDF Data Interlinking: Evaluation of Cross-lingual Methods. Lesnikova, Tatiana (4 May 2016)
The Semantic Web extends the Web by publishing structured and interlinked data using RDF. An RDF data set is a graph whose resources are nodes labelled in natural languages. One of the key challenges of linked data is discovering links across RDF data sets: given two data sets, equivalent resources should be identified and linked by owl:sameAs links. The problem is particularly difficult when resources are described in different natural languages. This thesis investigates the effectiveness of linguistic resources for interlinking RDF data sets. For this purpose, we introduce a general framework in which each RDF resource is represented as a virtual document containing the text of its neighboring nodes; the labels of those neighbors constitute the resource's context. Once virtual documents are created, they are projected into the same space in order to be compared, either through machine translation or through multilingual lexical resources. Similarity measures are then applied to find identical resources, and the similarity between documents is taken as the similarity between the RDF resources they represent. We experimentally evaluated different cross-lingual methods for linking RDF data within this framework, exploring two strategies in particular: applying machine translation and using references to multilingual resources. Overall, the evaluation shows the effectiveness of cross-lingual string-based approaches for linking RDF resources expressed in different languages. The methods were evaluated on resources in English, Chinese, French and German. The best performance (over 0.90 F-measure) was obtained by the machine translation approach. The results also show that the similarity-based method can be applied successfully to RDF resources independently of their type (named entities or thesaurus concepts), and the best results involving just a pair of languages demonstrate the usefulness of such techniques for interlinking RDF resources cross-lingually.
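A toy sketch of the virtual-document idea, assuming neighbor labels have already been brought into a single language (as the machine translation strategy would do); resource names and labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each RDF resource is represented by the concatenated labels of its
# neighboring nodes. In a real pipeline the documents would come from
# two different data sets, and high-similarity pairs would be emitted
# as owl:sameAs candidates.
virtual_docs = {
    "ex:Paris_fr": "Paris capital France city Seine Europe",
    "ex:Paris_zh": "Paris France capital city Europe river Seine",
    "ex:Lyon_fr":  "Lyon France city Rhone Saone",
}

names = list(virtual_docs)
matrix = TfidfVectorizer().fit_transform(virtual_docs.values())
sims = cosine_similarity(matrix)

print(f"sim({names[0]}, {names[1]}) = {sims[0, 1]:.2f}")  # high -> candidate link
print(f"sim({names[0]}, {names[2]}) = {sims[0, 2]:.2f}")  # lower
```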
|
213 |
ELiTe-[FLE]2: A CALL Environment Based on Text Linguistics for the Linguistic Training of Future FFL Teachers in Colombia. Molina Mejia, Jorge Mauricio (6 November 2015)
This thesis presents a computer-based environment supporting the training of future FFL (French as a Foreign Language) teachers at Colombian universities. It is grounded in text linguistics and aims to improve the linguistic proficiency of university students currently in training. To do so, the environment builds on a textual corpus specifically annotated and labelled using natural language processing (NLP) tools together with manual annotations in XML format. This supports the development of activities with a formative aim, taking into account the needs expressed by the target users (teacher trainers and their student trainees). As explained throughout this thesis, building such a system draws on knowledge and skills from several disciplines and fields: language didactics, educational engineering, general linguistics, text linguistics, corpus linguistics, NLP and CALL. The ambition is to provide trainees and trainers in Colombian higher education with a tool designed around their needs and learning objectives. The originality of the system lies in the choice of target users, the didactic training model implemented and the specificity of the corpus annotated for the activities. To our knowledge, it is one of the first CALL systems based on text linguistics that specifically targets the training of future FFL teachers in a non-native context.
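By way of illustration, the snippet below reads a hypothetical XML annotation of text-linguistic phenomena (a discourse connective and an anaphor). The tag and attribute names are invented; the thesis's actual annotation schema is not reproduced here.

```python
import xml.etree.ElementTree as ET

# Hypothetical annotation sample; element and attribute names are
# invented for illustration only.
sample = """
<text>
  <sentence id="s1">
    <connective type="opposition">Cependant</connective>,
    <anaphor antecedent="s0">cette approche</anaphor> pose probleme.
  </sentence>
</text>
"""

root = ET.fromstring(sample)
for sent in root.iter("sentence"):
    for conn in sent.iter("connective"):
        print(sent.get("id"), "connective:", conn.text, "/", conn.get("type"))
    for ana in sent.iter("anaphor"):
        print(sent.get("id"), "anaphor:", ana.text, "->", ana.get("antecedent"))
```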
|
214 |
Computational Analyses of Scientific Publications Using Raw and Manually Curated Data with Applications to Text Visualization. Shokat, Imran (January 2018)
Text visualization is a field dedicated to the visual representation of textual data using computer technology. A large number of visualization techniques are available, and it is becoming harder for researchers and practitioners to choose an optimal technique for a particular task. To address this problem, the ISOVIS Group developed an interactive survey browser for text visualization techniques: ISOVIS researchers gathered papers that describe text visualization techniques or tools and categorized them according to a taxonomy, manually assigning several categories to each technique. In this thesis, we analyze the dataset behind this browser. We carried out several analyses to find temporal trends and correlations among the categories, and compared the manually assigned categories with a computational approach. Our results show that some categories have grown in popularity while others have declined. Cases of positive and negative correlation between categories were found and analyzed. Comparisons between the manually labeled dataset and the results of computational text analyses were presented to the experts, with an opportunity to refine the dataset. Although the data analyzed in this project is specific to the text visualization field, the methods used can be generalized to other datasets of scientific literature surveys or, more generally, other manually curated collections of textual documents.
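The following sketch shows the kind of trend and correlation analysis described, on an invented stand-in for the browser dataset; all column names and values are hypothetical.

```python
import pandas as pd

# Toy stand-in: one row per technique, with publication year and
# binary category flags.
df = pd.DataFrame({
    "year":        [2005, 2008, 2011, 2014, 2017, 2017],
    "word_cloud":  [1, 1, 0, 0, 0, 0],
    "time_series": [0, 0, 1, 1, 1, 0],
    "interactive": [0, 1, 1, 1, 1, 1],
})

# Temporal trend: share of techniques per category in each period.
periods = pd.cut(df["year"], bins=[2004, 2010, 2018])
trend = df.groupby(periods, observed=True).mean()
print(trend[["word_cloud", "interactive"]])

# Pairwise correlation between binary category flags (Pearson
# correlation of 0/1 columns is the phi coefficient).
print(df.drop(columns="year").corr())
```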
|
215 |
Predicative Analysis for Information Extraction: Application to the Biology Domain. Ratkovic, Zorana (11 December 2014)
The abundance of biomedical information expressed in natural language has created a need for methods that process this information automatically. Within Natural Language Processing (NLP), Information Extraction (IE) focuses on extracting relevant information from unstructured natural language data. A great deal of IE methods today rely on Machine Learning (ML) approaches with deep linguistic processing in order to capture the complex information contained in biomedical texts. In particular, syntactic analysis and parsing play an important role in IE by capturing how the words in a sentence are related. This thesis examines how dependency parsing can be used to facilitate IE. It takes a task-based approach to dependency parser evaluation and selection, including a detailed error analysis. To achieve high-quality syntax-based IE, different stages of linguistic processing are addressed, including pre-processing steps (such as tokenization) and complementary linguistic processing (such as semantics and coreference analysis). The thesis also explores how the different levels of linguistic processing can be represented for use within an ML-based IE algorithm, and how the interface between the two is of great importance. Finally, biomedical data is very heterogeneous, encompassing different subdomains and genres; the thesis explores how subdomain adaptation can be achieved using existing subdomain knowledge and resources. The methods and approaches described are evaluated on two different biomedical corpora, demonstrating how the IE results are used in real-life tasks.
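To make the role of dependency parsing concrete, here is a minimal subject-verb-object extraction sketch. spaCy and its general-purpose English model merely stand in for the biomedical parsers evaluated in the thesis.

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def svo_triples(text):
    """Extract (subject, verb, object) triples from a dependency parse."""
    doc = nlp(text)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children
                        if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children
                       if c.dep_ in ("dobj", "obj")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, token.lemma_, o.text))
    return triples

print(svo_triples("IL-2 activates T cells and inhibits apoptosis."))
```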
|
216 |
Confronting the problem of translation divergences in a machine translation system: an exploratory exercise. Oliveira, Mirna Fernanda de (January 2006)
Advisor: Bento Carlos Dias da Silva. Committee: Beatriz Nunes de Oliveira Longo, Dirce Charara Monteiro, Gladis Maria de Barcellos Almeida, Heronides Maurílio de Melo Moura. Abstract: This dissertation develops an exploratory linguistic-computational study of a specific problem that machine translation systems must face: translation divergences, whether syntactic or lexical-semantic, that arise between sentence pairs in different natural languages. The work is grounded in the interdisciplinary NLP (Natural Language Processing) research methodology of Dias-da-Silva (1996, 1998, 2003) and in the linguistic-computational theory behind UNITRAN, a machine translation system developed by Dorr (1993), which in turn builds on Chomsky's (1981) syntactic theory of Government and Binding and Jackendoff's (1990) semantic theory of Conceptual Structures. As a contribution to NLP, the dissertation describes the machinery of UNITRAN, designed to deal with part of the translation divergence problem, and illustrates the possibility of including Brazilian Portuguese in the system by examining several kinds of divergences found between English and Brazilian Portuguese sentences.
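For illustration, a few classic divergence types between English and Portuguese can be tabulated as follows. The glosses are textbook-style examples in the spirit of Dorr's typology, not output of UNITRAN:

```python
# Illustrative catalogue of translation divergences; examples are
# invented glosses, not drawn from the dissertation's data.
DIVERGENCES = {
    "categorial": (
        "I am hungry", "Estou com fome",
        "adjective 'hungry' realized as the noun 'fome'",
    ),
    "structural": (
        "John entered the house", "João entrou na casa",
        "bare object realized as a prepositional phrase",
    ),
    "conflational": (
        "He kicked the door", "Ele deu um pontapé na porta",
        "single verb realized as light verb + noun",
    ),
}

for kind, (en, pt, note) in DIVERGENCES.items():
    print(f"{kind:12s} | {en!r} -> {pt!r}  ({note})")
```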
|
217 |
Predicting and Estimating Execution Time of Manual Test Cases: A Case Study in the Railway Domain. Ameerjan, Sharvathul Hasan (January 2017)
Testing plays a vital role in the software development life cycle by verifying and validating the software's quality. Since software testing is an expensive activity constrained by budget and resources, knowing the execution time of test cases is necessary for efficient planning of test-related activities such as test scheduling, prioritizing test cases and monitoring test progress. This thesis proposes an approach to predict and estimate the execution time of manual test cases written in English natural language. The method uses test specifications and historical data from previously executed test cases. It works by obtaining timing information from every step of previously executed test cases; the collected data is used to estimate the execution time of non-executed test cases by mapping them through the text of their test specifications. Using natural language processing, text is extracted from the test specification document and mapped to the recorded timing information. After estimating times from this mapping, linear regression is used to predict the execution time of non-executed test cases. A case study was conducted at Bombardier Transportation (BT), where the proposed method was implemented and validated. The results show that the predicted execution times of the studied test cases are close to their actual execution times.
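A minimal sketch of the described pipeline, with invented test steps and timings standing in for the BT data; the tf-idf text representation is an assumption of this sketch, not necessarily the thesis's mapping:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Toy historical data: test-step texts paired with measured execution
# times (minutes) from previously executed test cases.
steps = [
    "apply brake and verify brake pressure indication",
    "power on the train control unit and check status lamp",
    "verify door closing time against requirement",
    "apply brake and check warning message on display",
]
minutes = [4.0, 7.5, 3.0, 4.5]

# Map step text to time via tf-idf features + linear regression.
model = make_pipeline(TfidfVectorizer(), LinearRegression())
model.fit(steps, minutes)

new_step = ["apply brake and verify warning indication"]
print(f"predicted execution time: {model.predict(new_step)[0]:.1f} min")
```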
|
218 |
Detection of Deceptive Reviews: Using Classification and Natural Language Processing Features. Fernquist, Johan (January 2016)
With the great growth of open forums online where anyone can give their opinion on anything, the Internet has become a place where people try to mislead others. Assuming that there is a correlation between a deceptive text's purpose and the way the text is written, our goal in this thesis was to develop a model for detecting such fake texts by exploiting this correlation. Our approach uses classification together with three different feature types: term frequency-inverse document frequency, word2vec and probabilistic context-free grammar. We developed a model that improves on all results known to us for two different datasets. Using machine translation, we found that it is possible to hide the stylometric footprint and characteristics of deceptive texts, making it possible to slightly decrease a classifier's accuracy while still conveying a message. Finally, we investigated whether it was possible to train and test our model on data from different sources, and achieved an accuracy hardly better than chance. This indicates that the resulting model is not versatile enough to be used on kinds of deceptive texts other than those it was trained on.
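A minimal sketch of the tf-idf branch of such a classifier; the word2vec and PCFG feature types are omitted, and the reviews and labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; real experiments would use labeled corpora of
# deceptive and truthful reviews.
reviews = [
    "Absolutely wonderful hotel, best stay of my life, amazing staff!",
    "The room was clean and the location convenient, breakfast was average.",
    "Worst hotel ever, total scam, avoid at all costs, horrible horrible!",
    "Check-in took ten minutes; the view from room 412 was pleasant.",
]
labels = ["deceptive", "truthful", "deceptive", "truthful"]

# Unigram + bigram tf-idf features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["Amazing amazing stay, best ever, wonderful!!!"]))
```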
|
219 |
Semantic Analysis of Natural Language and Definite Clause Grammar Using Statistical Parsing and Thesauri. Dagerman, Björn (January 2013)
Services that rely on semantic computation over users' natural language input are becoming more common. Computing semantic relatedness between texts is problematic due to the inherent ambiguity of natural language. The purpose of this thesis was to show how a sentence can be compared to a predefined semantic Definite Clause Grammar (DCG), and how a DCG-based system can benefit from such a capability. Our approach combines openly available specialized NLP frameworks for statistical parsing, part-of-speech tagging and word-sense disambiguation, and computes semantic relatedness using a large lexical and conceptual-semantic thesaurus. We also extend an existing programming language for multimodal interfaces that uses static predefined DCGs, COactive Language Definition (COLD), in which every word that should be acceptable must be explicitly defined. By applying our solution, we show how this approach can remove dependencies on word definitions and improve grammar definitions in DCG-based systems.
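For instance, with WordNet as the lexical and conceptual-semantic thesaurus (accessed here through NLTK, one plausible choice among openly available frameworks; the thesis's exact stack is not assumed), word-level relatedness can be computed as:

```python
from nltk.corpus import wordnet as wn
# Requires: nltk.download("wordnet") on first use.

def max_path_similarity(word_a, word_b):
    """Best path similarity over all synset pairs of two words."""
    scores = [
        s_a.path_similarity(s_b)
        for s_a in wn.synsets(word_a)
        for s_b in wn.synsets(word_b)
    ]
    scores = [s for s in scores if s is not None]  # cross-POS pairs yield None
    return max(scores, default=0.0)

print(max_path_similarity("car", "automobile"))  # 1.0, same synset
print(max_path_similarity("car", "banana"))      # much lower
```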
|
220 |
Bootstrapping Named Entity Annotation by Means of Active Machine Learning: A Method for Creating Corpora. Olsson, Fredrik (January 2008)
This thesis describes the development and in-depth empirical investigation of a method, called BootMark, for bootstrapping the marking up of named entities in textual documents. The reason for working with documents, as opposed to for instance sentences or phrases, is that the BootMark method is concerned with the creation of corpora. The claim made in the thesis is that BootMark requires a human annotator to manually annotate fewer documents in order to produce a named entity recognizer with a given performance than would be needed if the documents forming the basis for the recognizer were randomly drawn from the same corpus. The intention is then to use the created named entity recognizer as a pre-tagger and thus eventually turn the manual annotation process into one in which the annotator reviews system-suggested annotations rather than creating new ones from scratch. The BootMark method consists of three phases: (1) manual annotation of a set of documents; (2) bootstrapping: active machine learning for the purpose of selecting which document to annotate next; (3) marking up the remaining unannotated documents of the original corpus using pre-tagging with revision. Five emerging issues are identified, described and empirically investigated in the thesis. Their common denominator is that they all depend on the realization of the named entity recognition task and, as such, require the context of a practical setting in order to be properly addressed. The emerging issues are related to: (1) the characteristics of the named entity recognition task and the base learners used in conjunction with it; (2) the constitution of the set of documents annotated by the human annotator in phase one in order to start the bootstrapping process; (3) the active selection of the documents to annotate in phase two; (4) the monitoring and termination of the active learning carried out in phase two, including a new intrinsic stopping criterion for committee-based active learning; and (5) the applicability of the named entity recognizer created during phase two as a pre-tagger in phase three. The outcomes of the empirical investigations concerning the emerging issues support the claim made in the thesis. The results also suggest that while the recognizer produced in phases one and two is as useful for pre-tagging as a recognizer created from randomly selected documents, the applicability of the recognizer as a pre-tagger is best investigated by conducting a user study involving real annotators working on a real named entity recognition task.
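A skeleton of the phase-two selection loop is sketched below. Single-model least-confident sampling stands in for the committee-based selection actually studied, and the documents, labels and document-level framing are toy simplifications:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy data: a few labeled seed documents and an unlabeled pool.
labeled_docs = ["Alice visited Paris.", "No entities here at all."]
labels = [1, 0]  # 1 = document contains named entities (toy framing)
pool = ["Bob works at Google.", "It rained all day.", "Berlin hosted the summit."]

vec = TfidfVectorizer()
for _ in range(2):  # two bootstrapping iterations
    clf = LogisticRegression().fit(vec.fit_transform(labeled_docs), labels)
    probs = clf.predict_proba(vec.transform(pool))
    uncertainty = 1.0 - probs.max(axis=1)  # least-confident sampling
    pick = int(np.argmax(uncertainty))
    doc = pool.pop(pick)
    print("annotate next:", doc)
    labeled_docs.append(doc)
    labels.append(1)  # in practice, the label comes from the human annotator
```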
|