761

Proposition-based summarization with a coherence-driven incremental model

Fang, Yimai January 2019
Summarization models that operate on meaning representations of documents have been neglected in the past, although they are a promising and interesting class of methods for summarization and text understanding. In this thesis, I present one such summarizer, which uses the proposition as its meaning representation. My summarizer is an implementation of Kintsch and van Dijk's model of comprehension, which uses a tree of propositions to represent working memory. The input document is processed incrementally in iterations. In each iteration, new propositions are connected to the tree under the principle of local coherence, and a forgetting mechanism is then applied so that only a few important propositions are retained in the tree for the next iteration. A summary can be generated from the propositions that are frequently retained. Originally, this model was only worked through by hand by its inventors, using human-created propositions. In this work, I turn it into a fully automatic model using current NLP technologies. First, I create propositions by obtaining and then transforming a syntactic parse. Second, I devise algorithms to numerically evaluate alternative ways of adding a new proposition, and to predict necessary changes in the tree. Third, I compare different methods of modelling local coherence, including coreference resolution, distributional similarity, and lexical chains. In the first group of experiments, my summarizer realizes summary propositions by sentence extraction; these experiments show that it outperforms several state-of-the-art summarizers. The second group of experiments concerns abstractive generation from propositions, a collaborative project. I investigated the option of compressing extracted sentences, but generation from propositions was shown to provide better information packaging.
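To make the attach-and-forget cycle concrete, here is a minimal Python sketch of the incremental mechanism described above. The coherence score (argument overlap), the memory capacity, and the data layout are simplifying assumptions for illustration, not the thesis implementation; in particular, the full model searches long-term memory when a new proposition fails to cohere, which is reduced here to a comment.

```python
from dataclasses import dataclass

@dataclass
class Prop:
    pid: int
    args: frozenset        # argument concepts of the proposition
    retain_count: int = 0  # iterations survived in working memory

def coherence(p, q):
    """Simplified local-coherence score: number of shared arguments."""
    return len(p.args & q.args)

def process(cycles, memory_size=4):
    """Run the attach-and-forget loop over successive batches of propositions."""
    memory, all_props = [], []
    for batch in cycles:
        for p in batch:
            all_props.append(p)
            # Attach under local coherence: the new proposition must share an
            # argument with working memory (the full model would otherwise
            # trigger a search of long-term memory).
            if not memory or any(coherence(p, q) > 0 for q in memory):
                memory.append(p)
        # Forgetting: retain only the propositions best connected to the rest.
        score = {p.pid: sum(coherence(p, q) for q in memory if q is not p)
                 for p in memory}
        memory = sorted(memory, key=lambda p: score[p.pid], reverse=True)
        memory = memory[:memory_size]
        for p in memory:
            p.retain_count += 1
    # Frequently retained propositions are the summary candidates.
    return sorted(all_props, key=lambda p: p.retain_count, reverse=True)

cycles = [
    [Prop(1, frozenset({"model", "memory"})), Prop(2, frozenset({"memory", "tree"}))],
    [Prop(3, frozenset({"tree", "proposition"})), Prop(4, frozenset({"unrelated"}))],
]
print([(p.pid, p.retain_count) for p in process(cycles)])
```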
762

Extração de informação e documentação de laudos médicos. / Information extraction and medical reports documentation.

Bacic, Alice Shimada 09 May 2007
Hospital information systems generate a significant amount of free-text data every day, mainly in the form of medical reports. Reports are normally retrieved through associated metadata such as patient identification, dates, or the responsible professional; retrieving a report by its descriptive content is a non-trivial task, since hospital systems are generally unable to search free-text content. Without a basic structure for organizing, categorizing, or indexing the free text stored in hospital databases, a large amount of information remains unavailable to the professionals who need it, because it cannot be recovered in the right context. The ability to recover the knowledge stored in these databases would be of great value to researchers, students, and the study of clinical cases. In this context, this work proposes an automatic documentation tool whose goal is to add structure to free-text radiology reports by enriching them with information drawn from standardized medical terminology systems, thereby making the knowledge stored in medical databases easier to search. The work involves research in ontologies and in information extraction, a subfield of natural language processing. Ontologies are important here because they address the standardization of the terminology used in report writing and provide the organization needed to turn reports into part of a knowledge base. Information extraction supplies the algorithms and techniques needed to document the reports automatically, minimizing human intervention, which is costly in both labor and time. The final result is a set of methodologies and tools that take a free-text report and generate an XML document tagged with concept codes from a medical terminology system such as UMLS or RadLex. Every processing stage, up to the final XML output, achieved precision above 70%, a satisfactory result considering that all the NLP algorithms used are rule-based. In addition to the NLP tools, contributions include a method for evaluating medical ontologies for a predefined medical area, the organization of the ontologies in a format usable by NLP algorithms, a corpus of chest X-ray reports in Portuguese for training and testing NLP applications, and an information model for documenting the reports.
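As an illustration of the kind of rule-based tagging the final pipeline performs, the following Python sketch wraps recognized terms in XML elements carrying concept codes. The dictionary, the example codes, and the report text are invented for the example; a real system would draw concepts from UMLS or RadLex rather than a hand-made lookup table.

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical phrase-to-code lookup; illustrative stand-ins, not real mappings.
TERMINOLOGY = {
    "pleural effusion": "C001",
    "cardiomegaly": "C002",
}

def tag_report(text):
    """Wrap each recognized term in a <concept> element with its code."""
    pattern = re.compile(
        "|".join(re.escape(t) for t in sorted(TERMINOLOGY, key=len, reverse=True)),
        re.IGNORECASE,
    )
    root = ET.Element("report")
    root.text, last, pos = "", None, 0
    for m in pattern.finditer(text):
        plain = text[pos:m.start()]
        if last is None:
            root.text = plain
        else:
            last.tail = plain
        last = ET.SubElement(root, "concept", code=TERMINOLOGY[m.group().lower()])
        last.text = m.group()
        pos = m.end()
    if last is None:
        root.text = text[pos:]
    else:
        last.tail = text[pos:]
    return ET.tostring(root, encoding="unicode")

print(tag_report("Chest X-ray shows cardiomegaly and a small pleural effusion."))
```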
763

Desenvolvimento de técnicas baseadas em redes complexas para sumarização extrativa de textos / Development of techniques based on complex networks for extractive text summarization

Antiqueira, Lucas 27 February 2007
Automatic text summarization is of considerable importance for finding and using relevant content within the enormous amount of information now available in digital form. The field seeks techniques for obtaining the most relevant content of documents, in condensed form, preserving the original meaning with little or no human help. The purpose of this MSc project was to investigate how concepts from the study of complex networks can be applied to automatic text summarization, specifically to extractive summarization. Although most summarization research has focused on extractive techniques, the informativeness of automatically generated extracts can still be improved. In this work, texts were represented as networks in which sentences are nodes and an edge connects two sentences whenever they share at least one repeated noun after lemmatization. Measurements typically used to characterize complex networks, such as the clustering coefficient, hierarchical degree, and locality index, were extracted from these networks to guide the selection of the most significant sentences for an extract. Each proposed summarization method was applied to the TeMário corpus of newspaper articles in Portuguese and to DUC conference corpora of newspaper articles in English. Four evaluation experiments were carried out, using automatic evaluation measures (ROUGE-1 and sentence precision/recall) and comparison against other extractive summarization systems. The best proposed summarizers are based on the following concepts: d-ring, degree, k-core, and shortest path. For Portuguese, the results are comparable to those of the best summarization methods yet proposed; for English, the results are weaker.
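A minimal sketch of the simplest of these network measures, degree, applied to sentence selection follows. The tokenized nouns are supplied by hand here, whereas the thesis derives them from lemmatization, and the other measures (clustering coefficient, k-core, shortest path, d-ring) would replace the ranking function.

```python
def build_graph(noun_sets):
    """Connect two sentences whenever they share at least one lemmatized noun."""
    n = len(noun_sets)
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if noun_sets[i] & noun_sets[j]:
                edges[i].add(j)
                edges[j].add(i)
    return edges

def degree_extract(sentences, noun_sets, k=2):
    """Pick the k highest-degree sentences and restore document order."""
    edges = build_graph(noun_sets)
    ranked = sorted(range(len(sentences)), key=lambda i: len(edges[i]), reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]

sents = ["The network models the text.",
         "Each sentence becomes a node in the network.",
         "Edges capture noun repetition between sentences.",
         "Ranking the nodes selects the extract."]
nouns = [{"network", "text"}, {"sentence", "node", "network"},
         {"edge", "noun", "repetition", "sentence"}, {"ranking", "node", "extract"}]
print(degree_extract(sents, nouns))
```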
764

Multimodal Machine Translation / Traduction Automatique Multimodale

Caglayan, Ozan 27 August 2019
Machine translation aims to translate documents from one language to another automatically, without human intervention. With the advent of deep neural networks (DNNs), neural machine translation (NMT) came to dominate the field, reaching state-of-the-art performance for many languages. NMT also revived interest in interlingual machine translation through the way it casts the task in an encoder-decoder framework, producing a translation by decoding a latent source representation. Combined with the architectural flexibility of DNNs, this framework opened a line of research on multimodality, aiming to enrich the latent representations with other modalities such as vision or speech. This thesis focuses on multimodal machine translation (MMT), integrating vision as a secondary modality to achieve better, visually grounded language understanding. I worked specifically with a dataset of images paired with translated descriptions, where visual context can help disambiguate polysemous words, impute missing words, or determine gender when translating from a language with gender-neutral nouns into one with a grammatical gender system, as with English to French. I propose two main approaches for integrating the visual modality: (i) a multimodal attention mechanism that learns to attend to both the latent source-sentence representations and convolutional visual features, and (ii) a method that uses global visual feature vectors to prime the recurrent encoders and decoders. Automatic and human evaluation on several language pairs showed the proposed approaches to be beneficial. Finally, I show that by systematically degrading the source sentences to remove certain linguistic information, the true strength of both methods emerges: they successfully impute missing nouns and colors, and can even translate when parts of the source sentences are removed entirely.
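A rough PyTorch sketch of approach (ii), priming a recurrent decoder with a global visual feature vector, is given below. The layer sizes, the GRU choice, and the 2048-dimensional pooled CNN feature are illustrative assumptions, not the architecture used in the thesis.

```python
import torch
import torch.nn as nn

class VisualPrimedDecoder(nn.Module):
    """Sketch: a global visual feature vector primes the decoder's initial
    hidden state before any target token is generated."""
    def __init__(self, vocab=10000, emb=128, hid=256, vis=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.init_h = nn.Linear(vis, hid)   # project pooled CNN features
        self.rnn = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, tokens, visual_feats):
        h0 = torch.tanh(self.init_h(visual_feats)).unsqueeze(0)  # (1, B, hid)
        states, _ = self.rnn(self.embed(tokens), h0)              # (B, T, hid)
        return self.out(states)                                   # per-step logits

# Toy forward pass: batch of 2 sentences, 5 tokens, 2048-d image features.
logits = VisualPrimedDecoder()(torch.randint(0, 10000, (2, 5)), torch.randn(2, 2048))
print(logits.shape)  # torch.Size([2, 5, 10000])
```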
765

Deep Neural Networks for Multi-Label Text Classification: Application to Coding Electronic Medical Records

Rios, Anthony 01 January 2018
Coding Electronic Medical Records (EMRs) with diagnosis and procedure codes is an essential task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors can lead to greater financial burden on patients and misinterpretation of a patient’s well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. It is therefore necessary to develop automated diagnosis and procedure code recommendation methods that can be used by professional medical coders. The main difficulty in developing automated EMR coding methods is the nature of the label space. The standardized vocabularies used for medical coding contain over 10,000 codes. The label space is large, and the label distribution is extremely unbalanced: most codes occur very infrequently, a few occur several orders of magnitude more often than others, and some never appear in the training dataset at all. In this work, we present three methods to handle the large, unbalanced label space. First, we study how to augment EMR training data with biomedical data (research articles indexed on PubMed) to improve the performance of standard neural networks for text classification. PubMed indexes more than 23 million citations, many of which contain information relevant to diagnosis and procedure codes; we present a novel method of incorporating this unstructured PubMed data using transfer learning. Second, we combine ideas from metric learning with recent advances in neural networks to form a novel architecture that better handles infrequent codes. Third, we present new methods to predict codes that have never appeared in the training dataset. Overall, our contributions constitute advances in neural multi-label text classification, with potential consequences for improving EMR coding.
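The core idea behind predicting codes never seen in training can be sketched as matching a document vector against embeddings of the codes' textual descriptions, so that any code with a description can be scored. The sketch below is a simplification under that assumption; the thesis's actual architectures (transfer learning from PubMed, metric learning) are considerably more elaborate.

```python
import numpy as np

def predict_codes(doc_vec, code_vecs, code_ids, top_k=3):
    """Rank all codes, including ones absent from training, by cosine
    similarity between the document vector and code-description embeddings."""
    doc = doc_vec / np.linalg.norm(doc_vec)
    codes = code_vecs / np.linalg.norm(code_vecs, axis=1, keepdims=True)
    scores = codes @ doc
    best = np.argsort(scores)[::-1][:top_k]
    return [(code_ids[i], float(scores[i])) for i in best]

# Toy data: a 100-d document embedding and 50 code-description embeddings.
rng = np.random.default_rng(0)
doc = rng.normal(size=100)
code_vecs = rng.normal(size=(50, 100))
print(predict_codes(doc, code_vecs, [f"code-{i}" for i in range(50)]))
```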
766

Using Word Embeddings to Explore the Language of Depression on Twitter

Gopchandani, Sandhya 01 January 2019
How do people discuss mental health on social media? Can we train a computer program to recognize differences between discussions of depression and other topics? Can an algorithm predict that someone is depressed from their tweets alone? In this project, we collect tweets referencing “depression” and “depressed” over a seven-year period, and train word embeddings to characterize linguistic structures within the corpus. We find that neural word embeddings capture the contextual differences between “depressed” and “healthy” language. We also examine how the context around words changed over time, to gain a deeper understanding of shifts in word usage. Finally, we train a deep learning network on a much smaller collection of tweets authored by individuals formally diagnosed with depression. The best-performing model for the prediction task is a convolutional LSTM (CNN-LSTM), with an F-score of 69% on test data. The results suggest social media could serve as a valuable screening tool for mental health.
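For readers unfamiliar with the architecture, a minimal Keras sketch of a CNN-LSTM text classifier follows; every hyperparameter here is an illustrative guess, not the configuration used in this project.

```python
from tensorflow.keras import layers, models

def build_cnn_lstm(vocab_size=20000, seq_len=50, emb_dim=100):
    """Convolutions capture local n-gram patterns in a tweet; the LSTM then
    models their order across the sequence."""
    inp = layers.Input(shape=(seq_len,))
    x = layers.Embedding(vocab_size, emb_dim)(inp)   # learned word vectors
    x = layers.Conv1D(64, 5, activation="relu")(x)   # n-gram feature maps
    x = layers.MaxPooling1D(2)(x)
    x = layers.LSTM(64)(x)                           # sequence of local features
    out = layers.Dense(1, activation="sigmoid")(x)   # depressed vs. control
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

build_cnn_lstm().summary()
```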
767

Indirect Relatedness, Evaluation, and Visualization for Literature Based Discovery

Henry, Sam 01 January 2019
The exponential growth of scientific literature is creating an increased need for systems that process and assimilate the knowledge contained within text. Literature Based Discovery (LBD) is a well-established field that seeks to synthesize new knowledge from existing literature, but it has remained primarily in the theoretical realm rather than in real-world application. This lack of adoption is due in part to the difficulty of LBD, but also to several solvable problems present in LBD today, the most critical of which are: (1) the over-generation of knowledge by LBD systems, (2) a lack of meaningful evaluation standards, and (3) the difficulty of interpreting LBD output. We address each of these problems by: (1) developing indirect relatedness measures for ranking and filtering LBD hypotheses; (2) developing a representative evaluation dataset and applying meaningful evaluation methods to individual components of LBD; and (3) developing an interactive visualization system that allows a user to explore LBD output in its entirety. In addressing these problems, we make several contributions, most importantly: (1) state-of-the-art results for estimating direct semantic relatedness, (2) development of set association measures, (3) development of indirect association measures, (4) development of a standard LBD evaluation dataset, (5) division of LBD into discrete components with well-defined evaluation methods, (6) development of automatic functional group discovery, and (7) integration of indirect relatedness measures and automatic functional group discovery into a comprehensive LBD visualization system. Our results inform the future development of LBD systems and contribute to creating more effective ones.
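The flavor of an indirect (linking-term) relatedness measure can be conveyed in a few lines: two terms that rarely co-occur directly are scored through the strength of their shared intermediate terms. The sketch below, with toy co-occurrence weights echoing Swanson's classic fish-oil/Raynaud example, is a simplification; the thesis develops and evaluates several such measures.

```python
def indirect_association(term_a, term_c, cooccur):
    """Score a-c indirectly: average the product of a-b and b-c association
    strengths over all shared intermediate terms b."""
    shared = set(cooccur[term_a]) & set(cooccur[term_c])
    if not shared:
        return 0.0
    return sum(cooccur[term_a][b] * cooccur[term_c][b] for b in shared) / len(shared)

# Toy association strengths, as if mined from a literature corpus.
cooccur = {
    "fish oil": {"blood viscosity": 0.8, "platelet aggregation": 0.6},
    "raynaud disease": {"blood viscosity": 0.7, "vasoconstriction": 0.9},
}
print(indirect_association("fish oil", "raynaud disease", cooccur))  # ~0.56
```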
768

Encoding and parsing of algebraic expressions by experienced users of mathematics

Jansen, Anthony Robert, 1973- January 2002
Abstract not available
769

Finite-state methods and natural language processing: 6th International Workshop, FSMNLP 2007, Potsdam, Germany, September 14-16; revised papers

January 2008
Proceedings containing the revised papers of the FSMNLP 2007 workshop (Finite-State Methods and Natural Language Processing) in Potsdam.
770

Shades of Certainty : Annotation and Classification of Swedish Medical Records

Velupillai, Sumithra January 2012
Access to information is fundamental in health care. This thesis presents research on Swedish medical records, with the overall goal of building intelligent information-access tools that can aid health personnel, researchers, and other professionals in their daily work and, ultimately, improve health care in general. The issue of ethics and identifiable information is addressed by creating an annotated gold-standard corpus and porting an existing de-identification system from English to Swedish, with the aim of making textual resources available to researchers without risking exposure of patients’ confidential information. Results for the rule-based system are not encouraging, but results for the gold standard are fairly high. Affirmed, uncertain, and negated information needs to be distinguished when building accurate information extraction tools. Annotation models are created with the aim of building automated systems. One model distinguishes certain from uncertain sentences and is applied to medical records from several clinical departments. In a second model, two polarities and three levels of certainty are applied to diagnostic statements from an emergency department. Overall results are promising, with differences depending on clinical practice, annotation task, and the annotators’ level of domain expertise. The use of the annotated resources for automatic classification is then studied, and encouraging overall results are obtained using local context information. The fine-grained certainty levels are used to build classifiers for real-world e-health scenarios. This thesis contributes two annotation models of certainty and one of identifiable information, applied to Swedish medical records; a deeper understanding of the language used to convey levels of certainty; three annotated resources that can be used for further research; and implications for automated systems.
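As a toy illustration of classification from local context, the rule-of-thumb sketch below assigns polarity and certainty from cue words near a diagnosis mention. The cue lists are invented English stand-ins for readability; the thesis works with annotated Swedish data and statistical classifiers rather than hand-written rules.

```python
# Assumed cue lists; real Swedish cues would come from the annotated corpus.
UNCERTAIN_CUES = {"possibly", "suspected", "probable", "?"}
NEGATION_CUES = {"no", "not", "denies", "without"}

def classify_statement(tokens, window=4):
    """Assign (polarity, certainty) from cues in a local context window
    around the diagnosis mention (assumed to be the last token here)."""
    context = [t.lower() for t in tokens[-window - 1:]]
    if any(c in context for c in NEGATION_CUES):
        return ("negative", "certain")
    if any(c in context for c in UNCERTAIN_CUES):
        return ("positive", "uncertain")
    return ("positive", "certain")

print(classify_statement("patient denies chest pain".split()))  # negative/certain
print(classify_statement("suspected pneumonia".split()))        # positive/uncertain
```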
