  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
821

Answering Deep Queries Specified in Natural Language with Respect to a Frame Based Knowledge Base and Developing Related Natural Language Understanding Components

January 2015 (has links)
abstract: Question Answering has been under active research for decades, but it has recently taken the spotlight following IBM Watson's success in Jeopardy! and the spread of digital assistants such as Apple's Siri, Google Now, and Microsoft Cortana to every smartphone and browser. However, most research in Question Answering targets factual questions rather than deep ones such as "How" and "Why" questions. In this dissertation, I suggest a different approach to tackling this problem. I believe that the answers to deep questions need to be formally defined before they can be found. Because these answers must be defined with respect to something more structured than natural language text, I define Knowledge Description Graphs (KDGs), graphical structures containing information about events, entities, and classes. I then propose formulations and algorithms to construct KDGs from a frame-based knowledge base, define the answers to various "How" and "Why" questions with respect to KDGs, and show how to obtain those answers from KDGs using Answer Set Programming. Moreover, I discuss how to derive missing information when constructing KDGs from an under-specified knowledge base, and how to answer many factual question types with respect to the knowledge base. Having defined the answers to various questions with respect to a knowledge base, I extend the research to specifying deep questions and the knowledge base in natural language text, and to generating natural language text from those specifications. Toward these goals, I developed NL2KR, a system that helps translate natural language into formal language. I show NL2KR's use in translating "How" and "Why" questions, and in generating simple natural language sentences from a natural language KDG specification. Finally, I discuss applications of the components I developed in Natural Language Understanding. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2015
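A toy, hypothetical sketch of the idea behind answering "How" and "Why" questions over an event graph. The structure and field names below are illustrative only, not the dissertation's actual KDG format, and plain Python stands in for the Answer Set Programming encoding used in the thesis.

```python
# Hypothetical KDG-like structure: each event records its sub-events ("How")
# and the event it serves ("Why"). All names are invented for illustration.
kdg_events = {
    "assemble_car":   {"sub_events": ["install_engine", "attach_wheels"], "purpose": "sell_car"},
    "install_engine": {"sub_events": [], "purpose": "assemble_car"},
    "attach_wheels":  {"sub_events": [], "purpose": "assemble_car"},
    "sell_car":       {"sub_events": [], "purpose": None},
}

def answer_how(event):
    """Answer 'How is <event> done?' by listing its sub-events, depth-first."""
    steps = []
    for sub in kdg_events.get(event, {}).get("sub_events", []):
        steps.append(sub)
        steps.extend(answer_how(sub))
    return steps

def answer_why(event):
    """Answer 'Why is <event> done?' by returning the event it serves."""
    return kdg_events.get(event, {}).get("purpose")

print("How is assemble_car done?", answer_how("assemble_car"))
print("Why is install_engine done?", answer_why("install_engine"))
```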
822

Health Information Extraction from Social Media

January 2016 (has links)
abstract: Social media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, such as pharmacovigilance, via Natural Language Processing (NLP) techniques. One of the critical steps in information extraction pipelines is Named Entity Recognition (NER), where mentions of entities such as diseases are located in text and their entity types are identified. However, the language in social media is highly informal, and user-expressed health-related concepts are often non-technical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and advanced machine learning-based NLP techniques have been underutilized. This work explores the effectiveness of different machine learning techniques, particularly deep learning, in addressing the challenges associated with extracting health-related concepts from social media. Deep learning has recently attracted a lot of attention in machine learning research and has shown remarkable success in several applications, particularly imaging and speech recognition. However, deep learning techniques have thus far been relatively unexplored for biomedical text mining and, in particular, this is the first attempt at applying deep learning to health information extraction from social media. This work presents ADRMine, which uses a Conditional Random Field (CRF) sequence tagger for extraction of complex health-related concepts. It utilizes a large volume of unlabeled user posts for automatic learning of embedding cluster features, a novel application of deep learning in modeling the similarity between tokens. ADRMine significantly improved medical NER performance compared to the baseline systems. This work also presents DeepHealthMiner, a deep learning pipeline for health-related concept extraction. Most machine learning methods require sophisticated, task-specific manual feature design, which is a challenging step when processing the informal and noisy content of social media. DeepHealthMiner automatically learns classification features using neural networks and a large volume of unlabeled user posts. Using a relatively small labeled training set, DeepHealthMiner could accurately identify most of the concepts, including consumer expressions that were not observed in the training data or in standard medical lexicons, outperforming the state-of-the-art baseline techniques. / Dissertation/Thesis / Doctoral Dissertation Biomedical Informatics 2016
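A minimal, hypothetical sketch of the general technique described above (embedding-cluster features feeding a CRF tagger), not ADRMine itself: embeddings are learned from unlabeled posts, clustered, and each token's cluster id is added as a CRF feature. The posts, labels, and parameter values are toy examples.

```python
# Assumes gensim, scikit-learn and sklearn-crfsuite are installed.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
import sklearn_crfsuite

# Toy "unlabeled" posts for learning embeddings (the real system uses large corpora).
unlabeled_posts = [
    ["this", "drug", "gave", "me", "a", "terrible", "headache"],
    ["started", "feeling", "dizzy", "after", "the", "new", "medication"],
]
# One toy labeled sentence with an adverse-drug-reaction (ADR) span.
labeled = [(["terrible", "headache", "after", "the", "pill"],
            ["O", "B-ADR", "O", "O", "O"])]

# 1. Learn word embeddings on unlabeled text and cluster them.
w2v = Word2Vec(unlabeled_posts, vector_size=25, min_count=1, seed=1)
words = list(w2v.wv.index_to_key)
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(w2v.wv[words])
cluster_of = dict(zip(words, clusters))

# 2. Turn each token into a CRF feature dict, including its embedding-cluster id.
def features(sent, i):
    word = sent[i]
    return {
        "lower": word.lower(),
        "cluster": str(cluster_of.get(word, -1)),          # generalised feature
        "prev": sent[i - 1].lower() if i > 0 else "<s>",
    }

X = [[features(toks, i) for i in range(len(toks))] for toks, _ in labeled]
y = [tags for _, tags in labeled]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```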
823

Detecting Political Framing Shifts and the Adversarial Phrases within Rival Factions and Ranking Temporal Snapshot Contents in Social Media

January 2018 (has links)
abstract: Social Computing is an area of computer science concerned with the dynamics of communities and cultures created through computer-mediated social interaction. Various social media platforms, such as social network services and microblogging, enable users to come together and create social movements expressing their opinions on diverse sets of issues, events, complaints, grievances, and goals. Methods are needed for monitoring and summarizing these types of sociopolitical trends, their leaders and followers, their messages, and their dynamics. In this dissertation, a framework comprising community- and content-based computational methods is presented to provide insights into multilingual and noisy political social media content. First, a model is developed to predict the emergence of viral hashtag breakouts using network features. Next, another model is developed to detect and compare individual and organizational accounts using a set of domain- and language-independent features. The third model exposes contentious issues driving reactionary dynamics between opposing camps. The fourth model develops community detection and visualization methods to reveal underlying structure and the key messages that drive the dynamics. The final model presents a use-case methodology for detecting and monitoring foreign influence, wherein a state actor and news media under its control attempt to shift public opinion by framing information to support multiple adversarial narratives that facilitate their goals. In each case, a discussion of the novel aspects and contributions of the models is presented, along with quantitative and qualitative evaluations. An analysis of multiple conflict situations is conducted, covering areas in the UK, Bangladesh, Libya and Ukraine where adversarial framing led to polarization, declines in social cohesion, social unrest, and even civil wars (e.g., Libya and Ukraine). / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
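An illustrative sketch of the kind of community-detection step mentioned above, not the dissertation's own code or data: build an interaction graph between accounts and split it with a standard modularity-based method. The edge list is invented.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy interaction edges (source_account, target_account), e.g. retweets/mentions.
edges = [("a", "b"), ("b", "c"), ("a", "c"),   # one camp
         ("x", "y"), ("y", "z"), ("x", "z"),   # opposing camp
         ("c", "x")]                           # sparse cross-camp contact

g = nx.Graph(edges)
for i, members in enumerate(greedy_modularity_communities(g)):
    print(f"community {i}: {sorted(members)}")
```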
824

Immersion dans des documents scientifiques et techniques : unités, modèles théoriques et processus / Immersion in scientific and technical documents: units, theoretical models and processes

Andreani, Vanessa 23 September 2011 (has links)
Cette thèse aborde la problématique de l'accès à l'information scientifique et technique véhiculée par de grands ensembles documentaires. Pour permettre à l'utilisateur de trouver l'information qui lui est pertinente, nous avons oeuvré à la définition d'un modèle répondant à l'exigence de souplesse de notre contexte applicatif industriel ; nous postulons pour cela la nécessité de segmenter l'information tirée des documents en plans ontologiques. Le modèle résultant permet une immersion documentaire, et ce grâce à trois types de processus complémentaires : des processus endogènes (exploitant le corpus pour analyser le corpus), exogènes (faisant appel à des ressources externes) et anthropogènes (dans lesquels les compétences de l'utilisateur sont considérées comme ressource) sont combinés. Tous concourent à l'attribution d'une place centrale à l'utilisateur dans le système, en tant qu'agent interprétant de l'information et concepteur de ses connaissances, dès lors qu'il est placé dans un contexte industriel ou spécialisé. / This thesis addresses the issue of accessing scientific and technical information conveyed by large sets of documents. To enable the user to find the information relevant to them, we worked on a model meeting the requirement of flexibility imposed by our industrial application context; to do so, we postulated the necessity of segmenting information from documents into ontological facets. The resulting model enables documentary immersion by combining three types of complementary processes: endogenous processes (exploiting the corpus to analyze the corpus), exogenous processes (using external resources) and anthropogenous ones (in which the user's skills are considered a resource). They all contribute to granting the user a fundamental role in the system, as an interpreting agent and as a knowledge creator, provided that they are placed in an industrial or specialised context.
825

La représentation des documents par réseaux de neurones pour la compréhension de documents parlés / Neural network representations for spoken documents understanding

Janod, Killian 27 November 2017 (has links)
Les méthodes de compréhension de la parole visent à extraire des éléments de sens pertinents du signal parlé. On distingue principalement deux catégories dans la compréhension du signal parlé : la compréhension de dialogues homme/machine et la compréhension de dialogues homme/homme. En fonction du type de conversation, la structure des dialogues et les objectifs de compréhension varient. Cependant, dans les deux cas, les systèmes automatiques reposent le plus souvent sur une étape de reconnaissance automatique de la parole pour réaliser une transcription textuelle du signal parlé. Les systèmes de reconnaissance automatique de la parole, même les plus avancés, produisent dans des contextes acoustiques complexes des transcriptions erronées ou partiellement erronées. Ces erreurs s'expliquent par la présence d'informations de natures et de fonction variées, telles que celles liées aux spécificités du locuteur ou encore l'environnement sonore. Celles-ci peuvent avoir un impact négatif important pour la compréhension. Dans un premier temps, les travaux de cette thèse montrent que l'utilisation d'autoencodeur profond permet de produire une représentation latente des transcriptions d'un plus haut niveau d'abstraction. Cette représentation permet au système de compréhension de la parole d'être plus robuste aux erreurs de transcriptions automatiques. Dans un second temps, nous proposons deux approches pour générer des représentations robustes en combinant plusieurs vues d'un même dialogue dans le but d'améliorer les performances du système de compréhension. La première approche montre que plusieurs espaces thématiques différents peuvent être combinés simplement à l'aide d'autoencodeur ou dans un espace thématique latent pour produire une représentation qui augmente l'efficacité et la robustesse du système de compréhension de la parole. La seconde approche propose d'introduire une forme d'information de supervision dans les processus de débruitages par autoencodeur. Ces travaux montrent que l'introduction de supervision de transcription dans un autoencodeur débruitant dégrade les représentations latentes, alors que les architectures proposées permettent de rendre comparables les performances d'un système de compréhension reposant sur une transcription automatique et un système de compréhension reposant sur des transcriptions manuelles. / Applications of spoken language understanding aim to extract relevant items of meaning from the spoken signal. There are two distinct types of spoken language understanding: understanding of human/human dialogue and understanding of human/machine dialogue. Given a type of conversation, the structure of the dialogues and the goal of the understanding process vary. However, in both cases, automatic systems most often rely on a speech recognition step to generate a textual transcript of the spoken signal. In adverse conditions, speech recognition systems, even the most advanced ones, produce erroneous or partly erroneous transcripts. Those errors can be explained by the presence of information of various natures and functions, such as speaker and ambience specificities. They can have an important adverse impact on the performance of the understanding process. The first part of the contribution of this thesis shows that deep autoencoders produce a more abstract latent representation of the transcript. This latent representation allows the spoken language understanding system to be more robust to automatic transcription mistakes. In the second part, we propose two different approaches to generate more robust representations by combining multiple views of a given dialogue, in order to improve the results of the spoken language understanding system. The first approach combines multiple thematic spaces to produce a better representation. The second one introduces new autoencoder architectures that use supervision in the denoising autoencoders. These contributions show that these architectures reduce the performance gap between a spoken language understanding system using automatic transcripts and one using manual transcripts.
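A minimal denoising-autoencoder sketch in Keras, illustrating the general idea of learning a more abstract latent representation from noisy transcript vectors. The bag-of-words inputs, the simulated "word drop" noise, and all dimensions are toy assumptions, not the thesis setup.

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
clean = rng.integers(0, 2, size=(256, 100)).astype("float32")   # toy bag-of-words vectors
noisy = clean * (rng.random(clean.shape) > 0.2)                  # simulate ASR "word drops"

inputs = keras.Input(shape=(100,))
latent = keras.layers.Dense(16, activation="relu")(inputs)       # abstract latent representation
outputs = keras.layers.Dense(100, activation="sigmoid")(latent)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(noisy, clean, epochs=5, batch_size=32, verbose=0)  # reconstruct clean from noisy

encoder = keras.Model(inputs, latent)
robust_features = encoder.predict(noisy, verbose=0)   # features fed to the SLU classifier
print(robust_features.shape)
```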
826

Enfrentamento do problema das divergências de tradução por um sistema de tradução automática : um exercício exploratório / Confronting the problem of translation divergences in a machine translation system: an exploratory exercise

Oliveira, Mirna Fernanda de. January 2006 (has links)
Orientador: Bento Carlos Dias da Silva / Banca: Beatriz Nunes de Oliveira Longo / Banca: Dirce Charara Monteiro / Banca: Gladis Maria de Barcellos Almeida / Banca: Heronides Maurílio de Melo Moura / Resumo: O objetivo desta tese é desenvolver um estudo lingüístico-computacional exploratório de um problema específico que deve ser enfrentado por sistemas de tradução automática: o problema das divergências de tradução quer de natureza sintática quer de natureza léxico-semântica que se verificam entre pares de sentenças de línguas naturais diferentes. Para isso, fundamenta-se na metodologia de pesquisa interdisciplinar em PLN (Processamento Automático de Línguas Naturais) de Dias-da-Silva (1996, 1998 e 2003) e na teoria lingüístico-computacional subjacente ao sistema de tradução automática UNITRAN de Dorr (1993), que, por sua vez é subsidiado pela teoria sintática dos princípios e Parâmetros de Chomsky (1981) e pela teoria semântica das Estruturas conceituais de Jackendoff (1990). Como contribuição, a tese descreve a composição e o funcionamento do UNITRAN, desenhado para dar conta de parte do problema posto pelas divergências de tradução e ilustra a possibilidade de inclusão do português nesse sistema através do exame de alguns tipos de divergências que se verificam entre frases do inglês e do português. / Abstract: This dissertation aims to develop an exploratory linguistic-computational study of a specific type of problem that must be faced by machine translation systems: the problem of translation divergences, whether syntactic or lexical-semantic, that can be verified between sentences of distinct natural languages. In order to achieve this aim, this work is based on the interdisciplinary research methodology of the NLP (Natural Language Processing) field developed by Dias-da-Silva (1996, 1998 & 2003) and on the linguistic-computational theory behind UNITRAN, a machine translation system developed by Dorr (1993), a system that is in turn based on Chomsky's syntactic theory of Government and Binding (1981) and Jackendoff's semantic theory of Conceptual Structures (1990). As a contribution to the field of NLP, this dissertation describes the machinery of UNITRAN, designed to deal with part of the problem of translation divergences, and it illustrates the possibility of including the Brazilian Portuguese language in the system through the investigation of certain kinds of divergences that can be found between English and Brazilian Portuguese sentences. / Doutor
827

Distinção de grupos linguísticos através de desempenho da linguagem / Distinction of linguistic groups through linguistic performance

Wilkens, Rodrigo Souza January 2016 (has links)
A aquisição e o desempenho de linguagem humana é um processo pelo qual todas as pessoas passam. No entanto, esse processo não é completamente entendido, o que gera amplo espaço para pesquisa nessa área. Além disso, mesmo após o processo de aquisição da linguagem pela criança estar completo, ainda não há garantia de domínio da língua em suas diferentes modalidades, especialmente de leitura e escrita. Recentemente, em 2016, divulgou-se que 49,3% dos estudantes brasileiros não possuem proficiência de compreensão de leitura plena em português. Isso é particularmente importante ao considerarmos a quantidade de textos disponíveis, mas não acessíveis a pessoas com diferentes tipos de problemas de proficiência na língua. Sob o ponto de vista computacional, há estudos que visam modelar os processos de aquisição da linguagem e medir o nível do falante, leitor ou redator. Em vista disso, neste trabalho propomos uma abordagem computacional independente de idioma para modelar o nível de desenvolvimento linguístico de diferentes tipos de usuários da língua, de crianças e adultos, sendo a nossa proposta fortemente baseada em características linguísticas. Essas características são dependentes de corpora orais transcritos, no segmento de crianças, e de corpora escritos, no segmento de adultos. Para alcançar esse modelo abrangente, são considerados como objetivos a identificação de atributos e valores que diferenciam os níveis de desenvolvimento da linguagem do indivíduo, assim como o desenvolvimento de um modelo capaz de indicá-los. Para a identificação dos atributos, utilizamos métodos baseados em estatística, como o teste de hipóteses e divergência de distribuição. A fim de comprovar a abrangência da abordagem, realizamos experimentos com os corpora que espelham diferentes etapas do desenvolvimento da linguagem humana: (1) etapa de aquisição da linguagem oral pela criança e (2) etapa pós aquisição, através da percepção de complexidade da linguagem escrita. Como resultados, obtivemos um grande conjunto anotado de dados sobre aquisição e desempenho de linguagem que podem contribuir para outros estudos, assim como um perfil de atributos para os vários níveis de desenvolvimento. Também destacamos como resultados os modelos computacionais que identificam textos quanto ao nível de desenvolvimento de linguagem. Em especial, são resultados do trabalho o modelo de identificação de palavras complexas, que ultrapassou o estado da arte para o corpus estudado, e o modelo de identificação de idade de crianças, que ultrapassou os baselines utilizados, incluindo uma medida clássica de desenvolvimento linguístico. / Language acquisition and language performance are processes that all people go through. However, these processes are not completely understood, which leaves ample room for research in this area. Moreover, even after the child's language acquisition process is complete, there is still no guarantee of proficiency in the language's different modalities, especially reading and writing. Recently, in 2016, OECD/PIAAC reported that 49.3% of Brazilian students do not have full reading and writing proficiency in Portuguese. This is particularly important when we take into account the large number of texts that are available but not accessible to people with different types of language proficiency issues. From a computational point of view, there are studies that aim to model the language acquisition process and measure the level of the speaker, reader or writer. In view of this, we propose a language-independent computational approach to model the language development level of different types of language users, children and adults, with our proposal strongly based on linguistic features. These features depend on transcribed oral corpora for children and on written corpora for adults. To achieve this comprehensive model, our objectives are to identify attributes and values that differentiate an individual's levels of language development, as well as to develop a model capable of indicating them. For attribute identification, we use statistics-based methods such as hypothesis testing and distribution divergence. To demonstrate the breadth of the approach, we performed experiments with corpora that reflect different stages of human language development: (1) the oral language acquisition stage in children, and (2) the post-acquisition stage, through the perceived complexity of written language. As results, we obtained a large annotated set of language acquisition and performance data that can contribute to other studies, as well as an attribute profile for the various development levels. We also highlight, as results, the computational models that identify a text's language development level. In particular, the complex word identification model exceeded the state of the art for the studied corpus, and the child age identification model exceeded the baselines used, including a classic measure of language development.
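A hedged sketch of the attribute-selection step described above: compare a linguistic feature between two groups with a hypothesis test and keep it if it discriminates. The feature (mean sentence length) and the values are invented; the thesis works over transcribed child speech and adult written corpora.

```python
from scipy.stats import ttest_ind

mean_sentence_len_group_a = [4.1, 3.8, 5.0, 4.4, 3.9]   # e.g. one development level
mean_sentence_len_group_b = [7.2, 6.8, 8.1, 7.5, 6.9]   # e.g. a later development level

stat, p_value = ttest_ind(mean_sentence_len_group_a, mean_sentence_len_group_b)
print(f"t = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("feature discriminates the two language development levels")
```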
828

Leitura, tradução e medidas de complexidade textual em contos da literatura para leitores com letramento básico / Reading, translation and textual complexity measures in literary short stories for readers with basic literacy

Pasqualini, Bianca Franco January 2012 (has links)
Este trabalho trata dos temas da complexidade textual e de padrões de legibilidade a partir de um enfoque computacional, situando o tema em meio à descrição de textos originais e traduzidos, aproveitando postulados teóricos da Tradutologia, da Linguística de Corpus e do Processamento de Línguas Naturais. Investigou-se a suposição de que há traduções de literatura em língua inglesa produzidas no Brasil que tendem a gerar textos mais complexos do que seus originais, tendo como parâmetro o leitor brasileiro médio, cuja proficiência de leitura situa-se em nível básico. Para testar essa hipótese, processamos, usando as ferramentas Coh-Metrix e Coh-Metrix-Port, um conjunto de contos literários de vários autores em língua inglesa e suas traduções para o português brasileiro, e, como contraste, um conjunto de contos de autores brasileiros publicados na mesma época e suas traduções para o inglês. As ferramentas Coh-Metrix e Coh-Metrix-Port calculam parâmetros de coesão, coerência e inteligibilidade textual em diferentes níveis linguísticos, e as métricas estudadas foram as linguística e gramaticalmente equivalentes entre as duas línguas. Foi realizado também um teste estatístico (t-Student), para cada métrica e entre as traduções, para avaliar a diferença entre as médias significativas dentre resultados obtidos. Por fim, são introduzidas tecnologias tipicamente usadas em Linguística Computacional, como a Aprendizagem de Máquina (AM), para o aprofundamento da análise. Os resultados indicam que as traduções para o português produziram textos mais complexos do que seus textos-fonte em algumas das medidas analisadas, e que tais traduções não são adequadas para leitores com nível de letramento básico. Além disso, o índice Flesch de legibilidade mostrou-se como a medida mais discriminante entre textos traduzidos do inglês para o português brasileiro e textos escritos originalmente em português. Conclui-se que é importante: a) revisar equivalências de medidas de complexidade entre o sistema Coh-Metrix para o inglês e para o português; b) propor medidas específicas das línguas estudadas; e c) ampliar os critérios de adequação para além do nível lexical. / This work analyzes textual complexity and readability patterns from a computational perspective, situating the problem within the description of original and translated texts, based on theoretical postulates from Translation Studies, Corpus Linguistics and Natural Language Processing. We investigated the hypothesis that English literature translations made in Brazil tend to generate more complex texts than their originals, considering as a parameter the typical Brazilian reader, whose reading skills are at a basic level according to official data. To test this hypothesis, we processed, using the Coh-Metrix and Coh-Metrix-Port tools, a set of literary short stories by various authors in English and their translations into Brazilian Portuguese, and, as contrast, a set of short stories by Brazilian authors from the same period and their translations into English. The Coh-Metrix and Coh-Metrix-Port tools calculate cohesion, coherence and textual intelligibility parameters at different linguistic levels, and the metrics studied were those linguistically and grammatically equivalent between the two languages. We also carried out a statistical test (Student's t-test), for each metric and between translations, to assess whether the differences between the mean results are significant. Finally, we introduced methods typically used in Computational Linguistics, such as Machine Learning, to deepen the analysis. The results indicate that the translations into Portuguese produced texts more complex than their source texts in some of the measures analyzed, and that such translations are not suitable for readers with basic literacy. Moreover, the Flesch readability index proved to be the most discriminating measure between texts translated from English into Brazilian Portuguese and texts originally written in Portuguese. We conclude that it is important to: a) review the equivalence of complexity measures between the Coh-Metrix systems for English and Portuguese; b) propose measures specific to the languages studied; and c) expand the adequacy criteria beyond the lexical level.
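An illustrative sketch of the kind of measurement reported above: compute a Flesch-style readability score for a source sentence and its translation. The texts are invented, and textstat implements the English Flesch formula, whereas Coh-Metrix-Port uses metrics adapted to Brazilian Portuguese, so the numbers here are only for illustration.

```python
import textstat

english_original = "The old man sat by the sea and waited for the fish to come."
portuguese_translation = "O velho sentou-se à beira do mar e esperou que o peixe viesse."

print("source Flesch:", textstat.flesch_reading_ease(english_original))
print("target Flesch:", textstat.flesch_reading_ease(portuguese_translation))
# Over a full corpus of paired texts, a paired t-test (scipy.stats.ttest_rel)
# would check whether the mean difference between the two score lists is significant.
```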
829

A generic and open framework for multiword expressions treatment: from acquisition to applications

Ramisch, Carlos Eduardo January 2012 (has links)
The treatment of multiword expressions (MWEs), like take off, bus stop and big deal, is a challenge for NLP applications. This kind of linguistic construction is not only arbitrary but also much more frequent than one would initially guess. This thesis investigates the behaviour of MWEs across different languages, domains and construction types, proposing and evaluating an integrated methodological framework for their acquisition. There have been many theoretical proposals to define, characterise and classify MWEs. We adopt a generic definition stating that MWEs are word combinations which must be treated as a unit at some level of linguistic processing. They present a variable degree of institutionalisation, arbitrariness, heterogeneity and limited syntactic and semantic variability. There has been much research on automatic MWE acquisition in recent decades, and the state of the art covers a large number of techniques and languages. Other tasks involving MWEs, namely disambiguation, interpretation, representation and applications, have received less emphasis in the field. The first main contribution of this thesis is the proposal of an original methodological framework for automatic MWE acquisition from monolingual corpora. This framework is generic, language-independent, integrated and contains a freely available implementation, the mwetoolkit. It is composed of independent modules which may themselves use multiple techniques to solve a specific sub-task in MWE acquisition. The evaluation of MWE acquisition is modelled using four independent axes. We underline that the evaluation results depend on parameters of the acquisition context, e.g., nature and size of corpora, language and type of MWE, analysis depth, and existing resources. The second main contribution of this thesis is the application-oriented evaluation of our methodology proposal in two applications: computer-assisted lexicography and statistical machine translation. For the former, we evaluate the usefulness of automatic MWE acquisition with the mwetoolkit for creating three lexicons: Greek nominal expressions, Portuguese complex predicates and Portuguese sentiment expressions. For the latter, we test several integration strategies in order to improve the treatment given to English phrasal verbs when translated by a standard statistical MT system into Portuguese. Both applications can benefit from automatic MWE acquisition, as the expressions acquired automatically from corpora can both speed up and improve the quality of the results. The promising results of previous and ongoing experiments encourage further investigation into the optimal way to integrate MWE treatment into other applications. Thus, we conclude the thesis with an overview of past, ongoing and future work.
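A hedged, minimal sketch of one sub-task in MWE acquisition: extract bigram candidates from a corpus and rank them with an association measure (pointwise mutual information). This only illustrates the general idea with NLTK on a toy token list; it is not the mwetoolkit itself.

```python
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Toy corpus containing a few recurring candidate expressions.
tokens = ("the bus stop is near the old bus stop planes take off and "
          "helicopters take off it is a big deal such a big deal").split()

measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(2)                     # keep candidates seen at least twice
for pair in finder.nbest(measures.pmi, 5):      # rank candidates by PMI
    print(pair)
```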
830

Extrakce informací z lékařských textů / Extracting Information from Medical Texts

Zvára, Karel January 2017 (has links)
The aim of my work was to find out the specific features of Czech medical reports in terms of the possibility of extracting specific information from them. For my work, I had a total of 268 anonymized narrative medical reports from two outpatient departments. I have studied standards for preserving electronic health records and for transferring clinical information between healthcare information systems. I have also participated in the process of implementing an electronic medical record in the field of dentistry. First of all, I tried to process the narrative medical reports using natural language processing (NLP) tools. I came to the conclusion that narrative medical reports in the Czech language are very different from typical Czech text, especially because they mostly contain short telegraphic phrases and lack typical Czech sentence structure. They also contain many misspellings, acronyms and abbreviations. Another problem was the absence of a Czech translation of the main international classification systems. Therefore I decided to continue the research by developing a method for pre-processing the input text for translation and its semantic annotation. The main objective of this part of the research was to propose a method and support software for interactive correction...
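A toy, hypothetical sketch of the pre-processing idea described above: expand clinic-report abbreviations before further NLP. The abbreviation dictionary and sample report are invented for illustration; a real system needs a curated, domain-specific list and must also handle Czech inflection.

```python
import re

# Invented abbreviation dictionary (abbreviation -> expanded base form).
abbreviations = {
    "pac.": "pacient",     # patient
    "dg.": "diagnóza",     # diagnosis
    "tbl.": "tableta",     # tablet
}

def expand(text):
    """Replace known abbreviations with their expanded forms (case-insensitive)."""
    for abbr, full in abbreviations.items():
        text = re.sub(re.escape(abbr), full, text, flags=re.IGNORECASE)
    return text

print(expand("Pac. přichází s dg. hypertenze, užívá 1 tbl. denně."))
```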
