231 |
Génération automatique de questionnaires à choix multiples pédagogiques : évaluation de l'homogénéité des options / Automatic generation of educational multiple-choice questions: evaluation of option homogeneity. Pho, Van-Minh, 24 September 2015
Recent years have seen a revival of Intelligent Tutoring Systems. For these systems to be widely used by teachers and learners, they must provide means of assisting teachers in their task of exercise generation. Among these exercises, Multiple-Choice Questions (MCQs) are very common. However, writing multiple-choice items that correctly assess a learner's level is a complex task. Guidelines have been developed for writing items manually, but an automatic evaluation of item quality would be a practical tool for teachers. We are interested in the automatic evaluation of distractor (wrong answer choice) quality. To this end, we studied the characteristics of relevant distractors in multiple-choice test writing guidelines. This study led us to consider the homogeneity of the distractors and the answer as an important criterion for validating distractors. Homogeneity is both syntactic and semantic. We validated this definition of homogeneity through an analysis of an MCQ corpus, and from this analysis we proposed methods for automatic recognition of syntactic and semantic homogeneity. We then focused on the semantic homogeneity of distractors. To estimate it automatically, we proposed a learning-to-rank model combining different semantic homogeneity measures. The evaluation of the model showed that our method estimates the semantic homogeneity of distractors more effectively than existing approaches.
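As an illustration of the kind of model described above, the following is a minimal sketch of pairwise learning-to-rank over homogeneity measures; the embeddings, features and preference pairs are invented stand-ins, not the thesis's actual data or feature set.

```python
# Sketch: rank distractor candidates by semantic homogeneity with the answer.
# The feature set and the pairwise learning-to-rank scheme are illustrative
# assumptions, not the thesis's exact model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in for pre-trained word embeddings (random vectors for the demo).
vocab = ["paris", "london", "berlin", "tuesday", "apple"]
emb = {w: rng.normal(size=50) for w in vocab}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def features(candidate, answer):
    # Each entry is one homogeneity measure; a real system would add more
    # (e.g., same part of speech, shared hypernym in a taxonomy).
    return np.array([
        cosine(emb[candidate], emb[answer]),   # distributional similarity
        len(candidate) / max(len(answer), 1),  # crude surface-form ratio
    ])

# Hypothetical preference pairs: (better_candidate, worse_candidate, answer).
prefs = [("london", "tuesday", "paris"), ("berlin", "apple", "paris")]
X, y = [], []
for better, worse, ans in prefs:
    X.append(features(better, ans) - features(worse, ans)); y.append(1)
    X.append(features(worse, ans) - features(better, ans)); y.append(0)

# Pairwise ranking reduces to classifying feature differences.
ranker = LogisticRegression().fit(X, y)
diff = features("london", "paris") - features("apple", "paris")
print("london preferred over apple:", ranker.decision_function([diff])[0] > 0)
```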
|
232 |
Drug Name Recognition in Reports on Concomitant Medication. Gräns, Arvid, January 2019
This thesis evaluates whether and how drug name recognition can be used to find drug names in verbatims from reports on concomitant medication in clinical trial studies. In clinical trials, reports on concomitant medication are written when a trial participant takes drugs other than the studied drug. This information needs to be coded against a drug reference dictionary. Coded verbatims were used to create the data needed to train the drug name recognition models in this thesis. Labels marking where in each verbatim the coded drug's name occurred were created using the Levenshtein distance. The drug name recognition models were trained and tested on these labelled verbatims. Drug name recognition was performed using a logistic regression model and a bidirectional long short-term memory model. The bidirectional long short-term memory model achieved the best result, with an F1 score of 82.5% on classifying which words in the verbatims were drug names. When the results were studied case by case, they showed that the bidirectional long short-term memory classifications sometimes outperformed the labels the model was trained on for single-word verbatims. The model was also tested on manually labelled gold-standard data, where it achieved an F1 score of 46.4%. The results indicate that a bidirectional long short-term memory model can be implemented for drug name recognition, but that label reliability is an issue in this thesis.
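The label-creation step lends itself to a short sketch. Below is a minimal illustration of tagging verbatim tokens by Levenshtein distance to the coded drug name; the distance threshold and the example verbatim are hypothetical, not taken from the thesis.

```python
# Sketch: mark which tokens of a coded verbatim most likely correspond to
# the dictionary drug name, using Levenshtein distance. Assumed threshold.
def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def label_tokens(verbatim: str, coded_name: str, max_rel_dist: float = 0.34):
    """Tag each token 1 (part of the drug name) or 0, by fuzzy match."""
    labelled = []
    for tok in verbatim.split():
        d = min(levenshtein(tok.lower(), part.lower())
                for part in coded_name.split())
        labelled.append((tok, int(d <= max_rel_dist * max(len(tok), 1))))
    return labelled

# Example: a misspelled verbatim coded to "Paracetamol" in the dictionary.
print(label_tokens("paracetamoll 500mg for headache", "Paracetamol"))
# -> [('paracetamoll', 1), ('500mg', 0), ('for', 0), ('headache', 0)]
```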
|
233 |
Domain Adaptation for Hypernym Discovery via Automatic Collection of Domain-Specific Training Data / Domänanpassning för identifiering av hypernymer via automatisk insamling av domänspecifikt träningsdata. Palm Myllylä, Johannes, January 2019
Identifying semantic relations in natural language text is an important component of many knowledge extraction systems. This thesis studies the task of hypernym discovery, i.e. discovering terms that are related by the hypernymy (is-a) relation. Specifically, it explores how state-of-the-art methods for hypernym discovery perform when applied in specific language domains. Current state-of-the-art methods for hypernym discovery are mostly supervised machine learning models that leverage distributional word representations such as word embeddings. These models require labelled training data in the form of term pairs known to be related by hypernymy, and such data is often not available when working with a specific language domain. This thesis presents experiments with an algorithm for automatic training data collection that leverages a pre-defined domain-specific vocabulary and the lexical resource WordNet to extract training pairs automatically. The thesis contributes experimental results on leveraging such automatically collected domain-specific training data for domain adaptation. Experiments are conducted in two domains: one with a large amount of text data, and another with a much smaller amount. Results show that the automatically collected training data has a positive impact on performance in both domains. The performance boost is most significant in the domain with a large amount of text data, with mean average precision increasing by up to 8 points.
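The collection idea can be illustrated with a short sketch. Assuming a hypothetical music-domain vocabulary, the code below harvests (term, hypernym) pairs from WordNet via NLTK; the one-level hypernym walk and the sense cutoff are simplifying assumptions, not the thesis's exact procedure.

```python
# Sketch: for each term in a pre-defined domain vocabulary, collect
# (term, hypernym) training pairs from WordNet.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

domain_vocabulary = ["guitar", "violin", "trumpet"]  # hypothetical domain

def collect_pairs(terms, max_senses=2):
    pairs = set()
    for term in terms:
        # Limit to the first few noun senses to reduce noise (assumption).
        for synset in wn.synsets(term, pos=wn.NOUN)[:max_senses]:
            for hyper in synset.hypernyms():
                for lemma in hyper.lemma_names():
                    pairs.add((term, lemma.replace("_", " ")))
    return sorted(pairs)

for hypo, hyper in collect_pairs(domain_vocabulary):
    print(f"{hypo} IS-A {hyper}")
# e.g. guitar IS-A stringed instrument, trumpet IS-A brass, ...
```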
|
234 |
Matching events and activities by integrating behavioral aspects and label analysis. Baier, Thomas; Di Ciccio, Claudio; Mendling, Jan; Weske, Mathias, 05 1900
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during the execution of a process. These event data can be used to analyze the process using process mining techniques to discover the real process, measure conformance to a given process model, or to enhance existing models with performance information. Mapping the produced events to activities of a given process model is essential for conformance checking, annotation and understanding of process mining results. In order to accomplish this mapping with low manual effort, we developed a semi-automatic approach that maps events to activities using insights from behavioral analysis and label analysis. The approach extracts Declare constraints from both the log and the model to build matching constraints to efficiently reduce the number of possible mappings. These mappings are further reduced using techniques from natural language processing, which allow for a matching based on labels and external knowledge sources. The evaluation with synthetic and real-life data demonstrates the effectiveness of the approach and its robustness toward non-conforming execution logs.
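The label-analysis side of the approach can be sketched briefly. The following illustration prunes candidate event-to-activity mappings by Jaccard similarity of label tokens; the Declare-constraint matching of the paper is not reproduced, and the event and activity labels are invented.

```python
# Sketch: reduce the space of possible event-to-activity mappings using
# label similarity only. Real systems also lemmatize and consult external
# knowledge sources ("received" vs "receive" would then match).
def tokens(label: str) -> set:
    return set(label.lower().replace("_", " ").split())

def candidate_mappings(event_labels, activity_labels, min_jaccard=0.2):
    mapping = {}
    for ev in event_labels:
        scored = []
        for act in activity_labels:
            t_ev, t_act = tokens(ev), tokens(act)
            jac = len(t_ev & t_act) / len(t_ev | t_act)
            if jac >= min_jaccard:           # prune unlikely mappings
                scored.append((jac, act))
        mapping[ev] = [a for _, a in sorted(scored, reverse=True)]
    return mapping

events = ["order_created", "payment received", "order shipped to customer"]
activities = ["Create Order", "Receive Payment", "Ship Order"]
print(candidate_mappings(events, activities))
```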
|
235 |
Extração de informações de narrativas clínicas / Clinical reports information retrieval. Oleynik, Michel, 02 October 2013
Clinical reports are usually written in natural language due to their descriptive power and ease of communication among specialists. Processing these data for knowledge discovery and statistical analysis requires information extraction techniques, already established for newswire texts but still rare in the medical domain. The present work aims at developing an automated classifier of pathology reports able to infer the topography and the morphology classes of a cancer using codes of the International Classification of Diseases for Oncology (ICD-O). Data provided by the A.C. Camargo Cancer Center in Sao Paulo were used for training and validation. Natural language processing (NLP) techniques combined with Bayesian classifiers were explored in pursuit of retrieval quality, evaluated by the F2 score. Scores above 74% for the topography group and 61% for the morphology group are reported, with only a small contribution from the NLP and smoothing techniques. The results corroborate similar studies and show that NLP tools need retraining for the medical domain.
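Below is a minimal sketch of the pipeline described above: bag-of-words features, a multinomial Bayes classifier with Laplace smoothing, and F2 as the evaluation measure, all via scikit-learn. The toy reports and ICD-O topography codes are fabricated placeholders, not data from the study.

```python
# Sketch: classify pathology report text into ICD-O topography codes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import fbeta_score

reports = [
    "infiltrating ductal carcinoma of the breast",
    "adenocarcinoma found in sigmoid colon biopsy",
    "breast tissue with lobular carcinoma in situ",
    "colon polyp with high grade dysplasia",
]
topography = ["C50", "C18", "C50", "C18"]  # hypothetical ICD-O codes

# alpha=1.0 is Laplace smoothing, one of the techniques evaluated above.
clf = make_pipeline(CountVectorizer(), MultinomialNB(alpha=1.0))
clf.fit(reports, topography)

print(clf.predict(["lobular carcinoma, left breast"]))  # expected: ['C50']
print(fbeta_score(topography, clf.predict(reports), beta=2, average="micro"))
```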
|
236 |
Development of new models for authorship recognition using complex networks / Desenvolvimento de novos modelos para reconhecimento de autoria com a utilização de redes complexas. Marinho, Vanessa Queiroz, 14 July 2017
Complex networks have been successfully applied to different fields, being the subject of study in areas that include, for example, physics and computer science. The finding that complex network methods can be used to analyze texts at their different levels of complexity has led to advances in natural language processing (NLP) tasks. Examples of applications analyzed with complex network methods are keyword identification, development of automatic summarizers, and authorship attribution systems. The latter task has been studied with some success through co-occurrence (or adjacency) networks, which connect only the closest words in the text. Despite this success, only a few works have attempted to extend this representation or employ different ones. Moreover, many approaches use a similar set of measurements to characterize the networks and do not combine their techniques with the ones traditionally used for authorship attribution. This Master's research proposes extensions to the traditional co-occurrence model and investigates new attributes and other representations (such as mesoscopic and named-entity networks) for the task. The connectivity information of function words is used to complement the characterization of authors' writing styles, as these words are relevant for the task. Finally, the main contribution of this research is the development of hybrid classifiers, called labelled motifs, which combine traditional factors with properties obtained from the topological analysis of complex networks. The relevance of these classifiers is verified in the contexts of authorship attribution and translationese identification. With this hybrid approach, we show that it is possible to improve the performance of network-based techniques when they are combined with the traditional ones usually employed in NLP. By adapting, combining and improving the model, not only was the performance of authorship attribution systems improved, but it also became possible to better understand which quantitative textual factors (measured through networks) can be used in stylometry studies. The advances obtained during this project may be useful in related applications, such as the analysis of stylistic inconsistencies, plagiarism detection, and the analysis of text complexity. Furthermore, most of the methods proposed in this work can be easily applied to many natural languages.
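The co-occurrence representation is easy to sketch. Below, adjacent words are linked into a network with networkx and a few topological measures are extracted as stylistic features; this illustrates only the base representation, not the hybrid labelled-motifs classifier.

```python
# Sketch: build a word co-occurrence (adjacency) network from a text and
# extract topological measures usable as stylometric features.
import networkx as nx

def cooccurrence_network(text: str) -> nx.Graph:
    words = [w.strip(".,;").lower() for w in text.split()]
    g = nx.Graph()
    for a, b in zip(words, words[1:]):  # link each word to its neighbor
        g.add_edge(a, b)
    return g

def stylistic_features(g: nx.Graph) -> dict:
    degrees = [d for _, d in g.degree()]
    return {
        "avg_degree": sum(degrees) / len(degrees),
        "clustering": nx.average_clustering(g),
        "density": nx.density(g),
    }

sample = "the quick brown fox jumps over the lazy dog, and the dog sleeps."
print(stylistic_features(cooccurrence_network(sample)))
```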
|
237 |
Exploração de métodos de sumarização automática multidocumento com base em conhecimento semântico-discursivo / Exploration of automatic methods for multi-document summarization using discourse models. Cardoso, Paula Christina Figueira, 05 September 2014
Multi-document summarization aims at producing a summary from a set of related texts, to be used by an individual and/or for a particular task. Nowadays, with the exponential growth of available information and people's need to obtain information in a short time, automatic summarization has received wide attention. It is known that a set of related texts contains redundant, contradictory and complementary information, which represents the multi-document phenomena. In each source text, the main subject is described in a sequence of subtopics. Furthermore, some sentences in the same text are more relevant than others. In this context, a multi-document summary is expected to consist of the relevant information that represents the whole set of texts. However, the strategies for automatic multi-document summarization adopted until now have used only the relationships between texts and disregarded the analysis of the textual structure of each source text, resulting in summaries that are less representative of the subtopics and less informative than they could be. In order to properly treat the relevance of information, the multi-document phenomena and the distribution of subtopics, this thesis investigated how to model the summarization process using semantic-discursive knowledge in content selection methods, and its impact on producing summaries that are more informative and more representative of the source texts. To formalize the semantic-discursive knowledge, the RST (Rhetorical Structure Theory) and CST (Cross-document Structure Theory) theories were adopted. To support the work, a multi-document corpus was annotated with RST and subtopics, constituting a new resource available to other researchers. From the corpus analysis, 10 methods for subtopic segmentation and 13 original methods for automatic summarization were proposed. The assessment of the subtopic segmentation methods showed that there is a strong relationship between the subtopic structure and the rhetorical analysis of a text. Regarding the assessment of the automatic summarization methods, the results indicate that using semantic-discursive knowledge in good content selection strategies positively affects the production of informative summaries.
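One way to picture subtopic-aware content selection is the round-robin sketch below: the most relevant sentence of each subtopic is taken in turn until a length budget is met. The relevance scores stand in for what RST/CST analysis would provide, and the whole scheme is an illustration rather than one of the 13 proposed methods.

```python
# Sketch: select summary content while covering all subtopics.
from collections import defaultdict

def select_content(sentences, budget_words):
    """sentences: list of (text, subtopic_id, relevance)."""
    by_topic = defaultdict(list)
    for text, topic, rel in sentences:
        by_topic[topic].append((rel, text))
    for topic in by_topic:
        by_topic[topic].sort(reverse=True)  # most relevant first

    summary, used = [], 0
    while any(by_topic.values()) and used < budget_words:
        for topic in sorted(by_topic):      # round-robin over subtopics
            if by_topic[topic]:
                _, text = by_topic[topic].pop(0)
                if used + len(text.split()) <= budget_words:
                    summary.append(text)
                    used += len(text.split())
    return summary

docs = [("Quake hits coastal city.", 1, 0.9),
        ("Rescue teams arrive from abroad.", 2, 0.8),
        ("The quake measured 6.1.", 1, 0.7),
        ("Hospitals report many injuries.", 3, 0.6)]
print(select_content(docs, budget_words=12))
```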
|
238 |
Wed 2.0: improving customer experience with wedding service providers through investigation of the ranking mechanism and sentiment analysis of user feedback on Instagram. Jäderlund, Maria, January 2019
Instagram is one of the main social platforms for business promotion. Millions of potential customers and endless visual marketing opportunities make Instagram a perfect place to increase online sales. There are many tools and mechanisms for promoting brands on Instagram, such as paid advertising or pre-generated sets of popular hashtags. In this regard, the presence and content of users' comments become an important socio-psychological factor in the motivation to buy or use a product or service. The goal of this degree project is to investigate natural language processing techniques applied to users' comments on Instagram, in order to design a new algorithm that adds content analysis to the list of feed-ranking factors. As it stands, a user has to read through posts on Instagram to get an idea of the quality of a product or service; a way to classify and rank products and services is therefore needed. We propose a new algorithm, called "Wed 2.0", that can assist consumers in their search for wedding services and products on Instagram. Data mining techniques and sentiment analysis are used to determine the mood of the comments and to structure user opinions, as well as to rank accounts based on this knowledge.
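A minimal sketch of the core ranking idea follows, with NLTK's VADER as a stand-in sentiment model; the accounts and comments are invented, and the thesis's actual model and ranking factors may differ.

```python
# Sketch: score each account's comments for sentiment and rank accounts
# by mean polarity.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

comments_by_account = {  # hypothetical wedding-service accounts
    "florist_a": ["Absolutely stunning bouquets!", "Late delivery, sadly."],
    "florist_b": ["Terrible service.", "Flowers wilted before the ceremony."],
}

def rank_accounts(data):
    scores = {acct: sum(sia.polarity_scores(c)["compound"] for c in cs) / len(cs)
              for acct, cs in data.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for account, score in rank_accounts(comments_by_account):
    print(f"{account}: {score:+.2f}")
```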
|
239 |
An evaluation of machine learning algorithms for tweet sentiment analysis. Unknown Date
Sentiment analysis of tweets is an application of mining Twitter, and is growing in popularity as a means of determining public opinion. Machine learning algorithms are used to perform sentiment analysis; however, data quality issues such as high dimensionality, class imbalance or noise may negatively impact classifier performance. Machine learning techniques exist for targeting these problems, but have not been applied to this domain, or have not been studied in detail. In this thesis we discuss research that has been conducted on tweet sentiment classification, its accompanying data concerns, and methods of addressing these concerns. We test the impact of feature selection, data sampling and ensemble techniques in an effort to improve classifier performance. We also evaluate the combination of feature selection and ensemble techniques and examine the effects of high dimensionality when combining multiple types of features. Additionally, we provide strategies and insights for potential avenues of future work. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2015 / FAU Electronic Theses and Dissertations Collection
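For illustration, the sketch below combines two of the techniques examined: chi-squared feature selection to reduce dimensionality, followed by a bagging ensemble, all in scikit-learn. The tweets and labels are invented placeholders, and the thesis's actual experimental setup is broader.

```python
# Sketch: feature selection plus an ensemble classifier for tweet sentiment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

tweets = ["love this phone", "worst service ever", "great game tonight",
          "so disappointed again", "amazing support team", "total waste of money"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(
    TfidfVectorizer(),
    SelectKBest(chi2, k=5),  # keep only the 5 most informative features
    BaggingClassifier(DecisionTreeClassifier(), n_estimators=10, random_state=0),
)
model.fit(tweets, labels)
print(model.predict(["love the support", "waste of time"]))
```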
|
240 |
Investigação de métodos de desambiguação lexical de sentidos de verbos do português do Brasil / Research of word sense disambiguation methods for verbs in Brazilian Portuguese. Cabezudo, Marco Antonio Sobrevilla, 28 August 2015
Word Sense Disambiguation (WSD) aims at identifying the most appropriate sense of a word in a given context, using a pre-specified sense repository. This task is important for other applications, such as machine translation. For English, WSD has been widely studied using different approaches and techniques; however, the task is still a challenge for researchers in semantics. Analyzing the performance of different methods by morphosyntactic class shows that not all classes yield the same results, with verbs obtaining the worst ones. Studies highlight that WSD methods use shallow information, whereas verbs need deeper information for their disambiguation, such as syntactic frames or selectional restrictions. For Portuguese, there are few works in WSD, and only recently have general-purpose methods been investigated. In addition, lexical resources focused on verbs have recently been developed. In this context, this Master's research investigated WSD methods for verbs in texts written in Brazilian Portuguese. In particular, traditional WSD methods were explored and, subsequently, linguistic knowledge from VerbNet.Br was incorporated into them. To support this research, the CSTNews corpus was annotated with verb senses using WordNet-Pr as the sense repository. The results showed that the explored WSD methods did not outperform the strongest baseline, and that incorporating VerbNet.Br knowledge yielded improvements in the methods, although these improvements were not statistically significant. Contributions of this work include the sense-annotated corpus, a tool to support sense annotation, the investigation of WSD methods and the use of verb-specific information (from VerbNet.Br) in the WSD of verbs.
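One of the traditional method families explored here can be sketched with NLTK's implementation of the Lesk algorithm, restricted to verb senses. The example is in English because the WordNet-Pr data shipped with NLTK is English; the VerbNet.Br frame features of the thesis are not reproduced.

```python
# Sketch: disambiguate a verb with the Lesk algorithm; pos="v" restricts
# the candidate senses to verbs, the class targeted by the thesis.
import nltk
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)

sentence = "The bank will not cash the check without identification"
tokens = sentence.lower().split()  # simple whitespace tokenization

sense = lesk(tokens, "cash", pos="v")
print(sense, "-", sense.definition() if sense else "no sense found")
```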
|