531 |
Domain-Agnostic Context-Aware Assistant Framework for Task-Based Environment. January 2020 (has links)
abstract: Smart home assistants are becoming the norm due to their ease of use. They employ spoken language as an interface, facilitating easy interaction with their users. Even with these obvious advantages, natural-language interfaces are not prevalent outside the domain of home assistants. They are hard to adopt for computer-controlled systems because of the numerous complexities involved in implementing them across varying fields. The main challenge is grounding natural-language terms in the underlying system's primitives. The existing systems that do use natural-language interfaces are specific to a single problem domain.
In this thesis, a domain-agnostic framework that creates natural language interfaces for computer-controlled systems has been developed by making the mapping between the language constructs and the system primitives customizable. The framework employs ontologies built using OWL (Web Ontology Language) for knowledge representation purposes and machine learning models for language processing tasks. It has been evaluated within a simulation environment consisting of objects and a robot. This environment has been deployed as a web application, providing anonymous user testing for evaluation and generating training data for the machine learning components. Performance has been evaluated on metrics such as the time taken to complete a task and the number of instructions the user gives the robot to accomplish it. Additionally, the framework has been used to create a natural language interface for a database system to demonstrate its domain independence. / Dissertation/Thesis / Masters Thesis Software Engineering 2020
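As a rough illustration of the customizable mapping the abstract describes, the sketch below grounds natural-language terms in system primitives through a swappable lookup table. The terms and primitive names ("grasp", "move_to") are invented for this example, not taken from the thesis.

```python
# Hypothetical term-to-primitive mapping; swapping in a different table
# retargets the same interface to a new domain.
PRIMITIVE_MAP = {
    "pick up": "grasp",
    "grab": "grasp",
    "go to": "move_to",
    "walk to": "move_to",
}

def ground(utterance, mapping):
    """Return the system primitives whose trigger terms occur in the utterance."""
    text = utterance.lower()
    return [prim for term, prim in mapping.items() if term in text]

print(ground("Pick up the red block and go to the table", PRIMITIVE_MAP))
```

A real framework would of course replace substring matching with the abstract's machine learning models and OWL ontology lookups; the point here is only that the mapping itself is data, not code.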
|
532 |
Toward Automatic Fact-Checking of Statistic Claims / Vers une vérification automatique des affirmations statistiques. Cao, Tien Duc, 26 September 2019 (has links)
The thesis explores models and algorithms for knowledge extraction and for interconnecting heterogeneous databases, applied to the kind of content management journalists encounter daily. The work takes place within the ANR project ContentCheck (2016-2019), which provides the funding and through which we also collaborate with "Les Décodeurs" (journalists specialized in fact-checking) at the newspaper Le Monde. The scientific approach of the thesis breaks down as follows: 1. Identify the content-management technologies and domains (text, data, knowledge) that recur (or whose need is felt to be important) in the work of journalists. It is already clear, for example, that they routinely use a few databases built "in house" by the journalists themselves; they also have internal (newsroom) keyword-search tools; however, they would like to increase their capacity for semantic indexing. Among these problems, identify those for which technical (computing) solutions are known and, where applicable, already implemented in existing systems. 2. Tackle the open research problems, for which satisfactory answers are lacking, related to modeling and to efficient algorithms for textual and semantic content and for data, in a journalistic context. / Digital content is increasingly produced nowadays in a variety of media such as news and social network sites, personal Web sites, blogs etc. In particular, a large and dynamic part of such content is related to media-worthy events, whether of general interest (e.g., the war in Syria) or of specialized interest to a sub-community of users (e.g., sport events or genetically modified organisms).
While such content is primarily meant for human users (readers), interest is growing in its automatic analysis, understanding and exploitation. Within the ANR project ContentCheck, we are interested in developing textual and semantic tools for analyzing content shared through digital media. The proposed PhD project takes place within this contract and will be developed based on interactions with our partner from Le Monde. The PhD project aims at developing algorithms and tools for:
1. Classifying and annotating mixed content (from articles, structured databases, social media etc.) based on an existing set of topics (or ontology);
2. Information and relation extraction from a text which may comprise a statement to be fact-checked, with a particular focus on capturing the time dimension; a sample statement is, for instance, "VAT on iron in France was the highest in Europe in 2015";
3. Building structured queries from extracted information and relations, to be evaluated against reference databases used as trusted information against which facts can be checked.
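The extraction and query-construction goals can be illustrated with a toy sketch on the sample statement above. The regular-expression pattern and the query field names are assumptions made for illustration, not the project's actual pipeline, which would rely on proper relation extraction.

```python
import re

CLAIM = "VAT on iron in France was the highest in Europe in 2015"

# Toy pattern pulling the measure, subject, place and year out of the claim;
# note the explicit capture of the time dimension the abstract highlights.
m = re.search(
    r"(?P<measure>\w+) on (?P<subject>\w+) in (?P<place>\w+).*?in (?P<year>\d{4})",
    CLAIM,
)
query = {
    "measure": m.group("measure"),
    "subject": m.group("subject"),
    "place": m.group("place"),
    "year": int(m.group("year")),  # evaluated against a trusted reference database
}
print(query)
```

Such a structured query could then be run against, say, a table of historical VAT rates to confirm or refute the claim.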
|
533 |
EXTRACTING SYMPTOMS FROM NARRATIVE TEXT USING ARTIFICIAL INTELLIGENCE. Priyanka Rakesh Gandhi (9713879), 07 January 2021 (has links)
Electronic health records collect an enormous amount of data about patients. However, the information about the patient's illness is stored in progress notes that are in an unstructured format. It is difficult for humans to annotate symptoms listed in the free text. Recently, researchers have explored how advances in deep learning can be applied to processing biomedical data. The information in the text can be extracted with the help of natural language processing. The research presented in this thesis aims at automating the process of symptom extraction. The proposed methods use pre-trained word embeddings such as BioWord2Vec, BERT, and BioBERT to generate vectors of the words based on the semantics and syntactic structure of sentences. BioWord2Vec embeddings are fed into a BiLSTM neural network with a CRF layer to capture the dependencies between co-related terms in the sentence. The pre-trained BERT and BioBERT embeddings are fed into the BERT model with a CRF layer to analyze the output tags of neighboring tokens. The research shows that, with the help of the CRF layer in neural network models, longer phrases of symptoms can be extracted from the text. The proposed models are compared with the UMLS MetaMap tool, which uses various sources to categorize the terms in the text into different semantic types, and with Stanford CoreNLP, a dependency parser that analyzes syntactic relations in the sentence to extract information. The performance of the models is analyzed using strict, relaxed, and n-gram evaluation schemes. The results show that BioBERT with a CRF layer can extract the majority of the human-labeled symptoms. Furthermore, the model is used to extract symptoms from COVID-19 tweets, where it was able to extract symptoms listed by the CDC as well as new symptoms.
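The span-extraction step downstream of the CRF layer can be sketched as BIO-tag decoding: the tagger emits one tag per token, and contiguous begin/inside runs are joined into the "longer phrases of symptoms" the abstract mentions. The tag set and example sentence below are invented for illustration.

```python
def decode_bio(tokens, tags):
    """Collect token runs tagged B-SYM (begin) / I-SYM (inside) into phrases."""
    phrases, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B-SYM":
            if current:
                phrases.append(" ".join(current))
            current = [tok]
        elif tag == "I-SYM" and current:
            current.append(tok)
        else:  # an "O" tag closes any open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

tokens = ["Patient", "reports", "shortness", "of", "breath", "and", "dry", "cough"]
tags   = ["O", "O", "B-SYM", "I-SYM", "I-SYM", "O", "B-SYM", "I-SYM"]
print(decode_bio(tokens, tags))
```

The I- tags are what let a multi-word phrase like "shortness of breath" survive as one symptom instead of three fragments, which is exactly the benefit the CRF layer provides over per-token classification.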
|
534 |
Mapping medical expressions to MedDRA using Natural Language Processing. Wallner, Vanja, January 2020 (has links)
Pharmacovigilance, also referred to as drug safety, is an important science for identifying risks related to medicine intake. Side effects of medicine can be caused by, for example, interactions, high dosage, and misuse. In order to find patterns in what causes the unwanted effects, information needs to be gathered and mapped to predefined terms. Today this mapping is done manually by experts, which can be a very difficult and time-consuming task. The aim of this thesis is to automate the process of mapping side effects by using machine learning techniques. The model was developed using information from pre-existing mappings of verbatim expressions of side effects. The final model made use of the pre-trained language model BERT, which has achieved state-of-the-art results within the NLP field. When evaluated on the test set, the final model achieved an accuracy of 80.21%. It was found that some verbatims were very difficult for our model to classify, mainly because of ambiguity or a lack of information contained in the verbatim. As it is very important for the mappings to be done correctly, a threshold was introduced which left the verbatims that were most difficult to classify for manual mapping. This process could still be improved, however, as the model generated suggested terms, which could be used as support for the specialist responsible for the manual mapping.
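The threshold mechanism described above can be sketched as follows. The MedDRA-like terms, scores, and threshold value are made up for illustration; in practice the scores would come from the BERT classifier's output distribution.

```python
def route(verbatim, scores, threshold=0.8):
    """Auto-map confident predictions; defer the rest to a specialist
    together with the model's top suggestions as support."""
    best_term, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score >= threshold:
        return ("auto", best_term)
    # Below threshold: route to manual mapping, keeping top-3 suggestions.
    suggestions = sorted(scores, key=scores.get, reverse=True)[:3]
    return ("manual", suggestions)

print(route("head hurts", {"Headache": 0.93, "Migraine": 0.05, "Head injury": 0.02}))
print(route("felt odd",   {"Malaise": 0.40, "Dizziness": 0.35, "Fatigue": 0.25}))
```

Raising the threshold trades specialist workload for mapping reliability, which is the tuning knob the abstract's last two sentences describe.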
|
535 |
Describing and retrieving visual content using natural language. Ramanishka, Vasili, 11 February 2021 (has links)
Modern deep learning methods have boosted research progress in visual recognition and text understanding but it is a non-trivial task to unite these advances from both disciplines. In this thesis, we develop models and techniques that allow us to connect natural language and visual content enabling automatic video subtitling, visual grounding, and text-based image search. Such models could be useful in a wide range of applications in robotics and human-computer interaction bridging the gap in vision and language understanding.
First, we develop a model that generates natural language descriptions of the main activities and scenes depicted in short videos. While previous methods were constrained to a predefined list of objects, actions, or attributes, our model learns to generate descriptions directly from raw pixels. The model exploits available audio information and the video’s category (e.g., cooking, movie, education) to generate more relevant and coherent sentences.
Then, we introduce a technique for visual grounding of generated sentences using the same video description model. Our approach allows for explaining the model’s prediction by localizing salient video regions for corresponding words in the generated sentence.
Lastly, we address the problem of image retrieval. Existing cross-modal retrieval methods work by learning a common embedding space for different modalities using parallel data such as images and their accompanying descriptions. Instead, we focus on the case when images are connected by relative annotations: given the context set as an image and its metadata, the user can specify desired semantic changes using natural language instructions. The model needs to capture distinctive visual differences between image pairs as described by the user. Our approach enables interactive image search such that the natural language feedback significantly improves the efficacy of image retrieval.
We show that the proposed methods advance the state-of-the-art for video captioning and image retrieval tasks in terms of both accuracy and interpretability.
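As a point of reference for the retrieval discussion above, the common-embedding-space baseline can be sketched in a few lines: both modalities map into one vector space, and images are ranked by cosine similarity to the query embedding. The tiny vectors below are invented stand-ins for real encoder outputs.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Pretend these came from an image encoder and a text encoder trained jointly.
image_index = {
    "beach.jpg":  [0.9, 0.1, 0.0],
    "forest.jpg": [0.1, 0.9, 0.1],
    "city.jpg":   [0.0, 0.2, 0.9],
}
query_embedding = [0.8, 0.2, 0.1]  # e.g. the embedding of "sunny beach"

ranked = sorted(image_index, key=lambda k: cosine(query_embedding, image_index[k]),
                reverse=True)
print(ranked[0])
```

The thesis departs from this baseline by conditioning on a context image plus natural-language feedback describing relative changes, rather than matching a standalone caption.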
|
536 |
A Study of Recurrent and Convolutional Neural Networks in the Native Language Identification Task. Werfelmann, Robert, 24 May 2018 (has links)
Native Language Identification (NLI) is the task of predicting the native language of an author from their text written in a second language. The idea is to find writing habits that transfer from an author’s native language to their second language. Many approaches to this task have been studied, from simple word frequency analysis, to analyzing grammatical and spelling mistakes to find patterns and traits that are common between different authors of the same native language. This can be a very complex task, depending on the native language and the proficiency of the author’s second language. The most common approach that has seen very good results is based on the usage of n-gram features of words and characters.
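The character n-gram features mentioned above as the most common approach can be sketched in a few lines of plain Python; this is a generic illustration, not the thesis's feature extractor.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Count overlapping character n-grams, a strong NLI feature set:
    they pick up spelling habits and morpheme transfer from the L1."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

feats = char_ngrams("the theme")
print(feats["the"])
```

A bag of such counts per essay is the kind of high-dimensional feature set the neural models in this thesis aim to replace with a much smaller learned representation.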
In this thesis, we attempt to extract lexical, grammatical, and semantic features from the sentences of non-native English essays using neural networks. The training and testing data was obtained from a large corpus of publicly available essays written by authors of several countries around the world. The neural network models consisted of Long Short-Term Memory and Convolutional networks using the sentences of each document as the input. Additional statistical features were generated from the text to complement the predictions of the neural networks, which were then used as feature inputs to a Support Vector Machine, making the final prediction.
Results show that a Long Short-Term Memory neural network can improve performance over a naive bag-of-words approach, but with a much smaller feature set. With more fine-tuning of the neural network hyperparameters, these results will likely improve significantly.
|
537 |
Generalizability of Electronic Health Record-Based Machine Learning Models. Wissel, Benjamin D., 05 October 2021 (has links)
No description available.
|
538 |
Construction and Visualization of Semantic Spaces for Domain-Specific Text Corpora. Choudhary, Rishabh R., 04 October 2021 (has links)
No description available.
|
539 |
EMERGENCY MEDICAL SERVICE EMR-DRIVEN CONCEPT EXTRACTION FROM NARRATIVE TEXT. Susanna S George (10947207), 05 August 2021 (has links)
In the midst of a pandemic, with patients whose minor symptoms quickly become fatal and with situations such as a STEMI heart attack or a fatal accident injury, the importance of medical research to improve the speed and efficiency of patient care has increased. As researchers in the computing domain work to apply automation in assisting first responders, decreasing the cognitive load on the field crew, reducing the time taken to document each patient case, and improving the accuracy of report details have been priorities.

This paper presents an information extraction algorithm that custom-engineers existing extraction techniques: MetaMap, which works on the principles of natural language processing; spaCy, a syntactic dependency parser, for analyzing sentence structure; and regular expressions for matching recurring patterns, all used to retrieve patient-specific information from medical narratives. These concept-value pairs automatically populate the fields of an EMR form, which can be reviewed and modified manually if needed. The report can then be reused for various medical and billing purposes related to the patient.
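A hedged sketch of the regular-expression component: patterns that pull concept-value pairs from an EMS narrative to pre-populate EMR fields. The patterns and field names are illustrative assumptions, not the actual rules engineered in the thesis.

```python
import re

# Toy patterns for recurring vital-sign phrasings in EMS narratives.
PATTERNS = {
    "blood_pressure": re.compile(r"\bBP\s*(\d{2,3}/\d{2,3})"),
    "heart_rate":     re.compile(r"\b(?:HR|pulse)\s*(\d{2,3})", re.IGNORECASE),
    "age":            re.compile(r"\b(\d{1,3})[- ]?(?:yo|year[- ]old)", re.IGNORECASE),
}

def extract(narrative):
    """Return field -> value for every pattern that matches the narrative."""
    found = {}
    for field, pat in PATTERNS.items():
        m = pat.search(narrative)
        if m:
            found[field] = m.group(1)
    return found

note = "58 yo male, chest pain, BP 150/90, HR 110, given aspirin en route"
print(extract(note))
```

In the pipeline the abstract describes, output like this would land directly in the corresponding EMR form fields, leaving the field crew to review rather than transcribe.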
|
540 |
Machine Learning Based Drug-Disease Relationship Prediction and Characterization. Yaddanapudi, Suryanarayana, 01 October 2019 (has links)
No description available.
|