  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
971

Inference of string mappings for speech technology

Jansche, Martin 15 October 2003 (has links)
No description available.
972

Email Thread Summarization with Conditional Random Fields

Shockley, Darla Magdalene 23 August 2010 (has links)
No description available.
973

Detecting and Diagnosing Grammatical Errors for Beginning Learners of German: From Learner Corpus Annotation to Constraint Satisfaction Problems

Boyd, Adriane Amelia 06 January 2012 (has links)
No description available.
974

AN APPROACH TO ANSWERING NATURAL LANGUAGE QUESTIONS IN PORTUGUESE FROM ONTOLOGIES AND KNOWLEDGE BASES

ALYSSON GOMES DE SOUSA 29 April 2020 (has links)
In recent years we have seen the growth of the volume of unstructured data generated on the traditional Web. The Semantic Web was therefore born as a paradigm that proposes to structure the content of the Web flexibly, through domain ontologies and the RDF model, making computers capable of automatically processing this data and enabling the generation of more information and knowledge. However, to make this information accessible to users in other domains, there needs to be a more convenient way to query these knowledge bases. The Natural Language Processing (NLP) field has provided tools that allow natural language (spoken or written) to serve as a convenient means of querying knowledge bases. However, for the use of natural language to be truly effective, a method is required that converts a natural language question or request into a structured query. With this objective, the present work proposes an approach that converts a question/request in Portuguese into a structured query in the SPARQL language, through the use of dependency trees and graph-structured ontologies, and that also enables the enrichment of question/request results by generating related questions.
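A minimal sketch of the final step the abstract describes: turning an entity and property, already extracted from the dependency tree of a Portuguese question, into a SPARQL query. The toy graph, namespace, and property names below are invented for illustration and are far simpler than the thesis's actual pipeline.

```python
from rdflib import Graph, Literal, Namespace, RDFS

EX = Namespace("http://example.org/ontologia#")  # assumed namespace

# Toy knowledge base standing in for a real domain ontology.
g = Graph()
g.add((EX.Brasil, RDFS.label, Literal("Brasil", lang="pt")))
g.add((EX.Brasil, EX.capital, Literal("Brasília", lang="pt")))

def build_sparql(entity_label: str, property_name: str) -> str:
    """Fill a SPARQL template with the entity label and property name
    extracted from the dependency tree of the Portuguese question."""
    return f"""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ex: <http://example.org/ontologia#>
    SELECT ?valor WHERE {{
        ?s rdfs:label "{entity_label}"@pt .
        ?s ex:{property_name} ?valor .
    }}"""

# e.g. "Qual é a capital do Brasil?" -> entity "Brasil", property "capital"
for row in g.query(build_sparql("Brasil", "capital")):
    print(row.valor)  # -> Brasília
```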
975

Named Entity Recognition for Search Queries in the Music Domain

Liljeqvist, Sandra January 2016 (has links)
This thesis addresses the problem of named entity recognition (NER) in music-related search queries. NER is the task of identifying keywords in text and classifying them into predefined categories. Previous work in the field has mainly focused on longer documents of editorial text. In recent years, however, the application of NER to queries has attracted increased attention. This task is acknowledged to be challenging because queries are short, ungrammatical and contain minimal linguistic context. NER for queries is especially useful for implementing natural language queries in domain-specific search applications. These applications are often backed by a database, where the query format is otherwise restricted to keyword search or a formal query language. In this thesis, two techniques for NER on music-related queries are evaluated: a conditional random field based solution and a probabilistic solution based on context words. As a baseline, the most elementary implementation of NER, commonly applied to editorial text, is used. Both of the evaluated approaches outperform the baseline and achieve overall F1 scores of 79.2% and 63.4%, respectively. The experimental results show high precision for the probabilistic approach, and the conditional random field based solution demonstrates an F1 score comparable to previous studies from other domains.
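A minimal sketch of the conditional-random-field approach using the sklearn-crfsuite library. The feature set, label scheme (B-TRACK/B-ARTIST/O), and toy training query are assumptions for illustration, not the thesis's actual configuration.

```python
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple per-token features for a short search query."""
    w = tokens[i]
    return {
        "lower": w.lower(),
        "is_digit": w.isdigit(),
        "prefix3": w[:3].lower(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Toy training data: tokenized queries labeled with assumed entity tags.
queries = [["play", "hotel", "california", "by", "eagles"]]
labels = [["O", "B-TRACK", "I-TRACK", "O", "B-ARTIST"]]

X = [[token_features(q, i) for i in range(len(q))] for q in queries]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, labels)

test = ["play", "eagles"]
print(crf.predict([[token_features(test, i) for i in range(len(test))]]))
```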
976

Mining Heterogeneous Electronic Health Records Data

Bai, Tian January 2019 (has links)
Electronic health record (EHR) systems are used by medical providers to streamline the workflow and enable sharing of patient data with different providers. Beyond that primary purpose, EHR data have been used in healthcare research for exploratory and predictive analytics. EHR data are heterogeneous collections of both structured and unstructured information. In order to store data in a structured way, several ontologies have been developed to describe diagnoses and treatments. On the other hand, the unstructured clinical notes contain more nuanced information about patients. The multidimensionality and complexity of EHR data pose many unique challenges and problems for both the data mining and medical communities. In this thesis, we address several important issues and develop novel deep learning approaches in order to extract insightful knowledge from these data. Representing words as low-dimensional vectors is very useful in many natural language processing tasks. This idea has been extended to the medical domain, where medical codes listed in medical claims are represented as vectors to facilitate exploratory analysis and predictive modeling. However, depending on the type of medical provider, medical claims can use medical codes from different ontologies or from a combination of ontologies, which complicates learning of the representations. To be able to properly utilize such multi-source medical claim data, we propose an approach that represents medical codes from different ontologies in the same vector space. The new approach was evaluated on the code cross-reference problem, which aims at identifying similar codes across different ontologies. In our experiments, we show that the proposed approach provides superior cross-referencing compared to several existing approaches. Furthermore, since EHR data also contain unstructured clinical notes, we propose a method that jointly learns medical concept and word representations. The jointly learned representations of medical codes and words can be used to extract phenotypes of different diseases. Various deep learning models have recently been applied to predictive modeling of EHR data. In EHR data, each patient is represented as a sequence of temporally ordered, irregularly sampled visits to health providers, where each visit is recorded as an unordered set of medical codes specifying the patient's diagnosis and the treatment provided during the visit. We propose a novel interpretable deep learning model, called Timeline. The main novelty of Timeline is a mechanism that learns time decay factors for every medical code. We evaluated Timeline on two large-scale real-world data sets. The specific task was to predict the primary diagnosis category of the next hospital visit given previous visits. Our results show that Timeline has higher accuracy than state-of-the-art RNN-based deep learning models. Clinical notes contain detailed information about the health status of patients for each of their encounters with a health system. Developing effective models to automatically assign medical codes to clinical notes has been a long-standing active research area.
Considering the large number of online disease knowledge sources, which contain detailed information about signs and symptoms of different diseases, their risk factors, and epidemiology, we consider Wikipedia as an external knowledge source and propose Knowledge Source Integration (KSI), a novel end-to-end code assignment framework that can integrate external knowledge during training of any baseline deep learning model. To evaluate KSI, we experimented with automatic assignment of ICD-9 diagnosis codes to clinical notes, aided by Wikipedia documents corresponding to the ICD-9 codes. The results show that KSI consistently improves the baseline models and is particularly successful at predicting rare codes. / Computer and Information Science
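The abstract does not give implementation details, but the general idea of embedding medical codes by visit co-occurrence can be illustrated with a skip-gram model over visits treated as "sentences". The codes, corpus, and hyperparameters below are invented, and the thesis's actual models additionally align codes from different ontologies and words from clinical notes; this is only a sketch of the underlying representation-learning idea.

```python
from gensim.models import Word2Vec

# Toy corpus: each patient visit is an unordered "sentence" of medical codes
# (ICD-9 diagnoses mixed with hypothetical CPT procedure codes).
visits = [
    ["ICD9_250.00", "ICD9_401.9", "CPT_82947"],   # diabetes, hypertension, glucose test
    ["ICD9_250.00", "CPT_82947", "ICD9_272.4"],   # diabetes, glucose test, hyperlipidemia
    ["ICD9_401.9", "ICD9_272.4", "CPT_80061"],    # hypertension, hyperlipidemia, lipid panel
]

# Skip-gram embeddings: codes that co-occur within visits end up close together.
model = Word2Vec(sentences=visits, vector_size=64, window=5,
                 min_count=1, sg=1, epochs=50)

# Codes from different ontologies can now be compared in the same vector space.
print(model.wv.most_similar("ICD9_250.00", topn=2))
```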
977

Entity Information Extraction using Structured and Semi-structured resources

Sil, Avirup January 2014 (has links)
Among all the tasks that exist in Information Extraction, Entity Linking, also referred to as entity disambiguation or entity resolution, is a new and important problem which has recently caught the attention of many researchers in the Natural Language Processing (NLP) community. The task involves linking/matching a textual mention of a named entity (like a person or a movie name) to an appropriate entry in a database (e.g. Wikipedia or IMDB). If the database does not contain the entity, the system should return a NIL (out-of-database) value. Existing techniques for linking named entities in text mostly focus on Wikipedia as a target catalog of entities. Yet for many types of entities, such as restaurants and cult movies, relational databases exist that contain far more extensive information than Wikipedia. In this dissertation, we introduce a new framework, called Open-Database Entity Linking (Open-DB EL), in which a system must be able to resolve named entities to symbols in an arbitrary database, without requiring labeled data for each new database. In experiments on two domains, our Open-DB EL strategies outperform a state-of-the-art Wikipedia EL system by over 25% in accuracy. Existing approaches typically perform EL using a pipeline architecture: they use a Named-Entity Recognition (NER) system to find the boundaries of mentions in text, and an EL system to connect the mentions to entries in structured or semi-structured repositories like Wikipedia. However, the two tasks are tightly coupled, and each type of system can benefit significantly from the kind of information provided by the other. We propose and develop a joint model for NER and EL, called NEREL, that takes a large set of candidate mentions from typical NER systems and a large set of candidate entity links from EL systems, and ranks the candidate mention-entity pairs together to make joint predictions. In NER and EL experiments across three datasets, NEREL significantly outperforms or comes close to the performance of two state-of-the-art NER systems, and it outperforms 6 competing EL systems. On the benchmark MSNBC dataset, NEREL provides a 60% reduction in error over the next best NER system and a 68% reduction in error over the next-best EL system. We also extend the idea of using semi-structured resources to a relatively less explored area of entity information extraction. Most previous work on information extraction from text has focused on named-entity recognition, entity linking, and relation extraction. Much less attention has been paid to extracting the temporal scope of relations between named entities; for example, the relation president-Of (John F. Kennedy, USA) is true only in the time-frame (January 20, 1961 - November 22, 1963). In this dissertation we present a system for temporal scoping of relational facts, called TSRF, which is trained with distant supervision from the largest semi-structured resource available: Wikipedia. TSRF employs language models consisting of patterns automatically bootstrapped from sentences collected from Wikipedia pages that contain the main entity of a page and slot-fillers extracted from the infobox tuples. The proposed system achieves state-of-the-art results on 6 out of 7 relations on the benchmark Text Analysis Conference (TAC) 2013 dataset for the task of temporal slot filling (TSF). Overall, the system outperforms the next best system that participated in the TAC evaluation by 10 points on the TAC-TSF evaluation metric. / Computer and Information Science
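As an illustration of the open-database entity-linking setting only (not the dissertation's Open-DB EL or NEREL models), the sketch below scores database candidates for a textual mention by simple string similarity and returns NIL when nothing is close enough; the records, fields, and threshold are hypothetical.

```python
from difflib import SequenceMatcher

# Hypothetical target database of entities (e.g. a restaurant catalog).
DB = [
    {"id": 1, "name": "Joe's Pizza", "city": "New York"},
    {"id": 2, "name": "Joe's Diner", "city": "Chicago"},
]

def link_mention(mention: str, threshold: float = 0.75):
    """Return the best-matching DB entry for a mention, or 'NIL' if the
    best string-similarity score falls below the threshold."""
    def score(record):
        return SequenceMatcher(None, mention.lower(), record["name"].lower()).ratio()
    best = max(DB, key=score)
    return best if score(best) >= threshold else "NIL"

print(link_mention("joes pizza"))       # -> the Joe's Pizza record
print(link_mention("random burgers"))   # -> "NIL"
```

A real system would combine many more signals (popularity priors, context words around the mention, other database fields) and learn the weighting from data, but the NIL-aware candidate-ranking structure is the same.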
978

Conversational artificial intelligence - demystifying statistical vs linguistic NLP solutions

Panesar, Kulvinder 05 October 2020 (has links)
This paper aims to demystify the hype and attention around chatbots and their association with conversational artificial intelligence. Both are slowly emerging as a real presence in our lives thanks to impressive technological developments in machine learning, deep learning and natural language understanding solutions. However, our question is what lies under the hood, and how far and to what extent chatbot/conversational artificial intelligence solutions can actually work. Natural language is the most easily understood knowledge representation for people, but certainly not the best for computers because of its inherently ambiguous, complex and dynamic nature. We will critique the knowledge representation of heavily statistical chatbot solutions against linguistic alternatives. In order to react intelligently to the user, natural language solutions must critically consider other factors such as context, memory, intelligent understanding, previous experience, and personalized knowledge of the user. We will delve into the spectrum of conversational interfaces and focus on a strong artificial intelligence concept. This is explored via a text-based conversational software agent with a deep strategic role: to hold a conversation and enable the mechanisms needed to plan, decide what to do next, and manage the dialogue to achieve a goal. To demonstrate this, a deeply linguistically aware and knowledge-aware text-based conversational agent (LING-CSA) presents a proof of concept of a non-statistical conversational AI solution.
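To make the planning and dialogue-management idea concrete, here is a generic goal-directed dialogue-manager sketch; it is not the LING-CSA architecture described in the paper, and the slots and acts are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    """Tracks the conversation history and which goal slots remain unfilled."""
    goal_slots: dict = field(default_factory=lambda: {"date": None, "time": None})
    history: list = field(default_factory=list)

def next_action(state: DialogueState) -> str:
    """Plan the next system act: ask for the first missing slot, else finish."""
    for slot, value in state.goal_slots.items():
        if value is None:
            return f"ask({slot})"
    return "confirm_and_finish"

state = DialogueState()
state.history.append(("user", "I want to book a meeting"))
print(next_action(state))             # -> "ask(date)"
state.goal_slots["date"] = "Friday"
print(next_action(state))             # -> "ask(time)"
```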
979

Low-resource Semantic Role Labeling Through Improved Transfer Learning

Lindbäck, Hannes January 2024 (has links)
For several more complex tasks, such as semantic role labeling (SRL), large annotated datasets are necessary. For smaller and lower-resource languages, these are not readily available. As a way to overcome this data bottleneck, this thesis investigates the possibility of using transfer learning from a high-resource language to a low-resource language, and then performing zero-shot SRL on the low-resource language. We additionally investigate whether the transfer learning can be improved by freezing the parameters of layers in the pre-trained model, leveraging the model to instead focus on learning the parameters of the layers necessary for the task. By training models in English and then evaluating on Spanish, Catalan, German and Chinese CoNLL-2009 data, we find that zero-shot SRL via transfer learning can be an effective technique, and in certain cases outperforms models trained on small amounts of data. We also find that the results improve when freezing the parameters of the lower layers of the model, the layers focused on surface tasks, as this allows the model to improve the layers necessary for SRL.
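A minimal sketch of the layer-freezing idea using the Hugging Face transformers library; the checkpoint name, number of frozen layers, and task head are illustrative assumptions, not the thesis's exact setup.

```python
import torch
from transformers import AutoModel

# A multilingual encoder, so English-trained SRL can transfer zero-shot
# to Spanish, Catalan, German or Chinese (checkpoint choice is assumed).
encoder = AutoModel.from_pretrained("xlm-roberta-base")

# Freeze the embeddings and the lowest N encoder layers, which mostly capture
# surface-level features, so fine-tuning concentrates on the upper layers.
N_FROZEN = 4
for param in encoder.embeddings.parameters():
    param.requires_grad = False
for layer in encoder.encoder.layer[:N_FROZEN]:
    for param in layer.parameters():
        param.requires_grad = False

# A simple token-level SRL classification head on top of the partially frozen encoder.
num_roles = 20  # assumed size of the role label set
srl_head = torch.nn.Linear(encoder.config.hidden_size, num_roles)

trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
print(f"Trainable encoder parameters: {trainable:,}")
```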
980

Fine-tuning and evaluating a Swedish language model for automatic discharge summary generation from Swedish clinical notes

Berg, Nils January 2023 (has links)
Background: Healthcare professionals spend large amounts of time on documentation tasks in contemporary healthcare. One such documentation task is the discharge summary, which summarizes a care episode. However, research shows that many discharge summaries written today are of lacking quality. One method with the potential to alleviate the situation is natural language processing, specifically text summarization, as it could automatically summarize patient notes into a discharge summary.
Aim: This thesis aims to provide initial knowledge on the summarization of Swedish clinical text into discharge summaries. Furthermore, it aims to provide knowledge specifically on performing summarization using the Stockholm EPR Gastro ICD-10 Pseudo Corpus II dataset, consisting of Swedish electronic health record data.
Method: Using the design science framework, an artefact was produced in the form of a model, based on a pre-trained Swedish BART model, which can summarize patient notes into a discharge summary. This model was developed using the Hugging Face library and evaluated both via ROUGE scores and via a manual evaluation performed by a now retired healthcare professional.
Results: The discharge summaries produced from a test set by the artefact model achieved ROUGE-1/2/L/S scores of 0.280/0.057/0.122/0.068. The manual evaluation implies that the artefact is prone to fail to accurately include clinically important information, that it produces text with low readability, and that it is very prone to produce severe hallucinations.
Conclusion: The artefact's performance is worse, in terms of ROUGE scores, than the results of previous studies on summarizing patient notes into discharge summaries. The manual evaluation suggests several shortcomings in its ability to accurately summarize a care episode. Since this was the first major work conducted on text summarization using the Stockholm EPR Gastro ICD-10 Pseudo Corpus II dataset, there are many possible directions for future work.
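A rough sketch of how such a summarizer could be run and scored with the Hugging Face transformers and evaluate libraries; the checkpoint path is a placeholder (not the thesis's actual fine-tuned model) and the note/summary pair is invented.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import evaluate

# Placeholder: replace with a real fine-tuned Swedish BART checkpoint.
MODEL_NAME = "path/to/swedish-bart-discharge-summary"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def summarize(notes: str, max_new_tokens: int = 256) -> str:
    """Generate a discharge summary from concatenated patient notes."""
    inputs = tokenizer(notes, return_tensors="pt", truncation=True, max_length=1024)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Invented example pair; a real evaluation would iterate over the whole test set.
prediction = summarize("Patienten inkom med buksmärta ...")
reference = "Patienten vårdades för buksmärta och förbättrades under vårdtiden."

rouge = evaluate.load("rouge")  # requires the rouge_score package
scores = rouge.compute(predictions=[prediction], references=[reference])
print(scores)  # rouge1, rouge2, rougeL, rougeLsum
```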
