31

An analysis of hierarchical text classification using word embeddings

Stein, Roger Alan 28 March 2018
Efficient distributed numerical word representation models (word embeddings) combined with modern machine learning algorithms have recently yielded considerable improvements in automatic document classification tasks. However, the effectiveness of such techniques had not yet been assessed for hierarchical text classification (HTC). This study investigates the application of those models and algorithms to this specific problem through experimentation and analysis. Classification models were trained with prominent machine learning algorithm implementations (fastText, XGBoost, and Keras' CNN) and notable word embedding generation methods (GloVe, word2vec, and fastText) on publicly available data, and evaluated with measures specifically appropriate for the hierarchical context. FastText achieved an LCA F1 of 0.871 on a single-labeled version of the RCV1 dataset. The analysis of the results indicates that word embeddings are a very promising approach for HTC.
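
The LCA-based F1 reported above can be made concrete for the single-label case. Below is a minimal sketch, assuming a toy topic tree and a simple `parents` map; the actual evaluation uses the RCV1 topic hierarchy and the measure's full multi-label definition.

```python
# Sketch of LCA-based F1 for single-label hierarchical classification.
# The topic tree below is hypothetical; RCV1 evaluation would use the
# dataset's official hierarchy.

def ancestors(label, parents):
    """Set containing `label` and all of its ancestors (root's parent is None)."""
    seen = set()
    while label is not None:
        seen.add(label)
        label = parents.get(label)
    return seen

def path_up_to(label, stop_set, parents):
    """Nodes from `label` up to the first node in `stop_set`, inclusive."""
    path = set()
    while label is not None:
        path.add(label)
        if label in stop_set:
            break
        label = parents.get(label)
    return path

def lca_f1(true_label, pred_label, parents):
    """F1 over the ancestor paths truncated at the lowest common ancestor."""
    common = ancestors(true_label, parents) & ancestors(pred_label, parents)
    t_aug = path_up_to(true_label, common, parents)   # true label .. LCA
    p_aug = path_up_to(pred_label, common, parents)   # predicted label .. LCA
    overlap = len(t_aug & p_aug)                      # just the LCA if labels differ
    p, r = overlap / len(p_aug), overlap / len(t_aug)
    return 2 * p * r / (p + r) if p + r else 0.0

# Hypothetical tree: markets -> {equities, bonds}
parents = {"markets": None, "equities": "markets", "bonds": "markets"}
print(lca_f1("equities", "bonds", parents))     # sibling confusion scores 0.5
print(lca_f1("equities", "equities", parents))  # exact match scores 1.0
```
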
32

Biomedical Concept Association and Clustering Using Word Embeddings

Setu Shah 12 February 2019
Biomedical data exists in the form of journal articles, research studies, electronic health records, care guidelines, etc. While text mining and natural language processing tools have been widely employed across various domains, they are only just taking off in the healthcare space.

A primary hurdle in building artificial intelligence models that use biomedical data is the limited amount of labelled data available. Since most models rely on supervised or semi-supervised methods, generating large amounts of pre-processed labelled data for training purposes is extremely costly. Even for datasets that are labelled, the lack of normalization of biomedical concepts further affects the quality of the results produced and limits their application to a restricted dataset. This hurts the reproducibility of results and techniques across datasets, making it difficult to deploy research solutions that improve healthcare services.

The research presented in this thesis focuses on reducing the need to create labels for biomedical text mining by using unsupervised recurrent neural networks. The proposed method uses word embeddings to generate vector representations of biomedical concepts based on semantics and context. Experiments with unsupervised clustering of these biomedical concepts show that similar concepts are clustered together. This clustering captures different synonyms of the same concept, as well as the similarities between diseases and their associated symptoms.

To test the performance of the concept vectors on corpora of documents, a document vector generation method that builds on these concept vectors is also proposed. The resulting document vectors are used as input to clustering algorithms, and the results show that, across multiple corpora, the proposed methods of concept and document vector generation outperform the baselines and provide more meaningful clusters. The applications of this document clustering are wide-ranging, especially in search and retrieval, giving clinicians, researchers and patients more holistic and comprehensive results than matching only the exact terms they search for.

Finally, a framework is presented for extracting, from preventive care guidelines, clinical information that can be mapped to electronic health records. The extracted information can be integrated with the clinical decision support system of an electronic health record. A visualization tool for better understanding and observing patient trajectories is also explored. Both methods have the potential to improve the preventive care services provided to patients.
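
As a rough illustration of the document-vector idea described above, the sketch below averages the embeddings of a document's (already extracted) concepts and clusters the results. The concept vocabulary, random vectors, and cluster count are placeholders, not the thesis's actual pipeline.

```python
# Minimal sketch: document vectors as averages of concept vectors, then clustering.
import numpy as np
from sklearn.cluster import KMeans

def document_vector(concepts, concept_vectors, dim=100):
    """Average the vectors of a document's concepts; zero vector if none are known."""
    vecs = [concept_vectors[c] for c in concepts if c in concept_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Hypothetical concept embeddings (in practice, learned from clinical text).
rng = np.random.default_rng(0)
concept_vectors = {c: rng.normal(size=100) for c in
                   ["diabetes", "insulin", "hypertension", "aspirin"]}

docs = [["diabetes", "insulin"], ["hypertension", "aspirin"], ["diabetes"]]
X = np.vstack([document_vector(d, concept_vectors) for d in docs])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # documents sharing concepts should land in the same cluster
```
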
33

Data-driven language understanding for spoken dialogue systems

Mrkšić, Nikola January 2018
Spoken dialogue systems provide a natural conversational interface to computer applications. In recent years, substantial improvements in the performance of speech recognition engines have helped shift the research focus to the next component of the dialogue system pipeline: the one in charge of language understanding. The role of this module is to translate user inputs into accurate representations of the user goal, in a form that the system can use to interact with the underlying application. The challenges include modelling linguistic variation, speech recognition errors and the effects of dialogue context. Recently, the focus of language understanding research has moved to making use of word embeddings induced from large textual corpora using unsupervised methods. The work presented in this thesis demonstrates how these methods can be adapted to overcome the limitations of language understanding pipelines currently used in spoken dialogue systems.

The thesis starts with a discussion of the pros and cons of language understanding models used in modern dialogue systems. Most models in use today are based on the delexicalisation paradigm, where exact string matching supplemented by a list of domain-specific rephrasings is used to recognise users' intents and update the system's internal belief state. This is followed by an attempt to use pretrained word vector collections to automatically induce domain-specific semantic lexicons, which are typically hand-crafted to handle lexical variation and account for a plethora of system failure modes. The results highlight the deficiencies of distributional word vectors that must be overcome to make them useful for downstream language understanding models.

The thesis next shifts focus to overcoming the language understanding models' dependency on semantic lexicons. To achieve that, the proposed Neural Belief Tracking (NBT) model forsakes the standard one-hot n-gram representations used in Natural Language Processing in favour of distributed representations of user utterances, dialogue context and domain ontologies. The NBT model makes use of external lexical knowledge embedded in semantically specialised word vectors, obviating the need for domain-specific semantic lexicons. Subsequent work focuses on semantic specialisation, presenting an efficient method for injecting external lexical knowledge into word vector spaces. The proposed Attract-Repel algorithm boosts the semantic content of existing word vectors while simultaneously inducing high-quality cross-lingual word vector spaces. Finally, NBT models powered by specialised cross-lingual word vectors are used to train multilingual belief tracking models. These models operate across many languages at once, providing an efficient method for bootstrapping language understanding models for lower-resource languages with limited training data.
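
The Attract-Repel idea of injecting lexical constraints can be caricatured in a few lines: pull synonym pairs above a similarity margin, push antonym pairs below one, and regularise towards the original vectors. The margins, learning rate, and naive per-pair updates below are illustrative simplifications, not the published algorithm, which optimises a margin-based objective over mini-batches with negative examples.

```python
# Much-simplified sketch of semantic specialisation in the spirit of Attract-Repel.
import numpy as np

def specialise(vecs, synonyms, antonyms, orig, steps=10,
               attract_margin=0.8, repel_margin=0.0, lr=0.1, reg=0.05):
    for _ in range(steps):
        for a, b in synonyms:                 # attract: raise similarity
            if vecs[a] @ vecs[b] < attract_margin:
                vecs[a], vecs[b] = vecs[a] + lr * vecs[b], vecs[b] + lr * vecs[a]
        for a, b in antonyms:                 # repel: lower similarity
            if vecs[a] @ vecs[b] > repel_margin:
                vecs[a], vecs[b] = vecs[a] - lr * vecs[b], vecs[b] - lr * vecs[a]
        for w in vecs:                        # stay close to the original vectors
            vecs[w] += reg * (orig[w] - vecs[w])
            vecs[w] /= np.linalg.norm(vecs[w])  # unit length: dot product == cosine
    return vecs

rng = np.random.default_rng(0)
orig = {}
for w in ["cheap", "inexpensive", "expensive"]:  # toy vocabulary
    v = rng.normal(size=20)
    orig[w] = v / np.linalg.norm(v)
vecs = {w: v.copy() for w, v in orig.items()}
vecs = specialise(vecs, [("cheap", "inexpensive")], [("cheap", "expensive")], orig)
print(vecs["cheap"] @ vecs["inexpensive"], vecs["cheap"] @ vecs["expensive"])
```
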
34

Réduire la probabilité de disparité des termes en exploitant leurs relations sémantiques / Reducing Term Mismatch Probability by Exploiting Semantic Term Relations

Almasri, Mohannad 27 June 2017
Even though modern retrieval systems typically use a multitude of features to rank documents, the backbone of search ranking is usually a standard retrieval model. This thesis addresses a fundamental limitation of standard retrieval models: the term mismatch problem. Term mismatch is a long-standing problem in information retrieval, yet it has not been well understood how often mismatch happens in retrieval, how important it is, or how it affects retrieval performance. This thesis answers those questions.

The research is enabled by a formal definition: term mismatch is the probability that a term does not appear in a document given that the document is relevant. We propose several approaches for reducing the term mismatch probability by modifying documents or queries, followed by a quantitative analysis that shows how much each proposed approach reduces the mismatch probability while maintaining system performance. An essential component in achieving this reduction is the knowledge resource that defines terms and their relationships.

First, we propose a document modification approach driven by the user query. The main idea is to deal only with mismatched query terms: while prior research on document enrichment modifies documents statically, we modify a document only in case of mismatch. The modified document is then used in a standard retrieval model, yielding a mismatch-aware retrieval model.

Second, we propose a semantic query expansion approach based on a collaborative knowledge resource. We focus on the structure of the collaborative resource to obtain expansion terms that help reduce the term mismatch probability and, as a result, improve search effectiveness.

Third, we propose a query expansion approach based on neural language models. Such models learn vector representations of terms, called distributed neural embeddings, which capture relationships between terms and have achieved impressive results compared with state-of-the-art approaches on term similarity tasks. Their use in information retrieval, however, is only beginning, and we propose using distributed neural embeddings as the knowledge resource in a query expansion scenario.

Fourth, we apply the term mismatch probability definition to each of the above contributions. We show how standard retrieval corpora with queries and relevance judgments can be used to estimate the term mismatch probability. We first estimate it on the original documents and queries, characterising how the mismatch problem manifests in search systems for different types of indexing terms, and then show how much our contributions reduce the estimated mismatch probability and improve system recall.

As a result, we show how the modified document and query representations contribute to a mismatch-aware retrieval model that mitigates the term mismatch problem both theoretically and practically. Our experiments are conducted on corpora from two domains (medical and cultural heritage), use two types of indexing terms (words and concepts), and exploit several types of relationships between indexing terms: hierarchical relationships, relationships based on a collaborative resource structure, and relationships defined by distributed neural embeddings. Promising research directions are identified where term mismatch research may have a significant impact on improving search scenarios.
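
The formal definition above, the probability that a term is absent given that the document is relevant, admits a direct empirical estimate from relevance judgments. A minimal sketch with made-up TREC-style structures (queries, qrels, indexed terms):

```python
# Minimal sketch of estimating per-term mismatch probability from judgments.
from collections import defaultdict

def mismatch_probability(queries, qrels, doc_terms):
    """For each query term, the fraction of relevant documents lacking the term.

    queries:   {qid: [term, ...]}
    qrels:     {qid: [docid of a relevant document, ...]}
    doc_terms: {docid: set of indexing terms}
    """
    stats = defaultdict(lambda: [0, 0])          # term -> [misses, relevant docs]
    for qid, terms in queries.items():
        for doc in qrels.get(qid, []):
            for t in terms:
                stats[t][1] += 1
                if t not in doc_terms[doc]:
                    stats[t][0] += 1
    return {t: miss / total for t, (miss, total) in stats.items() if total}

# Toy example: "car" mismatches in d1, which says "auto" instead.
queries = {"q1": ["car", "insurance"]}
qrels = {"q1": ["d1", "d2"]}
doc_terms = {"d1": {"auto", "insurance"}, "d2": {"car", "insurance"}}
print(mismatch_probability(queries, qrels, doc_terms))
# {'car': 0.5, 'insurance': 0.0}
```
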
35

Semantiska modeller för syntetisk textgenerering - en jämförelsestudie / Semantic Models for Synthetic Textgeneration - A Comparative Study

Åkerström, Joakim, Peñaloza Aravena, Carlos January 2018
The ability to express one's feelings in words, or to move others with them, has always been an admired and rare gift. This project involves creating a text generator capable of writing text in the style of remarkable men and women who had this gift. This was done by training a neural network on quotes written by outstanding people such as Oscar Wilde, Mark Twain and Charles Dickens. The network works with two different semantic models, Word2Vec and one-hot encoding, and together these parts make up our text generator. A survey was carried out on the generated texts to collect students' opinions of their quality and thereby evaluate the suitability of the two semantic models. Analysis of the results showed that most respondents found the texts coherent and fun to read, and that Word2Vec performed better than the one-hot model, though not by an order of magnitude.
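
A minimal sketch of the set-up compared above: the same LSTM next-word generator fed either a dense embedding layer (the Word2Vec-style variant, whose weights could be pretrained) or one-hot vectors. Sizes and hyperparameters are toy values, not the project's actual configuration.

```python
# Sketch: one LSTM language model, two interchangeable input representations.
import tensorflow as tf

vocab_size, seq_len, embed_dim = 500, 20, 64

def build_generator(use_embedding=True):
    inputs = tf.keras.Input(shape=(seq_len,), dtype="int32")
    if use_embedding:
        # Dense word vectors (could be initialised from a Word2Vec model).
        x = tf.keras.layers.Embedding(vocab_size, embed_dim)(inputs)
    else:
        # One-hot baseline: each token becomes a sparse vocab-sized vector.
        x = tf.keras.layers.Lambda(lambda t: tf.one_hot(t, vocab_size))(inputs)
    x = tf.keras.layers.LSTM(128)(x)
    outputs = tf.keras.layers.Dense(vocab_size, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    # Trained to predict the next word of each quote prefix.
    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
    return model

build_generator(use_embedding=True).summary()
```
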
36

Word Embeddings in Database Systems

Günther, Michael 18 November 2021
Research in natural language processing (NLP) has recently focused on the development of learned language models, called word embedding models, such as word2vec, fastText, and BERT. Pre-trained on large amounts of unstructured natural language text, these embedding models constitute a rich source of common knowledge about the domain of the training text. In the NLP community, significant improvements have been achieved by using those models together with deep neural network models. To let applications benefit from word embeddings, we extend the capabilities of traditional relational database systems, which are still by far the most common DBMSs but provide only limited text analysis features. Specifically, we implement (a) novel database operations involving embedding representations that allow a database user to exploit the knowledge encoded in word embedding models for advanced text analysis. Integrating those operations into the database query language enables users to construct queries that combine novel word embedding operations with the traditional query capabilities of SQL. To allow efficient retrieval of embedding representations and fast execution of the operations, we implement (b) novel search algorithms and index structures for approximated kNN-joins and integrate them into a relational database management system. Moreover, we investigate techniques to optimize embedding representations of text values in database systems, designing (c) a novel context adaptation algorithm that utilizes the structured data present in the database to enrich the embedding representations of text values and model their context-specific semantics. We also provide (d) support for selecting a word embedding model suitable for a user's application, via a data processing pipeline that constructs datasets for domain-specific word embedding evaluation. Finally, we propose (e) novel embedding techniques for pre-training on tabular data to support applications working with text values in tables. The proposed techniques model semantic relations arising from the alignment of words in tabular layouts that can hardly be derived from text documents, e.g., relations between a table schema and its body.
In this way, many applications can profit from the proposed embedding techniques, whether they employ embeddings in supervised machine learning models (e.g., to classify cells in spreadsheets) or through arithmetic operations on embedding vectors (e.g., table discovery applications).
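
As a toy illustration of operation integration like that described above, one can register an embedding similarity function as a user-defined function and call it directly from SQL. SQLite and the JSON vector encoding here are stand-ins for the full RDBMS integration the thesis describes; the vectors are made up.

```python
# Sketch: a cosine-similarity UDF over JSON-encoded embeddings, callable in SQL.
import json, math, sqlite3

def cosine(v1_json, v2_json):
    a, b = json.loads(v1_json), json.loads(v2_json)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

db = sqlite3.connect(":memory:")
db.create_function("cosine", 2, cosine)        # register the UDF for queries
db.execute("CREATE TABLE words (word TEXT, vec TEXT)")
db.executemany("INSERT INTO words VALUES (?, ?)", [
    ("king",  json.dumps([0.9, 0.1, 0.3])),    # toy vectors, not real embeddings
    ("queen", json.dumps([0.8, 0.2, 0.35])),
    ("car",   json.dumps([0.1, 0.9, 0.0])),
])
# Embedding-aware SQL: rank other words by similarity to 'king'.
rows = db.execute("""
    SELECT w2.word, cosine(w1.vec, w2.vec) AS sim
    FROM words w1, words w2
    WHERE w1.word = 'king' AND w2.word != 'king'
    ORDER BY sim DESC
""").fetchall()
print(rows)
```
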
37

Descriptive Music Search With Domain-Specific Word Embeddings / Deskriptiv musiksökning med domänspecifika ordinbäddningar

Liu, Alva January 2019
Descriptive search is a type of exploratory search that allows users to search for content by providing descriptors. Instead of having a specific target in mind, the user looks for a recommendation of items that match the given descriptors. However, in the music domain, descriptive words do not necessarily have the same semantic meaning as they have in a generic text corpus. In this study, we investigate whether we can train a shallow neural model on playlist data for descriptive music search, and whether the model can capture music-specific word semantics. We carry out three experiments to evaluate our model. The first and second experiments evaluate whether the model can predict tracks that are relevant to given search queries, and the third evaluates whether the model successfully captures domain-specific word semantics. From our experiments, we conclude that our model trained on playlist data can indeed capture music-specific word semantics and generate reasonable track predictions. For future work, we suggest exploring ways to re-rank the top results retrieved by the model and to diversify and/or personalize the ordering of the results.
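
A minimal sketch of the training idea: treat each playlist's descriptors and track identifiers as one token sequence, so a skip-gram model embeds both in the same space and descriptive queries can be matched to tracks. The playlists, ids, and hyperparameters below are invented for illustration; the thesis's actual model and preprocessing differ.

```python
# Sketch: descriptors and tracks co-trained in one embedding space.
from gensim.models import Word2Vec

playlists = [
    ["chill", "evening", "track:1042", "track:5531", "track:7780"],
    ["workout", "running", "track:3312", "track:5531", "track:9904"],
    ["chill", "study", "track:1042", "track:7780", "track:2217"],
]
model = Word2Vec(sentences=playlists, vector_size=32, window=10,
                 min_count=1, sg=1, epochs=50, seed=7)

# Descriptive search: rank tracks by similarity to a descriptor.
hits = [(w, s) for w, s in model.wv.most_similar("chill", topn=10)
        if w.startswith("track:")]
print(hits)
```
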
38

Evaluation of Sentence Representations in Semantic Text Similarity Tasks / Utvärdering av meningsrepresentation för semantisk textlikhet

Balzar Ekenbäck, Nils January 2021
This thesis explores methods of building sentence representations for semantic text similarity using word embeddings and benchmarks them against sentence-based evaluation test sets. Two methods were used to evaluate the representations: STS Benchmark and STS Benchmark converted to a binary similarity task. Results showed that preprocessing of the word vectors could significantly boost performance in both tasks, leading to the conclusion that word embeddings still provide an acceptable solution for specific applications. The study also concluded that the dataset used might not be ideal for this type of evaluation, as the sentence pairs in general had a high lexical overlap. To tackle this, the study suggests that a paraphrasing dataset could act as a complement, but further investigation would be needed.
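
A minimal sketch of the evaluated representation: sentence vectors as averaged word vectors, scored by cosine similarity and compared with gold STS ratings via Pearson correlation, the statistic STS Benchmark reports. The vectors and sentence pairs below are toy placeholders.

```python
# Sketch: averaged-word-vector sentence similarity scored against gold ratings.
import numpy as np
from scipy.stats import pearsonr

def sentence_vector(sentence, word_vectors, dim=50):
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

rng = np.random.default_rng(1)
word_vectors = {w: rng.normal(size=50) for w in
                "a man is playing guitar the dog runs fast running".split()}

pairs = [("a man is playing guitar", "the man is playing a guitar", 4.8),
         ("the dog runs fast", "a dog is running", 4.2),
         ("a man is playing guitar", "the dog runs fast", 0.6)]
predicted = [cosine(sentence_vector(s1, word_vectors),
                    sentence_vector(s2, word_vectors)) for s1, s2, _ in pairs]
gold = [g for _, _, g in pairs]
r, _ = pearsonr(predicted, gold)   # the correlation STS Benchmark reports
print(round(r, 3))
```
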
39

Designing a Question Answering System in the Domain of Swedish Technical Consulting Using Deep Learning / Design av ett frågebesvarande system inom svensk konsultverksamhet med användning av djupinlärning

Abrahamsson, Felix January 2018
Question answering systems are greatly sought after in many areas of industry. Unfortunately, as most research in Natural Language Processing is conducted in English, the applicability of such systems to other languages is limited. Moreover, these systems often struggle with long text sequences. This thesis explores the possibility of applying existing models to the Swedish language, in a domain where the syntax and semantics differ greatly from typical Swedish texts and where text length may vary arbitrarily. To solve these problems, transfer learning techniques and state-of-the-art question answering models are investigated, and a novel divide-and-conquer technique for processing long texts is developed. Results show that the transfer learning is partly unsuccessful, but that the system nevertheless performs reasonably well in the new domain. Furthermore, the system shows a large performance improvement on longer text sequences when the new technique is used.
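
A minimal sketch of the divide-and-conquer idea, assuming a generic extractive QA model: split the document into overlapping chunks, answer each, and keep the highest-scoring span. The chunk sizes and the `qa_model` callable are hypothetical stand-ins, not the thesis's exact method.

```python
# Sketch: chunked question answering over arbitrarily long documents.

def chunk(text, size=200, overlap=50):
    """Split into overlapping word windows so no answer is cut at a boundary."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def answer_long_text(question, document, qa_model):
    """Run the QA model per chunk and return the best-scoring answer."""
    best_answer, best_score = None, float("-inf")
    for passage in chunk(document):
        answer, score = qa_model(question, passage)  # model sees short input only
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer

# Toy stand-in model: score a passage by word overlap with the question.
toy_qa = lambda q, p: (p.split()[0], len(set(q.split()) & set(p.split())))
doc = " ".join(f"w{i}" for i in range(450)) + " the answer lives here"
print(answer_long_text("where does the answer live", doc, toy_qa))
```
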
40

Word embeddings for monolingual and cross-language domain-specific information retrieval / Ordinbäddningar för enspråkig och tvärspråklig domänspecifik informationssökning

Wigder, Chaya January 2018
Various studies have shown the usefulness of word embedding models for a wide variety of natural language processing tasks. This thesis examines how word embeddings can be incorporated into domain-specific search engines for both monolingual and cross-language search. This is done by testing various embedding model hyperparameters, as well as methods for weighting the relative importance of words to a document or query. In addition, methods for generating domain-specific bilingual embeddings are examined and tested. The system was compared to a baseline that used cosine similarity without word embeddings; for both the monolingual and bilingual search engines, the use of monolingual embedding models improved performance above the baseline. However, the bilingual embeddings, especially for domain-specific terms, tended to be of too poor quality to be used directly in the search engines.
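
One simple instance of the word weighting examined above is an IDF-weighted average of word embeddings for queries and documents, ranked by cosine similarity. The corpus, vectors, and weighting below are toy choices; the thesis compares several weighting schemes and embedding hyperparameters.

```python
# Sketch: IDF-weighted embedding averages for query-document ranking.
import math
import numpy as np

def idf_weights(docs):
    """Smoothed inverse document frequency per term."""
    n, df = len(docs), {}
    for d in docs:
        for w in set(d):
            df[w] = df.get(w, 0) + 1
    return {w: math.log(n / c) + 1.0 for w, c in df.items()}

def weighted_vector(words, vectors, idf, dim=50):
    """IDF-weighted average of the known words' embedding vectors."""
    known = [w for w in words if w in vectors]
    if not known:
        return np.zeros(dim)
    weights = np.array([idf.get(w, 1.0) for w in known])
    mat = np.vstack([vectors[w] for w in known])
    return weights @ mat / weights.sum()

def cosine(a, b):
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / d) if d else 0.0

rng = np.random.default_rng(3)
vectors = {w: rng.normal(size=50) for w in
           "solar panel efficiency battery storage grid".split()}
docs = [["solar", "panel", "efficiency"], ["battery", "storage", "grid"]]
idf = idf_weights(docs)

qv = weighted_vector(["solar", "efficiency"], vectors, idf)
ranking = sorted(range(len(docs)), reverse=True,
                 key=lambda i: cosine(qv, weighted_vector(docs[i], vectors, idf)))
print(ranking)   # expect document 0 ranked first
```
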
