51 |
A study of continuous word representations applied to the automatic detection of speech recognition errors. Ghannay, Sahar, 20 September 2017.
My thesis concerns a study of continuous word representations applied to the automatic detection of speech recognition errors. Our study focuses on the use of a neural approach to improve ASR error detection, using word embeddings. The exploitation of continuous word representations is motivated by the fact that ASR error detection consists in locating possible linguistic or acoustic incongruities in automatic transcriptions. The aim is therefore to find the appropriate word representation that makes it possible to capture pertinent information in order to detect these anomalies. Our contribution in this thesis covers several axes. First, we start with a preliminary study in which we propose a neural architecture able to integrate different types of features, including word embeddings. Second, we propose an in-depth study of continuous word representations, focusing on the one hand on the evaluation of different types of linguistic word embeddings and their combination in order to take advantage of their complementarities, and on the other hand on acoustic word embeddings. Then, we present a study on the analysis of classification errors, with the aim of identifying the errors that are difficult to detect. Perspectives for improving the performance of our system are also proposed, by modeling the errors at the sentence level. Finally, we exploit the linguistic and acoustic embeddings, as well as the information provided by our ASR error detection system, in several downstream applications.
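The abstract does not specify the architecture in detail; the following is a minimal, hypothetical sketch of a per-word error detector of the general shape described (an MLP over a word embedding concatenated with extra features). The embedding dimension and the choice of confidence features are assumptions, not the thesis's actual configuration.

```python
# Hypothetical sketch: tag each transcribed word as correct or erroneous
# from its embedding plus a few ASR-side features (all values assumed).
import torch
import torch.nn as nn

EMB_DIM = 100   # word embedding dimensionality (assumed)
N_EXTRA = 3     # e.g., ASR posterior, word duration, LM score (assumed)

class ErrorDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM + N_EXTRA, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),  # probability that the word is an ASR error
        )

    def forward(self, word_emb, extra_feats):
        # word_emb: (batch, EMB_DIM); extra_feats: (batch, N_EXTRA)
        return self.net(torch.cat([word_emb, extra_feats], dim=-1))

detector = ErrorDetector()
probs = detector(torch.randn(8, EMB_DIM), torch.rand(8, N_EXTRA))
print(probs.squeeze(-1))
```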
|
52 |
Complex-Valued Embedding Models for Knowledge Graphs. Trouillon, Théo, 29 September 2017.
The explosion of widely available relational data in the form of knowledge graphs has enabled many applications, including automated personal agents, recommender systems, and enhanced web search results. The very large size and notorious incompleteness of these databases call for automatic knowledge graph completion methods to make these applications viable. Knowledge graph completion, also known as link prediction, deals with automatically understanding the structure of large knowledge graphs (labeled directed graphs) to predict missing entries (labeled edges). An increasingly popular approach consists in representing a knowledge graph as a third-order tensor and using tensor factorization methods to predict its missing entries. State-of-the-art factorization models propose different trade-offs between modeling expressiveness and time and space complexity. We introduce a new model, ComplEx (for Complex Embeddings), to reconcile expressiveness and complexity through the use of complex-valued factorization, and we explore its link with unitary diagonalization. We corroborate our approach theoretically and show that all possible knowledge graphs can be exactly decomposed by the proposed model. Our approach based on complex embeddings is arguably simple, as it only involves a complex-valued trilinear product, whereas other methods resort to increasingly complicated composition functions to increase their expressiveness. The proposed ComplEx model is scalable to large datasets, as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link-prediction benchmarks. We also demonstrate its ability to learn useful vectorial representations for other tasks, by enhancing word embeddings that improve performance on the natural language problem of recognizing entailment between pairs of sentences. In the last part of this thesis, we explore the ability of factorization models to learn relational patterns from observed data. Because of their vectorial nature, it is hard to interpret not only why this class of models works so well, but also where they fail and how they might be improved. We conduct an experimental survey of state-of-the-art models, not towards a purely comparative end, but as a means to gain insight into their inductive abilities. To assess the strengths and weaknesses of each model, we create simple tasks that exhibit, first, atomic properties of knowledge graph relations and, then, common inter-relational inference patterns through synthetic genealogies. Based on these experimental results, we propose new research directions to improve on existing models, including ComplEx.
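The complex trilinear product mentioned in the abstract is the published ComplEx scoring function, score(s, r, o) = Re(⟨w_r, e_s, conj(e_o)⟩). The sketch below illustrates it in numpy; the embedding dimension, entity/relation counts, and random embeddings are illustrative only.

```python
# ComplEx scoring: the real part of a complex trilinear product ranks
# how plausible a (subject, relation, object) triple is.
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # rank of the factorization (illustrative)

def complex_embedding(n, d):
    return rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))

entities = complex_embedding(1000, dim)   # e_s and e_o live here
relations = complex_embedding(100, dim)   # w_r lives here

def score(s, r, o):
    # Re(<w_r, e_s, conj(e_o)>): higher means a more plausible edge.
    return float(np.real(
        np.sum(relations[r] * entities[s] * np.conj(entities[o]))))

print(score(0, 3, 42))
```

Using the conjugate on the object side is what lets a single diagonal (linear-time) factorization model asymmetric relations, which purely real diagonal models cannot.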
|
53 |
Word Embeddings in Database Systems. Günther, Michael, 18 November 2021.
Research in natural language processing (NLP) has recently focused on the development of learned language models called word embedding models, such as word2vec, fastText, and BERT. Pre-trained on large amounts of unstructured natural language text, these embedding models constitute a rich source of common knowledge about the domain of the training text. In the NLP community, significant improvements are achieved by using those models together with deep neural network models. To let applications benefit from word embeddings, we extend the capabilities of traditional relational database systems, which are still by far the most common DBMSs but provide only limited text analysis features. Specifically, we implement (a) novel database operations involving embedding representations, allowing a database user to exploit the knowledge encoded in word embedding models for advanced text analysis operations. Integrating those operations into the database query language enables users to construct queries that combine novel word embedding operations with the traditional query capabilities of SQL. To allow efficient retrieval of embedding representations and fast execution of the operations, we implement (b) novel search algorithms and index structures for approximated kNN-joins and integrate them into a relational database management system (a brute-force sketch of this kNN-join operation appears after the table of contents below). Moreover, we investigate techniques to optimize embedding representations of text values in database systems. To that end, we design (c) a novel context adaptation algorithm, which utilizes the structured data present in the database to enrich the embedding representations of text values and model their context-specific semantics in the database. Besides, we provide (d) support for selecting a word embedding model suitable for a user's application, for which we developed a data processing pipeline to construct a dataset for domain-specific word embedding evaluation. Finally, we propose (e) novel embedding techniques for pre-training on tabular data to support applications working with text values in tables. Our proposed embedding techniques model semantic relations arising from the alignment of words in tabular layouts that can hardly be derived from text documents, e.g., relations between table schema and table body. In this way, many applications that either employ embeddings in supervised machine learning models (e.g., to classify cells in spreadsheets) or through the application of arithmetic operations (e.g., table discovery applications) can profit from the proposed embedding techniques.
1 INTRODUCTION
1.1 Contribution
1.2 Outline
2 REPRESENTATION OF TEXT FOR NATURAL LANGUAGE PROCESSING
2.1 Natural Language Processing Systems
2.2 Word Embedding Models
2.2.1 Matrix Factorization Methods
2.2.2 Learned Distributed Representations
2.2.3 Contextualized Word Embeddings
2.2.4 Advantages of Contextualized and Static Word Embeddings
2.2.5 Properties of Static Word Embeddings
2.2.6 Node Embeddings
2.2.7 Non-Euclidean Embedding Techniques
2.3 Evaluation of Word Embeddings
2.3.1 Similarity Evaluation
2.3.2 Analogy Evaluation
2.3.3 Cluster-based Evaluation
2.4 Application for Tabular Data
2.4.1 Semantic Search
2.4.2 Data Curation
2.4.3 Data Discovery
3 SYSTEM OVERVIEW
3.1 Opportunities of an Integration
3.2 Characteristics of Word Vectors
3.3 Objectives and Challenges
3.4 Word Embedding Operations
3.5 Performance Optimization of Operations
3.6 Context Adaptation
3.7 Requirements for Model Recommendation
3.8 Tabular Embedding Models
4 MANAGEMENT OF EMBEDDING REPRESENTATIONS IN DATABASE SYSTEMS
4.1 Integration of Operations in an RDBMS
4.1.1 System Architecture
4.1.2 Storage Formats
4.1.3 User-Defined Functions
4.1.4 Web Application
4.2 Nearest Neighbor Search
4.2.1 Tree-based Methods
4.2.2 Proximity Graphs
4.2.3 Locality-Sensitive Hashing
4.2.4 Quantization Techniques
4.3 Applicability of ANN Techniques for Word Embedding kNN-Joins
4.4 Related Work on kNN Search in Database Systems
4.5 ANN-Joins for Relational Database Systems
4.5.1 Index Architecture
4.5.2 Search Algorithm
4.5.3 Distance Calculation
4.5.4 Optimization Capabilities
4.5.5 Estimation of the Number of Targets
4.5.6 Flexible Product Quantization
4.5.7 Further Optimizations
4.5.8 Parameter Tuning
4.5.9 kNN-Joins for Word2Bits
4.6 Evaluation
4.6.1 Experimental Setup
4.6.2 Influence of Index Parameters on Precision and Execution Time
4.6.3 Performance of Subroutines
4.6.4 Flexible Product Quantization
4.6.5 Accuracy of the Target Size Estimation
4.6.6 Performance of Word2Bits kNN-Join
4.7 Summary
5 CONTEXT ADAPTATION FOR WORD EMBEDDING OPTIMIZATION
5.1 Related Work
5.1.1 Graph and Text Joint Embedding Methods
5.1.2 Retrofitting Approaches
5.1.3 Table Embedding Models
5.2 Relational Retrofitting Approach
5.2.1 Data Preparation
5.2.2 Relational Retrofitting Problem
5.2.3 Relational Retrofitting Algorithm
5.2.4 Online-RETRO
5.3 Evaluation Platform: Retro Live
5.3.1 Functionality
5.3.2 Interface
5.4 Evaluation
5.4.1 Datasets
5.4.2 Training of Embeddings
5.4.3 Machine Learning Models
5.4.4 Evaluation of ML Models
5.4.5 Run-time Measurements
5.4.6 Online Retrofitting
5.5 Summary
6 MODEL RECOMMENDATION
6.1 Related Work
6.1.1 Extrinsic Evaluation
6.1.2 Intrinsic Evaluation
6.2 Architecture of FacetE
6.3 Evaluation Dataset Construction Pipeline
6.3.1 Web Table Filtering and Facet Candidate Generation
6.3.2 Check Soft Functional Dependencies
6.3.3 Post-Filtering
6.3.4 Categorization
6.4 Evaluation of Popular Word Embedding Models
6.4.1 Domain-Agnostic Evaluation
6.4.2 Evaluation of a Single Facet
6.4.3 Evaluation of an Object Set
6.5 Summary
7 TABULAR TEXT EMBEDDINGS
7.1 Related Work
7.1.1 Static Table Embedding Models
7.1.2 Contextualized Table Embedding Models
7.2 Web Table Embedding Model
7.2.1 Preprocessing
7.2.2 Text Serialization
7.2.3 Encoding Model
7.2.4 Embedding Training
7.3 Applications for Table Embeddings
7.3.1 Table Union Search
7.3.2 Classification Tasks
7.4 Evaluation
7.4.1 Intrinsic Evaluation
7.4.2 Table Union Search Evaluation
7.4.3 Table Layout Classification
7.4.4 Spreadsheet Cell Classification
7.5 Summary
8 CONCLUSION
8.1 Summary
8.2 Directions for Future Work
BIBLIOGRAPHY
LIST OF FIGURES
LIST OF TABLES
A CONVEXITY OF RELATIONAL RETROFITTING
B EVALUATION OF THE RELATIONAL RETROFITTING HYPERPARAMETERS
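As referenced in the abstract above, here is a brute-force illustration of the kNN-join semantics the thesis integrates into an RDBMS. The thesis's contribution is precisely the index structures and product quantization that avoid this exhaustive scan; this sketch reproduces only the operation's meaning, with illustrative shapes.

```python
# Brute-force cosine kNN-join: for each query vector, find the indices
# of its k nearest neighbors in a table of embedding vectors.
import numpy as np

def knn_join(query_vecs, table_vecs, k):
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    t = table_vecs / np.linalg.norm(table_vecs, axis=1, keepdims=True)
    sims = q @ t.T                       # full cosine-similarity matrix
    return np.argsort(-sims, axis=1)[:, :k]

rng = np.random.default_rng(1)
queries = rng.normal(size=(4, 300))      # e.g., embeddings of query terms
table = rng.normal(size=(10000, 300))    # e.g., embeddings of text values
print(knn_join(queries, table, k=5))
```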
|
54 |
Evaluation of Sentence Representations in Semantic Text Similarity Tasks. Balzar Ekenbäck, Nils, January 2021.
This thesis explores methods of constructing sentence representations for semantic text similarity from word embeddings and benchmarks them against sentence-based evaluation test sets. Two methods were used to evaluate the representations: the STS Benchmark, and the STS Benchmark converted to a binary similarity task. Results showed that preprocessing the word vectors can significantly boost performance in both tasks, and the study concludes that word embeddings still provide an acceptable solution for specific applications. The study also concluded that the dataset used might not be ideal for this type of evaluation, as the sentence pairs in general had a high lexical overlap. To tackle this, the study suggests that a paraphrasing dataset could act as a complement, though further investigation would be needed.
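To make the representation scheme concrete, here is a minimal sketch of the standard approach the thesis evaluates: average a sentence's word vectors, then score a pair by cosine similarity. The toy random vocabulary is a placeholder for a trained embedding model, so the printed score is meaningless except as a demonstration of the mechanics.

```python
# Average-of-word-vectors sentence representation, scored by cosine.
import numpy as np

rng = np.random.default_rng(2)
vocab = {w: rng.normal(size=50) for w in
         "a dog runs in the park cat sleeps on sofa".split()}

def sentence_vec(sentence):
    vecs = [vocab[w] for w in sentence.lower().split() if w in vocab]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(sentence_vec("a dog runs in the park"),
             sentence_vec("the cat sleeps on the sofa")))
```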
|
55 |
Corpus for Academic Domain: Models and Applications. Ivan de Jesus Pereira Pinto, 16 November 2021.
Academic data (e.g., theses, dissertations) encompass aspects of a whole society, as well as its scientific knowledge. They hold a wealth of information to be explored by computational models, and that exploration can benefit society. Machine learning models in particular have a growing need for training data, which must be structured and of considerable size; their use in natural language processing is pervasive across the most diverse tasks. This work carries out the collection, construction, and analysis of the largest known academic corpus in the Portuguese language, and trains models on it. Word embedding, bag-of-words, and transformer models were trained. The BERTAcadêmico transformer model achieved the best results, with a 77 percent f1-score for classifying the Great Area of Knowledge and a 63 percent f1-score for classifying the Area of Knowledge in the categorization of theses and dissertations. A semantic analysis of the academic corpus is also performed through topic modeling, together with a novel visualization of the knowledge areas as clusters. Finally, an application that uses the trained models, SucupiraBot, is presented.
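To give a concrete picture of this kind of categorization task, here is a hedged bag-of-words baseline sketch. The documents, labels, and held-out examples are invented placeholders, not the thesis's corpus, and this is not its BERTAcadêmico model; the sketch only shows the shape of the pipeline and the f1 metric reported.

```python
# Placeholder bag-of-words classifier for area-of-knowledge labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

train_texts = ["resumo sobre redes neurais", "estudo de direito tributário",
               "análise de proteínas e células", "teoria de grafos"]  # placeholder
train_labels = ["exatas", "humanas", "biologicas", "exatas"]          # placeholder

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

test_texts = ["aprendizado de máquina com grafos", "impostos e legislação"]
test_gold = ["exatas", "humanas"]                                     # placeholder
pred = clf.predict(test_texts)
print(f1_score(test_gold, pred, average="macro"))
```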
|
56 |
Word embeddings for monolingual and cross-language domain-specific information retrieval. Wigder, Chaya, January 2018.
Various studies have shown the usefulness of word embedding models for a wide variety of natural language processing tasks. This thesis examines how word embeddings can be incorporated into domain-specific search engines for both monolingual and cross-language search. This is done by testing various embedding model hyperparameters, as well as methods for weighting the relative importance of words to a document or query. In addition, methods for generating domain-specific bilingual embeddings are examined and tested. The system was compared to a baseline that used cosine similarity without word embeddings, and for both the monolingual and bilingual search engines the use of monolingual embedding models improved performance above the baseline. However, the bilingual embeddings, especially for domain-specific terms, tended to be of too poor quality to be used directly in the search engines.
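One common way to realize the word-weighting idea described above is an idf-weighted average of word vectors; the sketch below assumes a toy corpus and random vectors rather than the thesis's trained embeddings, and the exact weighting scheme is an assumption, not the thesis's method.

```python
# Rank documents against a query using idf-weighted embedding averages.
import math
import numpy as np

docs = [["heart", "valve", "implant"],
        ["knee", "implant", "surgery"],
        ["weather", "report", "today"]]          # placeholder corpus
rng = np.random.default_rng(3)
vecs = {w: rng.normal(size=50) for d in docs for w in d}

def idf(word):
    df = sum(word in d for d in docs)
    return math.log(len(docs) / (1 + df)) + 1.0  # smoothed idf

def represent(words):
    # idf-weighted sum of word vectors for a document or query
    return sum(idf(w) * vecs[w] for w in words if w in vecs)

def rank(query):
    q = represent(query)
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return sorted(range(len(docs)), key=lambda i: -cos(q, represent(docs[i])))

print(rank(["implant", "surgery"]))  # indices of docs, most similar first
```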
|
57 |
Semantically Aligned Sentence-Level Embeddings for Agent Autonomy and Natural Language Understanding. Fulda, Nancy Ellen, 01 August 2019.
Many applications of neural linguistic models rely on their use as pre-trained features for downstream tasks such as dialog modeling, machine translation, and question answering. This work presents an alternate paradigm: rather than treating linguistic embeddings as input features, we treat them as common sense knowledge repositories that can be queried using simple mathematical operations within the embedding space, without the need for additional training. Because current state-of-the-art embedding models were not optimized for this purpose, this work presents a novel embedding model designed and trained specifically for the purpose of "reasoning in the linguistic domain". Our model jointly represents single words, multi-word phrases, and complex sentences in a unified embedding space. To facilitate common-sense reasoning beyond straightforward semantic associations, the embeddings produced by our model exhibit carefully curated properties, including analogical coherence and polarity displacement. In other words, rather than training the model on a smorgasbord of tasks and hoping that the resulting embeddings will serve our purposes, we have instead crafted training tasks and placed constraints on the system that are explicitly designed to induce the properties we seek. The resulting embeddings perform competitively on the SemEval 2013 benchmark and outperform state-of-the-art models on two key semantic discernment tasks introduced in Chapter 8. The ultimate goal of this research is to empower agents to reason about low-level behaviors in order to fulfill abstract natural language instructions in an autonomous fashion. An agent equipped with an embedding space of sufficient caliber could potentially reason about new situations based on their similarity to past experience, facilitating knowledge transfer and one-shot learning. As our embedding model continues to improve, we hope to see these and other abilities become a reality.
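The kind of query-by-arithmetic described above can be sketched as follows. The random vectors stand in for a trained model; in a well-trained space with the analogical-coherence property the dissertation targets, the query below would actually resolve to "queen".

```python
# Query an embedding space with vector arithmetic (toy stand-in model).
import numpy as np

rng = np.random.default_rng(4)
emb = {w: rng.normal(size=50) for w in
       ["king", "queen", "man", "woman", "prince", "princess"]}

def nearest(vec, exclude):
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

# king - man + woman lands near "queen" in a well-trained space
query = emb["king"] - emb["man"] + emb["woman"]
print(nearest(query, exclude={"king", "man", "woman"}))
```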
|
58 |
Sudoku Variants on the Torus. Wyld, Kira A, 01 January 2017.
This paper examines the mathematical properties of Sudoku puzzles defined on a torus. We seek to answer, for these variants, the questions that have been explored for traditional Sudoku, and we do so for two such embeddings. The end result of this paper is a deeper mathematical understanding of logic puzzles of this type, as well as a fun new puzzle that can be played.
|
59 |
Embeddings of Product Graphs Where One Factor is a Hypercube. Turner, Bethany, 29 April 2011.
Voltage graph theory can be used to describe embeddings of product graphs if one factor is a Cayley graph. We use voltage graphs to explore embeddings of various products where one factor is a hypercube, describing some minimal and symmetrical embeddings. We then define a graph product, the weak symmetric difference, and illustrate a voltage graph construction useful for obtaining an embedding of the weak symmetric difference of an arbitrary graph with a hypercube.
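For readers unfamiliar with the objects involved, the following small sketch builds the Cartesian product of a cycle with a hypercube using networkx. It illustrates only the product graphs themselves, not the thesis's voltage-graph constructions or the weak symmetric difference product.

```python
# Build a product graph where one factor is a hypercube.
import networkx as nx

Q3 = nx.hypercube_graph(3)      # the 3-dimensional hypercube: 8 nodes, 12 edges
C5 = nx.cycle_graph(5)          # an arbitrary second factor
product = nx.cartesian_product(C5, Q3)

# |V| = 5 * 8 = 40; |E| = 5*12 + 8*5 = 100 for the Cartesian product
print(product.number_of_nodes(), product.number_of_edges())
```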
|
60 |
Word embeddings and Patient records: The identification of MRI risk patients. Kindberg, Erik, January 2019.
Identification of risks ahead of MRI examinations has been identified as a cumbersome and time-consuming process at the Linköping University Hospital radiology clinic. The hospital staff often have to search through large amounts of unstructured patient data to find information about implants. Word embeddings have been identified as a possible tool to speed up this process. The purpose of this thesis is to evaluate this method, which is done by training a Word2Vec model on patient journal data and analyzing the closest neighbours of key search words by calculating cosine similarity. The 50 closest neighbours of each search word are categorized and annotated as relevant or not to the task of identifying risk patients ahead of MRI examinations. 10 search words were explored, leading to a total of 500 terms being annotated. In total, 14 different categories were observed in the results, and of these, 8 were considered relevant. Out of the 500 terms, 340 (68%) were considered relevant. In addition, 48 implant models could be observed, which are particularly interesting because if a patient has an implant, hospital staff need to determine its exact model and that model's MRI conditions. Overall these findings point towards a positive answer to the aim of the thesis, although further development is needed.
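The procedure described (train Word2Vec on patient notes, then inspect a search word's nearest neighbours by cosine similarity) can be sketched with gensim as follows. The notes here are fabricated placeholders, not patient data, and the hyperparameters are illustrative rather than the thesis's settings.

```python
# Train Word2Vec on (placeholder) clinical notes and list the 50
# nearest neighbours of a key search word by cosine similarity.
from gensim.models import Word2Vec

notes = [["patient", "has", "pacemaker", "implant"],
         ["mri", "contraindicated", "due", "to", "implant"],
         ["no", "known", "implants", "or", "devices"]]  # placeholder notes

model = Word2Vec(sentences=notes, vector_size=100, window=5,
                 min_count=1, workers=1, seed=0)

# most_similar returns (term, cosine similarity) pairs, highest first;
# with a tiny vocabulary it simply returns fewer than 50 entries.
for term, sim in model.wv.most_similar("implant", topn=50):
    print(term, round(sim, 3))
```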
|