61

On Transfer Learning Techniques for Machine Learning

Debasmit Das (8314707) 30 April 2020 (has links)
Recent progress in machine learning has been mainly due to the availability of large amounts of annotated data used for training complex models with deep architectures. Annotating this training data becomes burdensome and creates a major bottleneck in maintaining machine-learning databases. Moreover, these trained models fail to generalize to new categories or new varieties of the same categories, because the new categories or varieties follow a data distribution different from the training distribution. To tackle these problems, this thesis develops a family of transfer-learning techniques that can deal with different training (source) and testing (target) distributions under the assumption that annotated data is limited in the testing domain. This is done by using an auxiliary, data-abundant source domain from which useful knowledge is transferred to the data-scarce target domain. The transferred knowledge serves as a prior that biases target-domain predictions and prevents the target-domain model from overfitting. Specifically, we explore structural priors that encode relational knowledge between different data entities, which provide a more informative bias than traditional priors. The choice of structural prior depends on the information available and the similarity between the two domains; based on these two factors, we divide the transfer-learning problem into four major categories and propose a different structural prior for each sub-problem.

The thesis first focuses on unsupervised domain adaptation, where we propose to minimize domain discrepancy by transforming labeled source-domain data to be close to unlabeled target-domain data. Because the categories remain the same across the two domains, we assume that the structural relationship between source-domain samples carries over to the target domain. A graph or hyper-graph is therefore constructed as the structural prior from both domains, and a graph/hyper-graph matching formulation is used to transform samples in the source domain to be closer to samples in the target domain. An efficient optimization scheme is then proposed to tackle the time and memory inefficiencies associated with the matching problem. The few-shot learning problem is studied next, where we propose to transfer knowledge from source-domain categories containing abundant labeled data to novel categories in the target domain that contain only a few labeled samples. The knowledge transfer biases the novel-category predictions and prevents the model from overfitting. The knowledge is encoded using a neural-network-based prior that transforms a data sample to its corresponding class prototype. This neural network is trained on the source-domain data and applied to the target-domain data, where it transforms the few-shot samples into novel-class prototypes for better recognition performance. The few-shot learning problem is then extended to the situation where we do not have access to the source-domain data but only to the source-domain class prototypes. In this limited-information setting, a parametric neural-network-based prior would overfit to the source-class prototypes, so we seek a non-parametric prior based on manifolds: a piecewise-linear manifold is used as the structural prior to fit the source-domain class prototypes, the structure is extended to the target domain, and the novel-class prototypes are found by projecting the few-shot samples onto the manifold. Finally, the zero-shot learning problem is addressed, an extreme case of the few-shot learning problem in which we do not have any labeled data in the target domain but do have high-level information for both source and target categories in the form of semantic descriptors. We learn the relation between the sample space and the semantic space using a regularized neural network so that classification of the novel categories can be carried out in a common representation space; the same network is then used in the target domain to relate the two spaces. If we want to generate data for the novel categories in the target domain, a constrained generative adversarial network can be used instead of a traditional neural network. Thus, we use structural priors such as graphs, neural networks and manifolds to relate data entities such as samples, prototypes and semantics across these different transfer-learning sub-problems. We also explore post-processing steps such as pseudo-labeling, domain adaptation and calibration, and enforce algorithmic and architectural constraints to further improve recognition performance. Experimental results on standard transfer-learning image-recognition datasets are competitive with previous work, and further experimentation and analysis of these methods provide a better understanding of machine learning as well.
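The prototype-based transfer described in this abstract can be illustrated with a rough sketch: a small network maps sample embeddings to class-prototype estimates, and novel-class queries are classified by their nearest prototype. The `PrototypeMapper` architecture, feature dimensionality and nearest-prototype rule below are assumptions for illustration, not the thesis's actual design, and the random tensors stand in for backbone features.

```python
# Hypothetical sketch: map few-shot (support) samples to class prototypes and
# classify queries by nearest prototype. Sizes and the distance rule are assumed.
import torch
import torch.nn as nn

class PrototypeMapper(nn.Module):
    """Small network transforming a sample embedding into a class-prototype estimate."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

def build_prototypes(mapper, support_embeddings, support_labels, num_classes):
    """Average the mapped few-shot embeddings per class to obtain prototypes."""
    mapped = mapper(support_embeddings)                       # (n_support, dim)
    return torch.stack([mapped[support_labels == c].mean(dim=0)
                        for c in range(num_classes)])         # (num_classes, dim)

def classify(query_embeddings, prototypes):
    """Nearest-prototype classification with Euclidean distance."""
    return torch.cdist(query_embeddings, prototypes).argmin(dim=1)

# Toy usage with random features standing in for backbone embeddings.
mapper = PrototypeMapper(dim=512)
support = torch.randn(10, 512)                     # 5 novel classes x 2 shots
labels = torch.arange(5).repeat_interleave(2)
with torch.no_grad():
    prototypes = build_prototypes(mapper, support, labels, num_classes=5)
    predictions = classify(torch.randn(20, 512), prototypes)
```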
62

Domain Adaptation to Meet the Reality-Gap from Simulation to Reality

Forsberg, Fanny January 2022 (has links)
Being able to train machine learning models on simulated data is of great interest in several applications, one of them being autonomous driving, since it is easier to collect large labeled datasets and to perform reinforcement learning in simulation. However, transferring the learned models to the real-world environment can be hard due to differences between the simulation and reality, for example in materials, textures, lighting and content. One approach is domain adaptation: making the simulations as similar as possible to reality. The thesis's main focus is to investigate domain adaptation as a way to bridge this reality gap, and to compare it to an alternative method, domain randomization. Two domain-adaptation methods, one adapting the simulated data to reality and the other adapting the test data to simulation, are compared to using domain randomization. They are evaluated with a classifier making decisions for a robot car while driving in reality. The evaluation consists of a quantitative evaluation on real-world data and a qualitative evaluation observing how well the robot drives and avoids obstacles. The results show that the reality gap is very large and that the examined methods reduce it, with the two domain-adaptation methods giving the largest decrease. However, none of them led to satisfactory driving.
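For comparison with the adaptation methods, domain randomization is usually implemented by aggressively perturbing the appearance of the simulated images (textures, lighting, colors) so that the model cannot latch onto simulator-specific cues. A minimal sketch of that idea is given below, assuming torchvision is available; the specific transforms and parameter ranges are illustrative assumptions, not those used in the thesis.

```python
# Minimal sketch of domain randomization via aggressive appearance augmentation.
# The transform choices and ranges are assumptions for illustration only.
import torch
from torchvision import transforms

randomize = transforms.Compose([
    transforms.ColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.1),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.RandomGrayscale(p=0.1),
])

# Each simulated image gets a different random appearance, so a classifier trained
# on these frames cannot rely on simulator-specific textures or lighting.
simulated_batch = torch.rand(8, 3, 224, 224)   # stand-in for rendered frames
randomized_batch = torch.stack([randomize(img) for img in simulated_batch])
```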
63

Domain Adaptation for Multi-Contrast Image Segmentation in Cardiac Magnetic Resonance Imaging / Domänanpassning för segmentering av bilder med flera kontraster vid Magnetresonanstomografi av hjärta

Proudhon, Thomas January 2023 (has links)
Accurate segmentation of the ventricles and myocardium on Cardiac Magnetic Resonance (CMR) images is crucial to assess the functioning of the heart or to diagnose patients suffering from myocardial infarction. However, the domain shift between the multiple sequences of CMR data prevents a deep learning model trained on a specific contrast from being used on a different sequence. Domain adaptation can address this issue by alleviating the domain shift between different CMR contrasts, such as Balanced Steady-State Free Precession (bSSFP) and Late Gadolinium Enhancement (LGE) sequences. The aim of this degree project is to apply domain adaptation to perform unsupervised segmentation of cardiac structures on LGE sequences. A style-transfer model based on generative adversarial networks is trained to achieve modality-to-modality translation between LGE and bSSFP contrasts. Then, a supervised segmentation model is developed to segment the myocardium and the left and right ventricles on bSSFP data. Final segmentation is performed on synthetic bSSFP images obtained by translating LGE images. Our method shows a significant increase in Dice score compared to direct segmentation of LGE data. In conclusion, the results demonstrate that domain adaptation based on information from complementary CMR sequences is a successful approach to unsupervised segmentation of Late Gadolinium Enhancement images.
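The translate-then-segment pipeline described above can be sketched as follows. `StyleTranslator` and `SegmentationNet` are lightweight placeholders for the thesis's GAN-based style-transfer generator and supervised segmentation network, whose actual architectures are not specified here; the random tensor stands in for an LGE slice.

```python
# Hypothetical sketch of segmenting LGE images by first translating them to the
# bSSFP appearance domain. Model classes are placeholders, not the thesis models.
import torch
import torch.nn as nn

class StyleTranslator(nn.Module):
    """Stand-in for a GAN generator mapping LGE images to synthetic bSSFP."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class SegmentationNet(nn.Module):
    """Stand-in for a segmenter trained on labeled bSSFP data (4 classes:
    background, myocardium, left ventricle, right ventricle)."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, num_classes, 1))
    def forward(self, x):
        return self.net(x)

translator = StyleTranslator().eval()
segmenter = SegmentationNet().eval()

lge_image = torch.rand(1, 1, 256, 256)            # unlabeled LGE slice
with torch.no_grad():
    synthetic_bssfp = translator(lge_image)       # LGE -> bSSFP appearance
    logits = segmenter(synthetic_bssfp)           # segment in the bSSFP domain
    mask = logits.argmax(dim=1)                   # per-pixel class labels
```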
64

Domain Adaptation Applications to Complex High-dimensional Target Data

Stanojevic, Marija, 0000-0001-8227-6577 January 2023 (has links)
In the last decade, machine learning models have grown in size and in the amount of data they use, which has led to improved performance on many tasks. Most notably, there has been significant development in end-to-end deep learning and reinforcement learning, with new learning algorithms and architectures proposed frequently. Furthermore, while previous methods focused on supervised learning, in the last five years many models have been proposed that learn in semi-supervised or self-supervised ways and are then fine-tuned to a specific task or a different data domain. Adapting machine learning models learned on one type of data to similar but different data is called domain adaptation. This thesis discusses various challenges in the domain adaptation of machine learning models to specific tasks and real-world applications and proposes solutions for those challenges. Data in real-world applications have different properties than the clean machine-learning datasets commonly used for experimental evaluation of proposed models. Learning appropriate representations from high-dimensional complex data with internal dependencies is arduous due to the curse of dimensionality and spurious correlations, yet most real-world data have these properties and offer only a small number of labeled samples, since labeling is expensive and tedious. Additionally, accuracy drops drastically when models are applied to domain-specific datasets and unbalanced problems, and state-of-the-art models are not able to handle missing data. In this thesis, I strive to create frameworks that can learn a good representation of high-dimensional small data with correlations between variables. The first chapter describes the motivation, background, and research objectives, and gives an overview of contributions and publications. The background needed to understand this thesis is provided in the second chapter, and an introduction to domain adaptation is given in chapter three. The fourth chapter discusses domain adaptation with small target data: it describes an algorithm for semi-supervised learning over domain-specific short texts such as reviews or tweets, and the proposed framework achieves up to 12.6% improvement when only 5000 labeled examples are available. The fifth chapter explores the influence of unanticipated bias in fine-tuning data, outlining how bias in news data influences the classification performance of domain-specific text, where the domain is U.S. politics. It is shown that fine-tuning with domain-specific data is not always beneficial, especially if bias towards one label is present. The sixth chapter examines domain adaptation on datasets with high missing rates, reviewing a system created to learn from high-dimensional small data from psychological studies with up to 70% missingness; the proposed framework achieves 9.3% lower imputation error and 33% lower prediction error. The seventh chapter discusses the curse-of-dimensionality problem in domain adaptation and presents a methodology for discovering research articles containing evolutionary timetrees. That system can search for, download, and filter research articles in which timetrees are included, scanning 5 million articles in a few days, and it decreases the error of finding research papers by 21% compared to the baseline, which cannot handle high-dimensional data properly.
The last, eighth chapter summarizes the findings of this thesis and suggests future directions. / Computer and Information Science
65

Learning from noisy labels by importance reweighting: a deep learning approach

Fang, Tongtong January 2019 (has links)
Noisy labels can cause severe degradation of classification performance. Especially for deep neural networks, noisy labels can be memorized and lead to poor generalization. Recently, label-noise-robust deep learning has outperformed traditional shallow learning approaches in handling complex input data without prior knowledge of how the label noise is generated. Learning from noisy labels by importance reweighting is well studied, but existing work in this line using deep learning fails to provide a reasonable importance-reweighting criterion and therefore achieves unsatisfactory experimental performance. Targeting this knowledge gap, and inspired by domain adaptation, we propose a novel label-noise-robust deep learning approach based on importance reweighting: noisy labeled training examples are weighted by minimizing the maximum mean discrepancy between the loss distributions of noisy labeled and clean labeled data. In experiments, the proposed approach outperforms other baselines. The results show a large research potential for applying domain adaptation to the label-noise problem by bridging the two areas. Moreover, the proposed approach may motivate other interesting problems in domain adaptation by enabling importance reweighting to be used in deep learning.
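As a rough illustration of the reweighting idea, the sketch below measures the squared maximum mean discrepancy (MMD) between the per-example losses of noisy-labeled data and a small clean-labeled set with an RBF kernel, and optimizes example weights to minimize it. The kernel bandwidth, the softmax parameterization of the weights and the toy loss values are assumptions for illustration, not the thesis's exact formulation.

```python
# Illustrative sketch: weight noisy-labeled examples so that their weighted loss
# distribution matches the loss distribution of a small clean set, measured by an
# RBF-kernel MMD. Bandwidth and the softmax weight parameterization are assumed.
import torch

def rbf_kernel(a, b, bandwidth=1.0):
    """a: (n, 1), b: (m, 1) -> (n, m) Gram matrix."""
    return torch.exp(-((a - b.T) ** 2) / (2 * bandwidth ** 2))

def weighted_mmd2(noisy_losses, clean_losses, weights, bandwidth=1.0):
    """Squared MMD between the weighted noisy-loss distribution (weights sum to 1)
    and the empirical clean-loss distribution."""
    x = noisy_losses.view(-1, 1)
    y = clean_losses.view(-1, 1)
    w = weights.view(-1, 1)
    m = y.shape[0]
    term_xx = (w.T @ rbf_kernel(x, x, bandwidth) @ w).squeeze()
    term_yy = rbf_kernel(y, y, bandwidth).mean()
    term_xy = (w.T @ rbf_kernel(x, y, bandwidth)).sum() / m
    return term_xx + term_yy - 2 * term_xy

# Toy usage: per-example losses from a noisy batch and a small trusted clean batch.
noisy_losses = torch.rand(64)
clean_losses = torch.rand(16)
logits = torch.zeros(64, requires_grad=True)           # weights = softmax(logits)
optimizer = torch.optim.Adam([logits], lr=0.1)
for _ in range(100):
    optimizer.zero_grad()
    mmd = weighted_mmd2(noisy_losses, clean_losses, torch.softmax(logits, dim=0))
    mmd.backward()
    optimizer.step()
importance_weights = torch.softmax(logits, dim=0)       # per-example weights
```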
66

Domain Adaptation with N-gram Language Models for Swedish Automatic Speech Recognition : Using text data augmentation to create domain-specific n-gram models for a Swedish open-source wav2vec 2.0 model / Domänanpassning Med N-gram Språkmodeller för Svensk Taligenkänning : Datautökning av text för att skapa domänspecifika n-gram språkmodeller för en öppen svensk wav2vec 2.0 modell

Enzell, Viktor January 2022 (has links)
Automatic Speech Recognition (ASR) enables a wide variety of practical applications. However, many applications have their own domain-specific words, creating a gap between training and test data when models are used in practice. Domain adaptation can be achieved through model fine-tuning, but this requires domain-specific speech data paired with transcripts, which is labor-intensive to produce. Fortunately, the dependence on audio data can be mitigated to a certain extent by incorporating text-based language models during decoding. This thesis explores approaches for creating domain-specific 4-gram models for a Swedish open-source wav2vec 2.0 model. The three main approaches extend a social media corpus with domain-specific data from which the models are estimated. The first approach uses a relatively small set of in-domain text data, the second uses machine transcripts from another ASR system, and the third uses Named Entity Recognition (NER) to find words of the same entity type in the corpus and replace them with in-domain words. The 4-gram models are evaluated by the error rate (ERR) of recognizing in-domain words on a custom dataset, and by the Word Error Rate (WER) on the Common Voice test set to ensure good overall performance. Compared to not having a language model, the base model improves the WER on Common Voice by 2.55 percentage points and the in-domain ERR by 6.11 percentage points. Adding in-domain text to the base model results in a 2.61-point WER improvement and a 10.38-point ERR improvement over not having a language model. Finally, adding in-domain machine transcripts and using the NER approach give the same 10.38-point ERR improvement as adding in-domain text, but slightly smaller WER improvements of 2.56 and 2.47 points, respectively. These results contribute to the exploration of state-of-the-art Swedish ASR and have the potential to enable the adoption of open-source ASR models for more use cases.
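The NER-based augmentation can be sketched roughly as follows: given corpus sentences whose entity spans have already been tagged, entities of a chosen type are replaced with in-domain words and 4-gram counts are collected from the augmented text. The example sentences, entity tags and replacement vocabulary are made up, and the actual NER tagging, smoothing and export to a decoding-ready language model are omitted.

```python
# Illustrative sketch: augment a corpus by replacing tagged entities with in-domain
# words, then collect 4-gram counts. Entity spans are assumed to come from an
# external NER step; the example data is made up.
import random
from collections import Counter

in_domain_names = ["Volvo", "Scania", "Ericsson"]   # assumed in-domain vocabulary

def substitute_entities(tokens, entity_spans, entity_type="PER"):
    """Replace each tagged entity of the given type with a random in-domain word."""
    out = list(tokens)
    # Process spans right-to-left so earlier replacements don't shift later indices.
    for start, end, etype in sorted(entity_spans, reverse=True):
        if etype == entity_type:
            out[start:end] = [random.choice(in_domain_names)]
    return out

def ngram_counts(sentences, n=4):
    """Raw n-gram counts with sentence-boundary padding; smoothing/backoff omitted."""
    counts = Counter()
    for tokens in sentences:
        padded = ["<s>"] * (n - 1) + tokens + ["</s>"]
        for i in range(len(padded) - n + 1):
            counts[tuple(padded[i:i + n])] += 1
    return counts

corpus = [
    (["jag", "ringde", "Anna", "igår"], [(2, 3, "PER")]),
    (["Erik", "kommer", "imorgon"], [(0, 1, "PER")]),
]
augmented = [substitute_entities(toks, spans) for toks, spans in corpus]
fourgrams = ngram_counts(augmented, n=4)
```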
67

Cross-domain sentiment classification using grams derived from syntax trees and an adapted naive Bayes approach

Cheeti, Srilaxmi January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / There is an increasing amount of user-generated information in online documents, including user opinions on various topics and products such as movies, DVDs, kitchen appliances, etc. To make use of such opinions, it is useful to identify the polarity of the opinion, in other words, to perform sentiment classification. The goal of sentiment classification is to classify a given text/document as either positive, negative or neutral based on the words present in the document. Supervised learning approaches have been successfully used for sentiment classification in domains that are rich in labeled data. Some of these approaches make use of features such as unigrams, bigrams, sentiment words, adjective words, syntax trees (or variations of trees obtained using pruning strategies), etc. However, for some domains the amount of labeled data can be relatively small and we cannot train an accurate classifier using the supervised learning approach. Therefore, it is useful to study domain adaptation techniques that can transfer knowledge from a source domain that has labeled data to a target domain that has little or no labeled data, but a large amount of unlabeled data. We address this problem in the context of product reviews, specifically reviews of movies, DVDs and kitchen appliances. Our approach uses an Adapted Naive Bayes classifier (ANB) on top of the Expectation Maximization (EM) algorithm to predict the sentiment of a sentence. We use grams derived from complete syntax trees or from syntax subtrees as features when training the ANB classifier. More precisely, we extract grams from syntax trees corresponding to sentences in either the source or target domains. To be able to transfer knowledge from source to target, we identify generalized features (grams) using the frequently co-occurring entropy (FCE) method, and represent the source instances using these generalized features. The target instances are represented with all grams occurring in the target, or with a reduced gram set obtained by removing infrequent grams. We experiment with different types of grams in a supervised framework in order to identify the most predictive types of grams, and further use those grams in the domain adaptation framework. Experimental results on several cross-domain tasks show that domain adaptation approaches that combine source and target data (a small amount of labeled data and some unlabeled data) can help learn classifiers for the target that are better than those learned from the labeled target data alone.
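A much simplified sketch of the EM-style loop is shown below: fit a Naive Bayes model on labeled source grams, then alternate between labeling the unlabeled target data and refitting on the combined set. The thesis's ANB additionally reweights source versus target contributions and uses FCE-selected generalized grams, both of which are omitted here; the toy review texts are made up.

```python
# Simplified EM/self-training sketch for cross-domain sentiment classification with
# Naive Bayes over gram features. The real ANB re-weights source vs. target counts
# and uses FCE-selected generalized grams; both are omitted in this toy version.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

source_texts = ["great movie loved it", "terrible plot boring acting",
                "wonderful film", "awful waste of time"]
source_labels = np.array([1, 0, 1, 0])                  # 1 = positive, 0 = negative
target_texts = ["blender works great", "toaster broke awful quality"]  # unlabeled

vectorizer = CountVectorizer(ngram_range=(1, 2))        # unigrams + bigrams as "grams"
X_source = vectorizer.fit_transform(source_texts)
X_target = vectorizer.transform(target_texts)

clf = MultinomialNB()
clf.fit(X_source, source_labels)                        # initial model on source only

for _ in range(5):                                      # EM-style iterations
    target_pseudo = clf.predict(X_target)               # E-step: label target data
    X_all = np.vstack([X_source.toarray(), X_target.toarray()])
    y_all = np.concatenate([source_labels, target_pseudo])
    clf.fit(X_all, y_all)                               # M-step: refit on both domains

print(clf.predict(X_target))                            # final target predictions
```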
68

Apprentissage de vote de majorité pour la classification supervisée et l'adaptation de domaine : Approches PAC Bayésiennes et combinaison de similarités / Majority vote learning for supervised classification and domain adaptation: PAC-Bayesian approaches and similarity combination

Morvant, Emilie 18 September 2013 (has links)
Many applications make use of machine learning methods able to take into account different information sources (e.g. sounds, images, text) by combining different descriptors or models. This thesis proposes a series of theoretically founded contributions dealing with two main issues for such methods: (i) How to embed available a priori information? (ii) How to adapt a model to new data following a distribution different from the learning data distribution? This last issue is known as domain adaptation (DA). A first series of contributions studies the problem of learning a majority vote over a set of voters for supervised classification in the PAC-Bayesian framework, which allows one to consider an a priori on the voters. Our first contribution extends an algorithm minimizing the error of the majority vote in binary classification by allowing the use of an a priori expressed as an aligned distribution. The second contribution theoretically analyses the interest of minimizing the operator norm of the confusion matrix of the votes in the multiclass setting. Our second series of contributions deals with DA for binary classification. The third contribution combines (epsilon, gamma, tau)-good similarity functions to infer a new projection space that brings the learning and test distributions closer by minimizing a DA bound. Finally, we propose a PAC-Bayesian analysis of DA based on a divergence between distributions. This analysis allows us to derive guarantees for learning majority votes in a DA context, and to design an algorithm specialized to linear classifiers that minimizes this bound.
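For context, a standard PAC-Bayesian relation used throughout this line of work bounds the risk of the ρ-weighted majority vote by twice the Gibbs risk. The statement below assumes the usual definitions of the majority vote and the Gibbs classifier; it is the classical factor-two bound, not the thesis's tighter domain-adaptation bound.

```latex
% H: set of voters, \rho: posterior distribution over H, D: data distribution.
% The majority vote errs only when the rho-average of voter errors reaches 1/2,
% so Markov's inequality gives the classical factor-two bound.
\[
  R_D(\mathrm{MV}_\rho)
  \;=\; \Pr_{(x,y)\sim D}\!\left[ \mathbb{E}_{h\sim\rho}\,\mathbb{1}[h(x)\neq y] \;\ge\; \tfrac12 \right]
  \;\le\; 2\, \mathbb{E}_{(x,y)\sim D}\,\mathbb{E}_{h\sim\rho}\,\mathbb{1}[h(x)\neq y]
  \;=\; 2\, R_D(G_\rho).
\]
```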
69

Data-driven language understanding for spoken dialogue systems

Mrkšić, Nikola January 2018 (has links)
Spoken dialogue systems provide a natural conversational interface to computer applications. In recent years, the substantial improvements in the performance of speech recognition engines have helped shift the research focus to the next component of the dialogue system pipeline: the one in charge of language understanding. The role of this module is to translate user inputs into accurate representations of the user goal in the form that can be used by the system to interact with the underlying application. The challenges include the modelling of linguistic variation, speech recognition errors and the effects of dialogue context. Recently, the focus of language understanding research has moved to making use of word embeddings induced from large textual corpora using unsupervised methods. The work presented in this thesis demonstrates how these methods can be adapted to overcome the limitations of language understanding pipelines currently used in spoken dialogue systems. The thesis starts with a discussion of the pros and cons of language understanding models used in modern dialogue systems. Most models in use today are based on the delexicalisation paradigm, where exact string matching supplemented by a list of domain-specific rephrasings is used to recognise users' intents and update the system's internal belief state. This is followed by an attempt to use pretrained word vector collections to automatically induce domain-specific semantic lexicons, which are typically hand-crafted to handle lexical variation and account for a plethora of system failure modes. The results highlight the deficiencies of distributional word vectors which must be overcome to make them useful for downstream language understanding models. The thesis next shifts focus to overcoming the language understanding models' dependency on semantic lexicons. To achieve that, the proposed Neural Belief Tracking (NBT) model forsakes the use of standard one-hot n-gram representations used in Natural Language Processing in favour of distributed representations of user utterances, dialogue context and domain ontologies. The NBT model makes use of external lexical knowledge embedded in semantically specialised word vectors, obviating the need for domain-specific semantic lexicons. Subsequent work focuses on semantic specialisation, presenting an efficient method for injecting external lexical knowledge into word vector spaces. The proposed Attract-Repel algorithm boosts the semantic content of existing word vectors while simultaneously inducing high-quality cross-lingual word vector spaces. Finally, NBT models powered by specialised cross-lingual word vectors are used to train multilingual belief tracking models. These models operate across many languages at once, providing an efficient method for bootstrapping language understanding models for lower-resource languages with limited training data.
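A rough sketch of the attract/repel idea behind the semantic specialisation step: word vectors are nudged so that synonym pairs become more similar and antonym pairs less similar, while a regularisation term keeps them close to the original distributional space. The margins, learning rate, vocabulary and constraint pairs below are illustrative assumptions, not the published algorithm's hyperparameters.

```python
# Rough sketch of attract/repel-style vector specialisation: pull synonym pairs
# together, push antonym pairs apart, stay close to the original vectors.
# All constants and the tiny vocabulary are assumptions for illustration.
import torch
import torch.nn.functional as F

vocab = ["cheap", "inexpensive", "expensive", "pricey"]
idx = {w: i for i, w in enumerate(vocab)}
vectors = torch.nn.Parameter(torch.randn(len(vocab), 50))
original = vectors.detach().clone()

synonyms = [("cheap", "inexpensive"), ("expensive", "pricey")]
antonyms = [("cheap", "expensive"), ("inexpensive", "pricey")]
optimizer = torch.optim.Adagrad([vectors], lr=0.05)

def cos(a, b):
    return F.cosine_similarity(a, b, dim=0)

for _ in range(200):
    optimizer.zero_grad()
    # Attract: hinge loss if synonyms are not similar enough (margin 0.9 assumed).
    attract = sum(torch.relu(0.9 - cos(vectors[idx[a]], vectors[idx[b]]))
                  for a, b in synonyms)
    # Repel: penalize any positive similarity between antonyms.
    repel = sum(torch.relu(cos(vectors[idx[a]], vectors[idx[b]]))
                for a, b in antonyms)
    # Regularisation: stay near the original distributional space.
    preserve = ((vectors - original) ** 2).sum()
    loss = attract + repel + 0.1 * preserve
    loss.backward()
    optimizer.step()
```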
70

Analyse en locuteurs de collections de documents multimédia / Speaker analysis of multimedia data collections

Le Lan, Gaël 06 October 2017 (has links)
The task of speaker diarization and linking aims at answering the question "who speaks and when?" in a collection of multimedia recordings. It is an essential prerequisite for indexing audiovisual content. The task first consists in segmenting each recording in terms of speakers, before linking the speakers across the collection. The aim is to identify each speaker with a unique anonymous label, even for speakers appearing in multiple recordings, without any prior knowledge of their identity or number. The challenge of cross-recording linking is modeling the within-speaker/across-recording variability: depending on the recording, the same speaker can appear in different acoustic conditions (in a studio, in the street, ...). The thesis proposes two methods to overcome this issue. Firstly, a novel neural variability-compensation method is proposed, trained with the triplet-loss paradigm. Secondly, an iterative unsupervised domain-adaptation process is presented, in which the system exploits the information it acquires while processing data, even if inaccurate, to improve its performance on the target acoustic domain. Moreover, novel ways of analyzing the results in terms of speakers are explored, to understand the actual behavior of a diarization and linking system beyond the well-known Diarization Error Rate (DER). Systems and methods are evaluated on two TV shows of about 40 episodes, using either a global or a longitudinal linking architecture, and state-of-the-art speaker modeling (i-vectors).
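The triplet-loss training mentioned above can be sketched as follows: a compensation network is trained so that embeddings of the same speaker taken from different recordings end up closer to each other than to embeddings of other speakers. The network size, margin and the random tensors standing in for speaker embeddings are illustrative assumptions.

```python
# Minimal sketch of triplet-loss training for speaker-variability compensation:
# anchor and positive come from the same speaker (different recordings), the
# negative from another speaker. Margin, dimensions and data are assumptions.
import torch
import torch.nn as nn

compensator = nn.Sequential(nn.Linear(400, 200), nn.Tanh(), nn.Linear(200, 200))
criterion = nn.TripletMarginLoss(margin=0.5)
optimizer = torch.optim.Adam(compensator.parameters(), lr=1e-3)

# Stand-ins for i-vector-like embeddings of the same/different speakers.
anchor = torch.randn(32, 400)     # speaker A, recording 1
positive = torch.randn(32, 400)   # speaker A, recording 2 (different acoustics)
negative = torch.randn(32, 400)   # speaker B

for _ in range(10):
    optimizer.zero_grad()
    loss = criterion(compensator(anchor), compensator(positive), compensator(negative))
    loss.backward()
    optimizer.step()
```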
