11

Personalized Voice Activated Grasping System for a Robotic Exoskeleton Glove

Guo, Yunfei 05 January 2021 (has links)
Controlling an exoskeleton glove with a highly efficient human-machine interface (HMI), while accurately applying force to each joint, remains an active research topic. This thesis proposes a fast, secure, accurate, and portable solution to control an exoskeleton glove. This state-of-the-art solution includes both hardware and software components. The exoskeleton glove uses a modified serial elastic actuator (SEA) to achieve accurate force sensing. A portable electronic system is designed around the SEA to allow force measurement, force application, slip detection, cloud computing, and a power supply that provides over 2 hours of continuous usage. A voice-control-based HMI, referred to as the integrated trigger-word configurable voice activation and speaker verification system (CVASV), is integrated into the robotic exoskeleton glove to perform high-level control. The CVASV HMI is designed for embedded systems with limited computing power to perform voice activation and speaker verification simultaneously. The system uses MobileNet as the feature extractor to reduce computational cost. The HMI is tuned for better performance in grasping daily objects. This study focuses on applying the CVASV HMI to the exoskeleton glove to perform a stable grasp with force control and slip detection using the SEA-based exoskeleton glove. This research found that using MobileNet as the speaker verification neural network increases processing speed while maintaining similar verification accuracy. / Master of Science / The robotic exoskeleton glove used in this research is designed to help patients with hand disabilities. This thesis proposes a voice-activated grasping system to control the exoskeleton glove. Here, the user can use a self-defined keyword to activate the exoskeleton and use voice to control it. The voice command system can distinguish between different users' voices, thereby improving the safety of the glove control. A smartphone is used to process the voice commands and send them to an onboard computer on the exoskeleton glove. The exoskeleton glove then accurately applies force to each fingertip using a force feedback actuator. This study focused on designing a state-of-the-art human-machine interface to control an exoskeleton glove and perform an accurate and stable grasp.
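For illustration, the verification step described above can be sketched as follows: a MobileNet-style encoder maps a spectrogram image to a fixed-size embedding, and a claimed identity is accepted when the cosine similarity to the enrolled embedding clears a threshold. This is a minimal sketch, not the thesis's actual pipeline; the input shape, the 0.75 threshold, and the embed/verify helpers are illustrative assumptions.

```python
# Minimal sketch: MobileNet-style speaker verification by embedding
# similarity. Threshold and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision

encoder = torchvision.models.mobilenet_v2()   # lightweight CNN backbone
encoder.classifier = torch.nn.Identity()      # expose the 1280-d embedding
encoder.eval()

def embed(spectrogram: torch.Tensor) -> torch.Tensor:
    """Map a (3, 224, 224) spectrogram image to a speaker embedding."""
    with torch.no_grad():
        return encoder(spectrogram.unsqueeze(0)).squeeze(0)

def verify(test_spec: torch.Tensor, enrolled_emb: torch.Tensor,
           threshold: float = 0.75) -> bool:
    """Accept the claimed speaker if cosine similarity clears the threshold."""
    score = F.cosine_similarity(embed(test_spec), enrolled_emb, dim=0)
    return score.item() >= threshold

# Usage: enroll once with the self-defined keyword, then verify each command.
enrolled = embed(torch.randn(3, 224, 224))    # stand-in for a real spectrogram
print(verify(torch.randn(3, 224, 224), enrolled))
```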
12

A comparative analysis of Gaussian mixture models and i-vector for speaker verification under mismatched conditions

Avila, Anderson Raymundo January 2014 (has links)
Advisor: Prof. Dr. Francisco J. Fraga / Master's dissertation - Universidade Federal do ABC, Graduate Program in Information Engineering, 2014. / Most speaker verification systems are based on Gaussian mixture models and, more recently, on the so-called i-vector. Both methods are affected by mismatched test-train conditions, which might be caused by vocal-effort variability, different speaking styles, or channel effects. In this work, we compared the impact of speech rate variation and room reverberation on both methods. We found that performance degradation due to variation in speech rate can be mitigated by adding fast speech samples to the training set, which decreased equal error rates for both Gaussian mixture models and i-vectors. Regarding reverberation, we investigated the performance of both methods when three different reverberation compensation techniques are applied to overcome performance degradation. The results showed that having reverberant background models separated by different levels of reverberation can benefit both methods, with the i-vector providing the best performance in that scenario. Finally, the performance of two auditory-inspired features, mel-frequency cepstral coefficients and the so-called modulation spectrum features, is compared in the presence of room reverberation. For the speaker verification system considered in this work, modulation spectrum features are equally affected by reverberation time and have their performance degraded as the level of reverberation increases.
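Since the comparison above is reported in terms of equal error rates, a minimal sketch of how an EER is computed from genuine and impostor score sets may help; the score distributions below are random stand-ins, not data from this dissertation.

```python
# Minimal sketch: equal error rate (EER), the operating point where the
# false-accept rate (FAR) equals the false-reject rate (FRR).
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Sweep a decision threshold over all scores, return rate at FAR~=FRR."""
    best_gap, eer = 1.0, 0.5
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine trials wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 500)     # higher scores for true speakers
impostor = rng.normal(0.0, 1.0, 5000)
print(f"EER = {equal_error_rate(genuine, impostor):.3f}")
```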
13

Automatic speaker verification on site and by telephone: methods, applications and assessment

Melin, Håkan January 2006 (has links)
Speaker verification is the biometric task of authenticating a claimed identity by means of analyzing a spoken sample of the claimant's voice. The present thesis deals with various topics related to automatic speaker verification (ASV) in the context of its commercial applications, characterized by co-operative users, user-friendly interfaces, and requirements for small amounts of enrollment and test data. A text-dependent system based on hidden Markov models (HMM) was developed and used to conduct experiments, including a comparison between visual and aural strategies for prompting claimants for randomized digit strings. It was found that aural prompts lead to more errors in spoken responses and that visually prompted utterances performed marginally better in ASV, given that enrollment data were visually prompted. High-resolution flooring techniques were proposed for variance estimation in the HMMs, but results showed no improvement over the standard method of using target-independent variances copied from a background model. These experiments were performed on Gandalf, a Swedish speaker verification telephone corpus with 86 client speakers. A complete on-site application (PER), a physical access control system securing a gate in a reverberant stairway, was implemented based on a combination of the HMM and a Gaussian mixture model based system. Users were authenticated by saying their proper name and a visually prompted, random sequence of digits after having enrolled by speaking ten utterances of the same type. An evaluation was conducted with 54 out of 56 clients who succeeded in enrolling. Semi-dedicated impostor attempts were also collected. An equal error rate (EER) of 2.4% was found for this system based on a single attempt per session and after retraining the system on PER-specific development data. On parallel telephone data collected using a telephone version of PER, 3.5% EER was found with landline and around 5% with mobile telephones. Impostor attempts in this case were same-handset attempts. Results also indicate that the distributions of false reject and false accept rates over target speakers are well described by beta distributions. A state-of-the-art commercial system was also tested on PER data, with performance similar to that of the baseline research system.
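The beta-distribution observation in this abstract can be reproduced in outline as follows; this is a minimal sketch with simulated per-speaker false-reject rates standing in for the PER measurements.

```python
# Minimal sketch: fit a beta distribution to per-target error rates and
# check the fit. The rates are simulated stand-ins, not PER data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
per_speaker_frr = rng.beta(2.0, 30.0, size=54)   # one rate per client

# Pin location/scale to the [0, 1] interval natural for a rate.
a, b, loc, scale = stats.beta.fit(per_speaker_frr, floc=0, fscale=1)
print(f"fitted shape parameters: a={a:.2f}, b={b:.2f}")

# Goodness of fit via a Kolmogorov-Smirnov test against the fitted model.
ks = stats.kstest(per_speaker_frr, "beta", args=(a, b, loc, scale))
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
```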
14

Speaker verification incorporating high-level linguistic features

Baker, Brendan J. January 2008 (has links)
Speaker verification is the process of verifying or disputing the claimed identity of a speaker based on a recorded sample of their speech. Automatic speaker verification technology can be applied to a variety of person authentication and identification applications including forensics, surveillance, national security measures for combating terrorism, credit card and transaction verification, automation and indexing of speakers in audio data, voice based signatures, and over-the-phone security access. The ubiquitous nature of modern telephony systems allows for the easy acquisition and delivery of speech signals for processing by an automated speaker recognition system. Traditionally, approaches to automatic speaker verification have involved holistic modelling of low-level acoustic-based features in order to characterise physiological aspects of a speaker such as the length and shape of the vocal tract. Although the use of these low-level features has proved highly successful, there are numerous other sources of speaker specific information in the speech signal that have largely been ignored. In spontaneous and conversational speech, perceptually higher levels of information such as the linguistic content, pronunciation idiosyncrasies, idiolectal word usage, speaking rates and prosody, can also provide useful cues as to the identity of a speaker. The main aim of this work is to explore the incorporation of higher levels of information into the verification process. Specifically, linguistic constructs such as words, syllables and phones are examined for their usefulness as features for text-independent speaker verification. Two main approaches to incorporating these linguistic features are explored. Firstly, the direct modelling of linguistic feature sequences is examined. Stochastic language models are used to model word and phonetic sequences obtained from automatically generated transcripts. Experimentation indicates that significant speaker characterising information is indeed contained in both word and phone-level transcripts. It is shown, however, that model estimation issues arise when limited speech is available for training. This speaker model estimation problem is addressed by employing an adaptive model training strategy that significantly improves the performance and extends the usefulness of both lexical and phonetic techniques to short training length situations. An alternative approach to incorporating linguistic information is also examined. Rather than modelling the high-level features independently of acoustic information, linguistic information is instead used to constrain and aid acoustic-based speaker verification techniques. It is hypothesised that a "text-constrained" approach provides direct benefits by facilitating more detailed modelling, as well as providing useful insight into which articulatory events provide the most useful speaker-characterising information. A novel framework for text-constrained speaker verification is developed. This technique is presented as a generalised framework capable of using different feature sets and modelling paradigms, and is based upon the use of a newly defined pseudo-syllabic segmentation unit. A detailed exploration of the speaker characterising power of both broad phonetic and syllabic events is performed and used to optimise the system configuration.
An evaluation of the proposed text-constrained framework using cepstral features demonstrates the benefits of such an approach over holistic approaches, particularly in extended training length scenarios. Finally, a complete evaluation of the developed techniques on the NIST2005 speaker recognition evaluation database is presented. The benefit of including high-level linguistic information is demonstrated when a fusion of both high- and low-level techniques is performed.
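The first approach, direct modelling of linguistic feature sequences, can be illustrated with a minimal sketch: a per-speaker bigram model over phone labels is scored against a background bigram model, giving a log-likelihood ratio. The transcripts and add-alpha smoothing below are illustrative assumptions, not the thesis's configuration.

```python
# Minimal sketch: phone-sequence bigram language models for speaker
# verification. Transcripts are illustrative stand-ins for recogniser output.
from collections import Counter
from math import log

def train(phones):
    """Collect unigram and bigram counts from a phone transcript."""
    return Counter(phones), Counter(zip(phones, phones[1:]))

def logprob(phones, model, vocab, alpha=1.0):
    """Add-alpha smoothed bigram log-likelihood of a phone sequence."""
    uni, bi = model
    return sum(log((bi[(a, b)] + alpha) / (uni[a] + alpha * len(vocab)))
               for a, b in zip(phones, phones[1:]))

speaker_train = "sil dh ax k ae t sil dh ax d ao g sil".split()
background    = "sil ax b ax t ax k ax sil d ax g ao sil dh".split()
test          = "sil dh ax k ae t sil".split()
vocab = set(speaker_train) | set(background) | set(test)

# Log-likelihood ratio: positive values favour the claimed speaker.
llr = (logprob(test, train(speaker_train), vocab)
       - logprob(test, train(background), vocab))
print(f"LLR = {llr:.2f}")
```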
15

Biometric methods and mobile access control

Fransson, Linda, Jeansson, Therese January 2004 (has links)
Our purpose with this thesis was to find biometric methods that can be used in access control of mobile access. The access control has two parts: firstly, to validate the identity of the caller and, secondly, to ensure the validated user is not changed during the session that follows. No solution to this access control problem is available today, which means that anyone can get access to the mobile phone and the Internet. We have therefore researched a solution that can solve this problem, and also how to ensure that no one else can take over an already validated session. We began by surveying the biometric methods available today to find those best suited for use with a mobile phone. After reviewing them, we chose three methods for further investigation: Fingerprint Recognition, Iris Scan and Speaker Verification. Iris Scan is the method best suited to solve the authentication problem. The reasons for this are many. One of them is the uniqueness and stability of the iris: not even identical twins, or the two eyes of the same individual, share the same iris minutiae. The iris is also well protected behind eyelids, the cornea and the aqueous humour, and is therefore difficult to damage. As for the method itself, it is one of the most secure available today. One reason is that the equal error rate is better than one in a million. This rate can be even better, depending on the Hamming distance threshold, a value that shows how different the saved and the temporary templates are. To solve our session authentication problem, which was to make sure that no one else could take over a connected mobile phone, a sensor plate is the answer. This sensor will be able to sense touch, heat and pulse. These three sensor measurements together will secure a validated session, since the mobile phone will disconnect if the sensor loses its sensor data. There are, however, technological and other challenges to be solved before our proposed solutions become viable. We address some of these issues in our thesis.
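The Hamming distance mentioned above is simple to state precisely: it is the fraction of disagreeing bits between the stored and the freshly acquired iris code, with acceptance below some threshold. A minimal sketch follows; the code length, noise level and 0.32 threshold are illustrative assumptions.

```python
# Minimal sketch: iris matching by fractional Hamming distance between
# binary iris codes. Threshold and sizes are illustrative assumptions.
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two equal-length binary codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size

rng = np.random.default_rng(2)
enrolled = rng.integers(0, 2, 2048)              # stored template
probe = enrolled.copy()
noisy_bits = rng.choice(2048, size=150, replace=False)
probe[noisy_bits] ^= 1                           # simulate acquisition noise

THRESHOLD = 0.32                                 # assumed decision point
dist = hamming_distance(enrolled, probe)
print(f"HD = {dist:.3f} ->", "accept" if dist < THRESHOLD else "reject")
```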
16

Rozpoznávání mluvčího na mobilním telefonu / Speaker Recognition on Mobile Phone

Pešán, Jan January 2011 (has links)
This work focuses on implementing a computer-based speaker recognition system in a mobile phone environment. It describes the principle, operation, and implementation of the recognizer on the Nokia N900 mobile phone.
17

Synchronous HMMs for audio-visual speech processing

Dean, David Brendan January 2008 (has links)
Both human perceptual studies and automatic machine-based experiments have shown that visual information from a speaker's mouth region can improve the robustness of automatic speech processing tasks, especially in the presence of acoustic noise. By taking advantage of the complementary nature of the acoustic and visual speech information, audio-visual speech processing (AVSP) applications can work reliably in more real-world situations than would be possible with traditional acoustic speech processing applications. The two most prominent applications of AVSP for viable human-computer interfaces involve the recognition of the speech events themselves, and the recognition of speakers' identities based upon their speech. However, while these two fields of speech and speaker recognition are closely related, there has been little systematic comparison of the two tasks under similar conditions in the existing literature. Accordingly, the primary focus of this thesis is to compare the suitability of general AVSP techniques for speech or speaker recognition, with a particular focus on synchronous hidden Markov models (SHMMs). The cascading appearance-based approach to visual speech feature extraction has been shown to work well in removing irrelevant static information from the lip region to greatly improve visual speech recognition performance. This thesis demonstrates that these dynamic visual speech features also provide an improvement in speaker recognition, showing that speakers can be visually recognised by how they speak, in addition to their appearance alone. This thesis investigates a number of novel techniques for training and decoding of SHMMs that improve the audio-visual speech modelling ability of the SHMM approach over the existing state-of-the-art joint-training technique. Novel experiments demonstrate that the reliability of the two streams during training is of little importance to the final performance of the SHMM. Additionally, two novel techniques for normalising the acoustic and visual state classifiers within the SHMM structure are demonstrated for AVSP. Fused hidden Markov model (FHMM) adaptation is introduced as a novel method of adapting SHMMs from existing well-performing acoustic hidden Markov models (HMMs). This technique is demonstrated to provide improved audio-visual modelling over the jointly-trained SHMM approach at all levels of acoustic noise for the recognition of audio-visual speech events. However, the close coupling of the SHMM approach is shown to be less useful for speaker recognition, where a late integration approach is demonstrated to be superior.
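The late-integration result for speaker recognition can be pictured with a minimal sketch: each modality is scored independently and the scores are fused with a weight reflecting acoustic reliability. The weights and scores below are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch: late (score-level) fusion of acoustic and visual speaker
# scores. Weights and example scores are illustrative assumptions.
def late_fusion(audio_score: float, video_score: float,
                audio_weight: float) -> float:
    """Weighted-sum fusion of per-modality verification scores."""
    return audio_weight * audio_score + (1.0 - audio_weight) * video_score

# In clean audio, trust the acoustic stream; in noise, lean on the visual.
print(late_fusion(audio_score=1.8, video_score=0.9, audio_weight=0.8))
print(late_fusion(audio_score=0.2, video_score=0.9, audio_weight=0.3))
```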
18

Αναγνώριση ομιλητή και ομιλίας με χρήση κυματιδίων / Speaker and speech recognition using wavelets

Σιαφαρίκας, Μιχαήλ 06 September 2010 (has links)
The main goal of the present thesis is the exploitation of wavelets for improving the performance of speaker and speech recognition systems. In this context, four new speech parameterization methods are introduced: (1) The first method adapts the frequency resolution of the wavelet packet transform to the critical bandwidth of auditory filters, incorporating recent advances in their estimation. (2) The second method introduces a generalization of the wavelet packet transform, named the overlapping wavelet packet transform, which emphasizes those frequency sub-bands where the critical bandwidth changes from a finer to a coarser value. (3) The third method evaluates the contribution of each of eight non-overlapping frequency sub-bands, into which the Nyquist interval is divided, to the speaker recognition task, and constructs a wavelet packet transform which adapts its frequency resolution according to the performance of each sub-band. (4) The fourth method introduces a new technique for seeking and selecting the best basis among all wavelet packet transforms available for the speaker recognition task, using the EER as the criterion.
The aforementioned four speech signal parameterizations were evaluated on the speaker verification system WCL-1 of the Wire Communications Laboratory, University of Patras, using the speaker recognition corpora POLYCOST and NIST, and their superiority was proven over previous wavelet-based parameterizations as well as the widely used Mel Frequency Cepstral Coefficients (MFCC). Among the four proposed methods, the second parameterization technique exhibited the best performance. Furthermore, the most important wavelet properties are thoroughly analyzed, the optimal wavelet is selected for the representation of the speech signal, and this choice is experimentally verified. Finally, the first two parameterization methods were further modified and extended appropriately for application to the speech recognition task, where their superiority was proven over traditional and widely used speech parameterization techniques based on the Fourier transform. The main conclusion of the present doctoral thesis is that wavelets, and specifically wavelet packet transforms, can be used successfully to improve the performance of speaker and speech recognition systems.
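A flavour of wavelet-packet parameterization can be given with a minimal sketch: decompose a speech frame to a fixed level and take log sub-band energies. Note this uses a uniform tree with an assumed db4 wavelet, whereas the thesis adapts the tree to the critical bands of hearing.

```python
# Minimal sketch: log sub-band energies from a uniform wavelet-packet
# decomposition (the thesis adapts the tree; this fixed tree is assumed).
import numpy as np
import pywt

def wavelet_packet_features(frame: np.ndarray, level: int = 4) -> np.ndarray:
    """Log energy of each frequency-ordered wavelet-packet sub-band."""
    wp = pywt.WaveletPacket(data=frame, wavelet="db4",
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return np.log(energies + 1e-10)

rng = np.random.default_rng(3)
frame = rng.standard_normal(400)                 # stand-in 25 ms frame, 16 kHz
print(wavelet_packet_features(frame).shape)      # 2**4 = 16 features
```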
19

Modèles acoustiques à structure temporelle renforcée pour la vérification du locuteur embarquée / Reinforced temporal structure of acoustic models for speaker recognition

Larcher, Anthony 24 September 2009 (has links)
Speaker verification aims to validate or invalidate the identity of a person by using his or her speech characteristics. Integration of an automatic speaker verification engine on embedded devices has to respect two types of constraint, namely: limited material resources, such as memory and computational power, and limited speech, for both training and test sequences. Current state-of-the-art systems do not take advantage of the temporal structure of speech. We propose to use this information, through a user-customised password framework, in order to compensate for the short duration speech signals that are common in the given scenario. A preliminary study allows us to evaluate the influence of text-dependency on the state-of-the-art GMM/UBM (Gaussian Mixture Model / Universal Background Model) approach. By constraining this approach, usually dedicated to text-independent speaker recognition, we show that a lexical constraint allows a relative reduction of 30% in error rate when impostors do not know the client password. We introduce a specific acoustic architecture which takes advantage of the temporal structure of speech through a low-cost user-customised password framework. This three-stage hierarchical architecture allows a layered specialization of the acoustic models. The upper layer, a classical UBM, models the general acoustic space. The middle layer contains the text-independent specific characteristics of each speaker; these text-independent speaker models are obtained by classical GMM/UBM adaptation. The text-independent speaker model is then used to obtain a left-right Semi-Continuous Hidden Markov Model (SCHMM) with the goal of harnessing the Temporal Structure Information (TSI) of the utterance chosen by the given speaker; each state of a password's SCHMM is estimated, relative to the speaker's text-independent model, by adapting only the weight parameters of that GMM's Gaussian distributions. This TSI is shown to reduce the error rate by 60% when impostors do not know the client password.
In order to reinforce the relaxed synchronisation between states and frames due to the SCHMM structure of the TSI modelling, we propose to embed external information during the audio decoding by adding further time-constraints. Synchronisation points, extracted by a process independent of the acoustic decoding, are used to constrain model training during enrollment and the Viterbi decoding during test. Such an additional modality could also be used to add different structural information and to thwart impostor attacks such as playback; thanks to the specific aspects of our system, this aided decoding shows an acceptable level of complexity. Experiments were performed on the BIOMET part of the MyIdea database using external information gathered from an automatic phonetic alignment. We show that adding a synchronisation constraint to our acoustic approach degrades impostor scores and decreases the error rate by 20% (relative) when impostors do not know the client password; when impostors know the passwords, performance remains similar to that of state-of-the-art approaches. Extracting the synchronisation constraint from a video stream seems difficult to reconcile with the limited resources of the embedded context. We performed a first exploration of a simple, resource-respecting treatment of the video stream to constrain the acoustic process, but it did not allow us to extract pertinent information. This work nevertheless opens numerous perspectives on the use of structural information in speaker verification and on speaker recognition approaches assisted by the video modality.
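The weight-only re-estimation at the heart of the SCHMM layer can be sketched in a few lines: keep the Gaussians of the speaker's GMM and adapt only the mixture weights from the frames aligned to one state, MAP-style. The relevance factor, feature dimension and data below are illustrative assumptions, not the thesis's configuration.

```python
# Minimal sketch: MAP re-estimation of GMM mixture weights only, as used to
# specialize each password-state from the speaker GMM. Data are stand-ins.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(rng.standard_normal((2000, 12)))         # stand-in speaker/UBM model

def adapt_weights(model, frames, relevance=16.0):
    """Blend data-driven and prior weights by posterior occupation counts."""
    counts = model.predict_proba(frames).sum(axis=0)   # soft counts per Gaussian
    alpha = counts / (counts + relevance)              # data-dependent mixing
    weights = alpha * (counts / len(frames)) + (1 - alpha) * model.weights_
    return weights / weights.sum()                     # renormalize

state_frames = rng.standard_normal((120, 12)) + 0.5    # frames for one state
print(adapt_weights(gmm, state_frames).round(3))
```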
20

Verificação de locutores independente de texto: uma análise de robustez a ruído / Text-independent speaker verification: an analysis of noise robustness

PINHEIRO, Hector Natan Batista 25 February 2015 (has links)
The personal identification of individuals is a task executed millions of times every day by organizations from diverse fields. Questions such as "Who is this individual?" or "Is this person who he or she claims to be?" are constantly asked by organizations in financial services, health care, e-commerce, telecommunication systems and governments. Biometric identification is the process of identifying people using their physiological or behavioral characteristics. These characteristics are generally known as biometrics, and examples include face, fingerprint, iris, handwriting and speech. Speaker recognition is a biometric modality which performs personal identification using speaker-specific information from the speech. This work focuses on the development of text-independent speaker verification systems. In these systems, speech from an individual is used to verify the claimed identity of that individual. Furthermore, the verification must occur independently of the pronounced word or phrase. The main challenge in the development of speaker recognition systems comes from the mismatches which may occur in the acquisition of the speech signals. The techniques proposed to mitigate the mismatch effects are referred to as compensation methods. They may operate in three domains: in the feature extraction process, in the estimation of the speaker models, and in the computation of the decision score. Besides presenting a wide description of the main techniques used in the development of text-independent speaker verification systems, this work describes the main feature-, model- and score-based compensation methods. In the experiments, this work shows comprehensive comparisons between the conventional techniques and the alternative compensation methods. Furthermore, two compensation methods are proposed: one operates in the model domain and the other in the score domain. The proposed score-domain compensation method is based on the Normal cumulative distribution function and, in some contexts, outperformed the main score-domain compensation techniques. The model-domain compensation technique proposed in this work is based on a method from the literature which combines two concepts: multi-condition training and Missing Data Theory. The formulation proposed by the original authors is based on Posterior Union models and is not completely appropriate for the text-independent speaker verification task. This work proposes a more appropriate formulation for this context which combines the concepts used by those authors with a type of modeling using Universal Background Models (UBMs). The proposed method outperformed the usual GMM-UBM modeling technique, based on Gaussian Mixture Models (GMMs).
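The proposed score-domain compensation can be pictured with a minimal sketch in the spirit of the description above: raw scores are mapped through the cumulative distribution of a Normal model of impostor scores. The impostor cohort and fitted parameters below are illustrative assumptions, not the dissertation's actual method details.

```python
# Minimal sketch: score normalization through the Normal cumulative
# distribution of impostor scores. Cohort scores are stand-ins.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
impostor_scores = rng.normal(-1.0, 0.8, 300)     # stand-in cohort scores
mu, sigma = impostor_scores.mean(), impostor_scores.std()

def normalized_score(raw: float) -> float:
    """Probability that a raw score exceeds the impostor distribution."""
    return norm.cdf(raw, loc=mu, scale=sigma)

print(f"{normalized_score(0.9):.3f}")            # near 1.0 -> likely genuine
```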
