  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Contribution à l'évaluation opérationnelle des systèmes biométriques multimodaux / Contribution to the operational evaluation of multimodal biometric systems

Cabana, Antoine 28 November 2018 (has links)
The development and proliferation of connected devices, in particular smartphones, requires the deployment of authentication mechanisms. For the sake of ergonomics, manufacturers integrate biometric systems on a large scale to guarantee the identity of the device holder and thereby authorize access to sensitive applications and functions (payments, e-banking, access to personal data such as electronic correspondence). To guarantee an adequate match between these authentication systems and their uses, an evaluation process must be put in place. Improving biometric performance is an important challenge for integrating such authentication solutions into environments with strong performance requirements, particularly regarding security. To improve the performance and reliability of authentication, several biometric sources may be combined in a fusion process; multimodal biometrics, in particular, fuses the information extracted from different biometric modalities. In this thesis, the evaluation of operational biometric systems is studied and an implementation is presented. A second contribution studies the quality estimation of speech samples in order to predict recognition performance.
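The multimodal fusion described above can happen at several levels; a common, simple variant is score-level fusion. The sketch below is a generic illustration rather than the thesis's actual method: per-modality match scores are min-max normalized and combined with a weighted sum, and all names and values are invented for the example.

```python
def minmax_normalize(scores):
    """Map raw matcher scores to [0, 1] so modalities become comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse_scores(modality_scores, weights):
    """Score-level fusion: weighted sum of normalized per-modality scores.

    modality_scores: one score list per modality, aligned by candidate.
    weights: one weight per modality (should sum to 1).
    """
    normalized = [minmax_normalize(s) for s in modality_scores]
    n = len(modality_scores[0])
    return [sum(w * norm[i] for w, norm in zip(weights, normalized))
            for i in range(n)]

# Toy match scores from two modalities for three candidate identities.
face_scores = [0.2, 0.9, 0.4]
voice_scores = [10.0, 30.0, 20.0]
fused = fuse_scores([face_scores, voice_scores], [0.5, 0.5])
best = fused.index(max(fused))  # candidate 1 scores highest in both modalities
```

Normalization matters here because the raw score ranges (0 to 1 vs. tens) are not directly comparable.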
32

Speaker Identification Based On Discriminative Vector Quantization And Data Fusion

Zhou, Guangyu 01 January 2005 (has links)
Speaker Identification (SI) approaches based on discriminative Vector Quantization (VQ) and data fusion techniques are presented in this dissertation. The SI approaches based on Discriminative VQ (DVQ) proposed here are the DVQ for SI (DVQSI) method, the DVQSI with unique speech feature vector space segmentation for each speaker pair (DVQSI-U), and the Adaptive DVQSI (ADVQSI) method. The difference between the probability distributions of the speech feature vector sets from various speakers (or speaker groups) is called the interspeaker variation; it is a measure of the template differences between speakers (or speaker groups). All of the DVQ-based techniques presented in this contribution take advantage of the interspeaker variation, which is not exploited by previously proposed techniques that employ traditional VQ for SI (VQSI). All DVQ-based techniques have two modes, a training mode and a testing mode. In the training mode, the speech feature vector space is first divided into a number of subspaces based on the interspeaker variations. Then, a discriminative weight is calculated for each subspace of each speaker or speaker pair in the SI group, based on the interspeaker variation. By being assigned larger discriminative weights, the subspaces with higher interspeaker variation play more important roles in SI than those with lower interspeaker variation. In the testing mode, discriminatively weighted average VQ distortions, instead of equally weighted average VQ distortions, are used to make the SI decision. The DVQ-based techniques lead to higher SI accuracies than VQSI. The DVQSI and DVQSI-U techniques consider the interspeaker variation for each speaker pair in the SI group. In DVQSI, the speech feature vector space segmentation is exactly the same for all speaker pairs, whereas in DVQSI-U each speaker pair is treated individually in the segmentation.
In both DVQSI and DVQSI-U, the discriminative weights for each speaker pair are calculated by trial and error. The SI accuracies of DVQSI-U are higher than those of DVQSI, at the price of a much higher computational burden. ADVQSI explores the interspeaker variation between each speaker and all speakers in the SI group. In contrast with DVQSI and DVQSI-U, ADVQSI segments the feature vector space per speaker rather than per speaker pair, based on the interspeaker variation between each speaker and all the speakers in the SI group, and adaptive techniques are used to compute the discriminative weights for each speaker. The SI accuracies of ADVQSI and DVQSI-U are comparable, but the computational complexity of ADVQSI is much lower than that of DVQSI-U. In addition, a novel algorithm is proposed to convert the raw distortion outputs of template-based SI classifiers into compatible probability measures. After this conversion, data fusion techniques at the measurement level can be applied to SI. In the proposed technique, stochastic models of the distortion outputs are estimated; the posterior probabilities of the unknown utterance belonging to each speaker are then calculated, and compatible probability measures are assigned based on these posterior probabilities. The proposed technique leads to better SI performance at the measurement level than existing approaches.
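The testing-mode decision rule described above (weighted rather than equally weighted average VQ distortions) can be sketched minimally as follows. The distortion and weight values are invented, and the training-mode estimation of the weights is omitted; this is an illustration of the decision rule, not the dissertation's implementation.

```python
def weighted_vq_decision(distortions, weights):
    """Pick the speaker with the smallest discriminatively weighted
    average VQ distortion.

    distortions[s][k]: VQ distortion of the test utterance against
                       speaker s's codebook in feature subspace k.
    weights[s][k]:     discriminative weight of subspace k for speaker s
                       (larger = more interspeaker variation there).
    """
    scores = []
    for spk_d, spk_w in zip(distortions, weights):
        total_w = sum(spk_w)
        scores.append(sum(d * w for d, w in zip(spk_d, spk_w)) / total_w)
    return scores.index(min(scores))

# Two speakers, three subspaces. Subspace 0 discriminates well between the
# speakers, so it carries the largest weight and dominates the decision.
distortions = [[0.2, 0.8, 0.9],   # speaker 0
               [0.7, 0.7, 0.8]]   # speaker 1
weights = [[3.0, 1.0, 1.0],
           [3.0, 1.0, 1.0]]
decision = weighted_vq_decision(distortions, weights)  # speaker 0
```

With equal weights the two speakers would be much closer; the discriminative weighting amplifies the subspace where the templates actually differ.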
33

Investigating Speaker Features From Very Short Speech Records

Berg, Brian LaRoy 11 September 2001 (has links)
A procedure is presented that is capable of extracting various speaker features and is of particular value for analyzing records containing single words and shorter segments of speech. By taking advantage of the fast convergence properties of adaptive filtering, the approach is capable of modeling the nonstationarities due to both vocal tract and vocal cord dynamics. Specifically, the procedure extracts the vocal tract estimate from within the closed glottis interval and uses it to obtain a time-domain glottal signal. The procedure is quite simple, requires minimal manual intervention (only in cases of inadequate pitch detection), and is distinctive in that it derives both the vocal tract and glottal signal estimates directly from the time-varying filter coefficients rather than from the prediction error signal. Using this procedure, several glottal signals are derived from human and synthesized speech and analyzed to demonstrate the glottal waveform modeling performance and the kinds of glottal characteristics obtained with it. Finally, the procedure is evaluated using automatic speaker identity verification. / Ph. D.
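The dissertation's closed-phase, adaptive-filter procedure is more elaborate than anything shown here, but the underlying idea of deriving an excitation (glottal) estimate by inverse filtering with an estimated vocal-tract model can be sketched with fixed-frame LPC. This is a simplified stand-in under stated assumptions, not the author's method: a first-order model is fitted to a synthetic one-pole "vocal tract" response, and the excitation is recovered as the prediction residual.

```python
def autocorr(x, maxlag):
    """Autocorrelation of x at lags 0..maxlag."""
    return [sum(x[i] * x[i + k] for i in range(len(x) - k))
            for k in range(maxlag + 1)]

def levinson_durbin(r, order):
    """Solve the normal equations for the prediction filter
    A(z) = 1 + a[1] z^-1 + ... + a[order] z^-order."""
    a = [1.0] + [0.0] * order
    e = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / e
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        a = new_a
        e *= (1.0 - k * k)
    return a

def inverse_filter(x, a):
    """Run the signal through A(z); the residual approximates the
    excitation when A(z) captures the vocal tract."""
    order = len(a) - 1
    return [x[n] + sum(a[k] * x[n - k] for k in range(1, min(order, n) + 1))
            for n in range(len(x))]

# Synthetic "speech": impulse response of a one-pole tract 1 / (1 - 0.9 z^-1).
x = [0.9 ** n for n in range(200)]
a = levinson_durbin(autocorr(x, 1), 1)   # a[1] comes out close to -0.9
residual = inverse_filter(x, a)          # close to a single impulse at n = 0
```

Real speech would require frame selection (the closed-glottis interval, as above), windowing, and a higher model order.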
34

Automatic speaker verification on site and by telephone: methods, applications and assessment

Melin, Håkan January 2006 (has links)
Speaker verification is the biometric task of authenticating a claimed identity by analyzing a spoken sample of the claimant's voice. The present thesis deals with various topics related to automatic speaker verification (ASV) in the context of its commercial applications, characterized by co-operative users, user-friendly interfaces, and requirements for small amounts of enrollment and test data. A text-dependent system based on hidden Markov models (HMM) was developed and used to conduct experiments, including a comparison between visual and aural strategies for prompting claimants for randomized digit strings. It was found that aural prompts lead to more errors in spoken responses and that visually prompted utterances performed marginally better in ASV, given that enrollment data were visually prompted. High-resolution flooring techniques were proposed for variance estimation in the HMMs, but results showed no improvement over the standard method of using target-independent variances copied from a background model. These experiments were performed on Gandalf, a Swedish speaker verification telephone corpus with 86 client speakers. A complete on-site application (PER), a physical access control system securing a gate in a reverberant stairway, was implemented based on a combination of the HMM-based system and a system based on Gaussian mixture models. Users were authenticated by saying their proper name and a visually prompted, random sequence of digits, after having enrolled by speaking ten utterances of the same type. An evaluation was conducted with 54 of the 56 clients who succeeded in enrolling. Semi-dedicated impostor attempts were also collected. An equal error rate (EER) of 2.4% was found for this system, based on a single attempt per session and after retraining the system on PER-specific development data. On parallel telephone data collected using a telephone version of PER, an EER of 3.5% was found with landline telephones and around 5% with mobile telephones.
Impostor attempts in this case were same-handset attempts. Results also indicate that the distributions of false reject and false accept rates over target speakers are well described by beta distributions. A state-of-the-art commercial system was also tested on PER data, with performance similar to that of the baseline research system.
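The EER figures quoted above are the operating point where the false accept and false reject rates coincide. A minimal sketch of how an EER can be computed from genuine and impostor score sets (toy scores, and a discrete threshold scan rather than ROC interpolation, so this is an approximation, not the evaluation protocol used in the thesis):

```python
def error_rates(genuine, impostor, threshold):
    """FRR: fraction of genuine scores rejected; FAR: fraction of
    impostor scores accepted, at the given decision threshold."""
    frr = sum(1 for s in genuine if s < threshold) / len(genuine)
    far = sum(1 for s in impostor if s >= threshold) / len(impostor)
    return frr, far

def equal_error_rate(genuine, impostor):
    """Scan candidate thresholds; return the rate at the point where
    FAR and FRR are closest (a discrete approximation of the EER)."""
    best = None
    for t in sorted(set(genuine + impostor)):
        frr, far = error_rates(genuine, impostor, t)
        gap = abs(frr - far)
        if best is None or gap < best[0]:
            best = (gap, (frr + far) / 2)
    return best[1]

# Toy verification scores (higher = more likely the claimed speaker).
genuine = [0.9, 0.8, 0.75, 0.6, 0.4]
impostor = [0.5, 0.45, 0.3, 0.2, 0.1]
eer = equal_error_rate(genuine, impostor)  # 0.2, i.e. 20% on this toy data
```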
35

AUDIO SCENE SEGMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES

Unnikrishnan, Harikrishnan 01 January 2010 (has links)
An auditory stream denotes the abstract effect a source creates in the mind of the listener. An auditory scene consists of many streams, which the listener uses to analyze and understand the environment. Computer analyses that attempt to mimic human analysis of a scene must first perform Audio Scene Segmentation (ASS). ASS finds applications in surveillance, automatic speech recognition, and human-computer interfaces. Microphone arrays can be employed to extract streams corresponding to spatially separated sources. However, when a source moves to a new location during a period of silence, such a system loses track of the source, resulting in multiple spatially localized streams for the same source. This thesis proposes to identify local streams associated with the same source using auditory features extracted from the beamformed signal. ASS based on spatial cues is performed first; auditory features are then extracted, and segments are linked together based on the similarity of their feature vectors. An experiment was carried out with two simultaneous speakers, using a classifier to label the localized streams as belonging to one speaker or the other. The best performance, an accuracy of 96.2%, was achieved when pitch appended to Gammatone Frequency Cepstral Coefficients (GFCC) was used as the feature vector.
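Linking localized streams by feature similarity, as proposed above, can be illustrated with a toy nearest-anchor assignment over feature vectors. The thesis uses pitch plus GFCC features and a trained classifier; this sketch substitutes plain cosine similarity against per-speaker centroids, with invented numbers.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def link_segments(segment_features, anchors):
    """Assign each localized stream segment to the anchor (per-speaker
    feature centroid) it resembles most."""
    return [max(range(len(anchors)), key=lambda k: cosine(f, anchors[k]))
            for f in segment_features]

anchors = [[1.0, 0.1, 0.0],   # speaker A centroid (toy 3-d features)
           [0.0, 0.2, 1.0]]   # speaker B centroid
segments = [[0.9, 0.2, 0.1], [0.1, 0.1, 0.8], [1.1, 0.0, 0.2]]
labels = link_segments(segments, anchors)  # segments 0 and 2 link to speaker A
```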
36

Sistema de reconhecimento de locutor utilizando redes neurais artificiais / Artificial neural networks speaker recognition system

Adami, Andre Gustavo January 1997 (has links)
This work combines recent technologies from the promising field of Computational Intelligence with the traditional field of Digital Signal Processing, with the goal of developing a specific Voice Processing application: speaker recognition. Mastery of speaker recognition technology enables numerous applications, mainly related to security and access control, covering both the identification and the verification of different speakers. There are two types of speaker recognition: identification (recognizing the speaker within a population) and verification (checking whether a claimed identity is genuine). The speaker recognition process can be divided into two main phases: extraction of the basic characteristics of the voice signal (acquisition, pre-processing, and extraction of the signal's characteristic parameters) and classification (deciding or verifying the speaker's identity from the sampled signal). Several techniques for representing the signal are presented, such as spectral analysis, energy measures, autocorrelation, and LPC (Linear Predictive Coding), as well as techniques for extracting characteristics from the signal, such as the fundamental frequency and the formant frequencies. In the extraction phase, this work sought to apply recent advances in Digital Signal Processing to the proposed problem: the fundamental frequency and the formant frequencies were used as parameters that identify the speaker, the former obtained through autocorrelation and the latter through the Fourier transform. These parameters were extracted from the portion of speech where the vocal tract exhibits coarticulation between two vocalic sounds, an approach intended to capture the characteristics of this change in the vocal apparatus. In the classification phase, several conventional methods can be used, such as Markov chains and Euclidean distance; in addition, Artificial Neural Networks (ANNs) are considered powerful classifiers and have already been applied to problems involving the classification of voice signals, and the models most used for speaker recognition are studied here. The main subject of this Master's dissertation is thus the implementation of a speaker recognition system using Artificial Neural Networks for speaker classification: the Multi-Layer Perceptron (MLP) architecture was investigated in conjunction with the backpropagation learning algorithm. Characteristics extracted from the voice signal were used as input parameters to the network, and the output of the MLP, previously trained with the speakers' features, returns the authenticity of the signal. Tests were performed with 10 different male speakers aged between 18 and 24 years, with very promising results. An approach to implementing a speaker recognition system with conventional methods for the speaker classification process is also presented, namely Dynamic Time Warping (DTW) and Vector Quantization (VQ).
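The autocorrelation-based fundamental-frequency estimation mentioned in the abstract can be sketched as follows. This is a textbook illustration on a pure sine tone, not the dissertation's implementation; real speech would additionally need framing, windowing, and voiced/unvoiced decisions.

```python
import math

def estimate_f0(frame, sample_rate, fmin=60.0, fmax=400.0):
    """Estimate the fundamental frequency as the lag with maximum
    autocorrelation inside the plausible pitch-period range."""
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_val = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, len(frame) - 1) + 1):
        val = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if val > best_val:
            best_val, best_lag = val, lag
    return sample_rate / best_lag

sr = 8000
frame = [math.sin(2 * math.pi * 100 * n / sr) for n in range(800)]  # 100 Hz tone
f0 = estimate_f0(frame, sr)  # close to 100 Hz
```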
38

Arcabouço para reconhecimento de locutor baseado em aprendizado não supervisionado / Speaker recognition framework based on unsupervised learning

Campos, Victor de Abreu [UNESP] 31 August 2017 (has links)
The huge amount of multimedia content accumulated daily has demanded the development of effective retrieval approaches. In this context, speaker recognition tools capable of automatically identifying a person through their voice are of great relevance. This work presents a novel speaker recognition approach, modelled as a retrieval scenario and using recent unsupervised learning methods. The proposed approach considers Mel-Frequency Cepstral Coefficients (MFCCs) and Perceptual Linear Prediction coefficients (PLPs) as speaker features, combined with multiple probabilistic modelling approaches, namely Vector Quantization, Gaussian Mixture Models, and i-vectors, to compute distances among audio recordings. Next, rank-based unsupervised learning methods are used to improve the effectiveness of the retrieval results and, based on a K-Nearest Neighbors classifier, an identity decision is taken. Experiments were conducted on three public datasets from different scenarios, carrying noise from various sources. Experimental results demonstrate that the proposed approach can achieve very high effectiveness: relative effectiveness gains of up to +318% were obtained by the unsupervised learning procedure in the speaker retrieval task, and relative accuracy gains of up to +7.05% in the speaker identification task, considering recordings from different domains. / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP): 2015/07934-4
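The final K-Nearest Neighbors identity decision described above can be sketched as a majority vote over the closest gallery recordings. This toy version uses squared Euclidean distance on made-up feature vectors, standing in for the rank-refined distances the dissertation actually uses:

```python
from collections import Counter

def knn_identify(query, gallery, k=3, dist=None):
    """Identify a speaker by majority vote among the k gallery
    recordings closest to the query feature vector.

    gallery: list of (speaker_label, feature_vector) pairs.
    """
    if dist is None:
        dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    ranked = sorted(gallery, key=lambda item: dist(query, item[1]))
    votes = Counter(label for label, _ in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy 2-d "voice features" for five enrolled recordings.
gallery = [("alice", [0.0, 0.1]), ("alice", [0.1, 0.0]),
           ("bob", [1.0, 0.9]), ("bob", [0.9, 1.0]), ("alice", [0.2, 0.2])]
who = knn_identify([0.05, 0.05], gallery)  # the three nearest are all "alice"
```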
40

Identification nommée du locuteur : exploitation conjointe du signal sonore et de sa transcription / Named identification of speakers : using audio signal and rich transcription

Jousse, Vincent 04 May 2011 (has links)
Automatic speech processing is a field that encompasses a large number of tasks, from automatic speaker recognition to named entity detection and the transcription of the audio signal into words. Automatic speech processing techniques can extract many kinds of information from audio documents (meetings, broadcasts, etc.), such as the transcription, annotations (the type of broadcast, the places mentioned, etc.), or information about the speakers (speaker changes, speaker gender). All of this information can be exploited by automatic indexing techniques that make it possible to index large document collections. The work presented in this thesis addresses the automatic indexing of speakers in French audio documents. More precisely, we seek to identify the different turns of a speaker and to name them with the speaker's first and last name, a process known as named identification of the speaker (INL). The particularity of this work lies in the joint use of the audio signal and its word transcription to name the speakers of a document: each speaker's first and last name is extracted from the document itself (more precisely, from its enriched transcription) before being assigned to one of the document's speakers. We begin by recalling the context and previous work on INL before presenting Milesin, the system developed during this thesis. The contribution of this work lies first in the use of an automatic named entity detector (LIA_NE) to extract first name / last name pairs from the transcription. The system then relies on the theory of belief functions to assign the names to the speakers of the document, taking into account the various conflicts that may arise, and an optimal assignment algorithm is proposed. This system achieves an error rate of between 12 and 20% on reference transcriptions (produced manually), depending on the corpus used. We then present the advances made and the limits revealed by this work, including a first study of the impact of using fully automatic transcriptions on Milesin.
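The optimal assignment step (matching extracted names to speakers) can be illustrated with a brute-force search over permutations. The thesis derives its affinities from belief functions, whereas the scores below are invented, and exhaustive search is only viable for a handful of speakers; a Hungarian-algorithm solver would be the scalable alternative.

```python
from itertools import permutations

def best_assignment(score):
    """Exhaustively search the one-to-one mapping of extracted names to
    speakers that maximizes the summed affinity scores.

    score[i][j]: affinity between name i and speaker j (square matrix).
    """
    n = len(score)
    best_perm, best_total = None, float("-inf")
    for perm in permutations(range(n)):
        total = sum(score[i][perm[i]] for i in range(n))
        if total > best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

# Toy affinities between 3 extracted names and 3 speaker clusters.
score = [[0.7, 0.2, 0.1],
         [0.1, 0.1, 0.8],
         [0.3, 0.6, 0.1]]
perm, total = best_assignment(score)  # name 0 -> speaker 0, 1 -> 2, 2 -> 1
```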
