41

Deep Learning Based Array Processing for Speech Separation, Localization, and Recognition

Wang, Zhong-Qiu 15 September 2020
No description available.
42

Uncertainty learning for noise robust ASR

Tran, Dung Tien 20 November 2015
This thesis focuses on noise robust automatic speech recognition (ASR) and comprises two parts. First, we focus on better handling of uncertainty to improve ASR performance in noisy environments. Second, we present a method to accelerate the training of a neural network using an auxiliary function technique. In the first part, multichannel speech enhancement is applied to the noisy input speech. The posterior distribution of the underlying clean speech is then estimated and represented by its mean and its covariance matrix, or uncertainty. We show how to propagate the diagonal uncertainty covariance matrix in the spectral domain through the feature computation stage to obtain the full uncertainty covariance matrix in the feature domain.
Uncertainty decoding exploits this posterior distribution to dynamically modify the acoustic model parameters during decoding. The uncertainty decoding rule simply consists of adding the uncertainty covariance matrix of the enhanced features to the variance of each Gaussian component. We then propose two uncertainty estimators, based on fusion and on nonparametric estimation, respectively. To build a new estimator, we consider a linear combination of existing uncertainty estimators or kernel functions. The combination weights are estimated generatively by minimizing a divergence measure with respect to the oracle uncertainty. The divergence measures used are weighted versions of the Kullback-Leibler (KL), Itakura-Saito (IS), and Euclidean (EU) divergences. Due to the inherent nonnegativity of uncertainty, this estimation problem can be seen as an instance of weighted nonnegative matrix factorization (NMF). In addition, we propose two discriminative uncertainty estimators based on a linear or nonlinear mapping of the generatively estimated uncertainty. This mapping is trained so as to maximize the boosted maximum mutual information (bMMI) criterion. We compute the derivative of this criterion using the chain rule and optimize it by stochastic gradient descent. In the second part, we introduce a new learning rule for neural networks based on an auxiliary function technique that requires no parameter tuning. Instead of minimizing the objective function directly, this technique minimizes a quadratic auxiliary function which is introduced recursively layer by layer and which has a closed-form optimum. Based on the properties of this auxiliary function, a monotonic decrease of the objective function is guaranteed.
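As a rough illustration of the uncertainty decoding rule described in this abstract, the sketch below inflates each Gaussian component's variance by the feature-domain uncertainty before evaluating the GMM log-likelihood. It is a minimal sketch rather than the thesis's implementation: all function and variable names are hypothetical, and a diagonal uncertainty covariance is assumed for simplicity (the thesis propagates a full covariance matrix).

```python
import numpy as np

def uncertainty_decoding_loglik(x, means, variances, weights, uncertainty_diag):
    """Log-likelihood of an enhanced feature vector under a diagonal-covariance
    GMM, with the feature-domain uncertainty added to each Gaussian's variance.

    x                : (D,) enhanced feature vector (posterior mean)
    means, variances : (K, D) per-component Gaussian parameters
    weights          : (K,) mixture weights
    uncertainty_diag : (D,) diagonal of the uncertainty covariance matrix
    """
    var = variances + uncertainty_diag          # the decoding rule: add uncertainty
    log_norm = -0.5 * np.sum(np.log(2.0 * np.pi * var), axis=1)
    log_quad = -0.5 * np.sum((x - means) ** 2 / var, axis=1)
    log_comp = np.log(weights) + log_norm + log_quad
    return np.logaddexp.reduce(log_comp)        # log of the sum over components
```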
43

Multisensor Segmentation-based Noise Suppression for Intelligibility Improvement in MELP Coders

Demiroglu, Cenk 18 January 2006
This thesis investigates the use of an auxiliary sensor, the GEMS device, for improving the quality of noisy speech and for designing noise preprocessors for MELP speech coders. The use of auxiliary sensors for noise-robust ASR applications is also investigated, in order to develop speech enhancement algorithms that use acoustic-phonetic properties of the speech signal. A Bayesian risk minimization framework is developed that can incorporate the acoustic-phonetic properties of speech sounds and knowledge of human auditory perception into the speech enhancement framework. Two noise suppression systems are presented using the ideas developed in this mathematical framework. In the first system, an aharmonic comb filter is proposed for voiced speech, in which low-energy frequencies are severely suppressed while high-energy frequencies are suppressed only mildly. The proposed system outperformed an MMSE estimator in subjective listening tests and in DRT intelligibility tests for MELP-coded noisy speech. The effect of aharmonic comb filtering on the linear predictive coding (LPC) parameters is analyzed using a missing-data approach. Suppressing the low-energy frequencies without any modification of the high-energy frequencies is shown to improve the LPC spectrum under the Itakura-Saito distance measure. The second system combines the aharmonic comb filter with the acoustic-phonetic properties of speech to improve the intelligibility of MELP-coded noisy speech. The noisy speech signal is segmented into broad-level sound classes using a multi-sensor automatic segmentation/classification tool, and each sound class is enhanced differently based on its acoustic-phonetic properties. The proposed system is shown to outperform both the MELPe noise preprocessor and the aharmonic comb filter alone in intelligibility tests when used in concatenation with the MELP coder. Since the second noise suppression system uses an automatic segmentation/classification algorithm, exploiting the GEMS signal in an automatic segmentation/classification task is also addressed using an ASR approach. Current ASR engines can segment and classify speech utterances in a single pass; however, they are sensitive to ambient noise. Features extracted from the GEMS signal can be fused with the noisy MFCC features to improve the noise robustness of the ASR system. In the first phase, a voicing feature is extracted from the clean speech signal and fused with the MFCC features. The actual GEMS signal could not be used in this phase because of insufficient sensor data to train the ASR system. Tests are done using the Aurora2 noisy digits database. The speech-based voicing feature is found to be effective at around 10 dB, but below 10 dB its effectiveness rapidly drops with decreasing SNR because of the severe distortions in the speech-based features at these SNRs. Hence, a novel system is proposed that treats the MFCC features in a speech frame as missing data if the global SNR is below 10 dB and the speech frame is unvoiced. If the global SNR is above 10 dB or the speech frame is voiced, both the MFCC features and the voicing feature are used. The proposed system is shown to outperform some of the popular noise-robust techniques at all SNRs. In the second phase, a new isolated-monosyllable database is prepared that contains both speech and GEMS data. ASR experiments conducted on clean speech showed that the GEMS-based feature, when fused with the MFCC features, decreases performance.
The reason for this unexpected result is found to be partly related to some of the GEMS data being severely noisy. Non-acoustic sensor noise exists in all GEMS data, but severe noise occurs rarely. A missing-data technique is proposed to alleviate the effects of severely noisy sensor data: the GEMS-based feature is treated as missing data when it is detected to be severely noisy. The combined features are shown to outperform the MFCC features alone for clean speech when the missing-data technique is applied.
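As an illustration of the SNR- and voicing-based missing-data rule described in this abstract, here is a minimal sketch in Python. The 10 dB threshold and the voiced/unvoiced condition come from the abstract itself; the function name, feature shapes, and mask representation are assumptions made for the example.

```python
import numpy as np

SNR_THRESHOLD_DB = 10.0  # threshold reported in the abstract

def select_features(mfcc, voicing, global_snr_db, voiced):
    """Return (features, missing_mask) for one frame.

    mfcc          : (D,) MFCC feature vector for the frame
    voicing       : scalar voicing feature
    global_snr_db : estimated global SNR of the utterance in dB
    voiced        : True if the frame is classified as voiced
    """
    if global_snr_db < SNR_THRESHOLD_DB and not voiced:
        # MFCCs deemed unreliable: mark them missing, keep only the voicing cue
        missing = np.ones(mfcc.shape, dtype=bool)
    else:
        missing = np.zeros(mfcc.shape, dtype=bool)
    features = np.append(mfcc, voicing)
    missing = np.append(missing, False)  # the voicing feature is always observed
    return features, missing
```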
44

Enhancement and Continuous Speech Recognition in Adverse Environments

Arcos Gordillo, Christian Dayan 13 June 2018
This thesis presents and examines innovative contributions to the front-end of automatic speech recognition (ASR) systems for speech enhancement and recognition in adverse environments. The first proposal applies a median filter to the probability distribution function of each cepstral coefficient before using a transformation to a distortion-invariant domain, adapting the noisy speech to the clean reference environment through histogram modification. Based on the results of psychophysical studies of the human auditory system, which build on the fact that sound reaching the ear is subjected to a process called Auditory Scene Analysis (ASA), examining how the auditory system separates the sound sources that make up the acoustic input, three new approaches, applied independently, are proposed for speech enhancement and recognition. The first estimates a new mask in the spectral domain using the short-time Fourier transform (STFT). The proposed mask applies the Local Binary Pattern (LBP) technique to the signal-to-noise ratio (SNR) of each time-frequency (T-F) unit to estimate an Ideal Neighborhood Mask (INM). Continuing with this approach, the thesis then proposes masking based on LBP-applied wavelet transforms to enhance the temporal spectra of the wavelet coefficients at high frequencies.
Finally, a new method for estimating the INM mask is proposed, using a supervised Deep Neural Network (DNN) learning algorithm to classify the T-F units obtained at the filter-bank outputs as belonging to the same sound source (predominantly speech or predominantly noise). Performance is compared with the traditional IBM and IRM mask techniques, both in terms of objective speech quality and word error rates. The results of the proposed techniques show clear improvements in noisy environments, significantly outperforming the conventional approaches.
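To make the neighbourhood-mask idea concrete, the following sketch derives a binary mask from a per-unit SNR map by voting over each unit's 3x3 neighbourhood. This is a crude stand-in under assumed parameters, not the thesis's actual LBP-based INM estimator; the function name, threshold, and vote count are hypothetical.

```python
import numpy as np

def lbp_neighborhood_mask(snr_db, snr_threshold_db=0.0, min_votes=5):
    """Estimate a binary mask from a (freq, time) SNR map.

    A T-F unit is kept if at least `min_votes` of the 9 cells in its 3x3
    neighbourhood (itself included) exceed the SNR threshold -- a crude
    stand-in for Ideal Neighborhood Mask (INM) estimation.
    """
    above = (snr_db > snr_threshold_db).astype(int)
    padded = np.pad(above, 1, mode="edge")
    votes = np.zeros_like(above)
    for df in (-1, 0, 1):           # accumulate the 3x3 neighbourhood
        for dt in (-1, 0, 1):
            votes += padded[1 + df : padded.shape[0] - 1 + df,
                            1 + dt : padded.shape[1] - 1 + dt]
    return (votes >= min_votes).astype(float)
```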
45

GCC-NMF: low latency real-time speech separation and enhancement

Wood, Sean January 2017
The cocktail party phenomenon refers to our remarkable ability to focus on a single voice in noisy environments. In this thesis, we design, implement, and evaluate a computational approach to solving this problem named GCC-NMF.
GCC-NMF combines unsupervised machine learning via non-negative matrix factorization (NMF) with the generalized cross-correlation (GCC) spatial localization method. Individual NMF dictionary atoms are attributed to the target speaker or background interference at each point in time based on their estimated spatial locations. We begin by studying GCC-NMF in the offline context, where entire 10-second mixtures are treated at once. We then develop an online, instantaneous variant of GCC-NMF and subsequently reduce its inherent algorithmic latency from 64 ms to 2 ms with an asymmetric short-time Fourier transform (STFT) windowing method. We show that latencies as low as 6 ms, within the range of tolerable delays for hearing aids, are possible on current hardware platforms. We evaluate the performance of GCC-NMF on publicly available data from the Signal Separation Evaluation Campaign (SiSEC), where objective separation quality is quantified using the signal-to-noise ratio (SNR)-based BSS Eval toolbox and the perceptually motivated PEASS toolbox. Though offline GCC-NMF underperformed other methods from the SiSEC challenge in terms of the SNR-based metrics, its PEASS scores were comparable with the best results. In the case of online GCC-NMF, while the SNR-based metrics again favoured other methods, GCC-NMF outperformed all but one of the previous approaches in terms of overall PEASS scores, achieving results comparable to the ideal binary mask (IBM) baseline. Furthermore, we show that GCC-NMF increases objective speech quality and the STOI and ESTOI speech intelligibility metrics over a wide range of input SNRs from -30 dB to 20 dB, with only minor reductions for input SNRs greater than 20 dB. GCC-NMF exhibits a number of desirable characteristics when compared to existing approaches. Unlike computational auditory scene analysis (CASA) methods, GCC-NMF requires no prior knowledge about the nature of the input signals, and may thus be suitable for source separation and denoising applications in a wide range of fields. In the case of online GCC-NMF, only a small amount of unlabeled data is required to pre-train the NMF dictionary. This results in much greater flexibility and significantly faster training compared to supervised approaches, including NMF- and deep neural network-based solutions that rely on large labeled datasets. Finally, in contrast with blind source separation (BSS) methods that rely on accumulated signal statistics, GCC-NMF operates independently on each time frame, allowing for low-latency, real-time applications.
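The following sketch illustrates the core GCC-NMF masking step under simplifying assumptions: each NMF atom is attributed to the target when its GCC-estimated time difference of arrival (TDOA) is near the target's TDOA, and a Wiener-like mask is built from the attributed reconstruction. All names, shapes, and the tolerance are assumptions for the example; the published method differs in detail.

```python
import numpy as np

def gcc_nmf_mask(V, W, H, atom_tdoas, target_tdoa, tol=1e-4):
    """Build a T-F mask from per-atom spatial attributions.

    V          : (F, T) mixture magnitude spectrogram
    W          : (F, K) NMF dictionary; H : (K, T) activations, with V ~ W @ H
    atom_tdoas : (K, T) per-atom TDOA estimates from GCC (seconds)
    target_tdoa: scalar target TDOA (seconds)
    """
    target_atoms = np.abs(atom_tdoas - target_tdoa) < tol   # (K, T) attribution
    target_energy = W @ (H * target_atoms)                  # target-only reconstruction
    total_energy = W @ H + 1e-12                            # avoid divide-by-zero
    return target_energy / total_energy                     # Wiener-like soft mask
```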
46

Cue estimation for vowel perception prediction in low signal-to-noise ratios

Burmeister, Brian 13 May 2009
This study investigates the signal processing required to allow for the evaluation of hearing perception prediction models at low signal-to-noise ratios (SNRs). It focuses on speech enhancement and the estimation of the cues from which speech may be recognized, specifically where these cues are estimated from severely degraded speech (SNRs ranging from -10 dB to -3 dB). This research has application in the field of cochlear implants (CIs), where a listener hears degraded speech due to several distortions introduced by the biophysical interface (e.g. frequency and amplitude discretization). These difficulties can also be interpreted as a loss in signal quality due to a specific type of noise. The ability to investigate perception in low-SNR conditions may have application in the development of CI signal processing algorithms to counter the effects of noise. In the military domain, a speech signal may be degraded intentionally by enemy forces or unintentionally owing to engine noise, for example. The ability to analyse and predict perception can be used for algorithm development to counter the unintentional or intentional interference, or to predict perception degradation if low-SNR conditions cannot be avoided. A previously documented perception model (Svirsky, 2000) is used to illustrate that the proposed signal processing steps can indeed be used to successfully estimate the various cues required by the perception model at SNRs as low as -10 dB. Dissertation (MEng), University of Pretoria, 2009.
47

Detection of Human Emotion from Noise Speech

Nallamilli, Sai Chandra Sekhar Reddy; Kandi, Nihanth January 2020
Detecting human emotion from speech is a challenging task. Factors such as intonation, pitch, and loudness vary from voice to voice, so knowing the exact pitch, intonation, and loudness of a given utterance is essential for detection. Some recordings exhibit high background noise, which affects the amplitude and pitch of the signal, so knowing the detailed properties of a speech signal is mandatory for detecting emotion. Detecting emotion from speech signals is a recent research field; one scenario where it has been applied is situations where human integrity and security are at risk. In this project we propose a set of features, based on the decomposition signals of the discrete wavelet transform, to characterize emotions such as anger, happiness, sadness, and desperation. The features are measured in three different conditions: (1) the original speech signals, (2) signals contaminated with noise or affected by the presence of a phone channel, and (3) signals obtained after processing with a speech enhancement algorithm. According to the results, when speech enhancement is applied, the detection of emotion in speech improves compared to the results obtained when the speech signal is highly contaminated with noise. Our objective is to use an artificial neural network, motivated by the brain, itself built from neural networks, being the most efficient machine for recognizing speech; artificial neural networks also offer clear advantages such as nonlinearity and high classification capability. Here we use a feedforward neural network, which is suitable for classification, with the sigmoid function as the activation function. Detection of human emotion from speech is achieved by training the neural network with features extracted from the speech. To achieve this, we need proper features, so background noise must first be removed with filters; the wavelet transform is the filtering technique used to remove the background noise and enhance the required features in the speech.
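As a minimal sketch of the feedforward classifier described above, assuming hypothetical layer sizes and randomly initialized weights (training is omitted), the forward pass with sigmoid activations might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer: wavelet-derived features -> hidden units -> emotion classes.
# The sizes below are assumptions for the example, not values from the thesis.
n_features, n_hidden, n_classes = 40, 32, 4
W1 = rng.normal(0, 0.1, (n_features, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_classes))

def forward(x):
    h = sigmoid(x @ W1)          # hidden-layer activations
    return sigmoid(h @ W2)       # per-class scores

x = rng.normal(size=n_features)  # stand-in for one utterance's wavelet features
scores = forward(x)
print("predicted emotion index:", int(np.argmax(scores)))
```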
48

Multi-channel Methods of Speech Enhancement

Zitka, Adam January 2008
This thesis deals with multi-channel methods of speech enhancement, which use several microphones to record signals. From the resulting mixtures, individual speakers can be separated and noise reduced, for example using neural networks. The task of separating speakers is known as the cocktail-party problem, and the main method for solving it is independent component analysis (ICA). The thesis first describes its theoretical foundations and presents the conditions and requirements for its application. ICA methods attempt to separate the mixtures by searching for the least Gaussian components of the signals, using mathematical properties such as kurtosis and entropy to analyse the independent components. Signals that were mixed artificially on a computer can be separated relatively well using, for example, the FastICA algorithm or ICA gradient ascent. The situation is more difficult for signals recorded in a real environment, because the separation of people speaking at the same time is affected by various other factors, such as the acoustic properties of the room, noise, delays, reflections from the walls, and the position and type of the microphones. The thesis presents an approach to independent component analysis in the frequency domain, which can also successfully separate recordings made in a real environment.
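As an illustration of the ICA gradient ascent mentioned above, this sketch rotates one unmixing vector to maximize the absolute kurtosis of its output on pre-whitened mixtures, recovering one independent component. The learning rate, iteration count, and names are assumptions made for the example.

```python
import numpy as np

def ica_gradient_ascent(X, lr=0.1, n_iter=200):
    """Recover one component from whitened mixtures X of shape (channels, samples)."""
    w = np.random.default_rng(0).normal(size=X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ X                               # current component estimate
        # gradient of |kurtosis|, with kurt(y) = E[y^4] - 3 for unit-variance y
        grad = 4.0 * np.sign(np.mean(y**4) - 3.0) * (X @ y**3) / X.shape[1]
        w += lr * grad
        w /= np.linalg.norm(w)                  # stay on the unit sphere
    return w @ X
```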
49

Deep learning methods for reverberant and noisy speech enhancement

Zhao, Yan 15 September 2020
No description available.
50

Convolutional and recurrent neural networks for real-time speech separation in the complex domain

Tan, Ke 16 September 2021
No description available.
