1.
Robust Formant Tracking for Continuous Speech with Speaker Variability / Robust Formant Tracking for Continuous Speech. Mustafa, Kamran, 12 1900.
Exposure to loud sounds can damage the inner ear, degrading the neural response to speech and to formant frequencies in particular, which may reduce speech intelligibility. An amplification scheme for hearing aids, called Contrast Enhanced Frequency Shaping (CEFS), may improve speech perception for ears with sound-induced hearing damage. CEFS takes into account across-frequency distortions introduced by the impaired ear and requires accurate, robust formant-frequency estimates to allow dynamic, speech-spectrum-dependent amplification of speech in hearing aids. Several algorithms have been developed for extracting formant information from speech signals; however, most are either not robust in real-life noise environments or not suitable for real-time implementation. The algorithm proposed in this thesis extracts formants from continuous speech by using a time-varying adaptive filterbank to track and estimate individual formant frequencies. The formant tracker incorporates an adaptive voicing detector and a gender detector for robust formant extraction from continuous speech, for both male and female speakers, in the presence of background noise. Thorough testing of the algorithm on various speech sentences has shown promising results over a wide range of SNRs for several types of background noise, such as AWGN, single and multiple competing background speakers, and various other environmental sounds. / Thesis / Master of Applied Science (MASc)
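For orientation, here is a minimal sketch of band-limited formant tracking in this spirit. It is not the thesis's algorithm (no adaptive filterbank, voicing detector, or gender detector): each frame, it simply picks the strongest spectral peak inside a band centred on the running estimate and smooths toward it. All function names and parameter values are illustrative.

```python
import numpy as np

def track_formant(x, fs, f_init, half_bw=400.0, alpha=0.3,
                  frame_len=512, hop=256):
    """Track one formant with an adaptive band-limited peak picker.

    Each frame: find the strongest spectral peak inside a band centred
    on the current estimate, then smooth the estimate toward that peak.
    """
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    f_est = f_init
    track = []
    for start in range(0, len(x) - frame_len, hop):
        frame = x[start:start + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))
        band = (freqs > f_est - half_bw) & (freqs < f_est + half_bw)
        peak = freqs[band][np.argmax(mag[band])]
        f_est = (1 - alpha) * f_est + alpha * peak  # exponential smoothing
        track.append(f_est)
    return np.array(track)

# Example: F1, F2, F3 tracked independently from typical initial values,
# e.g. f1 = track_formant(speech, 16000, f_init=500.0)
```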
2.
Examining Listeners' Ability to Perceive Vowel-Inherent Spectral Changes. Chiddenton, Kathleen, 22 March 2013.
One family of theories of vowel perception holds that onset and offset formant frequencies are important for identification and that the shape of the transition between them is not otherwise perceptually important. The present study determined just-noticeable differences in deviations from linear formant trajectories. Diphthong-like stimuli were manipulated by inserting a point of inflection into the otherwise linear transition. Several parameters were manipulated, including vowel duration, the location of the inflection point in time, and fundamental frequency. Data from the first experiment indicate that listeners are largely insensitive to deviations from linearity of the formant trajectory, but that sufficiently large deviations can be detected; the size of these deviations appears to depend on the range of onset-to-offset formant frequencies. However, a second experiment, in which only the first half of each stimulus was presented (thereby changing the frequency range of the stimuli), gave different results. Results from these experiments are presented along with several hypotheses.
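For concreteness, a small sketch of how such a manipulated trajectory can be constructed: a straight onset-to-offset ramp with an inflection point of adjustable size and position. The function and its parameters are illustrative, not the study's actual synthesis procedure.

```python
import numpy as np

def formant_trajectory(f_on, f_off, dur, fs, t_infl, delta):
    """Piecewise-linear formant track: a straight onset-to-offset ramp
    with a point of inflection displaced by `delta` Hz at time `t_infl`."""
    t = np.arange(int(dur * fs)) / fs
    linear = f_on + (f_off - f_on) * t / dur            # the reference ramp
    f_mid = f_on + (f_off - f_on) * t_infl / dur + delta
    track = np.where(
        t < t_infl,
        f_on + (f_mid - f_on) * t / t_infl,             # ramp to inflection
        f_mid + (f_off - f_mid) * (t - t_infl) / (dur - t_infl),
    )
    return t, linear, track

# e.g. a 250 ms /ai/-like F2 glide from 1200 to 2100 Hz with a 60 Hz bump
t, ref, dev = formant_trajectory(1200, 2100, 0.25, 16000, 0.1, 60.0)
```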
3.
Zeros of the z-Transform (ZZT) Representation and Chirp Group Delay Processing for the Analysis of Source and Filter Characteristics of Speech Signals. Bozkurt, Baris, 27 October 2005.
This study proposes a new spectral representation called the Zeros of Z-Transform (ZZT), which is an all-zero representation of the z-transform of the signal. In addition, new chirp group delay processing techniques are developed for analysis of resonances of a signal. The combination of the ZZT representation with the chirp group delay processing algorithms provides a useful domain to study resonance characteristics of source and filter components of speech. Using the two representations, effective algorithms are developed for: source-tract decomposition of speech, glottal flow parameter estimation, formant tracking and feature extraction for speech recognition.
The ZZT representation is mainly important for theoretical studies: studying the ZZT of a signal is essential for developing effective chirp group delay processing methods. Therefore, the ZZT representation of the source-filter model of speech is studied first, to provide a theoretical background. We confirm through the ZZT representation that the anti-causality of the glottal flow signal introduces mixed-phase characteristics into speech signals. The ZZT of windowed speech signals is also studied, since windowing cannot be avoided in practical signal processing algorithms and its effect on the ZZT representation is drastic. We show that separate patterns exist in the ZZT representations of windowed speech signals for the glottal flow and the vocal tract contributions, and a decomposition method for source-tract separation is developed based on these patterns.
We define the chirp group delay as the group delay calculated on a circle other than the unit circle in the z-plane. The need for this comes from the fact that group delay spectra are often very noisy and cannot easily be processed for formant tracking (the reasons are explained through the ZZT representation). In this thesis, we propose methods that avoid such problems by modifying the ZZT of a signal and then computing the chirp group delay spectrum. New algorithms based on processing of the chirp group delay spectrum are developed for formant tracking and feature estimation for speech recognition. The proposed algorithms are compared with state-of-the-art techniques; equivalent or better performance is obtained for all of them.
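In concrete terms, the ZZT is the set of roots of the polynomial whose coefficients are the (windowed) signal samples, and a chirp group delay can be computed by exponentially weighting the samples before applying the usual group-delay formula. A minimal numerical sketch under those definitions (not Bozkurt's full algorithms; it assumes X(z) has no zeros on the chosen evaluation circle, which is precisely what the chirp radius is meant to ensure):

```python
import numpy as np

def zzt(x):
    """Zeros of the Z-Transform of x: factoring z^{-(N-1)} out of
    X(z) = sum_n x[n] z^{-n} leaves the polynomial with coefficients
    x[0], ..., x[N-1] (x[0] leading), whose roots are the ZZT."""
    return np.roots(x)

def chirp_group_delay(x, radius, nfft=1024):
    """Group delay evaluated on |z| = radius instead of the unit circle.

    Uses the identity tau(w) = Re{ DFT(n * xw[n]) / DFT(xw[n]) } applied
    to the exponentially weighted signal xw[n] = x[n] * radius^{-n}.
    Assumes no zeros of X on the evaluation circle (no division by 0)."""
    n = np.arange(len(x))
    xw = x * radius ** (-n.astype(float))
    X = np.fft.rfft(xw, nfft)
    Xn = np.fft.rfft(n * xw, nfft)
    return np.real(Xn / X)
```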
The theoretical parts of the thesis further discuss a mixed-phase model for speech and phase processing problems in detail.
4.
Dynamic System Modeling and State Estimation for Speech Signal. Ozbek, Ibrahim Yucel, 01 May 2010.
This thesis presents a comprehensive framework for improving current formant tracking and audio (and/or visual)-to-articulatory inversion algorithms.
The possible improvements are summarized as follows:
The first part of the thesis investigates the problem of formant frequency estimation when the number of formants to be estimated is fixed or variable.
The fixed-number formant tracking method is based on the assumption that the number of formant frequencies is fixed along the speech utterance. The proposed algorithm combines a dynamic programming algorithm with Kalman filtering/smoothing: the speech signal is divided into voiced and unvoiced segments, formant candidates are associated via the dynamic programming algorithm for each voiced and unvoiced part separately, and individual adaptive Kalman filtering/smoothing performs the formant frequency estimation. The performance of the proposed algorithm is compared with algorithms from the literature.
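As a concrete picture of the Kalman stage alone, here is a minimal scalar random-walk filter smoothing a sequence of per-frame formant candidates; the dynamic-programming association and the adaptive noise estimation of the thesis are omitted, and the noise variances below are illustrative guesses.

```python
import numpy as np

def kalman_track(candidates, q=200.0**2, r=400.0**2, f0=500.0, p0=1e6):
    """Scalar Kalman filter with a random-walk model f[k] = f[k-1] + w,
    measured once per frame by a noisy formant candidate (Hz)."""
    f, p = f0, p0
    out = []
    for z in candidates:
        p = p + q               # predict: variance grows by process noise
        k = p / (p + r)         # Kalman gain
        f = f + k * (z - f)     # update toward the measured candidate
        p = (1.0 - k) * p
        out.append(f)
    return np.array(out)

# e.g. smooth a jittery F1 candidate sequence (Hz)
smooth = kalman_track(np.array([510., 620., 480., 700., 530.]))
```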
The variable-number formant tracking method considers only those formant frequencies that are visible in the spectrogram. The number of formant frequencies is therefore not fixed and can change along the speech waveform, so the number of formants to track must itself be estimated. For this purpose, the proposed algorithm uses extra logic (a formant-track start/end decision unit), and the measurement update of each individual formant trajectory is handled by a Kalman filter. The performance of the proposed algorithm is illustrated with examples.
The second part of this thesis is concerned with improving audiovisual-to-articulatory inversion performance. The related studies can be examined in two parts: Gaussian mixture model (GMM) regression-based inversion and jump Markov linear system (JMLS)-based inversion.
GMM regression-based inversion models the audio (and/or visual) and articulatory data as a joint Gaussian mixture model; the conditional expectation of this distribution gives the desired articulatory estimate. Within this method, we examine the usefulness of combining various acoustic features and the effectiveness of various fusion techniques for audiovisual features, and we propose dynamic smoothing methods for the articulatory trajectories. The performance of the proposed algorithm is illustrated and compared with conventional algorithms.
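The regression step has a closed form: conditioned on the acoustic observation, each mixture component contributes a linear regression, weighted by its posterior responsibility. A minimal sketch assuming the GMM parameters are already estimated (all names illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_regression(x, weights, means, covs, dx):
    """E[y | x] for a joint GMM over z = [x; y], where means[k] and
    covs[k] are partitioned as mu = [mu_x; mu_y] and
    Sigma = [[Sxx, Sxy], [Syx, Syy]], with x of dimension dx."""
    post, cond = [], []
    for w, mu, S in zip(weights, means, covs):
        mu_x, mu_y = mu[:dx], mu[dx:]
        Sxx, Sxy = S[:dx, :dx], S[:dx, dx:]
        # responsibility of this component for the observed x
        post.append(w * multivariate_normal.pdf(x, mu_x, Sxx))
        # per-component linear regression: mu_y + Syx Sxx^{-1} (x - mu_x)
        cond.append(mu_y + Sxy.T @ np.linalg.solve(Sxx, x - mu_x))
    post = np.array(post) / np.sum(post)
    return sum(p * c for p, c in zip(post, cond))
```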
JMLS inversion ties the acoustic (and/or visual) spaces to the articulatory space via multiple state-space representations. In this way, the articulatory inversion problem becomes a state estimation problem in which the audiovisual data are the measurements and the articulatory positions are the state variables. The proposed inversion method first learns the parameter set of the state-space model via an expectation-maximization (EM) based algorithm, and state estimation is then handled by an interacting multiple model (IMM) filter/smoother.
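For intuition, here is one recursion of a scalar IMM filter stripped to its essentials; this is a generic textbook sketch with trivial random-walk modes, not the learned JMLS of the thesis.

```python
import numpy as np

def imm_step(mu, x, p, z, Pi, q, r):
    """One cycle of a scalar interacting-multiple-model (IMM) filter.

    mu : (M,) mode probabilities     x, p : (M,) per-mode mean/variance
    z  : scalar measurement          Pi   : (M, M) mode transition matrix
    q  : (M,) per-mode process noise r    : measurement noise variance
    All modes share the trivial dynamics x[k] = x[k-1] + w."""
    M = len(mu)
    c = Pi.T @ mu                        # predicted mode probabilities
    x_new, p_new, lik = np.empty(M), np.empty(M), np.empty(M)
    for j in range(M):
        w = Pi[:, j] * mu / c[j]         # mixing weights for mode j
        x0 = w @ x                       # mixed initial mean
        p0 = w @ (p + (x - x0) ** 2)     # mixed initial variance
        pp = p0 + q[j]                   # predict
        s = pp + r                       # innovation variance
        k = pp / s                       # Kalman gain
        x_new[j] = x0 + k * (z - x0)     # per-mode update
        p_new[j] = (1 - k) * pp
        lik[j] = np.exp(-0.5 * (z - x0) ** 2 / s) / np.sqrt(2 * np.pi * s)
    mu_new = c * lik
    mu_new /= mu_new.sum()               # updated mode probabilities
    x_hat = mu_new @ x_new               # combined output estimate
    return mu_new, x_new, p_new, x_hat
```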
5.
Suivi de formants par analyse en multirésolution / Formant Tracking by Multiresolution Analysis. Jemâa, Imen, 19 February 2013.
The research presented in this thesis aims to optimize the performance of formant tracking algorithms. We began by analyzing the existing techniques used for automatic formant tracking. This analysis showed that automatic formant estimation remains difficult despite the use of complex techniques. Given the lack of reference databases for Arabic, we developed a phonetically balanced Arabic corpus with manual phonetic and formant labeling. We then present two new formant tracking approaches: the first estimates Fourier ridges (local maxima of the spectrogram) or wavelet ridges (local maxima of the scalogram), using as a tracking constraint the center of gravity of the set of candidate frequencies for each formant; the second is based on dynamic programming combined with Kalman filtering. Finally, we conducted an exploratory study using the manually labeled corpus as a reference to evaluate the two new approaches quantitatively against other automatic formant tracking methods. We tested the first approach, based on wavelet ridge detection with the center-of-gravity constraint, on synthetic signals and then on real signals from our corpus, using three types of complex wavelets (CMOR, SHAN, and FBSP). These tests show that the formant tracks and the scalogram resolution given by the CMOR and FBSP wavelets are better than those of the SHAN wavelet. To evaluate the two approaches quantitatively, we computed the mean absolute difference and the normalized standard deviation. We ran several tests with different male and female speakers on long and short vowels and on continuous speech, taking the labeled signals from our corpus as the reference. The tracking results were then compared with those of the Fourier ridge method with the center-of-gravity constraint, the LPC analysis combined with filter banks of Mustafa Kamran, and the LPC analysis in the Praat software. For the vowels /a/ and /A/, tracking by the CMOR wavelet method is globally better than the Praat and Fourier methods: it yields a relevant track of F1, F2, and F3 that is closest to the reference. The Fourier and wavelet results are very close in some cases, since both show fewer errors than Praat for the five male speakers; this does not hold for the other vowels, where errors appear sometimes on F2 and sometimes on F3. For continuous speech, in the case of male speakers, the two new approaches perform notably better than the LPC method of Mustafa Kamran and than Praat, even though they often show some errors on F3; they are also very close to the Fourier ridge method with the center-of-gravity constraint. The results for female speakers confirm the trend observed for the male speakers.
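A minimal sketch of the ridge-plus-center-of-gravity idea on a Fourier spectrogram (the wavelet variant would replace the spectrogram with a CMOR/FBSP scalogram); the search band and analysis parameters are illustrative.

```python
import numpy as np
from scipy.signal import spectrogram, argrelmax

def cog_formant_track(x, fs, band=(200.0, 1000.0)):
    """Per-frame formant estimate: detect spectral ridges (local maxima)
    inside a search band, then take the magnitude-weighted center of
    gravity of the candidate frequencies."""
    f, t, S = spectrogram(x, fs, nperseg=512, noverlap=256)
    sel = (f >= band[0]) & (f <= band[1])
    fb, Sb = f[sel], S[sel, :]
    track = []
    for frame in Sb.T:
        peaks = argrelmax(frame)[0]            # ridge candidates
        if len(peaks) == 0:
            peaks = np.array([np.argmax(frame)])
        w = frame[peaks]
        track.append(np.sum(w * fb[peaks]) / np.sum(w))  # center of gravity
    return t, np.array(track)
```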
6.
Singing-Driven Interfaces for Sound Synthesizers. Janer Mestres, Jordi, 14 March 2008.
Digital musical instruments are usually decomposed into two main constituent parts: a user interface and a sound synthesis engine. The user interface is popularly referred to as a musical controller, and its design is the primary objective of this dissertation. Under the title of singing-driven interfaces, we aim to design systems that allow controlling the synthesis of musical instrument sounds with the singing voice. This dissertation searches for the relationships between the voice and the sound of musical instruments by addressing both the description of the voice signal and the mapping strategies for a meaningful control of the synthesized sound. We propose two different approaches: one for controlling a singing voice synthesizer, and another for controlling the synthesis of instrumental sounds. For the latter, we suggest representing the voice signal as vocal gestures, contributing several voice analysis methods. To demonstrate the obtained results, we developed two real-time prototypes.
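To make the mapping idea concrete, here is a toy sketch, not Janer's system: frame-wise pitch (autocorrelation) and loudness (RMS) are extracted from the voice and mapped to the frequency and amplitude of a bare sine oscillator. All names and parameters are illustrative.

```python
import numpy as np

def voice_to_controls(x, fs, frame=1024, hop=512, fmin=80.0, fmax=800.0):
    """Extract a (pitch, energy) pair per frame from a voice signal."""
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    controls = []
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame]
        seg = seg - seg.mean()
        ac = np.correlate(seg, seg, mode="full")[frame - 1:]
        lag = lag_min + np.argmax(ac[lag_min:lag_max])  # pitch period
        freq = fs / lag
        amp = np.sqrt(np.mean(seg ** 2))                # RMS loudness
        controls.append((freq, amp))
    return controls

def synthesize(controls, fs, hop=512):
    """Drive a bare sine oscillator with the extracted controls."""
    phase, out = 0.0, []
    for freq, amp in controls:
        step = 2 * np.pi * freq / fs
        n = np.arange(hop)
        out.append(amp * np.sin(phase + step * n))
        phase = (phase + step * hop) % (2 * np.pi)  # keep phase continuous
    return np.concatenate(out)
```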