About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Classification of affect using novel voice and visual features

Kim, Jonathan Chongkang 07 January 2016 (has links)
Emotion adds an important element to the discussion of how information is conveyed and processed by humans; indeed, it plays an important role in the contextual understanding of messages. This research centers on investigating relevant features for affect classification, along with modeling the multimodal and multitemporal nature of emotion. The use of formant-based features for affect classification is explored. Since linear predictive coding (LPC) based formant estimators often have trouble modeling speech elements such as nasalized phonemes, and give inconsistent bandwidth estimates, a robust formant-tracking algorithm was introduced to better model the formant and spectral properties of speech. The algorithm uses Gaussian mixtures to estimate spectral parameters and refines the estimates with maximum a posteriori (MAP) adaptation. When the method was used for feature extraction in emotion classification, the results indicated that an improved formant-tracking method also improves emotion-classification accuracy. Spectral features contain rich information about expressivity and emotion; however, most recent work in affective computing has not progressed beyond analyzing mel-frequency cepstral coefficients (MFCCs) and their derivatives. A novel method for characterizing spectral peaks was introduced, based on multi-resolution sinusoidal transform coding (MRSTC). Because MRSTC represents spectral features with high precision, including high-frequency content not preserved in the MFCCs, additional resolving power was demonstrated. Facial expressions were analyzed using 53 motion-capture (MoCap) markers; statistical and regression measures of these markers were used for emotion classification along with the voice features. Since different modalities use different sampling frequencies and analysis window lengths, a novel classifier-fusion algorithm was introduced.
This algorithm integrates classifiers trained at various analysis lengths, as well as those obtained from other modalities. Classification accuracy was improved, statistically significantly, by the multimodal-multitemporal approach with the introduced classifier-fusion method. A practical application of these techniques was explored using social dyadic play between a child and an adult: the Multimodal Dyadic Behavior (MMDB) dataset was used to automatically predict young children's levels of engagement from linguistic and non-linguistic vocal cues along with visual cues, such as the direction of a child's gaze or a child's gestures. Although this and similar research is limited by inconsistent subjective boundaries and differing theoretical definitions of emotion, a significant step toward successful emotion classification has been demonstrated; key to this progress were the novel voice and visual features and the newly developed multimodal-multitemporal approach.
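The fusion step described in the abstract can be sketched as a reliability-weighted average of class posteriors. The weights, class count, and modality names below are illustrative assumptions, not the thesis's actual algorithm (which additionally handles differing sampling rates and analysis window lengths):

```python
import numpy as np

def fuse_posteriors(posteriors, weights):
    """Fuse per-classifier class-posterior vectors by a weighted average.

    posteriors: list of 1-D arrays, one per classifier (e.g. trained at
    different analysis window lengths or on different modalities); each
    array holds P(class | evidence) over the same ordered class set.
    weights: relative reliability of each classifier (e.g. its
    validation accuracy); they are renormalized to sum to one.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = sum(wi * np.asarray(p, dtype=float) for wi, p in zip(w, posteriors))
    return fused / fused.sum()  # renormalize to a proper distribution

# Three hypothetical classifiers (voice at a short window, voice at a
# long window, MoCap) scoring four emotion classes; the fused decision
# is the arg-max class.
p_voice_short = np.array([0.6, 0.2, 0.1, 0.1])
p_voice_long  = np.array([0.5, 0.3, 0.1, 0.1])
p_mocap       = np.array([0.2, 0.5, 0.2, 0.1])
fused = fuse_posteriors([p_voice_short, p_voice_long, p_mocap],
                        weights=[0.70, 0.65, 0.55])
predicted = int(np.argmax(fused))
```

Late fusion of this kind sidesteps the different frame rates of the modalities, since only utterance-level posteriors are combined.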
12

Redução de ruído em sinais de voz usando curvas especializadas de modificação dos coeficientes da transformada em co-seno. / Speech denoising by SoftSoft thresholding.

Antunes Júnior, Irineu 24 April 2006 (has links)
Many noise-reduction methods rely on representing the clean signal with a small number of transform coefficients: an enhanced signal is reconstructed by cancelling coefficients whose magnitudes fall below a suitably chosen threshold. One must assume that the clean signal has a sparse representation while the noise energy is spread uniformly over all coefficients. When applied to speech, these methods have two main drawbacks: the distortion introduced by eliminating small-magnitude coefficients, and spurious artifacts ("musical noise") produced by isolated noisy coefficients that randomly cross the threshold. For the transforms usually employed, the histogram of the speech coefficients has many important coefficients close to the origin. Based on this observation, we propose a new thresholding function designed specifically for denoising speech corrupted by additive white Gaussian noise (AWGN). This function, called SoftSoft, has two threshold levels: a lower level, adjusted to reduce speech distortion, and a higher level, adjusted to remove noise. The optimal threshold values are computed by minimizing an estimate of the mean square error (MSE): directly, when the clean signal is assumed known, or indirectly, using an interpolation function for the MSE, which yields a practical method. The SoftSoft function achieves a lower MSE than the well-known Soft and Hard thresholding operations, which employ only the higher threshold. Although the improvement in terms of MSE is modest, the gain in perceptual quality was confirmed both by a listener and by a perceptual distortion measure (the log-spectral distance, LSD).
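The two-threshold idea can be illustrated with a firm-thresholding-style shrinkage applied in the cosine-transform domain. This is a sketch under the assumption that SoftSoft behaves like the classical firm (semisoft) rule; the exact SoftSoft curve and the optimal threshold values are defined in the thesis, and the function names and threshold values here are illustrative:

```python
import numpy as np
from scipy.fftpack import dct, idct

def softsoft(c, t_low, t_high):
    """Two-threshold shrinkage applied to transform coefficients c.

    Coefficients below t_low are zeroed (assumed noise), coefficients
    above t_high are kept (assumed speech), and the band in between is
    shrunk linearly -- the 'firm thresholding' shape, used here as a
    stand-in for the thesis's SoftSoft curve.
    """
    a = np.abs(c)
    return np.where(a <= t_low, 0.0,
           np.where(a >= t_high, c,
                    np.sign(c) * t_high * (a - t_low) / (t_high - t_low)))

def denoise_frame(x, t_low, t_high):
    """Denoise one speech frame by shrinking its DCT coefficients."""
    C = dct(x, norm='ortho')
    return idct(softsoft(C, t_low, t_high), norm='ortho')
```

With a single threshold (`t_low == t_high` in the limit) this reduces to hard thresholding, which is why the abstract describes Soft and Hard thresholding as special cases with only the higher level.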
14

Μοντελοποίηση και ψηφιακή επεξεργασία προσωδιακών φαινομένων της ελληνικής γλώσσας με εφαρμογή στην σύνθεση ομιλίας / Modeling and signal processing of Greek-language prosodic events with application to speech synthesis

Ζέρβας, Παναγιώτης 04 February 2008 (has links)
The subject of this doctoral thesis is the study and modeling of the intonational phenomena of the Greek language, with applications to speech synthesis from text. Within this work, speech corpora with various levels of morphosyntactic and linguistic representation were developed, along with tools for processing and studying the prosodic factors that shape the information conveyed through spoken language. To manage and process these resources, a text-to-speech platform based on the concatenation of speech units was implemented. For the study and training of the machine-learning models, the GRToBI linguistic annotation of intonational phenomena in the recorded utterances was used.
15

Steuerung sprechernormalisierender Abbildungen durch künstliche neuronale Netzwerke / Control of speaker-normalizing mappings by artificial neural networks

Müller, Knut 01 November 2000 (has links)
No description available.
16

Fluency Features and Elicited Imitation as Oral Proficiency Measurement

Christensen, Carl V. 07 July 2012 (has links) (PDF)
The objective, automatic grading of oral language tests has been the subject of significant research in recent years, but several obstacles stand in the way. Recent work has suggested that a testing technique called elicited imitation (EI) can accurately approximate global oral proficiency. This testing methodology, however, does not capture some fundamental aspects of language, such as fluency. Other work has suggested another technique, simulated speech (SS), as a supplement to EI that can provide automated fluency metrics. In this work, I investigate a combination of fluency features extracted from SS testing and EI test scores to more accurately predict oral language proficiency. I also investigate the role of EI as an oral language test and the optimal method of extracting fluency features from SS sound files. Results demonstrate the ability of EI and SS together to more effectively predict hand-scored SS test-item scores. I finally discuss implications of this work for future automated oral-testing scenarios.
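One plausible way to combine EI scores with SS fluency metrics, as the abstract describes, is a linear model fit by least squares. The sketch below uses synthetic data and hypothetical feature names (an EI score, speech rate, pause count), not the study's actual dataset or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-examinee predictors: an EI test score plus two
# SS-derived fluency metrics.
n = 60
ei_score    = rng.uniform(0, 100, n)
speech_rate = rng.uniform(1.5, 5.0, n)          # syllables per second
pauses      = rng.integers(0, 20, n).astype(float)

# Synthetic "true" proficiency, used only to exercise the fit.
proficiency = (0.5 * ei_score + 6.0 * speech_rate - 0.8 * pauses
               + rng.normal(0, 2.0, n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), ei_score, speech_rate, pauses])
beta, *_ = np.linalg.lstsq(X, proficiency, rcond=None)
pred = X @ beta
r = np.corrcoef(pred, proficiency)[0, 1]
```

On real data the interesting question is how much the fluency columns raise the correlation over the EI column alone, which is the comparison the thesis pursues.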
17

Investigating Prompt Difficulty in an Automatically Scored Speaking Performance Assessment

Cox, Troy L. 14 March 2013 (has links) (PDF)
Speaking assessments for second-language learners have traditionally been expensive to administer because of the cost of rating the speech samples. To reduce this cost, many researchers are investigating automatic speech recognition (ASR) as a means to score examinee responses to open-ended prompts. This study examined the potential of using ASR timing fluency features to predict speech ratings, and the effect of prompt difficulty on that process. A speaking test with ten prompts representing five intended difficulty levels was administered to 201 subjects. The speech samples obtained were then (a) rated holistically by human raters, (b) rated analytically by human raters at the item level, and (c) scored automatically using PRAAT to calculate ten different ASR timing fluency features. The ratings and scores were analyzed with Rasch measurement to evaluate the functionality of the scales and the separation reliability of the examinees, raters, and items. Three ASR timing fluency features best predicted human speaking ratings: speech rate, mean syllables per run, and number of silent pauses. However, these features predicted only 31% of the score variance. The significance of this finding is that fluency features alone likely provide insufficient information to predict human-rated speaking ability accurately. Furthermore, neither the item difficulties calculated by the ASR nor those rated analytically by the human raters aligned with the intended item difficulty levels. The misalignment of the human raters with the intended difficulties led to a further analysis, which found that it was problematic for raters to use a holistic scale at the item level.
However, modifying the holistic scale to a scale that examined if the response to the prompt was at-level resulted in a significant correlation (r = .98, p < .01) between the item difficulties calculated analytically by the human raters and the intended difficulties. This result supports the hypothesis that item prompts are important when it comes to obtaining quality speech samples. As test developers seek to use ASR to score speaking assessments, caution is warranted to ensure that score differences are due to examinee ability and not the prompt composition of the test.
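Timing features of the kind named above (speech rate, silent pauses) can be approximated directly from a waveform's energy envelope. The sketch below is a crude stand-in for the PRAAT-based measures used in the study; the frame sizes and silence threshold are illustrative assumptions:

```python
import numpy as np

def timing_features(x, fs, frame_ms=25, hop_ms=10, silence_db=-35):
    """Crude timing-fluency features from a mono waveform.

    Frames the signal, marks frames whose energy falls silence_db below
    the loudest frame as silent, and reports total duration, phonation
    ratio, and the number of silent pauses (runs of silent frames).
    """
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    n_frames = 1 + max(0, (len(x) - frame) // hop)
    energy = np.array([np.mean(x[i * hop:i * hop + frame] ** 2)
                       for i in range(n_frames)])
    db = 10 * np.log10(energy + 1e-12)
    silent = db < db.max() + silence_db
    # A pause starts at each voiced-to-silent transition.
    n_pauses = int(np.sum(silent[1:] & ~silent[:-1]) + silent[0])
    return {
        "duration_s": len(x) / fs,
        "phonation_ratio": float(np.mean(~silent)),
        "n_silent_pauses": n_pauses,
    }
```

Syllable-based measures (speech rate, mean syllables per run) would additionally need a syllable-nucleus detector, which is where a tool such as PRAAT earns its keep.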
18

Análise cepstral baseada em diferentes famílias da transformada wavelet / Cepstral analysis based on different wavelet transform families

Sanchez, Fabrício Lopes 02 December 2008 (has links)
This work presents a comparative study of different wavelet families applied to the cepstral analysis of digital human-speech signals, with the specific goal of determining their pitch period, and then proposes a differential algorithm for that task, taking into account computationally important aspects such as performance, algorithmic complexity, and target platform, among others. The results obtained with the new wavelet-based technique are compared with the traditional approach based on the Fourier transform. The implementation was written in ANSI-standard C++ and tested under Windows XP Professional SP3, Windows Vista Business SP1, Mac OS X Leopard, and Linux Mandriva 10.
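The traditional Fourier-based baseline the thesis compares against can be sketched as follows: the real cepstrum of a voiced frame peaks at the quefrency equal to the pitch period. The frame length, search range, and synthetic test signal are illustrative choices:

```python
import numpy as np

def cepstral_pitch_period(x, fs, f_lo=60.0, f_hi=400.0):
    """Estimate the pitch period (in samples) via the real cepstrum.

    cepstrum = ifft(log|fft(x)|); for a voiced frame it peaks at the
    quefrency corresponding to the fundamental period. The search is
    restricted to periods between fs/f_hi and fs/f_lo samples.
    """
    spectrum = np.abs(np.fft.rfft(x))
    cep = np.fft.irfft(np.log(spectrum + 1e-12))
    q_lo = int(fs / f_hi)          # shortest admissible period
    q_hi = int(fs / f_lo)          # longest admissible period
    return q_lo + int(np.argmax(cep[q_lo:q_hi]))

# A synthetic voiced frame: 10 harmonics of a 100 Hz fundamental
# sampled at 8 kHz, so the true pitch period is 80 samples.
fs, f0 = 8000, 100
t = np.arange(2000) / fs
x = sum(np.cos(2 * np.pi * f0 * k * t) for k in range(1, 11))
period = cepstral_pitch_period(x, fs)
```

The wavelet-based variant studied in the thesis replaces the Fourier analysis stage with a wavelet decomposition; the peak-picking logic over the resulting cepstral representation is analogous.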
19

Spectro-Temporal Features For Robust Automatic Speech Recognition

Suryanarayana, Venkata K 01 1900 (has links)
The speech signal is inherently characterized by its variations in time, which are reflected as variations in frequency. These spectro-temporal changes are due to changes in the vocal tract, intonation, co-articulation, and the successive articulation of different phonetic sounds. In this thesis we seek to improve speech-recognition performance through better feature parameters derived from a non-stationary model of speech. One effective means of modeling a general non-stationary signal is the AM-FM model, which can be extended to speech through a sub-band analysis that mimics auditory analysis. We explore new methods for estimating AM and FM parameters from non-uniform samples of the signal. The non-uniform-sampling approach, together with adaptive window estimation, provides an important advantage because of its multi-resolution analysis. We develop several new methods based on zero-crossing (ZC) intervals, local-extrema intervals, and the signal derivative at ZCs as different sample measures of the signal, and explore their effectiveness for instantaneous-frequency (IF) and instantaneous-envelope (IE) estimation. For automatic speech recognition (ASR), we use auditory-motivated spectro-temporal information obtained through an auditory filter bank; signal parameters (features) are derived from the instantaneous energy in each band using the non-linear energy operator over a longer window. The temporal correlation present in the signal is exploited by applying the DCT and keeping its lower few coefficients to capture the energy trend in each band. The DCT coefficients from the different frequency bands are concatenated, and further spectral decorrelation is achieved through a Karhunen-Loève transform (KLT) of the concatenated feature vector.
Changes in the vocal tract are well captured by changes in the formant structure; to emphasize these details for ASR we define a temporal formant using the AM-FM decomposition of sub-band speech. Uniform wideband non-overlapping filters are used for the sub-band decomposition, and the temporal formant is defined from the AM-FM parameters of each sub-band signal. The temporal evolution of a formant is represented by the lower-order DCT coefficients of the temporal formant in each band, and its use for ASR is explored. To make ASR performance robust to noisy environmental conditions, we use a hybrid approach that enhances the speech signal using statistical models of speech and noise. The use of GMMs for statistical speech enhancement has been shown to be effective, and spectro-temporal features derived from the enhanced speech are found to provide further improvement to ASR performance.
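The "non-linear energy operator" mentioned above is commonly taken to be the Teager-Kaiser operator; assuming that choice, the per-band energy-trend feature can be sketched as below. The window length and DCT order are illustrative parameters, not the thesis's settings:

```python
import numpy as np

def teager_energy(x):
    """Teager-Kaiser energy operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1).

    For a single tone A*cos(w*n) it returns the constant A^2 * sin(w)^2,
    a quantity that tracks both amplitude and frequency.
    """
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def band_trend_features(band_signal, n_dct=5, win=200):
    """Lower DCT coefficients of the windowed log TEO energy of one band."""
    psi = teager_energy(band_signal)
    n_win = len(psi) // win
    e = psi[:n_win * win].reshape(n_win, win).mean(axis=1)
    loge = np.log(np.abs(e) + 1e-12)
    # DCT-II of the energy trajectory; keep the first n_dct coefficients,
    # which encode the slow trend of the band energy.
    k = np.arange(n_dct)[:, None]
    n = np.arange(n_win)[None, :]
    basis = np.cos(np.pi * k * (2 * n + 1) / (2 * n_win))
    return basis @ loge
```

Concatenating these vectors across the filter-bank bands, followed by a KLT, gives the decorrelated spectro-temporal feature vector the abstract describes.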
20

Speech Signal Classification Using Support Vector Machines

Sood, Gaurav 07 1900 (has links)
Hidden Markov models (HMMs) are undoubtedly the most widely employed core technique for automatic speech recognition (ASR). Nevertheless, we are still far from achieving high-performance ASR systems. Some alternative approaches, most of them based on artificial neural networks (ANNs), were proposed during the late eighties and early nineties: some tackled the ASR problem using predictive ANNs, while others proposed hybrid HMM/ANN systems. Despite some achievements, however, the dependency on hidden Markov models remains a fact. During the last decade, a new tool appeared in the field of machine learning that has proved able to cope with hard classification problems in several fields of application: the support vector machine (SVM). SVMs are effective discriminative classifiers with several outstanding characteristics: their solution is the one with maximum margin; they can deal with samples of very high dimensionality; and their convergence to the minimum of the associated cost function is guaranteed. In this work a novel approach based on probabilistic kernels in support vector machines has been attempted for speech-data classification. The classification accuracy of a support vector classifier depends on the kernel function used, which in turn depends on the data set at hand, and there is still no way to know a priori which kernel will give the best results. The kernel used in this work normalizes the time dimension by fitting a probability distribution over each utterance; this removes the variable time dimension inherent to speech signals and so facilitates the use of support vector machines, which act on static data only. The divergence between the probability distributions fitted over individual speech utterances is used to form the kernel matrix.
Vowel classification and isolated-word recognition (digit recognition) have been attempted, and the results are compared with state-of-the-art systems.
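The probabilistic-kernel idea can be sketched with a single diagonal Gaussian fitted per utterance and a symmetric KL divergence between them, exponentiated into a kernel matrix. These are illustrative simplifications: the thesis's actual distributions and divergence may differ, and a kernel of the form exp(-gamma * D) for a divergence D is not guaranteed to be positive semidefinite, a known caveat of this family of kernels:

```python
import numpy as np
from sklearn.svm import SVC

def fit_gaussian(frames):
    """Fit a diagonal Gaussian over the frames of one utterance."""
    return frames.mean(axis=0), frames.var(axis=0) + 1e-6

def sym_kl(p, q):
    """Symmetric KL divergence between two diagonal Gaussians."""
    (m1, v1), (m2, v2) = p, q
    kl12 = 0.5 * np.sum(np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1)
    kl21 = 0.5 * np.sum(np.log(v1 / v2) + (v2 + (m2 - m1) ** 2) / v1 - 1)
    return kl12 + kl21

def divergence_kernel(models_a, models_b, gamma=0.1):
    """K(i, j) = exp(-gamma * D_sym(model_i, model_j)): maps variable-
    length utterances to a fixed-size kernel matrix."""
    K = np.empty((len(models_a), len(models_b)))
    for i, a in enumerate(models_a):
        for j, b in enumerate(models_b):
            K[i, j] = np.exp(-gamma * sym_kl(a, b))
    return K

# Synthetic demo: variable-length "utterances" from two classes that
# differ in their frame distribution.
rng = np.random.default_rng(1)
utts, labels = [], []
for label, mean in [(0, 0.0), (1, 1.5)]:
    for _ in range(10):
        n = rng.integers(30, 80)                 # variable length
        utts.append(rng.normal(mean, 1.0, size=(n, 4)))
        labels.append(label)
models = [fit_gaussian(u) for u in utts]
K = divergence_kernel(models, models)
clf = SVC(kernel="precomputed").fit(K, labels)
train_acc = clf.score(K, labels)
```

The point of the construction is that the SVM never sees the variable-length frame sequences, only the fixed-size divergence-based kernel matrix.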
