1

Neural Enhancement Strategies for Robust Speech Processing

Nawar, Mohamed Nabih Ali Mohamed (10 March 2023)
In real-world scenarios, speech signals are often contaminated with environmental noise and reverberation, which degrade speech quality and intelligibility. Lately, the development of deep learning algorithms has marked milestones in speech-based research fields, e.g. speech recognition and spoken language understanding. As one of the crucial topics in the speech processing research area, speech enhancement aims to restore clean speech signals from noisy signals. In recent decades, many conventional statistical speech enhancement algorithms have been proposed; however, their performance is limited in non-stationary noisy conditions. The rise of deep learning-based approaches for speech enhancement has led to revolutionary advances in performance. In this context, speech enhancement is formulated as a supervised learning problem, which tackles the open challenges left by conventional speech enhancement approaches. In general, deep learning speech enhancement approaches are categorized into frequency-domain and time-domain approaches. In particular, we experiment with the Wave-U-Net model, a strong time-domain approach for speech enhancement.

First, we attempt to improve the performance of back-end speech-based classification tasks in noisy conditions. In detail, we propose a pipeline that integrates the Wave-U-Net (later modified into the Dilated Encoder Wave-U-Net) as a pre-processing stage for noise elimination with a temporal convolutional network (TCN) for the intent classification task. Both models are trained independently of each other. Experimental results show that the modified Wave-U-Net model not only improves speech quality and intelligibility, measured in terms of the PESQ and STOI metrics, but also improves back-end classification accuracy. However, it was observed that this disjoint training approach often introduces signal distortion in the output of the speech enhancement module, which can deteriorate back-end performance. Motivated by this, we introduce a set of fully time-domain joint-training pipelines that combine the Wave-U-Net model with the TCN intent classifier; the architectures differ in how the front-end and back-end are interconnected. All architectures are trained with a loss function that combines the MSE loss for the front-end with the cross-entropy loss for the classification task (see the sketch after this abstract). Based on our observations, the joint-training architecture that balances both components' contributions equally yields better classification accuracy.

Lately, the release of large-scale pre-trained feature extraction models has considerably simplified the development of speech classification and recognition algorithms. However, environmental noise and reverberation still negatively affect performance, making robustness in noisy conditions mandatory in real-world applications. One way to mitigate the noise effect is to integrate a speech enhancement front-end that removes artifacts from the desired speech signals. Unlike state-of-the-art enhancement approaches that operate either on the speech spectrogram or directly on time-domain signals, we study how enhancement can be applied directly to speech embeddings extracted with the Wav2Vec and WavLM models. We investigate a variety of training approaches, considering different flavors of joint and disjoint training of the speech enhancement front-end and of the classification/recognition back-end. We perform exhaustive experiments on the Fluent Speech Commands and Google Speech Commands datasets, contaminated with noises from the Microsoft Scalable Noisy Speech Dataset, as well as on LibriSpeech, contaminated with noises from the MUSAN dataset, considering the intent classification, keyword spotting, and speech recognition tasks, respectively. Results show that enhancing the speech embedding is a viable and computationally effective approach, and provide insights about the most promising training approaches.
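The joint-training objective described in this abstract couples a waveform reconstruction term with a classification term. Below is a minimal PyTorch sketch of such a combined loss; the weighting factor alpha, the module names, and the commented usage are illustrative assumptions, not the thesis's exact implementation.

```python
import torch
import torch.nn.functional as F

def joint_loss(enhanced_wav, clean_wav, intent_logits, intent_labels, alpha=0.5):
    """Combined joint-training loss: MSE on the enhanced waveform (front-end)
    plus cross-entropy on the intent predictions (back-end).
    alpha balances the two terms; alpha=0.5 weighs them equally
    (an assumption for illustration, not the thesis's exact setting)."""
    enhancement_loss = F.mse_loss(enhanced_wav, clean_wav)
    classification_loss = F.cross_entropy(intent_logits, intent_labels)
    return alpha * enhancement_loss + (1.0 - alpha) * classification_loss

# Hypothetical usage with stand-in front-end/back-end modules:
# enhanced = wave_u_net(noisy_wav)        # time-domain enhancement front-end
# logits   = tcn_classifier(enhanced)     # TCN intent-classification back-end
# loss     = joint_loss(enhanced, clean_wav, logits, labels)
# loss.backward()
```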
2

Structuration de contenus audio-visuel pour le résumé automatique / Audio-visual content structuring for automatic summarization

Rouvier, Mickaël (05 December 2011)
In recent years, with the advent of sites such as YouTube, Dailymotion, and Blip TV, the number of videos available on the Internet has increased considerably. The size of these collections and their lack of structure limit content-based access to the data. Automatic summarization is one way to produce digests that extract the essential content and present it as concisely as possible. In this work, we focus on extractive video summarization methods based on audio analysis. We address the scientific problems related to this objective: content extraction, document structuring, the definition and estimation of interest functions, and summary composition algorithms. On each of these aspects, we make concrete proposals that are evaluated.

For content extraction, we present a fast spoken-term detection method. Its main novelty is that it relies on building a detector tailored to the searched terms. We show that this self-organizing detector strategy improves system robustness, which significantly exceeds that of the classical approach based on automatic speech recognition. We then present a filtering method based on Gaussian mixture models and factor analysis, as recently used in speaker identification. The originality of our contribution is the use of factor analysis decomposition for the supervised estimation of filters operating in the cepstral domain.

We then address the structuring of video collections. We show that using different levels of representation and different sources of information makes it possible to characterize the editorial style of a video based mainly on analysis of the audio source, whereas most previous work suggested that the bulk of genre-related information was contained in the image. Another contribution concerns identifying the type of discourse; we propose low-level models for detecting spontaneous speech that significantly improve the state of the art for this kind of approach.

The third focus of this work concerns the summary itself. We first try to define what a synthetic view is: is it what characterizes the document as a whole, or what a user would remember (for example, a moving or funny moment)? This question is discussed, and we make concrete proposals for defining interest functions corresponding to three criteria: salience, expressiveness, and significance. We then propose an algorithm for finding the summary of maximum interest, derived from the one introduced in previous work and based on integer linear programming.
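The final selection step above casts summary composition as an integer linear program: choose the subset of segments that maximizes total interest under a duration budget. The sketch below expresses that formulation with the PuLP library; the scores, durations, budget, and the choice of solver library are illustrative assumptions, not data or tooling from the thesis.

```python
import pulp

# Hypothetical per-segment interest scores (e.g., combining salience,
# expressiveness, and significance) and segment durations in seconds.
scores    = [0.9, 0.4, 0.7, 0.2, 0.8]
durations = [12, 8, 15, 5, 10]
budget    = 30  # maximum summary length in seconds (illustrative)

prob = pulp.LpProblem("summary_selection", pulp.LpMaximize)
select = [pulp.LpVariable(f"seg_{i}", cat=pulp.LpBinary) for i in range(len(scores))]

# Objective: maximize the total interest of the selected segments.
prob += pulp.lpSum(s * x for s, x in zip(scores, select))
# Constraint: the selected segments must fit within the duration budget.
prob += pulp.lpSum(d * x for d, x in zip(durations, select)) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [i for i, x in enumerate(select) if pulp.value(x) == 1]
print("segments selected for the summary:", chosen)
```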
3

Speech Classification using Acoustic embedding and Large Language Models Applied on Alzheimer’s Disease Prediction Task

Kheirkhahzadeh, Maryam (January 2023)
Alzheimer’s disease is a neurodegenerative disease that leads to dementia. It can begin silently in the early stages and progress over the years to a severe and incurable stage. Language impairment often emerges as one of the early symptoms and can eventually progress to complete mutism in the advanced stages of the disease. As a result, speech processing is a promising and non-invasive approach for detecting Alzheimer’s disease in its early stages. Our objective is to compare, in a machine learning-based approach, the informativeness of linguistic embeddings derived from large language models and of pre-trained acoustic embeddings extracted using wav2vec2.0. To the best of our knowledge, this is the first time that fusing GPT-3 text embeddings and wav2vec2.0 acoustic embeddings has been explored for Alzheimer’s disease classification. In addition, we utilize a combination of two large language models, GPT-3 and BERT, for the first time on this specific task. By evaluating our method on two datasets, in English and in Swedish, we also highlight the differences between these two languages.
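The fusion idea described above concatenates a text embedding with an acoustic embedding before classification. The following is a toy scikit-learn sketch of early fusion; the embedding dimensions, the random stand-in data, and the logistic-regression back-end are assumptions for illustration and do not reflect the thesis's actual models or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects = 200

# Stand-ins for per-subject embeddings: in practice these would come from a
# GPT-3 text embedding of the transcript and a wav2vec2.0 acoustic embedding
# of the recording; here they are random vectors for illustration only.
text_emb     = rng.normal(size=(n_subjects, 1536))   # assumed text-embedding size
acoustic_emb = rng.normal(size=(n_subjects, 768))    # assumed wav2vec2.0 size
labels       = rng.integers(0, 2, size=n_subjects)   # 0 = control, 1 = Alzheimer's

# Early fusion: concatenate the two embeddings and train a simple classifier.
fused = np.concatenate([text_emb, acoustic_emb], axis=1)
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, fused, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```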
