1

Automatic Speech Recognition for low-resource languages using Wav2Vec2 : Modern Standard Arabic (MSA) as an example of a low-resource language

Zouhair, Taha January 2021
The need for fully automatic translation at DigitalTolk, a Stockholm-based company providing translation services, leads to exploring Automatic Speech Recognition as a first step for Modern Standard Arabic (MSA). Facebook AI recently released the second version of its Wav2Vec model, dubbed Wav2Vec 2.0, which uses deep neural networks and provides several English pretrained models along with a multilingual model trained on 53 different languages, referred to as the Cross-Lingual Speech Representation (XLSR-53). The small English pretrained model and XLSR-53 are tested on Arabic data from Mozilla Common Voice, and their results are discussed. In this research, the small model did not yield any usable results and may have needed more unlabelled data to train, whereas the large model proved successful at predicting the Arabic audio recordings, achieving a Word Error Rate of 24.40%, an unprecedented result. The small model turned out to be unsuitable for training, especially on languages other than English for which unlabelled data is scarce. The large model, on the other hand, gave very promising results despite the small amount of data and should be the model of choice for any future training on low-resource languages such as Arabic.
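
As an illustration of the inference step such a fine-tuned model performs, the following Python sketch runs greedy CTC decoding with the Hugging Face transformers API; the checkpoint id is hypothetical, standing in for whichever fine-tuned Arabic model is used.

```python
# A hedged sketch of CTC inference with a fine-tuned wav2vec 2.0 model.
# "some-org/wav2vec2-xlsr-arabic" is a hypothetical checkpoint id.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "some-org/wav2vec2-xlsr-arabic"  # hypothetical fine-tuned model
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a clip and resample to the 16 kHz rate wav2vec 2.0 expects.
waveform, sample_rate = torchaudio.load("clip.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000,
                   return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```
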
2

Improving Speech Recognition for Arabic language Using Low Amounts of Labeled Data

Bakheet, Mohammed January 2021
The importance of Automatic Speech Recognition (ASR) systems, whose job is to generate text from audio, is increasing as the number of applications of these systems rapidly grows. Training ASR systems, however, is difficult and rather tedious, largely because of a lack of training data: ASRs require huge amounts of annotated training data containing audio files and the corresponding, accurately written transcripts. Such annotated (labeled) data is very difficult to find for most languages, and producing it usually requires manual annotation, which, apart from its monetary cost, is error-prone. A purely supervised training task is therefore impractical. Arabic is one of the languages without an abundance of labeled data, which makes its ASR accuracy very low compared to resource-rich languages such as English, French, or Spanish. In this research, we take advantage of unlabeled voice data by learning general data representations from unlabeled training data (audio files only) in a self-supervised pre-training phase. This phase uses the wav2vec 2.0 framework, which masks the input in the latent space and solves a contrastive task. The model is then fine-tuned on a small amount of labeled data. We also exploit wav2vec 2.0 models pre-trained on other languages, fine-tuning them on annotated Arabic data. We show that using the wav2vec 2.0 framework for pre-training on Arabic is considerably time- and resource-consuming: the model took 21.5 days (about 3 weeks) to complete 662 epochs and reach a validation accuracy of 58%. Arabic is a right-to-left (RTL) language with many diacritics that indicate how letters should be pronounced; these two features make it difficult to fit Arabic into these models, as the transcript files require heavy pre-processing. We demonstrate that we can fine-tune a cross-lingual model, trained on raw waveforms of speech in multiple languages, on Arabic data and reach a low word error rate of 36.53%. We also show that tuning the model parameters increases accuracy, decreasing the word error rate from 54.00% to 36.69%.
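
The "heavy pre-processing" of Arabic transcripts mentioned above typically includes stripping diacritics so the CTC vocabulary stays small. Below is a minimal sketch of such a normalization step, assuming the standard Unicode range for Arabic diacritic marks; the exact normalization used in the thesis may differ.

```python
# A sketch of one common normalization step for Arabic transcripts:
# removing diacritics (harakat) before CTC training. The Unicode range
# below covers the standard diacritic marks plus the superscript alef.
import re

ARABIC_DIACRITICS = re.compile(r"[\u064B-\u0652\u0670]")

def normalize_transcript(text: str) -> str:
    """Strip diacritics and collapse whitespace in an Arabic transcript."""
    return " ".join(ARABIC_DIACRITICS.sub("", text).split())

print(normalize_transcript("السَّلامُ عَلَيْكُم"))  # -> السلام عليكم
```
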
3

Improving accuracy of speech recognition for low resource accents : Testing the performance of fine-tuned Wav2vec2 models on accented Swedish

Dabiri, Arash January 2023
While the field of speech recognition has advanced quickly in recent years, even the highest-performing models struggle with accents. Several methods exist for improving performance on accented speech, but many are hard to implement or need large amounts of data, and are therefore costly. This makes it relevant to examine the Wav2vec2 architecture, which has previously performed well with small amounts of labeled data. Starting from a model trained on Swedish, this thesis fine-tunes the model on small datasets of three Swedish accents, creating both accent-dependent specialized models and an accent-independent general model, as sketched below. The specialized models perform better than the original model, and the general model performs approximately as well as each specialized model without sacrificing performance on non-accented Swedish. The Wav2vec2 framework thus offers a low-cost method of improving speech recognition that can be used to improve private and public services for larger parts of the population.
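
A minimal sketch of what such a fine-tuning setup can look like, assuming the Hugging Face transformers API and the National Library of Sweden's public Swedish checkpoint; the hyperparameters are illustrative, not the thesis's actual values.

```python
# A hedged sketch of continued CTC fine-tuning on a small accented
# dataset, starting from the National Library of Sweden's Swedish
# checkpoint. Hyperparameters are illustrative.
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor, TrainingArguments

model_id = "KBLab/wav2vec2-large-voxrex-swedish"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Freeze the convolutional feature encoder; with little accented data,
# updating only the transformer layers is the usual, more stable recipe.
model.freeze_feature_encoder()

args = TrainingArguments(
    output_dir="wav2vec2-accented-swedish",
    per_device_train_batch_size=8,
    learning_rate=1e-4,          # small, so the pretraining is not destroyed
    num_train_epochs=30,
)
# A transformers.Trainer would then combine `model`, `args`, a CTC data
# collator, and the preprocessed accent dataset to run the fine-tuning.
```
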
4

Multilingual Speech Emotion Recognition using pretrained models powered by Self-Supervised Learning

Luthman, Felix January 2022
Society is based on communication, for which speech is the most prevalent medium. In day-to-day interactions we talk to each other, but it is not only the words spoken that matter, but the emotional delivery as well. Extracting emotion from speech has therefore become a research topic within the area of speech tasks. In recent years this area as a whole has adopted a self-supervised learning approach for learning speech representations from raw speech audio, without the need for any supplementary labelling. These speech representations can be leveraged for solving tasks limited by the availability of annotated data, be it for a low-resource language or a general lack of data for the task itself. This thesis evaluates a set of pre-trained speech models by fine-tuning them in different multilingual environments and assessing their performance thereafter. The model presented here is based on wav2vec 2.0 and correctly classifies 86.58% of samples over eight different languages and four emotional classes when trained on those same languages. Experiments were conducted to gauge how well a model trained on seven languages performs on the one left out, which showed a rather large margin of similarity in how different cultures express vocal emotion. Further investigation showed that as little as a few minutes of in-domain data can increase performance substantially. This is promising even for niche languages, as the amount of available data may not be as large a hurdle as one might think. That said, increasing the amount of data from minutes to hours still yields substantial improvements, albeit to a lesser degree.
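
A minimal sketch of the architecture described above: a pooled classification head on top of a pretrained wav2vec 2.0 encoder with four emotion classes. The base checkpoint and label set are illustrative, not necessarily those used in the thesis.

```python
# A minimal sketch: a classification head on a pretrained wav2vec 2.0
# encoder. The checkpoint and the four labels are illustrative.
import numpy as np
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

base = "facebook/wav2vec2-base"
extractor = Wav2Vec2FeatureExtractor.from_pretrained(base)
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    base,
    num_labels=4,  # e.g. angry / happy / neutral / sad
)

audio = np.zeros(16_000, dtype=np.float32)  # one second of silent dummy audio
inputs = extractor(audio, sampling_rate=16_000, return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 4]) -- one score per emotion class
```
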
5

A Swedish wav2vec versus Google speech-to-text

Lagerlöf, Ester January 2022
As automatic speech recognition technology becomes more advanced, the range of fields in which it can operate grows. The best automatic speech recognition systems today are mainly based on, and made for, the English language. However, the National Library of Sweden recently released open-source wav2vec models built with the Swedish language in mind. To investigate their performance, one of these models is chosen to assess how well it transcribes the Swedish news broadcasts ”kvart-i-fem”-ekot, comparing its results with Google speech-to-text. The results present wav2vec as the prominent model for this type of audio data, achieving a word error rate that is on average 9 percentage points lower than Google speech-to-text's. Part of this performance can be attributed to the self-supervised method the wav2vec model uses to leverage large amounts of unlabeled data in its training. In spite of this, both models had difficulty transcribing audio of poor quality, such as clips with disturbing background noise or stationary sounds. Abbreviations and names were also difficult for both to transcribe correctly, although Google speech-to-text performed better than the wav2vec model in this respect.
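
The comparison hinges on averaging per-clip word error rates. A small sketch of that evaluation procedure, assuming the jiwer library as the WER implementation and made-up transcripts in place of the ”kvart-i-fem” segments:

```python
# A sketch of the evaluation: per-clip WER for each system, compared as
# averages. jiwer is an assumed library choice; transcripts are made up.
import jiwer

references = ["i dag presenterade regeringen sin nya budget"]
wav2vec_hyp = ["i dag presenterade regeringen sin nya budget"]
google_hyp = ["idag presenterade regeringen sin nya budget"]

def mean_wer(refs, hyps):
    # WER = (substitutions + deletions + insertions) / words in reference
    return sum(jiwer.wer(r, h) for r, h in zip(refs, hyps)) / len(refs)

print("wav2vec:", mean_wer(references, wav2vec_hyp))  # 0.0
print("google :", mean_wer(references, google_hyp))   # 2/7: one sub, one del
```
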
6

Automatic speech recognition as a method to investigate articulation rate in Swedish

Martin Björkdahl, Liv January 2022
Recent developments in automatic speech recognition have led to less resource-demanding and more effective models, opening new possibilities for research on spontaneous speech. In this study, the National Library of Sweden's (KB) Swedish version of Wav2Vec 2.0 is used to create a speech corpus from Sveriges Radio audio clips and to investigate articulation rate in spontaneous speech; the study also aims to assess whether this is a sound method. Previous studies have observed that articulation rate is negatively correlated with information density. Building on the Uniform Information Density hypothesis, which assumes that speakers aim to distribute information evenly across an utterance, the study investigates whether the sum of the dependency lengths between all heads and dependents in a sentence is correlated with articulation rate. The results show that calculating articulation rate with KB's Wav2Vec 2.0 leads to systematically higher articulation rates than manual calculation, and that the correlation between the number of syllables in a word and articulation rate is the reverse of what earlier studies using manual methods have shown. The hypothesis that longer dependency lengths would correlate with higher articulation rates is not supported; instead, the opposite effect is observed, with articulation rate decreasing as dependency length increases. The study highlights the need for a model specialized in calculating duration in order to further explore articulation rate through automatic speech recognition.

Keywords: ASR, automatic speech recognition, UID, articulation rate, dependency length, dependency minimization, corpus studies, information density
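
Articulation rate is commonly defined as syllables per second of articulated (pause-free) speech. The rough sketch below computes it from a time-aligned transcript; the vowel-counting syllable approximation and the hand-written alignment are simplifications for illustration, not the study's actual method.

```python
# A rough sketch: articulation rate as syllables per second of speech,
# excluding pauses. Counting vowel letters as syllable nuclei and the
# hard-coded word timings are simplifications for illustration only.
SWEDISH_VOWELS = set("aeiouyåäö")

def count_syllables(word: str) -> int:
    # Approximation: every vowel letter is one syllable nucleus.
    return max(1, sum(ch in SWEDISH_VOWELS for ch in word.lower()))

# (word, start_sec, end_sec) from a time-aligned ASR transcript
aligned = [("hej", 0.10, 0.35), ("och", 0.35, 0.50), ("välkomna", 0.55, 1.10)]

syllables = sum(count_syllables(word) for word, _, _ in aligned)
spoken_time = sum(end - start for _, start, end in aligned)  # pauses excluded
print(f"articulation rate: {syllables / spoken_time:.2f} syllables/s")
```
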
7

A Comparative Analysis of Whisper and VoxRex on Swedish Speech Data

Fredriksson, Max, Ramsay Veljanovska, Elise January 2024
With the constant development of more advanced speech recognition models, determining which models are better in specific areas and for specific purposes becomes increasingly crucial, even more so for low-resource languages such as Swedish that depend on the progress of models for the large international languages. Lagerlöf (2022) conducted a comparative analysis between Google's speech-to-text model and NLoS's VoxRex B, concluding that VoxRex was the best for Swedish audio. Since then, OpenAI has released its automatic speech recognition model Whisper, prompting a reassessment of the preferred choice for transcribing Swedish. In this comparative analysis using data from Swedish radio news segments, Whisper performs better than VoxRex in tests on the raw output, a result strongly influenced by Whisper's more proficient sentence construction. Regarding pure word prediction, neither model can conclusively be declared better, although the results favor VoxRex, which displays lower variability. Even though Whisper predicts full text better, the choice of model should therefore be determined by the user's needs.
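
A sketch of how the two systems can be run side by side on the same Swedish clip, assuming the openai-whisper package and KBLab's VoxRex checkpoint on the Hugging Face Hub; the file name is a placeholder.

```python
# A sketch of running both systems on the same Swedish clip. Assumes the
# openai-whisper package and KBLab's VoxRex checkpoint; the file name is
# a placeholder.
import whisper
from transformers import pipeline

clip = "news_segment.wav"

whisper_model = whisper.load_model("small")
whisper_text = whisper_model.transcribe(clip, language="sv")["text"]

voxrex = pipeline("automatic-speech-recognition",
                  model="KBLab/wav2vec2-large-voxrex-swedish")
voxrex_text = voxrex(clip)["text"]

# Whisper emits cased, punctuated sentences while the CTC-based VoxRex
# emits raw tokens, so transcripts are normally normalized before WER
# is computed on them.
print(whisper_text)
print(voxrex_text)
```
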
