1

Disambiguating Italian homographic heterophones with SoundChoice and testing ChatGPT as a data-generating tool

Nanni, Matilde January 2023 (has links)
Text-to-Speech systems are challenged by homographs, words that have more than one possible pronunciation. Rule-based approaches are still often the preferred industry solution to this problem, although statistical, neural, and hybrid techniques have been explored, mostly for English. Ploujnikov and Ravanelli (2022) proposed a neural grapheme-to-phoneme framework, SoundChoice, which is available in an RNN and a transformer version and can be fine-tuned for homograph disambiguation thanks to a weighted homograph loss. This thesis trains and tests the framework on Italian rather than English, to see how it performs on a different language. Because the available data containing Italian homographs was insufficient for this task, the thesis also experiments with ChatGPT as a data-generating tool. SoundChoice was additionally evaluated out of domain by testing it on corpus data. The results showed that the RNN model reached 71% accuracy from a baseline of 59%; the transformer model performed better, improving from 57% to 74%. Further analysis would be needed to draw more solid conclusions about the origin of this gap, and the models should be trained on corpus data and tested on ChatGPT data to assess whether ChatGPT-generated data is indeed a suitable replacement for corpus data.
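The weighted homograph loss mentioned in the abstract can be pictured as a token-level cross-entropy in which the phoneme targets belonging to the homograph word are up-weighted relative to the rest of the sequence. The PyTorch sketch below is illustrative only; the function name, tensor shapes, and weighting scheme are assumptions and do not reproduce the SpeechBrain/SoundChoice implementation.

import torch
import torch.nn.functional as F

def weighted_homograph_loss(logits, targets, homograph_mask, homograph_weight=2.0):
    # logits:         (batch, seq_len, n_phonemes) raw model outputs
    # targets:        (batch, seq_len) gold phoneme indices
    # homograph_mask: (batch, seq_len) 1.0 where the target phoneme belongs
    #                 to the homograph word, 0.0 elsewhere
    per_token = F.cross_entropy(
        logits.transpose(1, 2),  # (batch, n_phonemes, seq_len), as F.cross_entropy expects
        targets,
        reduction="none",
    )
    # Up-weight the loss on the homograph's phonemes, leave other tokens at weight 1.0.
    weights = 1.0 + (homograph_weight - 1.0) * homograph_mask
    return (weights * per_token).sum() / weights.sum()

In fine-tuning, such a weighting pushes the model to get the homograph's pronunciation right even though those tokens make up only a small fraction of each training sequence.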
2

Homograph Disambiguation and Diacritization for Arabic Text-to-Speech Using Neural Networks / Homografdisambiguering och diakritisering för arabiska text-till-talsystem med hjälp av neurala nätverk

Lameris, Harm January 2021 (has links)
Pre-processing Arabic text for Text-to-Speech (TTS) systems poses major challenges, as Arabic omits short vowels in writing. This omission leads to a large number of homographs and means that Arabic text needs to be diacritized to disambiguate them and match them with the intended pronunciation. Diacritization has generally been achieved with rule-based, statistical, or hybrid methods that combine the two. Recently, diacritization methods involving deep learning have shown promise in reducing error rates, but these methods are not yet commonly used in TTS engines. To examine neural diacritization methods for use in TTS engines, we normalized and pre-processed a version of the Tashkeela corpus, a large diacritized corpus containing largely Classical Arabic texts, for TTS purposes. We then trained and tested three state-of-the-art recurrent-neural-network-based models on this data set. Additionally, we tested these models on the Wiki News corpus, a test set that contains Modern Standard Arabic (MSA) news articles and thus more closely resembles most TTS queries. The models were evaluated by comparing the Diacritic Error Rate (DER) and Word Error Rate (WER) achieved on each data set to one another and to the DER and WER reported in the original papers. In addition, per-diacritic accuracy was examined and a manual evaluation was performed. For the Tashkeela corpus, all models achieved a lower DER and WER than reported in the original papers, largely as a result of using more training data and of the TTS pre-processing steps performed on the data. For the Wiki News corpus, the error rates were higher, largely due to the domain gap between the data sets. We found that on both data sets the models overfit to common patterns and to the most common diacritic, and on the Wiki News corpus they struggled with named entities and loanwords. Purely neural models generally outperformed the model that combined deep learning with rule-based and statistical corrections. These findings highlight the usability of deep-learning methods for Arabic diacritization in TTS engines, as well as the need for diacritized corpora that are more representative of Modern Standard Arabic.
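The DER and WER used in this evaluation can be illustrated with a small sketch: DER is the fraction of diacritic positions predicted incorrectly, while WER is the fraction of words containing at least one diacritic error. The function below is a minimal sketch under the assumption that predictions and references are already aligned word by word; the exact counting rules in the thesis (for example the treatment of case endings or undiacritized characters) may differ.

def der_wer(references, predictions):
    # references, predictions: lists of words, each word a list of
    # per-character diacritic labels, aligned one-to-one.
    diacritic_errors = total_diacritics = 0
    word_errors = total_words = 0
    for ref_word, pred_word in zip(references, predictions):
        total_words += 1
        mismatches = sum(r != p for r, p in zip(ref_word, pred_word))
        diacritic_errors += mismatches
        total_diacritics += len(ref_word)
        if mismatches:
            word_errors += 1
    der = diacritic_errors / total_diacritics if total_diacritics else 0.0
    wer = word_errors / total_words if total_words else 0.0
    return der, wer

# Example: two words, one diacritic wrong in the second word.
refs  = [["a", "u", "i"], ["a", "a"]]
preds = [["a", "u", "i"], ["a", "i"]]
print(der_wer(refs, preds))  # (0.2, 0.5)

Because a single wrong diacritic flags the whole word, WER is always at least as high as DER, which is why the two metrics are usually reported together.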
