Direct Speech Translation Toward High-Quality, Inclusive, and Augmented Systems
Gaido, Marco (28 April 2023)
When this PhD started, the translation of speech into text in a different language was mainly tackled with a cascade of automatic speech recognition (ASR) and machine translation (MT) models, as the emerging direct speech translation (ST) models were not yet competitive. To close this gap, part of the PhD was devoted to improving the quality of direct models, both in the simplified condition of test sets where the audio is split into well-formed sentences, and in the realistic condition in which the audio is automatically segmented. First, we investigated how to transfer knowledge from MT models trained on large corpora. Then, we defined encoder architectures that assign different weights to the vectors in the input sequence, reflecting how the amount of information in speech varies over time. Finally, we reduced the adverse effects caused by suboptimal automatic audio segmentation in two ways: on one side, we created models robust to this condition; on the other, we enhanced the audio segmentation itself. The good results achieved in terms of overall translation quality allowed us to investigate specific behaviors of direct ST systems that are crucial to satisfying real users’ needs. On one side, driven by the ethical goal of inclusive systems, we revealed that established technical choices geared toward high general performance (statistical word segmentation of the target text, knowledge distillation from MT) exacerbate the gender representational disparities present in the training data. Along this line of work, we proposed mitigation techniques that reduce the gender bias of ST models, and showed how gender-specific systems can be used to control the translation of gendered words referring to the speakers, regardless of their vocal traits. On the other side, motivated by the practical needs of interpreters and translators, we evaluated the potential of direct ST systems in the “augmented translation” scenario, focusing on the translation and recognition of named entities (NEs). Along this line of work, we proposed solutions to cope with a major weakness of ST models (the handling of person names), and introduced direct models that jointly perform ST and NE recognition, showing their superiority over a pipeline of dedicated tools for the two tasks. Overall, we believe that this thesis takes a step toward the adoption of direct ST systems in real applications, increasing awareness of their strengths and weaknesses compared to the traditional cascade paradigm.
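As a concrete illustration of one technique mentioned above, the following is a minimal sketch (in PyTorch, with random tensors and illustrative dimensions standing in for real model outputs; it is not the thesis's actual implementation) of token-level knowledge distillation from an MT teacher to a direct ST student: the student is trained to match the teacher's output distribution at each target position. In practice this term is typically interpolated with the standard cross-entropy loss on the reference translation.

import torch
import torch.nn.functional as F

# Illustrative sizes: vocabulary, target length, batch (all assumptions).
vocab, tgt_len, batch = 8000, 20, 4
temperature = 1.0  # softens the distributions; 1.0 keeps them unchanged

# Stand-ins for real logits: the MT teacher reads the gold transcript,
# the ST student reads the audio; both score the same target positions.
teacher_logits = torch.randn(batch, tgt_len, vocab)
student_logits = torch.randn(batch, tgt_len, vocab, requires_grad=True)

# KL divergence between student and teacher distributions, the usual
# distillation objective (scaled by temperature**2 when temperature != 1).
kd_loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * temperature ** 2
kd_loss.backward()  # gradients flow only into the student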
Neural Speech Translation: From Neural Machine Translation to Direct Speech Translation
Di Gangi, Mattia Antonino (27 April 2020)
Sequence-to-sequence learning has led to significant improvements in machine translation (MT) and automatic speech recognition (ASR) systems. These advancements were first reflected in spoken language translation (SLT) by using a cascade of (at least) ASR and MT with the new "neural" models, and then by using sequence-to-sequence learning to directly translate the input speech into text in the target language. In this thesis we cover both approaches to the SLT task. First, we show the limits of NMT in terms of robustness to input errors when compared to the previous phrase-based state of the art. We then focus on the NMT component to achieve better translation quality with higher computational efficiency by using a network based on weakly-recurrent units. Our last work involving a cascade explores the effects on NMT robustness of adding automatic transcripts to the training data. To move to the direct speech-to-text approach, we introduce MuST-C, the largest multilingual SLT corpus for training direct translation systems. MuST-C significantly increases the size of the publicly available data for this task, as well as its language coverage. With such data available, we adapted the Transformer architecture to the SLT task for its computational efficiency. Our adaptation, which we call S-Transformer, is designed to better model the audio input, and with it we set a new state of the art on MuST-C. Building on these positive results, we finally apply S-Transformer in three settings: i) one-to-many multilingual translation by training it on MuST-C; ii) participation in the IWSLT 2019 shared task with data augmentation; and iii) instance-based adaptation that uses the training data at test time. The results in this thesis show a steady quality improvement in direct SLT. Our hope is that the presented resources and technological solutions will increase its adoption in the near future, so as to make multilingual information access easier in a globalized world.
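To illustrate the kind of adaptation involved, below is a minimal sketch (in PyTorch; layer sizes, names, and the omission of positional encodings are illustrative simplifications, not the actual S-Transformer) of the common pattern of compressing log-Mel speech features with strided 2D convolutions before standard Transformer encoder layers, so that the long audio sequence is shortened toward text-like lengths.

import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    def __init__(self, n_mels=80, d_model=256, n_layers=6, n_heads=4):
        super().__init__()
        # Two strided 2D convolutions shrink the time axis by 4x before
        # self-attention, easing the length mismatch between audio and text.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(64 * ((n_mels + 3) // 4), d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=1024, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, feats):                  # feats: (batch, time, n_mels)
        x = self.conv(feats.unsqueeze(1))      # (batch, 64, time/4, n_mels/4)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        return self.encoder(self.proj(x))      # (batch, time/4, d_model)

enc = SpeechEncoder()
out = enc(torch.randn(2, 1000, 80))  # ~10 s of 80-dim log-Mel frames
print(out.shape)                     # torch.Size([2, 250, 256])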