1 | Automatic subtitling: A new paradigm. Karakanta, Alina. 11 November 2022
Audiovisual Translation (AVT) is a field where Machine Translation (MT) has long found limited success, mainly due to the multimodal nature of the source and the formal requirements of the target text. Subtitling is the predominant AVT type, quickly and easily providing access to the vast amounts of audiovisual content becoming available daily. Automation in subtitling has so far focused on MT systems which translate source language subtitles, already transcribed and timed by humans. With recent developments in speech translation (ST), the time is ripe for extended automation in subtitling, with end-to-end solutions for obtaining target language subtitles directly from the source speech. In this thesis, we address the key steps for accomplishing the new paradigm of automatic subtitling: data, models and evaluation. First, we address the lack of representative data by compiling MuST-Cinema, a speech-to-subtitles corpus. Segmenter models trained on MuST-Cinema accurately split sentences into subtitles and enable automatic data augmentation techniques. Having representative data at hand, we move on to developing direct ST models for three scenarios: offline subtitling, dual subtitling, and live subtitling. Lastly, we propose methods for evaluating subtitle-specific aspects, such as metrics for subtitle segmentation, a product- and process-based exploration of the effect of spotting changes in the subtitle post-editing process, and finally, a comprehensive survey on subtitlers' user experience and views on automatic subtitling. Our findings show the potential of speech technologies for extending automation in subtitling to provide multilingual access to information and communication.
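As a minimal illustration of the constraint such segmenter models learn, the sketch below greedily wraps a sentence into subtitle blocks. The 42-characters-per-line and two-lines-per-block limits are common industry conventions assumed here for illustration; trained segmenters such as those built on MuST-Cinema learn break points from data rather than applying a fixed rule.

```python
# A naive, greedy, purely length-based subtitle segmenter, for illustration.
# The limits below are common industry conventions, not thesis-specific values.
MAX_CHARS_PER_LINE = 42
MAX_LINES_PER_SUBTITLE = 2

def segment(sentence: str) -> list[list[str]]:
    """Split a sentence into subtitle blocks of at most two 42-char lines."""
    lines, current = [], ""
    for word in sentence.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= MAX_CHARS_PER_LINE:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    # Group consecutive lines into blocks (one block = one subtitle on screen).
    return [lines[i:i + MAX_LINES_PER_SUBTITLE]
            for i in range(0, len(lines), MAX_LINES_PER_SUBTITLE)]

print(segment("With recent developments in speech translation, the time is "
              "ripe for extended automation in subtitling."))
```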
2 | Encoder-decoder neural networks. Kalchbrenner, Nal. January 2017
This thesis introduces the concept of an encoder-decoder neural network and develops architectures for the construction of such networks. Encoder-decoder neural networks are probabilistic conditional generative models of high-dimensional structured items such as natural language utterances and natural images. Encoder-decoder neural networks estimate a probability distribution over structured items belonging to a target set conditioned on structured items belonging to a source set. The distribution over structured items is factorized into a product of tractable conditional distributions over individual elements that compose the items. The networks estimate these conditional factors explicitly. We develop encoder-decoder neural networks for core tasks in natural language processing and natural image and video modelling. In Part I, we tackle the problem of sentence modelling and develop deep convolutional encoders to classify sentences; we extend these encoders to models of discourse. In Part II, we go beyond encoders to study the longstanding problem of translating from one human language to another. We lay the foundations of neural machine translation, a novel approach that views the entire translation process as a single encoder-decoder neural network. We propose a beam search procedure to search over the outputs of the decoder to produce a likely translation in the target language. Besides known recurrent decoders, we also propose a decoder architecture based solely on convolutional layers. Since the publication of these new foundations for machine translation in 2013, encoder-decoder translation models have been richly developed and have displaced traditional translation systems both in academic research and in large-scale industrial deployment. In services such as Google Translate these models process in the order of a billion translation queries a day. In Part III, we shift from the linguistic domain to the visual one to study distributions over natural images and videos. We describe two- and three-dimensional recurrent and convolutional decoder architectures and address the longstanding problem of learning a tractable distribution over high-dimensional natural images and videos, where the likely samples from the distribution are visually coherent. The empirical validation of encoder-decoder neural networks as state-of-the-art models of tasks ranging from machine translation to video prediction has a two-fold significance. On the one hand, it validates the notions of assigning probabilities to sentences or images and of learning a distribution over a natural language or a domain of natural images; it shows that a probabilistic principle of compositionality, whereby a high-dimensional item is composed from individual elements at the encoder side and whereby a corresponding item is decomposed into conditional factors over individual elements at the decoder side, is a general method for modelling cognition involving high-dimensional items; and it suggests that the relations between the elements are best learnt in an end-to-end fashion as non-linear functions in distributed space. On the other hand, the empirical success of the networks on the tasks characterizes the underlying cognitive processes themselves: a cognitive process as complex as translating from one language to another that takes a human a few seconds to perform correctly can be accurately modelled via a learnt non-linear deterministic function of distributed vectors in high-dimensional space.
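The beam search procedure mentioned above can be sketched in a few lines. This is a generic illustration, not the thesis's implementation: `step` is a hypothetical interface that returns log-probabilities over next tokens given a prefix, i.e. the tractable conditional factors the network estimates.

```python
from typing import Callable, Sequence

def beam_search(step: Callable[[Sequence[int]], dict[int, float]],
                eos: int, beam_size: int = 4, max_len: int = 50) -> list[int]:
    """Return the highest-scoring token sequence under a left-to-right
    factorized model. `step(prefix)` is assumed to map next-token ids to
    log-probabilities (a hypothetical interface, for illustration)."""
    beams = [([], 0.0)]          # (prefix, cumulative log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for tok, logp in step(prefix).items():
                candidates.append((prefix + [tok], score + logp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_size]:
            # Hypotheses ending in EOS are complete; others stay on the beam.
            (finished if prefix[-1] == eos else beams).append((prefix, score))
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])[0]
```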
3 | Comparing Encoder-Decoder Architectures for Neural Machine Translation: A Challenge Set Approach. Doan, Coraline. 19 November 2021
Machine translation (MT) as a field of research has seen significant advances in recent years, with increased interest in neural machine translation (NMT). By combining deep learning with translation, researchers have been able to deliver systems that perform better than most, if not all, of their predecessors. While the general consensus regarding NMT is that it renders higher-quality translations that are overall more idiomatic, researchers recognize that NMT systems still struggle with certain classic difficulties, and that their performance may vary depending on their architecture. In this project, we implement a challenge-set based approach to evaluating examples of three main NMT architectures: convolutional neural network (CNN)-based systems, recurrent neural network (RNN)-based systems, and attention-based systems, trained on the same data set for English-to-French translation. The challenge set focuses on a selection of lexical and syntactic difficulties (e.g., ambiguities) drawn from the literature on human translation, machine translation, and writing for translation, and also includes variations in sentence lengths and structures that are recognized as sources of difficulty even for NMT systems. This set allows us to evaluate the systems' performance in multiple areas of difficulty, as well as to evaluate any differences between the architectures' performance. Through our challenge set, we found that our CNN-based system tends to reword sentences, sometimes shifting their meaning, while our RNN-based system seems to perform better when provided with a larger context, and our attention-based system seems to struggle the longer a sentence becomes.
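A challenge-set evaluation of this kind reduces to scoring each system on each item and aggregating by phenomenon. The sketch below assumes each item pairs a source sentence with a regex that any acceptable translation should match; the example item and the `systems` mapping of names to translation functions are illustrative placeholders, not the thesis's actual test data.

```python
import re
from collections import Counter

# One challenge item: an English source, the phenomenon it probes, and a
# pattern any acceptable French output should match (illustrative only).
challenge_set = [
    {"source": "The fan broke down.",
     "phenomenon": "lexical ambiguity",
     "expected": r"\bventilateur\b"},
]

def evaluate(systems: dict, items: list) -> dict:
    """Count, per system, how many items of each phenomenon pass.
    `systems` maps a system name to a translate(str) -> str function."""
    scores = {name: Counter() for name in systems}
    for item in items:
        for name, translate in systems.items():
            if re.search(item["expected"], translate(item["source"]), re.I):
                scores[name][item["phenomenon"]] += 1
    return scores
```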
4 | Enriching Neural MT through Multi-Task Training. Macháček, Dominik. January 2018
The Transformer model is a recent, fast and powerful architecture for neural machine translation. We experiment with multi-task learning to enrich the source side of the Transformer with linguistic resources, providing it with additional information to better learn linguistic and world knowledge. We analyze two approaches: the basic shared model with multi-tasking through simple data manipulation, and multi-decoder models. We test joint models for machine translation (MT) with POS tagging, dependency parsing and named entity recognition as the secondary tasks. We evaluate them in comparison with the baseline and with dummy, linguistically unrelated tasks. We focus primarily on the standard-size data setting for German-to-Czech MT. Although our enriched models did not significantly outperform the baseline, we empirically document that (i) the MT models benefit from the secondary linguistic tasks; (ii) considering the amount of training data consumed, the multi-tasking models learn faster; (iii) in low-resource conditions, multi-tasking significantly improves the model; (iv) the more fine-grained the annotation of the source used as the secondary task, the higher the benefit to MT.
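The "simple data manipulation" variant can be made concrete with a small sketch: the secondary task's corpus is converted into tagged parallel pairs and mixed into the MT training data, so a single shared model serves both tasks. The tags, sentences and annotations below are illustrative assumptions, not the thesis's actual data format.

```python
import random

# Multi-tasking through data manipulation: each example carries a task tag
# on the source side, and one shared encoder-decoder is trained on the mix.
mt_pairs = [("<mt> Das Haus ist alt .", "Ten dům je starý .")]     # MT
pos_pairs = [("<pos> Das Haus ist alt .", "DET NOUN AUX ADJ PUNCT")]  # POS

mixed = mt_pairs + pos_pairs
random.shuffle(mixed)  # interleave the tasks within each training epoch
for source, target in mixed:
    print(f"{source}\t{target}")
```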
5 | Multi-Target Machine Translation. Ihnatchenko, Bohdan. January 2020
In international and highly multilingual environments, it often happens that a talk, a document, or some other input needs to be translated into a massive number of other languages. However, it is not always an option to have a distinct system for each possible language pair, because training and operating such translation systems is computationally demanding. Combining multiple target languages into one translation model usually causes a decrease in output quality for each of its translation directions. In this thesis, we experiment with combinations of target languages to see if a specific grouping of them can lead to better results than randomly selecting target languages. We build upon recent research on training a multilingual Transformer model without any change to its architecture: adding a target language tag to the source sentence. We trained a large number of bilingual and multilingual Transformer models and evaluated them on multiple test sets from different domains. We found that in most cases, grouping related target languages into one model yielded better performance than models with randomly selected languages. However, we also found that the domain of the test set, as well as the domains of the data sampled into the training set, usually have a more...
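The tag-based setup the thesis builds on changes only the data, not the model: a token naming the desired target language is prepended to each source sentence. A minimal sketch, with the language codes and the grouping chosen purely for illustration:

```python
# Target-language tags: the only change to the training data is a token
# telling the model which language to produce. Which tags share one model
# is exactly the grouping question studied here.
def tag_source(src: str, tgt_lang: str) -> str:
    return f"<2{tgt_lang}> {src}"

related_group = ["cs", "sk", "pl"]   # e.g. grouping related Slavic targets
for lang in related_group:
    print(tag_source("The talk starts at noon .", lang))
# <2cs> The talk starts at noon .
# <2sk> The talk starts at noon .
# <2pl> The talk starts at noon .
```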
6 | Detecting Logical Errors in Programming Assignments Using code2seq. Lückner, Anton; Chapman, Kevin. January 2023
The demand for new competent programmers is increasing with the ever-growing dependency on technology, and the workload of teachers with more and more students creates a need for more automated tools for feedback and grading. While some tools exist that alleviate this to some degree, machine learning presents an interesting avenue for techniques and tools to do this more efficiently. Logical errors are common occurrences in novice code, and a model that could detect them would therefore alleviate the workload for teachers and be a boon to students. This study aims to explore the performance of the machine learning model code2seq in detecting logical errors. This is explored through an empirical experiment in which a data set consisting of real-world Java code, modified to contain one specific logical error, is used to train, validate and test the code2seq model. The performance of the model is measured using four metrics: accuracy, precision, recall and F1-score. The results of this study show promise for the application of the code2seq model in detecting logical errors, with potential for real-world use in classrooms.
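The four reported metrics all follow from the binary confusion counts of the detector's predictions. A small helper, written here as a generic sketch rather than the study's evaluation code:

```python
def metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, precision, recall and F1 from binary prediction counts
    (true/false positives and negatives of the error detector)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

print(metrics(tp=80, fp=10, fn=20, tn=90))  # illustrative counts
```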
7 | Multilingual Neural Machine Translation for Low Resource Languages. Lakew, Surafel Melaku. 20 April 2020
Machine Translation (MT) is the task of mapping a source language to a target language. The recent introduction of neural MT (NMT) has shown promising results for high-resource languages, but it performs poorly in low-resource language (LRL) settings. Furthermore, the vast majority of the 7,000+ languages around the world do not have parallel data, creating a zero-resource language (ZRL) scenario. In this thesis, we present our approach to improving NMT for LRLs and ZRLs, leveraging multilingual NMT modeling (M-NMT), an approach that allows building a single NMT system to translate across multiple source and target languages. This thesis i) analyzes the effectiveness of M-NMT for LRL and ZRL translation tasks, spanning two NMT modeling architectures (recurrent and Transformer); ii) presents a self-learning approach for improving the zero-shot translation directions of ZRLs; iii) proposes a dynamic transfer-learning approach from a pre-trained (parent) model to an LRL (child) model by tailoring to the vocabulary entries of the latter; iv) extends M-NMT to translate from a source language to specific language varieties (e.g., dialects); and finally, v) proposes an approach that can control the verbosity of an NMT model's output. Our experimental findings show the effectiveness of the proposed approaches in improving NMT for LRLs and ZRLs.
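Of the listed contributions, the transfer-learning step (iii) is the most mechanical: the child model's embedding matrix is seeded from the parent wherever the two vocabularies overlap. The sketch below is an illustrative rendering of that idea under assumed names and shapes, not the thesis's exact procedure.

```python
import numpy as np

def tailor_embeddings(parent_emb: np.ndarray, parent_vocab: dict[str, int],
                      child_vocab: dict[str, int]) -> np.ndarray:
    """Seed a child (LRL) embedding matrix from a parent model: tokens
    shared with the parent vocabulary inherit the parent's vectors, new
    tokens get small random vectors. A sketch of the transfer idea only."""
    dim = parent_emb.shape[1]
    child_emb = np.random.normal(0.0, 0.01, (len(child_vocab), dim))
    for token, idx in child_vocab.items():
        if token in parent_vocab:
            child_emb[idx] = parent_emb[parent_vocab[token]]
    return child_emb
```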
8 | Improving the Quality of Neural Machine Translation Using Terminology Injection. Dougal, Duane K. 01 December 2018
Most organizations use an increasing number of domain- or organization-specific words and phrases. A translation process, whether human or automated, must also be able to accurately and efficiently use these specific multilingual terminology collections. However, comparatively little has been done to explore the use of vetted terminology as an input to machine translation (MT) for improved results. In fact, no single established process currently exists to integrate terminology into MT as a general practice, and in particular no established process for neural machine translation (NMT) exists to ensure that the translation of individual terms is consistent with an approved terminology collection. The use of tokenization as a method of injecting terminology, and of evaluating terminology injection, is the focus of this thesis. I use the attention mechanism prevalent in state-of-the-art NMT systems to produce the desired results. Attention vectors play an important part in this method, correctly identifying semantic entities and aligning the tokens that represent them. The methods presented in this thesis use these attention vectors to align the source tokens in the sentence to be translated with the target tokens in the final translation output. Then, supplied terminology is injected where these alignments correctly identify semantic entities. My methods demonstrate significant improvement over state-of-the-art results for NMT using terminology injection.
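At its core, attention-based injection aligns each target position to its most-attended source position and substitutes the approved term wherever the aligned source token has a term-base entry. The sketch below is a deliberately simplified one-token-per-term illustration of that idea, not the thesis's implementation:

```python
import numpy as np

def inject_terms(src_tokens: list[str], tgt_tokens: list[str],
                 attention: np.ndarray, term_base: dict[str, str]) -> list[str]:
    """attention[t][s] is the weight on source token s when producing
    target token t. For each target token aligned to a source token with
    a term-base entry, substitute the approved translation. One-token
    terms only; a simplification for illustration."""
    out = list(tgt_tokens)
    for t, row in enumerate(attention):
        s = int(np.argmax(row))            # most-attended source position
        if src_tokens[s] in term_base:
            out[t] = term_base[src_tokens[s]]
    return out
```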
9 | Sequence to Sequence Machine Learning for Automatic Program Repair. Svensson, Niclas; Vrabac, Damir. January 2019
Most previous program repair approaches, including machine learning-based ones, can only generate fixes for one-line bugs. This work aims to reveal whether a system using state-of-the-art techniques can make useful predictions when fed whole source files. To verify whether multi-line bugs can be fixed with a state-of-the-art solution, a system was built using existing Neural Machine Translation tools and data gathered from GitHub. The results of the finished system show, however, that the method used in this thesis is not sufficient to obtain satisfying results: no bug was successfully corrected by the system. Although the results are poor, there are still unexplored approaches that could improve the system's performance, one being narrowing the input data down to the method level of the source code instead of the file level.
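Narrowing the input from file level to method level could start from something as simple as brace matching. The sketch below is a naive illustration, not the authors' tooling; a real pipeline would use a proper Java parser.

```python
def extract_methods(java_source: str) -> list[str]:
    """Very naive method extraction by brace matching: treats each brace
    block nested directly inside the class body as one method. Ignores
    strings, comments, nested classes, etc.; a sketch only."""
    methods, depth, start = [], 0, None
    for i, ch in enumerate(java_source):
        if ch == "{":
            if depth == 1 and start is None:
                # Method signature starts at the beginning of this line.
                start = java_source.rfind("\n", 0, i) + 1
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 1 and start is not None:
                methods.append(java_source[start:i + 1])
                start = None
    return methods
```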
10 | Java Syntax Error Repair Using RoBERTa. Xiang, Ziyi. January 2022
Deep learning has achieved promising results for automatic program repair (APR). In this paper, we revisit this topic and propose an end-to-end approach, Classfix, to correct Java syntax errors. Classfix uses a RoBERTa classification model to localize the error, and a RoBERTa encoder-decoder model to repair the located buggy line. Our work introduces a new localization method that enables us to fix a program of arbitrary length. Our approach categorises errors into symbol errors and word errors. We conduct a large-scale experiment to evaluate Classfix: the results show that Classfix is able to repair 75.5% of symbol errors and 64.3% of word errors. In addition, Classfix achieves 97% and 84.7% accuracy in locating symbol errors and word errors, respectively.
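The localize-then-repair design can be sketched as a two-stage pipeline: a classifier scores each line as buggy or not, and a sequence-to-sequence model rewrites the most suspicious one. Both checkpoint names below are hypothetical placeholders for fine-tuned models; nothing here is Classfix's released code.

```python
from transformers import pipeline

# Stage 1 localizes, stage 2 repairs. The model names are placeholders for
# fine-tuned RoBERTa-based checkpoints, assumed for illustration.
locate = pipeline("text-classification", model="your-org/roberta-line-buggy")
repair = pipeline("text2text-generation", model="your-org/roberta-line-fixer")

def localize_then_repair(program: str) -> str:
    """Score every line, then rewrite the line most likely to be buggy."""
    lines = program.splitlines()
    scores = [locate(line)[0] for line in lines]
    buggy = max(range(len(lines)),
                key=lambda i: scores[i]["score"]
                if scores[i]["label"] == "BUGGY" else 0.0)
    lines[buggy] = repair(lines[buggy])[0]["generated_text"]
    return "\n".join(lines)
```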