181 |
Improving BERTScore for Machine Translation Evaluation Through Contrastive Learning. Yousuf, Oreen. January 2022
Since the advent of automatic evaluation, tasks within Natural Language Processing (NLP), including Machine Translation, have been able to make better use of both time and labor resources. More recently, multilingual pre-trained models (MLMs) have expanded many languages' capacity to participate in NLP research. The contextualized representations generated by these MLMs are influential in several downstream tasks and have inspired practitioners to make better sense of them. We propose the adoption of BERTScore, coupled with contrastive learning, for machine translation evaluation in lieu of BLEU, the industry-leading metric. BERTScore computes a similarity score for each token in a candidate and reference sentence, but it does away with exact matches in favor of computing token similarity using contextual embeddings. We improve BERTScore via contrastive learning-based fine-tuning of MLMs. We use contrastive learning to improve BERTScore across different language pairs in both high- and low-resource settings (English-Hausa, English-Chinese), across three models (XLM-R, mBERT, and LaBSE), and across three domains (news, religious, combined). We also investigated the effects of pairing relatively linguistically similar low-resource languages (Somali-Hausa), and of data size, on BERTScore and the corresponding Pearson correlation to human judgments. We found that reducing the distance between cross-lingual embeddings via contrastive learning gives BERTScore a substantially greater correlation with system-level human evaluation than BLEU for mBERT and LaBSE in all language pairs across multiple domains.
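As a hedged illustration of the scoring step described above, the sketch below computes a BERTScore-style F1 from per-token contextual embeddings using greedy cosine-similarity matching. The embedding dimensionality and the random toy inputs are assumptions; the real metric additionally handles subword tokenization and optional IDF weighting.

```python
import numpy as np

def bertscore_f1(cand_emb: np.ndarray, ref_emb: np.ndarray) -> float:
    """Greedy-matching BERTScore-style F1 from per-token contextual embeddings.

    cand_emb: (m, d) embeddings of candidate-translation tokens
    ref_emb:  (n, d) embeddings of reference-translation tokens
    """
    # L2-normalise so the dot product equals cosine similarity
    cand = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    ref = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)

    sim = cand @ ref.T                  # (m, n) token-to-token cosine similarities
    precision = sim.max(axis=1).mean()  # each candidate token matched to its best reference token
    recall = sim.max(axis=0).mean()     # each reference token matched to its best candidate token
    return 2 * precision * recall / (precision + recall)

# Toy usage: random vectors standing in for mBERT/XLM-R/LaBSE token embeddings
rng = np.random.default_rng(0)
print(bertscore_f1(rng.normal(size=(7, 768)), rng.normal(size=(9, 768))))
```

Contrastive fine-tuning, as studied in the thesis, would adjust the embeddings that feed this score before it is computed.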
|
182 |
Grammatical Error Correction for Learners of Swedish as a Second Language. Nyberg, Martina. January 2022
Grammatical Error Correction refers to the task of automatically correcting errors in written text, typically texts written by learners of a second language. The work in this thesis implements and evaluates two methods for Grammatical Error Correction in Swedish. In addition, the proposed methods are compared to an existing, rule-based system. Previous research on GEC for Swedish is limited and has not yet utilized the potential of neural networks. The first method is based on a neural machine translation approach, training a Transformer model to translate erroneous text into a corrected version. A parallel dataset containing artificially generated errors is created to train the model. The second method utilizes a Swedish version of the pre-trained language model BERT to estimate the likelihood of potential corrections in an erroneous text. Employing the SweLL gold corpus of essays written by learners of Swedish, the proposed methods are evaluated using GLEU and through a manual evaluation based on the types of errors and their corresponding corrections found in the essays. The results show that the two methods correct approximately the same number of errors, while differing in which error types they handle best. Specifically, the translation approach has a wider coverage of error types and is superior for syntactic and punctuation errors. In contrast, the language model approach yields consistently higher recall and outperforms the translation approach on lexical and morphological errors. To improve the results, future work could investigate the effect of increased model size and amount of training data, as well as the potential of combining the two methods.
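A minimal sketch of the second method's scoring step, assuming a Swedish masked language model that assigns probabilities to candidate corrections at a masked position. The checkpoint name and the simplification that each candidate is a single word-piece are assumptions, not the thesis's actual implementation.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Checkpoint name is an assumption (the KB-BERT Swedish model), not necessarily the one used in the thesis.
NAME = "KB/bert-base-swedish-cased"
tok = AutoTokenizer.from_pretrained(NAME)
mlm = AutoModelForMaskedLM.from_pretrained(NAME)

def score_corrections(tokens: list[str], position: int, candidates: list[str]) -> dict[str, float]:
    """Rank candidate corrections for one token position by masked-LM probability.

    Simplification: each candidate is assumed to be a single word-piece in the vocabulary.
    """
    masked = tokens.copy()
    masked[position] = tok.mask_token
    enc = tok(" ".join(masked), return_tensors="pt")
    mask_index = (enc.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = mlm(**enc).logits[0, mask_index]        # (1, vocab_size)
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    return {c: probs[tok.convert_tokens_to_ids(c)].item() for c in candidates}

# e.g. choose between a wrongly inflected verb and its correction
print(score_corrections("jag har köpte en bil".split(), 2, ["köpte", "köpt"]))
```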
|
183 |
Sequence to Sequence Machine Learning for Automatic Program Repair. Svensson, Niclas; Vrabac, Damir. January 2019
Most previous program repair approaches, including machine learning-based ones, are only able to generate fixes for one-line bugs. This work aims to reveal whether such a system, built with state-of-the-art techniques, can make useful predictions when fed whole source files. To verify whether multi-line bugs can be fixed using a state-of-the-art solution, a system was created using existing Neural Machine Translation tools and data gathered from GitHub. The results of the finished system show, however, that the method used in this thesis is not sufficient to obtain satisfying results: no bug was successfully corrected by the system. Although the results are poor, there are still unexplored approaches that could improve the system's performance, one being narrowing the input data down to the method level of source code instead of the file level.
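A minimal sketch of the kind of data preparation such a pipeline implies, assuming a hypothetical directory layout of before/after file pairs mined from GitHub bug-fix commits. The length budget and whitespace token count are illustrative only; they also hint at why whole files strain a sequence-to-sequence model.

```python
from pathlib import Path

MAX_TOKENS = 512  # assumed encoder limit; whole files often exceed this, motivating method-level inputs

def load_pairs(root: Path) -> list[tuple[str, str]]:
    """Each sub-directory is assumed to hold before.java / after.java from one bug-fix commit."""
    pairs = []
    for commit_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        buggy = (commit_dir / "before.java").read_text(encoding="utf-8")
        fixed = (commit_dir / "after.java").read_text(encoding="utf-8")
        # crude whitespace tokenization, used here only to measure sequence length
        if len(buggy.split()) <= MAX_TOKENS and len(fixed.split()) <= MAX_TOKENS:
            pairs.append((buggy, fixed))
    return pairs

if __name__ == "__main__":
    pairs = load_pairs(Path("data/bugfix_commits"))  # hypothetical path
    print(f"{len(pairs)} file-level training pairs within the length budget")
```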
|
184 |
Practical Morphological Modeling: Insights from Dialectal Arabic. Erdmann, Alexander. January 2020
No description available.
|
185 |
Who is afraid of MT? Schmitt, Peter A. 12 August 2022
Machine translation (MT) is experiencing a renaissance. On the one hand, machine translation is becoming more common and is used on an ever larger scale; on the other hand, many translators have an almost hostile attitude towards machine translation programs and towards those translators who use MT as a tool. Either it is assumed that MT can never be as good as a human translation, or machine translation is viewed as the ultimate enemy of the translator and as a job killer. The article discusses, with various examples, the limits and possibilities of machine translation. It demonstrates that machine translation can be better than human translations, even when those were made by experienced professional translators. The paper also reports the results of a test showing that translation customers must expect even well-known and expensive translation service providers to deliver a quality that is on par with poor MT. Overall, it is argued that machine translation programs are no more and no less than an additional tool with which the translation industry can satisfy certain requirements. This abstract was, like the entire article, automatically translated into English.
|
186 |
Chinese Zero Pronoun Resolution with Neural Networks. Yang, Yifan. January 2022
In this thesis, I explored several neural network-based models to resolve zero pronouns in Chinese-English translation tasks. I reviewed previous work that treats resolution as a classification task, i.e., determining whether a candidate in a given set is the antecedent of a zero pronoun, and that can be categorized into rule-based and supervised methods. Existing methods either did not take the relationships between potential candidates into consideration or did not fully exploit attention over zero pronoun representations. In my experiments, I investigated attention-based neural network models, as well as their application in a reinforcement learning setting, building on an existing neural model. In particular, I integrated an LSTM-tree-based module into the attention network, which encodes syntactic information for the zero pronoun resolution task. In addition, I applied Bi-Attention layers between modules to interactively learn the syntactic and semantic alignment. Furthermore, I leveraged a reinforcement learning framework to fine-tune the proposed model, and experimented with different encoding strategies, i.e., FastText, BERT, and a trained RNN-based embedding. I found that the attention-based model with the LSTM-tree-based module, fine-tuned under the reinforcement learning framework and using FastText embeddings, achieves the best performance, superior to the baseline models. I evaluated model performance on different categories of resources, where FastText shows great potential in encoding web blog text.
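A much-reduced sketch of the candidate-ranking idea: an attention-style scorer compares a zero-pronoun context representation against candidate antecedent representations. It omits the LSTM-tree encoder, the Bi-Attention layers and the reinforcement learning fine-tuning; the dimensions and random inputs are assumptions.

```python
import torch
import torch.nn as nn

class CandidateScorer(nn.Module):
    """Minimal attention-style scorer: ranks antecedent candidates for one zero pronoun."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj_zp = nn.Linear(dim, dim)
        self.proj_cand = nn.Linear(dim, dim)

    def forward(self, zp_repr: torch.Tensor, cand_reprs: torch.Tensor) -> torch.Tensor:
        # zp_repr: (dim,) representation of the zero-pronoun gap's context
        # cand_reprs: (num_candidates, dim) representations of candidate antecedent spans
        query = self.proj_zp(zp_repr)                  # (dim,)
        keys = self.proj_cand(cand_reprs)              # (k, dim)
        scores = keys @ query / keys.shape[-1] ** 0.5  # scaled dot-product attention scores
        return torch.softmax(scores, dim=0)            # probability of each candidate being the antecedent

# Toy usage: random vectors standing in for FastText/BERT/RNN encodings
scorer = CandidateScorer(dim=300)
probs = scorer(torch.randn(300), torch.randn(4, 300))
print(probs, probs.argmax().item())
```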
|
187 |
Java Syntax Error Repair Using RoBERTa. Xiang, Ziyi. January 2022
Deep learning has achieved promising results for automatic program repair (APR). In this paper, we revisit this topic and propose an end-to-end approach, Classfix, to correct Java syntax errors. Classfix uses a RoBERTa classification model to localize the error, and a RoBERTa encoder-decoder model to repair the located buggy line. Our work introduces a new localization method that enables us to fix a program of arbitrary length. Our approach categorises errors into symbol errors and word errors. We conduct a large-scale experiment to evaluate Classfix: the results show that Classfix is able to repair 75.5% of symbol errors and 64.3% of word errors. In addition, Classfix achieves 97% and 84.7% accuracy in locating symbol errors and word errors, respectively.
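A hedged sketch of the localization stage in the spirit of Classfix: a RoBERTa sequence classifier scores each line of a Java file (paired with a small context window) and the highest-scoring line is passed on to the repair model. The checkpoint path, label convention and context size are hypothetical, not artifacts released with the thesis.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINT = "path/to/finetuned-roberta-line-classifier"  # hypothetical fine-tuned checkpoint
tok = AutoTokenizer.from_pretrained(CHECKPOINT)
clf = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT)  # label index 1 = "buggy" (assumed)

def locate_buggy_line(java_source: str, context: int = 2) -> int:
    """Return the index of the line most likely to contain the syntax error."""
    lines = java_source.splitlines()
    scores = []
    for i, line in enumerate(lines):
        # pair the candidate line with a few surrounding lines as context
        window = "\n".join(lines[max(0, i - context): i + context + 1])
        enc = tok(line, window, truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = clf(**enc).logits
        scores.append(torch.softmax(logits, dim=-1)[0, 1].item())
    return max(range(len(lines)), key=scores.__getitem__)
```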
|
188 |
Breaking Language Barriers: Enhancing Multilingual Representation for Sentence Alignment and Translation / 言語の壁を超える:文のアラインメントと翻訳のための多言語表現の改善. Mao, Zhuoyuan. 25 March 2024
Kyoto University / New-system doctoral program / Doctor of Informatics / Degree No. 甲第25420号 / 情博第858号 / 新制||情||144 (University Library) / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / Examining committee: Program-Specific Professor Sadao Kurohashi (chair), Professor Tatsuya Kawahara, Professor Hisashi Kashima / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
189 |
Arabic Text Recognition and Machine Translation. Alkhoury, Ihab. 13 July 2015
Research on Arabic Handwritten Text Recognition (HTR) and Arabic-English Machine Translation (MT) has usually been approached as two independent areas of study. However, creating one system that combines both areas, in order to generate an English translation out of images containing Arabic text, is still a very challenging task. This process can be interpreted as the translation of Arabic images. In this thesis, we propose a system that recognizes Arabic handwritten text images and translates the recognized text into English. This system is built from the combination of an HTR system and an MT system.
Regarding the HTR system, our work focuses on the use of Bernoulli Hidden Markov Models (BHMMs). BHMMs had already proven to work very well with Latin script; indeed, empirical results based on them were reported on well-known corpora such as IAM and RIMES. In this thesis, these results are extended to Arabic script, in particular to the well-known IfN/ENIT and NIST OpenHaRT databases for Arabic handwritten text.
The need to transcribe Arabic text is not limited to handwritten text; it also extends to printed text. Arabic printed text can be considered a simplified form of handwritten text. Thus, for this kind of text, we also propose Bernoulli HMMs. In addition, we propose to compare BHMMs with state-of-the-art technology based on neural networks.
A key idea that has proven to be very effective in this application of Bernoulli HMMs is the use of a sliding window of adequate width for feature extraction. This idea has allowed us to obtain very competitive results in the recognition of both Arabic handwriting and printed text. Indeed, a system based on it ranked first at the ICDAR 2011 Arabic recognition competition on the Arabic Printed Text Image (APTI) database. Moreover, this idea has been refined by using repositioning techniques for extracted windows, leading to further improvements in Arabic text recognition.
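The sliding-window feature extraction can be sketched as below: a fixed-width window is slid over a binarized text-line image, and each window is flattened into a binary observation vector for the Bernoulli HMM. The window width, padding and step size are illustrative assumptions rather than the exact configuration used in the thesis.

```python
import numpy as np

def sliding_window_features(binary_image: np.ndarray, width: int = 9) -> np.ndarray:
    """Binary feature vectors for a Bernoulli HMM from a binarized text-line image.

    binary_image: (H, W) array of 0/1 pixels; a window of `width` columns is slid one
    column at a time and flattened into one observation vector per position.
    """
    H, W = binary_image.shape
    pad = width // 2
    padded = np.pad(binary_image, ((0, 0), (pad, pad)), constant_values=0)
    # one (H * width)-dimensional binary observation per original column
    return np.stack([padded[:, t:t + width].reshape(-1) for t in range(W)])

# Toy usage: a random 30x120 "text line" yields a 120 x (30*9) observation sequence
obs = sliding_window_features((np.random.rand(30, 120) > 0.5).astype(np.uint8))
print(obs.shape)
```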
In the case of handwritten text, this refinement improved our system, which ranked first at the ICFHR 2010 Arabic handwriting recognition competition on IfN/ENIT. In the case of printed text, it led to an improved system that ranked second at the ICDAR 2013 Competition on Multi-font and Multi-size Digitally Represented Arabic Text on APTI. Furthermore, this refinement was used with neural network-based technology, which led to state-of-the-art results.
For machine translation, the system was based on the combination of three state-of-the-art statistical models: standard phrase-based models, hierarchical phrase-based models, and N-gram phrase-based models. This combination was done using the Recognizer Output Voting Error Reduction (ROVER) method. Finally, we propose three methods of combining HTR and MT to develop an Arabic image translation system. The system was evaluated on the NIST OpenHaRT database, where competitive results were obtained. / Alkhoury, I. (2015). Arabic Text Recognition and Machine Translation [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53029
|
190 |
Komparace výstupů z veřejně dostupných překladačů ve směru němčina-čeština / Comparing German-Czech translation output of publicly available machine translation engines. Řehořová, Klára. January 2022
This diploma thesis deals with the quality of the output of publicly available machine translation engines: Amazon Translate, Bing Translator, DeepL and Google Translate. The aim of the thesis was to qualitatively compare the machine translation engines and to test their capabilities on several types of texts: expressive, informative and operative. The analysis is carried out on translations of German texts into Czech. The thesis consists of two parts. In the theoretical part, we discuss the origin and development of machine translation, its types, and the machine translation engines that are the object of our research, and we present a two-stage model for evaluating translation quality. In the empirical part, the results of the analysis based on the models of K. Reiß and A. Torrens are presented. These results show that the machine translation engines can be ranked from the highest to the lowest level of output quality as follows: DeepL, Google Translate, Bing Translator and Amazon Translate. Furthermore, it also turns out that the error rate correlates with the creativity of the text.
|