91

Problems and Issues in Machine Translation: the Case of Translation from English to Lithuanian / Mašininio Vertimo Problemos ir Klausimai Vertimų iš Anglų Kalbos į Lietuvių Kalbą Pavyzdžiu

Stalmačenkaitė, Viktorija 27 June 2013 (has links)
This bachelor thesis focuses on the problems and issues of machine translation when translating texts of different genres. The theoretical part of the paper covers the notion of machine translation (MT), its most crucial mistakes, and the notion of text genres in the English language. The practical part consists of an analysis of 5 texts of different genres, pointing out the most severe mistakes detected in the MT output. The conclusions drawn from the analysis showed that MT requires further improvement and more thorough investigation.
92

Sintaktiese herrangskikking as voorprosessering in die ontwikkeling van Engels na Afrikaanse statistiese masjienvertaalsisteem / Syntactic Reordering as Pre-processing in the Development of an English-to-Afrikaans Statistical Machine Translation System / Marissa Griesel

Griesel, Marissa January 2011 (has links)
Statistical machine translation into any of the resource-scarce South African languages generally results in low-quality output. Large amounts of training data are required to generate output of such a standard that it can ease the work of human translators when incorporated into a translation environment. Sufficiently large corpora often do not exist, and other techniques must be researched to improve the quality of the output. One method from the international literature that has yielded good improvements in output quality applies syntactic reordering as pre-processing. This pre-processing aims to simplify the decoding process, as fewer changes need to be made during translation at this stage. Training also benefits, since the automatic word alignments can be drawn more easily when the word orders of the source and target languages are more similar. The pre-processing is applied to the source-language training data as well as to the text that is to be translated. It takes the form of rules that recognise patterns in the tags and adapt the structure accordingly. These tags are assigned to the source-language side of the aligned parallel corpus with a syntactic analyser. In this research project, the technique is adapted for translation from English to Afrikaans and deals with the reordering of verbs, modals, the past tense construct, constructions with “to”, and negation. The goal of these rules is to change the English (source language) structure to better resemble the Afrikaans (target language) structure. A thorough analysis of the output of the baseline system serves as the starting point. The errors that occur in the output are divided into categories, and each of the underlying constructs for English and Afrikaans is examined. This analysis of the output and the literature on the syntax of the two languages are combined to formulate the linguistically motivated rules.
The module that performs the pre-processing is evaluated in terms of precision and recall, and these two measures are then combined in the F-score, which gives a single number by which the module can be assessed. All three measures compare well to international standards. Furthermore, a comparison is made between the system enriched by the pre-processing module and a baseline system to which no extra processing is applied. This comparison is done by automatically calculating two metrics (BLEU and NIST scores), and it shows very positive results. When evaluating the entire document, an increase in the BLEU score from 0.4968 to 0.5741 (7.7%) and in the NIST score from 8.4515 to 9.4905 (10.4%) is reported. / Thesis (M.A. (Applied Language and Literary Studies))--North-West University, Potchefstroom Campus, 2011.
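Pattern-based reordering of the kind described above can be sketched as rules over POS-tag sequences. The tag set, rule format, and the example rule below are simplified illustrations, not the thesis's actual rule set:

```python
def reorder(tagged, rules):
    """Apply tag-pattern reordering rules to a POS-tagged sentence.

    `tagged` is a list of (word, tag) pairs; each rule pairs a tag pattern
    with the new order of the matched window's positions.
    """
    tokens = list(tagged)
    for pattern, new_order in rules:
        n = len(pattern)
        i = 0
        while i <= len(tokens) - n:
            window = tokens[i:i + n]
            if [tag for _, tag in window] == list(pattern):
                tokens[i:i + n] = [window[j] for j in new_order]
                i += n  # skip past the rewritten window
            else:
                i += 1
    return tokens

# Hypothetical rule: move an English infinitive that follows a modal to
# clause-final position, mimicking Afrikaans verb-final order
# ("will read the book" -> "will the book read", cf. "sal die boek lees").
RULES = [(("MD", "VB", "DT", "NN"), (0, 2, 3, 1))]
```

Applied to `[("I","PRP"), ("will","MD"), ("read","VB"), ("the","DT"), ("book","NN")]`, the rule yields the Afrikaans-like order "I will the book read".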
94

Lexikální a tvaroslovné varianty ve strojovém překladu / Lexical and Morphological Choices in Machine Translation

Tamchyna, Aleš January 2017 (has links)
This work focuses on two problems in machine translation: lexical choice and target-side morphology. The first problem is the correct transfer of meaning from the source language to the target language. The second problem, which is mainly relevant for morphologically rich target languages, is then the choice of the correct surface form of each target lexeme. We work with these problems within the framework of phrase-based machine translation and we propose a discriminative model of translation which utilizes both source and target context information and which uses rich linguistically motivated features. We show how our model addresses specific weaknesses of standard phrase-based systems and that it provides consistent improvements of translation quality across a broad range of experiments. Apart from our main contribution, we also provide a number of experimental evaluations, analyses and manual annotation experiments, mostly related to English-Czech translation.
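A maximally simplified sketch of a discriminative lexical-choice model in this spirit: candidate target phrases are scored by weighted source-context features and normalised with a softmax. The feature template and weights are illustrative assumptions, not the model from the thesis:

```python
import math

def score_candidates(source_ctx, candidates, weights):
    """Score target-phrase candidates with weighted context features,
    then normalise the scores into a probability distribution."""
    scores = {}
    for cand in candidates:
        # Hypothetical feature template: one indicator per (context word, candidate) pair.
        feats = [f"src={w}^tgt={cand}" for w in source_ctx]
        scores[cand] = sum(weights.get(f, 0.0) for f in feats)
    z = sum(math.exp(s) for s in scores.values())  # softmax normaliser
    return {c: math.exp(s) / z for c, s in scores.items()}
```

With a weight favouring the Czech "břeh" (river bank) in a "river" context, the model prefers it over "banka" (financial bank), illustrating context-sensitive lexical choice.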
95

Advanced Quality Measures for Speech Translation / Mesures de qualité avancées pour la traduction de la parole

Le, Ngoc Tien 29 January 2018 (has links)
The main aim of this thesis is to investigate the automatic quality assessment of spoken language translation (SLT), called Confidence Estimation (CE) for SLT. Due to several factors, SLT output of unsatisfactory quality might cause various issues for the target users. Therefore, it is useful to know how confident we can be in the tokens of the hypothesis. The first contribution of this thesis is LIG-WCE, a customizable, flexible framework and portable platform for Word-level Confidence Estimation (WCE) of SLT. WCE for SLT is a relatively new task, defined and formalized as a sequence-labelling problem in which each word in the SLT hypothesis is tagged as good or bad according to a large feature set. We propose several word confidence estimators (WCE) based on our automatic evaluation of transcription (ASR) quality, translation (MT) quality, or both (combined/joint ASR+MT). This research work is possible because we built a specific corpus, which contains 6.7k utterances for which a quintuplet is built, containing: ASR output, verbatim transcript, text translation, speech translation and post-edition of the translation.
The conclusion of our multiple experiments using joint ASR and MT features for WCE is that MT features remain the most influential, while ASR features can bring interesting complementary information. As another contribution, we propose two methods to disentangle ASR errors and MT errors, in which each word in the SLT hypothesis is tagged as good, asr_error or mt_error. We thus explore the contributions of WCE for SLT in finding the source of SLT errors. Furthermore, we propose a simple extension of the WER metric, called Word Error Rate with Embeddings (WER-E), in order to penalize substitution errors differently according to their context using word embeddings. For instance, the proposed metric should catch near matches (mainly morphological variants) and penalize less this kind of error, which has a more limited impact on translation performance. Our experiments show that the correlation of the new metric with SLT performance is better than that of WER. Oracle experiments are also conducted and show the ability of our metric to find better hypotheses (to be translated) in the ASR N-best list. Finally, a preliminary experiment where ASR tuning is based on our new metric shows encouraging results. To conclude, we have proposed several prominent strategies for CE of SLT that could have a positive impact on several applications for SLT. Robust quality estimators for SLT can be used for re-scoring speech translation graphs or for providing feedback to the user in interactive speech translation or computer-assisted speech-to-text scenarios. Keywords: Quality estimation, Word confidence estimation (WCE), Spoken Language Translation (SLT), Joint Features, Feature Selection.
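The standard WER that WER-E extends is computed by dynamic-programming edit distance over word sequences; a minimal sketch follows (the thesis's WER-E would replace the fixed substitution cost of 1 with an embedding-based cost, as noted in the comment):

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j].
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            # WER-E would scale this substitution cost by embedding distance,
            # so near matches (e.g. morphological variants) cost less than 1.
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(r)][len(h)] / len(r)
```

For example, one substitution in a three-word reference gives a WER of 1/3.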
96

On the application of focused crawling for statistical machine translation domain adaptation

Laranjeira, Bruno Rezende January 2015 (has links)
Statistical Machine Translation (SMT) is highly dependent on the availability of parallel corpora for training. However, such resources can be hard to find, especially when dealing with under-resourced languages or very specific domains, such as dermatology. One way to work around this situation is to use comparable corpora, which are much more abundant resources. One way of acquiring comparable corpora is to apply Focused Crawling (FC) algorithms. In this work we propose novel approaches to FC, some based on n-grams and others on the expressive power of multiword expressions.
We also assess the viability of using FC to perform domain adaptation for generic SMT systems, and whether there is a correlation between the quality of the FC algorithms and that of the SMT systems that can be built from the collected data. Results indicate that FC is indeed a good way of acquiring comparable corpora for SMT domain adaptation, and that there is a correlation between the quality of the two processes.
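One way an n-gram-based focused crawler can decide which pages to follow is by scoring each page's similarity to a domain profile; the character-trigram cosine scorer below is an illustrative assumption, far simpler than the approaches evaluated in the thesis:

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram profile of a text (case-folded)."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def relevance(page_text, domain_profile, n=3):
    """Cosine similarity between a page's n-gram counts and a domain profile.

    A focused crawler would enqueue links from pages scoring above a threshold.
    """
    page = char_ngrams(page_text, n)
    num = sum(c * domain_profile.get(g, 0) for g, c in page.items())
    den = (sum(c * c for c in page.values()) ** 0.5) * \
          (sum(c * c for c in domain_profile.values()) ** 0.5)
    return num / den if den else 0.0
```

An on-topic page (sharing vocabulary with the domain seed text) scores higher than an off-topic one, which is the signal the crawler's frontier ordering would use.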
97

Obohacování neuronového strojového překladu technikou sdíleného trénování na více úlohách / Enriching Neural MT through Multi-Task Training

Macháček, Dominik January 2018 (has links)
The Transformer model is a recent, fast and powerful architecture for neural machine translation. We experiment with multi-task learning to enrich the source side of the Transformer with linguistic resources, providing it with additional information for learning linguistic and world knowledge better. We analyze two approaches: a basic shared model with multi-tasking through simple data manipulation, and multi-decoder models. We test joint models for machine translation (MT) with POS tagging, dependency parsing and named entity recognition as the secondary tasks. We evaluate them in comparison with the baseline and with dummy, linguistically unrelated tasks. We focus primarily on the standard-size data setting for German-to-Czech MT. Although our enriched models did not significantly outperform the baseline, we empirically document that (i) the MT models benefit from the secondary linguistic tasks; (ii) considering the amount of training data consumed, the multi-tasking models learn faster; (iii) in low-resource conditions, multi-tasking significantly improves the model; (iv) the more fine-grained the annotation of the source used as the secondary task, the higher the benefit to MT.
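The "simple data manipulation" approach to multi-tasking can be sketched as merging all tasks into one training stream, with a task token prefixed to each source so a single shared model learns when to translate and when to tag. The token names and toy data below are hypothetical:

```python
def make_multitask_corpus(mt_pairs, tag_pairs):
    """Merge MT and POS-tagging examples into one training stream.

    Each source sentence is prefixed with a task token so the shared
    encoder-decoder can tell the tasks apart at training and test time.
    """
    corpus = []
    for src, tgt in mt_pairs:
        corpus.append(("<translate> " + src, tgt))
    for src, tags in tag_pairs:
        corpus.append(("<tag> " + src, tags))
    return corpus
```

The merged corpus is then shuffled and fed to a single model; at inference, prefixing `<translate>` selects the MT behaviour.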
99

Translationese and Swedish-English Statistical Machine Translation

Joelsson, Jakob January 2016 (has links)
This thesis investigates how well machine-learned classifiers can identify translated text, and the effect translationese may have on Statistical Machine Translation -- all in a Swedish-to-English, and reverse, context. Translationese is a term used to describe the dialect of a target language that is produced when a source text is translated. The systems trained for this thesis are SVM-based classifiers for identifying translationese, as well as translation and language models for Statistical Machine Translation. The classifiers successfully identified translationese in relation to non-translated text and, to some extent, also which source language the texts were translated from. In the SMT experiments, variation of the translation model was what affected the results the most in the BLEU evaluation. Systems configured with non-translated source text and translationese target text performed better than their reversed counterparts. The language-model experiments showed that models trained on known translationese and on classified translationese performed better than those trained on known non-translated text, though classified translationese did not perform as well as known translationese. Ultimately, the thesis shows that translationese can be identified by machine-learned classifiers and may affect the results of SMT systems.
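The thesis uses SVM classifiers; as a dependency-free stand-in, the sketch below trains a simple perceptron on function-word frequencies, one family of surface cues commonly used for translationese detection. The marker-word list and toy data are illustrative assumptions:

```python
def features(text, markers=("of", "the", "which", "that", "been")):
    """Relative frequency of common function words, a cue for translationese."""
    toks = text.lower().split()
    return [toks.count(m) / max(len(toks), 1) for m in markers]

def predict(x, w, b):
    """Linear decision rule: +1 (translationese) if the score is positive."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def train(samples, labels, epochs=50, lr=0.1):
    """Perceptron training; labels are -1 (original) or +1 (translated)."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            if predict(x, w, b) != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b
```

On toy data where "translated" texts overuse "of the" constructions, the classifier separates the two classes after a few epochs.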
100

Pronoun translation between English and Icelandic

Odd, Jakobsson January 2018 (has links)
A problem in machine translation is how to handle pronouns, since languages use them differently, for example in anaphoric reference. This essay examines what happens to the English third-person pronouns he, she, and it when translated into Icelandic. Parallel corpora were prepared by tokenisation, and the machine translation technique of word alignment was then applied to the corpus. The results show that when a pronoun is used to refer to something outside the sentence (extra-sentential reference), major problems arise. Another problem encountered was the difference in deictic strength between pronouns in English and Icelandic. One conclusion that can be drawn is that more research is needed, as more reliable ways of handling pronouns in translation are required.
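Word alignment of the kind applied here is classically bootstrapped with IBM Model 1 lexical translation probabilities, estimated by expectation-maximization over a sentence-aligned corpus. A minimal sketch on a toy English-Icelandic pair follows (real toolkits such as GIZA++ or fast_align do considerably more):

```python
from collections import defaultdict

def ibm_model1(pairs, iterations=10):
    """A few EM iterations of IBM Model 1, estimating t(f | e).

    `pairs` is a list of (english_tokens, foreign_tokens) sentence pairs;
    t[(f, e)] starts uniform and concentrates on co-occurring words.
    """
    t = defaultdict(lambda: 1.0)  # uniform-ish initial translation table
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for e_sent, f_sent in pairs:  # E-step: expected alignment counts
            for f in f_sent:
                z = sum(t[(f, e)] for e in e_sent)
                for e in e_sent:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        for (f, e), c in count.items():  # M-step: renormalise
            t[(f, e)] = c / total[e]
    return t
```

On two toy sentence pairs sharing "he" / "hann", EM correctly concentrates probability on the co-occurring word pairs, which is exactly the signal a pronoun-alignment study relies on.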
