  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Compound Processing for Phrase-Based Statistical Machine Translation

Stymne, Sara January 2009 (has links)
In this thesis I explore how compound processing can be used to improve phrase-based statistical machine translation (PBSMT) between English and German/Swedish. Both German and Swedish generally use closed compounds, which are written as one word without spaces or other indicators of word boundaries. Compounding is both common and productive, which makes it problematic for PBSMT, mainly due to sparse data problems.

The adopted strategy for compound processing is to split compounds into their component parts before training and translation. For translation into Swedish and German the parts are merged after translation. I investigate the effect of different splitting algorithms for translation between English and German, and of different merging algorithms for German. I also apply these methods to a different language pair, English–Swedish. Overall the studies show that compound processing is useful, especially for translation from English into German or Swedish. But there are improvements for translation into English as well, such as a reduction of unknown words.

I show that for translation between English and German different splitting algorithms work best for different translation directions. I also design and evaluate a novel merging algorithm based on part-of-speech matching, which outperforms previous methods for compound merging, showing the need for information that is carried through the translation process, rather than only external knowledge sources such as word lists. Most of the methods for compound processing were originally developed for German. I show that these methods can be applied to Swedish as well, with similar results.
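Compound splitting of the kind described above is commonly implemented as a corpus-frequency search over candidate decompositions, in the spirit of Koehn and Knight (2003). The sketch below illustrates that general idea only, not the thesis's exact algorithms; the toy corpus counts and the filler letters are hypothetical.

```python
# Minimal frequency-based compound splitter (illustrative, single split point).
CORPUS_COUNTS = {          # word -> frequency in a hypothetical training corpus
    "bord": 120, "tennis": 80, "boll": 200, "bordtennis": 5,
}
FILLERS = ("", "s", "es")  # common German/Swedish linking elements (assumed)


def split_compound(word, counts=CORPUS_COUNTS, min_len=3):
    """Return the two-part decomposition whose parts have the highest
    geometric-mean corpus frequency; fall back to the unsplit word."""
    best_parts, best_score = [word], counts.get(word, 1)
    for i in range(min_len, len(word) - min_len + 1):
        left, right = word[:i], word[i:]
        if right not in counts:
            continue
        for filler in FILLERS:
            if filler and not left.endswith(filler):
                continue
            stem = left[:len(left) - len(filler)] if filler else left
            if stem in counts:
                score = (counts[stem] * counts[right]) ** 0.5
                if score > best_score:
                    best_parts, best_score = [stem, right], score
    return best_parts


print(split_compound("bordtennisboll"))  # hypothetical Swedish compound
```

Real splitters recurse to handle compounds with three or more parts and may use part-of-speech information, as the merging work in this thesis does, to decide how parts are rejoined after translation.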
52

Rule-based Machine Translation in Limited Domain for PDAs

Chiang, Shin-Chian 10 September 2009 (has links)
In this thesis, we implement a rule-based machine translation (MT) system for Personal Digital Assistants (PDAs). A rule-based MT system generally has three modules: analysis, transfer, and generation. The grammars used in our system are a lexicalized tree automata-based grammar (LTA) and a synchronous lexicalized tree adjoining grammar (SLTAG). LTA is used for analysis, and SLTAG is used for transfer and generation. We adapt a previously developed parser to PDAs to serve as the parser in the analysis module. The SLTAG parser in the transfer module searches the source parse tree for possible source sides of SLTAG rules. The target parse tree is then grown, and each hypothesis is scored based on a language model and rule probabilities. To limit the amount of computation, the generation step prunes hypotheses that fall below a threshold. Compared with other rule-based MT systems, ours builds rules automatically and uses a flexible rule type, for which the SLTAG parser is specifically implemented. In our experiments, the Chinese-English BTEC corpus serves as training and test data, on which we obtain a BLEU score of 17%.
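The scoring and pruning step mentioned above can be illustrated with a small sketch: each hypothesis receives a weighted combination of language-model and rule log-probabilities, and hypotheses below a threshold are dropped. The weights, threshold, and toy hypotheses are assumptions for illustration, not values from the system.

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    tokens: list          # partial target-side output
    lm_logprob: float     # log probability under a target language model
    rule_logprob: float   # accumulated log probability of applied SLTAG rules


def score(h, lm_weight=0.6, rule_weight=0.4):
    """Weighted log-linear combination of LM and rule scores (hypothetical weights)."""
    return lm_weight * h.lm_logprob + rule_weight * h.rule_logprob


def prune(hypotheses, threshold=-10.0):
    """Keep only hypotheses whose combined score exceeds a fixed threshold,
    mirroring the pruning step described above (threshold is illustrative)."""
    return [h for h in hypotheses if score(h) > threshold]


hyps = [
    Hypothesis(["I", "eat", "rice"], lm_logprob=-4.2, rule_logprob=-3.1),
    Hypothesis(["I", "rice", "eat"], lm_logprob=-9.8, rule_logprob=-3.1),
    Hypothesis(["me", "eat", "rice"], lm_logprob=-20.0, rule_logprob=-6.0),
]
for h in prune(hyps):
    print(" ".join(h.tokens), round(score(h), 2))
```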
53

Discourse in Statistical Machine Translation

Hardmeier, Christian January 2014 (has links)
This thesis addresses the technical and linguistic aspects of discourse-level processing in phrase-based statistical machine translation (SMT). Connected texts can have complex text-level linguistic dependencies across sentences that must be preserved in translation. However, the models and algorithms of SMT are pervaded by locality assumptions. In a standard SMT setup, no model has more complex dependencies than an n-gram model. The popular stack decoding algorithm exploits this fact to implement efficient search with a dynamic programming technique. This is a serious technical obstacle to discourse-level modelling in SMT. From a technical viewpoint, the main contribution of our work is the development of a document-level decoder based on stochastic local search that translates a complete document as a single unit. The decoder starts with an initial translation of the document, created randomly or by running a stack decoder, and refines it with a sequence of elementary operations. After each step, the current translation is scored by a set of feature models with access to the full document context and its translation. We demonstrate the viability of this decoding approach for different document-level models. From a linguistic viewpoint, we focus on the problem of translating pronominal anaphora. After investigating the properties and challenges of the pronoun translation task both theoretically and by studying corpus data, a neural network model for cross-lingual pronoun prediction is presented. This network jointly performs anaphora resolution and pronoun prediction and is trained on bilingual corpus data only, with no need for manual coreference annotations. The network is then integrated as a feature model in the document-level SMT decoder and tested in an English–French SMT system. We show that the pronoun prediction network model more adequately represents discourse-level dependencies for less frequent pronouns than a simpler maximum entropy baseline with separate coreference resolution. By creating a framework for experimenting with discourse-level features in SMT, this work contributes to a long-term perspective that strives for more thorough modelling of complex linguistic phenomena in translation. Our results on pronoun translation shed new light on a challenging, but essential problem in machine translation that is as yet unsolved.
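The document-level decoding strategy described above, starting from an initial translation and refining it with elementary operations scored by document-wide feature models, is in essence a stochastic local search (hill-climbing) loop. The sketch below shows only that control structure; the operation and feature model are hypothetical placeholders, not the decoder's actual components.

```python
import random


def decode_document(initial_translation, operations, feature_models,
                    steps=1000, seed=0):
    """Hill-climbing over a whole document: apply a random elementary
    operation and keep the result only if the document-level score
    does not decrease."""
    rng = random.Random(seed)
    current = initial_translation
    current_score = sum(m(current) for m in feature_models)
    for _ in range(steps):
        op = rng.choice(operations)
        candidate = op(current, rng)          # returns a modified copy
        cand_score = sum(m(candidate) for m in feature_models)
        if cand_score >= current_score:       # accept improving or equal moves
            current, current_score = candidate, cand_score
    return current


# --- toy placeholders, purely illustrative ---------------------------------
def swap_two_sentences(doc, rng):
    doc = list(doc)
    if len(doc) > 1:
        i, j = rng.sample(range(len(doc)), 2)
        doc[i], doc[j] = doc[j], doc[i]
    return doc


def length_model(doc):
    # dummy document-level feature: prefer shorter documents overall
    return -sum(len(s.split()) for s in doc)


doc = ["this is a draft translation .", "it can still be improved ."]
print(decode_document(doc, [swap_two_sentences], [length_model], steps=10))
```

In the actual decoder the feature models have access to the full document context, which is what allows discourse-level phenomena such as pronominal anaphora to be scored across sentence boundaries.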
54

Dataselektering en -manipulering vir statistiese Engels–Afrikaanse masjienvertaling (Data selection and manipulation for statistical English–Afrikaans machine translation) / McKellar C.A.

McKellar, Cindy. January 2011 (has links)
The success of any machine translation system depends largely on the quantity and quality of the available training data. A system trained on faulty or low-quality data will naturally produce poorer output than one trained on correct or high-quality data. For resource-scarce languages, where little data is available and data may have to be translated specifically to create parallel corpora that can serve as training data, it is therefore very important that the data selected for translation is chosen so that it includes the text segments that will contribute the most value to the machine translation system. In such a case it is also extremely important to use the available data as effectively as possible. This study investigates methods for selecting training data with the aim of training an optimal machine translation system with limited resources. Attention is also given to the possibility of increasing the weights of certain parts of the training data so as to emphasise the data that contributes the most value to the machine translation system. Although this study focuses specifically on data selection and manipulation methods for the English–Afrikaans language pair, the methods could also be applied to other language pairs. The evaluation process indicates that both the data selection methods and the adjustment of data weights have a positive impact on the quality of the resulting machine translation system. The final system, trained with a combination of the different methods, shows an increase of 2.0001 in the NIST score and of 0.2039 in the BLEU score. / Thesis (M.A. (Applied Language and Literary Studies))--North-West University, Potchefstroom Campus, 2011.
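Data selection of the kind described here is often implemented by ranking candidate sentences against an in-domain model, for example with the cross-entropy-difference criterion of Moore and Lewis (2010). The sketch below illustrates that general ranking idea with smoothed unigram models only; it is not the selection method used in the thesis, and the toy corpora are invented.

```python
import math
from collections import Counter


def unigram_logprob(sentence, counts, total, vocab_size):
    """Add-one smoothed unigram log-probability per token."""
    toks = sentence.split()
    lp = sum(math.log((counts[t] + 1) / (total + vocab_size)) for t in toks)
    return lp / max(len(toks), 1)


def rank_for_selection(candidates, in_domain, general):
    """Rank candidates by cross-entropy difference: prefer sentences that
    look like the in-domain corpus but unlike the general corpus."""
    def model(corpus):
        c = Counter(t for s in corpus for t in s.split())
        return c, sum(c.values()), len(c)

    in_c, in_tot, in_v = model(in_domain)
    gen_c, gen_tot, gen_v = model(general)
    scored = [(unigram_logprob(s, in_c, in_tot, in_v)
               - unigram_logprob(s, gen_c, gen_tot, gen_v), s)
              for s in candidates]
    return [s for _, s in sorted(scored, reverse=True)]


in_domain = ["the patient received treatment", "clinical trial results"]
general = ["the weather is nice today", "football match results"]
candidates = ["patient results improved", "the match was nice"]
print(rank_for_selection(candidates, in_domain, general))
```

The weighting idea in the thesis is complementary: instead of (or in addition to) discarding low-value segments, high-value segments are given more weight during training.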
56

Towards a Better Human-Machine Collaboration in Statistical Translation : Example of Systematic Medical Reviews / Vers une meilleure collaboration humain-machine en traduction statistique : l'exemple des revues systématiques en médecine

Ive, Julia 01 September 2017 (has links)
La traduction automatique (TA) a connu des progrès significatifs ces dernières années et continue de s'améliorer. La TA est utilisée aujourd'hui avec succès dans de nombreux contextes, y compris les environnements professionnels de traduction et les scénarios de production. Cependant, le processus de traduction requiert souvent des connaissances plus larges qu'extraites de corpus parallèles. Étant donné qu'une injection de connaissances humaines dans la TA est nécessaire, l'un des moyens possibles d'améliorer TA est d'assurer une collaboration optimisée entre l'humain et la machine. À cette fin, de nombreuses questions sont posées pour la recherche en TA: Comment détecter les passages où une aide humaine devrait être proposée ? Comment faire pour que les machines exploitent les connaissances humaines obtenues afin d'améliorer leurs sorties ? Enfin, comment optimiser l'échange: minimiser l'effort humain impliqué et maximiser la qualité de TA? Diverses solutions sont possibles selon les scénarios de traductions considérés. Dans cette thèse, nous avons choisi de nous concentrer sur la pré-édition, une intervention humaine en TA qui a lieu ex-ante, par opposition à la post-édition, où l'intervention humaine qui déroule ex-post. En particulier, nous étudions des scénarios de pré-édition ciblés où l'humain doit fournir des traductions pour des segments sources difficiles à traduire et choisis avec soin. Les scénarios de la pré-édition impliquant la pré-traduction restent étonnamment peu étudiés dans la communauté. Cependant, ces scénarios peuvent offrir une série d'avantages relativement, notamment, à des scénarios de post-édition non ciblés, tels que : la réduction de la charge cognitive requise pour analyser des phrases mal traduites; davantage de contrôle sur le processus; une possibilité que la machine exploite de nouvelles connaissances pour améliorer la traduction automatique au voisinage des segments pré-traduits, etc. De plus, dans un contexte multilingue, des difficultés communes peuvent être résolues simultanément pour de nombreuses langues. De tels scénarios s'adaptent donc parfaitement aux contextes de production standard, où l'un des principaux objectifs est de réduire le coût de l’intervention humaine et où les traductions sont généralement effectuées à partir d'une langue vers plusieurs langues à la fois. Dans ce contexte, nous nous concentrons sur la TA de revues systématiques en médecine. En considérant cet exemple, nous proposons une méthodologie indépendante du système pour la détection des difficultés de traduction. Nous définissons la notion de difficulté de traduction de la manière suivante : les segments difficiles à traduire sont des segments pour lesquels un système de TA fait des prédictions erronées. Nous formulons le problème comme un problème de classification binaire et montrons que, en utilisant cette méthodologie, les difficultés peuvent être détectées de manière fiable sans avoir accès à des informations spécifiques au système. Nous montrons que dans un contexte multilingue, les difficultés communes sont rares. Une perspective plus prometteuse en vue d'améliorer la qualité réside dans des approches dans lesquelles les traductions dans les différentes langues s’aident mutuellement à résoudre leurs difficultés. Nous intégrons les résultats de notre procédure de détection des difficultés dans un protocole de pré-édition qui permet de résoudre ces difficultés par pré-traduction. 
Nous évaluons le protocole dans un cadre simulé et montrons que la pré-traduction peut être à la fois utile pour améliorer la qualité de la TA et réaliste en termes d'implication des efforts humains. En outre, les effets indirects sont significatifs. Nous évaluons également notre protocole dans un contexte préliminaire impliquant des interventions humaines. Les résultats de ces expériences pilotes confirment les résultats obtenus dans le cadre simulé et ouvrent des perspectives encourageantes pour des tests ultérieures. / Machine Translation (MT) has made significant progress in the recent years and continues to improve. Today, MT is successfully used in many contexts, including professional translation environments and production scenarios. However, the translation process requires knowledge larger in scope than what can be captured by machines even from a large quantity of translated texts. Since injecting human knowledge into MT is required, one of the potential ways to improve MT is to ensure an optimized human-machine collaboration. To this end, many questions are asked by modern research in MT: How to detect where human assistance should be proposed? How to make machines exploit the obtained human knowledge so that they could improve their output? And, not less importantly, how to optimize the exchange so as to minimize the human effort involved and maximize the quality of MT output? Various solutions have been proposed depending on concrete implementations of the MT process. In this thesis we have chosen to focus on Pre-Edition (PRE), corresponding to a type of human intervention into MT that takes place ex-ante, as opposed to Post-Edition (PE), where human intervention takes place ex-post. In particular, we study targeted PRE scenarios where the human is to provide translations for carefully chosen, difficult-to-translate, source segments. Targeted PRE scenarios involving pre-translation remain surprisingly understudied in the MT community. However, such PRE scenarios can offer a series of advantages as compared, for instance, to non-targeted PE scenarios: i.a., the reduction of the cognitive load required to analyze poorly translated sentences; more control over the translation process; a possibility that the machine will exploit new knowledge to improve the automatic translation of neighboring words, etc. Moreover, in a multilingual setting common difficulties can be resolved at one time and for many languages. Such scenarios thus perfectly fit standard production contexts, where one of the main goals is to reduce the cost of PE and where translations are commonly performed simultaneously from one language into many languages. A representative production context - an automatic translation of systematic medical reviews - is the focus of this work. Given this representative context, we propose a system-independent methodology for translation difficulty detection. We define the notion of translation difficulty as related to translation quality: difficult-to-translate segments are segments for which an MT system makes erroneous predictions. We cast the problem of difficulty detection as a binary classification problem and demonstrate that, using this methodology, difficulties can be reliably detected without access to system-specific information. We show that in a multilingual setting common difficulties are rare, and a better perspective of quality improvement lies in approaches where translations into different languages will help each other in the resolution of difficulties. 
We integrate the results of our difficulty detection procedure into a PRE protocol that enables resolution of those difficulties by pre-translation. We assess the protocol in a simulated setting and show that pre-translation as a type of PRE can be both useful to improve MT quality and realistic in terms of the human effort involved. Moreover, indirect effects are found to be genuine. We also assess the protocol in a preliminary real-life setting. Results of those pilot experiments confirm the results in the simulated setting and suggest an encouraging beginning of the test phase.
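Casting difficulty detection as binary classification, as described above, can be illustrated with a small, system-independent classifier over simple surface features of source segments. The features, vocabulary, and labels below are toy assumptions, not those used in the thesis.

```python
# Illustrative difficulty classifier for source segments (toy data).
from sklearn.linear_model import LogisticRegression


def features(segment, known_vocab):
    toks = segment.lower().split()
    oov = sum(1 for t in toks if t not in known_vocab)
    return [len(toks),                                      # segment length
            oov / max(len(toks), 1),                        # out-of-vocabulary rate
            sum(len(t) for t in toks) / max(len(toks), 1)]  # mean token length


known_vocab = {"the", "patients", "were", "treated", "with", "placebo"}
train_segments = [
    ("the patients were treated with placebo", 0),           # easy
    ("randomised double-blind crossover trial design", 1),   # difficult
    ("patients were treated", 0),
    ("heterogeneity of pharmacokinetic endpoints", 1),
]
X = [features(s, known_vocab) for s, _ in train_segments]
y = [label for _, label in train_segments]

clf = LogisticRegression().fit(X, y)
print(clf.predict([features("adverse pharmacokinetic outcomes", known_vocab)]))
```

Segments predicted as difficult would then be routed to the human for pre-translation under the protocol described in the abstract.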
57

Factored neural machine translation / Traduction automatique neuronale factorisée

García Martínez, Mercedes 27 March 2018 (has links)
La diversité des langues complexifie la tâche de communication entre les humains à travers les différentes cultures. La traduction automatique est un moyen rapide et peu coûteux pour simplifier la communication interculturelle. Récemment, la Traduction Automatique Neuronale (NMT) a atteint des résultats impressionnants. Cette thèse s'intéresse à la Traduction Automatique Neuronale Factorisée (FNMT), qui repose sur l'idée d'utiliser la morphologie et la décomposition grammaticale des mots (lemmes et facteurs linguistiques) dans la langue cible. Cette architecture aborde deux défis bien connus auxquels les systèmes NMT font face. Premièrement, la limitation de la taille du vocabulaire cible, conséquence de la fonction softmax, qui nécessite un calcul coûteux à la couche de sortie du réseau neuronal, conduisant à un taux élevé de mots inconnus. Deuxièmement, le manque de données adéquates lorsque nous sommes confrontés à un domaine spécifique ou à une langue morphologiquement riche. Avec l'architecture FNMT, toutes les inflexions des mots sont prises en compte et un vocabulaire plus grand est modélisé tout en gardant un coût de calcul similaire. De plus, de nouveaux mots non rencontrés dans les données d'entraînement peuvent être générés. Dans ce travail, j'ai développé différentes architectures FNMT en utilisant diverses dépendances entre les lemmes et les facteurs. En outre, j'ai amélioré la représentation de la langue source avec des facteurs. Le modèle FNMT est évalué sur différentes langues, dont les plus riches morphologiquement. Les modèles à l'état de l'art, dont certains utilisant le Byte Pair Encoding (BPE), sont comparés avec le modèle FNMT en utilisant des données d'entraînement de petite et de grande taille. Nous avons constaté que les modèles utilisant les facteurs sont plus robustes dans des conditions d'entraînement à faibles ressources. Le FNMT a été combiné avec des unités BPE, permettant une amélioration par rapport au modèle FNMT entraîné avec des données volumineuses. Nous avons expérimenté avec différents domaines et nous avons montré des améliorations en utilisant les modèles FNMT. De plus, la justesse de la morphologie est mesurée à l'aide d'un ensemble de tests spéciaux, montrant l'avantage de modéliser explicitement la morphologie de la cible. Notre travail montre les bienfaits de l'application de facteurs linguistiques dans la NMT. / Communication between humans across cultures is difficult due to the diversity of languages. Machine translation is a quick and cheap way to make translation accessible to everyone. Recently, Neural Machine Translation (NMT) has achieved impressive results. This thesis focuses on the Factored Neural Machine Translation (FNMT) approach, which is founded on the idea of using the morphological and grammatical decomposition of words (lemmas and linguistic factors) in the target language. This architecture addresses two well-known challenges in NMT. Firstly, the limitation on the target vocabulary size, a consequence of the computationally expensive softmax function at the output layer of the network, which leads to a high rate of unknown words. Secondly, data sparsity, which arises when we face a specific domain or a morphologically rich language. With FNMT, all inflections of words are supported and a larger vocabulary is modelled at a similar computational cost. Moreover, new words not included in the training dataset can be generated.
In this work, I developed different FNMT architectures using various dependencies between lemmas and factors. In addition, I enhanced the source-language side with factors as well. The FNMT model is evaluated on various languages, including morphologically rich ones. State-of-the-art models, some using Byte Pair Encoding (BPE), are compared to the FNMT model using small and large training datasets. We found that factored models are more robust in low-resource conditions. FNMT has been combined with BPE units, performing better than the pure FNMT model when trained on large data. We experimented with different domains, obtaining improvements with the FNMT models. Furthermore, the morphology of the translations is measured using a special test suite, showing the importance of explicitly modelling the target morphology. Our work shows the benefits of applying linguistic factors in NMT.
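The factored output idea, predicting a lemma and its linguistic factors instead of a single surface form, can be sketched as a decoder with two output projections over a shared hidden state. The PyTorch formulation, the dimensions, and the vocabulary sizes below are illustrative assumptions, not the architecture evaluated in the thesis.

```python
import torch
import torch.nn as nn


class FactoredOutputLayer(nn.Module):
    """Two output heads over a shared decoder state: one over lemmas,
    one over linguistic factors (POS, gender, number, ...).
    All sizes are hypothetical."""
    def __init__(self, hidden_size=512, n_lemmas=30000, n_factors=200):
        super().__init__()
        self.lemma_proj = nn.Linear(hidden_size, n_lemmas)
        self.factor_proj = nn.Linear(hidden_size, n_factors)

    def forward(self, decoder_state):
        # Both distributions are predicted from the same decoder state, so the
        # expensive softmax runs over |lemmas| + |factors| outputs instead of
        # the much larger full surface-form vocabulary.
        lemma_logits = self.lemma_proj(decoder_state)
        factor_logits = self.factor_proj(decoder_state)
        return lemma_logits.log_softmax(-1), factor_logits.log_softmax(-1)


layer = FactoredOutputLayer()
state = torch.zeros(1, 512)                 # dummy decoder state
lemma_lp, factor_lp = layer(state)
print(lemma_lp.shape, factor_lp.shape)      # (1, 30000) (1, 200)
```

The surface word form is then reconstructed from the predicted lemma and factors, typically with a morphological generation step outside the network, which is how unseen inflections can be produced.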
58

Srovnání (a historická podmíněnost) výstupů ze strojových překladačů / Comparing Machine Translation Output (and the Way it Changes over Time)

Kyselová, Soňa January 2018 (has links)
This diploma thesis focuses on machine translation (MT), which has been studied for a relatively long time in linguistics (and later also in translation studies) and which in recent years has also come to the attention of the broader public. The thesis aims to explore the quality of machine translation outputs and the way it changes over time. The theoretical part first deals with machine translation in general, namely basic definitions, a brief history and approaches to machine translation, then describes online machine translation systems and evaluation methods. Finally, this part provides a methodological model for the empirical part. Using a set of texts translated with MT, the empirical part examines how online machine translation systems deal with the translation of different text types and whether the quality of MT outputs improves over time. To do so, an analysis of text type, semantics, lexicology, stylistics and pragmatics is carried out, together with a rating of the general applicability of the translation. The final part of the thesis compares and summarizes the results of the analysis. Based on this comparison, conclusions are drawn and general tendencies that emerged from the empirical part of the thesis are stated.
59

Strojový překlad s využitím syntaktické analýzy / Machine Translation Using Syntactic Analysis

Popel, Martin January 2018 (has links)
This thesis describes our improvement of machine translation (MT), with a special focus on the English-Czech language pair, but using techniques applicable also to other languages. First, we present multiple improvements of the deep-syntactic system TectoMT. For instance, we implemented a novel context-sensitive translation model, comparing several machine learning approaches. We also adapted TectoMT to other domains and languages. Second, we present Transformer, a state-of-the-art end-to-end neural MT system. We analyzed in detail the effect of several training hyper-parameters. With our optimized training, the system outperformed the best result on the WMT2017 test set by +1.0 BLEU. We further extended this system by utilization of monolingual training data and by a new type of backtranslation (+2.8 BLEU compared to the baseline system). In addition, we leveraged domain adaptation and the effect of "translationese" (i.e. which language in parallel data is the original and which is the translation) to optimize MT systems for original-language and translated-language data (gaining further +0.2 BLEU). Our improved neural MT system significantly (p < 0.05) outperformed all other systems in the English-Czech and Czech-English WMT2018 shared tasks,...
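Back-translation, mentioned above as a way to exploit monolingual training data, augments the parallel corpus with synthetic pairs obtained by translating target-language monolingual text back into the source language with a reverse model. The sketch below shows only this data flow; `DummyReverseModel` and its `translate` method are hypothetical stand-ins for an actual Czech-to-English MT system, and the mixing strategy is simplified.

```python
def backtranslate(monolingual_target, reverse_model):
    """Create synthetic (source, target) pairs from target-side monolingual
    data: the source side is machine-translated, the target side is genuine."""
    synthetic = []
    for tgt_sentence in monolingual_target:
        src_sentence = reverse_model.translate(tgt_sentence)  # hypothetical API
        synthetic.append((src_sentence, tgt_sentence))
    return synthetic


def build_training_corpus(authentic_pairs, monolingual_target, reverse_model):
    # Real setups often tag or oversample one of the two parts; here we
    # simply concatenate authentic and synthetic data.
    return authentic_pairs + backtranslate(monolingual_target, reverse_model)


class DummyReverseModel:
    """Stand-in for a trained Czech->English model (illustrative only)."""
    def translate(self, sentence):
        return "<machine translation of: %s>" % sentence


pairs = build_training_corpus(
    [("hello", "ahoj")], ["dobrý den"], DummyReverseModel())
print(pairs)
```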
60

Multimodalita ve strojovém překladu / Multimodality in Machine Translation

Libovický, Jindřich January 2019 (has links)
Traditionally, most natural language processing tasks are solved within the language, relying on distributional properties of words. The representation learning abilities of deep learning have recently allowed using an additional information source by grounding the representations in the visual modality. One of the tasks that attempts to exploit visual information is multimodal machine translation: the translation of image captions when having access to the original image. The thesis summarizes joint processing of language and real-world images using deep learning. It gives an overview of the state of the art in multimodal machine translation and describes our original contribution to solving this task. We introduce methods of combining multiple inputs of possibly different modalities in recurrent and self-attentive sequence-to-sequence models and show results on multimodal machine translation and other tasks related to machine translation. Finally, we analyze how multimodality influences the semantic properties of the sentence representation learned by the networks and how that relates to translation quality.
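One simple way to combine inputs from different modalities, as discussed above, is to merge a textual encoder state and an image feature vector into a single context vector before decoding. The concatenate-and-project strategy and the dimensions below are illustrative assumptions, not a specific model from the thesis (which also covers attention-based combination strategies).

```python
import torch
import torch.nn as nn


class MultimodalFusion(nn.Module):
    """Combine a sentence encoding and an image feature vector into a single
    context vector for the decoder (dimensions are hypothetical)."""
    def __init__(self, text_dim=512, image_dim=2048, out_dim=512):
        super().__init__()
        self.proj = nn.Linear(text_dim + image_dim, out_dim)

    def forward(self, text_state, image_features):
        fused = torch.cat([text_state, image_features], dim=-1)
        return torch.tanh(self.proj(fused))


fusion = MultimodalFusion()
text_state = torch.zeros(1, 512)        # e.g. final encoder state of the caption
image_features = torch.zeros(1, 2048)   # e.g. CNN features of the image
print(fusion(text_state, image_features).shape)   # (1, 512)
```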
