  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Pós-edição automática de textos traduzidos automaticamente de inglês para português do Brasil / Automatic post-editing of machine translated texts from English into Brazilian Portuguese

Martins, Débora Beatriz de Jesus 10 April 2014 (has links)
Made available in DSpace on 2016-06-02T19:06:12Z (GMT). No. of bitstreams: 1 5932.pdf: 1110060 bytes, checksum: fe08b552e37f04451248c376cfc4454f (MD5) Previous issue date: 2014-04-10 / Universidade Federal de Minas Gerais

The project described in this document focuses on the post-editing of automatically translated texts. Machine Translation (MT) is the task of translating texts in natural language performed by a computer; it is part of the Natural Language Processing (NLP) research field, within the Artificial Intelligence (AI) area. Research in MT, using approaches ranging from linguistic to statistical, has advanced greatly since its beginnings in the 1950s. Nonetheless, automatically translated texts, except when used only to provide a basic understanding of a text, still need to go through post-editing to become well written in the target language. At present, the most common form of post-editing is that performed by human translators, whether professional translators or the users of the MT system themselves. Manual post-editing is more accurate, but it is costly and time-consuming, and can be prohibitive when too many changes have to be made. As an attempt to advance the state of the art in MT research, particularly for Brazilian Portuguese, this research aims to verify the effectiveness of an Automated Post-Editing (APE) system for translations from English to Portuguese. Using a training corpus containing reference translations (good translations produced by humans) and translations produced by a phrase-based statistical MT system, machine learning techniques were applied to build the APE system. The resulting APE system is able to: (i) automatically identify MT errors and (ii) automatically correct MT errors, with or without prior error identification.
The effectiveness of the APE system was evaluated using the automatic metrics BLEU and NIST, calculated for post-edited and non-post-edited sentences, complemented by manual inspection of the sentences. Despite results limited by the small size of our training corpus, we can conclude that the resulting APE system improves MT quality from English to Portuguese.
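The BLEU metric used in the evaluation above combines clipped n-gram precisions between a hypothesis and a reference with a brevity penalty. A minimal self-contained sketch returning a score in [0, 1] (an illustration only, not the thesis's evaluation code, and with a simple smoothing assumption for zero counts):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU: clipped n-gram precisions plus brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
        # Clip each n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Smooth zero overlaps so the logarithm stays defined (an assumption;
        # real toolkits offer several smoothing methods).
        log_precisions.append(math.log((overlap or 0.5) / total))
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return bp * math.exp(sum(log_precisions) / max_n)
```

An identical hypothesis and reference score 1.0; shorter, partial hypotheses are penalized by both the precision terms and the brevity penalty.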
272

Uma solução para geração automática de trilhas em língua brasileira de sinais em conteúdos multimídia / A solution for the automatic generation of Brazilian Sign Language tracks in multimedia content

Araújo, Tiago Maritan Ugulino de 14 September 2012 (has links)
Made available in DSpace on 2014-12-17T14:55:05Z (GMT). No. of bitstreams: 1 TiagoMUA_TESE.pdf: 1442352 bytes, checksum: a9909ef0bb9ebf04b3cad967bbf8be1c (MD5) Previous issue date: 2012-09-14 / Conselho Nacional de Desenvolvimento Científico e Tecnológico

Deaf people face serious difficulties in accessing information. Support for sign languages is rarely addressed in Information and Communication Technologies (ICT). Furthermore, the scientific literature lacks work on machine translation for sign languages in real-time and open-domain scenarios, such as TV. To minimize these problems, this work proposes a solution for the automatic generation of Brazilian Sign Language (LIBRAS) video tracks for captioned digital multimedia content. These tracks are generated by a real-time machine translation strategy that translates a Brazilian Portuguese subtitle stream (e.g., a movie subtitle or a closed caption stream). Furthermore, the proposed solution is open-domain and includes a set of mechanisms that exploit human computation to generate and maintain its linguistic constructions. Implementations of the proposed solution were developed for digital TV, Web, and Digital Cinema platforms, and a set of experiments with deaf users was conducted to evaluate the main aspects of the solution. The results showed that the proposed solution is efficient, able to generate and embed LIBRAS tracks in real-time scenarios, and a practical and feasible alternative for reducing the barriers deaf people face in accessing information, especially when human interpreters are not available.
273

Projection multilingue d'annotations pour dialogues avancés / Multilingual projection of annotations for advanced dialogues

Julien, Simon 12 1900 (has links)
No description available.
274

Apprentissage discriminant des modèles continus en traduction automatique / Discriminative Training Procedure for Continuous-Space Translation Models

Do, Quoc khanh 31 March 2016 (has links)
Over the past few years, neural network (NN) architectures have been successfully applied to many Natural Language Processing (NLP) applications, such as Automatic Speech Recognition (ASR) and Statistical Machine Translation (SMT). For the language modeling task, these models consider linguistic units (i.e., words and phrases) through their projections into a continuous (multi-dimensional) space, and the estimated distribution is a function of these projections. Known as continuous-space models (CSMs), their peculiarity lies in this exploitation of a continuous representation, which can be seen as an attempt to address the sparsity issue of conventional discrete models. In the context of SMT, these techniques have been applied in neural network-based language models (NNLMs) included in SMT systems, and in continuous-space translation models (CSTMs). These models have led to significant and consistent gains in SMT performance, but they are also very expensive to train and query, especially for systems involving large vocabularies. To overcome this issue, the Structured Output Layer (SOUL) architecture and Noise Contrastive Estimation (NCE) have been proposed: the former modifies the standard structure of the output layer, while the latter approximates maximum-likelihood estimation (MLE) by a sampling method. All these approaches share the same estimation criterion, the MLE; however, using this procedure results in an inconsistency between the objective function defined for parameter estimation and the way the models are used in the SMT application. The work presented in this dissertation aims to design new performance-oriented and global training procedures for CSMs to overcome these issues. The main contributions lie in the investigation and evaluation of efficient training methods for (large-vocabulary) CSMs, which aim: (a) to reduce the total training cost, and (b) to improve the efficiency of these models when used within the SMT application.
On the one hand, the training and inference cost can be reduced (using the SOUL structure or the NCE algorithm), or the number of iterations can be cut via faster convergence. This thesis provides an empirical analysis of these solutions on different large-scale SMT tasks. On the other hand, we propose a discriminative training framework that optimizes the performance of the whole system containing the CSM as a component model. The experimental results show that this framework is efficient for both training and adapting CSMs within SMT systems, opening promising research perspectives.
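NCE, mentioned above, avoids the expensive softmax normalization by training the model to discriminate observed words from k samples drawn from a noise distribution q. A minimal numerical sketch of the NCE posterior and loss (illustrative only, not the thesis's implementation):

```python
import math

def nce_posterior(model_score, log_noise_prob, k):
    """P(word came from data | word) = sigmoid(s(w) - log(k * q(w))),
    where s(w) is the model's unnormalized log-score and q(w) the
    probability of w under the noise distribution."""
    z = model_score - (math.log(k) + log_noise_prob)
    return 1.0 / (1.0 + math.exp(-z))

def nce_loss(data, noise, k):
    """Negative log-likelihood of the binary data-vs-noise classification.
    `data` and `noise` are lists of (model_score, log_noise_prob) pairs."""
    loss = -sum(math.log(nce_posterior(s, lq, k)) for s, lq in data)
    loss -= sum(math.log(1.0 - nce_posterior(s, lq, k)) for s, lq in noise)
    return loss
```

When the model score equals log(k·q(w)), the classifier is maximally uncertain (posterior 0.5); training pushes data words above that threshold and noise samples below it, without ever summing over the vocabulary.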
275

Modèles exponentiels et contraintes sur les espaces de recherche en traduction automatique et pour le transfert cross-lingue / Log-linear Models and Search Space Constraints in Statistical Machine Translation and Cross-lingual Transfer

Pécheux, Nicolas 27 September 2016 (has links)
Most natural language processing (NLP) tasks are modeled as prediction problems, where one aims to find the best-scoring hypothesis from a very large pool of possible outputs. Even if algorithms are designed to leverage some kind of structure, the output space is often too large to be searched exhaustively. This work aims at understanding the importance of the search space and the possible use of constraints to reduce it in size and complexity. We report in this thesis three case studies, on morpho-syntactic analysis, cross-lingual transfer, and reordering in machine translation, which highlight the risks and benefits of manipulating the search space in learning and inference. When information about the possible outputs of a sequence labeling task is available, it may seem appropriate to include this knowledge in the system, so as to facilitate and speed up learning and inference.
A case study on type constraints for CRFs, however, shows that using such constraints at training time is likely to drastically reduce performance, even when these constraints are both correct and useful at decoding time. On the other side, we also consider possible relaxations of the supervision space, as in the case of learning with latent variables, or when only partial supervision is available, which we cast as ambiguous learning. Such weakly supervised methods, together with cross-lingual transfer and dictionary crawling techniques, allow us to develop natural language processing tools for under-resourced languages. Word order differences between languages pose several combinatorial challenges to machine translation, and the constraints on word reorderings have a great impact on the set of potential translations explored during search. We study reordering constraints that restrict the factorial space of permutations and explore the impact of the reordering search space design on machine translation performance. However, we show that even though it might be desirable to design better reordering spaces, model and search errors remain the most important issues.
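The combinatorial effect of reordering constraints discussed above can be made concrete by counting the permutations that survive a simple distortion limit (a hypothetical constraint used purely for illustration: each source word may move at most d positions):

```python
from itertools import permutations

def constrained_permutations(n, d):
    """Yield permutations of range(n) in which the word at position i
    moves at most d places: |perm[i] - i| <= d for all i."""
    for perm in permutations(range(n)):
        if all(abs(p - i) <= d for i, p in enumerate(perm)):
            yield perm

def count(n, d):
    # Size of the constrained reordering space for n words.
    return sum(1 for _ in constrained_permutations(n, d))
```

With no movement allowed (d = 0) only the monotone order survives; with d ≥ n − 1 the full factorial space of n! permutations returns, which is why such limits are essential to keep decoding tractable.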
276

Confidence Measures for Alignment and for Machine Translation / Mesures de Confiance pour l’Alignement et pour la Traduction Automatique

Xu, Yong 26 September 2016 (has links)
In computational linguistics, the relation between different languages is often studied through automatic alignment techniques. Such alignments can be established at various structural levels. In particular, sentential and sub-sentential bitext alignments constitute an important source of information in various modern Natural Language Processing (NLP) applications, a prominent one being Machine Translation (MT). Effectively computing bitext alignments, however, can be a challenging task. Discrepancies between languages appear in various ways, from discourse structures to morphological constructions. Automatic alignments, in most cases, contain noise that is harmful to the performance of the application systems using them. To deal with this situation, two research directions emerge: the first is to keep improving alignment techniques; the second is to develop reliable confidence measures that enable application systems to selectively employ the alignments according to their needs. Both alignment techniques and confidence estimation can benefit from manual alignments, which can serve both as supervision examples to train scoring models and as evaluation material. The creation of such data is, however, an important question in itself, particularly at sub-sentential levels, where cross-lingual correspondences can be only implicit and difficult to capture. This thesis focuses on means to acquire useful sentential and sub-sentential bitext alignments. Chapter 1 provides a non-technical description of the research motivation, scope, and organization, and introduces terminology and notation. State-of-the-art alignment techniques are reviewed in Part I: Chapters 2 and 3 describe state-of-the-art methods for sentence and word alignment, respectively, and Chapter 4 summarizes existing manual alignments and discusses issues related to the creation of gold alignment data.
The remainder of this thesis, Part II, presents our contributions to bitext alignment, concentrated on three sub-tasks. Chapter 5 presents our contribution to gold alignment data collection. For sentence-level alignment, we collect manual annotations for an interesting text genre: literary bitexts, which are very useful for evaluating sentence aligners. We also propose a scheme for sentence alignment confidence annotation. For sub-sentential alignment, we annotate one-to-one word links with a novel 4-way labelling scheme, and design a new approach for facilitating the collection of many-to-many links. All the collected data is released on-line. Improving alignment methods remains an important research subject. We pay special attention to sentence alignment, which often lies at the beginning of the bitext alignment pipeline. Chapter 6 presents our contributions to this task. Starting by evaluating state-of-the-art aligners and analyzing their models and results, we propose two new sentence alignment methods, which achieve state-of-the-art performance on a difficult dataset. The other important subject we study is confidence estimation. In Chapter 7, we propose confidence measures for sentential and sub-sentential alignments. Experiments show that confidence estimation of alignment links is a challenging problem, and more work on enhancing the confidence measures will be useful. Finally, note that these contributions have been employed in a real-world application: the development of a bilingual reading tool aimed at facilitating reading in a foreign language.
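Sentence aligners of the kind evaluated above are classically built on dynamic programming over sentence lengths, in the spirit of Gale and Church. A deliberately simplified monotone aligner with 1-1 matches and 1-0/0-1 gaps (an illustration under those assumptions, not one of the thesis's methods):

```python
def align_sentences(src_lens, tgt_lens, gap_cost=15):
    """Monotone DP alignment of two sentence sequences by length.
    Moves: 1-1 match (cost = length difference) and 1-0 / 0-1 gaps."""
    n, m = len(src_lens), len(tgt_lens)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            moves = []
            if i < n and j < m:  # link sentence i with sentence j
                moves.append((i + 1, j + 1, abs(src_lens[i] - tgt_lens[j])))
            if i < n:            # leave source sentence i unaligned
                moves.append((i + 1, j, gap_cost))
            if j < m:            # leave target sentence j unaligned
                moves.append((i, j + 1, gap_cost))
            for ni, nj, c in moves:
                if cost[i][j] + c < cost[ni][nj]:
                    cost[ni][nj] = cost[i][j] + c
                    back[ni][nj] = (i, j)
    # Backtrack, keeping only the 1-1 links.
    links, i, j = [], n, m
    while back[i][j] is not None:
        pi, pj = back[i][j]
        if i == pi + 1 and j == pj + 1:
            links.append((pi, pj))
        i, j = pi, pj
    return links[::-1]
```

The per-link costs along the optimal path are exactly the kind of signal a length-based confidence measure can expose: large residual costs flag links worth double-checking.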
277

Bimorphism Machine Translation

Quernheim, Daniel 10 April 2017 (has links)
The field of statistical machine translation has made tremendous progress due to the rise of statistical methods, making it possible to obtain a translation system automatically from a bilingual collection of text. Some approaches do not even need any kind of linguistic annotation, and can infer translation rules from raw, unannotated data. However, most state-of-the art systems do linguistic structure little justice, and moreover many approaches that have been put forward use ad-hoc formalisms and algorithms. This inevitably leads to duplication of effort, and a separation between theoretical researchers and practitioners. In order to remedy the lack of motivation and rigor, the contributions of this dissertation are threefold: 1. After laying out the historical background and context, as well as the mathematical and linguistic foundations, a rigorous algebraic model of machine translation is put forward. We use regular tree grammars and bimorphisms as the backbone, introducing a modular architecture that allows different input and output formalisms. 2. The challenges of implementing this bimorphism-based model in a machine translation toolkit are then described, explaining in detail the algorithms used for the core components. 3. Finally, experiments where the toolkit is applied on real-world data and used for diagnostic purposes are described. We discuss how we use exact decoding to reason about search errors and model errors in a popular machine translation toolkit, and we compare output formalisms of different generative capacity.
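A bimorphism couples a regular tree language of derivations with two tree homomorphisms that read off the input and output trees. A toy sketch (illustrative only, not the toolkit described above) where a homomorphism maps each rule symbol to a template with integer placeholders for the images of its children:

```python
def apply_hom(hom, derivation):
    """Apply a tree homomorphism to a derivation tree.
    Trees are tuples (symbol, child, ...); templates use ints 0..k-1
    to refer to the images of the k children."""
    sym, children = derivation[0], derivation[1:]
    images = [apply_hom(hom, c) for c in children]

    def build(template):
        if isinstance(template, int):
            return images[template]
        return (template[0],) + tuple(build(t) for t in template[1:])

    return build(hom[sym])

# One rule swapping its two children on the output side, modelling e.g.
# a reordering between languages (hypothetical toy grammar).
h_in = {"R": ("S", 0, 1), "A": ("verb",), "B": ("object",)}
h_out = {"R": ("S", 1, 0), "A": ("verbo",), "B": ("objeto",)}
derivation = ("R", ("A",), ("B",))
```

Here `apply_hom(h_in, derivation)` yields the input tree `("S", ("verb",), ("object",))` while `apply_hom(h_out, derivation)` yields `("S", ("objeto",), ("verbo",))`: one derivation, two synchronized trees, which is exactly the modularity the bimorphism view buys.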
278

Algebraic decoder specification: coupling formal-language theory and statistical machine translation

Büchse, Matthias 18 December 2014 (has links)
The specification of a decoder, i.e., a program that translates sentences from one natural language into another, is an intricate process, driven by the application and lacking a canonical methodology. The practical nature of decoder development inhibits the transfer of knowledge between theory and application, which is unfortunate because many contemporary decoders are in fact related to formal-language theory. This thesis proposes an algebraic framework in which a decoder is specified by an expression built from a fixed set of operations. As it stands, this framework accommodates contemporary syntax-based decoders, spans two levels of abstraction, and, primarily, encourages mutual stimulation between the theory of weighted tree automata and the application.
279

Round-Trip Translation: A New Path for Automatic Program Repair using Large Language Models / Tur och retur-översättning: En ny väg för automatisk programreparation med stora språkmodeller

Vallecillos Ruiz, Fernando January 2023 (has links)
Research shows that grammatical mistakes in a sentence can be corrected by machine translating it to another language and back. We investigate whether this correction capability of Large Language Models (LLMs) extends to Automatic Program Repair (APR), a software engineering task. Current generative models for APR are pre-trained on source code and fine-tuned for repair. This paper proposes bypassing fine-tuning and using Round-Trip Translation (RTT): translation of code from one programming language to another programming or natural language, and back. We hypothesize that RTT with LLMs performs a regression toward the mean, which removes bugs, as they are a form of noise with respect to the more frequent, natural, bug-free code in the training data. To test this hypothesis, we employ eight recent LLMs pre-trained on code, including the latest GPT versions, and four common program repair benchmarks in Java. We find that RTT with English as an intermediate language repaired 101 of 164 bugs with GPT-4 on the HumanEval-Java dataset. Moreover, 46 of these are unique bugs not repaired by other LLMs fine-tuned for APR. Our findings highlight the viability of round-trip translation with LLMs as a technique for automated program repair and its potential for research in software engineering.
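The RTT pipeline described above reduces to two translation calls plus a filter that keeps candidates passing the test suite. A hedged sketch, where `forward` and `backward` are caller-supplied stand-ins for the LLM translation calls (the thesis's actual prompting setup is not reproduced here):

```python
def rtt_repair(buggy_code, forward, backward, passes_tests, n_samples=5):
    """Round-trip translation repair: translate code to an intermediate
    language (e.g., English) and back, keeping candidates that pass the
    test suite. `forward` / `backward` are hypothetical translation
    functions supplied by the caller."""
    candidates = []
    for _ in range(n_samples):
        intermediate = forward(buggy_code)   # e.g., Java -> English description
        candidate = backward(intermediate)   # e.g., English -> Java
        if candidate not in candidates:      # deduplicate sampled round trips
            candidates.append(candidate)
    return [c for c in candidates if passes_tests(c)]
```

With stub translators that map an off-by-one toy bug to its description and back to canonical code, the pipeline returns only the candidate that passes the checks, mirroring the regression-toward-the-mean intuition: the round trip rewrites the program toward more typical, bug-free code.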
280

Cohesion in Translation: A Corpus Study of Human-translated, Machine-translated, and Non-translated Texts (Russian into English)

Bystrova-McIntyre, Tatyana 21 November 2012 (has links)
No description available.
