1

Ruqual: A System for Assessing Post-Editing

Housley, Jason K. 25 May 2012
Post-editing machine translation has become more common in recent years due to the increase in materials requiring translation and the effectiveness of machine translation systems. This project presents a system for formalizing structured translation specifications that facilitates the assessment of a post-editor's performance. This report provides details concerning two software applications: the Ruqual Specifications Writer, which aids in the authoring of post-editing project specifications, and the Ruqual Rubric Viewer, which provides a graphical user interface for filling out a machine-readable rubric file. The project as a whole relies on a definition of translation quality based on the specification approach. To test whether potential evaluators are able to reliably assess the quality of post-edited translations, a user study was conducted that utilized the Specifications Writer and Rubric Viewer. The specifications developed for the project were based on actual post-editing data provided by Ray Flournoy of Adobe. The study involved simulating the work of five post-editors, which consisted of developing texts and scenarios. Seventeen non-expert graders rated the work of the five fictional post-editors, and an intraclass correlation of the graders' responses shows that they are reliable to a high degree. The groundwork laid by this project should help in the development of other applications that assist in the assessment of translation projects in terms of a specification approach to quality.
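The reliability figure above rests on an intraclass correlation (ICC) over the graders' rubric scores. As a rough illustration of that kind of reliability check (a minimal sketch, not the thesis's code; the rating matrix and the choice of the ICC(2,k) variant are assumptions), an average-measure ICC can be computed directly from a subjects-by-raters score table:

```python
import numpy as np

def icc2k(scores: np.ndarray) -> float:
    """ICC(2,k): two-way random effects, average-measure agreement.

    scores: (n_subjects, k_raters) matrix of ratings.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # raters
    sse = np.sum((scores - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

# Hypothetical example: 4 post-edited outputs rated by 5 graders on a 1-5 rubric.
ratings = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
])
print(f"ICC(2,k) = {icc2k(ratings):.3f}")
```

Values near 1 would indicate that the graders score the post-editors consistently, which is the sense in which the study's 17 graders were found reliable.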
2

Post-Editing als Bestandteil von Translationsstudiengängen in der DACH-Region / Post-Editing as a Component of Translation Degree Programs in the DACH Region

Schumann, Paula 01 April 2020
No description available.
3

Automatic post-editing of phrase-based machine translation outputs

Rosa, Rudolf January 2013
We present Depfix, a system for automatic post-editing of phrase-based English-to-Czech machine translation outputs, based on linguistic knowledge. First, we analyzed the types of errors that a typical machine translation system makes. Then, we created a set of rules and a statistical component that correct errors which are common or serious and have the potential to be corrected by our approach. We use a range of natural language processing tools to provide us with analyses of the input sentences. Moreover, we reimplemented the dependency parser and adapted it in several ways to parsing of statistical machine translation outputs. We performed both automatic and manual evaluations, which confirmed that our system improves the quality of the translations.
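Depfix's rules operate on dependency analyses of the MT output. The sketch below shows only the general shape of such a rule pipeline; the Token fields, the tag strings, and the agreement rule are hypothetical stand-ins, not Depfix's actual implementation, which consults Czech morphological tools to regenerate word forms:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Token:
    form: str
    lemma: str
    tag: str      # morphological tag (invented labels for illustration)
    head: int     # index of dependency head, -1 for root

# A rule inspects the parsed target sentence (real Depfix rules also
# consult the source side) and returns a possibly corrected token list.
Rule = Callable[[list[Token]], list[Token]]

def fix_subject_verb_number(tokens: list[Token]) -> list[Token]:
    # Hypothetical rule: if a subject is plural but its governing verb is
    # tagged singular, mark the verb plural; a morphological generator
    # would then produce the re-inflected surface form.
    for t in tokens:
        if "Subj" in t.tag and t.head >= 0:
            verb = tokens[t.head]
            if "Pl" in t.tag and "Sg" in verb.tag:
                verb.tag = verb.tag.replace("Sg", "Pl")
    return tokens

RULES: list[Rule] = [fix_subject_verb_number]  # Depfix chains many such rules

def post_edit(tokens: list[Token]) -> list[Token]:
    for rule in RULES:
        tokens = rule(tokens)
    return tokens

# Toy agreement error: plural subject, singular verb.
tokens = [Token("dopisy", "dopis", "NounSubjPl", 1),
          Token("přišel", "přijít", "VerbSg", -1)]
post_edit(tokens)  # the verb's tag is now VerbPl
```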
4

Utilité et utilisation de la traduction automatique dans l’environnement de traduction : une évaluation axée sur les traducteurs professionnels / Usefulness and Use of Machine Translation in the Translation Environment: An Evaluation Focused on Professional Translators

Rémillard, Judith 19 June 2018
The arrival of machine translation (MT) is disrupting practices in the translation industry and thereby raising questions about the usefulness and use of this technology. Since many studies have already examined its use in contexts where it is imposed on translators, we chose to adopt the particular perspective of translators to examine its usefulness and voluntary use in the translation environment (TE). Our research aimed to answer three broad questions: do translators use MT in their practice? Do translators believe the MT output is useful and usable? Do translators actually use this output in the translation process? To answer these questions, we first distributed a large-scale survey to measure the use of MT as a tool, to collect data on the respondents' profiles, and to assess their perception of the usefulness of the output and of the various types of phenomena we had identified beforehand with the help of two professional translators. We then conducted an experiment with other professional translators in which we asked them to translate short segments and examined whether or not they used the MT output to produce their translations. Our analysis was based on the principle that, in a context of voluntary use, the use of an output makes it possible to infer a perception of usefulness and thereby to examine the usefulness of MT. Overall, we found that MT is not usually used voluntarily, that translators' perceptions are unfavourable to such use, that translators' perception of the usefulness of the output is also rather negative, but that the output appears to be much more useful and usable than translators might believe, since they generally did use it in the translation process. We also examined the determining factors of the usefulness and use of MT and its output.
5

Productivity and quality in the post-editing of outputs from translation memories and machine translation

Guerberof Arenas, Ana 24 September 2012
This study presents empirical research on no-match, machine-translated and translation-memory segments, analyzed in terms of translators' productivity, final quality and prior professional experience. The findings suggest that translators have higher productivity and quality when using machine-translated output than when translating on their own, and that the productivity and quality gained with machine translation are not significantly different from the values obtained when processing fuzzy matches from a translation memory in the 85-94 percent range. The translators' prior experience impacts the quality they deliver but not their productivity. These quantitative findings are triangulated with qualitative data from an online questionnaire and from one-to-one debriefings with the translators.
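The 85-94 percent band refers to fuzzy-match scores from a translation memory. Commercial TM tools compute these with their own weighted formulas; a common approximation (a minimal sketch, assuming a word-level edit-distance ratio as the scoring scheme) looks like this:

```python
def fuzzy_match(segment: str, tm_source: str) -> float:
    """Word-level Levenshtein similarity as a percentage (an approximation;
    real TM tools use proprietary, often weighted, match formulas)."""
    a, b = segment.lower().split(), tm_source.lower().split()
    # Word-level edit distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (wa != wb)))    # substitution
        prev = cur
    dist = prev[-1]
    return 100.0 * (1 - dist / max(len(a), len(b), 1))

print(fuzzy_match("click the save button to store changes",
                  "click the save button to keep your changes"))  # 75.0
```

A segment scoring in the study's 85-94 percent range would thus differ from the stored TM source by roughly one word in ten.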
6

Post-Editing als Bestandteil von Translationsstudiengängen in der DACH-Region: Ergebnisse einer Online-Befragung / Post-Editing as a Component of Translation Degree Programs in the DACH Region: Results of an Online Survey

Schumann, Paula 25 May 2020
No description available.
7

Automatic Post-editing and Quality Estimation in Machine Translation of Product Descriptions

Kukk, Kätriin January 2022
As a result of drastically improved machine translation quality in recent years, machine translation followed by manual post-editing is currently a trend in the language industry that is slowly but surely replacing manual translation from scratch. In this thesis, the applicability of machine translation to product descriptions of clothing items is studied. The focus lies on determining whether automatic post-editing is a viable approach for improving baseline translations when new training data becomes available, and on finding out whether there is an existing quality estimation system that could reliably assign quality scores to machine-translated texts. Machine translation is shown to be a promising approach for the target domain: according to the human evaluation carried out, the majority of the systems experimented with generate translations that on average are of almost publishable quality, meaning that only light post-editing is needed before the translations can be published. Automatic post-editing is shown to be able to improve the worst baseline translations but struggles to improve the overall translation quality due to its tendency to overcorrect good translations. Nevertheless, one of the trained post-editing systems is still rated higher than the baseline by human evaluators. A new finding is that training a post-editing model on more data with worse translations leads to better performance than training on less but higher-quality data. None of the quality estimation systems experimented with shows a strong correlation with the human evaluation results, which is why it is suggested not to provide the confidence scores of the baseline model to the human evaluators responsible for correcting and approving translations. The main contributions of this work are showing that the target domain of product descriptions is suitable for integrating machine translation into the translation workflow, proposing a translation workflow that is more automated than the current one, and finding that it is better to use more data with poorer translations than less data with higher-quality translations when training an automatic post-editing system.
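The correlation claim about quality estimation can be checked by pairing each segment's QE score with its human rating and computing standard correlation coefficients. A minimal sketch with invented scores, using SciPy's correlation functions (not the thesis's evaluation code):

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical paired scores: a QE system's segment-level estimates and
# the human evaluators' ratings for the same segments.
qe_scores    = [0.91, 0.42, 0.77, 0.63, 0.88, 0.51]
human_scores = [4,    3,    4,    2,    5,    3]

r, _ = pearsonr(qe_scores, human_scores)
rho, _ = spearmanr(qe_scores, human_scores)
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
# Weak values here would mirror the thesis finding that QE confidence
# scores were not reliable enough to show to post-editors.
```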
8

A Study on Manual and Automatic Evaluation Procedures and Production of Automatic Post-editing Rules for Persian Machine Translation

Mostofian, Nasrin January 2017
Evaluation of machine translation is an important step towards improving MT. One way to evaluate the output of MT is to focus on the different types of errors occurring in the translation hypotheses and to think of possible solutions to fix those errors. An error categorization is a beneficial tool that makes it easy to analyze translation errors and can also be utilized to manually generate post-editing rules to be applied automatically to the product of machine translation. In this work, we define a categorization for the errors occurring in Swedish-Persian machine translation by analyzing the errors that occur in three data-sets from two websites: 1177.se and Linköping municipality. We define three types of monolingual reference-free evaluation (MRF) and use two automatic metrics, BLEU and TER, to conduct a bilingual evaluation for Swedish-Persian translation. Later on, based on the experience of working with the errors that occur in the corpora, we manually generate automatic post-editing (APE) rules and apply them to the product of machine translation. Three different sets of results are obtained: (1) the results of analyzing MT errors show that the three most common types of errors occurring in the translation hypotheses are mistranslated words, wrong word order, and extra prepositions; these error types fall in the semantic and syntactic categories respectively. (2) The results of comparing the correlation between the automatic and manual evaluations show a low correlation between the two. (3) Lastly, applying the APE rules to the product of machine translation gives an increase in BLEU score on the largest data-set while remaining almost unchanged on the other two data-sets. The results for TER show a better score on one data-set, while the scores on the two other data-sets remain unchanged.
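The manually generated APE rules are, in essence, pattern-replacement operations applied to raw MT output before scoring. A minimal sketch of that mechanism (the rules below are invented English-side illustrations of the idea; the thesis's actual rules target Persian output):

```python
import re

# Each hypothetical rule is a (pattern, replacement) pair applied in order.
APE_RULES: list[tuple[str, str]] = [
    (r"\bto to\b", "to"),       # doubled preposition
    (r"\bin on\b", "on"),       # stacked prepositions
    (r"\s+([.,;:])", r"\1"),    # space before punctuation
]

def apply_rules(hypothesis: str) -> str:
    for pattern, repl in APE_RULES:
        hypothesis = re.sub(pattern, repl, hypothesis)
    return hypothesis

print(apply_rules("Refer to to the guide in on page 3 ."))
# -> Refer to the guide on page 3.
```

The effect of such rules can then be measured by re-scoring the corrected output with BLEU and TER, as done on the three data-sets.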
9

Kategorizace úprav strojového překladu při post-editaci: jazyková kombinace angličtina - čeština / Classification of Edit Categories in Machine Translation Post-Editing: English-Czech Language Combination

Kopecká, Klára January 2021
Today, a translation job does not only mean transferring content from one language to another by drawing on one's own knowledge of two languages and subject-matter expertise. Today, translation often means working with suggestions from various resources, including machine translation. The popularity of machine translation post-editing (MTPE) as a form of translation is growing, which is why this skill should be acquired by translation students before they enter the market. To work with machine translation efficiently, translators need not only knowledge of the basic principles of how machine translation engines and translation technology work, but also the ability to assess the relevance of each linguistic edit with regard to the assigned instructions and the purpose of the translated text. The aim of this master's thesis is to analyze the linguistic edits carried out during an MTPE job from English to Czech. The identified edits are then classified, resulting in a list of linguistic edit categories in English-to-Czech MTPE.
Keywords: MTPE, PEMT, post-editing, machine translation, classification, edits
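Extracting the edits to be classified amounts to aligning the raw MT output with its post-edited version. A minimal sketch, assuming word-level alignment with Python's standard difflib (the linguistic classification itself is done manually in the thesis):

```python
import difflib

def extract_edits(mt: str, post_edited: str) -> list[tuple[str, str, str]]:
    """List (operation, MT span, post-edited span) word-level edits."""
    mt_tokens, pe_tokens = mt.split(), post_edited.split()
    matcher = difflib.SequenceMatcher(a=mt_tokens, b=pe_tokens)
    edits = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            edits.append((op, " ".join(mt_tokens[i1:i2]),
                              " ".join(pe_tokens[j1:j2])))
    return edits

# Hypothetical MT output and its post-edited version:
print(extract_edits("He have finished the the report yesterday",
                    "He finished the report yesterday"))
# [('delete', 'have', ''), ('delete', 'the', '')]
# Each extracted edit would then be assigned a linguistic category
# (grammar, omission, word order, terminology, ...).
```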
10

Automatic Post-Editing for Machine Translation

Chatterjee, Rajen 16 October 2019
Automatic Post-Editing (APE) aims to correct systematic errors in a machine-translated text. This is primarily useful when the machine translation (MT) system is not accessible for improvement, leaving APE as a viable option to improve translation quality as a downstream task, which is the focus of this thesis. This field has received less attention than MT for several reasons, including the limited availability of data to perform sound research, contrasting views reported by different researchers about the effectiveness of APE, and limited attention from the industry to use APE in current production pipelines. In this thesis, we perform a thorough investigation of APE as a downstream task in order to: i) understand its potential to improve translation quality; ii) advance the core technology, from classical methods to recent deep-learning based solutions; iii) cope with limited and sparse data; iv) better leverage multiple input sources; v) mitigate the task-specific problem of over-correction; vi) enhance neural decoding to leverage external knowledge; and vii) establish an online learning framework to handle data diversity in real time. All the above contributions are discussed across several chapters, and most of them are evaluated in the APE shared task organized each year at the Conference on Machine Translation. Our efforts in improving the technology resulted in the best system at the 2017 APE shared task, and our work on online learning received a distinguished paper award at the Italian Conference on Computational Linguistics. Overall, the outcomes and findings of our work have boosted interest among researchers and attracted industry to examine this technology to solve real-world problems.
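The online learning framework mentioned in point vii) can be pictured as a loop that proposes a correction, collects the human post-edit, and updates the model on the fly. A structural sketch only; the model interface below is hypothetical and stands in for whatever trainable correction model is used, not the thesis's implementation:

```python
class OnlineAPE:
    """Structural sketch of an online APE loop (hypothetical interface)."""

    def __init__(self, model):
        self.model = model  # any trainable seq2seq correction model

    def process(self, src: str, mt: str, get_human_pe) -> str:
        ape_hyp = self.model.correct(src, mt)    # propose a correction
        human_pe = get_human_pe(ape_hyp)         # post-editor's final text
        if human_pe != ape_hyp:                  # learn from the revision
            self.model.update(src, mt, human_pe) # single online update step
        return human_pe
```

The point of the loop is that every human revision immediately becomes a new training triple (source, MT, post-edit), which is how such a system copes with data diversity in real time.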
