  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Machine Translation and Translation Memory Systems: An Ethnographic Study of Translators’ Satisfaction

Mohammadi Dehcheshmeh, Maryam January 2017 (has links)
The translator’s workplace (TW) has undergone radical changes since microcomputers were introduced on the market and digitization increased enormously as a result. Existing translation-related technologies, such as machine translation (MT), were enhanced, and others, such as translation memory (TM) systems, were developed. Notably, new translation-related technologies are implemented in the TW under various conditions and according to specific goals, which subsequently define new work conditions for translators. These new work conditions affect translators’ satisfaction with their job, and their satisfaction will influence career development and employee retention in the translation industry over the long term. In the past two decades, Language Service Providers (LSPs) have started integrating MT into TM systems in order to benefit from MT suggestions when the TM is not helpful. Neither TM nor MT is unfamiliar to the translation industry, but the combination, i.e. TM+MT, is fairly new. So far, there have been few studies on translators’ satisfaction with TM+MT. This study consists of an ethnographic research project on seven translators in a Canada-based company where TM+MT is used. Observations, semi-structured interviews, and in-house document analysis were used as data collection methods. The data obtained have been analyzed and discussed on the basis of Rodríguez-Castro’s task satisfaction model (2011). This model addresses intrinsic and extrinsic sources of translators’ satisfaction with the activities they perform in their job. Investigating the factors and variables of her model in the aforementioned company, I concluded that those sources of satisfaction cannot be considered separately from job-context factors, such as the company’s policies in implementing TM+MT.

Automatická kontrola překladu / Automatic Checking of Translation

Šimlovič, Juraj January 2011 (has links)
Translation memories are becoming more and more popular with professional translators nowadays, especially in the fields of software localization and the translation of technical and official documents. Although commercial systems that employ translation memories provide some limited capabilities for automatic checking of translations, these are mostly of a simple search-and-replace type, and none of these systems provides reasonable means of applying Czech morphology while checking. Professional translators could benefit from an automatic tool that provides more advanced rule-based checking capabilities, taking Czech and even English morphology into account. Checking not only for the correct use of terminology, but also for illicit translations and the use of forbidden terms, would be useful. This thesis investigates the types of mistakes translators tend to make, and a review of existing solutions for automatic translation checking in different languages is provided. An application is then proposed and developed that attempts to detect some of the most frequent mistakes made in translations into Czech, taking morphology into account while searching.
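The kind of morphology-aware, rule-based check described in this abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the thesis's actual implementation: it flags forbidden terms in a segment by stripping a handful of common Czech noun endings so that inflected forms still match. The term list and suffix set are invented for the example.

```python
import re

# A few common Czech noun endings, longest matched first. This tiny list is
# an assumption for illustration; a real tool would use proper morphology.
CZECH_SUFFIXES = ("ech", "ami", "ům", "ou", "em", "y", "u", "a", "e", "i")

def stem(word):
    """Very naive stemmer: strip one known suffix, longest first."""
    for suf in sorted(CZECH_SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def find_forbidden(segment, forbidden_terms):
    """Return tokens of the segment whose stem matches a forbidden term's stem."""
    stems = {stem(t.lower()) for t in forbidden_terms}
    hits = []
    for token in re.findall(r"\w+", segment.lower()):
        if stem(token) in stems:
            hits.append(token)
    return hits
```

For instance, with `{"složka"}` as the forbidden list, the inflected form "složku" would still be flagged, since both reduce to the stem "složk".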

Using Alignment Methods to Reduce Translation of Changes in Structured Information

Resman, Daniel January 2012 (has links)
In this thesis I present an unsupervised approach, which can also be made supervised, to reduce the translation of changes in structured information stored in XML documents. By combining a sentence boundary detection algorithm and a sentence alignment algorithm, a translation memory is created from the old version of the information in different languages. This translation memory can then be used to translate sentences that have not changed, and the structure of the XML is used to improve performance. Two implementations were made and evaluated in three steps: sentence boundary detection, sentence alignment and correspondence. The last step evaluates the use of the translation memory on a new version in the source language. The second implementation was an improvement based on the results of the evaluation of the first implementation. The evaluation was done using 100 XML documents in English, German and Swedish. There was a significant difference between the results of the implementations in the first two steps. The errors were reduced at each step, and in the last step there were only three errors for the first implementation and none for the second. The evaluation of the implementations showed that it was possible to reduce the text that requires re-translation by about 80%. Similar information can be, and is, used by translators to achieve higher productivity, but this thesis shows that it is possible to reduce translation work even before the texts reach the translators.
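The reuse idea at the heart of this abstract can be sketched simply: align the sentences of the old source and old target versions to build a translation memory, then carry over translations for any sentence that is unchanged in the new source version. The sketch below assumes a 1:1 sentence alignment for simplicity; all example sentences are invented.

```python
def build_tm(old_source, old_target):
    """Build a translation memory from 1:1 aligned old-version sentences."""
    return dict(zip(old_source, old_target))

def reuse_translations(new_source, tm):
    """Split the new version into (already translated, still needs translation)."""
    translated, pending = {}, []
    for sent in new_source:
        if sent in tm:
            translated[sent] = tm[sent]  # unchanged sentence: reuse old translation
        else:
            pending.append(sent)         # changed or new sentence: send to translator
    return translated, pending
```

Only the `pending` list then needs human translation, which is what reduces the re-translation volume before the texts ever reach a translator.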

[en] TRANSLATION MEMORY: AID OR HANDICAP? / [pt] MEMÓRIA DE TRADUÇÃO: AUXÍLIO OU EMPECILHO?

ADRIANA CESCHIN RIECHE 04 June 2004 (has links)
[pt] Diante do papel cada vez mais importante desempenhado pelas ferramentas de auxílio à tradução no trabalho de tradutores profissionais, a discussão das conseqüências de sua utilização assume especial interesse. O presente estudo concentra-se em apenas uma dessas ferramentas: os sistemas de memória de tradução, que surgiram prometendo ganhos de produtividade, maior consistência e economia. O objetivo é analisar os principais fatores que levam a problemas de qualidade nesses sistemas e apresentar sugestões para melhorar o controle da qualidade realizado, ressaltando a necessidade de manutenção e revisão das memórias para que realmente sirvam ao propósito de serem ferramentas e não empecilhos para o tradutor. Essas questões serão analisadas no contexto do mercado de localização de software, segmento em que as memórias de tradução são amplamente utilizadas, à luz das abordagens contemporâneas sobre qualidade da tradução. / [en] Considering the increasingly important role played by computer-aided translation tools in the work of professional translators, the discussion about their use gains special interest. This study focuses on only one of these tools: translation memory systems, which were developed to ensure productivity gains, more consistency and cost savings. The objective is to analyze the major factors leading to quality problems in such systems and to suggest ways to enhance quality control, emphasizing the need for updating and reviewing the translation memories so that they can actually serve as translation aids rather than handicaps. These issues will be analyzed in the context of the software localization market, a segment in which translation memories are widely used, in the light of contemporary approaches to translation quality assessment.

Translation Memory System Optimization : How to effectively implement translation memory system optimization / Optimering av översättningsminnessystem : Hur man effektivt implementerar en optimering i översättningsminnessystem

Chau, Ting-Hey January 2015 (has links)
Translation of technical manuals is expensive, especially when a larger company needs to publish manuals for its whole product range in over 20 different languages. When a text segment (i.e. a phrase, sentence or paragraph) is manually translated, we would like to reuse these translated segments in future translation tasks. A translated segment is stored with its corresponding source language, often called a language pair, in a Translation Memory System. A language pair in a Translation Memory represents a Translation Entry, also known as a Translation Unit. During a translation, when a text segment in a source document matches a segment in the Translation Memory, the available target languages in the Translation Unit will not require a human translation; the previously translated segment can be inserted into the target document. Such functionality is provided in the single-source publishing software Skribenta, developed by Excosoft. Skribenta requires text segments in source documents to find an exact or a full match in the Translation Memory in order to apply a translation to a target language. A full match can only be achieved if a source segment is stored in a standardized form, which requires manual tagging of entities and often reoccurring words such as model names and product numbers. This thesis investigates different ways to improve and optimize a Translation Memory System. One way was to aid users with the work of manually tagging entities by developing heuristic algorithms to approach the problem of Named Entity Recognition (NER). The evaluation results from the developed heuristic algorithms were compared with the results from an off-the-shelf NER tool developed by Stanford. The results show that the developed heuristic algorithms achieve a higher F-measure compared to the Stanford NER, and may be a good initial step in helping Excosoft’s users improve their Translation Memories.
/ Översättning av tekniska manualer är väldigt kostsamt, speciellt när större organisationer behöver publicera produktmanualer för hela deras utbud till över 20 olika språk. När en text (t.ex. en fras, mening, paragraf) har blivit översatt så vill vi kunna återanvända den översatta texten i framtida översättningsprojekt och dokument. De översatta texterna lagras i ett översättningsminne (Translation Memory). Varje text lagras i sitt källspråk tillsammans med dess översättning på ett annat språk, så kallat målspråk. Dessa utgör då ett språkpar i ett översättningsminnessystem (Translation Memory System). Ett språkpar som lagras i ett översättningsminne utgör en Translation Entry även kallat Translation Unit. Om man hittar en matchning när man söker på källspråket efter en given textsträng i översättningsminnet, får man upp översättningar på alla möjliga målspråk för den givna textsträngen. Dessa kan i sin tur sättas in i måldokumentet. En sådan funktionalitet erbjuds i publiceringsprogramvaran Skribenta, som har utvecklats av Excosoft. För att utföra en översättning till ett målspråk kräver Skribenta att text i källspråket hittar en exakt matchning eller en s.k. full match i översättningsminnet. En full match kan bara uppnås om en text finns lagrad i standardform. Detta kräver manuell taggning av entiteter och ofta förekommande ord som modellnamn och produktnummer. I denna uppsats undersöker jag hur man effektivt implementerar en optimering i ett översättningsminnessystem, bland annat genom att underlätta den manuella taggningen av entiteter. Detta har gjorts genom olika Heuristiker som angriper problemet med Named Entity Recognition (NER). Resultat från de utvecklade Heuristikerna har jämförts med resultatet från det NER-verktyg som har utvecklats av Stanford. 
Resultaten visar att de Heuristiker som jag utvecklat uppnår ett högre F-Measure jämfört med Stanford NER och kan därför vara ett bra inledande steg för att hjälpa Excosofts användare att förbättra deras översättningsminnen.
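The standardization step this thesis motivates — tagging entities such as product numbers so that otherwise identical segments can still yield a full match — can be sketched as a pre-matching normalization. The regex pattern and all segments below are invented for illustration and are not Excosoft's or the thesis's actual rules.

```python
import re

# Hypothetical pattern for product-number-like tokens, e.g. "XK-200".
# A real system would derive such rules from the customer's naming scheme.
PRODUCT_NO = re.compile(r"\b[A-Z]{2,}-?\d{2,}\b")

def normalize(segment):
    """Replace product-number-like entities with a placeholder tag."""
    return PRODUCT_NO.sub("<PRODUCT>", segment)

def full_match(segment, tm):
    """Look up a segment in the TM after normalization; None if no full match."""
    return tm.get(normalize(segment))
```

With this, a segment mentioning "XK-350" can reuse the stored translation of an otherwise identical segment that originally mentioned "XK-200", since both normalize to the same standardized form.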

Automatisk kvalitetskontroll av terminologi i översättningar / Automatic quality checking of terminology in translations

Edholm, Lars January 2007 (has links)
Kvalitet hos översättningar är beroende av korrekt användning av specialiserade termer, som kan göra översättningen lättare att förstå och samtidigt minska tidsåtgång och kostnader för översättningen (Lommel, 2007). Att terminologi används konsekvent är viktigt, och något som bör granskas vid en kvalitetskontroll av exempelvis översatt dokumentation (Esselink, 2000). Det finns idag funktioner för automatisk kontroll av terminologi i flera kommersiella program. Denna studie syftar till att utvärdera sådana funktioner, då ingen tidigare större studie av detta har påträffats. För att få en inblick i hur kvalitetskontroll sker i praktiken genomfördes först två kvalitativa intervjuer med personer involverade i detta på en översättningsbyrå. Resultaten jämfördes med aktuella teorier inom området och visade på stor överensstämmelse med vad exempelvis Bass (2006) förespråkar. Utvärderingarna inleddes med en granskning av täckningsgrad hos en verklig termdatabas jämfört med subjektivt markerade termer i en testkorpus baserad på ett autentiskt översättningsminne. Granskningen visade dock på relativt låg täckningsgrad. För att öka täckningsgraden modifierades termdatabasen, bland annat utökades den med längre termer ur testkorpusen. Därefter kördes fyra olika programs funktion för kontroll av terminologi i testkorpusen jämfört med den modifierade termdatabasen. Slutligen modifierades även testkorpusen, där ett antal fel placerades ut för att få en mer idealiserad utvärdering. Resultaten i form av larm för potentiella fel kategoriserades och bedömdes som riktiga eller falska larm. Detta utgjorde basen för mått på kontrollernas precision och i den sista utvärderingen även deras recall. Utvärderingarna visade bland annat att det för terminologi i översättningar på engelska - svenska var mest fördelaktigt att matcha termdatabasens termer som delar av ord i översättningens käll- och målsegment. På så sätt kan termer med olika böjningsformer fångas utan stöd för språkspecifik morfologi. En orsak till många problem vid matchningen var utseendet på termdatabasens poster, som var mer anpassat för mänskliga översättare än för maskinell läsning. Utifrån intervjumaterialet och utvärderingarnas resultat formulerades rekommendationer kring införandet av verktyg för automatisk kontroll av terminologi. På grund av osäkerhetsfaktorer i den automatiska kontrollen motiveras en manuell genomgång av dess resultat. Genom att köra kontrollen på stickprov som redan granskats manuellt ur andra aspekter, kan troligen en lämplig omfattning av resultat att gå igenom manuellt erhållas. Termdatabasens kvalitet är avgörande för dess täckningsgrad för översättningar, och i förlängningen också för nyttan med att använda den för automatisk kontroll. / Quality in translations depends on the correct use of specialized terms, which can make the translation easier to understand as well as reduce the required time and costs for the translation (Lommel, 2007). Consistent use of terminology is important, and should be taken into account during quality checks of for example translated documentation (Esselink, 2000). Today, several commercial programs have functions for automatic quality checking of terminology. The aim of this study is to evaluate such functions since no earlier major study of this has been found. To get some insight into quality checking in practice, two qualitative interviews were initially carried out with individuals involved in this at a translation agency. The results were compared to current theories in the subject field and revealed a general agreement with for example the recommendations of Bass (2006). The evaluations started with an examination of the recall for a genuine terminology database compared to subjectively marked terms in a test corpus based on an authentic translation memory. The examination however revealed a relatively low recall. To increase the recall the terminology database was modified, it was for example extended with longer terms from the test corpus. After that, the function for checking terminology in four different commercial programs was run on the test corpus using the modified terminology database. Finally, the test corpus was also modified, by planting out a number of errors to produce a more idealized evaluation. The results from the programs, in the form of alarms for potential errors, were categorized and judged as true or false alarms. This constitutes a base for measures of precision of the checks, and in the last evaluation also of their recall. The evaluations showed that for terminology in translations of English to Swedish, it was advantageous to match terms from the terminology database using partial matching of words in the source and target segments of the translation. In that way, terms with different inflected forms could be matched without support for language-specific morphology. A cause of many problems in the matching process was the form of the entries in the terminology database, which were more suited for being read by human translators than by a machine. Recommendations regarding the introduction of tools for automatic checking of terminology were formulated, based on the results from the interviews and evaluations. Due to factors of uncertainty in the automatic checking, a manual review of its results is motivated. By running the check on a sample that has already been manually checked in other aspects, a reasonable number of results to manually review can be obtained. The quality of the terminology database is crucial for its recall on translations, and in the long run also for the value of using it for automatic checking.
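The partial-matching strategy this study found advantageous for English-Swedish can be sketched as a simple substring check: if a source term occurs in the source segment, the expected target term should occur (possibly as part of an inflected word) in the target segment, otherwise an alarm is raised. The term pair and segments below are invented examples, not data from the study.

```python
def check_terminology(src_seg, tgt_seg, term_pairs):
    """Return (source_term, expected_target_term) pairs flagged as alarms.

    Partial (substring) matching lets 'skrivare' match the inflected form
    'skrivarens' without any language-specific morphology support.
    """
    alarms = []
    src, tgt = src_seg.lower(), tgt_seg.lower()
    for src_term, tgt_term in term_pairs.items():
        if src_term.lower() in src and tgt_term.lower() not in tgt:
            alarms.append((src_term, tgt_term))
    return alarms
```

As the study notes, such alarms are only *potential* errors (substring matching also produces false alarms), which is one reason a manual review of the checker's output remains necessary.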

Traduzir na contemporaneidade: efeitos da adoção de sistemas de memórias sobre a concepção ética da prática tradutória

Stupiello, Érika Nogueira de Andrade [UNESP] 25 March 2010 (has links) (PDF)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / As transformações experimentadas no mundo considerado globalizado têm gerado o crescimento do montante de informações e a urgência de disseminação das mesmas além fronteiras, promovendo o expressivo aumento da demanda por traduções elaboradas de maneira rápida e segundo padrões de produção específicos. Para atender a essas exigências e manterem-se competitivos, tradutores, cada vez mais, estão lançando mão das ferramentas tecnológicas atualmente disponíveis, em especial, sistemas de memórias de tradução. A aplicação dessas ferramentas requer que o tradutor siga determinadas regras que garantam o desempenho prometido, especialmente pela manipulação de termos e fraseologias utilizados na tradução a fim de garantir seu reaproveitamento em trabalhos futuros. A crescente adoção de ferramentas pelo tradutor contemporâneo suscita uma reflexão de cunho ético sobre a extensão de sua responsabilidade pelo material traduzido. Visando a esse fim, nesta tese, investigam-se os pressupostos teóricos que sustentam os projetos dessas ferramentas tecnológicas de tradução, analisando-se tanto as contribuições que elas têm proporcionado ao tradutor, como algumas das questões que procedem do modo como a profissão é concebida como resultado do uso dos recursos por elas disponibilizados. Para fomentar a análise proposta, foram examinados os recursos pressupostos como dinamizadores do trabalho do tradutor, principalmente pelas funções de segmentação do texto de origem, alinhamento de traduções e pelo processo de correspondência textual disponíveis em três sistemas de memória: o Wordfast, o Trados e o Transit. 
O estudo dos projetos e dos recursos disponibilizados por essas ferramentas auxiliou a análise sobre o envolvimento do tradutor com a tradução, quando esse profissional integra um processo maior de produção... / Transformations in the globalized world have generated the growth of the amount of information and the urgency of its dissemination beyond borders, promoting a significant increase in the demand for translations performed fast and according to specific production standards. In order to comply with these requirements and remain competitive, translators are more and more embracing the technological tools currently available, mainly, translation memory systems. The application of these tools requires the translator to follow certain rules that guarantee the promised performance, mainly by manipulating terms and phraseologies used in the translation so as to ascertain their reuse in future translations. The growing adoption of tools by the contemporary translator calls for an ethical consideration of the extension of the translator’s responsibility for the translated material. In this thesis, the theoretical assumptions supporting the projects of these translation technological tools are investigated through the analysis of both the contributions they have been providing for the translator and some issues that arise from the way the profession is conceived as a result of the use of the resources made available by these tools. To foment the proposed analysis, resources deemed to make the translator’s work more dynamic have been examined, mainly through the functions of source-text segmentation, translation alignment and textual matching available in three translation memory systems: Wordfast, Trados and Transit. 
The study of the projects and resources made available by these tools encouraged the analysis of the translator’s involvement with the translation when he/she is part of a larger process of production and distribution of information to audiences located in the most varied places in the world. From this analysis, a survey was carried out of issues...

Managing Terminology for Translation Using Translation Environment Tools: Towards a Definition of Best Practices

Gómez Palou Allard, Marta 03 May 2012 (has links)
Translation Environment Tools (TEnTs) became popular in the early 1990s as a partial solution for coping with ever-increasing translation demands and the decreasing number of translators available. TEnTs allow the creation of repositories of legacy translations (translation memories) and terminology (integrated termbases) used to identify repetition in new source texts and provide alternate translations, thereby reducing the need to translate the same information twice. While awareness of the important role of terminology in translation and documentation management has been on the rise, little research is available on best practices for building and using integrated termbases. The present research is a first step toward filling this gap and provides a set of guidelines on how best to optimize the design and use of integrated termbases. Based on existing translation technology and terminology management literature, as well as our own experience, we propose that traditional terminology and terminography principles designed for stand-alone termbases should be adapted when an integrated termbase is created in order to take into account its unique characteristics: active term recognition, one-click insertion of equivalents into the target text and document pretranslation. The proposed modifications to traditional principles cover a wide range of issues, including using record structures with fewer fields, adopting the TBX-Basic record structure, classifying records by project or client, creating records based on equivalent pairs rather than concepts in cases where synonyms exist, recording non-term units and multiple forms of a unit, and using translated documents as sources. The overarching hypothesis and its associated concrete strategies were evaluated first against a survey of current practices in terminology management within TEnTs and later through a second survey that tested user acceptance of the strategies. 
The result is a set of guidelines that describe best practices relating to design, content selection and information recording within integrated termbases that will be used for translation purposes. These guidelines will serve as a point of reference for new users of TEnTs, as an academic resource for translation technology educators, as a map of challenges in terminology management within TEnTs that translation software developers seek to resolve and, finally, as a springboard for further research on the optimization of integrated termbases for translation.
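Several of the guidelines summarized in this abstract (fewer fields, records classified by client or project, multiple forms of a unit, active term recognition) can be made concrete in a small sketch. The record shape below is an invented illustration loosely inspired by the ideas above, not an implementation of TBX-Basic or the thesis's actual guidelines.

```python
from dataclasses import dataclass, field

@dataclass
class TermRecord:
    """A deliberately lean termbase record: one equivalent pair per record,
    classified by client, with room for alternate/inflected forms."""
    source_term: str
    target_term: str
    client: str
    forms: list = field(default_factory=list)  # alternate or inflected forms
    note: str = ""

def lookup(records, client, source_term):
    """Active term recognition restricted to one client's records."""
    return [r for r in records
            if r.client == client
            and (source_term == r.source_term or source_term in r.forms)]
```

A hit returned by `lookup` is what a TEnT would then offer for one-click insertion of the equivalent into the target text.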

Sistemas de memórias de tradução e tecnologias de tradução automática: possíveis efeitos na produção de tradutores em formação / Translation memory systems and machine translation: possible effects on the production of translation trainees

Talhaferro, Lara Cristina Santos 26 February 2018 (has links)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / O processo da globalização, que tem promovido crescente circulação de informações multilíngues em escala mundial, tem proporcionado notáveis mudanças no mercado da tradução. No contexto globalizado, para manterem-se competitivos e atenderem à demanda de trabalho, a qual conta com frequentes atualizações de conteúdo e prazos reduzidos, os tradutores passaram a adotar ferramentas de tradução assistidas por computador em sua rotina de trabalho. Duas dessas ferramentas, utilizadas principalmente por tradutores das áreas técnica, científica e comercial, são os sistemas de memórias de tradução e as tecnologias de tradução automática. O emprego de tais recursos pode ter influências imprevisíveis nas traduções, sobre as quais os tradutores raramente têm oportunidade de ponderar. Se os profissionais são iniciantes ou se lhes falta experiência em determinada ferramenta, essa influência pode ser ainda maior. 
Considerando que os profissionais novatos tendem a utilizar cada vez mais as ferramentas disponíveis para aumentar sua eficiência, neste trabalho são investigados os possíveis efeitos do uso de sistemas de memórias de tradução e tecnologias de tradução automática, especificamente o sistema Wordfast Anywhere e um de seus tradutores automáticos, o Google Cloud Translate API, nas escolhas de graduandos em Tradução. Foi analisada a aplicação dessas ferramentas na tradução (inglês/português) de quatro abstracts designados a dez alunos do quarto ano do curso de Bacharelado em Letras com Habilitação de Tradutor da Unesp de São José do Rio Preto, divididos em três grupos: os que fizeram o uso do Wordfast Anywhere, os que utilizaram essa ferramenta para realizar a pós-edição da tradução feita pelo Google Cloud Translate API e os que não utilizaram nenhuma dessas ferramentas para traduzir os textos. Tal exame consistiu de uma análise numérica entre as traduções, com a ajuda do software Turnitin e uma análise contrastiva da produção dos alunos, em que foram considerados critérios como tempo de realização da tradução, emprego da terminologia específica, coesão e coerência textual, utilização da norma culta da língua portuguesa e adequação das traduções ao seu fim. As traduções também passaram pelo exame de profissionais das áreas sobre as quais tratam os abstracts, para avaliá-las do ponto de vista de um usuário do material traduzido. Além de realizarem as traduções, os alunos responderam a um questionário, em que esclarecem seus hábitos e suas percepções sobre as ferramentas computacionais de tradução. A análise desses trabalhos indica que a automação não influenciou significativamente na produção das traduções, confirmando nossa hipótese de que o tradutor tem papel central nas escolhas terminológicas e na adequação do texto traduzido a seu fim. 
/ Globalization has promoted a growing flow of multilingual information worldwide, causing significant changes in translation market. In this scenario, translators have been employing computer-assisted translation tools (CAT Tools) in a proficient way to meet the demand for information translated into different languages in condensed turnarounds. Translation memory systems and machine translation are two of these tools, used especially when translating technical, scientific and commercial texts. This configuration may have inevitable influences in the production of translated texts. Nonetheless, translators seldom have the opportunity to ponder on how their production may be affected by the use of these tools, especially if they are novice in the profession or lack experience with the tools used. Seeking to examine how the work of translators in training may be influenced by translation memory systems and machine translation technologies they employ, this work investigates how a translation memory system, Wordfast Anywhere, and one of its machine translation tools, Google Cloud Translate API, may affect the choices of Translation trainees. To achieve this goal, we present an analysis of English-to-Portuguese translations of four abstracts assigned to ten students of the undergraduate Program in Languages with Major in Translation at São Paulo State University, divided into three groups: one aided by Wordfast Anywhere, one aided by Google Cloud Translate API, and one unassisted by any of these tools. This study consists of a numerical analysis, assisted by Turnitin, and a comparative analysis, whose aspects examined are the following: time spent to perform the translation, use of specific terminology, cohesion and coherence, use of standard Portuguese, and suitability for their purposes. Apart from this analysis, a group of four experts were consulted on the translations as users of their content. 
Finally, the students completed a questionnaire on their habits and perceptions of CAT Tools. The examination of their work suggests that automation did not significantly influence the production of the translations, confirming our hypothesis that human translators are at the core of decision-making when it comes to terminological choices and the suitability of translated texts to their purpose. / 2016/07907-0
