
Machine Translation of Fictional and Non-fictional Texts: An examination of Google Translate's accuracy in translating fictional versus non-fictional texts.

This study examines machine-translated fictional and non-fictional texts in order to identify areas where machine translation can be useful, and to determine the extent to which each text type is suited to it. It additionally evaluates the performance of the free online translation tool Google Translate (GT). Translation quality was measured with the BLEU automatic evaluation metric for machine translation, yielding a score of 27.75 for the fictional texts and 32.16 for the non-fictional texts. The non-fictional samples comprise law documents, (commercial) company reports, social science texts (religion, welfare, astronomy), and medicine, and were selected for their degree of difficulty. The non-fictional sentences are longer than those of the fictional texts, a property with which MT systems have historically struggled. Despite their longer sentences, the non-fictional texts received a higher BLEU score than the fictional ones. One speculated reason for this is that non-fictional texts use more specific terminology, leaving less room for subjective interpretation, whereas fictional texts operate on additional levels of meaning that the human translator needs to capture.
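As a minimal sketch of how corpus-level BLEU scores like those reported above could be computed (this is not the study's actual pipeline; the sacrebleu library and the example sentences here are assumptions for illustration):

    # Scoring machine output against human reference translations with
    # corpus-level BLEU, using the sacrebleu library (assumed, not the
    # tool used in the study). Sentences below are hypothetical examples.
    import sacrebleu

    # Machine translations produced by the MT system, one segment per entry.
    hypotheses = [
        "The contract shall be governed by Swedish law.",
        "He walked slowly through the empty house.",
    ]

    # Human reference translations, aligned segment by segment.
    references = [
        "This contract is governed by Swedish law.",
        "Slowly, he walked through the empty house.",
    ]

    # sacrebleu expects a list of reference streams; a single stream here.
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    print(f"BLEU: {bleu.score:.2f}")  # 0-100 scale, like the 27.75 / 32.16 above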

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:su-106670
Date January 2014
Creators Salimi, Jonni
Publisher Stockholms universitet, Engelska institutionen
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess