Translation quality assessment remains a pertinent issue in both translation theory and the translation industry. Assessing the quality of a target document or an individual's translation competence demands considerable time and money from governments, organizations, and individuals. In response to this issue, this project builds on the ongoing research of Hague et al. (2012), who seek to determine the capabilities of a computerized translation test for the French-to-English and Spanish-to-English language pairs. Specifically, Hague et al. (2012) ask whether a good score on a computer-scored, detect-and-correct style translation test also indicates a good score on a traditional, hand-scored full translation test. This project extends that research by addressing the same question for the Arabic-to-English language pair.

The method involves administering two different styles of translation test to each subject and comparing the results. The first test uses a detect-and-correct format: the subject receives project specifications in the form of a translation brief, a source-text passage, and a corresponding target-text passage into which errors have been introduced. The subject is expected to detect and correct the errors while leaving the rest of the text untouched; this test is scored by an automated algorithm. The second is a traditional translation test: the subject receives the same translation brief and a source text and is expected to produce an acceptable target text, which is then scored by hand. Thereafter, various forms of analysis are used to determine the relationship between the scores of the two types of tests (see the sketch following this abstract).

For the subject population studied, the results do not strongly suggest that a high score on the detect-and-correct test indicates a high score on the hand-graded full translation test. The research nonetheless provides insight, particularly into whether the detect-and-correct test actually measures translation competence and into second language acquisition (SLA) programs and their aims. It also sheds light on logistical issues in testing, such as the effect that text difficulty and length may have on a detect-and-correct style test, and the negative effect that the American Translators Association (ATA) grading practices of weighting and capping errors can have on an experiment of this kind.
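The abstract does not specify which statistical analyses were used to relate the two sets of scores. As a rough, hedged illustration only, the Python sketch below computes Pearson and Spearman correlations between a machine-scored detect-and-correct score and a hand-scored full-translation score for each subject; the score values, variable names, and choice of statistics are assumptions for demonstration, not data or methods taken from the study.

```python
# Minimal sketch: relating scores from the two test formats.
# Assumes each subject has one detect-and-correct score (machine-graded)
# and one full-translation score (hand-graded). All numbers below are
# illustrative placeholders, not results from the thesis.
from scipy import stats

detect_and_correct = [78, 85, 62, 90, 71, 66, 80]   # computer-scored test
full_translation   = [74, 88, 70, 84, 65, 72, 77]   # hand-scored test

# Pearson r measures linear association between the two score sets;
# Spearman rho asks whether both tests rank the subjects similarly.
pearson_r, pearson_p = stats.pearsonr(detect_and_correct, full_translation)
spearman_rho, spearman_p = stats.spearmanr(detect_and_correct, full_translation)

print(f"Pearson r = {pearson_r:.2f} (p = {pearson_p:.3f})")
print(f"Spearman rho = {spearman_rho:.2f} (p = {spearman_p:.3f})")
```

A weak or non-significant correlation under either measure would be consistent with the study's finding that a high detect-and-correct score does not strongly predict a high hand-graded score.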
Identifier | oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-4107
Date | 11 June 2012 |
Creators | Kuhn, Amanda J. |
Publisher | BYU ScholarsArchive |
Source Sets | Brigham Young University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | http://lib.byu.edu/about/copyright/ |