121

Google Translate in English Language Learning : A Study of Teachers' Beliefs and Practices

Laird Eriksson, Nickole January 2021 (has links)
The purpose of this study is to explore upper secondary school English teachers' beliefs and practices regarding free online machine translation (FOMT) tools. Students are widely assumed to be using these tools, but the focus of this study is on what teachers think and how they address FOMT usage by their students. Participants currently teach various English levels in upper secondary schools throughout Sweden and have varying degrees of experience. The study includes a brief review of previous research detailing teachers' attitudes and methods for incorporating machine translation (MT) into their language teaching. The theoretical framework combines language teacher cognition and translation in language teaching. The results reveal that previous research in this area has not yet influenced teachers' classroom methods; teachers' education and language learning experience may explain this disconnect from current research. A common theme is that teachers do not mind using FOMT tools in their personal lives but strongly recommend other sources for their students.
122

Directing Post-Editors’ Attention to Machine Translation Output that Needs Editing through an Enhanced User Interface: Viability and Automatic Application via a Word-level Translation Accuracy Indicator

Gilbert, Devin Robert 13 July 2022 (has links)
No description available.
123

Detecting Logical Errors in Programming Assignments Using code2seq

Lückner, Anton, Chapman, Kevin January 2023 (has links)
The demand for competent new programmers is increasing with society's ever-growing dependency on technology. Growing student numbers add to teachers' workload and create a need for more automated tools for feedback and grading. While some existing tools alleviate this to a degree, machine learning presents an interesting avenue for techniques and tools that do this more efficiently. Logical errors are common in novice code, so a model that could detect them would lighten teachers' workload and benefit students. This study explores the performance of the machine learning model code2seq in detecting logical errors. This is done through an empirical experiment in which a dataset of real-world Java code, modified so that each sample contains one specific logical error, is used to train, validate and test the code2seq model. The performance of the model is measured using the metrics accuracy, precision, recall and F1-score. The results of this study show promise for applying code2seq to logical-error detection, with potential for real-world use in classrooms.
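The four evaluation metrics named in the abstract can be sketched from a binary confusion matrix. This is a minimal illustrative implementation, not code from the study itself:

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall and F1 for binary labels (1 = error detected)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

In practice a library such as scikit-learn provides these directly; the sketch only makes the definitions concrete.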
124

A Hybrid System for Glossary Generation of Feature Film Content for Language Learning

Corradini, Ryan Arthur 04 August 2010 (has links) (PDF)
This report introduces a suite of command-line tools created to assist content developers with the creation of rich supplementary material to use in conjunction with feature films and other video assets in language teaching. The tools are intended to leverage open-source corpora and software (the OPUS OpenSubs corpus and the Moses statistical machine translation system, respectively), but are written in a modular fashion so that other resources could be leveraged in their place. The completed tool suite facilitates three main tasks, which together constitute this project. First, several scripts created for use in preparing linguistic data for the system are discussed. Next, a set of scripts are described that together leverage the strengths of both terminology management and statistical machine translation to provide candidate translation entries for terms of interest. Finally, a tool chain and methodology are given for enriching the terminological data store based on the output of the machine translation process, thereby enabling greater accuracy and efficiency with each subsequent application.
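The hybrid idea described above — combining terminology management with statistical MT — can be sketched as a simple lookup-with-fallback: a curated termbase answers known terms, and an MT system proposes candidates for the rest. The function names here are illustrative placeholders, not the report's actual tools:

```python
def glossary_entry(term, termbase, mt_translate):
    """Return a translation and its provenance for one glossary term.

    termbase: dict mapping known terms to curated translations.
    mt_translate: any callable standing in for an MT system (e.g. Moses).
    """
    if term in termbase:
        # A curated, human-vetted translation always wins.
        return termbase[term], "termbase"
    # Otherwise propose an MT candidate, flagged for human review;
    # approved candidates can later enrich the termbase.
    return mt_translate(term), "mt-candidate"
```

This mirrors the enrichment loop the report describes: MT output feeds back into the terminological data store, improving coverage with each pass.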
125

Coherence and Cohesion in an ESL Academic Writing Environment: Rethinking the Use of Translation and FOMT in Language Teaching

Alimohammadi, Solmaz 20 January 2023 (has links)
For several years, the use of translation and specifically Machine Translation - including Free Online Machine Translation (FOMT) tools - in L2 curricula has been the subject of ongoing debate. Even though the use of such tools is commonly discouraged in L2 classrooms by educators, the persistence of English as a second language (ESL) students in utilizing the tools has inspired many scholars to investigate whether it is helpful to develop effective strategies that transform FOMT into a teaching/learning tool in the ESL/English for specific purposes (ESP) classroom. Specifically, scholars have examined how FOMT can impact or enhance the writing quality of ESL students' compositions in terms of coherence and cohesion. In line with the same research interests, this project examined ESL students' typical coherence/cohesion challenges in academic writing at an Ontario post-secondary institution offering courses in French. The study explored the writing behaviours, such as the use of technologies including FOMT, that influence these challenges. In addition, this project sought to ascertain whether ESL students can be trained to better achieve coherence/cohesion in academic writing and how this training affects their writing behaviours, with particular attention to the use of technologies such as FOMT. In doing so, the study employed a mixed-methods research design and collected survey data, writing samples and screen recordings from 6 high-intermediate-level ESL students. Survey data was also collected from 23 ESL instructors about ESL students' practices, including tool use. Semi-structured interviews were conducted with the students and 3 instructors who evaluated the writing samples. Based on the survey results, all the students demonstrated a positive attitude toward FOMT tools, and 5 students used the tools during the writing process in this project. In contrast, the instructors reported divided opinions about such tools for ESL writing purposes. 
The results showed that instruction can assist students in improving their text quality in terms of coherence and cohesion. The results also showed that FOMT can assist students in constructing their texts during the writing process, and that this assistance can have a subsequent positive impact on the coherence and cohesion of the produced texts.
126

Multilingual Neural Machine Translation for Low Resource Languages

Lakew, Surafel Melaku 20 April 2020 (has links)
Machine Translation (MT) is the task of mapping a source language to a target language. The recent introduction of neural MT (NMT) has shown promising results for high-resource languages but performs poorly in low-resource language (LRL) settings. Furthermore, the vast majority of the 7,000+ languages around the world have no parallel data, creating a zero-resource language (ZRL) scenario. In this thesis, we present our approach to improving NMT for LRLs and ZRLs by leveraging multilingual NMT modeling (M-NMT), an approach that builds a single NMT model to translate across multiple source and target languages. The thesis i) analyzes the effectiveness of M-NMT for LRL and ZRL translation tasks across two NMT architectures (recurrent and Transformer), ii) presents a self-learning approach for improving the zero-shot translation directions of ZRLs, iii) proposes a dynamic transfer-learning approach from a pre-trained (parent) model to an LRL (child) model by tailoring to the vocabulary entries of the latter, iv) extends M-NMT to translate from a source language to specific language varieties (e.g. dialects), and finally v) proposes an approach that can control the verbosity of an NMT model's output. Our experimental findings show the effectiveness of the proposed approaches in improving NMT for LRLs and ZRLs.
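The standard mechanism behind a single model that "translates across multiple source and target languages" is to prepend a target-language token to each source sentence, so one shared encoder-decoder learns all directions. A minimal sketch of that preprocessing step, with illustrative tag names (the thesis's exact tagging scheme may differ):

```python
def tag_source(source_tokens, target_lang):
    """Prepend a target-language tag so one model serves many translation directions."""
    return [f"<2{target_lang}>"] + source_tokens

# A German sentence to be translated into Italian by the shared model:
tagged = tag_source(["guten", "morgen"], "it")
# tagged == ["<2it>", "guten", "morgen"]
```

Because the tag is the only direction signal, the model can be asked at inference time to translate into a language pair never seen in training, which is what makes zero-shot directions possible at all.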
127

Low-Resource Domain Adaptation for Jihadi Discourse : Tackling Low-Resource Domain Adaptation for Neural Machine Translation Using Real and Synthetic Data

Tollersrud, Thea January 2023 (has links)
In this thesis, I explore the problem of low-resource domain adaptation for jihadi discourse. Due to the limited availability of annotated parallel data, developing accurate and effective models in this domain poses a challenging task. To address this issue, I propose a method that leverages a small in-domain manually created corpus and a synthetic corpus created from monolingual data using back-translation. I evaluate the approach by fine-tuning a pre-trained language model on different proportions of real and synthetic data and measuring its performance on a held-out test set. My experiments show that fine-tuning a model on one-fifth real parallel data and synthetic parallel data effectively reduces occurrences of over-translation and bolsters the model's ability to translate in-domain terminology. My findings suggest that synthetic data can be a valuable resource for low-resource domain adaptation, especially when real parallel data is difficult to obtain. The proposed method can be extended to other low-resource domains where annotated data is scarce, potentially leading to more accurate models and better translation of these domains.
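The back-translation step described above can be sketched as follows: monolingual target-side text is translated into the source language by a reverse model, and each output is paired with its original to form synthetic parallel data. `reverse_translate` here is a placeholder for any target-to-source MT system, not the thesis's specific model:

```python
def back_translate(monolingual_targets, reverse_translate):
    """Build synthetic (source, target) pairs from monolingual target-side text.

    monolingual_targets: list of sentences in the target language.
    reverse_translate: callable translating target-language text back
                       into the source language.
    """
    synthetic_pairs = []
    for target_sentence in monolingual_targets:
        # The synthetic source may be noisy; the target side is real text,
        # which is what makes the pairs useful for training.
        synthetic_source = reverse_translate(target_sentence)
        synthetic_pairs.append((synthetic_source, target_sentence))
    return synthetic_pairs
```

The resulting pairs are then mixed with the small in-domain manually created corpus for fine-tuning, in the real-to-synthetic proportions the thesis evaluates.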
128

Automatic Post-Editing for Machine Translation

Chatterjee, Rajen 16 October 2019 (has links)
Automatic Post-Editing (APE) aims to correct systematic errors in machine-translated text. It is primarily useful when the machine translation (MT) system is not accessible for improvement, leaving APE as a viable downstream option to improve translation quality, which is the focus of this thesis. The field has received less attention than MT for several reasons, including the limited availability of data for sound research, contrasting views reported by different researchers about the effectiveness of APE, and limited industry interest in using APE in production pipelines. In this thesis, we perform a thorough investigation of APE as a downstream task in order to: i) understand its potential to improve translation quality; ii) advance the core technology, from classical methods to recent deep-learning-based solutions; iii) cope with limited and sparse data; iv) better leverage multiple input sources; v) mitigate the task-specific problem of over-correction; vi) enhance neural decoding to leverage external knowledge; and vii) establish an online learning framework to handle data diversity in real time. These contributions are discussed across several chapters, and most of them are evaluated in the APE shared task organized each year at the Conference on Machine Translation. Our efforts in improving the technology resulted in the best system at the 2017 APE shared task, and our work on online learning received a distinguished paper award at the Italian Conference on Computational Linguistics. Overall, the outcomes and findings of our work have boosted interest among researchers and attracted industry to examine this technology for solving real-world problems.
129

Machine Translation Through the Creation of a Common Embedding Space

Sandvick, Joshua 11 December 2018 (has links)
No description available.
130

Learning to Rank Algorithms and Their Application in Machine Translation

Xia, Tian January 2015 (has links)
No description available.
