1

The Effect of Prompt Accent on Elicited Imitation Assessments in English as a Second Language

Barrows, Jacob Garlin 01 March 2016
Elicited imitation (EI) assessment has been shown to have value as an inexpensive method for low-stakes tests (Cox & Davies, 2012), but little has been reported on the effect L2 accent has on test-takers' ability to understand and process the test items they hear. Furthermore, no study has investigated the effect of accent on EI test face validity. This study examined how the accent of the input audio files affected EI test difficulty, as well as test-takers' perceptions of that effect. Self-reports of students' exposure to different varieties of English were obtained from a pre-assessment survey. A 63-item EI test was then administered in which English language learners in the United States listened to test items in three varieties of English: American, Australian, and British. A post-assessment survey then gathered information on the perceived difficulty of the accented prompts. A many-facet Rasch analysis found that accent affected item difficulty (separation reliability coefficient of .98), with British English the most difficult and American English the easiest. Survey results indicated that students perceived this increase in difficulty, and ANOVAs comparing the survey and test results indicated that student perceptions of the increase in difficulty aligned with actual performance: accents students were “Not at all Familiar” with resulted in significantly lower EI test scores than accents with which they were familiar. These findings suggest that prompt accent should be carefully considered in EI test development.
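For context, a many-facet Rasch analysis models each facet of the testing situation (here examinee, item, and prompt accent) as an additive term on the logit scale. A sketch of the standard formulation follows; the thesis's exact model specification is an assumption:

```latex
\ln\!\left(\frac{P_{niak}}{P_{nia(k-1)}}\right) = B_n - D_i - C_a - F_k
```

where \(B_n\) is the ability of examinee \(n\), \(D_i\) the difficulty of item \(i\), \(C_a\) the difficulty contributed by accent \(a\), and \(F_k\) the threshold for score category \(k\).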
2

Examining Rater Bias in Elicited Imitation Scoring: Influence of Rater's L1 and L2 Background to the Ratings

Son, Min Hye 16 July 2010
Elicited Imitation (EI), a method of assessing language learners' speaking ability, has been used for years, and many studies have documented rater bias (variance in test ratings associated with a specific rater and attributable to attributes of a test taker) in language assessment. In this project, I evaluated possible rater bias in EI ratings, focusing on bias attributable to raters' and test takers' language backgrounds. I reviewed the literature on rater bias, participated in a study of language background and rater bias, and produced recommendations for reducing bias in EI administration. Based on the bias effects discussed in the literature and on the results of that study, I also created a registration tool to collect raters' background information that may help evaluate and reduce rater bias in future EI testing. The project additionally involved producing a co-authored research paper, in which we found no bias effect based on rater first or second language background.
3

Elicited Imitation Testing as a Measure of Oral Language Proficiency at the Missionary Training Center

Moulton, Sara E. 15 March 2012
This study aimed to create an alternative method of measuring the language proficiency of English as a Second Language (ESL) missionaries at the Missionary Training Center (MTC). Elicited imitation (EI) testing was used as this measure, and an instrument was designed and tested with 30 ESL missionaries at the MTC. Results from the EI test were compared with an existing Language Speaking Assessment (LSA) currently in use at the MTC. EI tests were rated by human raters and also by computer using automatic speech recognition technology, and scores were compared across instruments and across scoring types. The EI test correlated highly with the LSA under both scoring methods, providing initial validity evidence for future testing and for use of the instrument in measuring language proficiency at the MTC.
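A minimal sketch of the kind of cross-instrument comparison described above, with invented scores and column names (the thesis's actual data and analysis will differ):

```python
import pandas as pd

# Hypothetical per-missionary scores: human-rated EI, ASR-rated EI,
# and the existing LSA criterion. All values are invented.
scores = pd.DataFrame({
    "ei_human": [72, 85, 64, 90, 58],
    "ei_asr":   [70, 88, 61, 93, 55],
    "lsa":      [3.0, 4.0, 2.5, 4.5, 2.0],
})

# Pairwise Pearson correlations across instruments and scoring types.
print(scores.corr(method="pearson"))
```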
4

Development and Validation of a Portuguese Elicited Imitation Test

Reynolds, Braden Beldon 13 April 2020
Elicited imitation (EI) is a method of assessing oral proficiency in which the examinee listens to a prompt and attempts to repeat it back exactly as heard. Research over recent decades has established correlations between EI testing and other oral proficiency tests, such as the Oral Proficiency Interview (OPI) and the OPI by computer (OPIc). This paper details the history of oral proficiency assessment as well as that of EI, then outlines the development and validation of a Portuguese EI test. Item selection and item validation are described, followed by criterion-related validation: a statistical correlation analysis of participants' results on an official American Council on the Teaching of Foreign Languages (ACTFL) OPIc against their predicted OPIc scores, which were based on their results on the Portuguese EI calibration test. The analysis revealed a strong correlation between the EI-predicted scores and the actual OPIc scores. To go beyond previous EI research, the paper also addresses face validity, which has been a challenge for the wider adoption of EI testing: a survey administered after participants completed the two tests (OPIc and EI) captured their experiences of and reactions to the two testing formats. Suggestions for future use of EI and for future research are presented.
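One way a calibration like this can be operationalized is with cut scores that map an overall EI score to a predicted OPIc level. The sketch below is illustrative only; the cut points are invented, and the thesis derived its own calibration:

```python
# Hypothetical cut scores mapping an EI percentage to an ACTFL OPIc level.
ACTFL_CUTS = [
    (90, "Advanced"),
    (70, "Intermediate High"),
    (50, "Intermediate Mid"),
    (30, "Intermediate Low"),
    (0,  "Novice"),
]

def predict_opic(ei_score: float) -> str:
    """Return the highest level whose cut score the EI score meets."""
    for cut, level in ACTFL_CUTS:
        if ei_score >= cut:
            return level
    return "Novice"

print(predict_opic(76))  # -> Intermediate High
```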
5

A Study of First Language Background and Second Language Order of Acquisition

Aitken, Meghan Elizabeth 18 July 2011
One major topic that often appears in textbooks on second language acquisition (SLA) is the order of acquisition of morphemes. Much research has been done on the issue, and a particular acquisition order has been accepted by many in the field for second language learners of English. This order is deemed invariant and unaffected by the native language of the learner. This thesis examines that claim, using an elicited imitation test targeting nine English morphemes. The results show that a learner's native language does have an effect on the order of acquisition of morphemes; however, only a few limited claims can be made regarding this order (for example, speakers of Japanese and Korean seem to acquire the auxiliary morpheme earlier than speakers of other languages). Previous research is examined in light of the differences between this and other studies, with a specific focus on methodological issues that could significantly affect both the results and the interpretation of results in morpheme-order studies.
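One simple way to operationalize "order of acquisition" from EI data is to rank morphemes by mean accuracy within each L1 group, as in this sketch with invented data (the thesis's actual analysis is not specified in the abstract):

```python
import pandas as pd

# Toy item-level EI results: one row per learner-by-morpheme observation.
df = pd.DataFrame({
    "l1":       ["Japanese", "Japanese", "Spanish", "Spanish"] * 2,
    "morpheme": ["aux", "plural"] * 4,
    "correct":  [1, 0, 0, 1, 1, 1, 0, 1],
})

# Mean accuracy per L1 group and morpheme; ranking the means within each
# L1 gives a rough acquisition order to compare across native languages.
acc = df.groupby(["l1", "morpheme"])["correct"].mean().unstack()
print(acc.rank(axis=1, ascending=False))
```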
6

Towards optimal measurement and theoretical grounding of L2 English elicited imitation: Examining scales, (mis)fits, and prompt features from item response theory and random forest approaches

Ji-young Shin (11560495) 14 October 2021
The present dissertation investigated the impact of scales/scoring methods and prompt linguistic features on the measurement quality of L2 English elicited imitation (EI). Scales/scoring methods are important for the validity and reliability of an L2 EI test, but little is known about them (Yan et al., 2016). Prompt linguistic features are also known to influence EI test quality, particularly item difficulty, but item discrimination and corpus-based, fine-grained measures have rarely been incorporated into examinations of the contribution of prompt linguistic features. The current study addressed these research needs using item response theory (IRT) and random forest modeling.

Data consisted of 9,348 oral responses to forty-eight items, including EI prompts, item scores, and rater comments, collected from 779 examinees of an L2 English EI test at Purdue University. First, the study explored the current and alternative EI scales/scoring methods that measure grammatical/semantic accuracy, focusing on optimal IRT-based measurement qualities (RQ1 through RQ4 in Phase I). Next, the project identified important prompt linguistic features that predict EI item difficulty and discrimination across different scales/scoring methods and proficiency levels, using multi-level modeling and random forest regression (RQ5 and RQ6 in Phase II).

The main findings were (although not limited to): 1) collapsing the exact-repetition and paraphrase categories led to more optimal measurement (i.e., adequacy of item parameter values, category functioning, and model/item/person fit) (RQ1); 2) there were fewer misfitting persons with lower proficiency and a higher frequency of unexpected responses in the extreme categories (RQ2); 3) the inconsistency of qualitatively distinguishing semantic errors and the wide range of grammatical accuracy in the minor-error category contributed to misfit (RQ3); 4) a quantity-based, 4-category ordinal scale outperformed quality-based or binary scales (RQ4); 5) sentence length significantly explained item difficulty only, with little variance explained (RQ5); and 6) corpus-based lexical measures and phrase-level syntactic complexity were important for predicting item difficulty, particularly at the higher ability level (RQ6). The findings have implications for EI scale/item development in human and automatic scoring settings and for L2 English proficiency development.
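A sketch of the random forest approach to predicting item difficulty from prompt features, with synthetic data standing in for the dissertation's 48 items and its corpus-based measures (feature names are assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic prompt features for 48 items: sentence length, mean lexical
# frequency, and phrase-level syntactic complexity.
X = rng.normal(size=(48, 3))
# Synthetic IRT difficulty estimates standing in for the b-parameters.
y = 0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.2, size=48)

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Impurity-based importances indicate which features drive difficulty.
for name, imp in zip(["length", "lex_freq", "phrase_complexity"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```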
7

Elicited Imitation and Automated Speech Recognition: Evaluating Differences among Learners of Japanese

Tsuchiya, Shinsuke 05 July 2011
This study addresses the usefulness of elicited imitation (EI) and automated speech recognition (ASR) as tools for second language acquisition (SLA) research by evaluating differences among learners of Japanese. The findings indicate that the EI and ASR grading system used in this study was able to differentiate between beginning- and advanced-level learners, as well as between instructed and self-instructed learners. No significant difference was found between self-instructed learners with and without post-mission instruction. The procedure, reliability, and validity of the ASR-based computerized EI test are discussed. The results and discussion provide insights into different types of second language (L2) development and the effects of instruction, along with implications for teaching and the limitations of the EI and ASR grading system.
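ASR-based EI grading systems typically compare the recognized response against the prompt; a common ingredient is a word-level edit distance, sketched below (the study's actual grading system is not described in the abstract, so this is only an illustration):

```python
def word_error_count(prompt: str, response: str) -> int:
    """Word-level Levenshtein distance between prompt and ASR transcript."""
    ref, hyp = prompt.lower().split(), response.lower().split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

print(word_error_count("the cat sat on the mat", "the cat sat on a mat"))  # -> 1
```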
8

A Study of the Correlation Between Working Memory and Second Language EI Test Scores

Okura, Eve Kiyomi 10 June 2011
A principal argument against the use of elicited imitation (EI) to measure L2 oral proficiency is that performance does not require linguistic knowledge, only rote memorization. This study addressed the issue by administering two tests to the same group of students studying English as a second language: (1) a working memory test and (2) an English oral proficiency EI test. Participants came from a range of English proficiency levels. A Pearson correlation was computed between participants' scores on the two tests. The hypothesis was that English EI scores and working memory scores would not correlate significantly, which would suggest that the two tests differ in what they measure and that the EI test does measure knowledge of the language to some degree. The Pearson correlation revealed a small positive correlation between working memory and English EI scores that was not significant. There was also a significant positive correlation between students' English EI scores and their English Language Center (ELC) level. These findings suggest that the English EI test functions fundamentally as a language test, not as a working memory test.
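The core statistical move here is a significance test on a Pearson correlation; a minimal sketch with invented paired scores:

```python
from scipy.stats import pearsonr

# Invented paired scores: working-memory span and EI score per student.
wm = [4, 6, 5, 7, 3, 6, 5, 8]
ei = [61, 70, 49, 68, 58, 52, 75, 63]

r, p = pearsonr(wm, ei)
# The study's logic: a small, non-significant r would suggest the EI test
# is not merely measuring rote memory.
print(f"r = {r:.2f}, p = {p:.3f}, significant at .05: {p < 0.05}")
```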
9

Fluency Features and Elicited Imitation as Oral Proficiency Measurement

Christensen, Carl V. 07 July 2012
The objective, automatic grading of oral language tests has been the subject of significant research in recent years, but several obstacles remain. Recent work has suggested that a testing technique called elicited imitation (EI) can accurately approximate global oral proficiency. This methodology, however, does not capture some fundamental aspects of language, such as fluency. Other work has suggested another technique, simulated speech (SS), as a supplement to EI that can provide automated fluency metrics. In this work, I investigate combining fluency features extracted from SS test responses with EI test scores to more accurately predict oral language proficiency. I also investigate the role of EI as an oral language test and the optimal method of extracting fluency features from SS sound files. Results demonstrate that EI and SS together more effectively predict hand-scored SS test item scores. Finally, I discuss the implications of this work for future automated oral testing.
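A sketch of the combination the thesis investigates: regressing a proficiency criterion on EI scores plus SS-derived fluency features. All names and data below are invented:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 60

# Synthetic predictors: an EI score plus two SS fluency features.
ei_score    = rng.uniform(0, 100, n)
speech_rate = rng.normal(3.5, 0.5, n)   # syllables per second
pause_len   = rng.normal(0.6, 0.2, n)   # mean pause length in seconds

X = np.column_stack([ei_score, speech_rate, pause_len])
# Synthetic hand-scored SS item scores as the criterion to predict.
y = 0.6 * ei_score + 8 * speech_rate - 5 * pause_len + rng.normal(0, 4, n)

model = LinearRegression().fit(X, y)
print(f"R^2 with EI + fluency features: {model.score(X, y):.2f}")
```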
10

Designing and Evaluating a Russian Elicited Imitation Test to Be Used at the Missionary Training Center

Burdis, Jacob R. 17 March 2014
Elicited Imitation (EI) is an assessment approach that uses sentence-imitation tasks to gauge the oral proficiency of test takers. EI tests have been created for several of the world's languages, including English, Spanish, Japanese, French, and Mandarin, but little research has examined the EI approach with learners of Russian. This dissertation describes a multi-faceted study, presented as two journal articles, on the creation and analysis of a Russian EI test. The test was created for and tested with Russian-speaking missionaries and employees at the Missionary Training Center (MTC) in Provo, UT. The first article describes the creation of the test and analyzes its ability to predict oral language proficiency by comparing individuals' EI scores with their scores on the Oral Proficiency Interview (OPI); the test effectively predicted an individual's OPI score (R² = .86). The second article analyzes the differences in person ability estimates and item difficulty measures between items from a general content bank and a religious content bank. The mean score for the content-specific items (x̄ = .51) was significantly higher than the mean score for the general items (x̄ = .44, p < .001), and the item difficulties for the religious items were significantly lower than those for the general items (p < .05).
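The general-versus-religious comparison reported above is the kind of result an independent-samples t-test over item scores could produce; a minimal sketch with invented item means (the dissertation's actual procedure, likely Rasch-based, is not specified in the abstract):

```python
from scipy.stats import ttest_ind

# Invented mean scores per item for the two content banks.
religious = [0.55, 0.48, 0.53, 0.50, 0.49]
general   = [0.46, 0.41, 0.44, 0.45, 0.43]

t, p = ttest_ind(religious, general)
print(f"t = {t:.2f}, p = {p:.4f}")
```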
