  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Elicited Imitation Testing as a Measure of Oral Language Proficiency at the Missionary Training Center

Moulton, Sara E. 15 March 2012 (has links) (PDF)
This research study aimed to create an alternative method of measuring the language proficiency of English as a Second Language (ESL) missionaries at the Missionary Training Center (MTC). Elicited imitation (EI) testing was used as the measure of language proficiency, and an instrument was designed and tested with 30 ESL missionaries at the MTC. Results from the EI test were compared with an existing Language Speaking Assessment (LSA) currently in use at the MTC. EI tests were rated by human raters and also by a computer utilizing automatic speech recognition technology. Scores were compared across instruments and across scoring types. The EI test correlated highly with the LSA under both scoring methods, providing initial validity for future testing and use of the instrument in measuring language proficiency at the MTC.
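The score comparison described above amounts to correlating two instruments' ratings of the same examinees. A minimal sketch of that computation, with all scores invented for illustration (the actual MTC data are not reproduced here):

```python
# Hypothetical sketch: comparing two EI scoring methods against an existing
# assessment via Pearson correlation. All score values below are invented.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

lsa = [3.0, 4.5, 2.0, 5.0, 3.5]        # existing LSA ratings (invented)
ei_human = [2.8, 4.6, 2.2, 4.9, 3.4]   # EI scored by human raters (invented)
ei_asr = [2.5, 4.4, 2.4, 4.7, 3.6]     # EI scored via ASR (invented)

print(round(pearson(lsa, ei_human), 3))
print(round(pearson(lsa, ei_asr), 3))
```

In practice a study like this would also report significance and use a larger sample; the snippet only illustrates the "correlated highly" comparison across scoring types.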
32

Toward a Predictive Measure of L2 Proficiency: Linking Proficiency and Vocabulary in Spanish as a Foreign Language

Hoy, Rebekah F. 15 November 2011 (has links)
No description available.
33

A hierarchical approach to effective test generation for VHDL behavioral models

Rao, Sanat R. 04 August 2009 (has links)
This thesis describes the development of the Hierarchical Behavioral Test Generator (HBTG) for the testing of VHDL behavioral models. HBTG uses the Process Model Graph of the VHDL behavioral model as the base for test generation. Test sets for individual processes of the model are precomputed and stored in the design library. Using this information, HBTG hierarchically constructs a test sequence that tests the functionality of the model. The test sequence generated by HBTG is used for the simulation of the model. Various features present in HBTG and the implementation of the algorithm are discussed. The idea of an effective test sequence for a VHDL behavioral model is proposed. A system is presented to evaluate the quality of the test sequence generated by the algorithm. Test sequences and coverage results are given for several models. Some suggestions for future improvements to the tools are made. The HBTG forms part of a complete CAD system for rapid development and testing of VHDL behavioral models. / Master of Science
34

Developing a model for investigating the impact of language assessment within educational contexts by a public examination provider

Saville, N. D. January 2009 (has links)
There is no comprehensive model of language test or examination impact and how it might be investigated within educational contexts by a provider of high-stakes examinations, such as an international examinations board. This thesis addresses the development of such a model from the perspective of Cambridge ESOL, a provider of English language tests and examinations in over 100 countries. The starting point for the thesis is a discussion of examinations within educational processes generally and the role that examination boards, such as Cambridge ESOL, play within educational systems. The historical context and assessment tradition are an important part of this discussion. In the literature review, the effects and consequences of language tests and examinations are discussed with reference to the better-known concept of washback and to how impact can be defined as a broader notion operating at both micro and macro levels. This is contextualised within the assessment literature on validity theory and the application of innovation theories within educational systems. Methodologically, the research is based on a meta-analysis employed to describe and review three impact projects. These three projects were carried out by researchers based in Cambridge to implement an approach to test impact which had emerged during the 1990s as part of the test development and validation procedures adopted by Cambridge ESOL. Based on the analysis, the main outcome and contribution to knowledge is an expanded model of impact designed to provide examination providers with a more effective “theory of action”. When applied within Cambridge ESOL, this model will allow anticipated impacts of the English language examinations to be monitored more effectively and will inform ongoing processes of innovation; this will lead to well-motivated improvements in the examinations and the related systems. Wider applications of the model in other assessment contexts are also suggested.
35

An investigation into the effects of topic and background knowledge of topic on second language speaking performance assessment in language proficiency interviews

Khabbazbashi, Nahal January 2013 (has links)
This study explores, from a test validity perspective, the extent to which the two variables of topic and background knowledge of topic have an effect on spoken performance in language proficiency interviews. It is argued that in assessment contexts where topics are randomly assigned to test takers, it is necessary to demonstrate that the topics of tasks and the level of background knowledge that test takers bring to these topics do not exert an undue influence on test results; otherwise, a validity threat may be introduced to the test. Data were collected from 82 Farsi speakers of English who performed on ten different topics across three task types. Participants’ background knowledge of topics was elicited using self-report questionnaires, while C-tests were used as a measure of general English language proficiency. Four raters assigned scores to spoken performances using rating scales. Semi-structured interviews were carried out with raters upon completion of the rating process. A mixed-methods strategy of inquiry was adopted in which findings from the quantitative analyses of score data (using Multi-Faceted Rasch Measurement, multiple regression and descriptive statistics) were synthesised with the results of the qualitative analyses of rater interviews and test takers’ content of speech in addressing the foci of the study. The study’s main findings showed that the topics used in the study exhibited difficulty measures which were statistically distinct; i.e., topics within a given task type could not be considered parallel. However, the size of the differences in topic difficulties was too small to have a large practical effect on scores. Participants’ different levels of background knowledge were shown to have a consistent, systematic and statistically significant effect on performance, with low levels of background knowledge posing the highest level of challenge for test takers and vice versa.
Nevertheless, these statistically significant differences in background knowledge levels failed to translate into practically significant differences, as the size of the differences was too small to have a large impact on performance scores. Results indicated that, compared to general language proficiency, which accounted for approximately 60% of the variance in spoken performance scores, background knowledge only explained about 1-3% of the variance. Qualitative analyses of data suggested lack of background knowledge to be associated with topic abandonment, disengagement from topic-related questions, and fewer opportunities for test takers to elaborate on topics. It was also associated with negative affective influence on test takers, particularly lower-proficiency individuals. Taken together, the findings have theoretical, methodological and practical implications for second language speaking performance assessment.
36

A Cantonese linguistic communication measure for evaluating aphasic narrative production

Kong, Pak-hin, Anthony., 江柏軒. January 2007 (has links)
published_or_final_version / abstract / Speech and Hearing Sciences / Doctoral / Doctor of Philosophy
37

A COMPARISON OF GEOGRAPHICALLY DIFFERENTIATED RURAL MEXICAN CHILDREN USING THE SPANISH VERSION OF THE ILLINOIS TEST OF PSYCHOLINGUISTIC ABILITY.

YOUNG, WILLIAM ARTHUR. January 1982 (has links)
The purpose of this investigation was to examine the cognitive and psycholinguistic processes of rural Mexican children and compare them to urban Mexican children using the Spanish Version of the Illinois Test of Psycholinguistic Ability as the primary diagnostic instrument. Additionally, Physical, Environmental, and Psychological test correlates are surveyed to demonstrate their application in the diagnostic and interpretive processes. Fifty-nine children, aged four to nine, were tested in this project. The children were all monolingual Spanish speaking, and they all lived in or very near Santa Rosalillita, a small mid-peninsula fishing cooperative in Baja California, Mexico. The cumulative total of the I.T.P.A. subtest raw scores for the Santa Rosalillita children is compared to the scores of two samples of urban children which were used in the I.T.P.A. standardization procedure. A series of t tests are used to analyze the data. The Physical, Environmental, and Psychological test correlates are examined by using obtained I.T.P.A. profiles, a questionnaire, and the author's on-site observations. The relevant results of this study are: (1) There appears to be a significant difference between the five-year-old children of Santa Rosalillita when they are compared to the two groups of five-year-old urban Mexican children. (2) There were no significant differences between the seven- and nine-year-old groups of Santa Rosalillita children when they were compared to the seven- and nine-year-old groups of urban Mexican children. (3) Physical, Environmental, and Psychological test correlates can provide important information in the diagnosis and interpretation of the Spanish I.T.P.A. It was concluded that extreme caution be used with the Spanish I.T.P.A. when it is employed with rural Mexican children below the age of six. The results suggest that the Spanish I.T.P.A. be used as one of several sources of information in the effort to serve the educational needs of Mexican children, but not as the only source of information. Further research may focus on rural children living in other geographic locales of Mexico, or on the development of appropriate and useful educational methods and materials for assimilating Mexican youngsters into American schools.
38

Locus of Control in L2 English Listening Assessment

Goodwin, Sarah J 06 January 2017 (has links)
In second language (L2) listening assessment, various factors have the potential to impact the validity of listening test items (Brindley & Slatyer, 2002; Buck & Tatsuoka, 1998; Freedle & Kostin, 1999; Nissan, DeVincenzi, & Tang, 1996; Read, 2002; Shohamy & Inbar, 1991). One relatively unexplored area to date is who controls the aural input. In traditional standardized listening tests, an administrator-controlled recording is played once or twice. In real-world or classroom listening, however, listeners can sometimes request repetition or clarification. Allowing listeners to control the aural input thus has the potential to add test authenticity, but requires careful design of the input and expected response as well as an appropriate computer interface. However, if candidates feel less anxious, allowing control of listening input may enhance examinees' experience and still reflect their listening proficiency. Comparing traditional and self-paced (i.e., examinees having the opportunity to start, stop, and move the audio position) delivery of multiple-choice comprehension items, my research inquiry is whether self-paced listening can be a sufficiently reliable and valid measure of examinees' listening ability. Data were gathered from 100 prospective and current university ESL students. They were administered computer-based multiple-choice listening tests: 10 identical once-played items, followed by 33 items in three different conditions: 1) administrator-paced input with no audio player visible, 2) self-paced with a short time limit, and 3) self-paced with a longer time limit. Many-facet Rasch (1960/1980) modeling was used to compare the difficulty and discrimination of the items across conditions. Results indicated items on average were of similar difficulty overall but discriminated best in self-paced conditions. Furthermore, the vast majority of examinees reported they preferred self-paced listening.
The quantitative results were complemented by follow-up stimulated-recall interviews with eight participants who took 22 additional test items using screen capture software to explore whether and when they paused and/or repeated the input. Frequency of and reasons for self-pacing did not follow any particular pattern by proficiency level. Examinees tended to play more than once but not two full times through, even without limited time. Implications for listening instruction and classroom assessment, as well as standardized testing, are discussed.
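The Rasch modeling referred to in this abstract (and the one below) rests on a simple logistic relationship: the probability of a correct response depends on the difference between person ability and item difficulty, both on the logit scale. A minimal sketch, with parameter values invented for illustration:

```python
import math

def rasch_p(ability, difficulty):
    """Rasch model: probability of a correct response given
    person ability and item difficulty (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# An examinee whose ability equals the item's difficulty has a 50% chance.
print(rasch_p(0.0, 0.0))   # 0.5
# Ability one logit above the item's difficulty raises the probability.
print(rasch_p(1.0, 0.0))
```

Many-facet extensions add further terms (e.g. rater severity, delivery condition) inside the logit, which is what allows item difficulty and discrimination to be compared across the three pacing conditions described above.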
39

An investigation into the construct validity of an academic writing test in English with special reference to the Academic Writing Module of the IELTS Test

Alsagoafi, Ahmad Abdulrahman January 2013 (has links)
The International English Language Testing System (IELTS) is the world’s leading high-stakes test assessing the English language proficiency of candidates who speak languages other than English and wish to gain entry into universities where English is the language of instruction. Recently, over 3000 institutions in the United States have accepted the IELTS test as an indicator of language proficiency (IELTS, 2012a). Because of this preference for the IELTS test, and its worldwide recognition, there has been an increase in the number of students who are taking the test every year. According to the IELTS website, more than 7000 institutions around the world trust the test results and, not surprisingly, more than 1.7 million candidates take the test every year in one of the 800 recognised test centres across 135 countries (IELTS, 2012a). These candidates include not only people seeking admission to universities, but also those taking the test for immigration authorities, employers and government agencies. Acknowledging this popularity and importance to learners of English as a Foreign Language (EFL), this qualitative study investigated the construct validity of the academic writing module in the IELTS test from the perspectives of the stakeholders (i.e. candidates, lecturers and markers). The aim was to understand why some Saudi students fail to cope with the demands of the university despite having achieved the minimum requirements in IELTS. In this study, data were collected in two phases in two different settings through open-ended questionnaires, semi-structured observations and semi-structured interviews. Phase I was carried out in the Department of English Language (DEL) at King Faisal University in Saudi Arabia, while Phase II was conducted in one university in the UK. The sample of the study included 8 students, 6 university lecturers and one marker. Data were analysed and coded into themes using NVivo 9.
The results of this case study showed that the stakeholders were doubtful about the claim, made by IELTS, that the test indicates students’ readiness, and they wanted the test to be clearer about how students would cope with university demands upon gaining entry. In addition, with respect to the content validity of the test, this study found that the tasks in the academic writing test to a large extent do not reflect the kind of tasks candidates are likely to encounter at university. Furthermore, this study pointed out that response validity, on the part of students who may not have understood the rubric of the tasks, is another important factor affecting the students’ performance. Also, the findings of this study suggested that scoring validity could have a significant effect on the students’ scores because of the inconsistency of markers during the scoring process, as markers may sometimes have failed to assign students to their corresponding level of proficiency. Consequently, the study provides a set of implications as well as recommendations for future research.
40

Construct representation of First Certificate in English (FCE) reading

Corrigan, Michael January 2015 (has links)
The current study investigates the construct representation of the reading component of a B2 level general English test: First Certificate in English (FCE). Construct representation is the relationship between the cognitive processes elicited by the test and item difficulty. To facilitate this research, a model of the cognitive processes involved in responding to reading test items was defined, drawing together aspects of different models (Embretson & Wetzel, 1987; Khalifa & Weir, 2009; Rouet, 2012). The resulting composite contained four components: the formation of an understanding of item requirements (OP), the location of relevant text in the reading passage (SEARCH), the retrieval of meaning from the relevant text (READ) and the selection of an option for the response (RD). Following this, contextual features predicted by theory to influence the cognitive processes, and hence the difficulty of items, were determined. Over 50 such variables were identified and mapped to each of the cognitive processes in the model. Examples are word frequency in the item stem and options for OP; word frequency in the reading passage for READ; semantic match between stem/option and relevant text in the passage for SEARCH; and dispersal of relevant information in the reading passage for RD. Response data from approximately 10,000 live test candidates were modelled using the Linear Logistic Test Model (LLTM) within a Generalised Linear Mixed Model framework (De Boeck & Wilson, 2004b). The LLTM is based on the Rasch model, for which the probability of success on an item is a function of item difficulty and candidate ability. The same holds for the LLTM, except that item difficulty is decomposed so that the contribution of each source of difficulty (the contextual features mentioned above) is estimated. The main findings of the study included the identification of 26 contextual features which either increased or decreased item difficulty.
Of these features, 20 were retained in a final model which explained 75.79% of the variance accounted for by a Rasch model. Among the components specified by the composite model, OP and READ were found to have the most influence, with RD exhibiting a moderate influence and SEARCH a low influence. Implications for developers of FCE include the need to consider and balance test method effects, and for other developers the additional need to determine whether their tests assess features found to be criterial to the target level (such as non-standard word order at B2 level). Researchers wishing to use Khalifa and Weir’s (2009) model of reading should modify the stage termed inferencing and consider adding further stages which define the way in which the goal setter and monitor work and the way in which item responses are selected. Finally, for those researchers interested in adopting a similar approach to that of the current study, careful consideration should be given to the way in which attributes are selected. The aims and scope of the study are of prime importance here.
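The LLTM decomposition described in this abstract can be sketched in a few lines: item difficulty becomes a weighted sum of contextual-feature effects, which then feeds the Rasch probability. The feature names, values and effect weights below are invented purely for illustration, not taken from the study:

```python
import math

def lltm_difficulty(features, weights):
    """LLTM idea: item difficulty b_i = sum_k q_ik * eta_k, where q_ik is
    the value of contextual feature k for item i and eta_k its effect."""
    return sum(q * eta for q, eta in zip(features, weights))

def p_correct(ability, difficulty):
    """Rasch probability of success given ability and item difficulty."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Three hypothetical contextual features for one item, e.g. stem word
# frequency, semantic match, and dispersal of relevant information
# (cf. the variable types listed in the abstract). Values are invented.
eta = [0.8, -0.5, 0.3]      # estimated feature effects (invented)
item_q = [1.0, 1.0, 2.0]    # feature values for this item (invented)

b = lltm_difficulty(item_q, eta)   # 0.8 - 0.5 + 0.6 = 0.9
print(round(b, 2))
print(round(p_correct(1.0, b), 3))
```

In the study itself the eta weights are estimated from the roughly 10,000 candidates' responses within a generalised linear mixed model; the sketch only shows how a fitted decomposition reconstructs an item's difficulty.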
