11. "If I just get one IELTS certificate, I can get anything": an impact study of IELTS in Pakistan. Memon, Natasha (January 2015)
This thesis examines the impact of the high-stakes International English Language Testing System (IELTS) across different stakeholders in Pakistan, and on Pakistani education, society and economy more broadly. The global profile of IELTS means that washback and impact studies (both comparative and country-specific) are now increasingly carried out by Cambridge ESOL (Hawkey, 2006; Moore et al., 2012). These are undertaken not simply with a view to improving the test, but with a view to investigating how it is used and perceived. In Pakistan, as elsewhere, IELTS has assumed great significance on account of its gate-keeping function in emigration, higher education abroad and professional registration. Demand and candidature grow daily. However, specific conditions that pertain in Pakistan, mainly political instability and major disparities in wealth and development, have a particular effect on the role of IELTS in the country.

The current impact study employs a sequential exploratory concurrent embedded mixed methods design to assess the impact. Phase 1 is a preliminary survey of 20 IELTS preparation institutes, followed by an in-depth qualitative study of two IELTS preparation centres. The qualitative study employs classroom observations, semi-structured interviews with teachers (N=2), informal conversational interviews with test-preparers (N=20), and pre- and post-study testing to assess the efficacy of IELTS preparation. Phase 2 analyses questionnaires from a further ten preparation centres; respondents comprised 200 IELTS test-preparers, 100 IELTS test-takers and 10 IELTS preparation teachers. The survey was supplemented by a focus group with four test-preparers and semi-structured interviews with five employers and five parents.

The initial survey of the private English Language Teaching industry in Pakistan showed a radical expansion of IELTS preparation courses. Yet the in-depth study of two specific centres showed that the courses are not effective in improving students' scores. Courses, although relatively expensive, are very short, and most test-preparers enter them with lower English proficiency than is appropriate for IELTS. Questionnaires and interviews showed that IELTS test-preparers and test-takers are primarily motivated to take the test for emigration and study abroad. Test-preparers have high expectations that the course will improve their English proficiency, and these expectations are generally not met. Disappointed test-takers nonetheless believe that their IELTS course and test will be of benefit to them in Pakistan. Although English ability is always considered as part of recruitment, employers interviewed for this project confirmed that an IELTS certificate is never explicitly required. It is likely that the local uses of IELTS that are emerging in Pakistan are much more indirect. I argue that because public education is not meeting the demand for English, IELTS is now perceived as a route to English education and general certification, and a badge of middle-class status if not actual material gain.

These findings have implications for both providers of state education in Pakistan and providers of the IELTS test (Cambridge ESOL). The former need to address the lack of publicly funded English education and English qualifications; the latter need to consider whether IELTS is appropriate for large numbers of low-proficiency candidates, and for purposes other than admission to universities abroad and immigration.
12. Examining the Validity and Reliability of the ITT Vocabulary Size Tests. Tschirner, Erwin (18 October 2021)
The Institute for Test Research and Test Development (ITT) has provided complimentary Vocabulary Size Tests (VST) in 15 languages to language learners and their teachers, with which they can measure their own or their learners' receptive and productive vocabulary sizes. This report examines in detail, and on a large empirical basis, the validity and reliability of these tests.
13. A comparative study of the developmental sentence scoring normative data obtained in Portland, Oregon, and the Midwest, for children between the ages of 5.0 and 5.11 years. McNutt, Eileen (01 January 1985)
The focus of this study was the Developmental Sentence Scoring (DSS) procedure, developed by Lee and Canter (1971) and Lee (1974). The DSS is used to analyze a corpus of 50 utterances according to eight grammatical categories. Once a DSS score is determined for an individual child, that child's performance can be compared to that of his/her peers, using the normative data provided by Lee (1974) and reported by Koenigsknecht (1974). These normative data have been widely used both clinically and in research projects, with little regard for the validity of the norms when applied outside the Midwest, where the norms were originally established.
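As a rough illustration of the scoring logic described in this abstract, the sketch below computes a DSS-style score as the mean per-utterance total over a scored corpus and compares it to age norms. The category labels, the sentence-point rule, and the norm values are assumptions supplied for illustration; they are not taken from the abstract, and Lee's actual weighting tables are not reproduced.

```python
from statistics import mean

# Eight grammatical-category labels, used here only as dictionary keys
# (the detailed point weightings from Lee, 1974 are not reproduced).
CATEGORIES = [
    "indefinite_pronouns", "personal_pronouns", "main_verbs", "secondary_verbs",
    "negatives", "conjunctions", "interrogative_reversals", "wh_questions",
]

def utterance_score(category_points, earns_sentence_point):
    """Total points for one utterance: the points assigned in each category,
    plus one 'sentence point' if the utterance is judged fully grammatical."""
    bonus = 1 if earns_sentence_point else 0
    return sum(category_points.get(c, 0) for c in CATEGORIES) + bonus

def dss_score(corpus):
    """DSS = mean utterance score over the corpus (nominally 50 utterances)."""
    return mean(utterance_score(points, sp) for points, sp in corpus)

def z_against_norm(score, norm_mean, norm_sd):
    """Compare a child's DSS to hypothetical age norms via a z-score."""
    return (score - norm_mean) / norm_sd

# Tiny illustrative corpus of two scored utterances (all values invented):
corpus = [
    ({"personal_pronouns": 1, "main_verbs": 2}, True),
    ({"main_verbs": 1, "conjunctions": 3}, False),
]
child_dss = dss_score(corpus)
print(child_dss, z_against_norm(child_dss, norm_mean=6.2, norm_sd=1.5))
```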
14. A comparative study of three language sampling methods using developmental sentence scoring. Dong, Cheryl Diane (01 January 1986)
The present study sought to determine the effect different stimulus materials have on the language elicited from children. Its purpose was to determine whether a significant difference existed among language samples elicited in three different ways when analyzed using DSS. Eighteen children between the ages of 3.6 and 5.6 years were chosen to participate in the study. All of the children had normal hearing, normal receptive vocabulary skills, and no demonstrated or suspected physical or social delays. Three language samples, each elicited by toys, pictures, or stories, were obtained from each child. For each sample, a corpus of 50 utterances was selected for analysis and analyzed according to the DSS procedure as described by Lee and Canter (1971).
15. The Effect of Prompt Accent on Elicited Imitation Assessments in English as a Second Language. Barrows, Jacob Garlin (01 March 2016)
Elicited imitation (EI) assessment has been shown to have value as an inexpensive method for low-stakes tests (Cox & Davies, 2012), but little has been reported on the effect L2 accent has on test-takers' ability to understand and process the test items they hear. Furthermore, no study has investigated the effect of accent on EI test face validity. This study examined how the accent of the input audio files affected EI test difficulty, as well as test-takers' perceptions of such an effect. To investigate, self-reports of students' exposure to different varieties of English were obtained from a pre-assessment survey. A 63-item EI test was then administered in which English language learners in the United States listened to test items in three varieties of English: American English, Australian English, and British English. A post-assessment survey was then administered to gather information regarding the perceived difficulty of accented prompts. A many-facet Rasch analysis found that accent affected item difficulty in an EI test, with a separation reliability coefficient of .98; British English was the most difficult and American English the easiest. Survey results indicated that students perceived this increase in difficulty, and ANOVAs between the survey and test results indicated that student perceptions of an increase in difficulty aligned with reality. Specifically, accents that students were "Not at all Familiar" with resulted in significantly lower EI test scores than accents with which the students were familiar. These findings suggest that prompt accent should be carefully considered in EI test development.
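To make the group comparison concrete, the sketch below runs a one-way ANOVA on EI scores grouped by self-reported accent familiarity, the kind of check the abstract describes. The familiarity labels echo the survey wording quoted above, but the score values are invented for illustration and are not data from the study; the many-facet Rasch analysis the study also used is not reproduced here.

```python
from scipy import stats

# Hypothetical EI scores grouped by self-reported familiarity with the prompt accent.
scores_by_familiarity = {
    "Not at all Familiar": [41, 38, 44, 40, 37],
    "Somewhat Familiar": [47, 45, 50, 46, 48],
    "Very Familiar": [52, 49, 55, 51, 53],
}

# One-way ANOVA across the three familiarity groups.
f_stat, p_value = stats.f_oneway(*scores_by_familiarity.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p suggests scores differ by familiarity
```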
16. A partial validation of the contextual validity of the Centre Listening Test in Japan. Yanagawa, Kozo (January 2012)
The purpose of this study was to validate the listening comprehension component of the Centre Test in Japan (henceforth, JNCTL) in relation to contextual parameters and cognitive processing. For this purpose, a comprehensive framework of contextual parameters and an L2 listening processing model were established. This provided a solid theoretical framework for the study, whereby empirical evidence was elicited in relation to contextual parameters and cognitive processing. The elicitation was made through document analysis, focus group interviews, and a large-scale questionnaire administered to stakeholders including 110 high school English teachers and 391 third-year high school students. The elicited data were subjected to descriptive, quantitative and qualitative analysis.

The preliminary studies identified ten possible key parameters that could help the JNCTL achieve greater validity: the number of opportunities to listen to the input, a lack of hesitations, a lack of overlapping turns, a lack of multi-participant discussions, a lack of variety in the English accents used, a lack of L2 speakers, a lack of inference questions, a lack of non-linear texts, a lack of sandhi-variations, and a lack of natural speech rate. The results of the questionnaire revealed that sandhi-variation was the key parameter for helping the current JNCTL achieve greater validity in a direction that would be accepted by the stakeholders, and it was further explored in the main study in an attempt to investigate the effect of sandhi-variation on listening comprehension test performance and the level of cognitive load imposed on test-takers.

A series of experiments was conducted involving the manipulation of sandhi-variation. The results revealed that although no statistical difference was found in item difficulty estimates between the sandhi-variation and non-sandhi-variation versions, sandhi-variation may have a dual effect on listening comprehension for test-takers. On the positive side, sandhi-variation may produce a more prominent phonological difference between accented and unaccented words in connected speech, and this difference may reduce the cognitive load imposed on test-takers. On the negative side, it may increase that cognitive load by obscuring sounds through elision or unclear pronunciation, disturbing speech perception or word recognition.

Recommendations are provided for improving the validity of the current JNCTL and for the development of listening comprehension tests more generally. Implications are also suggested for the teaching of listening at secondary schools in Japan. Lastly, the limitations of the study are outlined and suggestions for further research are proposed.
17. Testing, Assessment, and Evaluation in Language Programs. Alobaid, Adnan Othman (January 2016)
This three-article dissertation addresses three different yet interrelated topics: language testing, assessment, and evaluation. The first article (Saudi Student Placement into ESL Program Levels: Issues beyond Test Criteria) addresses a crucial yet understudied issue concerning why lower-level ESL classes typically contain a disproportionate number of Saudi students. Based on data obtained from different stakeholders, the findings revealed that one-third of the students in the study intentionally underperformed on ESL placement tests; however, ESL administrators participating in the study provided contradictory findings. The second article (Integrating Self-assessment Techniques into L2 Classroom Assessment Procedures) explores the efficacy of self-assessment by examining the accuracy of the CEFR self-assessment rubric compared with students' TOEFL scores, and the extent to which gender and level of language proficiency cause any potential score underestimation. Drawing on data from 21 ESL students attending the Center for English as a Second Language at the University of Arizona, the findings revealed no statistically significant correlations between participants' self-assessed scores and their TOEFL scores. However, the participants reported that the CEFR self-assessment rubric is accurate in measuring their level of language proficiency. The third article (Quality Assurance and Accreditation as Forms for Language Program Evaluation: A Case Study of Two EFL Departments in a Saudi University) provides a simulated program evaluation based on an integrated set of standards from the NCAAA (the National Commission for Academic Accreditation and Assessment) and the CEA (the Commission on English Language Program Accreditation). The findings indicated that the standards for mission, curriculum, student learning outcomes, and program development, planning, and review were partially met, whereas the standards for teaching strategies, assessment methods, and student achievement were not.
18. A Pilot Study: Normative Data on the Intelligibility of 3 1/2 Year Old Children. Ware, Karen Mary (05 November 1996)
Most of the previous published research involving intelligibility has focused on persons with various disabilities or delays. Minimal research has been conducted on intelligibility in young children with no diagnosed speech and/or language disorders. The result is a gap in normative data by which to set a standard to judge speech as being at an acceptable level of intelligibility for a particular age group. The focus of this pilot study was to collect normative data on the intelligibility of young children, ages 3:6 ±2 months, with no diagnosed speech and/or language disorder.

Thirteen subjects, ages 3:6 ±2 months, were recruited from the greater Portland/Vancouver area. These subjects were screened for normal development in speech sound production, expressive/receptive language, and hearing. It was also established that English was the primary language spoken in the home. Resonance, voice quality, and fluency were informally assessed by the researcher during the course of the session and found to be normal. The 100-word speech samples were collected by the researcher on audiotape and later played back to two listeners, who were familiar with the topic but unfamiliar with the speaker. The listeners orthographically transcribed the samples, and a comparison was made by the researcher between the two sets of written transcriptions. This comparison provided the percentage of intelligible words, out of a possible 100, which were understood by both listeners.

The results showed the mean intelligibility percentage for 3 1/2-year-old children with no diagnosed speech and/or language disorders to be 88% (SD = 5.7%), with a range of intelligibility from 76% to 96%. Both the mode and the median for this sample were 90%. Several other variables were addressed as points of interest, but the comparisons were not investigated in depth. The focus of this study was to collect, in a methodically documented manner, normative data on intelligibility in 3 1/2-year-olds. When the results from this study are compared to the only other available data (Weiss, 1982), they fall within 1 SD of each other (SD = 5.7%), indicating that there are no measurable differences between the findings.
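One plausible reading of the transcription comparison described above is sketched below: a word counts as intelligible when both listeners' orthographic transcriptions agree. This is a simplified illustration under that assumption, not the study's actual scoring protocol, and the example words are invented.

```python
def intelligibility_percent(listener_a, listener_b):
    """Percentage of words in the sample that both listeners transcribed
    identically, taken here as 'understood by both'. Ignores the judgment
    calls (spelling variants, contractions) a real comparison would need,
    and would over-count words both listeners misheard the same way."""
    assert len(listener_a) == len(listener_b)
    agreed = sum(1 for a, b in zip(listener_a, listener_b) if a.lower() == b.lower())
    return 100.0 * agreed / len(listener_a)

# Illustrative 5-word fragment (the study used 100-word samples):
a = ["the", "doggy", "runs", "very", "fast"]
b = ["the", "doggy", "wuns", "very", "fast"]
print(intelligibility_percent(a, b))  # -> 80.0
```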
19. Developmental sentence scoring sample size comparison. Valenciano, Marilyn May (01 January 1981)
Assessment of language abilities is an integral part of accruing information on the development of concept formation and the learning of grammatical rules. The maturity and complexity of a child's language can be assessed through the use of a language sample. The sample consists of a specified number of utterances which are emitted spontaneously and then analyzed according to a given procedure.
The purpose of this study was to determine if there is a significant difference among the scores obtained from language samples of 25, 50, and 75 utterances when using the DSS procedure for ages 4.0 through 4.6 years. Twelve children, selected on the basis of chronological age, normal receptive vocabulary skills, normal hearing, and a monolingual background, participated as subjects.
20. Developmental sentence scoring: a comparative study conducted in Portland, Oregon. McCluskey, Kathryn Marie (01 January 1984)
The purpose of this investigation was to replicate the study conducted by Lee and Canter (1971) and Lee (1974a) to determine if a significant difference among the scores in the two studies existed due to geographical location, and to initiate the establishment of norms for the Portland, Oregon geographical area. Forty children, selected on the basis of chronological age (4.0 to 4.11 years), normal receptive vocabulary skills, normal hearing, and a monolingual background, participated as subjects. A language sample of fifty utterances was elicited from each child and analyzed according to the Developmental Sentence Scoring (DSS) procedure.