• About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world.
1

Evaluation of the effectiveness and predictive validity of English language assessment in two colleges of applied sciences in Oman

Al Hajri, Fatma Said Mohammed January 2013 (has links)
This thesis investigates the effectiveness of English language assessment in the Foundation Programme (FP) and its predictive validity for academic achievement in the First Year (FY) at two Colleges of Applied Sciences (CAS) in Oman. The objectives of this study are threefold: (1) Identify how well the FP assessment has met its stated and unstated objectives and evaluate its intended and unintended outcomes using impact evaluation approaches. (2) Study the predictive validity of FP assessment and analyse the linguistic needs of FY academic courses and assessment. (3) Investigate how FP assessment and its impact are perceived by students and teachers. The research design was influenced by Messick's (1989; 1994; 1996) unitary concept of validity, by Norris's (2006; 2008; 2009) views on validity evaluation and by Owen's (2007) ideas on impact evaluation. The study was conducted in two phases using five different methods: questionnaires, focus groups, interviews, document analysis and a correlational study. In the first phase, 184 students completed a questionnaire and 106 of these participated in 12 focus groups, whilst 27 teachers completed a different questionnaire and 19 of these were interviewed. The aim of this phase was to explore the perceptions of the students and teachers on the FP assessment instruments in terms of their validity and reliability, structure, and political and social impact. The findings indicated a generally positive perception of the instruments, though more so for the Academic English Skills course (AES) than the General English Skills course (GES). There were also calls for increasing the quantity and quality of the assessment instruments. The political impact of the English language FP assessment was strongly felt by the participants. In the second phase, 176 students completed a questionnaire and 83 of them participated in 15 focus groups; 29 teachers completed a different questionnaire and of these 23 teachers were interviewed. 
The main focus was on students' and teachers' perceptions of FP assessment, and how language accuracy should be considered in marking academic written courses. One finding was that most students in FY tended to face difficulties not only in English but also in what could be called 'study skills'; some of these were attributed to the leniency of FP assessment exit criteria. Throughout the two phases, 118 documents on FP assessment at CAS were thematically analysed. The objective was to understand the official procedures prescribed for writing and using assessment instruments in FP and compare them against actual test papers and classroom practices. The findings revealed the use of norm-referenced assessment instead of criterion-referenced assessment, incompatibility between what was assessed and what was taught, inconsistency in using assessment criteria, and unhelpful verbatim replication of national assessment standards. The predictive validity studies generally found a low overall correlation between students' scores in English language assessment instruments and their scores in academic courses. The findings of this study are in line with most but not all previous studies. The strength of predictive validity was dependent on a number of variables, especially the students' specializations and their self-evaluations of their own English language levels. Some recommendations are offered for reforming the entry requirements of Omani higher education.
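The correlational strand described above can be pictured with a small sketch. This is a minimal Pearson's r computation under invented data, not the thesis's actual analysis; the variable names and scores are illustrative only.

```python
# Hypothetical sketch: Pearson's r between Foundation Programme (FP)
# English scores and First Year (FY) academic scores. All data invented.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative scores showing a weak positive relationship, in the spirit of
# the "low overall correlation" the thesis reports.
fp_english = [55, 62, 70, 48, 66, 59, 73, 51]
fy_academic = [60, 58, 71, 55, 61, 70, 65, 57]
r = pearson_r(fp_english, fy_academic)
```

A low r under such a sketch would mirror the thesis's finding that FP scores only weakly predict FY achievement.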
2

An investigation into the construct validity of an academic writing test in English with special reference to the Academic Writing Module of the IELTS Test

Alsagoafi, Ahmad Abdulrahman January 2013 (has links)
The International English Language Testing System (IELTS) is the world's leading high-stakes test that assesses the English language proficiency of candidates who speak languages other than English and wish to gain entry into universities where English is the language of instruction. Recently, over 3,000 institutions in the United States have accepted the IELTS test as an indicator of language proficiency (IELTS, 2012a). Because of this preference for the IELTS test, and its worldwide recognition, there has been an increase in the number of students who take the test every year. According to the IELTS website, more than 7,000 institutions around the world trust the test results and, not surprisingly, more than 1.7 million candidates take the test every year in one of the 800 recognised test centres across 135 countries (IELTS, 2012a). These candidates include not only people seeking admission to universities, but also those applying to immigration authorities, employers and government agencies. Acknowledging this popularity and importance to learners of English as a Foreign Language (EFL), this qualitative study investigated the construct validity of the academic writing module of the IELTS test from the perspectives of the stakeholders (i.e. candidates, lecturers and markers). The aim was to understand why some Saudi students fail to cope with the demands of the university despite having achieved the minimum requirements in IELTS. In this study, data were collected in two phases in two different settings through open-ended questionnaires, semi-structured observations and semi-structured interviews. Phase I was carried out in the Department of English Language (DEL) at King Faisal University in Saudi Arabia, while Phase II was conducted in one university in the UK. The sample of the study included 8 students, 6 university lecturers and one marker. Data were analysed and coded into themes using NVivo 9. 
The results of this case study showed that the stakeholders were doubtful about the claim of student readiness made by IELTS, and wanted the test to be clearer about how students would cope with university demands upon gaining entry. In addition, with respect to the content validity of the test, this study found that the tasks in the academic writing test to a large extent do not reflect the kind of tasks candidates are likely to encounter at university. Furthermore, this study found that response validity, on the part of students who may not have understood the rubric of the tasks, is another important factor affecting the students' performance. The findings also suggested that scoring validity could have a significant effect on the students' scores because of the inconsistency of markers during the scoring process, as they may sometimes have failed to assign students to their corresponding level of proficiency. Consequently, the study provides a set of implications as well as recommendations for future research.
3

Construct representation of First Certificate in English (FCE) reading

Corrigan, Michael January 2015 (has links)
The current study investigates the construct representation of the reading component of a B2-level general English test: First Certificate in English (FCE). Construct representation is the relationship between the cognitive processes elicited by the test and item difficulty. To facilitate this research, a model of the cognitive processes involved in responding to reading test items was defined, drawing together aspects of different models (Embretson & Wetzel, 1987; Khalifa & Weir, 2009; Rouet, 2012). The resulting composite contained four components: the formation of an understanding of item requirements (OP), the location of relevant text in the reading passage (SEARCH), the retrieval of meaning from the relevant text (READ) and the selection of an option for the response (RD). Following this, contextual features predicted by theory to influence the cognitive processes, and hence the difficulty of items, were determined. Over 50 such variables were identified and mapped to each of the cognitive processes in the model. Examples are word frequency in the item stem and options for OP; word frequency in the reading passage for READ; semantic match between stem/option and relevant text in the passage for SEARCH; and dispersal of relevant information in the reading passage for RD. Response data from approximately 10,000 live test candidates were modelled using the Linear Logistic Test Model (LLTM) within a Generalised Linear Mixed Model framework (De Boeck & Wilson, 2004b). The LLTM is based on the Rasch model, for which the probability of success on an item is a function of item difficulty and candidate ability. The same holds for the LLTM, except that item difficulty is decomposed so that the contribution of each source of difficulty (the contextual features mentioned above) is estimated. The main findings of the study included the identification of 26 contextual features which either increased or decreased item difficulty. 
Of these features, 20 were retained in a final model which explained 75.79% of the variance accounted for by a Rasch model. Among the components specified by the composite model, OP and READ were found to have the most influence, with RD exhibiting a moderate influence and SEARCH a low influence. Implications for developers of FCE include the need to consider and balance test method effects, and for other developers the additional need to determine whether their tests assess features found to be criterial to the target level (such as non-standard word order at B2 level). Researchers wishing to use Khalifa and Weir's (2009) model of reading should modify the stage termed inferencing and consider adding further stages which define the way in which the goal setter and monitor work and the way in which item responses are selected. Finally, for those researchers interested in adopting a similar approach to that of the current study, careful consideration should be given to the way in which attributes are selected. The aims and scope of the study are of prime importance here.
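The LLTM idea described above can be sketched in a few lines. This is an illustration of the decomposition principle only, not the study's estimation code: item difficulty is a weighted sum of contextual-feature contributions, and success probability follows the Rasch form. The feature names and weights below are assumptions.

```python
# Illustrative LLTM sketch: difficulty b_i = sum_k q_ik * eta_k, then the
# Rasch probability P(correct) = 1 / (1 + exp(-(theta - b_i))).
from math import exp

def lltm_difficulty(features, weights):
    """Item difficulty as a weighted sum of contextual-feature contributions."""
    return sum(q * eta for q, eta in zip(features, weights))

def p_correct(theta, difficulty):
    """Rasch probability of success for a candidate of ability theta."""
    return 1.0 / (1.0 + exp(-(theta - difficulty)))

# Hypothetical binary feature loadings for one item: [rare stem vocabulary,
# rare passage vocabulary, weak semantic match, dispersed information],
# with assumed difficulty weights (logits).
q_item = [1, 0, 1, 1]
eta = [0.4, 0.3, 0.6, 0.5]
b = lltm_difficulty(q_item, eta)         # 1.5 logits
p = p_correct(theta=1.5, difficulty=b)   # 0.5 when ability equals difficulty
```

In the real study the eta weights are estimated from roughly 10,000 candidates' responses rather than assumed; the sketch only shows how estimated weights would recombine into item difficulty.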
4

The effect of the prompt on writing product and process : a mixed methods approach

Chapman, Mark Derek January 2016 (has links)
The aim of this thesis is to investigate the effect of the writing prompt on test takers in terms of their test taking processes and the final written product in a second language writing assessment context. The study employs a mixed methods approach, with a quantitative and a qualitative strand. The quantitative study focuses on an analysis of the responses to six different writing prompts, with the responses being analyzed for significant differences in a range of key textual features, such as syntactic complexity, lexical sophistication, fluency and cohesion. The qualitative study incorporates stimulated recall interviews with test takers to learn about the aspects of the writing prompt that can have an effect on test taking processes, such as selecting a prompt, planning a response, and composing a response. The results of the quantitative study indicate that characteristics of the writing prompt (domain, response mode, focus, number of rhetorical cues) have an effect on numerous textual features of the response; for example, fluency, syntactic complexity, lexical sophistication, and cohesion. The qualitative results indicate that similar characteristics of the writing prompt can have an effect on how test takers select a prompt, and that the test time constraint interacts with the prompt characteristics to affect how test takers plan and compose their responses. The topic and the number of rhetorical cues are the prompt characteristics that have the greatest effect on test taking processes. The main conclusion drawn from the study findings is that several prompt characteristics should be controlled if prompts are to be considered equivalent. Without controlling certain prompt characteristics, both test taking processes and the written product will vary as a result of the prompt. The findings raise some serious questions regarding the inferences that may legitimately be drawn from writing scores. 
The findings provide clear guidance on prompt characteristics that should be controlled to help ensure that prompts present an equivalent challenge and opportunity to test takers to demonstrate their writing proficiency. This thesis makes an original contribution to the second language writing assessment literature in the detailed understanding of the relationships between specific prompt characteristics and textual features of the response.
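The textual-feature analysis described in this abstract can be illustrated with a toy sketch. The two metrics below (token count as a crude fluency proxy, type-token ratio as a crude lexical-diversity proxy) are assumptions for illustration, not the study's actual measures or toolset, and the sample responses are invented.

```python
# Toy sketch: scoring prompt responses for two simple textual features.
def fluency(text):
    """Crude fluency proxy: number of whitespace-separated tokens."""
    return len(text.split())

def type_token_ratio(text):
    """Crude lexical-diversity proxy: unique tokens / total tokens."""
    tokens = [t.lower().strip(".,") for t in text.split()]
    return len(set(tokens)) / len(tokens)

# Invented responses to two hypothetical prompts.
resp_a = ("The city should invest in public transport because public "
          "transport reduces congestion")
resp_b = ("Investment in rail, cycling and bus corridors demonstrably "
          "alleviates urban congestion")

features_a = (fluency(resp_a), type_token_ratio(resp_a))
features_b = (fluency(resp_b), type_token_ratio(resp_b))
```

In the study proper, such per-response features would be aggregated by prompt and tested for significant between-prompt differences.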
5

Investigation into the features of written discourse at levels B2 and C1 of the CEFR

Waller, Daniel January 2015 (has links)
Validation in language testing is an ongoing process in which information is collected through investigations into the design, implementation, products and impacts of an assessment (Sireci, 2007). This includes the cognitive processes elicited from candidates by a test (Weir, 2005). This study investigated the English Speaking Board's ESOL International examinations at levels B2 and C1 of the CEFR. The study considered the role of discourse competence in successful performances by examining the cognitive phases employed by candidates and their use of metadiscourse markers, and whether these fitted with models such as the CEFR and Field (2004), thereby contributing to the validation argument. The study had two strands. The process strand was largely qualitative and focussed on the cognitive processes which candidates used to compose their texts. Verbal reports were carried out with a total of twelve participants, six at each level. The product strand analysed the use of metadiscourse markers in the scripts of sixty candidates in order to identify developing features of discourse competence at levels B2 and C1. The process strand identified statistically significant differences in the cognitive phases employed by the participants. The investigation also identified a number of differences in what B2 and C1 learners attended to while carrying out the different phases. The product strand found no statistically significant differences in the use of metadiscourse markers by candidates at the two levels, but observed differences in the way particular metadiscourse markers were employed. These differences indicate the direction for a possible larger-scale study. Unlike previous studies into metadiscourse (Burneikaite, 2008; Plakans, 2009; Bax, Nakatsuhara & Waller, forthcoming), the study controlled for task, text type, rhetorical pattern and nationality. 
The study suggested that discourse competence contributed to higher-level performances in writing and that the examinations under investigation elicited a wide range of cognitive phases from C1 candidates. The study also suggested that many of the CEFR’s statements about the development of discourse competence at the higher levels are correct.
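The product strand's marker analysis can be pictured with a minimal sketch. The marker list below is an invented stand-in, not the study's metadiscourse taxonomy, and the sample script is fabricated.

```python
# Toy sketch: counting occurrences of metadiscourse markers in a script.
MARKERS = {"however", "therefore", "firstly", "in addition", "for example"}

def count_markers(text, markers=MARKERS):
    """Return counts of each marker that appears at least once in the text."""
    t = text.lower()
    return {m: t.count(m) for m in markers if t.count(m) > 0}

script = ("Firstly, tourism brings income. However, it strains local "
          "services. Therefore, planning matters. In addition, it may, "
          "for example, raise prices.")
counts = count_markers(script)
```

Per-script counts like these, collected for sixty candidates, would then be compared across the B2 and C1 groups.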
6

Spontaneous speech analysis for detecting mild cognitive impairment and Alzheimer's disease in Thai older adults

Na Chiangmai, Natinee 17 October 2023 (has links)
Memory deficits in Alzheimer's disease (AD) and mild cognitive impairment (MCI) can be reflected in language-based tests, especially spontaneous speech tasks. Three spontaneous speech tests were developed in this study: Thai Picture Description (TPD), Thai Story Recall (TSR), and the Semi-structured Interview for Thai (SIT). Ninety-eight Thai older adults underwent screening tests and the three spontaneous speech tests, and were then classified into three groups: healthy control (HC), MCI, and AD. Their verbal responses were extracted into content variables and acoustic features. The discriminant ability and accuracy in differentiating HC, MCI, and AD were then examined by Multivariate Discriminant Analysis (MDA) and by analysis of the ROC curve and area under the curve (AUC). Two content variables showed significant differences among the three groups of participants: the correct information unit (CIU) count of the TPD and the delayed recall scores of the TSR. For acoustic features, ANOVAs revealed that three variables differed significantly among the three experimental groups: total utterance time in delayed recall, and number of voice breaks in the TPD and the SIT. Stepwise estimation in MDA showed that the best predictive model combined CIU and backward digit span (BDS), providing 61.1% correct classification. This discriminant function showed an AUC of .81 in differentiating HC and MCI, an AUC of .91 in distinguishing HC and AD, and an AUC of .86 in detecting persons with cognitive impairments (MCI and AD) among HC. In conclusion, the combination of the CIU of the TPD and BDS is suitable for differentiating AD and persons with cognitive impairments from HC. However, no appropriate predictor was found for distinguishing MCI from AD.
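The ROC/AUC evaluation step can be sketched directly from its rank-based definition. The CIU scores below are invented for illustration; this is not the study's data or discriminant function, just the AUC computation applied to a single predictor separating controls (label 0) from impaired participants (label 1).

```python
# Hedged sketch: ROC AUC as the probability that a randomly chosen positive
# case scores higher than a randomly chosen negative case (ties count half).
def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented CIU counts: first four are controls, last four are impaired.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
ciu    = [18, 15, 17, 14, 9, 11, 8, 16]

# Lower CIU indicates impairment here, so negate the score before ranking.
auc_val = auc(labels, [-c for c in ciu])
```

An AUC near 1 would indicate near-perfect separation; values like the study's .81 to .91 indicate strong but imperfect discrimination.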
7

Automated Event-driven Security Assessment

January 2014 (has links)
abstract: With the growth of IT products and sophisticated software in various operating systems, I observe that security risks in systems are constantly skyrocketing. Consequently, security assessment is now considered one of the primary security mechanisms for measuring the assurance of systems, since systems that are not compliant with security requirements may allow adversaries to access critical information by circumventing security practices. To ensure security, considerable effort has been spent developing security regulations that codify security best practices. Applying shared security standards to a system is critical for understanding vulnerabilities and preventing well-known threats from exploiting them. However, many end users tend to change the configurations of their systems without paying attention to security. Hence, it is not straightforward to protect systems from being changed by unwitting users in a timely manner. Detecting the installation of harmful applications is not sufficient, since attackers may exploit commonly used software as well as risky software. In addition, checking the assurance of security configurations periodically is disadvantageous in terms of time and cost, due to zero-day attacks and timing attacks that can leverage the window between security checks. Therefore, an event-driven monitoring approach is critical: it continuously assesses the security of a target system without ignoring any window between security checks, and it lessens the burden of the exhaustive task of inspecting the entire configuration of the system. Furthermore, the system should be able to generate a vulnerability report for any change initiated by a user if the change relates to requirements in the standards and turns out to be vulnerable. Assessing various systems in distributed environments also requires consistently applying standards to each environment. 
Such a uniform, consistent assessment is important because the approach for detecting security vulnerabilities may vary across applications and operating systems. In this thesis, I introduce an automated event-driven security assessment framework to overcome and accommodate the aforementioned issues. I also discuss the implementation details, which are based on commercial-off-the-shelf technologies, and the testbed established to evaluate the approach. Finally, I describe evaluation results that demonstrate the effectiveness and practicality of the approach. / Dissertation/Thesis / M.S. Computer Science 2014
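The event-driven idea described in this abstract, checking each configuration change against a security baseline as it happens rather than scanning periodically, can be sketched minimally. The baseline keys, rules, and event shape below are hypothetical illustrations, not the thesis's framework or any real standard's checks.

```python
# Minimal hypothetical sketch: assess each config-change event on arrival.
BASELINE = {
    "ssh.permit_root_login": "no",   # assumed hardening requirement
    "firewall.enabled": "yes",       # assumed hardening requirement
}

def assess_event(event, baseline=BASELINE):
    """Return a vulnerability finding for a config-change event, or None."""
    key, new_value = event["key"], event["value"]
    required = baseline.get(key)
    if required is not None and new_value != required:
        return {"key": key, "got": new_value, "expected": required}
    return None  # change is compliant or outside the baseline's scope

# Two simulated user-initiated changes: one violates the baseline, one is
# irrelevant to it, so exactly one finding is reported.
events = [
    {"key": "ssh.permit_root_login", "value": "yes"},
    {"key": "motd.text", "value": "hello"},
]
findings = [f for f in (assess_event(e) for e in events) if f]
```

A real implementation would subscribe to OS change-notification events and map baseline rules from published security standards; the sketch only shows the per-event assessment step.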
8

Guest editors’ introduction

Takala, Sauli, Voss, Bernd 14 July 2020 (has links)
This special issue of Language Learning in Higher Education is devoted to the field of language testing and assessment, an area often underrated in higher education, where other concerns tend to be more in the focus of attention. Our call for papers made clear that our aim was “to cover a wide range of interrelated themes, in theory and practice, such as assessment and self-assessment, formative and summative assessment, performance standards and standard setting, use and impact of tests, tailoring and developing tests for special purposes, backwash effects (desirable/undesirable), quality issues and ethical concerns. Also considered would be contributions dealing with programme assessment and evaluation . . .” In other words, we were inviting contributions from a wider range of perspectives than is often associated with this field. As a result, the 12 articles selected and presented here cover a rather wide variety of issues often more concerned with the users of language tests, i.e. with those who have to apply them, to develop them within their own institutional constraints, and to interpret and defend the results, than with full-time researchers talking to full-time researchers.
9

Validating a set of Japanese EFL proficiency tests : demonstrating locally designed tests meet international standards

Dunlea, Jamie January 2015 (has links)
This study applied the latest developments in language testing validation theory to derive a core body of evidence that can contribute to the validation of a large-scale, high-stakes English as a Foreign Language (EFL) testing program in Japan. The testing program consists of a set of seven level-specific tests targeting different levels of proficiency. This core aspect of the program was selected as the main focus of this study. The socio-cognitive model of language test development and validation provided a coherent framework for the collection, analysis and interpretation of evidence. Three research questions targeted core elements of a validity argument identified in the literature on the socio-cognitive model. RQ 1 investigated the criterial contextual and cognitive features of tasks at different levels of proficiency. Expert judgment and automated analysis tools were used to analyze a large bank of items administered in operational tests across multiple years. RQ 2 addressed empirical item difficulty across the seven levels of proficiency. An innovative approach to vertical scaling was used to place previously administered items from all levels onto a single Rasch-based difficulty scale. RQ 3 used multiple standard-setting methods to investigate whether the seven levels could be meaningfully related to an external proficiency framework. In addition, the study identified three subsidiary goals: firstly, to evaluate the efficacy of applying international standards of best practice to a local context; secondly, to critically evaluate the model of validation; and thirdly, to generate insights directly applicable to operational quality assurance. The study provides evidence across all three research questions to support the claim that the seven levels in the program are distinct. At the same time, the results provide insights into how to strengthen explicit task specification to improve consistency across levels. 
This study is the largest application of the socio-cognitive model in terms of the amount of operational data analyzed, and thus makes a significant contribution to the ongoing study of validity theory in the context of language testing. While the study demonstrates the efficacy of the socio-cognitive model selected to drive the research design, it also provides recommendations for further refining the model, with implications for the theory and practice of language testing validation.
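What the vertical scaling step buys can be illustrated with a short sketch: once items from all seven levels sit on a single Rasch-based difficulty scale, their difficulties are directly comparable and one formula gives any candidate's expected performance on any item. The level labels and logit values below are assumptions for illustration, not the study's estimates.

```python
# Illustrative sketch of a common Rasch difficulty scale across levels.
from math import exp

def rasch_p(ability, difficulty):
    """Rasch probability of a correct response, both in logits."""
    return 1.0 / (1.0 + exp(-(ability - difficulty)))

# Hypothetical mean item difficulties (logits) for three adjacent levels
# after vertical scaling: higher levels map to harder items on one scale.
level_difficulty = {"lower": -1.0, "middle": 0.2, "upper": 1.4}

# A candidate whose ability equals the middle level's difficulty should
# find a typical middle-level item a 50/50 proposition.
p_middle = rasch_p(0.2, level_difficulty["middle"])
```

The study's finding that the seven levels are distinct corresponds, in this picture, to the level difficulty distributions occupying separate regions of the common scale.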
10

Plošné testování jazykových dovedností žáků se sluchovým postižením / National testing of the language skills of pupils with hearing impairment

Chmátalová, Zuzana January 2015 (has links)
This diploma thesis deals with the national assessment of the language abilities of hearing-impaired pupils, focusing more closely on pre-lingually deaf pupils at the end of primary and lower secondary school. Starting from a definition of the characteristics of national assessment, the thesis surveys currently used language tests, both at the international level and in selected foreign education systems. Special attention is devoted to the assessment of language abilities in the Czech Republic and to the forms in which deaf pupils participate in such testing. The practical part presents research conducted to find out the views of those involved in the education of pupils with hearing disorders on current practice in assessing these pupils' language skills. The final part outlines possible innovations in testing, drawing on the findings of the preceding parts.
