1 |
Can Using Online Formative Assessment Boost the Academic Performance of Business Students? An Empirical Study. Oellermann, Susan Wilma; Van der Merwe, Alexander Dawid. January 2015
The declining quality of first-year student intake at the Durban University of Technology (DUT) prompted the addition of online learning to traditional instruction. The time students spent in an online classroom and their scores in subsequent multiple-choice question (MCQ) tests were measured. Tests on standardised regression coefficients showed self-test time to be a significant predictor of summative MCQ performance while controlling for ability. Exam MCQ performance was positively and significantly associated with annual self-test time at the 5 percent level, and a significant relationship was found between MCQ marks and year marks. It was concluded that students’ use of the self-test tool in formative assessments has a significant bearing on their year marks and final grades. The negative standardised beta coefficient for gender indicates that, when year marks and annual self-test time are considered, males appear to have performed slightly better than females.
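The core analysis described here, a standardised regression with self-test time predicting MCQ performance while controlling for ability, can be sketched as follows. The data, effect sizes and variable names are illustrative assumptions, not the study's.

```python
import numpy as np

def standardized_betas(X, y):
    """OLS on z-scored predictors and outcome; the resulting
    coefficients are standardized betas, comparable across predictors."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

# Synthetic, illustrative data: annual self-test hours and prior ability.
rng = np.random.default_rng(0)
n = 200
self_test = rng.normal(10, 3, n)
ability = rng.normal(60, 10, n)
mcq_mark = 0.4 * self_test + 0.5 * ability + rng.normal(0, 5, n)

X = np.column_stack([self_test, ability])
betas = standardized_betas(X, mcq_mark)
print(betas)  # self-test time keeps a positive beta after controlling for ability
```

Because both predictors and outcome are z-scored, no intercept is needed and the coefficients can be compared directly, which is what makes a "standardised beta for gender" interpretable in the abstract's sense.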
|
2 |
Multiple-choice questions: linguistic investigation of difficulty for first-language and second-language students. Sanderson, Penelope Jane. 11 1900
Multiple-choice questions are acknowledged to be difficult for both English mother-tongue and second-language university students to interpret and answer. In a context in which university tuition policies are demanding explicitly that assessments need to be designed and administered in such a way that no students are disadvantaged by the assessment process, the thesis explores the fairness of multiple-choice questions as a way of testing second-language students in South Africa. It explores the extent to which two multiple-choice Linguistics examinations at Unisa are in fact ‘generally accessible’ to second-language students, focusing on what kinds of multiple-choice questions present particular problems for second-language speakers and what contribution linguistic factors make to these difficulties.
Statistical analysis of the examination results of two classes of students writing multiple-choice exams in first-year Linguistics is coupled with a linguistic analysis of the examination papers to establish the readability level of each question and to determine whether the questions adhered to eight item-writing guidelines relating to maximising readability and avoiding negatives, long items, incomplete sentence stems, similar answer choices, grammatically non-parallel answer choices, and ‘All-of-the-above’ and ‘None-of-the-above’ items. Correlations are sought between question difficulty and aspects of the language of these questions, and an attempt is made to investigate the respective contributions of cognitive difficulty and linguistic difficulty to student performance.
To complement the quantitative portion of the study, a think-aloud protocol was conducted with 13 students in an attempt to gain insight into the problems experienced by individual students in reading, understanding and answering multiple-choice questions. The consolidated quantitative and qualitative findings indicate that the linguistic aspects of questions contributing to question difficulty for second-language speakers included a high density of academic words, long items and negative stems. These sources of difficulty should be addressed as far as possible during item-writing and editorial review of questions.
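A first-pass automatic screen for some of the item-writing guidelines discussed above (negative stems, long items, 'All/None-of-the-above' options) might look like this sketch. The negative-word list and length threshold are illustrative assumptions, not the instruments used in the thesis.

```python
import re

def flag_item(stem, options, max_stem_words=30):
    """Flag one MCQ item against a few item-writing guidelines.
    The word list and length threshold are illustrative assumptions."""
    flags = []
    if re.search(r"\b(not|never|except)\b", stem, re.IGNORECASE):
        flags.append("negative stem")
    if len(stem.split()) > max_stem_words:
        flags.append("long item")
    joined = " ".join(options).lower()
    if "all of the above" in joined:
        flags.append("all-of-the-above option")
    if "none of the above" in joined:
        flags.append("none-of-the-above option")
    return flags

print(flag_item("Which of these is NOT a phoneme of English?",
                ["/p/", "/b/", "/x/", "All of the above"]))
# → ['negative stem', 'all-of-the-above option']
```

A screen like this only catches the mechanical guidelines; readability scoring and the cognitive-difficulty analysis in the study require separate measures.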
|
3 |
Unsupervised relation extraction for e-learning applications. Afzal, Naveed. January 2011
In this modern era, many educational institutions and business organisations are adopting the e-Learning approach, as it provides an effective method for educating and testing their students and staff. The continuous development of information technology and the increasing use of the internet have resulted in a huge global market and rapid growth for e-Learning. Multiple Choice Tests (MCTs) are a popular form of assessment and are quite frequently used by many e-Learning applications, as they are well adapted to assessing factual, conceptual and procedural information. In this thesis, we present an alternative to the lengthy and time-consuming activity of developing MCTs manually by proposing a Natural Language Processing (NLP) based approach that relies on semantic relations extracted using Information Extraction to automatically generate MCTs. Information Extraction (IE) is an NLP field used to recognise the most important entities present in a text, and the relations between those entities, regardless of their surface realisations. In IE, text is processed at a semantic level that allows a partial representation of the meaning of a sentence to be produced. IE has two major subtasks: Named Entity Recognition (NER) and Relation Extraction (RE). In this work, we present two unsupervised RE approaches (surface-based and dependency-based). The aim of both approaches is to identify the most important semantic relations in a document without assigning explicit labels to them, in order to ensure broad coverage unrestricted to predefined types of relations. In the surface-based approach, we examined different surface pattern types, each implementing different assumptions about the linguistic expression of semantic relations between named entities, while in the dependency-based approach we explored how dependency relations based on dependency trees can be helpful in extracting relations between named entities.
Our findings indicate that the presented approaches are capable of achieving high precision rates. Our experiments make use of traditional, manually compiled corpora along with similar corpora automatically collected from the Web. We found that an automatically collected web corpus is still unable to ensure the same level of topic relevance as attained in manually compiled traditional corpora. Comparison between the surface-based and the dependency-based approaches revealed that the dependency-based approach performs better. Our research enabled us to automatically generate questions about the important concepts present in a domain by relying on unsupervised relation extraction, since the extracted semantic relations allow us to identify key information in a sentence. The extracted patterns (semantic relations) are then automatically transformed into questions. In the surface-based approach, questions are automatically generated from sentences matched by the extracted surface-based semantic patterns, relying on a certain set of rules. Conversely, in the dependency-based approach, questions are automatically generated by traversing the dependency tree of each extracted sentence matched by the dependency-based semantic patterns. The MCQ systems produced from these surface-based and dependency-based semantic patterns were extrinsically evaluated by two domain experts in terms of question and distractor readability, usefulness of semantic relations, relevance, acceptability of questions and distractors, and overall MCQ usability. The evaluation results revealed that the MCQ system based on dependency-based semantic relations performed better than the surface-based one. A major outcome of this work is an integrated system for MCQ generation that has been evaluated by potential end users.
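As a minimal illustration of the surface-based idea, treating the token sequence between two named entities as a candidate relation pattern, consider the following sketch. The `<E>…</E>` entity markup and the toy corpus are assumptions for illustration; the thesis's actual pipeline is considerably more elaborate.

```python
import re
from collections import Counter

def surface_patterns(sentences):
    """Count the token spans that occur between consecutive pairs of
    pre-tagged named entities; frequent spans are candidate relations."""
    counts = Counter()
    for s in sentences:
        ents = list(re.finditer(r"<E>(.*?)</E>", s))
        for a, b in zip(ents, ents[1:]):
            between = s[a.end():b.start()].strip()
            if between:
                counts[between] += 1
    return counts

corpus = [
    "<E>Mitochondria</E> are the powerhouse of <E>the cell</E>.",
    "<E>Ribosomes</E> are the site of <E>protein synthesis</E>.",
]
patterns = surface_patterns(corpus)
print(patterns.most_common())
```

A matched pattern can then feed question generation in the abstract's sense: one entity is replaced by a wh-phrase, the other becomes the key, and other extracted entities serve as distractor candidates.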
|
5 |
Effects of Interspersing Recall versus Recognition Questions with Response Cards During Lectures on Students' Academic and Participation Behaviors in a College Classroom. Singer, Leslie S. 13 November 2018
Instructional design and delivery may be one tool available to teachers for increasing the academic and social behaviors of all students in the classroom. Effective instruction is an evidence-based teaching strategy that can be used to efficiently educate students across all learning environments. One effective instructional strategy is to increase students’ opportunities to respond to instructor-posed questions during lectures; students may respond using a response card system as a way to promote active engagement. This study examined the two most common forms of instructor-posed questions presented during lecture, recall and recognition questions, to determine their differential effects on students’ academic and participation behavior in a college classroom. Results showed no differentiation in students’ academic behavior with respect to question type. Students’ participation behavior was greater when the instructor used class-wide active responding procedures than in baseline conditions representing typical college instruction.
|
6 |
Algorithms for assessing the quality and difficulty of multiple choice exam questions. Luger, Sarah Kaitlin Kelly. January 2016
Multiple Choice Questions (MCQs) have long been the backbone of standardized testing in academia and industry. Correspondingly, there is a constant need for the authors of MCQs to write and refine new questions for new versions of standardized tests, as well as to support measuring performance in the emerging massive open online courses (MOOCs). Research that explores what makes a question difficult, or which questions distinguish higher-performing students from lower-performing students, can aid in the creation of the next generation of teaching and evaluation tools. In the automated MCQ answering component of this thesis, algorithms query for definitions of scientific terms, process the returned web results, and compare the returned definitions to the original definition in the MCQ. This automated method for answering questions is then augmented with a model, based on human performance data from crowdsourced question sets, for analysis of question difficulty as well as the discrimination power of the non-answer alternatives. The crowdsourced question sets come from PeerWise, an open-source online college-level question authoring and answering environment. The goal of this research is to create an automated method to both answer and assess the difficulty of multiple-choice inverse-definition questions in the domain of introductory biology. The results of this work suggest that human-authored question banks provide useful data for building gold-standard human performance models. The methodology for building these performance models has value in other domains that test the difficulty of questions and the ability of the exam takers.
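Classical item-analysis statistics of the kind used to model question difficulty and the discrimination power of alternatives can be computed as below: difficulty as the proportion of examinees answering correctly, and discrimination as the difference in that proportion between high and low overall scorers. The 27% group split is a common convention and an assumption here, not necessarily the thesis's choice; the data are illustrative.

```python
def item_stats(responses, total_scores, frac=0.27):
    """Classical item analysis for one item.
    responses: 0/1 correctness per examinee; total_scores: each
    examinee's overall test score. Returns (difficulty, discrimination)."""
    n = len(responses)
    difficulty = sum(responses) / n  # proportion correct
    k = max(1, int(n * frac))
    order = sorted(range(n), key=lambda i: total_scores[i])
    low = sum(responses[i] for i in order[:k]) / k    # bottom group
    high = sum(responses[i] for i in order[-k:]) / k  # top group
    return difficulty, high - low

resp = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
scores = [9, 8, 8, 7, 6, 5, 4, 3, 9, 2]
p, d = item_stats(resp, scores)
print(p, d)  # → 0.5 1.0
```

An item with difficulty near 0.5 and high discrimination is the kind that best distinguishes higher-performing from lower-performing students, which is the property the research aims to predict automatically.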
|
7 |
Multiple-choice and short-answer questions in language assessment: the interplay between item format and second language reading. Liao, Jui-Teng. 01 May 2018
Multiple-choice (MCQs) and short-answer questions (SAQs) are the most common test formats for assessing English reading proficiency. While the former provides test-takers with prescribed options, the latter requires short written responses. Test developers favor MCQs over SAQs for the following reasons: less time required for rating, high rater agreement, and wide content coverage. This mixed methods dissertation investigated the impacts of test format on reading performance, metacognitive awareness, test-completion processes, and task perceptions.
Participants were eighty English as a second language (ESL) learners from a Midwestern community college. They were first divided into two groups of approximately equivalent reading proficiencies and then completed MCQ and SAQ English reading tests in different orders. After completing each format, participants filled out a survey about demographic information, strategy use, and perceptions of test formats. They also completed a 5-point Likert-scale survey to assess their degree of metacognitive awareness. At the end, sixteen participants were randomly chosen to engage in retrospective interviews focusing on their strategy use and task perceptions.
This study employed a mixed methods approach in which quantitative and qualitative strands converged to draw an overall meta-inference. For the quantitative strand, descriptive statistics, paired sample t-tests, item analyses, two-way ANOVAs, and correlation analyses were conducted to investigate 1) the differences between MCQ and SAQ test performance and 2) the relationship between test performance and metacognitive awareness. For the qualitative strand, test-takers’ MCQ and SAQ test completion processes and task perceptions were explored using coded interview and survey responses related to strategy use and perceptions of test formats.
Results showed that participants performed differently on MCQ and SAQ reading tests, even though scores on the two tests were highly correlated. The paired sample t-tests revealed that participants’ English reading and writing proficiencies might account for the disparity between MCQ and SAQ performance. Moreover, there was no positive relationship between reading test performance and the degree of metacognitive awareness, as indexed by the frequency of strategy use. Correlation analyses suggested that participants’ English reading proficiency was more important to test performance than their strategy use. Although the frequency of strategy use did not benefit test performance, strategies implemented for MCQ and SAQ tests were found to generate interactive processes allowing participants to gain a deeper understanding of the source texts. Furthermore, participants’ perceptions of MCQs, SAQs, and a combination of both revealed positive and negative influences among test format, reading comprehension, and language learning. Participants’ preferences regarding test format should therefore be considered when measuring their English reading proficiency. This study has pedagogical implications for the use of various test formats in L2 reading classrooms.
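The paired-sample t-test used to compare scores from the same examinees on the two formats can be sketched from first principles (in practice a library routine such as scipy.stats.ttest_rel would be used); the score lists below are illustrative, not the study's data.

```python
import math

def paired_t(x, y):
    """Paired-samples t statistic for two score lists from the same
    examinees; returns (t, degrees of freedom)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n), n - 1

# Illustrative percentage scores for eight examinees on the two formats.
mcq = [78, 85, 69, 90, 74, 81, 88, 72]
saq = [70, 80, 65, 84, 70, 75, 82, 69]
t, df = paired_t(mcq, saq)
print(round(t, 2), df)  # → 9.39 7
```

Pairing each examinee's two scores, rather than comparing group means, removes between-person ability differences from the comparison, which is why this design suits a within-subjects format study.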
|
8 |
A QUANTITATIVE STUDY EXAMINING THE RELATIONSHIP BETWEEN LEARNING PREFERENCES AND STANDARDIZED MULTIPLE CHOICE ACHIEVEMENT TEST PERFORMANCE OF NURSE AIDE STUDENTS. Neupane, Ramesh. 01 May 2019
The purpose of the research was to investigate differences in standardized multiple-choice achievement test performance, as measured by the Illinois Nurse Aide Competency Examination (INACE), across learning preferences (Active-Reflective, Sensing-Intuitive, Visual-Verbal, and Sequential-Global, determined by the Index of Learning Styles) and gender (male and female) among nurse aide students. Performance was examined both overall and across the INACE's six duty areas: communicating information, performing basic nursing skills, performing personal care, performing basic restorative skills, providing mental-health services, and providing for residents' rights. The study explored the relationships between these variables using a non-experimental, comparative and descriptive approach. Participants were nurse aide students who had completed the Illinois-approved Basic Nurse Aide Training (BNAT) and the 21 mandated skills assessments and were scheduled to take the INACE in October or December 2018 at community colleges across the state of Illinois. A sample of 800 nurse aide students was selected through stratified (north, central, and south) random sampling, of whom N = 472 participated, constituting the actual sample.
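The stratified (north, central, south) random sampling described above can be sketched as follows; the stratum sizes, per-stratum quota, record fields and seed are illustrative assumptions, not the study's sampling frame.

```python
import random

def stratified_sample(population, stratum_of, per_stratum, seed=42):
    """Draw a simple random sample of a fixed size from each stratum."""
    rng = random.Random(seed)
    groups = {}
    for unit in population:
        groups.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for members in groups.values():
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# Illustrative frame: 400 students per region, quota of 267 per stratum.
frame = [{"id": i, "region": r}
         for i, r in enumerate(["north", "central", "south"] * 400)]
sample = stratified_sample(frame, lambda s: s["region"], 267)
print(len(sample))  # → 801
```

Sampling within each geographic stratum, rather than from the state as a whole, guarantees that every region is represented in the selected 800 before non-response reduces the actual sample.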
|
9 |
In-Hospital Cardiac Arrest: A Study of Education in Cardiopulmonary Resuscitation and its Effects on Knowledge, Skills and Attitudes among Healthcare Professionals and Survival of In-Hospital Cardiac Arrest Patients. Södersved Källestedt, Marie-Louise. January 2011
This thesis investigated whether the outcome of in-hospital cardiac arrest could be improved by a cardiopulmonary resuscitation (CPR) educational intervention directed at all hospital healthcare professionals. Annually in Sweden, approximately 3000 in-hospital patients suffer a cardiac arrest in which CPR is attempted, of whom about 900 survive. The thesis is based on five papers. Paper I was a methodological study that produced a reliable multiple-choice questionnaire (MCQ) for measuring CPR knowledge. Paper II was an intervention study in which 3144 healthcare professionals were educated in CPR. The MCQ from Paper I was answered by the healthcare professionals both before (82% response rate) and after (98% response rate) the education, and theoretical knowledge improved in all groups of healthcare professionals. Paper III was an observational laboratory study of the practical CPR skills of 74 healthcare professionals. Willingness to use an automated external defibrillator (AED) generally improved after education, and there were no major differences in CPR skills between the professions. Paper IV used a questionnaire to investigate the attitudes to CPR of 2152 healthcare professionals (82% response rate); a majority reported a positive attitude to resuscitation. Paper V was a register study of patients suffering cardiac arrest. The intervention tended not to reduce the delay to the start of treatment or to increase overall survival, but the results suggested indirect signs of improved cerebral function among survivors. In conclusion, CPR education and the introduction of in-hospital AEDs improved healthcare professionals’ knowledge, skills and attitudes but did not improve patients’ survival to hospital discharge, although the functional status of survivors improved.
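Paper I reports developing a reliable MCQ instrument. A standard reliability statistic for dichotomously scored items is Kuder-Richardson 20 (KR-20); the abstract does not name the statistic used, so treating KR-20 as the measure here is an assumption, and the response matrix below is illustrative.

```python
def kr20(matrix):
    """Kuder-Richardson 20 for a 0/1 response matrix
    (rows = examinees, columns = items)."""
    n = len(matrix)
    n_items = len(matrix[0])
    totals = [sum(row) for row in matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / (n - 1)
    pq = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in matrix) / n  # item difficulty
        pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq / var_t)

# Illustrative 5-examinee, 4-item response matrix.
m = [[1, 1, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0],
     [1, 1, 1, 1],
     [0, 0, 0, 0]]
print(round(kr20(m), 2))  # → 0.91
```

Values near 1 indicate that the items measure a consistent underlying construct, which is the sense in which an MCQ knowledge instrument is called "reliable".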
|
10 |
The development of a framework for evaluating e-assessment systems. Singh, Upasana Gitanjali. 11 1900
Academics encounter problems with the selection, evaluation, testing and implementation of e-assessment software tools. The researcher experienced these problems while adopting e-assessment at the university where she is employed. Hence she undertook this study, which is situated in schools and departments in Computing-related disciplines, namely Computer Science, Information Systems and Information Technology at South African Higher Education Institutions. The literature suggests that further research is required in this domain. Furthermore, preliminary empirical studies indicated similar disabling factors at other South African tertiary institutions, which were barriers to long-term implementation of e-assessment. Despite this, academics who are adopters of e-assessment indicate satisfaction, particularly when conducting assessments with large classes. Questions of the multiple choice genre can be assessed automatically, leading to increased productivity and more frequent assessments. The purpose of this research is to develop an evaluation framework to assist academics in determining which e-assessment tool to adopt, enabling them to make more informed decisions. Such a framework would also support evaluation of existing e-assessment systems.
The underlying research design is action research, which supported an iterative series of studies for developing, evaluating, applying, refining, and validating the SEAT (Selecting and Evaluating an e-Assessment Tool) Evaluation Framework and subsequently an interactive electronic version, e-SEAT. Phase 1 of the action research comprised Studies 1 to 3, which established the nature, context and extent of adoption of e-assessment. This set the foundation for development of SEAT in Phase 2. During Studies 4 to 6 in Phase 2, a rigorous sequence of evaluation and application facilitated the transition from the manual SEAT Framework to the electronic evaluation instrument, e-SEAT, and its further evolution.
This research resulted in both a theoretical contribution (SEAT) and a practical contribution (e-SEAT). The findings of the action research contributed, along with the literature, to the categories and criteria in the framework, which in turn, contributed to the bodies of knowledge on MCQs and e-assessment.
The final e-SEAT version, the ultimate product of this action research, is presented in Appendix J1. For easier reference, the Appendices are included on a CD, attached to the back cover of this thesis. / Computing / PhD. (Information Systems)
|