31 |
Test-wiseness and background knowledge: Their relative contributions to high test performance. Roberson, Daniel Bennett, 07 August 2020 (has links)
When given a multiple-choice test over unfamiliar material, students may score significantly above chance levels. This performance may be explained by prior knowledge of the material or by “test-wiseness,” determining the correct answer from cues present in the test itself. Participants answered questions from an introductory psychology test bank in two formats, a question stem with a single alternative and a traditional four-alternative multiple-choice item, and reported what sources of information they used to answer each question. In the single-alternative condition, participants reached an accuracy of 42.2%, 17.2 percentage points above the chance baseline of 25%; average accuracy in the multiple-choice condition was 40.75%. Participants who stated they had previously learned the material showed no significant difference in accuracy from those who stated they had guessed. These findings suggest that test scores may be inflated, reflecting test-wiseness and prior knowledge more than formal learning of the test materials.
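A minimal sketch of the chance-level comparison this abstract reports: a one-sided binomial test of whether observed accuracy exceeds the 25% guessing baseline. The test length below is an assumption for illustration; the abstract does not report the number of items.

```python
# Hypothetical above-chance check; n_items is an assumption, not a
# figure from the study. Accuracy and baseline come from the abstract.
from scipy.stats import binomtest

n_items = 64                # assumed test length
accuracy = 0.422            # single-alternative condition (from the abstract)
chance = 0.25               # guessing baseline with four alternatives
k = round(n_items * accuracy)

result = binomtest(k, n_items, p=chance, alternative="greater")
print(f"{k}/{n_items} correct, p = {result.pvalue:.4f} against chance")
```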
|
32 |
College Students' Behavior on Multiple Choice Self-Tailored Exams in Relation to Metacognitive Ability, Self-efficacy, and Test Anxiety. Vuk, Jasna, 09 August 2008 (has links)
The purpose of this study was to observe college students’ behavior on five self-tailored, multiple-choice exams throughout a semester in relation to: a) metacognitive ability, b) self-efficacy expectations, and c) test anxiety. Additionally, the effect of the self-tailoring procedure on exam scores and on the content validity of the tests was observed. Self-tailored testing was defined as an option in which students selected up to five questions they wanted to omit from being scored on an exam. Students’ metacognitive ability was defined as the percentage of incorrectly answered questions out of the total number omitted. Ninety-nine college students from two sections of an undergraduate educational psychology course participated in this study. Eighty students completed the study; seventy-one used the option to omit questions on all exams. Before taking exam 1, students answered measures of self-efficacy and test anxiety. After completing each of the five course exams, students marked on the back of their answer sheet up to five questions they wanted omitted from scoring. After exam 5, students answered a questionnaire that addressed their perception of the self-tailoring procedure. MANOVA, repeated-measures ANOVA, Pearson correlations, t-tests, and one-way ANOVA were conducted. Students made a statistically significant increase in their scores on all exams by using the question-omitting procedure. There was a statistically significant linear increase across the five exams in the percentage of incorrectly answered questions out of the total number omitted. The frequency with which students omitted items from scoring was significantly negatively correlated with item difficulty values. The content validity of the test was affected on two out of five exams based on the cognitive level of items and on three out of five exams based on chapter coverage. Students’ self-efficacy expectations and test anxiety were related neither to the likelihood of applying the self-tailoring procedure nor to the degree of success students had in applying it. The study provided a new perspective on self-tailored tests in the college classroom, with implications for teaching, assessment, and students’ metacognitive abilities.
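As a rough illustration of the two quantities defined above, the sketch below computes the metacognitive-ability percentage and the score change from omitting items. All item numbers and counts are invented; they are not the study's data.

```python
# Assumed data: which items a student omitted (up to five) and which
# items the student answered incorrectly on an 80-item exam.
omitted = {3, 11, 24, 37, 48}
incorrect = {3, 11, 24, 52, 60, 71}
n_items = 80

# Metacognitive ability, per the abstract's definition: percentage of
# omitted items that were in fact answered incorrectly.
metacognitive_ability = len(omitted & incorrect) / len(omitted) * 100
print(f"Metacognitive ability: {metacognitive_ability:.0f}%")  # 3/5 -> 60%

# Score gain from self-tailoring: omitted items are dropped from
# scoring, so accuracy is recomputed over the remaining items.
raw_score = (n_items - len(incorrect)) / n_items
tailored_score = (n_items - len(omitted) - len(incorrect - omitted)) / (n_items - len(omitted))
print(f"Raw: {raw_score:.1%}  Self-tailored: {tailored_score:.1%}")  # 92.5% -> 96.0%
```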
|
33 |
The impact of multiple-choice item styles, judge experience and item taxonomy level on minimum passing standards and interscorer agreement. Zahran, Abd El Aziz H, January 1981 (has links)
No description available.
|
34 |
How do Students Regulate Their Use of Multiple Choice Practice Tests? Badali, Sabrina, 28 June 2022 (has links)
No description available.
|
35 |
An experiment in the use of objective tests of the multiple-choice type for review and motivation in the teaching of high school chemistry. Jared, John Charles, January 1966 (has links)
No description available.
|
36 |
Comparing 12 finite state models of examinee performance on multiple-choice tests. Zin, Than Than, 04 May 2006 (has links)
Finite state test theory models the response behavior of an examinee and establishes the relationship between the ability of the examinee and the observed responses on a multiple-choice test. In finite state modeling, various assumptions about item characteristics and examinees’ response strategies are made to estimate an examinee’s ability and willingness to guess.
Twelve sets of plausible assumptions about the identifiability of distractors and examinee guessing strategies were adopted, and the corresponding finite state models were actualized. Three consequences of adopting the 12 sets of assumptions were investigated: 1) the extent to which the resulting ability estimates rank-ordered the examinees similarly, 2) variation in the magnitude of the ability estimates and the estimated willingness to guess across the 12 models, and 3) the extent to which conclusions about examinee subgroups would differ according to the model employed. Conventional number-right scores were also compared with the finite state scores with respect to the three outcomes just listed.
All scoring methods rank-ordered the examinees essentially the same. The magnitude of the finite state scores varied considerably across models, mainly due to differing assumptions about the identifiability of distractors. Differing assumptions about examinee guessing strategy had surprisingly little effect on the magnitude of the ability estimates, though estimates of willingness to guess varied consistently according to the assumed strategies. Conclusions about group differences also varied across the models as a result of differing assumptions about both item characteristics and examinee guessing strategies.
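A hedged sketch of the rank-order comparison reported above: a Spearman correlation between two scoring methods' orderings of the same examinees. The scores are fabricated for illustration; none of the 12 finite state models is reproduced here.

```python
# Do two scoring methods order the same examinees the same way?
from scipy.stats import spearmanr

number_right = [34, 28, 41, 22, 37, 30]              # conventional scores (assumed)
finite_state = [0.71, 0.55, 0.88, 0.40, 0.79, 0.62]  # one model's estimates (assumed)

rho, p = spearmanr(number_right, finite_state)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# A rho near 1 means the two methods rank-order examinees essentially
# the same, even if the magnitudes of the scores differ.
```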
|
37 |
A QUANTITATIVE STUDY EXAMINING THE RELATIONSHIP BETWEEN LEARNING PREFERENCES AND STANDARDIZED MULTIPLE CHOICE ACHIEVEMENT TEST PERFORMANCE OF NURSE AIDE STUDENTS. Neupane, Ramesh, 01 May 2019 (has links)
The research purpose was to investigate the differences between learning preferences (i.e., Active-Reflective, Sensing-Intuitive, Visual-Verbal, and Sequential-Global), as determined by the Index of Learning Styles, and gender (i.e., male and female) with regard to standardized multiple-choice achievement test performance as determined by the Illinois Nurse Aide Competency Examination (INACE): overall INACE performance and INACE performance in six duty areas (i.e., communicating information, performing basic nursing skills, performing personal care, performing basic restorative skills, providing mental health services, and providing for residents’ rights). The study explored the relationships between variables using a non-experimental, comparative, and descriptive approach. The participants were nurse aide students who had completed the Illinois-approved Basic Nurse Aide Training (BNAT) and the 21 mandated skills assessments and were ready to take the INACE in October 2018 or December 2018 at various community colleges across the state of Illinois. A sample of 800 nurse aide students was selected through stratified (north, central, and south) random sampling, of which N = 472 participated in the study, constituting the actual sample.
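A minimal sketch of the stratified (north/central/south) random sampling described above. The roster and per-stratum allocation are assumptions; the study's actual allocation across strata is not reported in the abstract.

```python
# Stratified random sampling: draw from each region separately so the
# sample covers all three strata. Data below are fabricated.
import pandas as pd

roster = pd.DataFrame({
    "student_id": range(1, 2401),
    "region": ["north", "central", "south"] * 800,  # assumed strata
})

sample = (
    roster.groupby("region", group_keys=False)
          .sample(n=267, random_state=42)  # 267 per stratum, ~800 overall
)
print(sample["region"].value_counts())
```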
|
38 |
Characterizing Multiple-Choice Assessment Practices in Undergraduate General Chemistry. Jared B Breakall (8080967), 04 December 2019
Assessment of student learning is ubiquitous in higher education chemistry courses because it is the mechanism by which instructors assign grades, alter teaching practice, and help their students succeed. One type of assessment that is popular in general chemistry courses, yet difficult to create effectively, is the multiple-choice assessment. Despite its popularity, little is known about the extent to which multiple-choice general chemistry exams adhere to accepted design practices, or about the processes that general chemistry instructors engage in while creating these assessments. A fuller understanding of multiple-choice assessment quality and of the design practices of general chemistry instructors could inform efforts to improve multiple-choice assessment practice in the future. This work attempted to characterize multiple-choice assessment practices in undergraduate general chemistry classrooms by 1) conducting a phenomenographic study of general chemistry instructors’ assessment practices and 2) designing an instrument that can detect violations of item-writing guidelines in multiple-choice chemistry exams.
The phenomenographic study of general chemistry instructors’ assessment practices included 13 instructors from the United States who participated in a three-phase interview. They were asked to describe how they create multiple-choice assessments, to evaluate six multiple-choice exam items, and to create two multiple-choice exam items using a think-aloud protocol. The participating instructors considered many appropriate assessment design practices, yet did not use, or were not familiar with, all of those available to them.
Additionally, an instrument was developed that can be used to detect violations of item-writing guidelines in multiple-choice exams. The instrument, known as the Item Writing Flaws Evaluation Instrument (IWFEI), was shown to be reliable between users. Once developed, the IWFEI was used to analyze 1,019 general chemistry exam items. The instrument gives researchers a tool for studying adherence to item-writing guidelines, as well as a tool instructors can use to evaluate their own multiple-choice exams. It is hoped that use of the IWFEI will improve multiple-choice item-writing practice and quality.
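The IWFEI itself is not reproduced in this abstract. Purely as an illustration of rule-based flaw detection, the toy checker below flags three classic item-writing flaws; the rules and the example item are assumptions, not the instrument's actual criteria.

```python
# Toy item-writing-flaw checker (not the IWFEI).
import re

def flag_flaws(stem, options):
    """Flag a few classic item-writing flaws in one multiple-choice item."""
    flaws = []
    # Guideline: avoid negatively worded stems.
    if re.search(r"\b(not|except)\b", stem, re.IGNORECASE):
        flaws.append("negative wording in stem")
    # Guideline: avoid 'all of the above' / 'none of the above' options.
    if any(o.strip().lower() in ("all of the above", "none of the above")
           for o in options):
        flaws.append("'all/none of the above' option")
    # Guideline: keep options roughly equal in length.
    lengths = [len(o) for o in options]
    if max(lengths) > 2 * min(lengths):
        flaws.append("options markedly unequal in length")
    return flaws

print(flag_flaws("Which of the following is NOT a noble gas?",
                 ["Neon", "Argon", "Nitrogen", "All of the above"]))
# -> all three flaws are flagged for this deliberately bad item
```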
The results of this work provide insight into the multiple-choice assessment design practices of general chemistry instructors, along with an instrument that can be used to evaluate multiple-choice exams for adherence to item-writing guidelines. Conclusions, recommendations for professional development, and recommendations for future research are discussed.
|
39 |
Effects of feedback in computer-administered multiple-choice testing procedure and paper-and-pencil testing procedure. Leung, Man-tak, 梁文德, January 1984 (has links)
No description available.
|
40 |
Multiple-choice questions : linguistic investigation of difficulty for first-language and second-language students. Sanderson, Penelope Jane, 11 1900 (has links)
Multiple-choice questions are acknowledged to be difficult for both English mother-tongue and second-language university students to interpret and answer. In a context in which university tuition policies explicitly demand that assessments be designed and administered so that no students are disadvantaged by the assessment process, the thesis explores the fairness of multiple-choice questions as a way of testing second-language students in South Africa. It examines the extent to which two multiple-choice Linguistics examinations at Unisa are in fact ‘generally accessible’ to second-language students, focusing on what kinds of multiple-choice questions present particular problems for second-language speakers and what contribution linguistic factors make to these difficulties.
Statistical analysis of the examination results of two classes of students writing multiple-choice exams in first-year Linguistics is coupled with a linguistic analysis of the examination papers to establish the readability level of each question and whether the questions adhered to eight item-writing guidelines relating to maximising readability and avoiding negatives, long items, incomplete sentence stems, similar answer choices, grammatically non-parallel answer choices, and ‘All-of-the-above’ and ‘None-of-the-above’ items. Correlations are sought between question difficulty and aspects of the language of these questions, and an attempt is made to investigate the respective contributions of cognitive difficulty and linguistic difficulty to student performance.
To complement the quantitative portion of the study, a think-aloud protocol was conducted with 13 students in an attempt to gain insight into the problems experienced by individual students in reading, understanding and answering multiple-choice questions. The consolidated quantitative and qualitative findings indicate that the linguistic aspects of questions that contributed to question difficulty for second-language speakers included a high density of academic words, long items and negative stems. These sources of difficulty should be addressed as far as possible during item writing and editorial review of questions.
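A hedged sketch of the kind of item-level analysis described above: correlating a simple linguistic feature of each question (here, stem length in words) with its difficulty (proportion of students answering correctly). All numbers are invented for illustration; the thesis's actual readability measures are not reproduced.

```python
# Correlate a per-item linguistic feature with item difficulty.
from scipy.stats import pearsonr

stem_lengths = [12, 34, 19, 45, 27, 52, 15, 38]                  # words per stem (assumed)
prop_correct = [0.81, 0.52, 0.74, 0.41, 0.63, 0.35, 0.79, 0.49]  # difficulty values (assumed)

r, p = pearsonr(stem_lengths, prop_correct)
print(f"r = {r:.2f}, p = {p:.4f}")
# A negative r would suggest that longer questions tend to be answered
# correctly less often, one of the patterns examined in the thesis.
```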
|