Problem Statement: Nowadays, multiple choice (MC) tests are very common and replace many constructed response (CR) tests. However, the literature reveals no consensus on whether both test formats are equally suitable for measuring students' ability or knowledge. This might be because studies comparing test formats often specify neither the type of MC question nor the scoring rule used. Hence, educators have no guidelines on which test format or scoring rule is appropriate.
Purpose of Study: The study focuses on the comparison of CR and MC tests. More precisely, short answer questions are contrasted with equivalent MC questions with multiple responses, which are graded using three different scoring rules.
Research Methods: An experiment was conducted based on three instruments: a CR test and an MC test using a similar stem, to ensure that the questions are of an equivalent level of difficulty and thus that the scores students achieved in the two forms of examination can be compared, and a questionnaire, which provides further insights into students' learning strategies, test preferences, motivation, and demographics. In contrast to previous studies, the present study applies the many-facet Rasch measurement approach for analyzing the data, which improves the reliability of the assessment and allows small datasets to be used.
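For orientation, a standard formulation of the many-facet Rasch model relates the log-odds of adjacent score categories to separate facets for persons, items, and raters; the exact facet structure used in the study is not stated in this abstract, so the version below is only an indicative sketch:

\[ \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \delta_i - \alpha_j - \tau_k \]

where \(\theta_n\) is the ability of student \(n\), \(\delta_i\) the difficulty of item \(i\), \(\alpha_j\) the severity of rater \(j\), and \(\tau_k\) the threshold of score category \(k\) relative to category \(k-1\).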
Findings: Results indicate that CR tests are equivalent to MC tests with multiple responses if Number Correct (NC) scoring is used. An explanation seems straightforward: the grader of the CR tests did not penalize wrong answers and rewarded partially correct answers, which is the same logic NC scoring follows. The other two scoring methods, the All-or-Nothing rule and the University-Specific rule, neither reward partial knowledge nor penalize guessing. These methods are therefore stricter than NC scoring or CR tests and cannot be used interchangeably.
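To make the logic of the two rules named above concrete, here is a minimal Python sketch of NC and All-or-Nothing scoring for a single multiple-response item. The formulas are common textbook variants, not necessarily the exact rules used in the study, and the University-Specific rule is omitted because its definition is not given here; the function and variable names are illustrative.

def number_correct(selected, key, options):
    # Number Correct (NC): credit for every option judged correctly,
    # i.e. a correct option that was selected or a wrong option that was
    # left unselected. Partial knowledge earns partial credit; wrong
    # selections are not penalized below zero.
    judged_correctly = sum((opt in selected) == (opt in key) for opt in options)
    return judged_correctly / len(options)

def all_or_nothing(selected, key):
    # All-or-Nothing: full credit only when the selection matches the
    # key exactly; any deviation scores zero.
    return 1.0 if selected == key else 0.0

# Example item with options A-D; A and C are the correct responses.
options = {"A", "B", "C", "D"}
key = {"A", "C"}
student = {"A", "B"}  # one correct pick, one incorrect pick

print(number_correct(student, key, options))  # 0.5 -> partial credit under NC
print(all_or_nothing(student, key))           # 0.0 -> zero under the stricter rule

The same answer pattern earns half credit under NC scoring but nothing under All-or-Nothing, which illustrates why the latter behaves more strictly and why the two rules cannot be used interchangeably.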
Conclusions: CR tests can be replaced by MC tests with multiple responses if NC scoring is used, since the multiple-response format measures more complex thinking skills than conventional MC questions. Hence, educators can take advantage of low grading costs, consistent grading, the absence of scoring biases, and greater coverage of the syllabus, while students benefit from timely feedback. (authors' abstract)
Identifier | oai:union.ndltd.org:VIENNA/oai:epub.wu-wien.ac.at:4691 |
Date | 10 March 2011 |
Creators | Kastner, Margit, Stangl, Barbara |
Source Sets | Wirtschaftsuniversität Wien |
Language | English |
Detected Language | English |
Type | Article, PeerReviewed |
Format | application/pdf |
Rights | Creative Commons: Attribution-Noncommercial-No Derivative Works 3.0 Austria |
Relation | http://dx.doi.org/10.1016/j.sbspro.2011.02.035, http://ejop.psychopen.eu/article/view/195, http://epub.wu.ac.at/4691/ |