In this study, I investigate how effective provided response questions, such as multiple choice questions, are as an assessment format compared with conventional constructed response questions. Based on the literature on mathematics assessment, I first identify an assessment taxonomy consisting of seven mathematics assessment components, ordered by cognitive level of difficulty and cognitive skills. I then develop a theoretical framework for determining the quality of a question with respect to three measuring criteria: discrimination index, confidence index and expert opinion. This theoretical framework forms the foundation on which I construct the Quality Index (QI) model for measuring how good a mathematics question is. The QI model assigns a quantitative value to the quality of a question, and I also give a visual representation of that quality in the form of a radar plot. I illustrate the use of the QI model for quantifying the quality of mathematics questions in a particular undergraduate mathematics course, in both assessment formats: provided response questions (PRQs) and constructed response questions (CRQs). I then determine which of the seven assessment components can best be assessed in the PRQ format and which can best be assessed in the CRQ format. In addition, I investigate student preferences between the two assessment formats. / Thesis (PhD)--University of Pretoria, 2009. / Mathematics and Applied Mathematics / unrestricted
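The abstract does not give the QI model's actual formula, so the following is only a minimal, hypothetical sketch: it assumes the three measuring criteria are normalised scores, combines them with assumed equal weights into a single quality value, and draws the radar plot mentioned above. All scores, weights and names are illustrative, not the thesis's method.

```python
# Hypothetical illustration of a composite quality index and its radar plot.
# The weighting scheme and scores are assumptions, not the QI model from the thesis.
import numpy as np
import matplotlib.pyplot as plt

# Assumed normalised scores in [0, 1] for one mathematics question.
criteria = {"discrimination index": 0.72,
            "confidence index": 0.55,
            "expert opinion": 0.80}

weights = {name: 1 / 3 for name in criteria}          # assumed equal weighting
quality_index = sum(weights[n] * s for n, s in criteria.items())
print(f"Illustrative quality index: {quality_index:.2f}")

# Radar (spider) plot of the three criteria.
labels = list(criteria)
values = list(criteria.values())
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values += values[:1]          # repeat the first point to close the polygon
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 1)
ax.set_title("Question quality profile (illustrative)")
plt.show()
```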
Identifier | oai:union.ndltd.org:netd.ac.za/oai:union.ndltd.org:up/oai:repository.up.ac.za:2263/24261 |
Date | 20 January 2009 |
Creators | Huntley, Belinda |
Contributors | Prof J C Engelbrecht, Prof A F Harding, belinda.huntley@wits.ac.za |
Source Sets | South African National ETD Portal |
Detected Language | English |
Type | Thesis |
Rights | ©University of Pretoria 2008 D554/ |