The objectives of the present study were to (a) replicate the results of Arthur et al. (2002) by comparing race-based subgroup differences on multiple-choice and constructed-response tests in a laboratory setting with a larger sample, (b) extend their work by investigating the role of reading ability, test-taking skills, and test perceptions in explaining why subgroup differences are reduced when the test format is changed from multiple-choice to constructed response, and (c) assess the criterion-related validity of the constructed-response test. A total of 260 White and 204 African American participants completed a demographic questionnaire, the Test Attitudes and Perceptions Survey, either a multiple-choice or a constructed-response test, the Raven's Advanced Progressive Matrices Short Form, the Nelson-Denny Reading Test, the Experimental Test of Testwiseness, and a post-test questionnaire. In general, the pattern of results was in the predicted direction and supported the hypotheses. For example, although subgroup differences in performance were smaller on the constructed-response test than on the multiple-choice test, the reduction was not statistically significant. However, analyses by specific test content yielded a significant reduction in subgroup differences on the science reasoning section. In addition, all of the hypothesized study variables, with the exception of face validity, were significantly related to test performance. Significant subgroup differences were also obtained for all study variables except belief in tests and stereotype threat. The results also indicated that reading ability, test-taking skills, and perceived fairness partially mediated the relationship between race and test performance. Finally, the criterion-related validity of the constructed-response test was stronger than that of the multiple-choice test. The results suggested that the constructed-response format investigated in the present study may be a viable alternative to the traditional multiple-choice format in high-stakes testing, helping to resolve the organizational dilemma of using the most valid predictors of job performance while simultaneously reducing subgroup differences and the resulting adverse impact on tests of knowledge, skill, ability, and achievement. However, additional research is needed to further demonstrate the appropriateness of the constructed-response format as an alternative to traditional testing methods.
Identifier | oai:union.ndltd.org:TEXASAandM/oai:repository.tamu.edu:1969.1/128 |
Date | 30 September 2004 |
Creators | Edwards, Bryan D. |
Contributors | Arthur, Winfred, Jr., Finch, John F., Willson, Victor, Pritchard, Robert D. |
Publisher | Texas A&M University |
Source Sets | Texas A and M University |
Language | en_US |
Detected Language | English |
Type | Electronic Dissertation, text |
Format | 1884907 bytes, 267882 bytes, electronic, application/pdf, text/plain, born digital |