Policymakers expect international educational assessments to report credible national and international changes in student achievement over time. However, international assessment projects face substantial methodological challenges in creating comparable scores across jurisdictions and time points, fundamentally because jurisdictions differ in many aspects of curriculum and curriculum change, as well as in patterns of students’ test-taking behaviour. Using data from the Second IEA Mathematics Study (SIMS), the study reported in this dissertation addresses the potential impact of the different equating methodologies used in current international assessments on the accuracy of estimates of change in jurisdiction achievement over time. The results demonstrate that the equating methodologies implemented through the Item Response Theory (IRT) models currently used in international assessments may be of limited use for estimating change in jurisdiction achievement over time, because international assessment data violate the IRT model assumptions, in particular the assumption of unidimensionality. In addition, estimating jurisdiction results on a common international scale may distort the results of jurisdictions whose levels of student achievement are much lower or higher than those of most other participating jurisdictions. The findings of this study have important implications for researchers as well as policymakers.
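As an illustrative sketch only (the specific model is an assumption for exposition, not stated in the abstract), the unidimensional IRT models used in current international assessments typically take a form such as the three-parameter logistic model, in which a single latent proficiency is assumed to account for all item responses; it is this single-trait assumption that the unidimensionality concern refers to:

\[
P(X_{ij} = 1 \mid \theta_j) = c_i + (1 - c_i)\,\frac{1}{1 + \exp\bigl(-a_i(\theta_j - b_i)\bigr)}
\]

where \(a_i\), \(b_i\), and \(c_i\) are the discrimination, difficulty, and pseudo-guessing parameters of item \(i\), and \(\theta_j\) is the proficiency of student \(j\). When achievement is in fact multidimensional, for example because curricula emphasize different content strands, a single \(\theta\) scale cannot fit all jurisdictions equally well, which is the mechanism behind the equating problems the study examines.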
Identifier | oai:union.ndltd.org:TORONTO/oai:tspace.library.utoronto.ca:1807/19168 |
Date | 25 February 2010 |
Creators | Xu, Yunmei |
Contributors | Wolfe, Richard |
Source Sets | University of Toronto |
Language | en_ca |
Detected Language | English |
Type | Thesis |