
The effects of a small examinee sample size on the precision of measurement for tests developed by four different item selection strategies

This study sought to determine the best item selection procedure for mastery testing situations in which only small examinee samples exist. The item selection procedures evaluated were (a) modified classical, using p-values and point-biserial correlations; (b) criterion-referenced, using phi coefficients; (c) domain sampling, using random item selection; and (d) Rasch model, using item logits (b-values).

A computer program was used to create a data matrix containing the simulated responses of 1,000 examinees to 240 items. From this matrix, 12 data sets were drawn; each contained the responses of 50 randomly selected examinees to all 240 items. Three data sets were used for computing test results and/or item statistics for each item selection procedure.

Item selection procedures were evaluated in terms of test information, standard error of estimate, misclassification rate, and accuracy of the domain percentage-correct score. The test information associated with each item was computed according to a three-parameter item response theory (IRT) model.

The results show that the classical and criterion-referenced procedures were effective at selecting items which, for a given cut-off score, would be identified by a three-parameter model as having high information. However, since ability scores for a three-parameter model cannot be estimated from samples of 50 examinees, there is no way to make effective use of the high-information items so identified. Indeed, all of the item selection procedures that used statistical indices as a basis for item selection (the Rasch model included) produced biased estimates of the domain percentage-correct score. As a result of these biased estimates, all of the "optimal" item selection procedures generally produced higher misclassification rates and less accurate domain score estimates than the random item selection procedure. For this reason, the random item selection procedure is recommended for mastery tests in which small examinee samples exist.

Source: Dissertation Abstracts International, Volume: 49-08, Section: A, page: 2188.
Major Professor: Jacob Beard.
Thesis (Ph.D.)--The Florida State University, 1988.
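The dissertation's own code is not available, but the item information computation named in the abstract follows the standard three-parameter logistic (3PL) model. A minimal sketch, using Birnbaum's information formula with assumed item parameters a (discrimination), b (difficulty), and c (pseudo-guessing):

```python
import math

def p_3pl(theta, a, b, c):
    """3PL probability that an examinee of ability theta answers correctly."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Item information at ability theta under the 3PL model
    (Birnbaum's formula): I = a^2 * (q/p) * ((p - c) / (1 - c))^2."""
    p = p_3pl(theta, a, b, c)
    q = 1 - p
    return (a ** 2) * (q / p) * ((p - c) / (1 - c)) ** 2

# Example: with no guessing (c = 0), information peaks at theta = b,
# where p = 0.5 and I = a^2 / 4.
print(info_3pl(0.0, 1.0, 0.0, 0.0))  # 0.25
```

Under this model, "high information" items for a given cut-off score are those whose information function is large at the ability level corresponding to that cut-off.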

Identifier: oai:union.ndltd.org:fsu.edu/oai:fsu.digital.flvc.org:fsu_77825
Contributors: Jones, Michael Hadley., Florida State University
Source Sets: Florida State University
Language: English
Detected Language: English
Type: Text
Format: 122 p.
Rights: On campus use only.
Relation: Dissertation Abstracts International
