1 | A Study of the Predictive Validity of the Başkent University English Proficiency Exam Through the Use of the Two-Parameter IRT Model's Ability Estimates
Yapar, Taner, 01 January 2003 (has links) (PDF)
The purpose of this study is to analyze the predictive power of the ability
estimates obtained through the two-parameter IRT model on the English
Proficiency Exam administered at Başkent University in September 2001
(BUSPE 2001). As prerequisite analyses, the fit of the one- and two-parameter
IRT models was investigated.
The data used for this study were the test data of all 727 students who took
BUSPE 2001 and the departmental English course grades of the students who
passed.
At the first stage, whether the assumptions of IRT were met was
investigated. Next, the observed and theoretical distributions of the test data
were compared using chi-square statistics. After that, the invariance of
ability estimates across different sets of items and the invariance of item
parameters across different groups of students were examined.
At the second stage, the predictive validity of BUSPE 2001 and its subtests
was analyzed using both classical test scores and the ability estimates of the
better-fitting IRT model.
The findings revealed that the test met the assumptions of
unidimensionality, local independence, and nonspeededness, but the
assumption of equal discrimination indices was not met. Whether the
assumption of minimal guessing was met remained unclear. The chi-square
statistics indicated that only the two-parameter model fit the test data.
The ability estimates were found to be invariant across different item sets,
and the item parameters were found to be invariant across different groups
of students.
The predictive validity estimated through IRT exceeded that calculated from
classical total scores, both for the whole test and for its subtests. Among
all subtests, the reading subtest was the best predictor of future
performance in departmental English courses.
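For readers unfamiliar with the model named in this abstract: the two-parameter logistic (2PL) IRT model gives the probability of a correct response as a function of examinee ability and two item parameters, discrimination and difficulty. The sketch below is purely illustrative; the parameter values are made up and are not estimates from BUSPE 2001.

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL IRT model: probability of a correct response given
    examinee ability theta, item discrimination a, and item
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative values only (not BUSPE 2001 estimates): an
# average-ability examinee (theta = 0) facing an item of average
# difficulty (b = 0) answers correctly with probability 0.5,
# regardless of the discrimination parameter.
print(round(p_correct_2pl(0.0, 1.2, 0.0), 2))  # 0.5
```

The one-parameter (Rasch) model is the special case in which every item shares the same discrimination, which is exactly the assumption the study found violated.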
2 | Investigating Parameter Recovery and Item Information for Triplet Multidimensional Forced Choice Measure: An Application of the GGUM-RANK Model
Lee, Philseok, 07 June 2016 (has links)
To control various response biases and rater errors in noncognitive assessment, multidimensional forced choice (MFC) measures have been proposed as an alternative to single-statement Likert-type scales. Historically, MFC measures have been criticized because conventional scoring methods can lead to ipsativity problems that render scores unsuitable for inter-individual comparisons. However, with the recent advent of classical test theory and item response theory scoring methods that yield normative information, MFC measures are surging in popularity and becoming important components of personnel and educational assessment systems. This dissertation presents developments concerning a GGUM-based MFC model, henceforth referred to as the GGUM-RANK. Markov chain Monte Carlo (MCMC) algorithms were developed to estimate GGUM-RANK statement and person parameters directly from MFC rank responses, and the efficacy of the new estimation algorithm was examined through computer simulations and an empirical construct validity investigation. Recently derived GGUM-RANK item information functions and information indices were also used to evaluate overall item and test quality in the empirical study and to offer insights into differences in scoring accuracy between two-alternative (pairwise preference) and three-alternative (triplet) MFC measures for future work. The dissertation concludes with a discussion of the research findings and potential applications in workforce and educational settings.
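The ipsativity problem mentioned in this abstract can be made concrete with a toy example. Under a conventional MFC scoring scheme, each triplet awards fixed points to the most-, middle-, and least-preferred statements, so every respondent's total across dimensions is a constant and only within-person comparisons are meaningful. The 2/1/0 point scheme and the dimension labels below are hypothetical, chosen only to illustrate the effect; this is not the GGUM-RANK scoring procedure.

```python
# Hypothetical conventional MFC scoring: each triplet awards 2/1/0
# points to the most/middle/least preferred statement's dimension.
def score_triplets(rankings):
    """rankings: list of triplets, each ordering dimension labels
    from most to least preferred. Returns points per dimension."""
    points = {}
    for most, middle, least in rankings:
        points[most] = points.get(most, 0) + 2
        points[middle] = points.get(middle, 0) + 1
        points[least] = points.get(least, 0) + 0
    return points

# Two respondents with different preference patterns...
resp_a = score_triplets([("C", "A", "E"), ("A", "E", "C")])
resp_b = score_triplets([("E", "C", "A"), ("C", "A", "E")])

# ...but identical totals (3 points per triplet x 2 triplets = 6),
# so the totals carry no between-person information: ipsativity.
print(resp_a, sum(resp_a.values()))
print(resp_b, sum(resp_b.values()))
```

Model-based approaches such as the GGUM-RANK avoid this by estimating normative latent trait scores from the rank responses rather than summing fixed points.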