201 |
Using Item Response Theory to Develop a Shorter Version of the Transition Readiness Assessment Questionnaire (TRAQ)
Johnson, Kiana, McBee, R., Wood, David L. 01 January 2016
No description available.
|
202 |
Testing the Assumption of Sample Invariance of Item Difficulty Parameters in the Rasch Rating Scale Model
Curtin, Joseph A. 20 August 2007
Rasch is a mathematical model that allows researchers to compare data that measure a unidimensional trait or ability (Bond & Fox, 2007). When data fit the Rasch model, the item difficulty estimates are mathematically guaranteed to be independent of the sample of respondents. The purpose of this study was to test the robustness of the Rasch model with regard to its ability to maintain invariant item difficulty estimates when real (i.e., imperfectly fitting), polytomously scored data are used. The data come from a university alumni questionnaire collected over a period of five years. The analysis tests for significant variation (a) between small samples taken from a larger sample, (b) between a base sample and subsequent (longitudinal) samples and (c) over time in the presence of confounding variables. The confounding variables studied were (a) the gender of the respondent and (b) the respondent's type of major at the time of graduation. The study used three methods to assess variation: (a) the between-fit statistic, (b) confidence intervals around the mean of the estimates and (c) a general linear model. The general linear model used the person residual statistic from the Winsteps person output file as the dependent variable, with year, gender and type of major as independent variables. Results of the study support the invariant nature of the item difficulty estimates when polytomous data from the alumni questionnaire are used. The analysis found comparable results (within sampling error) for the between-fit statistics and the general linear model. The confidence interval method was of limited usefulness due to narrow confidence bands and the limitations of the plots. The linear model offered the most valuable information in that it can not only detect the existence of variation but also assess the relative magnitude of variation from different sources.
Recommendations for future research include studies regarding the impact of sample size on the between-fit statistic and confidence intervals as well as the impact of large amounts of systematic missing data on the item parameter estimates.
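The invariance property this study tests follows from the functional form of the Rasch model itself: the probability of a correct response depends only on the difference between person ability θ and item difficulty b, so a shift applied to both cancels out. A minimal dichotomous sketch in Python (illustrative only; the study itself used Winsteps and the polytomous rating scale model):

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Dichotomous Rasch model: P(X=1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# The probability depends only on (theta - b): adding the same constant to
# every person ability and every item difficulty leaves all probabilities
# unchanged.  This indeterminacy is why, when the model fits, difficulty
# estimates are invariant across samples (up to a scale origin).
p1 = rasch_p(theta=1.0, b=0.5)
p2 = rasch_p(theta=2.0, b=1.5)   # same difference theta - b = 0.5
print(round(p1, 4), p1 == p2)    # -> 0.6225 True
```

When real data misfit the model, this equality holds only approximately, which is exactly what the study's between-fit and linear-model checks probe.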
|
203 |
Maintenance of Vertical Scales Under Conditions of Item Parameter Drift and Rasch Model-data Misfit
O'Neil, Timothy Paul 01 May 2010
With scant research to draw upon regarding the maintenance of vertical scales over time, decisions about how such scales are created and how they perform necessarily suffer from a lack of information. Undetected item parameter drift (IPD) presents one of the greatest threats to scale maintenance within an item response theory (IRT) framework. There is also an outstanding question as to the viability of the Rasch model as a framework for establishing and maintaining vertical scales, even though this model is currently used to scale many state assessment systems. Most criticisms of the Rasch model in this context have not involved simulation, and most have not acknowledged conditions in which the model may function well enough to justify its use in vertical scaling. To address these questions, vertical scales were created from real data using the Rasch and 3PL models. Ability estimates were then generated to simulate a second (Time 2) administration. These simulated data were placed onto the base vertical scales using a horizontal linking approach and a mean-mean transformation. To examine the effects of IPD on vertical scale maintenance, several conditions of IPD were simulated within each set of linking items. To evaluate the viability of the Rasch model in a vertical scaling context, data were generated and calibrated at Time 2 within each model (Rasch and 3PL) as well as across models (Rasch data generation with 3PL calibration, and vice versa). With respect to the first question, results demonstrate that the effect of IPD on vertical scale maintenance is directly related to the percentage of drifting linking items, the magnitude of the drift, and its direction.
With respect to the viability of the Rasch model for vertical scaling, results suggest that the model is entirely viable when it is appropriate for the data. It is equally evident that where data involve varying item discrimination and guessing, use of the Rasch model is inappropriate.
|
204 |
Using Item Mapping to Evaluate Alignment between Curriculum and Assessment
Kaira, Leah Tepelunde 01 September 2010
There is growing interest in alignment between states' standards and test content, partly due to the accountability requirements of the No Child Left Behind (NCLB) Act of 2001. Among other problems, current alignment methods rely almost entirely on subjective judgment to assess curriculum-assessment alignment. In addition, none of the current alignment models accounts for students' actual performance on the assessment, and there are no consistent criteria for assessing alignment across the various models. Because of these problems, alignment results from different models cannot be compared. This study applied item mapping to student response data from the Massachusetts Adult Proficiency Test (MAPT) for Math and Reading to assess alignment. Item response theory (IRT) was used to locate items on a proficiency scale, and two criterion response probability (RP) values were then applied to map each item to a proficiency category. Item mapping results were compared to item writers' classifications of the items. Chi-square tests, correlations, and logistic regression were used to assess the degree of agreement between the two sets of data. Seven teachers were convened for a one-day meeting to review items that did not map to the intended grade level and to suggest explanations for the misalignment. Results show that, in general, agreement between the SMEs' classifications and the item mapping results was higher at RP50 than at RP67. Higher agreement was also observed for items assessing lower-level cognitive abilities. Item difficulty, cognitive demand, clarity of the item, the item's vocabulary level relative to examinees' reading level, and the mathematical concept being assessed were among the suggested reasons for misalignment.
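The item-mapping step described above, locating each item on the proficiency scale and assigning it to a category via an RP criterion, can be sketched under the Rasch model. The cut scores and item difficulty below are hypothetical, chosen only to show why RP50 and RP67 can classify the same item differently:

```python
import bisect
import math

def mapped_theta(b: float, rp: float) -> float:
    """Proficiency location where P(correct) = rp under the Rasch model:
    theta = b + ln(rp / (1 - rp)).  At rp = 0.5 this is just b itself."""
    return b + math.log(rp / (1.0 - rp))

def map_to_level(b: float, cuts, rp: float) -> int:
    """Index of the proficiency category whose range contains mapped_theta."""
    return bisect.bisect_right(cuts, mapped_theta(b, rp))

# Hypothetical cut scores separating four proficiency categories (0..3).
cuts = [-1.0, 0.0, 1.0]

b = 0.8
print(map_to_level(b, cuts, rp=0.50))   # theta = 0.8 -> category 2
print(map_to_level(b, cuts, rp=2/3))    # theta = 0.8 + ln(2) ~ 1.49 -> category 3
```

Because a higher RP value pushes every item's mapped location upward by a constant (ln(2) ≈ 0.69 logits for RP67 under the Rasch model), items near a cut score change category, which is one mechanism behind the different agreement rates observed at RP50 versus RP67.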
|
205 |
The impact of product group forcing on individual item forecast accuracy
Reddy, Chandupatla Surender January 1991
No description available.
|
206 |
An IRT Investigation of Common LMX Measures
Howald, Nicholas 29 November 2017
No description available.
|
207 |
Type I Error Rates and Power Estimates for Several Item Response Theory Fit Indices
Schlessman, Bradley R. 29 December 2009
No description available.
|
208 |
DO APPLICANTS AND INCUMBENTS RESPOND TO PERSONALITY ITEMS SIMILARLY? A COMPARISON USING AN IDEAL POINT RESPONSE MODEL
O'Brien, Erin L. 09 July 2010
No description available.
|
209 |
An Examination of Type I Errors and Power for Two Differential Item Functioning Indices
Clark, Patrick Carl, Jr. 28 October 2010
No description available.
|
210 |
A Bifactor Model of Burnout? An Item Response Theory Analysis of the Maslach Burnout Inventory – Human Services Survey
Periard, David Andrew 05 August 2016
No description available.
|