91 |
Using Item Response Theory to Develop a Shorter Version of the Transition Readiness Assessment Questionnaire (TRAQ)
Johnson, Kiana; McBee, R.; Wood, David L., 01 January 2016 (has links)
No description available.
|
92 |
The Effects Of Differential Item Functioning On Predictive Bias
Bryant, Damon, 01 January 2004 (has links)
The purpose of this research was to investigate the relation between measurement bias at the item level (differential item functioning, DIF) and predictive bias at the test-score level. DIF was defined as a difference in the probability of answering a test item correctly for examinees with the same ability but from different subgroups. Predictive bias was defined as a difference in subgroup regression intercepts and/or slopes in predicting a criterion. Data were simulated by computer. Two hypothetical subgroups (a reference group and a focal group) were used. The predictor was a composite score on a dimensionally complex test with 60 items. Sample size (35, 70, and 105 per group), validity coefficient (.3 or .5), and the mean subgroup difference on the predictor (0, .33, .66, and 1 standard deviation, SD) and the criterion (0 and .35 SD) were manipulated. The percentage of items showing DIF (0%, 15%, and 30%) and the effect size of DIF (small = .3, medium = .6, large = .9) were also manipulated. Each of the 432 conditions in the 3 x 2 x 4 x 2 x 3 x 3 design was replicated 500 times. For each replication, a predictive bias analysis was conducted, and the detection of predictive bias against each subgroup was the dependent variable. The percentage of DIF items and the effect size of DIF were hypothesized to influence the detection of predictive bias; hypotheses were also advanced about the influence of sample size and of mean subgroup differences on the predictor and criterion. Results indicated that DIF was not related to the probability of detecting predictive bias against either subgroup. Results were inconsistent with the notion that measurement bias and predictive bias are mutually supportive, i.e., that the presence (or absence) of one type of bias is evidence for the presence (or absence) of the other.
Sample size and mean differences on the predictor/criterion had direct and indirect effects on the probability of detecting predictive bias against both the reference and focal groups. Implications for future research are discussed.
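The simulation the abstract describes can be sketched in a few lines of numpy. This is a hypothetical reconstruction of a single cell of the design (105 examinees per group, validity .5, 30% of 60 items showing medium DIF of .6), not the author's code; the 2PL item response model and the Cleary-style moderated regression used for the predictive-bias test are standard choices the abstract implies but does not fully specify, and all function and parameter names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_group(theta_mean, n, a, b, dif_shift=0.0, n_dif=0):
    """Simulate 2PL item responses for one subgroup; return (theta, summed score).

    DIF is introduced by making the first n_dif items harder (difficulty
    shifted by dif_shift) for this group only.
    """
    theta = rng.normal(theta_mean, 1.0, size=n)
    b_g = b.copy()
    b_g[:n_dif] += dif_shift
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b_g[None, :])))
    resp = rng.uniform(size=p.shape) < p
    return theta, resp.sum(axis=1).astype(float)

def predictive_bias_test(validity=0.5, n=105, n_items=60, n_dif=18, dif_shift=0.6):
    """One replication: regress the criterion on score, group, and their
    interaction (Cleary model); return t statistics for the intercept- and
    slope-difference terms."""
    a = rng.uniform(0.8, 2.0, size=n_items)   # discriminations, shared by groups
    b = rng.normal(0.0, 1.0, size=n_items)    # difficulties, shared by groups
    th_r, x_r = simulate_group(0.0, n, a, b)                    # reference group
    th_f, x_f = simulate_group(0.0, n, a, b, dif_shift, n_dif)  # focal group, DIF
    theta = np.concatenate([th_r, th_f])
    x = np.concatenate([x_r, x_f])
    g = np.concatenate([np.zeros(n), np.ones(n)])               # group code
    # Criterion with the chosen validity for the latent ability
    y = validity * theta + np.sqrt(1.0 - validity**2) * rng.normal(size=2 * n)
    X = np.column_stack([np.ones(2 * n), x, g, x * g])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    t = beta / se
    return t[2], t[3]   # intercept-difference t, slope-difference t

t_int, t_slope = predictive_bias_test()
print(t_int, t_slope)
```

Repeating `predictive_bias_test` 500 times per cell and counting how often either t statistic clears a critical value would reproduce the detection rates the study analyzes as its dependent variable.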
|
93 |
An IRT Investigation of Common LMX Measures
Howald, Nicholas, 29 November 2017 (has links)
No description available.
|
94 |
Type I Error Rates and Power Estimates for Several Item Response Theory Fit Indices
Schlessman, Bradley R., 29 December 2009 (has links)
No description available.
|
95 |
DO APPLICANTS AND INCUMBENTS RESPOND TO PERSONALITY ITEMS SIMILARLY? A COMPARISON USING AN IDEAL POINT RESPONSE MODEL
O'Brien, Erin L., 09 July 2010 (has links)
No description available.
|
96 |
A Bifactor Model of Burnout? An Item Response Theory Analysis of the Maslach Burnout Inventory – Human Services Survey
Periard, David Andrew, 05 August 2016 (has links)
No description available.
|
97 |
Detecting Insufficient Effort Responding: An Item Response Theory Approach
Barnes, Tyler Douglas, January 2016 (has links)
No description available.
|
98 |
Case and covariate influence: implications for model assessment
Duncan, Kristin A., 12 October 2004 (has links)
No description available.
|
99 |
A semi-parametric approach to estimating item response functions
Liang, Longjuan, 22 June 2007 (has links)
No description available.
|
100 |
The Effect of Item Parameter Uncertainty on Test Reliability
Bodine, Andrew James, 24 August 2012 (has links)
No description available.
|