51 |
Development of a short version of the Temporal Comparison Orientation Scale: An examination using Item Response Theory (継時的比較志向性尺度短縮版の作成 : Item Response Theory を用いた検討) / NAMIKAWA, Tsutomu (並川, 努). 30 December 2010 (has links)
No description available.
|
52 |
Latent trait, factor, and number-endorsed scoring of polychotomous and dichotomous responses to the Common Metric Questionnaire / Becker, R. Lance. January 1991 (has links)
Thesis (Ph. D.)--Virginia Polytechnic Institute and State University, 1991. / Vita. Abstract. Includes bibliographical references (leaves 79-83). Also available via the Internet.
|
53 |
Stratified item selection and exposure control in unidimensional adaptive testing in the presence of two-dimensional data / Kalinowski, Kevin E.; Henson, Robin K. January 2009 (has links)
Thesis (Ph. D.)--University of North Texas, Aug., 2009. / Title from title page display. Includes bibliographical references.
|
54 |
Detection and Classification of DIF Types Using Parametric and Nonparametric Methods: A comparison of the IRT-Likelihood Ratio Test, Crossing-SIBTEST, and Logistic Regression Procedures / Lopez, Gabriel E. 01 January 2012 (has links)
The purpose of this investigation was to compare the efficacy of three methods for detecting differential item functioning (DIF). The performance of the crossing simultaneous item bias test (CSIBTEST), the item response theory likelihood ratio test (IRT-LR), and logistic regression (LOGREG) was examined across a range of experimental conditions, including different test lengths, sample sizes, DIF and differential test functioning (DTF) magnitudes, and mean differences in the underlying trait distributions of the comparison groups, herein referred to as the reference and focal groups. In addition, each procedure was implemented using both an all-other anchor approach, in which the IRT-LR baseline model, CSIBTEST matching subtest, and LOGREG trait estimate were based on all test items except the one under study, and a constant anchor approach, in which the baseline model, matching subtest, and trait estimate were based on a predefined subset of DIF-free items. Response data for the reference and focal groups were generated using known item parameters based on the three-parameter logistic item response theory model (3PLM). Various types of DIF were simulated by shifting the generating item parameters of select items to achieve desired DIF and DTF magnitudes based on the area between the groups' item response functions. Power, Type I error, and Type III error rates were computed for each experimental condition based on 100 replications, and effects were analyzed via ANOVA. Results indicated that the procedures varied in efficacy, with LOGREG, when implemented using an all-other anchor approach, providing the best balance of power and Type I error rate. However, none of the procedures was effective at identifying the type of DIF that was simulated.
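The DIF magnitudes in the abstract above are defined via the area between the groups' item response functions under the 3PLM. A minimal sketch of that computation, assuming the common 1.7 scaling constant and illustrative (hypothetical) item parameters:

```python
import math

def irf_3pl(theta, a, b, c, D=1.7):
    """Three-parameter logistic IRF:
    P(theta) = c + (1 - c) / (1 + exp(-D * a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

def unsigned_area(params_ref, params_focal, lo=-4.0, hi=4.0, n=2000):
    """Unsigned area between reference and focal IRFs over [lo, hi],
    approximated with the midpoint rule."""
    width = (hi - lo) / n
    total = 0.0
    for i in range(n):
        theta = lo + (i + 0.5) * width
        total += abs(irf_3pl(theta, *params_ref)
                     - irf_3pl(theta, *params_focal)) * width
    return total

# Uniform DIF: the focal group's difficulty is shifted by +0.5 logits.
ref = (1.2, 0.0, 0.2)    # (a, b, c) for the reference group (illustrative)
focal = (1.2, 0.5, 0.2)  # same a and c, harder b for the focal group
area = unsigned_area(ref, focal)
```

With equal discrimination and guessing parameters, the area reduces to roughly (1 - c)·|Δb|, so the example yields an area near 0.4; nonuniform DIF would instead shift the a parameter so the two curves cross.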
|
55 |
Evaluation of two types of Differential Item Functioning in factor mixture models with binary outcomes / Lee, Hwa Young (Doctor of Educational Psychology). 22 February 2013 (has links)
Differential Item Functioning (DIF) occurs when examinees with the same ability have different probabilities of endorsing an item. Conventional DIF detection methods (e.g., the Mantel-Haenszel test) can detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is typically not fully explained by an observed variable (e.g., Cohen & Bolt, 2005). The true source of DIF may be unobserved, involving variables such as personality, response patterns, or unmeasured background variables.
The Factor Mixture Model (FMM) is designed to detect unobserved sources of heterogeneity in factor structures, and an FMM with binary outcomes has recently been used for assessing DIF (DeMars & Lau, 2011; Jackman, 2010). However, FMMs with binary outcomes have not been thoroughly explored for detecting both types of DIF: between-class latent DIF (LDIF) and class-specific observed DIF (ODIF).
The present simulation study was designed to investigate whether models correctly specified in terms of LDIF and/or ODIF influence the performance of model fit indices (AIC, BIC, aBIC, and CAIC) and entropy, as compared to models incorrectly specified in terms of either LDIF or ODIF. In addition, the present study examined the recovery of item difficulty parameters and investigated the proportion of replications in which items were correctly or incorrectly identified as displaying DIF, by manipulating DIF effect size and latent class probability. For each simulation condition, two latent classes of 27 item responses were generated to fit a one-parameter logistic model, with item difficulties generated to exhibit DIF across the classes and/or the observed groups.
Results showed that FMMs with binary outcomes performed well in terms of fit indices, entropy, DIF detection, and recovery of large DIF effects. When class probabilities were unequal and DIF effects were small, performance decreased for fit indices, power, and the recovery of DIF effects compared to equal-class-probability conditions. Inflated Type I error rates were found for invariant (DIF-free) items across simulation conditions. When data were generated to fit a model having ODIF but a model specifying LDIF was estimated, the LDIF specification fully captured the ODIF effects when DIF effect sizes were large.
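The generating model described above is a one-parameter logistic (Rasch) model in which DIF items have class-shifted difficulties. A hedged sketch of that data-generation step, with 27 items as in the study but with illustrative, hypothetical values for the class probability, shift size, and DIF item set:

```python
import math
import random

def p_correct(theta, b):
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def generate_responses(n_persons, difficulties, dif_shift, dif_items,
                       class_prob=0.5, seed=0):
    """Simulate two latent classes; members of class 1 face shifted
    difficulties on the DIF items (between-class latent DIF)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_persons):
        cls = 1 if rng.random() < class_prob else 0
        theta = rng.gauss(0.0, 1.0)  # standard-normal ability
        row = []
        for j, b in enumerate(difficulties):
            bj = b + dif_shift if (cls == 1 and j in dif_items) else b
            row.append(1 if rng.random() < p_correct(theta, bj) else 0)
        data.append((cls, row))
    return data

# 27 items; a DIF shift of 0.8 logits on three items (all values illustrative).
bs = [-1.3 + 0.1 * j for j in range(27)]
sample = generate_responses(500, bs, dif_shift=0.8, dif_items={0, 5, 10})
```

An ODIF variant would condition the shift on an observed grouping variable (e.g., gender) within a class rather than on the latent class indicator itself.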
|
56 |
An automated test assembly for unidimensional IRT tests containing cognitive diagnostic elements / Kim, Soojin. 28 August 2008 (has links)
Not available.
|
57 |
A comparison of Andrich's rating scale model and Rost's successive intervals model / Lustina, Michael John. 28 August 2008 (has links)
Not available.
|
58 |
IRT-based automated test assembly: a sampling and stratification perspective / Chen, Pei-hua. 28 August 2008 (has links)
Not available.
|
59 |
A polytomous nonlinear mixed model for item analysis / Shin, Seon-hi. 25 July 2011 (has links)
Not available.
|
60 |
Developing a Framework and Demonstrating a Systematic Process for Generating Medical Test Items / Lai, Hollis. Unknown Date
No description available.
|