About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A New Item Response Theory Model for Estimating Person Ability and Item Parameters for Multidimensional Rank Order Responses

Seybert, Jacob, 01 January 2013
The assessment of noncognitive constructs poses a number of challenges that set it apart from traditional cognitive ability measurement. Of particular concern is the influence of response biases and response styles that can distort the accuracy of scale scores. One strategy for addressing these concerns is to use alternative item presentation formats, such as multidimensional forced-choice (MFC) pairs, triads, and tetrads, that may provide resistance to such biases. A variety of strategies for constructing and scoring these forced-choice measures have been proposed, though they often require large sample sizes, are limited in how much statements can vary in location, and (in some cases) require a separate precalibration phase prior to the scoring of forced-choice responses. This dissertation introduces new item response theory models for estimating item and person parameters from rank-order responses indicating preferences among two or more alternatives representing, for example, different personality dimensions. Parameters for this new model, called the Hyperbolic Cosine Model for Rank-order responses (HCM-RANK), can be estimated using Markov chain Monte Carlo (MCMC) methods that allow for the simultaneous estimation of item properties and person scores. The efficacy of the MCMC parameter estimation procedures for these new models was examined via three studies. Study 1 was a Monte Carlo simulation examining parameter recovery across levels of sample size, dimensionality, and approaches to item calibration and scoring. Estimation accuracy improved with sample size, and trait scores and location parameters could be estimated reasonably well even in small samples. Study 2 was a simulation examining the robustness of trait estimation to error introduced by substituting subject matter expert (SME) estimates of statement location for MCMC item parameter estimates and true item parameters. Only small decreases in accuracy relative to the true parameters were observed, suggesting that using SME ratings of statement location for scoring might be a viable short-term way of expediting MFC test deployment in field settings. Study 3 was included primarily to illustrate the use of the newly developed IRT models and estimation methods with real data. An empirical investigation comparing the validities of personality measures using different item formats yielded mixed results and raised questions about multidimensional test construction practices that will be explored in future research. The dissertation concludes with a discussion of MFC methods and potential applications in educational and workforce contexts.
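The abstract does not spell out the HCM-RANK response function, so the following is only a minimal illustrative sketch in Python of the ingredients it names: a single-stimulus hyperbolic cosine ideal point curve (Andrich & Luo, 1993) combined, MUPP-style (Stark, Chernyshenko, & Drasgow, 2005), into a pairwise preference probability. Function names and parameter values are hypothetical, and the dissertation's actual rank-order likelihood and MCMC sampler are not reproduced here.

    import numpy as np

    def hcm_prob(theta, delta, gamma):
        # Single-stimulus Hyperbolic Cosine Model (Andrich & Luo, 1993):
        # probability of endorsing a statement located at delta, with
        # "latitude of acceptance" parameter gamma, for a person at theta.
        return np.cosh(gamma) / (np.cosh(gamma) + np.cosh(theta - delta))

    def pairwise_pref_prob(theta_a, theta_b, delta_a, gamma_a, delta_b, gamma_b):
        # MUPP-style probability of preferring statement A over statement B,
        # built here from two HCM curves:
        # P(A > B) = P_A(1)P_B(0) / [P_A(1)P_B(0) + P_A(0)P_B(1)].
        # theta_a and theta_b are the person's traits on the (possibly
        # different) dimensions measured by statements A and B.
        pa = hcm_prob(theta_a, delta_a, gamma_a)
        pb = hcm_prob(theta_b, delta_b, gamma_b)
        num = pa * (1.0 - pb)
        return num / (num + (1.0 - pa) * pb)

    # Example: a person high on dimension 1 and average on dimension 2.
    print(pairwise_pref_prob(theta_a=1.5, theta_b=0.0,
                             delta_a=1.0, gamma_a=0.7,
                             delta_b=0.5, gamma_b=0.7))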
2

The Validity of the CampusReady Survey

French, Elizabeth, 29 September 2014
The purpose of this study is to examine the evidence underlying the claim that scores from CampusReady, a diagnostic measure of student college and career readiness, are valid indicators of student college and career readiness. Participants included 4,649 ninth- through twelfth-grade students from 19 schools who completed CampusReady in the 2012-13 school year. The first research question tested my hypothesis that grade level would have an effect on CampusReady scores. There were statistically significant effects of grade level on scores in two subscales, and I controlled for grade level in subsequent analyses on those subscales. The second, third, and fourth research questions examined differences in scores for subgroups of students to explore the evidence supporting the assumption that scores are free of sources of systematic error that would bias their interpretation as indicators of college and career readiness. My hypothesis that students' background characteristics would have little to no effect on scores was confirmed for race/ethnicity and first language but not for mothers' education, which had medium effects on scores. The fifth and sixth research questions explored the assumption that students with higher CampusReady scores are more prepared for college and careers. My hypothesis that there would be small to moderate effects of students' post-high school aspirations on CampusReady scores was confirmed, with higher scores for students who aspired to attend college than for students with other plans. My hypothesis that there would be small to moderate relationships between CampusReady scores and grade point average was also confirmed. I conclude with a discussion of the implications and limitations of these results for the argument supporting the validity of CampusReady score interpretation, as well as the implications of these results for future CampusReady validation research. The study concludes with the suggestion that measures of metacognitive learning skills, such as the CampusReady survey, show promise for measuring student preparation for college and careers when triangulated with other measures of college and career preparation.
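As an aside on the kind of analysis described for the first research question, the sketch below shows one conventional way to test a grade-level effect and gauge its size (a one-way ANOVA with eta-squared) in Python. The data, group means, and variable names are invented for illustration and are not from the CampusReady dataset.

    import numpy as np
    from scipy import stats

    # Hypothetical subscale scores grouped by grade level (9-12).
    rng = np.random.default_rng(0)
    groups = {g: rng.normal(loc=3.0 + 0.05 * i, scale=0.6, size=200)
              for i, g in enumerate([9, 10, 11, 12])}

    # One-way ANOVA: does mean score differ across grade levels?
    f_stat, p_value = stats.f_oneway(*groups.values())

    # Eta-squared: proportion of score variance explained by grade level,
    # the kind of effect size used to decide whether grade must be controlled.
    all_scores = np.concatenate(list(groups.values()))
    grand_mean = all_scores.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
    ss_total = ((all_scores - grand_mean) ** 2).sum()
    eta_sq = ss_between / ss_total

    print(f"F = {f_stat:.2f}, p = {p_value:.4f}, eta^2 = {eta_sq:.3f}")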
3

Detecting Aberrant Responding on Unidimensional Pairwise Preference Tests: An Application Based on the Zinnes-Griggs Ideal Point IRT Model

Lee, Philseok, 01 January 2013
This study investigated the efficacy of the lz person-fit statistic for detecting aberrant responding on unidimensional pairwise preference (UPP) measures constructed and scored with the Zinnes-Griggs (ZG, 1974) IRT model, which has been used in a variety of recent noncognitive testing applications. Because UPP measures are used to collect both "self-" and "other-" reports, I explored the capability of lz to detect two of the most common and potentially detrimental response sets, namely fake-good and random responding. The effectiveness of lz was studied as a function of test length, test information, the type of statement parameters, and the percentage of items answered aberrantly (20%, 50%, 100%), using both empirical and theoretical critical values for classification. I found that lz was ineffective in detecting fake-good responding, with power approaching zero in the 100% aberrance conditions. However, lz was highly effective in detecting random responding, with power approaching 1.0 in long-test, high-information conditions, and there was no diminution in efficacy when marginal maximum likelihood estimates of statement parameters were used in place of the true values. Although empirical critical values provided slightly higher power and more accurate Type I error rates, theoretical critical values based on a standard normal distribution performed nearly as well.
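For readers unfamiliar with the lz person-fit statistic (Drasgow, Levine, & Williams, 1985), the sketch below shows its standard computation for dichotomous responses in Python. The Zinnes-Griggs model enters only through the model-implied probabilities, which are taken here as given; the response and probability values are invented for illustration.

    import numpy as np

    def lz_statistic(responses, probs):
        # Standardized log-likelihood person-fit statistic lz for dichotomous
        # responses. `responses` are 0/1 outcomes (here, pairwise preferences)
        # and `probs` are the model-implied endorsement probabilities at the
        # person's estimated trait level -- in this study those would come
        # from the Zinnes-Griggs ideal point model, which is not shown here.
        u = np.asarray(responses, dtype=float)
        p = np.asarray(probs, dtype=float)
        l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))      # observed log-likelihood
        mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))    # its expectation
        var = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)      # its variance
        return (l0 - mean) / np.sqrt(var)

    # Example: flag a respondent as aberrant if lz falls below a critical value
    # (e.g., -1.645 for a one-tailed test against a standard normal reference).
    print(lz_statistic([1, 1, 0, 1, 0, 1], [0.9, 0.8, 0.3, 0.7, 0.4, 0.85]))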
