An Investigation of Unidimensional Testing Procedures under Latent Trait Theory using Principal Component Analysis

McGill, Michael T. 11 December 2009
There are several generally accepted rules for detecting unidimensionality, but none are well tested. This simulation study investigated well-known methods, including, but not limited to, the Kaiser (k > 1) Criterion, Percentage of Measure Validity (greater than 50%, 40%, or 20%), Ratio of Eigenvalues, and the Kelley method, and compared them to each other and to a new method proposed by the author (the McGill method) for assessing unidimensionality. After applying principal component analysis (PCA) to the residuals of a Latent Trait Test Theory (LTTT) model, this study addressed three purposes: determining the Type I error rates associated with various criterion values for assessing unidimensionality; determining the Type II error rates and statistical power associated with various rules of thumb when assessing dimensionality; and, finally, determining whether more suitable criterion values could be established for the methods of the study by accounting for various characteristics of the measurement context. For those methods based on criterion values, new modified values are proposed. For those methods without criterion values for dimensionality decisions, criterion values are modeled and presented. The methods compared in this study were investigated using PCA on residuals from the Rasch model. The sample size, test length, ability distribution variability, and item distribution variability were varied, and the resulting Type I and Type II error rates of each method were examined. The results imply that certain conditions can cause improper diagnoses of the dimensionality of instruments. Adjusted methods are suggested to produce more stable Type I and Type II error rates. The nearly ubiquitous Kaiser method was found to be biased towards signaling multidimensionality whether or not it exists. The modified version of the Kaiser method and the McGill method, both proposed by the author, were shown to be among the best at detecting unidimensionality when it was present. In short, methods that take into account changes in variables such as sample size, test length, item variability, and person variability are better than methods that use a single, static criterion value in decision making with respect to dimensionality. / Ph. D.
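
As a rough illustration of the general procedure described in the abstract, the sketch below applies PCA to standardized Rasch residuals and evaluates the Kaiser (eigenvalue > 1) and eigenvalue-ratio rules of thumb. It is a minimal sketch under stated assumptions, not the author's implementation: the function names are hypothetical, and person abilities and item difficulties are taken as known (in practice they would be estimated from the response data by a Rasch calibration).

```python
# Minimal sketch (not the dissertation's code): PCA of standardized Rasch
# residuals, plus two of the decision rules compared in the study.
# Assumes dichotomous responses X (persons x items), person abilities
# `theta`, and item difficulties `b` obtained elsewhere; names are
# illustrative only.
import numpy as np

def rasch_residual_pca(X, theta, b):
    # Model-implied probabilities under the Rasch model
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    # Standardized residuals: (observed - expected) / sqrt(model variance)
    Z = (X - P) / np.sqrt(P * (1.0 - P))
    # Eigenvalues of the residual correlation matrix, largest first
    R = np.corrcoef(Z, rowvar=False)
    return np.sort(np.linalg.eigvalsh(R))[::-1]

def kaiser_flags_multidimensionality(eigvals):
    # Kaiser criterion: any residual component with eigenvalue > 1
    # is taken as evidence of a secondary dimension.
    return bool(np.any(eigvals > 1.0))

def eigenvalue_ratio(eigvals):
    # Ratio of the first to the second residual eigenvalue; large values
    # suggest a dominant secondary factor in the residuals.
    return eigvals[0] / eigvals[1]

if __name__ == "__main__":
    # Simulated unidimensional data, in the spirit of the simulation study
    rng = np.random.default_rng(0)
    n_persons, n_items = 500, 20
    theta = rng.normal(0.0, 1.0, n_persons)   # simulated abilities
    b = rng.normal(0.0, 1.0, n_items)         # simulated difficulties
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    X = (rng.random((n_persons, n_items)) < P).astype(float)

    ev = rasch_residual_pca(X, theta, b)
    print("largest residual eigenvalues:", np.round(ev[:3], 2))
    print("Kaiser flags multidimensionality:", kaiser_flags_multidimensionality(ev))
    print("first/second eigenvalue ratio:", round(eigenvalue_ratio(ev), 2))
```

Because the residual correlation matrix of J items has eigenvalues summing to J, some residual eigenvalues routinely exceed 1 even for unidimensional data, which is consistent with the abstract's finding that the unadjusted Kaiser rule is biased toward signaling multidimensionality.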
