The purpose of this dissertation is to examine how the method used to score a situational judgment test (SJT) affects the validity of the SJT, both in the presence of other predictors and as a single predictor of task performance. To this end, I compared the summed-score approach to scoring SJTs with item response theory (IRT) and multidimensional item response theory (MIRT). Using two samples and three sets of analyses, I found that the method used to score SJTs influences the validity of the test and that IRT and MIRT show promise for increasing SJT validity. However, no single scoring method produced the highest validity across all sets of analyses. In line with previous research, SJTs added incremental validity in the presence of general mental ability (GMA) and personality, and here, again, the scoring method affected the incremental validity. A relative weights analysis performed for each scoring method across all sets of analyses showed that, depending on the scoring method, SJT scores may account for more criterion variance than either GMA or personality. However, the validity estimates were likely attenuated by range restriction in the incumbent samples.
Identifier | oai:union.ndltd.org:vcu.edu/oai:scholarscompass.vcu.edu:etd-4607 |
Date | 01 January 2014 |
Creators | Whelpley, Christopher E. |
Publisher | VCU Scholars Compass |
Source Sets | Virginia Commonwealth University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | © The Author |