
The Construct Validity Of A Situational Judgment Test In A Maximum Performance Context

A Predictor Response Process model (see Ployhart, 2006) and prior research findings were leveraged to formulate research questions about, and generate construct validity evidence for, a new situational judgment test (SJT) designed to measure declarative and strategic knowledge. The first question asked whether SJT response instructions (i.e., 'Should Do', 'Would Do') moderated the validity of an SJT in a maximum performance context. The second question asked what the upper-bound criterion-related validity coefficient is for SJTs in talent selection contexts in which typical performance is the criterion of interest. The third question asked whether the SJT used in the present study was fair for gender- and ethnicity-based subgroups according to Cleary's (1968) definition of test fairness. Participants were randomly assigned to complete an SJT with either 'Should Do' or 'Would Do' response instructions, and their maximum decision-making performance outcomes were captured during a moderate-fidelity poker simulation. The findings suggested that knowledge, as measured by the SJT, interacted with response instructions when predicting aggregate and average performance outcomes, such that the 'Should Do' SJT had stronger criterion-related validity coefficients than the 'Would Do' version. The findings also suggested that the uncorrected upper-bound criterion-related validity coefficient for SJTs in selection contexts is at least moderate to strong (β = .478). Moreover, the SJT was fair according to Cleary's definition of test fairness. The implications of these findings are discussed.

Identifier: oai:union.ndltd.org:ucf.edu/oai:stars.library.ucf.edu:etd-2036
Date: 01 January 2006
Creators: Stagl, Kevin
Publisher: STARS
Source Sets: University of Central Florida
Language: English
Detected Language: English
Type: text
Format: application/pdf
Source: Electronic Theses and Dissertations
