Differential Prediction: Understanding a Tool for Detecting Rating Bias in Performance Ratings

Tison, Emilee B. 05 May 2008 (has links)
Three common methods have been used to assess the existence of rating bias in performance ratings: the total association approach, the differential constructs approach, and the direct effects approach. One purpose of this study was to examine why the direct effects approach, and more specifically differential prediction analysis, is more useful than the other two approaches for examining the existence of rating bias. The usefulness of differential prediction, however, depends on modeling the full rater race × ratee race interaction. Therefore, the second purpose of this study was to examine the conditions under which differential prediction has sufficient power to detect this interaction. This was accomplished using Monte Carlo simulations. Total sample size, magnitude of rating bias, validity of predictor scores, rater race proportion, and ratee race proportion were manipulated to identify which combinations of these parameters provided acceptable power to detect the rater race × ratee race interaction; in the conditions where power is acceptable, differential prediction is a useful tool for examining the existence of rating bias. The simulation results suggest that total sample size, magnitude of rating bias, and rater race proportion have the greatest impact on power. Furthermore, these three parameters interact to affect power. Implications of these results are discussed. / Master of Science
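The kind of Monte Carlo power simulation the abstract describes can be illustrated with a short sketch. The Python code below is a minimal, hypothetical version: the data-generating model, effect sizes, parameter defaults, and function name are assumptions for illustration, not the study's actual specification.

```python
# Minimal sketch of a Monte Carlo power simulation for detecting a
# rater race x ratee race interaction. All modeling choices here are
# illustrative assumptions, not the thesis's actual design.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

def simulate_power(n=200, bias=0.3, validity=0.5, p_rater=0.5,
                   p_ratee=0.5, reps=1000, alpha=0.05):
    """Estimate power to detect the rater race x ratee race interaction."""
    hits = 0
    for _ in range(reps):
        predictor = rng.normal(size=n)               # e.g., a test score
        rater = rng.binomial(1, p_rater, size=n)     # rater race (0/1)
        ratee = rng.binomial(1, p_ratee, size=n)     # ratee race (0/1)
        noise = rng.normal(scale=np.sqrt(1 - validity**2), size=n)
        # Rating bias is operationalized here as an interaction effect
        rating = validity * predictor + bias * rater * ratee + noise
        X = sm.add_constant(np.column_stack(
            [predictor, rater, ratee, rater * ratee]))
        fit = sm.OLS(rating, X).fit()
        if fit.pvalues[4] < alpha:                   # interaction term
            hits += 1
    return hits / reps

# Power should rise with sample size and bias magnitude, consistent
# with the parameters the abstract identifies as most influential
print(simulate_power(n=100, bias=0.2))
print(simulate_power(n=400, bias=0.4))
```

Varying n, bias, and p_rater in such a sketch mirrors the abstract's finding that total sample size, magnitude of rating bias, and rater race proportion drive power.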

O*NET or NOT? Adequacy of the O*NET system's rater and format choices

Hollander, Eran 17 December 2001 (has links)
The O*NET was built to replace the Dictionary of Occupational Titles (DOT) and form a highly accessible, online (through the World Wide Web), common-language occupational information center (Dye & Silver, 1999). This study tested the adequacy of the self-rating choice and the unconventional BARS format used by the O*NET system for occupational ratings. In addition, a new rating scale format, NBADS, was tested for improved ratings. Fifty-three incumbent raters in two occupations (graduate teaching assistants and secretaries) and 87 layperson raters who had never worked in these occupations rated 21 item pairs (Importance- and Level-type questions) picked randomly from the 52 items on the original O*NET Ability questionnaire. Participants rated each of the 21 item pairs three times, with the Level question presented in the O*NET BARS, Likert GRS, and NBADS formats; the Importance question was always rated on a 1-5 Likert scale. Hypothesis 1a was supported, showing a significant leniency bias across formats for self-ratings. Hypothesis 1b was mostly supported: incumbent ratings failed to show significant improvement over layperson ratings in leniency, elevation error, or interrater agreement, and only the overall-error measure showed a significant improvement for incumbent raters. Hypothesis 2 was not supported: the GRS format showed no improvement over the O*NET BARS format in leniency, accuracy, or interrater agreement. Hypothesis 3a was supported, showing significantly reduced leniency, reduced accuracy error, and higher interrater agreement with the NBADS format than with the GRS format. Similarly, Hypothesis 3b was partially supported, showing reduced leniency and higher agreement with the NBADS format than with the O*NET BARS format. Finally, Hypothesis 4 was mostly supported, showing hardly any significant differences in ratings of the Importance question across the three format sessions, strengthening the conclusion that no other interfering variables caused the differences between format sessions. Implications of the results are discussed. / Master of Science
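The error measures compared across formats in this abstract can be made concrete with a short sketch. The helper names, the choice of a single-item r_WG agreement index, and all example values below are illustrative assumptions, not the thesis's exact computations.

```python
# Minimal sketch of leniency, elevation error, and an interrater-agreement
# index, the measures compared across rating formats in the abstract.
# Formulas and example values are illustrative assumptions.
import numpy as np

def leniency(ratings, reference):
    """Mean signed deviation from a reference standard; positive = lenient."""
    return float(np.mean(ratings - reference))

def elevation_error(ratings, reference):
    """Absolute difference between the mean rating and the mean reference."""
    return float(abs(ratings.mean() - reference.mean()))

def rwg(ratings, scale_points):
    """Single-item r_WG: 1 minus observed variance over a uniform-null variance."""
    null_var = (scale_points**2 - 1) / 12.0
    return float(1 - ratings.var(ddof=1) / null_var)

# Hypothetical ratings of one Level-type item on a 0-7 scale
incumbent = np.array([5.0, 6.0, 6.0, 7.0, 5.0, 6.0])
reference = np.full_like(incumbent, 5.0)     # assumed "true" level

print(leniency(incumbent, reference))        # > 0 indicates leniency
print(elevation_error(incumbent, reference))
print(rwg(incumbent, scale_points=8))        # a 0-7 scale has 8 anchors
```

Comparing such measures for the same items rated under different formats is the basic logic behind the abstract's format contrasts (BARS vs. GRS vs. NBADS).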
