Uncertainty about rater variance and small dimension effects impact reliability in supervisor ratings

Duncan Jackson, George Michaelides, Chris Dewberry, Amanda Jones, Simon Toms, Benjamin Schwenke, Wei-Ning Yang

Research output: Contribution to journal › Article › peer-review


Abstract

We modeled the effects commonly described as defining the measurement structure of supervisor performance ratings. In doing so, we contribute to different theoretical perspectives, including components of the multifactor and mediated models of performance ratings. Across two reanalyzed samples (Sample 1, N ratees = 392, N raters = 244; Sample 2, N ratees = 342, N raters = 397), we found a structure primarily reflective of general (>27% of variance explained) and rater-related (>49%) effects, with relatively small performance dimension effects (between 1% and 11%). We drew on findings from the assessment center literature to approximate the proportion of rater variance that might theoretically contribute to reliability in performance ratings. We found that even moderate contributions of rater-related variance to reliability resulted in a sizable impact on reliability estimates, drawing them closer to accepted criteria.
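To make the abstract's reasoning concrete, the following is a minimal illustrative sketch (not the authors' code or model) of how folding a fraction of rater-related variance into the "reliable" portion of a variance-ratio reliability estimate shifts that estimate. The variance proportions are assumptions chosen only to mirror the magnitudes reported in the abstract (general > 27%, rater-related > 49%, dimension effects 1–11%), with the remainder treated as residual error.

```python
def reliability(general, rater, dimension, residual, rater_reliable_frac=0.0):
    """Simple variance-ratio reliability: reliable variance / total variance.

    rater_reliable_frac is the assumed proportion of rater-related variance
    that theoretically contributes to reliability (0 = rater variance is
    treated entirely as error).
    """
    reliable = general + dimension + rater_reliable_frac * rater
    total = general + rater + dimension + residual
    return reliable / total

# Hypothetical proportions roughly matching the abstract's magnitudes
g, r, d, e = 0.28, 0.50, 0.10, 0.12

baseline = reliability(g, r, d, e)                       # rater variance = pure error
moderate = reliability(g, r, d, e, rater_reliable_frac=0.5)

print(f"rater variance as pure error:    {baseline:.2f}")
print(f"half of rater variance reliable: {moderate:.2f}")
```

Under these assumed numbers, even a moderate reallocation of rater variance produces a sizable jump in the estimate, which is the pattern the abstract describes.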

Original language: English
Pages (from-to): 278-301
Number of pages: 24
Journal: Human Performance
Volume: 35
Issue number: 3-4
Early online date: 19 Aug 2022
DOIs
Publication status: Published - Sept 2022
