Uncertainty about Rater Variance and Small Dimension Effects Impact Reliability in Supervisor Ratings

Duncan Jackson, George Michaelides, Chris Dewberry, Amanda Jones, Simon Toms, Benjamin Schwenke, Wei-Ning Yang

Research output: Contribution to journal › Article › peer-review

Abstract

We modelled the effects commonly described as defining the measurement structure of supervisor performance ratings. In doing so, we contribute to different theoretical perspectives, including components of the multifactor and mediated models of performance ratings. Across two samples from the Jackson et al. (2020) data set (Sample 1, Nratees = 392, Nraters = 244; Sample 2, Nratees = 342, Nraters = 397), we found a structure primarily reflective of general (> 27% of variance explained) and rater-related (> 49%) effects, with relatively small performance-dimension effects (between 1% and 11%). We drew on findings from the assessment center literature to approximate the proportion of rater variance that might theoretically contribute to reliability in performance ratings. We found that even moderate contributions of rater-related variance to reliability had a sizable impact on reliability estimates, drawing them closer to accepted criteria.
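As a rough illustration of the abstract's final point (not the authors' actual model), the sketch below treats the reported variance shares as hypothetical fixed values and computes how the reliability estimate shifts as a growing fraction of rater-related variance is counted as true-score rather than error variance. All numeric values are assumptions chosen to fall within the ranges reported above.

```python
# Hypothetical variance shares, loosely based on the ranges in the
# abstract (general > 27%, rater-related > 49%, dimensions 1-11%).
v_general = 0.30   # general performance factor
v_dim     = 0.06   # performance-dimension effects
v_rater   = 0.50   # rater-related effects
v_resid   = 1.0 - (v_general + v_dim + v_rater)  # residual/error

def reliability(rater_weight):
    """Reliability estimate when a fraction `rater_weight` of the
    rater-related variance is treated as reliable (true-score)
    variance instead of error."""
    true_var = v_general + v_dim + rater_weight * v_rater
    total_var = v_general + v_dim + v_rater + v_resid
    return true_var / total_var

for w in (0.0, 0.25, 0.5, 1.0):
    print(f"rater weight {w:.2f}: reliability = {reliability(w):.2f}")
```

With these assumed shares, counting none of the rater variance as reliable gives a low estimate, while even a moderate weight moves the estimate substantially upward, mirroring the pattern the abstract describes.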
Original language: English
Journal: Human Performance
Publication status: Accepted/In press - 4 Aug 2022