The multifaceted structure of multisource job performance ratings has been a subject of research and debate for over 30 years. Progress, however, has been hampered by the confounding of effects inherent in the measurement designs of multisource ratings; as a consequence, the impact of ratee-, rater-, source-, and dimension-related effects on the reliability of multisource ratings remains unclear. In separate samples obtained from 2 different applications and measurement designs (N [ratees] = 392, N [raters] = 1,495; N [ratees] = 342, N [raters] = 2,636), we disentangled, for the first time, all of the systematic effects commonly cited as relevant to multisource ratings, using a Bayesian generalizability theory approach. Our results suggest that the main contributors to the reliability of multisource ratings are source-related effects and general performance effects that are independent of dimension-related effects. In light of these findings, we discuss the interpretation and application of multisource ratings in organizational contexts.
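To make the generalizability-theory logic concrete, the sketch below simulates a simple one-facet crossed design (every rater rates every ratee) and recovers the variance components with a classical ANOVA-style estimator, then computes a generalizability coefficient for the mean of the raters. This is an illustrative simplification only: the design sizes, the true variance components, and the estimator are all assumptions for the example, and the paper itself uses Bayesian estimation over a far richer design with source- and dimension-related facets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-facet crossed design: every rater rates every ratee.
n_p, n_r = 400, 20                    # ratees (p) and raters (r); illustrative sizes
var_p, var_r, var_e = 0.5, 0.2, 0.3   # assumed "true" variance components

# Rating = ratee effect + rater effect + interaction/error
p_eff = rng.normal(0.0, np.sqrt(var_p), size=(n_p, 1))
r_eff = rng.normal(0.0, np.sqrt(var_r), size=(1, n_r))
err = rng.normal(0.0, np.sqrt(var_e), size=(n_p, n_r))
y = p_eff + r_eff + err

# ANOVA mean squares for the crossed p x r design
grand = y.mean()
ms_p = n_r * ((y.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
ms_r = n_p * ((y.mean(axis=0) - grand) ** 2).sum() / (n_r - 1)
resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0, keepdims=True) + grand
ms_e = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

# Expected-mean-squares solutions for the variance components
sigma2_p = (ms_p - ms_e) / n_r   # ratee (universe-score) variance
sigma2_r = (ms_r - ms_e) / n_p   # rater main-effect variance
sigma2_e = ms_e                  # rater-by-ratee interaction + error

# Generalizability coefficient for the mean of n_r raters (relative decisions)
g_coef = sigma2_p / (sigma2_p + sigma2_e / n_r)
print(sigma2_p, sigma2_r, sigma2_e, g_coef)
```

In the full multisource design, analogous components would be added for sources, dimensions, and their interactions; the abstract's central claim is that the source-related and general (dimension-independent) performance components dominate the reliable variance.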
- 360-degree ratings
- Bayesian generalizability theory
- multisource performance ratings
- criterion construct space