The implications of unconfounding multisource performance ratings

Duncan J. R. Jackson, George Michaelides, Chris Dewberry, Benjamin Schwenke, Simon Toms

Research output: Contribution to journal › Article › peer-review

13 Citations (Scopus)
30 Downloads (Pure)

Abstract

The multifaceted structure of multisource job performance ratings has been a subject of research and debate for over 30 years. However, progress in the field has been hampered by the confounding of effects relevant to the measurement design of multisource ratings; as a consequence, the impact of ratee-, rater-, source-, and dimension-related effects on the reliability of multisource ratings remains unclear. In separate samples obtained from two different applications and measurement designs (N [ratees] = 392, N [raters] = 1,495; N [ratees] = 342, N [raters] = 2,636), we unconfounded, for the first time, all of the systematic effects commonly cited as relevant to multisource ratings, using a Bayesian generalizability theory approach. Our results suggest that the main contributors to the reliability of multisource ratings are source-related and general performance effects that are independent of dimension-related effects. In light of these findings, we discuss the interpretation and application of multisource ratings in organizational contexts.
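To illustrate the generalizability theory logic the abstract refers to, the sketch below computes a Φ-type (absolute) dependability coefficient from a set of variance components, where every non-ratee component is treated as error and each component is shrunk by the number of conditions it is averaged over. All numeric values and the design (raters nested in sources, crossed with dimensions) are hypothetical assumptions for illustration only; they are not the article's estimates or its exact measurement design.

```python
# Illustrative sketch only: variance-component values below are hypothetical,
# NOT results from the article. It shows how a dependability (Phi-type)
# coefficient for multisource ratings can be computed once ratee-, source-,
# rater-, and dimension-related variance components have been estimated.

def g_coefficient(var_ratee, var_source, var_rater, var_dimension,
                  var_residual, n_sources, n_raters, n_dimensions):
    """Phi-type dependability coefficient for ratee-level scores averaged
    over sources, raters (nested in sources), and dimensions.

    Each non-ratee component is divided by the number of conditions it is
    averaged over (standard generalizability theory averaging logic).
    """
    error = (var_source / n_sources
             + var_rater / (n_sources * n_raters)
             + var_dimension / n_dimensions
             + var_residual / (n_sources * n_raters * n_dimensions))
    return var_ratee / (var_ratee + error)

# Hypothetical components, chosen so that source-related variance is large
# and dimension-related variance is small (qualitatively in line with the
# abstract's conclusion about the main contributors to reliability):
g = g_coefficient(var_ratee=0.30, var_source=0.25, var_rater=0.20,
                  var_dimension=0.05, var_residual=0.40,
                  n_sources=3, n_raters=4, n_dimensions=6)
print(round(g, 3))  # -> 0.725
```

Under these assumptions, adding raters within each source shrinks the rater and residual error terms, so the coefficient rises — which is why averaging over more raters per source improves the dependability of ratee-level scores.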

Original language: English
Pages (from-to): 312–329
Number of pages: 18
Journal: Journal of Applied Psychology
Volume: 105
Issue number: 3
Early online date: 22 Jul 2019
DOIs
Publication status: Published - 2020

Keywords

  • 360-degree ratings
  • Bayesian generalizability theory
  • Multisource performance ratings
  • 360-DEGREE FEEDBACK
  • VARIANCE-COMPONENTS
  • PERSPECTIVE
  • RELIABILITY
  • RATER
  • CRITERION CONSTRUCT SPACE
  • JOB-PERFORMANCE
  • ERROR
  • VALIDITY
  • GENERALIZABILITY
