Surveying for "artifacts": The susceptibility of the OCB-performance evaluation relationship to common rater, item, and measurement context effects

Nathan P. Podsakoff, Steven W. Whiting, David T. Welsh, Ke Michael Mai

Research output: Contribution to journal › Article › peer-review

85 Scopus citations

Abstract

Despite the increased attention paid to biases attributable to common method variance (CMV) over the past 50 years, researchers have only recently begun to systematically examine the effects of specific sources of CMV in previously published empirical studies. Our study contributes to this research by examining the extent to which common rater, item, and measurement context characteristics bias the relationships between organizational citizenship behaviors and performance evaluations using a mixed-effects analytic technique. Results from 173 correlations reported in 81 empirical studies (N = 31,146) indicate that even after controlling for study-level factors, common rater and anchor point number similarity substantially biased the focal correlations. Indeed, these sources of CMV (a) led to estimates that were between 60% and 96% larger when comparing measures obtained from a common rater, versus different raters; (b) led to 39% larger estimates when a common source rated the scales using the same number, versus a different number, of anchor points; and (c) when taken together with other study-level predictors, accounted for over half of the between-study variance in the focal correlations. We discuss the implications for researchers and practitioners and provide recommendations for future research.
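The abstract does not specify the authors' exact mixed-effects model, but the general technique it names, a meta-regression of correlations on study-level moderators such as common-rater status, can be sketched as follows. This is a minimal illustration, not the paper's actual analysis: it uses a DerSimonian-Laird style random-effects estimator on Fisher-z-transformed correlations, with a hypothetical binary moderator (1 = common rater, 0 = different raters) and made-up data.

```python
import numpy as np

def mixed_effects_meta_regression(r, n, x):
    """Sketch of a mixed-effects meta-regression on correlations.

    r : observed correlations, one per study
    n : sample sizes
    x : study-level moderator (e.g., 1 = common rater, 0 = different raters)
    Returns (intercept, slope, tau2), where slope is the moderator
    effect on the Fisher-z scale and tau2 is residual heterogeneity.
    """
    z = np.arctanh(r)           # Fisher z-transform of correlations
    v = 1.0 / (n - 3)           # sampling variance of each z
    X = np.column_stack([np.ones_like(z), x])

    # Step 1: fixed-effects weighted least squares
    W = np.diag(1.0 / v)
    beta_fe = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)
    resid = z - X @ beta_fe
    q = float(resid @ W @ resid)  # residual heterogeneity statistic

    # Method-of-moments (DerSimonian-Laird) estimate of tau^2
    P = W - W @ X @ np.linalg.solve(X.T @ W @ X, X.T @ W)
    tau2 = max(0.0, (q - (len(z) - X.shape[1])) / np.trace(P))

    # Step 2: re-estimate with random-effects weights
    W_re = np.diag(1.0 / (v + tau2))
    beta = np.linalg.solve(X.T @ W_re @ X, X.T @ W_re @ z)
    return beta[0], beta[1], tau2
```

A positive slope on the common-rater dummy would mirror the abstract's finding that same-source correlations are inflated; `np.tanh` converts fitted values back to the correlation metric. The real study modeled several moderators (rater, item, and measurement context characteristics) jointly.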

Original language: English (US)
Pages (from-to): 863-874
Number of pages: 12
Journal: Journal of Applied Psychology
Volume: 98
Issue number: 5
DOIs
State: Published - Sep 2013
Externally published: Yes

Keywords

  • Common method variance bias
  • Employee performance evaluations
  • Organizational citizenship behaviors
  • Rating source
  • Scale anchor points

ASJC Scopus subject areas

  • Applied Psychology
