Degree Name

Doctor of Philosophy (PhD)


Research indicates that ratee job experience and performance variability can affect performance judgments and causal attributions (Leventhal et al., 1977; Fusilier & Hitt, 1983; Scott & Hammer, 1975). When prototype expectations concerning ratee characteristics are not met, raters reacting to the cognitive inconsistency may distort ratee performance information (DeNisi, Cafferty, & Meglino, 1984). Because cognitively complex raters view behavior in a multidimensional manner (Schneier, 1977) rather than through a simplified schema, the possible effect of cognitive complexity on rating accuracy under cognitively inconsistent conditions was investigated. In an initial pilot study, 147 undergraduate volunteers rated the expected performance variability of a professor with zero, five, or ten years of experience. Experiment 1 used a Ratee Experience $\times$ Ratee Performance Variability $\times$ Rater Cognitive Complexity regression design. One hundred thirty-five undergraduate volunteers completed three cognitive complexity measures, six vignettes, six behaviorally anchored rating scales (BARS), an attribution questionnaire, and a manipulation check questionnaire. Experiment 2 used a similar design with videotaped rating stimuli (three lectures) to replicate Experiment 1. The pilot study showed that subjects expect only inexperienced professors to show variable performance. Overall rating accuracy was lowest in the moderate variability condition and in the high experience condition. Accuracy component analyses (Cronbach, 1955) indicated that more variable performance was rated more accurately. Further, marginal evidence suggested that higher cognitive complexity is somewhat associated with greater differential elevation accuracy when experienced ratees have variable performance. Also, effort and chance attributions were strongest in the moderate experience condition. In Experiment 2, highly experienced ratees and those with moderate variability were rated more accurately than others.
Cognitive complexity did not aid rating accuracy; in one case, it was associated with greater underrating of highly experienced, highly variable ratees. General support was found for the contention that prototype expectations exist and affect rating accuracy. Very limited support was found for cognitive complexity as a useful rater characteristic. Inconsistent experimental findings are discussed in terms of the salience of performance variability and the cognitive processing of information. Implications for training raters to observe multiple performance dimensions and to attend to the possible biasing effects of ratee characteristics are also discussed.