I’ve been noodling on writing this post for quite a while. Mostly because “performance” is such a big topic to unpack, and partly because I’m still working on my own holistic answer. But it’s time to remind myself that “perfect is the enemy of good” and there’s value in trying to summarize my point of view on this topic even if it’s still a work-in-progress.
A good starting point for this conversation is getting on the same page on how we got to where we are today. The Performance Management Revolution provides a great recap of the evolution of performance reviews and the environmental changes that drove it over the last 80(!) years.
One of the key challenges in the debate around performance reviews is that over its 80 years of history, the term has gone through what Martin Fowler refers to as “semantic diffusion”:
Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition. This weakening risks losing the definition entirely — and with it any usefulness to the term
Today there is no clear definition of the attributes that turn a particular set of conversations into a “performance review”. Many articles use this term without providing their own definition, assuming that we’re all talking about the same thing.
If we were to boil down performance reviews to a shared core definition, it would be something along the lines of: “a program to facilitate periodic feedback conversations”. Hardly anybody objects to the argument that having periodic feedback conversations is a valuable organizational practice. Most of the calls to “abolish the performance review” actually target specific program design elements, which their authors consider to be part of the core definition of what a performance review is.
It is a fair statement that most performance review programs are designed in ways that ignore some human aspects of the interaction, and especially lessons from the last 30 years of research in the fields of psychology, sociology and neuroscience. This in turn leads many of them to have unintended or sub-optimal results. But this also makes it clear, in my mind at least, that the solution here is to integrate those lessons into the program design rather than get rid of the program altogether.
Humanistic performance review principles
With that in mind, I’d like to highlight some of the interim insights that should be taken into account when designing such programs:
- Reduce functional overloading — Many programs today suffer from “functional overloading”: we’re trying to do too many things with the same program and end up in a “jack of all trades, master of none” situation, since a program element optimized for one need often causes harm to another. For example, using a performance program to generate documentation of poor performance, in order to minimize legal risk in performance-based terminations, will likely limit the effectiveness of the developmental feedback that it provides. Deconstructing monolithic performance programs and decoupling the components that serve different organizational needs is a good first step towards addressing that challenge.
- More frequent, but not too frequent — we are all prone to “recency bias”. Our memory is far from perfect, and we tend to overweight the importance of things that have happened more recently when formulating our judgment. This suggests that an annual review cycle is probably too long. But the solution is not real-time/continuous feedback either. There is an upper bound to the frequency, since we need to give the changes we’ve made in the last cycle enough time to impact the outcomes. Otherwise, we’re just introducing thrash. Furthermore, giving good feedback typically requires a period of reflection and thoughtful composition, which we would not be able to do if we were to give it “on-the-fly”.
- Minimize subjectivity — while subjectivity cannot be eliminated altogether, it can certainly be reduced: on the receiving end, by accounting for the overconfidence effect; and on the evaluating end, by reducing the idiosyncratic rater effect.
- Avoid rating on a “bell curve” — since human performance does not seem to follow a bell curve.
- Reduce status threat — a threat to status triggers our fight-or-flight response and diminishes our ability to truly listen and learn. Both a rating evaluation and a change (or lack thereof) to compensation naturally trigger a threat to status. Separating evaluative feedback from coaching feedback will increase the efficacy of the latter.
- Forward-looking — rather than focus on what happened in the past, the conversation should focus on what should be sustained or changed going forward.
- Maximize credibility — the credibility of the person providing feedback affects our motivation to act on it. Three key levers can be addressed structurally in the program design: a) a healthy mix of sustain (“positive”) and change (“negative”) feedback; b) structuring the feedback in a way that separates facts from interpretations; c) making a request to change while, hand-in-hand, taking responsibility for one’s own interpretations.
A couple of harder questions
The design principles listed above will go a long way in helping to design more effective performance review programs. However, while far from trivial or easy, they are not the hardest part of this challenge.
To use Ronald Heifetz’s distinction, I believe they capture many of the “technical” aspects of the challenge, but it’s the “adaptive” ones, the ones that have to do with the values underlying the system, that are the most difficult to address. And those will change from organization to organization.
A couple of harder, more adaptive questions come to mind:
- How do we define “performance”? A different definition will lead to different forms of measurement and evaluation: Do we take into account effort, or just results? Do we account for factors that were outside of our control but influenced the results? How do we deal with the relationship between individual performance and group performance? What about investments that haven’t yielded results just yet? How do we account for intangibles?
- What is the role of power in the evaluation of performance? In High Output Management, Andy Grove offers the following:
The review process also represents the most formal type of institutionalized leadership. It is the only time a manager is mandated to act as judge and jury: we managers are required by the organization that employs us to make a judgment regarding a fellow worker, and then to deliver that judgment to him face-to-face.
“This is what I, as your boss, am instructing you to do. I understand that you do not see it my way. You may be right or I may be right. But I am not only empowered, I am required by the organization for which we both work to give you instructions, and this is what I want you to do…”
Some organizations may agree with this definition. Some may not. Their performance review programs will be fundamentally different as a result…