Steven Sinofsky wrote one of the most thought-provoking pieces on performance reviews that I’ve read in a while:
It’s a long, dense read that is totally worth it. But since I’d probably want to point people who may not have the patience to read the whole thing to his main points, below is a shorter summary of his piece:
One of the key ideas in Sinofsky’s framing is captured in this great quote:
For as much as many might wish to think of performance management as numeric and thus perfectly quantifiable, it is as much a product of context and social science as the products we design and develop. We want quantitative certainty and simplicity, but context is crucial and fluid, and qualitative. We desire fair, which is a relative term, but insist on the truth, which is absolute.
No system can perfectly capture performance, but that’s not a good enough reason for not having a system at all.
These are key assumptions about performance, people, and systems.
- Performance systems conflate performance, promotion/title, compensation, and organizational budgets — a variation on the concept of functional overloading. According to Sinofsky, the core purpose of the system is “determining how to pay everyone within the budget”.
- In a group of any size there is a distribution of performance — and I’d add that it’s not normally distributed.
- In a system with m labels for performance, people who receive anything but the most rewarding one believe they are “so close” to the next one up — a point that I covered in detail in Flawed Self-Assessment.
- Among any set of groups, almost all the groups think their group is delivering more and other groups are delivering less — the same pattern as in #3 applies to groups as well.
- Measurement is not absolute but is relative — performance is measured relative to a benchmark. As Sinofsky points out, there’s a big risk in measuring performance against individual goals. Personally, I prefer to measure it against a shared benchmark describing behaviors that support the long-term success of the business (aka “levels”).
10 Real World Attributes
In this section, Sinofsky highlights the common attributes that must be considered and balanced when developing a performance review system, and offers some advice on each.
However, the critical starting point is that any system will be imperfect:
While one can attempt to codify a set of rules, one cannot codify the way humans will implement the rules…
…there are so many complexities it is pointless to attempt to fully codify a system. Rather everyone just goes in with open eyes and a realistic view of the difficulty of the challenge and iterates through the variables throughout the entire dialog. Fixating on any one to the exclusion of others is when ultimately the system breaks down.
- Determining team size — “At some point, the system requires every level of management to honestly assess people based on a dialog of imperfect information.” → “Implement a system in groups of about 100 in seniority and role.”
- Conflating seniority, job function, and projects does not create a peer group — Aiming to create an apples-to-apples comparison, given the constraints → “Define peer groups based on seniority and job function within project teams as best you can.”
- Measuring against goals — Addressing the tension between goals and performance explained in the 5th “reality” → “Let individuals and their manager arrive at goals that foster a sense of mastery of skills and success of the project, while focusing evaluation on the relative (and qualitative) contribution to the broader mission.”
- Understanding cross-organization performance — This is aimed at addressing the imperfection described in #2 → “Do not pit organizations against each other by competing for rewards and foster cross-group collaboration via planning and execution of shared bets.”
- Maintaining a system that both rates and rewards — On how a rating maps to compensation, Sinofsky argues that both extremes (tightly or loosely correlated) are sub-optimal → “A clear rating that lets individuals know where they stand relative to their peer group along with compensation derived from that with the ability of a manager with the most intimate knowledge of the work to adjust compensation within some rating-defined range.”
- Writing a performance appraisal is not making a case — Aiming to manage the risk around the review’s documentation becoming the focal point of the process → “Lower the stakes for the document itself and make it clear that it is not the decision-making tool for the process.”
- Ranking and calibrating are different — Aiming to improve fairness when multiple-raters provide the evaluations, while avoiding false-precision that would invalidate the entire effort → “Define performance groups where members of a team fall but do not attempt to rank with more granularity or “proximity” to other groups.”
- Encouraging excellent teams — This is an odd one. Working backwards from the recommendation, I believe the key argument is that while organizations continue to improve over time, performance continues to vary, and therefore no team or individual should be excluded from the system because they are intrinsically “great” → “Make a system that applies to everyone or have multiple systems and clear rules how membership in different systems is determined.”
- Allowing for exceptions creates an exception-based process — A broader generalization of #8 around the risk of adding additional exceptions → “If there is going to be a system, then stick to it and don’t encourage exceptions.”
- Embracing diversity in all dimensions — Another odd one, which seems to make the case for avoiding bias by having a clear definition of performance and ignoring any non-performance attributes in the system → “Any strong and sustainable team will be diverse in all dimensions.”
Wrapping up, Sinofsky reminds us that:
Like so many things in business, there is no right answer or perfect approach. If there was, then there would be one performance system that everyone would use and it would work all the time. There is not.
As much as any system is maligned, having a system that is visible, has some framework, and a level of cross-organization consistency provides many benefits to the organization as a whole. These benefits accrue even with all the challenges that also exist.
And leaves us with 3 things to keep in mind:
- No one has all the data — “… no one person has a complete picture of the process. This means everyone is operating with imperfect information. But it does not follow that everyone is operating imperfectly.”
- Share success, take responsibility — “if you do well be sure to consider how others contributed and thank them as publicly as you can. If you think you are getting a bad deal, don’t push accountability away or point fingers, but look to yourself to make things better.”
- Things work out in the end — “It takes discipline and effort to work within a complex and imperfect system — this is actually one of the skills required for anyone over the course of a career. Whether it is project planning, performance management, strategic choices, management processes and more all of these are social science and all subject to context, error rates, and most importantly learning and iteration.”
Whether you agree or disagree with Sinofsky’s analysis and conclusions, I don’t think anyone can argue with the boldness of his attempt to provide a holistic description of an important but imperfect solution to a core organizational challenge. Given that most of the debate around performance tends to range between rants and using the system’s limitations as a reason to have no system at all, this piece was a breath of fresh air.