The Bell Curve Is A Myth

The following post by Josh Bersin (one of Reid Hoffman’s co-authors of “The Alliance,” which I covered in the previous post) made the rounds a few months back and is still top-of-mind for me:

The Bell Curve Is A Myth

The gist: most common HR practices (performance reviews, promotions, etc.) presuppose a normal distribution of performance across the employee base. However, careful studies have shown that real performance looks more like a power distribution than a normal distribution. Many of the common HR practices “break” once the normality assumption fails.

Bersin references the academic papers that drew this conclusion, yet none of them hypothesizes why performance looks the way it does. Here’s my thesis: in the general population, performance/mastery of a given skill does distribute normally. However, in almost any professional setting we apply some sort of screening process – we intentionally try to hire the people who are “above average” (if not the top 5-10%) and most qualified for the position. Since we don’t pick people at random, there’s no reason to expect a normal performance distribution post-screening. Instead, we should expect something that resembles the right-most portion of a normal distribution – which looks a lot like a power distribution.
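To make that intuition concrete, here’s a minimal simulation sketch in Python with NumPy (the 90th-percentile cutoff and the sample size are illustrative assumptions of mine, not numbers from the cited studies): draw “skill” from a normal distribution, keep only the candidates who pass the screen, and look at the shape of what’s left.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumption: in the broad population, "skill" is normally distributed.
population = rng.normal(loc=100, scale=15, size=1_000_000)

# Screening: hire only candidates above the 90th percentile.
# (The top-10% cutoff is an illustrative assumption.)
cutoff = np.percentile(population, 90)
hired = population[population >= cutoff]

# What's left is the right tail of the bell curve: most hires
# cluster just above the cutoff, with a long thin tail of stars.
print(f"cutoff (90th pct) : {cutoff:.1f}")
print(f"median of hired   : {np.median(hired):.1f}")
print(f"mean of hired     : {np.mean(hired):.1f}")
print(f"99th pct of hired : {np.percentile(hired, 99):.1f}")
# mean > median => the post-screening distribution is right-skewed
```

A histogram of `hired` shows most of the mass piled up just above the cutoff with a long right tail – the heavily skewed shape the papers report, produced without any power-law mechanism at all.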

These articles make a compelling case for why the current HR practices don’t make a lot of sense under these assumptions. But they fail to propose a prescriptive way to change them in the right direction.

What do you think? If we assume that performance (at least in professional settings) follows a power distribution, how should we change some of the key HR practices?

 


Performance Reviews and Functional Overloading

Today’s topic is a bit abstract, so I’ll start with a concrete example:

Earlier this year, Michael Mallete gave this lovely talk at Agile Singapore:

Performance Appraisals – the Bane of Agile Teams

It’s well worth an hour of your time, but if you can’t spare it, at least skim through the slides.

Here’s the gist: performance reviews are being used for six different purposes – improving company performance, providing career guidance and coaching to employees, improving feedback and communication, managing salary and benefits, facilitating promotions, and justifying terminations. Since the process is stretched in six different directions, it does a poor job of supporting any one of them.

A typical case of “jack of all trades, master of none” / “chase two rabbits and you will catch neither” / <insert your favorite proverb here>. I’ve seen this pattern enough times to give it a name: functional overloading. Another common place you’ll see it is the product roadmap: is it a planning tool? A marketing tool? An alignment tool?

We usually end up in these situations with the best of intentions. In an attempt to keep overhead and bureaucracy to a minimum, we try to solve new problems by slightly tweaking existing practices and tools. But in the process, we stretch them too thin and overload them.

In the case of performance reviews, Mallete advocates an extreme remedy: go back to basics and tailor the most effective tool/practice to each of the purposes the original process was serving. I think this may be too aggressive at times, but some fragmentation of the existing tool/process, even if not all the way down to its most basic components, is probably in order.
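The pattern is easier to see if you read it as a software smell. Here’s a hypothetical sketch in Python (every function name and threshold below is invented for illustration, not taken from Mallete’s talk): the first function is the status quo, one annual rating feeding several unrelated decisions; the ones after it show the decomposed alternative, where each purpose gets its own inputs and its own cadence.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    rating: int  # the single 1-5 score from the annual review

# Overloaded: one rating drives several unrelated decisions,
# so no decision gets the input it actually needs.
# (All names and thresholds here are invented for illustration.)
def annual_review(emp: Employee) -> dict:
    return {
        "raise_pct": emp.rating * 1.5,               # compensation
        "promote":   emp.rating >= 4,                # promotions
        "terminate": emp.rating <= 1,                # terminations
        "coach":     emp.rating <= 3,                # career guidance
        "feedback":  f"You scored {emp.rating}/5",   # communication
    }

# Decomposed: each purpose gets its own inputs and cadence,
# so tuning one no longer distorts the others.
def compensation_review(market_rate: float, current_salary: float) -> float:
    """Salary keyed to market data (annual), not to a 1-5 score."""
    return max(market_rate - current_salary, 0.0)

def coaching_checkin(goals: list[str], done: set[str]) -> list[str]:
    """Career guidance keyed to the employee's goals (continuous)."""
    return [g for g in goals if g not in done]
```

The point isn’t the code itself but the shape: once each purpose has its own “function,” you can change the compensation logic without accidentally changing how coaching works.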

Your turn: what are the most common tools and practices that you’ve seen get functionally overloaded?

 
