This year’s letter focuses on some key lessons around “high standards”:
They are teachable, rather than intrinsic — “people are pretty good at learning high standards simply through exposure. High standards are contagious. Bring a new person onto a high standards team, and they’ll quickly adapt… And though exposure works well to teach high standards, I believe you can accelerate that rate of learning by articulating a few core principles of high standards”
They are domain specific, rather than universal — “you have to learn high standards separately in every arena of interest… Understanding this point is important because it keeps you humble. You can consider yourself a person of high standards in general and still have debilitating blind spots.”
You must be able to recognize what good looks like in that domain…
… and have realistic expectations for how hard it should be (how much work it will take) to achieve that result — the scope.
More than anything else (lack of skill, inability to recognize the standard, etc.), misjudging how much work will be required to meet the high standard seems to be the most common culprit behind not meeting it:
Often, when a memo isn’t great, it’s not the writer’s inability to recognize the high standard, but instead a wrong expectation on scope: they mistakenly believe a high-standards, six-page memo can be written in one or two days or even a few hours, when really it might take a week or more! They’re trying to perfect a handstand in just two weeks, and we’re not coaching them right. The great memos are written and re-written, shared with colleagues who are asked to improve the work, set aside for a couple of days, and then edited again with a fresh mind. They simply can’t be done in a day or two. The key point here is that you can improve results through the simple act of teaching scope — that a great memo probably should take a week or more.
It is therefore likely that the key reason Bezos’ shareholder letters are so compelling is that he fully appreciates the scope necessary to meet a high standard for shareholder letters 🙂
I’ve been paying closer attention in recent months to the way sports teams and acting talent agencies handle talent, for a couple of key reasons:
These industries are regularly making high-risk, multi-million dollar bets on talent. Therefore, their incentives for applying cutting-edge hiring practices (and continuously pushing the envelope in that domain) are extremely high.
On the flip side, the relative simplicity of the “definition of success” and the ability to create stronger causal links between talent decisions and outcomes make them rather attractive to study from a research perspective.
I rarely make predictions, but I suspect that in the coming years we’ll see more and more hiring practices that are currently common among elite sports teams and movie production studios propagate out to other industries in which top-tier talent plays a critical role in the success of the business.
None of these industries offer a perfect model for the more common talent market. As mentioned above, they are simpler representations. In sports, the number of “firms” competing for talent is known and rather limited (dozens), measuring overall success is more binary (games won), and individual performance indicators are more visible, established and straightforward. Movie contracts are relatively short (several months), and this attribute makes that industry significantly different from the broader job market, which usually optimizes for longer-term employment.
Masey’s post offers 5 lessons that are fairly applicable to any hiring effort, regardless of industry:
Understand your goal — “People often don’t understand their decision objectives, but the most successful sports teams are clear about their goal and don’t stray from the principles and attributes they’ve established.” — build a “performance profile”/scorecard before you even start looking for the first candidate.
Keep your judges apart — “Don’t let people talk to each other or see others’ opinions before providing their own, expose the candidate to judges in different ways and at different points in time, and bring people with different perspectives into the process. More independence is often the biggest improvement an organization can easily make in their hiring process.” — Easily translatable to the way scorecards, debriefs and hiring recommendations should be made.
Break the candidate into parts… — “It’s much easier to give one, global evaluation — like or dislike, hire or reject. These overarching evaluations are natural and efficient, but unfortunately, they are often biased. For a more reliable evaluation, you need to break the objective into component parts and evaluate them separately.” — this speaks to the benefit of interviews focused on evaluating just a subset of the overall criteria, and clearly setting expectations with the interview team that they should evaluate the candidate’s performance in their area of focus rather than make an overall hire/don’t hire recommendation.
… and bring them back together mechanically — “At the team level it can mean summarizing the group’s collective opinion by simply averaging scouts’ opinions. At the very least this approach provides a more systematic starting point for a group discussion.” — personally, I’d err more towards the latter — using the aggregation as a systematic starting point rather than an automatic determination of the outcome. The fully algorithmic approach requires full calibration across the interview team, which is often not the case.
Keep score — “We’ve all been animated by the sense we’ve just seen the next star in our field. The trick is to capture those judgments and track them over time to learn how predictive they are. This applies to all judgments. Hiring is best thought of as a forecasting process, and the only way to improve forecasts is to map them against results and refine the process over time.”
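The “break into parts, bring back together mechanically” idea in lessons 3 and 4 can be sketched in a few lines of Python. The scorecard structure, criterion names, and 1–5 scale below are all hypothetical, and the summary number is meant as a starting point for the debrief, not an automatic decision:

```python
from statistics import mean

# Hypothetical scorecards: each interviewer rates only the criteria
# they were asked to focus on, on a 1-5 scale, independently of the
# other interviewers.
scorecards = {
    "interviewer_a": {"coding": 4, "communication": 3},
    "interviewer_b": {"coding": 5, "system_design": 4},
    "interviewer_c": {"communication": 4, "system_design": 3},
}

def aggregate(scorecards):
    """Average each criterion across interviewers, then average the
    criterion means into one summary score."""
    by_criterion = {}
    for ratings in scorecards.values():
        for criterion, score in ratings.items():
            by_criterion.setdefault(criterion, []).append(score)
    criterion_means = {c: mean(s) for c, s in by_criterion.items()}
    summary = mean(criterion_means.values())
    return criterion_means, summary

means, summary = aggregate(scorecards)
```

Here `means` gives the per-criterion averages and `summary` the single number to open the group discussion with.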
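“Keeping score” amounts to logging each hiring forecast and later mapping it against an outcome. A minimal sketch (the records, the 1–5 scales, and the choice of Pearson correlation as the measure of predictiveness are all my own assumptions) might look like:

```python
from statistics import mean

# Hypothetical log: the forecast made at decision time paired with
# the performance rating observed at the first review.
records = [
    {"candidate": "a", "forecast": 5, "outcome": 4},
    {"candidate": "b", "forecast": 3, "outcome": 4},
    {"candidate": "c", "forecast": 4, "outcome": 2},
    {"candidate": "d", "forecast": 2, "outcome": 3},
]

def pearson(xs, ys):
    """Pearson correlation between forecasts and outcomes: a rough
    measure of how predictive the hiring process has been so far."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson([rec["forecast"] for rec in records],
            [rec["outcome"] for rec in records])
```

A value of `r` near zero over many hires would suggest the forecasts carry little signal and the process needs refining; tracked over time, the trend in `r` shows whether process changes are helping.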