I’ve been thinking a lot about knowledge and context recently. Specifically, when it comes to job interviews. We’re trying to create an experience that enables candidates to demonstrate their knowledge and therefore their fit for a certain role. And yet, it is easier said than done.
I first came across this issue while watching a talk by Jabe Bloom. In the five years since it was given, I must have watched it close to a dozen times, which is extremely unusual for me. It’s probably one of the most knowledge-packed talks I’ve ever watched, and I’m still unpacking bits and pieces of it.
In his talk, Jabe references Dave Snowden’s work around knowledge management, which I was able to trace back to this short post that’s now a decade old:
Decision-making was and will likely continue to be a major challenge in every collaborative effort.
A big complication that often gets in the way of making good decisions is deciding how to decide. Meaning, what decision-making process one should follow. Deciding how to decide is difficult because there is no one-size-fits-all decision-making process. Picking the right one depends on the situation.
It’s a topic that I’ve been grappling with for quite a while and shared some interim insights and structures here:
However, the real breakthrough in my thinking was due to a cool micro-site that was put together by the folks at NOBL called “How Do We Decide”. In it, they’ve identified 8 different types of decision-making processes and about a dozen situational factors that will lead you to favor one type over the others.
Overwhelmed by the number of different processes and potential situational permutations, I tried to come up with a simpler heuristic to match a certain situation to its optimal decision-making process.
As part of my search, I decided to re-read a McKinsey whitepaper I came across a while back called “Untangling Your Organization’s Decision-Making”. While their suggested set of decision-making processes didn’t quite land with me, the taxonomy they’ve used to classify the various types of situations rang true:
Which led me to my big a-ha moment:
Decision-making processes consist of two core stages (and a few additional ones at the beginning and end):
Identifying and exploring various options
Making the decision (choosing between the options)
The optimal process for each of the core stages depends on different attributes of the situation at hand
At the end of the day, decision-making processes differ from one another in how collaborative they are. Other attributes of the process, such as the speed with which the decision gets made or the amount of buy-in that’s achieved, are a byproduct of that.
Finding the right decision-making process seems tricky when we force ourselves to couple the level of collaboration in both of the core stages. Since the optimal process is driven by different attributes, a certain level of collaboration may be a good fit for one stage but not the other. We can be more collaborative in identifying and exploring options, and less collaborative in making the decision (the “consult” option in my original post), do exactly the opposite, or be just as collaborative in both, depending on the situation.
The less familiar we are with the situation, the more collaborative we should be in identifying and exploring options
To assess our level of familiarity we should ask ourselves:
Is this a decision that we’re making frequently? (more frequent = more familiar)
How clear are the options? (clearer = more familiar)
How available is the information required for identifying/exploring the options? (more available = more familiar)
How distributed is the expertise required for identifying/exploring the options? (less distributed = more familiar)
The higher the impact of the outcome, the more collaborative we should be in making the decision
I’m hoping to improve this part of the framework, but for the time being, to assess the level of impact we should ask ourselves:
What would be the breadth of the outcome? (more people impact = more impact)
What would be the depth of the outcome? (more profound impact = more impact)
How reversible would the outcome be? (less reversible = more impact)
As the impact increases, we should opt for a more collaborative decision-making process: from a single decision-maker, through consent and democratic, to consensus.
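The two-stage heuristic above can be sketched in code. This is a toy illustration only: the 1–5 scoring scale, the factor names, and the threshold values are my own assumptions, not part of the framework. Only the direction of the mappings (less familiar → more collaborative exploration; higher impact → more collaborative choice) comes from the text above.

```python
# Toy sketch of the two-stage decision-process heuristic.
# Scales (1-5) and thresholds are illustrative assumptions.

def familiarity(frequency, option_clarity, info_availability, expertise_concentration):
    """Average the four familiarity factors (1-5 each, higher = more familiar)."""
    return (frequency + option_clarity + info_availability + expertise_concentration) / 4

def impact(breadth, depth, irreversibility):
    """Average the three impact factors (1-5 each, higher = more impact)."""
    return (breadth + depth + irreversibility) / 3

def exploration_style(fam):
    """Stage 1: less familiar -> more collaborative exploration of options."""
    return "collaborative exploration" if fam < 3 else "solo exploration"

def decision_style(imp):
    """Stage 2: higher impact -> more collaborative choice between options."""
    if imp < 2:
        return "single decision-maker"
    elif imp < 3:
        return "consent"
    elif imp < 4:
        return "democratic"
    return "consensus"

# Example: an unfamiliar, high-impact situation
fam = familiarity(frequency=1, option_clarity=2, info_availability=2, expertise_concentration=1)
imp = impact(breadth=5, depth=4, irreversibility=4)
print(exploration_style(fam), "+", decision_style(imp))
# -> collaborative exploration + consensus
```

The key design point the sketch captures is that the two stages are scored independently: a situation can legitimately land on “collaborative exploration + single decision-maker” or the reverse.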
I found applying different levels of collaboration to the two stages extremely liberating. It provides me with a more nuanced way to tailor the decision-making process to the situation and a stronger sense of certainty that I’m using a process that fits the situation. However, it’s by no means a silver bullet. The challenge is and will continue to be in assessing the levels of familiarity and impact and picking the appropriate transition points from one process to the other.
In this short piece, Lily gave a label to a common pattern that I’ve seen time and time again, in a much broader scope than the product development one that Lily’s piece focuses on.
The XY Problem is asking about your attempted solution rather than the actual problem: you are trying to solve problem X, and you think solution Y would work, but instead of asking about X when you run into trouble, you ask about Y.
The best way to avoid the XY Problem, other than simply being aware of it, is to get into a habit of asking “why”. As Lily suggests, “behind every what there’s a why”. In any problem-solving collaboration, don’t start looking for solutions before you’ve moved from the default starting spot in the What Stack and everyone understands at least a couple of “whys” up the stack.
I didn’t delve too deep into it. Dealing with underperformance was not something I was facing at the time, so it felt irrelevant. But I knew that even in this narrow frame, it would be relevant for me at some point, so I filed it away to read later.
I realized that I had stumbled upon a hidden gem the first time I found myself referencing the article, in a context that had nothing to do with underperformance. We were working on equipping managers with better tools to have career development conversations with their teammates, and the questions that Claire proposed in her article seemed relevant.
These questions are just great questions for every manager to ask their teammates at some point. So without further ado, here they are:
Is it clear what needs to get done? How can I make the goals or expectations clearer?
Is the level of quality that’s required for this work clear? What examples or details can I provide to clarify the level of quality that’s needed?
Am I being respectful of the amount of time you have to accomplish something? Can I be doing a better job of protecting your time?
Do you feel you’re being set up to fail in any way? Are my expectations realistic? What am I asking that we should adjust so it’s more reasonable?
Do you have the tools and resources to do your job well?
Have I given you enough context about why this work is important, who the work is for, or any other information that is crucial to do your job well?
What’s irked you or rubbed you the wrong way about my management style? Does my tone come off the wrong way? Do I follow up too frequently with you, not giving you space to breathe?
How have you been feeling about your own performance lately? Where do you see opportunities to improve, if any?
What are you most enjoying about the work you’re doing? What part of the work is inspiring, motivating, and energizing, if any?
In what part of the work do you feel stuck? What have you been trying to “crack the nut” on, but it feels like you’re banging your head against the wall?
What part of the work is “meh”? What tasks do you feel bored or ambivalent about?
When’s the last time you got to talk to or connect with a customer who benefited from the work you did? Would you like more opportunities to do that, and should we make that happen?
Do you feel you’re playing to your strengths in your role? Where do you feel like there is a steep learning curve for you?
Would you say you’re feeling optimistic, pessimistic or somewhere in the middle about the company’s future?
This year’s letter focuses on some key lessons around “high standards”:
They are teachable, rather than intrinsic — “people are pretty good at learning high standards simply through exposure. High standards are contagious. Bring a new person onto a high standards team, and they’ll quickly adapt… And though exposure works well to teach high standards, I believe you can accelerate that rate of learning by articulating a few core principles of high standards”
They are domain specific, rather than universal — “you have to learn high standards separately in every arena of interest… Understanding this point is important because it keeps you humble. You can consider yourself a person of high standards in general and still have debilitating blind spots.”
You must be able to recognize what good looks like in that domain…
… and have realistic expectations for how hard it should be (how much work it will take) to achieve that result — the scope.
More than anything else (lack of skill, inability to recognize the standard, etc.), understanding how much work will be required to meet the high standard seems to be the most common culprit of not meeting it:
Often, when a memo isn’t great, it’s not the writer’s inability to recognize the high standard, but instead a wrong expectation on scope: they mistakenly believe a high-standards, six-page memo can be written in one or two days or even a few hours, when really it might take a week or more! They’re trying to perfect a handstand in just two weeks, and we’re not coaching them right. The great memos are written and re-written, shared with colleagues who are asked to improve the work, set aside for a couple of days, and then edited again with a fresh mind. They simply can’t be done in a day or two. The key point here is that you can improve results through the simple act of teaching scope — that a great memo probably should take a week or more.
It is therefore likely that the key reason Bezos’ shareholder letters are so compelling is that he fully appreciates the scope necessary to meet a high standard for shareholder letters 🙂
I’ve been paying closer attention in recent months to the way sports teams and acting talent agencies are handling talent, for a couple of key reasons:
These industries are regularly making high-risk, multi-million dollar bets on talent. Therefore, their incentives for applying cutting-edge hiring practices (and continuously pushing the envelope in that domain) are extremely high.
On the flip side, the relative simplicity of the “definition of success” and the ability to create stronger causal links between talent decisions and outcomes make them rather attractive to study from a research perspective.
I rarely make predictions, but I suspect that in the coming years we’ll see more and more hiring practices that are currently common among elite sports teams and movie production studios propagate out to other industries in which top-tier talent plays a critical component in the success of the business.
None of these industries offer a perfect model for the more common talent market. As mentioned above, they are simpler representations. In sports, the number of “firms” competing for talent is known and rather limited (dozens), measuring overall success is more binary (games won), and individual performance indicators are more visible, established, and straightforward. Movie contracts are relatively short (several months), and this attribute makes that industry significantly different from the broader job market, which usually optimizes for longer-term employment.
Masey’s post offers 5 lessons that are fairly applicable to any hiring effort, regardless of industry:
Understand your goal — “People often don’t understand their decision objectives, but the most successful sports teams are clear about their goal and don’t stray from the principles and attributes they’ve established.” — build a “performance profile”/scorecard before you even start looking for the first candidate.
Keep your judges apart — “Don’t let people talk to each other or see others’ opinions before providing their own, expose the candidate to judges in different ways and at different points in time, and bring people with different perspectives into the process. More independence is often the biggest improvement an organization can easily make in their hiring process.” — Easily translatable to the way scorecards, debriefs, and hiring recommendations should be made.
Break the candidate into parts… — “It’s much easier to give one, global evaluation — like or dislike, hire or reject. These overarching evaluations are natural and efficient, but unfortunately, they are often biased. For a more reliable evaluation, you need to break the objective into component parts and evaluate them separately.” — This speaks to the benefit of interviews focused on evaluating just a subset of the overall criteria, and of clearly setting expectations with the interview team that they should evaluate the candidate’s performance in their area of focus rather than make an overall hire/don’t-hire recommendation.
… and bring them back together mechanically — “At the team level it can mean summarizing the group’s collective opinion by simply averaging scouts’ opinions. At the very least this approach provides a more systematic starting point for a group discussion.” — Personally, I’d err more towards the latter: using the aggregation as a systematic starting point rather than an automatic determination of the outcome. The fully algorithmic approach requires full calibration across the interview team, which is often not the case.
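As a concrete illustration of what mechanical aggregation can look like, here is a minimal sketch. The criteria names and the scoring scale are assumed for illustration; the idea is simply to average each criterion independently across interviewers and use the result as the starting point for the debrief, not as the final verdict.

```python
# Minimal sketch of mechanically aggregating interview scorecards.
# Criteria names and the 1-4 scale are illustrative assumptions.
from statistics import mean

# One scorecard per interviewer, each scoring the same criteria independently.
scorecards = [
    {"problem_solving": 3, "communication": 4, "domain_depth": 2},
    {"problem_solving": 4, "communication": 3, "domain_depth": 3},
    {"problem_solving": 2, "communication": 4, "domain_depth": 2},
]

def aggregate(cards):
    """Average each criterion across interviewers, then combine into an
    overall score to seed the debrief discussion."""
    criteria = cards[0].keys()
    per_criterion = {c: mean(card[c] for card in cards) for c in criteria}
    overall = mean(per_criterion.values())
    return per_criterion, overall

per_criterion, overall = aggregate(scorecards)
print(per_criterion, round(overall, 2))
```

Because each criterion is averaged before any group discussion happens, the interviewers’ judgments stay independent, which is exactly the property the “keep your judges apart” lesson is protecting.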
Keep score — “We’ve all been animated by the sense we’ve just seen the next star in our field. The trick is to capture those judgments and track them over time to learn how predictive they are. This applies to all judgments. Hiring is best thought of as a forecasting process, and the only way to improve forecasts is to map them against results and refine the process over time.”