In-context technical interviews

Last week I recapped Dave Snowden’s somewhat abstract “7 principles” of knowledge management, hinting that they may be particularly applicable to recruiting interviews. This week, I want to flesh out this idea further.

The whole purpose of recruiting interviews is to enable candidates to demonstrate the skills and knowledge relevant to the role they’re interviewing for. Yet the default mode in which many interviews are conducted is an out-of-context one: discussing a hypothetical problem the candidate has never encountered before, in the abstract or in a way that’s very different from the way they’d tackle it on the job. This approach is particularly pervasive in technical interviews, and much has been written on their adversarial nature and various other challenges.

So what would an in-context technical interview experience look like? 

The first hint comes from Lou Adler (whose book I covered at length here), who recommends an interview structure he calls “anchor and visualize”. At a high level, it’s a two-question interview: the first question focuses on work the candidate has already done, and the second poses a hypothetical, future-looking scenario. Both questions can then be explored further and elaborated upon using a set of follow-up questions (“SMARTe”). Looked at through the lens of staying in context, it’s easy to see why the first question is a good starting point: it keeps candidates “in context” by asking them to talk about work they’ve already done. The second question, if applied verbatim, still suffers from being out-of-context. But if we take its intent, seeing how the candidate deals with a new situation, and accomplish the same result by adding complexity to or modifying some details of the initial situation, now we have something interesting on our hands!

The second hint comes from the folks at Stripe who, knowingly or not, sketched out a technical interviewing experience that’s mostly in-context, best captured in this Quora post. On Stripe’s interview day, no coding is done on a whiteboard. The various assignments are done on the candidate’s laptop in their own environment (again, staying in context).

Based on those two hints, I’d propose the following for an almost-fully in-context technical interview experience:

Take-home assignment, consisting of two parts:

  1. A short coding exercise in which the candidate needs to create working code from scratch — laying the groundwork for assessing the candidate’s design and implementation skills.
  2. A non-coding exercise asking a few comprehension questions based on a larger, existing code-base — laying the groundwork for exercises that will require a bigger code-base, enabling the candidate to become familiar with it (stay in-context) without creating an unreasonably long assignment that’d require them to write it on their own.

On-site: 

Candidates should be using their own laptops and environments throughout the day. I like Stripe’s overall structure for the day as a strong starting point that can then be modified/adapted to fit the unique needs of the company or the role:

  1. Design and implementation (90–120 mins) — starting with a code review of the first part of the take-home assignment, then going deeper and adding complications and modifications from there.
  2. Bug squashing (45–60 mins) — the code-base shared in the 2nd part of the take-home assignment should contain a failing test. In this session, the candidate will be asked to find the bug and fix it (see the sketch after this list).
  3. Refactoring (45–60 mins) — the code-base shared in the 2nd part of the take-home assignment should contain a poorly implemented section. In this session, the candidate will be asked to refactor that section.
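
To make the bug-squashing session concrete, here is a minimal sketch of what a planted bug and its failing test could look like. This is a hypothetical Python example of my own, not something taken from Stripe’s actual exercises:

    # Hypothetical planted bug: the test below fails as shipped,
    # and the candidate is asked to find out why and fix it.

    def average(values):
        # Intended: return the arithmetic mean of a non-empty list.
        total = 0
        for v in values[1:]:  # BUG: the slice silently skips the first element
            total += v
        return total / len(values)

    def test_average():
        assert average([2, 4, 6]) == 4  # fails: returns 10 / 3 instead

    if __name__ == "__main__":
        test_average()
        print("all tests passed")

The exercise scales naturally: in a real code-base the failing test would sit several layers away from the buggy line, so the candidate also gets to demonstrate how they navigate unfamiliar code.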

There is one important caveat, captured in the Stripe Quora post, that is worth mentioning here: some aspects of working out-of-context are expected as part of the job. For example, developers are often required to maintain/support/interact with code they haven’t written themselves. I can see why some companies may choose a more extreme approach than using the 2nd part of the take-home assignment to assess that, and will want to introduce a never-seen-before code-base into the interview day. I think that’s ok, but my advice would be to keep it to the one interview that’s focused on evaluating that skill and keep the rest of the day as in-context as possible.

We have a long way to go as an industry to get to this ideal, present company included, but losing out on good engineers is a luxury none of us can afford.


Rendering Knowledge [Snowden]

I’ve been thinking a lot about knowledge and context recently. Specifically, when it comes to job interviews. We’re trying to create an experience that enables candidates to demonstrate their knowledge and therefore their fit for a certain role. And yet, it is easier said than done.

I first came across this issue watching a talk by Jabe Bloom. In the five years since it was given, I must have watched it close to a dozen times, which is extremely unusual for me. It’s probably one of the most knowledge-packed talks I’ve ever watched, and I’m still unpacking bits and pieces of it.

In his talk, Jabe references Dave Snowden’s work around knowledge management, which I was able to trace back to this short post that’s now a decade old:

Rendering Knowledge

Based on a more detailed paper, which I wasn’t able to find (yet), Dave lays out his 7 principles of knowledge management:

  1. Knowledge can only be volunteered; it cannot be conscripted
  2. We only know what we know when we need to know it
  3. In the context of real need, few people will withhold their knowledge
  4. Everything is fragmented
  5. Tolerated failure imprints learning better than success
  6. The way we know things is not the way we report we know things
  7. We always know more than we can say, and we will always say more than we can write down

There are more detailed descriptions in the original piece, but I find these to be quite profound. #2 and #7 are particularly interesting in the context of interviews…


Deciding how to decide


Decision-making has been, and will likely continue to be, a major challenge in every collaborative effort.

A big complication that often gets in the way of making good decisions is deciding how to decide. Meaning, what decision-making process one should follow. Deciding how to decide is difficult because there is no one-size-fits-all decision-making process. Picking the right one depends on the situation.

It’s a topic that I’ve been grappling with for quite a while, and I’ve shared some interim insights and structures here:

It’s always fun to be able to document how my thinking is evolving, and the trigger to actually put pen to paper on this one was a good post by the folks at Coinbase:

However, the real breakthrough in my thinking came from a cool micro-site put together by the folks at NOBL called “How Do We Decide”. In it, they’ve identified 8 different types of decision-making processes and about a dozen situational factors that will lead you to favor one type over the others.

Source: howdowedecide.com

Overwhelmed by the number of different processes and potential situational permutations, I tried to come up with a simpler heuristic to match a certain situation to its optimal decision-making process.

As part of my search, I decided to re-read a McKinsey whitepaper I came across a while back called “Untangling Your Organization’s Decision-Making”. While their suggested set of decision-making processes didn’t quite land with me, the taxonomy they used to classify the various types of situations rang true:

Source: McKinsey

Which led me to my big a-ha moment:

Decision-making processes consist of two core stages (and a few additional ones at the beginning and end): 

  1. Identifying and exploring various options
  2. Making the decision (choosing between the options) 

The optimal process for each of the core stages depends on different attributes of the situation at hand

At the end of the day, decision-making processes differ from one another in how collaborative they are. Other attributes of the process, such as the speed at which the decision gets made or the amount of buy-in that’s achieved, are a byproduct of that.

Finding the right decision-making process only seems tricky when we force ourselves to couple the level of collaboration across both core stages. Since the optimal process for each stage is driven by different attributes of the situation, a certain level of collaboration may be a good fit for one stage but not the other. We can be more collaborative in identifying and exploring options and less collaborative in making the decision (the “consult” option in my original post), do exactly the opposite, or be just as collaborative in both, depending on the situation.

The less familiar we are with the situation, the more collaborative we should be in identifying and exploring options

To assess our level of familiarity we should ask ourselves:

  1. Is this a decision that we’re making frequently? (more frequent = more familiar)
  2. How clear are the options? (clearer = more familiar)
  3. How available is the information required for identifying/exploring the options? (more available = more familiar)
  4. How distributed is the expertise required for identifying/exploring the options? (less distributed = more familiar)

The higher the impact of the outcome, the more collaborative we should be in making the decision

I’m hoping to improve this part of the framework, but for the time being, to assess the level of impact we should ask ourselves:

  1. What would be the breadth of the outcome? (more people impacted = more impact)
  2. What would be the depth of the outcome? (more profound impact = more impact)
  3. How reversible would the outcome be? (less reversible = more impact)

As the impact increases, we should opt for a more collaborative decision-making process: from a single decision-maker, through consent and democratic, to consensus.
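
To make the heuristic concrete, here is a minimal sketch of how the two stages could be encoded. The function names, the 1–5 scoring scale, and the threshold values are illustrative assumptions of my own, not part of NOBL’s or McKinsey’s frameworks:

    # Hypothetical encoding of the two-stage heuristic.
    # Scores run from 1 (low) to 5 (high); thresholds are arbitrary.

    def exploration_mode(familiarity: int) -> str:
        # Stage 1: the LESS familiar the situation, the MORE
        # collaborative the identification/exploration of options.
        if familiarity >= 4:
            return "explore solo or in a small group"
        if familiarity >= 2:
            return "consult a few people with relevant expertise"
        return "open up the exploration broadly"

    def decision_mode(impact: int) -> str:
        # Stage 2: the HIGHER the impact, the MORE collaborative
        # the actual choice between the options.
        if impact <= 2:
            return "single decision-maker"
        if impact == 3:
            return "consent"
        if impact == 4:
            return "democratic"
        return "consensus"

    # An unfamiliar but low-impact situation: explore broadly,
    # then let a single decision-maker choose.
    print(exploration_mode(familiarity=1))  # open up the exploration broadly
    print(decision_mode(impact=2))          # single decision-maker

Note that the two calls are independent: a situation can call for broad exploration and a single decision-maker at the same time, which is exactly the decoupling described above.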


I found applying different levels of collaboration to the two stages extremely liberating. It gives me a more nuanced way to tailor the decision-making process to the situation, and a stronger sense of certainty that the process I’m using actually fits. However, it’s by no means a silver bullet. The challenge is, and will continue to be, assessing the levels of familiarity and impact and picking the appropriate transition points from one process to another.


The XY Problem [Chen]

Distinctions are a powerful concept. Labeling a pattern makes it easier to identify and respond to it. And that’s exactly what Lily Chen did for me with

The most common problem I’ve seen in product/engineering process

In this short piece, Lily gave a label to a common pattern that I’ve seen time and time again, in a much broader scope than the product development one that her piece focuses on.

The XY Problem is asking about your attempted solution rather than the actual problem: you are trying to solve problem X, and you think solution Y would work, but instead of asking about X when you run into trouble, you ask about Y. (The classic illustration: asking how to get the last three characters of a filename when what you actually want is the file extension.)

The best way to avoid the XY Problem, other than simply being aware of it, is to get into a habit of asking “why”. As Lily suggests, “behind every what there’s a why”. In any problem-solving collaboration, don’t start looking for solutions before you’ve moved from the default starting spot in the What Stack and everyone understands at least a couple of “whys” up the stack.
