The Organizational Lag

The Organizational Lag: Leading Your Team Through Change is a wonderful, short piece by John Vars.

In it, John captures a behavioral pattern that I’ve seen multiple times in various organizations: the challenge of getting buy-in and driving change. He identifies an important root cause for it, as well as a path for addressing (some of) it.

It can be simply illustrated in this image:

[Image: the organizational lag]

Or in this one sound-bite from his piece:

The big lesson for me was this: there is always an organizational lag. By this I mean that management often gets access to information ahead of the rank-and-file employees. Therefore, managers are able to process and get comfortable with the information before sharing with the greater team. The team, naturally quite nervous with a major change, now has to catch up to the managers who are bombarding them with action items. This is the lag. The lag is when managers are ready to start acting and the team is asking WTF. The lag is when managers experience increasing optimism and the team experiences increasing anxiety. The lag always exists and you can’t avoid it. What you can do is shrink the lag time.

Take 5 mins, read his piece. It’s well worth it.

This notion of a “lag” and the visual oscillating pattern immediately drew a connection in my mind between the phenomenon that John is describing here and Donella Meadows’ “systems thinking” work (get a taste of it here). What’s particularly interesting is that it’s the human interactions that create the system pattern/behavior. If we were talking about machines, one reaching a decision and then instructing the other to do something, we would not see the same system behavior: there would be no “WTF” moments and no anxiety. I’m probably going to spend some more time reflecting on the interplay between “systems thinking” and “human behavior”. Lots to uncover there.
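To make the lag a little more concrete, here’s a toy sketch of the pattern – my own illustration, not something from John’s piece. It assumes a made-up “comfort” curve that starts rising once a person hears the news, and a made-up four-week head start for managers; the point is only that the gap between the two curves is the lag, and that sharing earlier shrinks it.

```python
import math

# Toy model (my own illustration, not from John Vars' piece): comfort with a
# change rises from 0 toward 1 once a person is informed; the team simply
# starts later than the managers.
def comfort(weeks_since_informed: float) -> float:
    if weeks_since_informed < 0:
        return 0.0  # hasn't heard the news yet
    return 1 - math.exp(-weeks_since_informed / 3)  # made-up 3-week "settling" constant

LAG_WEEKS = 4  # made-up: managers learn about the change 4 weeks before the team

for week in range(0, 13, 2):
    managers = comfort(week)
    team = comfort(week - LAG_WEEKS)
    gap = managers - team  # the "lag": managers pushing action items, team asking WTF
    print(f"week {week:2d}: managers {managers:.2f}  team {team:.2f}  gap {gap:.2f}")

# Shrinking LAG_WEEKS (sharing earlier) shrinks the gap, but never removes it entirely.
```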

The Organizational Lag

How to make sense of Cognitive Biases

Buster Benson, of Codex Vitae fame, with another great piece:

Cognitive Bias Cheat Sheet

In addition to being a great piece on the merits of its content alone, on a meta level, it is also a great example of the value that good synthesis of existing knowledge can contribute to the way we make sense of the world around us.

The “List of cognitive biases” Wikipedia page, circa August 2016, contained a jumble of about 175 different biases. Buster set out on the heroic task of helping all of us see the forest for the trees.

Here’s a slightly paraphrased and summarized version of Buster’s categorization:

Problem 1: Too much information.

There is just too much information in the world, we have no choice but to filter almost all of it out. Our brain uses a few simple tricks to pick out the bits of information that are most likely going to be useful in some way.

Therefore, we:

  • notice things that are already primed in memory or repeated often
  • notice bizarre/funny/visually-striking/anthropomorphic things more than non-bizarre/unfunny things
  • notice when something has changed
  • are drawn to details that confirm our own existing beliefs

Problem 2: Not enough meaning.

The world is very confusing, and we end up only seeing a tiny sliver of it, but we need to make some sense of it in order to survive. Once the reduced stream of information comes in, we connect the dots, fill in the gaps with stuff we already think we know, and update our mental models of the world.

Therefore, we:

  • find stories and patterns even in sparse data
  • fill in characteristics from stereotypes, generalities, and prior histories whenever there are new specific instances or gaps in information
  • imagine things and people we’re familiar with or fond of as better than things and people we aren’t familiar with or fond of
  • simplify probabilities and numbers to make them easier to think about
  • think we know what others are thinking
  • project our current mindset and assumptions onto the past and future

Problem 3: Need to act fast.

We’re constrained by time and information, and yet we can’t let that paralyze us. Without the ability to act fast in the face of uncertainty, we surely would have perished as a species long ago. With every piece of new information, we need to do our best to assess our ability to affect the situation, apply it to decisions, simulate the future to predict what might happen next, and otherwise act on our new insight.

Therefore, we:

  • need to be confident in our ability to make an impact and to feel like what we do is important
  • favor the immediate, relatable thing in front of us over the delayed and distant, in order to stay focused
  • are motivated to complete things that we’ve already invested time and energy in
  • are motivated to preserve our autonomy and status in a group, and to avoid irreversible decisions, in order to avoid mistakes

Problem 4: What should we remember?

There’s too much information in the universe. We can only afford to keep around the bits that are most likely to prove useful in the future. We need to make constant bets and trade-offs around what we try to remember and what we forget. For example, we prefer generalizations over specifics because they take up less space. When there are lots of irreducible details, we pick out a few standout items to save and discard the rest. What we save here is what is most likely to inform our filters related to problem 1’s information overload, as well as inform what comes to mind during the processes mentioned in problem 2 around filling in incomplete information. It’s all self-reinforcing.

Therefore, we:

  • edit and reinforce some memories after the fact
  • discard specifics to form generalities
  • reduce events and lists to their key elements
  • store memories differently based on how they were experienced

Since then, John Manoogian III has created a beautiful illustration based on Buster’s post. It has already been integrated back into the original Wikipedia entry, and is available here in poster form:

[Image: Cognitive Bias Codex – 180+ biases, designed by John Manoogian III (jm3)]

How to make sense of Cognitive Biases

Are resumes better than a coin flip?

Aline Lerner, a master recruiter, decided to run an interesting experiment:

She created an anonymized data set of 51 resumes, 33 belonging to “strong” candidates, and 18 to “less strong” candidates. Classification was determined based on how the candidates fared in actual job interviews and subsequent job success.

A group of 152 experts (22 in-house recruiters, 24 agency recruiters, 20 hiring managers, and 86 engineers involved in the hiring process) participated. Each was shown a random sample of 6 resumes from the data set and asked a simple question: “would you interview this candidate?”

Answers were codified with “yes” = 1 and “no” = 0. These were the results:

[Charts: accuracy by participant group (boxplot) and overall accuracy]

On average, participants guessed correctly 53% of the time, barely better than flipping a coin (in-group averages ranged from 48% to 56%, but the differences were not statistically significant).
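For concreteness, here is a minimal sketch of how a headline number like this could be computed. The responses below are randomly generated stand-ins, not Aline’s actual data, and the z-test against the 50% coin-flip baseline is my own choice of check, not necessarily the analysis she ran.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in data (NOT the actual experiment results):
# 152 participants x 6 resumes each; label = 1 for a "strong" candidate,
# answer = 1 for "yes, I would interview".
trials = []
for _ in range(152 * 6):
    label = 1 if random.random() < 33 / 51 else 0            # 33 of the 51 resumes were "strong"
    answer = label if random.random() < 0.53 else 1 - label  # ~53% agreement, as reported
    trials.append((label, answer))

correct = sum(1 for label, answer in trials if label == answer)
n = len(trials)
accuracy = correct / n

# Two-sided z-test (normal approximation) against the coin-flip baseline p0 = 0.5.
p0 = 0.5
z = (accuracy - p0) / math.sqrt(p0 * (1 - p0) / n)
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"accuracy = {accuracy:.1%}, z = {z:.2f}, p = {p_value:.3f}")
```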

While the experiment may not be as scientifically rigorous as we’d like it to be, it’s the most diligent attempt that I’ve seen to test out a hypothesis that I’ve been contemplating myself for quite a while:

Do resumes really matter?  

A typical hiring process consists of two sub-processes running in parallel, with the emphasis on each shifting from step to step and from company to company:

  1. A screening process – meant to give the company confidence that this is the candidate they want to hire
  2. A selling process – meant to give the candidate confidence that this is the company they want to work for

Resumes are the initial filter for #1. So much time and energy is invested in writing them (by the candidate) and screening them (by the recruiter/hiring manager), yet the above experiment suggests that, as an initial screener, they perform barely better than a coin flip. And we haven’t even started talking about the biases they introduce into the screening process (plenty of examples here, just Google it).

Is there a better alternative?

The short answer is yes. It’s called a “work-sample test”, and the sad part is that it was shown to be a superior predictor back in 1998, almost 20 years ago, yet very little has changed in the way we screen candidates since.

A work-sample test simply means giving candidates a sample piece of work similar to the one they’ll be asked to do in their role. For software engineers, it doesn’t have to be a coding project; it can also be a multiple-choice quiz like the one the folks at Triplebyte use.

I’m sure many of you have some strong push-back arguments. I know I did when I first started thinking about this. Let me address some of them in FAQ-form:

Q: Creating a work-sample test sounds like a lot of work. Is it really worth it?

A: Consider the alternative cost: both the time investment in (meaningless) resume screening, and losing out on good candidates due to an ineffective screening process.

Q: I see how a work-sample test works well for engineers, but can it work for other roles as well?

A: Absolutely. One of my favorite examples is an old boss of mine who screened Executive Assistant candidates by asking them to create a fake itinerary for him given a set of constraints. I once had to conduct a test phone call, for a school operations role, with a fake parent whose kid was consistently late for school. Hiring salespeople? Have them videotape themselves giving a 10 min pitch of the product they are currently selling.

Q: What about cheating?

A: Some forms of test are, in theory, prone to cheating. But so are resumes: we typically only check references in the last step of the process, after we’ve already made the investment of having the candidate go through the entire interview process. If you’re really paranoid about cheating, you can have the candidates come on-site to take the test, or do it live – but keep in mind that you’re considering a replacement for a “coin flip” process, so a minor cheating risk seems like a reasonable trade-off.

Q: Completing a work-sample test seems to be rather time consuming. Is it fair to ask candidates to invest so much time up-front in their candidacy?

A: Fair. A work-sample test doesn’t have to be a lengthy exercise. It can be a 10-15 min assignment, if you know what you’re doing. For lengthier tests, consider: a) time-boxing, b) paying candidates for their time. Sounds expensive? More expensive than using your current “coin flip” screening process?

Q: We’re a small startup, and there’s no way candidates will be willing to invest the time in taking a work-sample test to work for us. What can we do?

A: If you’re a small startup, the thing you really can’t afford is a screening process that rejects about 50% of the qualified candidates who are interested in working for you. If you’re convinced that you can’t use a screening filter up-front – get rid of it altogether and just use a jobs@ email address or a contact info form instead. Give anyone who expresses interest a call.
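To put rough numbers on that claim, here is a back-of-the-envelope sketch with made-up figures. It treats the ~53% accuracy from the experiment above as a crude proxy for how often a qualified candidate survives a resume screen, which is a simplification, not a result from Aline’s post.

```python
# Back-of-the-envelope sketch with made-up numbers: treat the ~53% screening
# accuracy above as a rough proxy for the chance a qualified candidate survives
# a resume screen, and compare it to simply calling everyone who reaches out.
qualified_applicants = 20       # hypothetical pipeline for a small startup
p_survive_resume_screen = 0.53  # crude proxy based on the experiment above
p_survive_no_screen = 1.00      # "give anyone who expresses interest a call"

print("qualified candidates reaching an interview:")
print(f"  resume screen: {qualified_applicants * p_survive_resume_screen:.0f}")
print(f"  no screen:     {qualified_applicants * p_survive_no_screen:.0f}")
```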

 

Are resumes better than a coin flip?

Management Moon Shots

In 2008, a group of 35 management scholars and practitioners, under the leadership of Gary Hamel, met for two days to discuss the future of management:

Moon Shots for Management

Before we get to what they came up with, it’s worth spending a few sentences on the way they defined the science of management – one of the best definitions I’ve seen to date:

Management = the structures, processes and techniques used to compound human effort

The group set out to define daring goals, “moon shots”, that would motivate the search for radical new ways of mobilizing and organizing human capabilities.

They ended up with 25 somewhat-overlapping challenges, agreeing that the first 10 are the most critical:

[Image: Hamel’s list of 25 management moon shots]

It’s a fantastic “North Star”, and a powerful set of assessment criteria for evaluating existing and future management innovations.

 

Management Moon Shots