OWKRs (not a typo)

Pictured: the great auk (inspired by the O’Reilly “Effective AWK Programming” book cover)

Goals are a core practice in many organizations and therefore the topic of several blog posts in this publication: from “Goals: connecting strategy and execution”, through “Why setting ambitious goals backfires” and “Goals gone wild”, to “How we align our goals”. 

The challenge with goals is captured beautifully when we look at them through the framework outlined by Donald Sull in the first piece above, which identifies 4 different uses for goals: 

  1. Improve individual performance
  2. Drive strategic alignment
  3. Foster organizational agility
  4. Enable members of a networked organization to self-organize their activities 

#1, in particular, is rife with pitfalls and tends to draw most of the heat when a case against goals is made. Yet if we think about goals less as a target to be hit and more as an intent to align on — it’s clear that they play a critical role in supporting #2. 

Abandoning goals altogether is probably a no-go. So how can we shift the way we set and articulate goals to make them more supportive of alignment?

Much has been written about OKRs, the most popular goal structure in use today, and in recent years more nuanced pieces have addressed some of the common pitfalls in how they are phrased and set. For example, avoiding the “OKR cascade”. However, none that I know of has suggested any changes to the OKR structure itself. Which is what I’m intending to do today. 

If we intend to use OKRs primarily as an alignment mechanism, the structural gap becomes clear: the “objective” describes the goal that we’re working towards, but it doesn’t connect it to the broader strategy. It doesn’t help answer the most meaningful question that a conversation should be centered around: 

Why is this goal the best thing you could do to advance our strategy?  

It is in answering this question that the biggest assumptions and interpretations are being made and the risk of meaningful misalignment is highest. Yet, we leave the answer to that question implicit, hoping that all parties involved are skilled enough to uncover it on their own. 

No more. Introducing: OWKRs. 

A small, but meaningful tweak to the traditional OKR structure: 

  • Objective
  • Why? (new) — a short (2–3 sentences) explanation of why this goal is the best thing that you could do to advance the strategy. 
  • Key Results 
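
To make the structure concrete, here is a minimal sketch of an OWKR captured as a data type; the field names and example content are mine, not part of any formal spec:

```python
from dataclasses import dataclass, field

@dataclass
class OWKR:
    objective: str                # the goal we're working towards
    why: str                      # 2-3 sentences tying the objective to the strategy
    key_results: list[str] = field(default_factory=list)

# Hypothetical example, purely for illustration.
example = OWKR(
    objective="Reduce customer churn by the end of Q3",
    why=("Our strategy bets on expansion revenue from existing customers; "
         "churn is currently the biggest leak in that funnel."),
    key_results=["Monthly churn below 2%", "NPS detractors down 30%"],
)
```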

My hypothesis is that making the “Why?” explicit in the structure will shift the focus of the O(W)KR-setting conversation to the underlying assumptions behind selecting the objective, catching any critical misalignments sooner. 

And as a bonus point, OWKR is an anagram for “work”… 🙂 

Visualizing the voice of the employee [Coolen]


Patrick Coolen is the Global Head of People Analytics, Strategic Workforce Planning and HR Survey Management at ABN AMRO, the third-largest bank in the Netherlands. Recently, he penned a great piece about one of my favorite topics:

Visualizing the voice of the employee

In this piece, Coolen outlines how ABN AMRO conducts its engagement survey and digests the resulting data. 

Data collection

The engagement survey is SUPER simple and lightweight, containing only 3 questions: 

  1. How likely are you to recommend our organization to a friend or relative as an organization to work for? (quantitative, NPS-like question)
  2. What is our organization doing well as an employer? (qualitative, “Top” question)
  3. What could our organization do better as an employer? (qualitative, “Tip” question)

To get a more continuous view of the data while avoiding survey fatigue, they take advantage of ABN AMRO being a large-enough organization: the survey runs monthly, but only 1/12 of the employees are asked to take it each time, using a stratified sampling approach to ensure that each sample is representative. 
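
For illustration, here is a rough sketch of that monthly stratified sample in pandas; the roster and strata are made up, and the piece doesn’t describe ABN AMRO’s actual implementation:

```python
import pandas as pd

# Hypothetical employee roster; the stratum could be business line, role, etc.
employees = pd.DataFrame({
    "employee_id": range(1, 12001),
    "stratum": ["Retail", "Corporate", "IT", "Operations"] * 3000,
})

# Each month, sample 1/12 of the employees within each stratum, so the
# monthly sample mirrors the composition of the whole organization.
# (A production version would also track who was already surveyed this year.)
monthly_sample = (
    employees
    .groupby("stratum", group_keys=False)
    .sample(frac=1 / 12, random_state=42)
)
print(monthly_sample["stratum"].value_counts())  # ~250 employees per stratum
```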

I LOVE the lightweight approach and the balance of a single quantitative question and the two “top & tip” open-ended qualitative questions, as well as leveraging the size of the organization to reduce survey fatigue without jeopardizing the quality of insights.

My one nit is that I’m not a huge fan of the NPS-like quantitative question and would probably replace it with a different quantitative metric that has a causal link to performance. 

Data analysis

The extreme simplicity of the survey and the open-endedness of the qualitative questions do create some non-trivial data analysis challenges in classifying the responses, which Coolen’s team did a brilliant job of overcoming. 

First, they “normalized” the responses by translating them all to a single language (English), splitting responses with multiple subjects, lower-casing all text, removing punctuation, and lemmatizing key words. 
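
As a rough illustration of the last few steps (assuming translation and subject-splitting already happened upstream), a normalization pass might look like this:

```python
import string
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)  # one-time download of the lemma data
lemmatizer = WordNetLemmatizer()

def normalize(response: str) -> str:
    """Lower-case, strip punctuation, and lemmatize an English response."""
    text = response.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(lemmatizer.lemmatize(token) for token in text.split())

print(normalize("More flexible working hours, please!"))
# -> "more flexible working hour please"
```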

Then, they evaluated several machine learning classification algorithms, landing on Support Vector Machine as the best candidate, and refined its precision further using a supervision process. 
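
The piece doesn’t include code, but a bare-bones version of that classification setup, using scikit-learn’s TF-IDF features and a linear SVM on a tiny toy training set, might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny hand-labeled toy set: normalized responses mapped to topics.
responses = [
    "more flexible working hour",
    "option to work from home",
    "great learning opportunity",
    "good training budget",
]
topics = ["Flexibility", "Flexibility", "L&D", "L&D"]

# TF-IDF features feeding a linear SVM, a common baseline for
# short-text topic classification.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(responses, topics)

print(model.predict(["bigger budget for training course"]))
```

Presumably, the supervision process feeds human corrections back into a (much larger) training set like this one.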

The output of the data analysis phase is the classification of each response into one of 150 topics, which, in turn, roll up to a smaller set of “expert domains” (Recruiting, L&D, IT, etc.). 

Data visualization

The data is then presented and made available to the entire organization using the bubble chart below where each bubble represents a topic: 

source: Patrick Coolen
  • The bubble is larger the more responses map to that topic.
  • The bubble is higher the more the topic showed up in “top” responses, rather than “tip” responses. 
  • The bubble is positioned further to the right the more positive the responses to the quantitative question were when the topic was brought up in the qualitative questions. 
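
To make the encoding concrete, here’s a minimal matplotlib sketch of such a chart, with entirely made-up topic aggregates:

```python
import matplotlib.pyplot as plt

# Hypothetical per-topic aggregates derived from the survey.
topics    = ["Flexibility", "IT systems", "Leadership", "L&D"]
mentions  = [120, 80, 60, 40]     # bubble size: number of responses
top_share = [0.7, 0.2, 0.4, 0.8]  # y-axis: share of "top" (vs "tip") mentions
avg_score = [8.1, 5.2, 6.0, 7.5]  # x-axis: avg quantitative score when mentioned

fig, ax = plt.subplots()
ax.scatter(avg_score, top_share, s=[m * 10 for m in mentions], alpha=0.5)
for x, y, label in zip(avg_score, top_share, topics):
    ax.annotate(label, (x, y), ha="center")
ax.set_xlabel("Avg. recommendation score when topic is mentioned")
ax.set_ylabel('Share of "top" (vs. "tip") mentions')
plt.show()
```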

The area of the chart can be segmented into 4 quadrants driving different actions: 

  • Topics (bubbles) in the top-right — Celebrate — things that the organization does well and are positively correlated with the quantitative measure. 
  • Topics (bubbles) in the bottom-left — Focus Areas — things that the organization does not do well, and are negatively correlated with the quantitative measure. Therefore, they are the areas where the opportunity for impactful change is the highest. 
  • Topics (bubbles) at the bottom-right — Suggestions — things that the organization does not do well, but are not negatively correlated with the quantitative measure. 
  • Topics (bubbles) at the top-left — Investigate — things that the organization does well but are still negatively correlated with the quantitative measure. Since this is an anomalous pattern, it is worthy of further investigation. 
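
And a tiny helper illustrating the quadrant logic (the midpoint thresholds are arbitrary placeholders of mine):

```python
def quadrant_action(top_share: float, avg_score: float,
                    y_mid: float = 0.5, x_mid: float = 6.5) -> str:
    """Map a topic's position on the chart to the suggested action."""
    if top_share >= y_mid:
        return "Celebrate" if avg_score >= x_mid else "Investigate"
    return "Suggestions" if avg_score >= x_mid else "Focus Areas"

print(quadrant_action(top_share=0.2, avg_score=5.2))  # -> Focus Areas
```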

The chart can also be filtered by time, business line, role, etc. to draw more refined insights which are then reviewed and acted upon in quarterly business reviews. 


Net-net I think this comes pretty darn close to the best way of surfacing insights out of a “working on work” exercise. Acting effectively on those insights will be the next hurdle to overcome. 

The evolution of Cynefin [Snowden, Corrigan]

I first learned about Dave Snowden’s Cynefin model at a Lean-Kanban conference circa 2015–16 and have made references to it in a handful of blog posts in the past [1, 2]. 

It first received broad recognition in a 2007 HBR piece titled A Leader’s Framework for Decision Making. On March 1 (St. David’s Day) 2019, Snowden took it upon himself to write a series of blog posts (5 in total) covering updates to the model, and on this year’s St. David’s Day, he decided to turn it into an annual ritual. 

Cynefin St David’s Day 2020 (1 of 5)

Chris Corrigan then aggregated the updated model and the key changes here:

A tour around the latest Cynefin iteration

And I am going to attempt to distill it even further. This is going to be a challenging post to write and I know the end product is not going to be great, both because the subject matter is difficult, and because I have yet to master the framework. But that’s exactly the point of writing about it…

First, a quick orientation: the Cynefin model is designed to aid decision-making and inform actions, recognizing that the decision-making process leading to the best action is different based on the context (domain) — the environment/situation — in which the action needs to be taken. 

The model discerns between 5 different domains. The two on the right (Clear, Complicated) are “ordered” domains where the environment is mostly knowable and predictable and problems are solvable. The distinction between those two domains is more nuanced and is a factor of the number of parts in the system/situation: the higher the number, the deeper we go into the Complicated domain, and the more expertise is required to know the right answer. 

The two on the left (Complex, Chaotic) are “unordered” domains where the environment is mostly unknowable and unpredictable. In the Complex domain, phenomena such as emergence and self-organization exist, but they are enabled by some constraints. In the Chaotic domain, there are no meaningful constraints, leading to semi-random behavior. 

Going counter-clockwise (Clear → Complicated → Complex → Chaotic), constraints are removed and the situation becomes more unordered and unstable. Going clockwise, constraints are added and the situation becomes more ordered and stable. 

In the middle is the Confusion domain, broken down into “Aporetic” (“at a loss”), where the confusion is unresolved or paradoxical, and “Confused”, where we just haven’t fully understood the situation yet, a more transient state. 

I’m going to keep the green sections indicating liminality out of the scope of this post for the time being. 

Putting the framework to action

Almost any situation that requires a response has multiple aspects, each mapping to a domain. 

Step 1 is decomposing the situation into its various aspects. 

Step 2 is mapping each aspect to its respective domain: 

  • A clear and obvious aspect where things are tightly connected and there is a best practice → Clear
  • An aspect with a knowable answer or a solution, which has an endpoint, but requires an expert to solve it for you → Complicated.
  • An aspect with many different possible approaches, and uncertainty around which is going to work → Complex.
  • An aspect that is a total crisis, which completely overwhelms you → Chaotic.
  • Aspects whose domain is still unclear should be left in the middle, in the “Confused” domain. 

Step 3 is applying the appropriate approach to the aspects in each domain: 

  • Clear (Sense → Categorize → Respond): just do them.
  • Complicated (Sense → Analyze → Respond): research using literature and experts, make a plan, and execute.
  • Complex (Probe → Sense → Respond): get a sense of the possibilities, try something, and watch what happens. As you learn things, document practices and principles that guide decision-making. If rules are too tight, loosen them. If rules are too loose, tighten them. 
  • Chaotic (Act → Sense → Respond): apply constraints quickly and maintain them until the situation stabilizes. 
  • Confusion: monitor those aspects and re-evaluate them as new information becomes available that may help classify them into the appropriate domain. 
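
One way to internalize steps 2 and 3 is as a simple lookup from domain to approach. A toy sketch, in my framing rather than Snowden’s:

```python
from enum import Enum

class Domain(Enum):
    CLEAR = "Sense -> Categorize -> Respond"
    COMPLICATED = "Sense -> Analyze -> Respond"
    COMPLEX = "Probe -> Sense -> Respond"
    CHAOTIC = "Act -> Sense -> Respond"
    CONFUSION = "Monitor -> Re-evaluate"

# Steps 1 + 2: decompose a situation into aspects (the examples here are
# invented) and map each aspect to a domain.
aspects = {
    "payroll run": Domain.CLEAR,
    "data-center migration": Domain.COMPLICATED,
    "new product launch": Domain.COMPLEX,
    "production outage": Domain.CHAOTIC,
}

# Step 3: apply the approach appropriate to each aspect's domain.
for aspect, domain in aspects.items():
    print(f"{aspect}: {domain.value}")
```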

Key changes in the framework 

  • Renaming the first domain as “Clear” instead of “Simple” (or “Obvious”)
  • Highlighting the roles that constraints play in each of the domains: fixed constraints (Clear), governing constraints (Complicated), enabling constraints (Complex), no constraints (Chaotic). 
  • Renaming the middle domain to “Confusion” (from “Disordered”) and decomposing it into “Aporetic” and “Confused”. 
  • Adding liminal boundaries around the Complex domain. 
  • Adding approach “labels” in addition to approach sequences: best practice (Clear), good practice (Complicated), exaptive discovery (Complex), novelty under stress (Chaotic). 

Hill Charts [Basecamp]

Source: Basecamp

A few weeks ago I wrote a piece in The Ready publication titled “Ending the Tyranny of the Measurable”, making the case that our obsession with the quantitatively measurable comes at a price, and offering alternatives for some common use cases. 

This week, I want to add another tool to the toolbox, courtesy of the Basecamp team: 

Hill Charts 

Oddly enough, this post is not even new. It’s 2 years old by now but just got on my radar this past week. 

The premise is very simple: numerical progress tracking is not very insightful. What can we learn from knowing that a project is 42% complete? 

The path towards progress is different depending on what the blocker might be, not to mention that the scope may still be evolving given that unknowns exist. 

Hill charts use the metaphor of a hill to discern between two phases in every problem-solving task: the uphill part is the divergent phase, where we figure out different approaches to the solution, and the downhill part is the convergent phase, where we have figured out a solution and it’s mostly a matter of execution. 

Source: Basecamp
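
For intuition, here’s a small sketch of the metaphor in code: the hill as a simple sine bump, and a task’s progress as a single position along it (function names and numbers are mine, purely illustrative):

```python
import math

def hill_height(position: float) -> float:
    """Height of the hill at a position in [0, 1]; the crest is at 0.5."""
    return math.sin(math.pi * position)

def phase(position: float) -> str:
    """Uphill = still figuring out the approach; downhill = mostly execution."""
    return "uphill (figuring it out)" if position < 0.5 else "downhill (executing)"

for task, pos in [("import pipeline", 0.3), ("notifications UI", 0.8)]:
    print(f"{task}: {phase(pos)}, height {hill_height(pos):.2f}")
```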

Hill charts offer a more qualitative, subjective way to reflect progress by positioning a task at a certain point on the hill. This approach has several benefits: 

  • It avoids the false precision of numerical progress tracking.
  • It captures relative progress across tasks directly, through their different positions on the hill, avoiding the proxy of numerical comparisons.
  • Reflecting on progress through a hill chart can direct our attention to the more appropriate strategy for removing blockers or making further progress, depending on the problem-solving stage, and act as a trigger for decomposing tasks when we realize that two different pieces are in different places on the hill.
  • It helps us avoid misleading numerical aggregation when we zoom out to the portfolio level, because it’s clear that the underlying project-level assessments are subjective. 

Taking a snapshot of the hill every time we move a task around can serve as a powerful retrospection tool when we look back and aim to learn from our experience completing the project. 
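
Capturing those snapshots could be as simple as an append-only log; a hypothetical sketch, not Basecamp’s actual implementation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class HillSnapshot:
    task: str
    position: float   # 0.0 = start of uphill, 0.5 = crest, 1.0 = done
    taken_on: date

# Appending a snapshot each time a task moves builds a history
# that can be replayed during the project retrospective.
history: list[HillSnapshot] = [
    HillSnapshot("import pipeline", 0.30, date(2020, 4, 1)),
    HillSnapshot("import pipeline", 0.65, date(2020, 4, 8)),
]
```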

At its core, Hill Charts shift progress tracking from a one-dimensional concept to a two-dimensional one, and from a discrete concept to a continuous one, which brings it closer to its true essence in our complex reality. 

As I was learning about Hill Charts, I was immediately reminded of the double-diamond design process, so my only suggested tweak would be to turn them into Double Hill Charts, capturing the pre-engineering phases as well. 

The double-diamond design process