Want freedom? Constrain it! [WaitButWhy+Zappos]

If you’re not following Tim Urban’s (WaitButWhy) latest masterpiece series, The Story of Us — do so now!

As I’m writing this post, the series is still unfolding, so I want to focus on just a small piece of it, covered in Tim’s The Enlightenment Kids post, and its real-world applicability in the much smaller context of a single business.

Governance

In the post, Tim offers his narrative for how the US Founding Fathers (aka “The Enlightenment Kids”) designed a system to address the pitfalls of dictatorship, the prevailing system at the time, by taking the standard dictator and splitting it into 3 parts:

  1. The Constitution — replacing the dictator’s ability to generate and modify rules on a whim with a standalone document with significant constraints on how it can be changed.
  2. The Citizen Body — replacing the dictator’s head in making decisions on what goes on inside the country and how it acts on the international stage.
  3. The Government — replacing the dictator’s cudgel in enforcing the rules.
Source: waitbutwhy.com

While this almost-250-year-old modification to how countries are run is now widely adopted (57% according to Pew), that is hardly the case for organizations, where the overwhelming majority are still run using an authoritarian model of governance.

Rather than using hand-wavy claims about “distributed decision-making”, “eliminating hierarchies” and the like, the transition outlined above is a much more powerful way to explain what Holacracy and other forms of progressive organizational governance aim to accomplish.

Freedom

The other big shift that Tim covers is how freedom is supported. Under the new regime, freedom is constrained so people can have more of it. Sounds a bit counter-intuitive at first. 

Under the old regime, everyone starts off with unlimited freedom, until conflicts arise. Since everyone can do whatever they want, provided they have the power to pull it off, the person with the bigger cudgel (more power) will use it to restrict the freedom of the person with the smaller one. The result is an outcome in which a handful of powerful people have a lot of freedom and the vast majority of people have very little of it.

The new regime follows a different rule: Everyone can do whatever they want, as long as it doesn’t harm anyone else. In exchange for giving up the freedom to harm or bully others, you could live a life entirely free from anyone bullying you. No one would be completely free, but everyone would be mostly free:

Source: waitbutwhy.com

Fast-forward almost 250 years, and Zappos finds itself in a similar situation. In an economic context, freedom is reflected in your budget and the ways you’re allowed to allocate it. Having mostly adopted Holacracy, Zappos is now trying to solve the next challenge: finding an alternative to the top-down budget allocation process, so that its teams will have more economic freedom to pursue new ventures.

And it ends up with a similar solution: market-based dynamics (MBD). While the idea has been around since at least the 1990s, this is probably one of the boldest attempts I’ve seen to implement it in earnest. I won’t go into the full details of the system, but I will highlight one highly relevant principle. At the core of MBD is the “triangle of accountability”:

The triangle of accountability states that any team at Zappos can do whatever they want so long as they remain accountable on each side of the triangle:

  • Accountable for staying true to the core principles, behaviors, and mindsets that define us as an organization
  • Accountable for continuously delivering beyond their internal and/or external customers’ expectations
  • Accountable for breaking even on their P&L (where costs and expenses don’t exceed their funding and/or income)

The same pattern emerges again: like the Enlightenment Kids, Zappos too had to constrain freedom in order to provide more of it.


Decoupling performance and development

It’s time for another foray into this enjoyable minefield. I’ve been exploring the interesting space of performance and development in multiple posts over the years. More recently, I’ve been noodling on its deeper, philosophical aspects, grappling with questions such as: What is performance? Is the notion of individual performance still relevant in organizations where work is highly collaborative and deeply intertwined? And what is required of someone to be able to fairly assess someone else’s performance?

But today, I’m leaving these deep questions aside and taking a more incremental step, hopefully in the right direction.

Performance reviews/evaluations have been drawing a lot of fire in recent years. The key design flaw in traditional performance review cycles is their tendency to mix up pieces that pertain to a person’s performance and pieces that pertain to their development. The NeuroLeadership Institute has been making a pretty compelling case for why this is a major issue. In a nutshell, there is an inherent trade-off between accurately evaluating performance and accelerating development. When we try to do both in the same program, we do a sub-par job at both.

Yet while the push for change is justified, many organizations have reacted in rather naive ways: from paying lip service and rebranding the same program as “development”, through eliminating the program altogether, to fully replacing it with a developmental program (often a series of coaching conversations).

These responses tend to ignore an important truth: that while development is growing in importance and needs to be deliberately designed for, the original reason for introducing performance evaluations still exists — the need to fairly allocate compensation. Steven Sinofsky, Lori Goler, Janelle Galle, and Adam Grant seem to agree. 

And by fairly allocating compensation, I’m not talking about bonuses or other forms of variable pay, which I’m strongly opposed to. I’m talking about the ongoing need for proportionality between a person’s contribution to the org and their compensation, from the extremes (termination, promotion) to the mundane (merit increases).

Obviously paying lip service doesn’t solve anything, but neither do the alternatives. Eliminating the program creates a vacuum that’ll most likely lead to a less-fair emergent solution. And a development program will do a terrible job driving fair compensation, just like an evaluation program will do a terrible job driving development.

BOTH performance management AND professional development are critical organizational needs. But they need to be addressed separately using two different programs with some key design differences.

Performance Management Program/Process

  • Purpose: procedural justice in compensation allocation 
  • Driven by the evaluator(s)
  • Feedback is absolute: comparison against an existing bar/benchmark. Ideally, binary (yes/no rather than ratings) evaluation against job level rubrics that span domain mastery, collaborative ability, and company values
  • Evaluates work in the context of the current role

Professional Development Program/Process

  • Purpose: overcoming our human “present bias” (by creating the space for reflecting on the past and envisioning the future)
  • Driven by the individual motivated to develop
  • Feedback is relative: ideally stack-ranking the dimensions/components of performance from the one to focus the most on to the one to focus the least on
  • Considers whether the current role is the best container for development (vs. taking on add’l responsibilities / new role / different company / etc.)

The two programs are mostly decoupled from one another with one caveat: some outcomes of the performance management process constrain the possible pathways for development.

While we should keep looking for better designs to manifest the purpose of each program, we need to keep in mind that both are critical. 


Meeting Modes [da Silva and Bastos]

Source: targetteal.com

Paraphrasing the first paragraph of one of my still all-time favorite self-authored posts: the essence of every organization is a synergetic collaborative effort. We deliberately organize because we can create together something better than the sum of what we can separately create on our own. 

Yet “collaboration” is a pretty fuzzy term, so designing structures in support of collaboration requires a more detailed taxonomy, one which allows us to decompose collaboration into its constituent parts, or modes if you will. I’ve been searching for such a MECE taxonomy for quite a while and was delighted to come across Davi Gabriel da Silva and Rodrigo Bastos’ work, which comes pretty close to the goal:

O2: Organic Organizations — Open-source practices for self-management

While wrapped up in a progressive/”teal”/self-manage-y context, the section about meeting modes stands on its own, and its conceptual applicability does not depend on how “progressive” the organization is. Especially if we take a step back and realize that “meeting” is a label we use to describe a collaborative interaction, so “meeting modes” are synonymous with “collaboration modes”. Da Silva and Bastos identify 5 key modes:

Review Work

Oftentimes also referred to as a “retrospective”, this mode is aimed at building a shared understanding of where we stand.

Sync Efforts

Making peer-to-peer requests to provide information, deliverables or help is an essential part of collaborative efforts. 

Adapt Structure

Since collaboration takes place in a dynamic environment, there needs to be a mechanism for changing the way responsibilities are divided to best meet the changing conditions.

Select People

These dynamic “containers of responsibilities” need to be dynamically filled by individuals as both the needs of the group and the needs of the individuals change.

Care for Relationships

A collaborative effort carried out by humans needs to account for our humanity. This mode aims to develop communication, recognize individual needs and nurture openness among collaborators.

In Target Teal’s open-source “pattern library” you can also find more specific examples for how to facilitate each one of the collaboration modes. 


Service-as-a-Benefit (SaaB)

Breaking the zero-sum game

Over the past few years, the employer benefits ecosystem has experienced exponential growth. The inventory of available benefits goes far beyond the core, non-taxable set of medical insurance, 401(k)s and commuter benefits to include a wide range of additional services. Childcare, financial planning, physical therapy, coaching, fertility treatments, and many more can now be offered as employer-sponsored benefits.

This trend of packaging a consumer service as an employer benefit, Service-as-a-Benefit, or SaaB for short (corresponding to Software-as-a-Service/SaaS), is fueled by tailwinds on both sides of the marketplace. As the competition for top talent continues to heat up, employers look at unique benefits as a way to tap into employees’ mental accounting, differentiate their employee value proposition, and bring their corporate values to life. On the provider side, new entrants in particular are looking for effective growth strategies and are drawn to the allure of the corporate channel and its promise of acquiring a large group of users en masse, while reducing overall acquisition costs and securing a more reliable source of revenue.

Hitting the “Business Case” wall

But what at first seems like a simple win-win, as providers see good traction with enthusiasts and early adopters, reveals its more complex nature when providers try to establish a stronger foothold in the market.

Benefit administrators without a strong affinity to a particular service find themselves between a rock and a hard place. On the one hand, a growing abundance of services to choose from. On the other hand, a very heterogeneous value proposition to their employees: while parents, for example, may find childcare absolutely essential, childless employees will find it useless. The important and deliberate investments in building more diverse workforces and growing organizations’ geographical footprints compound the latter even further, as a more diverse and global workforce has a more diverse set of employee needs.

Given a fixed per-employee benefits budget, provider selection becomes a zero-sum game: choosing to offer coaching-as-a-benefit means not offering financial-planning-as-a-benefit. And how is one supposed to compare coaching to financial planning, especially taking the heterogeneity in value into account?

Within this paradigm, the only way to break out of the zero-sum game is by making the business case for an overall increase in the benefits budget given the intrinsic value of a particular service. And that business case often proves incredibly difficult to credibly make.

The speedy early traction grinds to a crawl, if not a complete halt.  

Back to Win-Win(-Win)

There is, however, another path for turning the zero-sum game into a win-win(-win) by redesigning the way benefits management works across providers, administrators, and employees. 

Service-as-a-benefit providers need to shift from lump-sum per-org-size or per-employee pricing schemes to per-activated-employee or per-usage pricing schemes, so they only get paid when an employee has opted to use the service.

Benefits administrators need to both curate a portfolio of service-as-a-benefit providers that’s strategically aligned with their intended positioning, and provide their employee base with transparent individual budgets to allocate across the portfolio. The preliminary curation is essential both to preventing erosion in mental accounting and in the perceived value of the benefits, and to avoiding an unreasonable cognitive load on employees forced to choose from a nearly endless array of options.

Employees will then be responsible for constructing a benefits package that best suits their needs, and will have the option of modifying the package on a reasonable cadence (quarterly, say) to reflect any changes in their personal needs and life circumstances.
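As a rough illustration of how these pieces could fit together (all names, services, and prices below are hypothetical, not taken from any real provider or platform), a per-activated-employee pricing model combined with individual budgets might look something like this:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    monthly_price_per_activated_employee: float  # charged only when an employee opts in

# A curated portfolio assembled by the benefits administrator (hypothetical entries)
portfolio = [
    Provider("coaching", 40.0),
    Provider("financial-planning", 25.0),
    Provider("childcare-support", 60.0),
]

def build_package(selected_names, portfolio, monthly_budget):
    """Return the employee's chosen providers if their combined
    per-activated prices fit within the individual monthly budget."""
    chosen = [p for p in portfolio if p.name in selected_names]
    cost = sum(p.monthly_price_per_activated_employee for p in chosen)
    if cost > monthly_budget:
        raise ValueError(f"Package costs {cost}, which exceeds the budget of {monthly_budget}")
    return chosen

# An employee allocates their individual budget across the curated portfolio
package = build_package({"coaching", "financial-planning"}, portfolio, monthly_budget=75.0)
```

The key design choice is that providers are paid only for employees who actually opt in, while the administrator’s curation and the individual budget cap keep both the employee’s choice set and the overall spend bounded.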

This reconfiguring of the ecosystem also poses an interesting business opportunity in the form of a platform for bringing all three parties together and potentially providing the following services: 

  • Enable providers to easily interact with a large group of benefits administrators and streamline the handling of both contracting and payments.
  • Enable benefits administrators on one side to interact with a large group of providers and easily create their service-as-a-benefit portfolios, and on the other side to set individual benefits budgets. 
  • Enable employees to manage their personal benefits budgets, build and modify their benefits package and onboard onto/enroll in the specific service-as-a-benefit that they selected. 

Now all we need is that platform…


The consequences of over-simplification

We’re going deep into science today, so fasten your seatbelts.

I came across a couple of really interesting articles recently that call into question our assumptions about how our world works in deeply profound ways:

How ergodicity reimagines economics for the benefit of us all by Mark Buchanan

The Flawed Reasoning Behind the Replication Crisis by Aubrey Clayton

While the former looks at decision-making, the latter looks at statistical analysis. Both challenge the assumptions underlying the common methods. And in both cases, the culprit is an error of omission, ignoring some data about the real world, which causes the method to yield sub-optimal, if not completely flawed, conclusions.

Ergodicity Economics 

Ergodicity Economics highlights the challenges with the assumption that people use an “expected utility” strategy when making decisions under conditions of uncertainty. 

The expected utility strategy posits that, given a choice between several options, people should choose the option with the highest expected utility, calculated by multiplying the probability of each possible scenario by the value of its outcome and summing across scenarios.
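In symbols (this is the standard textbook form, not notation taken from Buchanan’s article): for an option whose possible outcomes x_i occur with probabilities p_i,

```latex
\mathbb{E}[U(\text{option})] = \sum_i p_i \, u(x_i)
```

For example, a 50/50 gamble between winning $100 and winning nothing has an expected (monetary) value of 0.5 * 100 + 0.5 * 0 = $50.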


The challenge with this strategy is that it ignores an important aspect of real life — time. Or more specifically, the fact that life is a sequence of decisions, so each decision is not made in isolation; it takes into account the consequences of the decisions that were already made and the potential consequences of the decisions that will be made in the future.

This has some profound implications for cooperation and competition and the conditions under which they are beneficial strategies. Expected utility suggests that people or businesses should cooperate only if, by working together, they can do better than by working alone. For example, if the different parties have complementary skills or resources. Without the potential of a beneficial exchange, it would make no sense for the party with more resources to share or pool them together with the party who has less. 

But when we expand the lens to look not just at a single point in time but a period of time in which a series of risky activities must be taken, the optimal strategy changes. Pooling resources provides all parties with a kind of insurance policy protecting them against occasional poor outcomes of the risks they face. If a number of parties face independent risks, it is highly unlikely that all will experience bad outcomes at the same time. By pooling resources, those who do can be aided by others who don’t. Cooperation can be thought of as a “risk diversification” strategy that, mathematically at least, grows the wealth of all parties. Even those with more resources do better by cooperating with those who have less. 
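To make this concrete, here is a minimal simulation sketch (the +50%/−40% multiplicative gamble and the equal-split pooling rule are illustrative assumptions of mine, not numbers from Buchanan’s article):

```python
import random

def simulate(rounds=1000, players=4, pool=False, seed=0):
    """Each round, every player's wealth grows 50% on a win and shrinks
    40% on a loss (an illustrative multiplicative gamble). With pooling,
    players split their combined wealth equally after every round."""
    rng = random.Random(seed)
    wealth = [1.0] * players
    for _ in range(rounds):
        wealth = [w * (1.5 if rng.random() < 0.5 else 0.6) for w in wealth]
        if pool:
            shared = sum(wealth) / players
            wealth = [shared] * players
    return sum(wealth) / players

print("average wealth without pooling:", simulate(pool=False))
print("average wealth with pooling:   ", simulate(pool=True))
```

With these parameters an individual player’s wealth shrinks over time (the geometric mean of 1.5 and 0.6 is roughly 0.95 per round), while the pooled group’s wealth grows, which is exactly the “risk diversification” effect described above.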

Bayesian Inference

Consider the following story (paraphrased from Clayton’s piece): 

A woman notices a suspicious lump in her breast and goes in for a mammogram. The report comes back that the lump is malignant. She needs to make a decision on whether to undergo the painful, exhausting and expensive cancer treatment and therefore wants to know the chance of the diagnosis being wrong. Her doctor answers that these scans would find nearly 100% of true cancers and would only misidentify a benign lump as cancer about 5% of the time. Given the relatively low probability of a false positive (5%), she decides to undergo the treatment. 

While the story seems relatively straightforward, it ignores an important piece of data: the overall likelihood that a discovered lump will be cancerous, regardless of whether a mammogram was taken. If we assume, for example, that about 99% of the time a similar patient finds a lump it turns out to be benign, how would that impact her decision?

This is where Bayes’ Rule comes to our rescue: 
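In its standard form (the textbook statement, using the same A/B notation as the steps below):

```latex
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)},
\qquad
P(B) = P(B \mid A)\,P(A) + P(B \mid \lnot A)\,P(\lnot A)
```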

We’re trying to find P(A|B), which in our case is P(benign|positive result).

P(A) = P(benign) = 99% (the new data we just added), and therefore P(malignant) = 1 - P(benign) = 1%

P(B|A) = P(positive result|benign) = 5%, the false positive stat the doctor quoted.

The doctor also told us that P(positive result|malignant) = 100%

Which then helps us find P(B) = P(positive result), since we can decompose it as: P(positive result|benign)*P(benign) + P(positive result|malignant)*P(malignant).

Now we can plug everything into Bayes’ rule to find that:

P(benign|positive result) = (0.05*0.99)/(0.05*0.99+1*0.01) = approximately 83%
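As a quick sanity check of the arithmetic, here is a minimal Python sketch (the function and parameter names are mine, just mirroring the numbers above):

```python
def posterior_benign(p_benign=0.99, false_positive=0.05, sensitivity=1.0):
    """P(benign | positive result) via Bayes' rule."""
    p_malignant = 1 - p_benign
    p_positive = false_positive * p_benign + sensitivity * p_malignant
    return false_positive * p_benign / p_positive

print(round(posterior_benign(), 3))  # 0.832 -> roughly 83%
```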

So the likelihood of a false positive is 16 times higher than what we thought it was. Would you still move forward with the treatment? 

Clayton’s case is that this “error of omission” in the analysis extends beyond mere life-and-death situations like the one described above and into the broader use of statistical significance as the sole method for drawing statistical conclusions from experiments. 

In Clayton’s view, this is one of the root causes of the replication crisis that the scientific community is now faced with and is beautifully illustrated by the following example: 

In 2012, Professor Norenzayan at UBC randomly assigned 57 college students to two groups; each was asked to look at an image of a sculpture and then rate their belief in God on a scale of 1 to 100. The first group was asked to look at Rodin’s “The Thinker” and the second at Myron’s “Discobolus”. Subjects who had been exposed to “The Thinker” reported a significantly lower mean God-belief score of 41.42 vs. the control group’s 61.55, or a 33% reduction in belief in God. The probability of observing a difference at least this large by chance alone was about 3 percent. So he and his coauthor concluded that “The Thinker” had prompted their participants to think analytically and that “a novel visual prime that triggers analytic thinking also encouraged disbelief in God.”

According to the study, the results were about 12 times more probable under an assumption of an effect of the observed magnitude than they would have been under an assumption of pure chance.

Despite the highly surprising result (some may even say “ridiculous” or “crazy”), since the results were “statistically significant” the paper was accepted for publication in Science. An attempt to replicate the same procedure with almost ten times as many participants found no significant difference in God-belief between the two groups (62.78 vs. 58.82).

What if, instead, we took a Bayesian approach, assumed a prior likelihood of only 0.1% that viewing Rodin’s “The Thinker” induces disbelief in God (and therefore a corresponding 99.9% for “no change in beliefs”), and then figured out P(disbelief|results)?

We know from the study that P(results|disbelief) = 12*P(results|no change). Plugging this into Bayes’ Rule we get:

P(disbelief|results) = 12 * 0.001 / (12 * 0.001 + 1 * 0.999) = 0.012 / 1.011 = approximately 1.2%, a far cry from the originally stated 33% reduction…
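And the same kind of sanity check for this calculation, using the likelihood-ratio form of Bayes’ rule (again, the function is just an illustrative sketch of mine):

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability of a hypothesis, given its prior and the ratio
    P(results | hypothesis) / P(results | null)."""
    return likelihood_ratio * prior / (likelihood_ratio * prior + (1 - prior))

# prior of 0.1% for the "disbelief" effect, likelihood ratio of 12 from the study
print(round(posterior(0.001, 12), 3))  # 0.012 -> roughly 1.2%
```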
