Meeting Modes [da Silva and Bastos]

Source: targetteal.com

Paraphrasing the first paragraph of one of my still all-time favorite self-authored posts: the essence of every organization is a synergetic collaborative effort. We deliberately organize because we can create together something better than the sum of what we can separately create on our own. 

Yet “collaboration” is a pretty fuzzy term, so designing structures in support of collaboration requires a more detailed taxonomy, one that allows us to decompose collaboration into its constituent parts, or modes if you will. I’ve been searching for such a MECE (mutually exclusive, collectively exhaustive) taxonomy for quite a while and was delighted to come across Davi Gabriel da Silva and Rodrigo Bastos’ work, which comes pretty close to the goal: 

O2: Organic Organizations — Open-source practices for self-management

While wrapped up in a progressive/”teal”/self-manage-y context, the section about meeting modes stands on its own, and its conceptual applicability is not dependent on how “progressive” the organization is. Especially if we take a step back and realize that “meeting” is a label we use to describe a collaborative interaction, so “meeting modes” are synonymous with “collaboration modes”. Da Silva and Bastos identify five key modes: 

Review Work

Often also referred to as a “retrospective,” this mode is aimed at building a shared understanding of where we stand. 

Sync Efforts

Making peer-to-peer requests to provide information, deliverables or help is an essential part of collaborative efforts. 

Adapt Structure

Since collaboration takes place in a dynamic environment, there needs to be a mechanism for changing the way responsibilities are divided to best meet the changing conditions.

Select People

These dynamic “containers of responsibilities” need to be dynamically filled by individuals as both the needs of the group and the needs of the individuals change. 

Care for Relationships

A collaborative effort carried out by humans needs to account for our humanity. This mode aims to develop communication, recognize individual needs and nurture openness among collaborators.

In Target Teal’s open-source “pattern library” you can also find more specific examples of how to facilitate each of the collaboration modes. 

Service-as-a-Benefit (SaaB)

Breaking the zero-sum game

Over the past few years, the employer benefits ecosystem has experienced exponential growth. The inventory of available benefits goes far beyond the core, non-taxable set of medical insurance, 401(k)s, and commuter benefits to include a wide range of additional services. Childcare, financial planning, physical therapy, coaching, fertility treatments, and many more can now be offered as an employer-sponsored benefit. 

This trend of packaging a consumer service as an employer benefit, Service-as-a-Benefit or SaaB for short (by analogy to Software-as-a-Service/SaaS), is fueled by tailwinds on both sides of the marketplace. As the competition for top talent continues to heat up, employers look at unique benefits as a way to tap into employees’ mental accounting, differentiate their employee value proposition and bring their corporate values to life. On the provider side, new entrants, in particular, are looking for effective growth strategies and are drawn to the allure of the corporate channel and its promise of acquiring a large group of users, en masse, while reducing overall acquisition costs, and securing a more reliable source of revenue. 

Hitting the “Business Case” wall

But what at first seems like a simple win-win, as providers see good traction with enthusiasts and early adopters, reveals its more complex nature when providers try to establish a stronger foothold in the market. 

Benefit administrators without a strong affinity for a particular service find themselves between a rock and a hard place. On the one hand, a growing abundance of services to choose from. On the other hand, a very heterogeneous value proposition to their employees: while parents may find childcare, for example, absolutely essential, childless employees will find it useless. The important and deliberate investment in building more diverse workforces and growing organizational geographical footprints compounds this heterogeneity even further, as a more diverse and global workforce has a more diverse set of employee needs.

Given a fixed per-employee benefits budget, provider selection becomes a zero-sum game: choosing to offer coaching-as-a-benefit means not offering financial-planning-as-a-benefit. And how is one supposed to compare coaching to financial planning, especially taking the heterogeneity in value into account? 

Within this paradigm, the only way to break out of the zero-sum game is by making the business case for an overall increase in the benefits budget, given the intrinsic value of a particular service. And that business case often proves to be incredibly difficult to credibly make. 

The speedy early traction grinds to a crawl, if not a complete halt.  

Back to Win-Win(-Win)

There is, however, another path for turning the zero-sum game into a win-win(-win) by redesigning the way benefits management works across providers, administrators, and employees. 

Service-as-a-benefit providers need to shift from lump-sum per-org-size or per-employee pricing schemes to per-activated-employee or per-usage pricing schemes, so they only get paid when an employee has opted to use the service.

Benefits administrators need to both curate a portfolio of service-as-a-benefit providers that’s strategically aligned with their intended positioning, and provide their employee base with transparent individual budgets to allocate across the portfolio. The preliminary curation is essential both to preventing erosion in the mental accounting and perceived value of the benefits, and to avoiding an unreasonable cognitive load on employees forced to choose from a nearly endless array of options. 

Employees will then be responsible for constructing a benefits package that best suits their needs, and will have the option of modifying the package on a regular cadence (quarterly seems reasonable) to reflect any changes in their personal needs and life circumstances.  
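As a rough illustration of the employee-facing piece of this model (every name, price, and budget below is hypothetical, a sketch rather than a spec):

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    monthly_price_per_active_user: float  # per-activated-employee pricing

# Curated by the benefits administrator, aligned with the intended positioning:
PORTFOLIO = [
    Provider("coaching", 120.0),
    Provider("financial-planning", 80.0),
    Provider("childcare-support", 200.0),
]

def build_package(budget: float, opted_in: set[str]) -> list[Provider]:
    """Return the employee's selected providers, enforcing the individual budget."""
    selected = [p for p in PORTFOLIO if p.name in opted_in]
    cost = sum(p.monthly_price_per_active_user for p in selected)
    if cost > budget:
        raise ValueError(f"package costs {cost:.2f}, exceeding the {budget:.2f} budget")
    return selected

# An employee with a $250/month budget opts into two services; the package
# can be rebuilt on the agreed cadence (e.g. quarterly) as needs change:
package = build_package(250.0, {"coaching", "financial-planning"})
print([p.name for p in package])  # ['coaching', 'financial-planning']
```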

This reconfiguring of the ecosystem also poses an interesting business opportunity in the form of a platform for bringing all three parties together and potentially providing the following services: 

  • Enable providers to easily interact with a large group of benefits administrators and streamline the handling of both contracting and payments.
  • Enable benefits administrators on one side to interact with a large group of providers and easily create their service-as-a-benefit portfolios, and on the other side to set individual benefits budgets. 
  • Enable employees to manage their personal benefits budgets, build and modify their benefits package and onboard onto/enroll in the specific service-as-a-benefit that they selected. 

Now all we need is that platform…

The consequences of over-simplification

We’re going deep into science today, so fasten your seatbelts. 

I came across a couple of really interesting articles recently that call into question, in profound ways, our assumptions about how our world works: 

How ergodicity reimagines economics for the benefit of us all by Mark Buchanan

The Flawed Reasoning Behind the Replication Crisis by Aubrey Clayton

While the former looks at decision-making, the latter looks at statistical analysis. Both challenge the assumptions that underlie the common methods, and in both cases the culprit is an error of omission: ignoring some data about the real world causes the method to yield sub-optimal, if not completely flawed, conclusions. 

Ergodicity Economics 

Ergodicity Economics highlights the challenges with the assumption that people use an “expected utility” strategy when making decisions under conditions of uncertainty. 

The expected utility strategy posits that, given a choice between several options, people should choose the option with the highest expected utility, calculated by multiplying the probability of each possible scenario by the value of its outcome and summing across scenarios. 
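In symbols (this is just the standard formulation), for an option whose possible scenarios occur with probabilities p1…pn and yield outcomes valued at U(x1)…U(xn):

Expected Utility = p1*U(x1) + p2*U(x2) + … + pn*U(xn)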

The challenge with this strategy is that it ignores an important aspect of real life: time. More specifically, life is a sequence of decisions, so no decision is made in isolation; each takes into account the consequences of the decisions already made and the potential consequences of the decisions still to come. 

This has some profound implications for cooperation and competition and the conditions under which they are beneficial strategies. Expected utility suggests that people or businesses should cooperate only if, by working together, they can do better than by working alone. For example, if the different parties have complementary skills or resources. Without the potential of a beneficial exchange, it would make no sense for the party with more resources to share or pool them together with the party who has less. 

But when we expand the lens to look not just at a single point in time but at a period of time in which a series of risky activities must be undertaken, the optimal strategy changes. Pooling resources provides all parties with a kind of insurance policy protecting them against occasional poor outcomes of the risks they face. If a number of parties face independent risks, it is highly unlikely that all will experience bad outcomes at the same time. By pooling resources, those who suffer a bad outcome can be aided by those who don’t. Cooperation can be thought of as a “risk diversification” strategy that, mathematically at least, grows the wealth of all parties. Even those with more resources do better by cooperating with those who have less. 
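To make this concrete, here is a minimal Python simulation of the multiplicative coin-flip gamble that is a staple of the ergodicity-economics literature (the specific numbers, 1.5x on heads and 0.6x on tails, are illustrative choices of mine, not figures from Buchanan’s article):

```python
import random

random.seed(0)
ROUNDS = 1_000

def play_alone(rounds=ROUNDS):
    """One player repeatedly betting everything: x1.5 on heads, x0.6 on tails."""
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

def play_pooled(n_players=10, rounds=ROUNDS):
    """n players face independent flips, then pool and split equally each round."""
    wealths = [1.0] * n_players
    for _ in range(rounds):
        wealths = [w * (1.5 if random.random() < 0.5 else 0.6) for w in wealths]
        share = sum(wealths) / n_players  # cooperation: share outcomes equally
        wealths = [share] * n_players
    return wealths[0]

# Expected value per round is 0.5*1.5 + 0.5*0.6 = 1.05 > 1, but the
# time-average growth factor for a lone player is sqrt(1.5*0.6) ~ 0.95 < 1:
print(f"solo wealth after {ROUNDS} rounds:   {play_alone():.2e}")   # collapses toward 0
print(f"pooled wealth after {ROUNDS} rounds: {play_pooled():.2e}")  # grows
```

Pooling works because averaging independent multiplicative shocks each round pulls every player’s long-run growth rate toward the positive growth rate of the expected value, which is exactly the “risk diversification” described above.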

Bayesian Inference

Consider the following story (paraphrased from Clayton’s piece): 

A woman notices a suspicious lump in her breast and goes in for a mammogram. The report comes back that the lump is malignant. She needs to make a decision on whether to undergo the painful, exhausting and expensive cancer treatment and therefore wants to know the chance of the diagnosis being wrong. Her doctor answers that these scans would find nearly 100% of true cancers and would only misidentify a benign lump as cancer about 5% of the time. Given the relatively low probability of a false positive (5%), she decides to undergo the treatment. 

While the story seems relatively straightforward, it ignores an important piece of data: the overall likelihood that a discovered lump will be cancerous, regardless of whether a mammogram was taken. If we assume, for example, that about 99% of the time a similar patient finds a lump it turns out to be benign, how would that impact her decision? 

This is where Bayes’ Rule comes to our rescue: P(A|B) = P(B|A)*P(A)/P(B). 

We’re trying to find P(A|B), which in our case is P(benign|positive result).

P(A) = P(benign) = 99% (the new data we just added), and therefore P(malignant) = 1 − P(benign) = 1%

P(B|A) = P(positive result|benign) = 5%, the false-positive rate the doctor quoted.

The doctor also told us that P(positive result|malignant) = 100% 

That’s everything we need to find P(B) = P(positive result), which decomposes into: P(positive result|benign)*P(benign) + P(positive result|malignant)*P(malignant).

Now we can plug everything into Bayes’ rule to find that:

P(benign|positive result) = (0.05*0.99)/(0.05*0.99 + 1*0.01) = 0.0495/0.0595 ≈ 83%

So the likelihood of a false positive is about 16 times higher than what we initially thought. Would you still move forward with the treatment? 
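A quick sanity check of that arithmetic in Python (the posterior helper below is my own generic restatement of Bayes’ Rule, not something from Clayton’s piece):

```python
def posterior(prior_h, likelihood_h, likelihood_not_h):
    """Bayes' rule: P(H|E) = P(E|H)*P(H) / P(E), with P(E) expanded
    over the two hypotheses H and not-H."""
    evidence = likelihood_h * prior_h + likelihood_not_h * (1 - prior_h)
    return likelihood_h * prior_h / evidence

# H = "the lump is benign", E = "the scan comes back positive":
# P(benign) = 0.99, P(positive|benign) = 0.05, P(positive|malignant) = 1.0
p = posterior(prior_h=0.99, likelihood_h=0.05, likelihood_not_h=1.0)
print(f"P(benign | positive result) = {p:.1%}")  # -> 83.2%
```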

Clayton’s case is that this “error of omission” in the analysis extends beyond life-and-death situations like the one described above, into the broader use of statistical significance as the sole method for drawing statistical conclusions from experiments. 

In Clayton’s view, this is one of the root causes of the replication crisis that the scientific community is now faced with and is beautifully illustrated by the following example: 

In 2012, Professor Norenzayan at UBC randomly assigned 57 college students to two groups, each of which was asked to look at an image of a sculpture and then rate their belief in God on a scale of 1 to 100. The first group was asked to look at Rodin’s “The Thinker” and the second at Myron’s “Discobolus”. Subjects who had been exposed to “The Thinker” reported a significantly lower mean God-belief score of 41.42 vs. the control group’s 61.55, a 33% reduction in belief in God. The probability of observing a difference at least this large by chance alone was about 3 percent. So he and his coauthor concluded that “The Thinker” had prompted their participants to think analytically and that “a novel visual prime that triggers analytic thinking also encouraged disbelief in God.”

According to the study, the results were about 12 times more probable under an assumption of an effect of the observed magnitude than they would have been under an assumption of pure chance.

Despite the highly surprising result (some may even say “ridiculous” or “crazy”), since it was “statistically significant” the paper was accepted for publication in Science. An attempt to replicate the same procedure with almost ten times as many participants found no significant difference in God-belief between the two groups (62.78 vs. 58.82).

What if instead we took a Bayesian approach, assumed a prior likelihood of only 0.1% that viewing Rodin’s “The Thinker” induces disbelief in God (and therefore a corresponding 99.9% prior for “no change in beliefs”), and then figured out P(disbelief|results)? 

We know from the study that P(results|disbelief) = 12*P(results|no change). Plugging this into Bayes’ Rule we get: 

P(disbelief|results) = (12*0.001)/(12*0.001 + 1*0.999) = 0.012/1.011 ≈ 1.2%, a far cry from the originally stated 33% reduction…
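The same sanity check in Python (re-defining the same hypothetical posterior helper so the snippet stands alone; since only the ratio of the two likelihoods matters here, we can plug in 12 and 1 directly):

```python
def posterior(prior_h, likelihood_h, likelihood_not_h):
    """Bayes' rule, as in the mammogram example above."""
    evidence = likelihood_h * prior_h + likelihood_not_h * (1 - prior_h)
    return likelihood_h * prior_h / evidence

# H = "viewing 'The Thinker' induces disbelief", with a 0.1% prior;
# the study's results are 12x more likely under H than under "no change".
print(f"P(disbelief | results) = {posterior(0.001, 12.0, 1.0):.1%}")  # -> 1.2%
```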
