From individual READMEs and “user manuals” to group working agreements

Source: SY Partners

I’ve been meaning to write about individual READMEs and “user manuals” for quite some time and I’m glad that I held off. 

It was recently brought back to my attention through Ed Batista’s post: To README or not to README. Ed’s been on a tear of good posts lately and I’d highly recommend checking out his blog. 

To quickly recap, the idea of creating a Manager README document seems to trace back to the first such document being put together almost 10 years ago, in 2012, by a gentleman named Luc Levesque. The document’s basic premise is highly benevolent — as a manager, deeply reflect on your own work style and preferences and write them down in a shareable doc. The thesis is that this act of self-disclosure, outlining your communication preferences, how you give feedback, your working hours, etc., will create more explicit expectations for your team and lower the likelihood of misunderstandings. 

Several different posts have been written about how publishing these documents can backfire, which Batista summarizes rather succinctly: 

* A Manager README is no substitute for a feedback-rich culture — and in the absence of such a culture it may even be counter-productive, signaling to employees that their input on your inevitable blind spots as well as your performance as a manager is unwelcome.

* The use of a README without carefully attending to the power differential between you and your employees can inhibit psychological safety. The mere existence of such a document doesn’t necessarily create a less-safe environment — the impact stems from how it is employed.

* The purpose of any such document is a clearer understanding of work style differences, and that needs to be a two-way street. Expecting employees to conform to your preferences without making an effort to understand and adapt to theirs isn’t management, it’s coercion.

It’s that last critique that I want to focus on a bit more. The first two mostly highlight that the README artifact will only have a positive impact in the right context — as part of a feedback-rich culture, and while being mindful of the power dynamics. The second critique starts pointing toward a more profound issue, which is more fully articulated in the third one: the README artifact only captures one side of the dynamic — “this is who I am, and this is how you can best work with me” — when, in fact, the artifact aims to enable better collaboration. And collaboration requires more than one person…

This suggests that the structure of the artifact is all wrong. It shouldn’t be about you, or me — it should be about us. What are our different work style preferences? And what collective agreements are we making together about the way we work, so we can collaborate well despite those differences? 

Fortunately, we don’t have to start from scratch. The team at SY Partners created a lightweight construct called “How we roll” (see screenshot above) for capturing these differences and beginning to hash out the mutual agreements for working together. In my opinion, it’s a much better starting point than an individualistic README for making the implicit explicit and starting an honest dialogue that can truly help improve collaboration. 


Culture change: changing behaviors to change thinking

Source: MIT Sloan Management Review — Winter 2010

A few weeks ago, I had an incredibly generative conversation with Jabe Bloom. One of the hallmarks of a good conversation is that it leaves me with a long list of threads or breadcrumbs that I can explore and dive deeper into after it’s over. 

One of those threads from my conversation with Jabe was John Shook’s work around culture change: 

How to change a culture: Lessons from NUMMI

This short article is well worth a full read, but the key idea is illustrated in the diagrams above and in this salient quote: 

The typical Western approach to organizational change is to start by trying to get everyone to think the right way. This causes their values and attitudes to change, which, in turn, leads them naturally to start doing the right things. 

What my NUMMI experience taught me that was so powerful was that the way to change culture is not to first change how people think, but instead to start by changing how people behave — what they do. Those of us trying to change our organizations’ culture need to define the things we want to do, the way we want to behave and want each other to behave, to provide training and then to do what is necessary to reinforce those behaviors. The culture will change as a result. 

Shook’s insight aligns with my personal experience and with other behavior change models that I’ve covered here, with an important caveat: I’d argue that behavior and mindsets (thinking) are tethered together with a rubber band. Change behavior by making changes to the external environment — and mindset changes will follow. BUT make too big of a change, too quickly, and the rubber band will snap, and the new behavior will be rejected. Another good metaphor we can use here is that of a pressure cooker — the dish won’t cook without (external) heat, but crank up the heat/pressure too quickly, and the whole thing will blow up. Mastery of change requires the ability to sense the sustainable pace of change that won’t cause things to blow up. 

Another challenge I raised was around the “Agile Theater” phenomenon — where teams are going through the motions of Agile or Scrum (stand-ups, estimations, etc.) but are not unlocking any of the value. 

Jabe reminded me of the Japanese martial arts concept of shuhari:

  • shu (守) “protect”, “obey” — traditional wisdom — learning fundamentals, techniques, heuristics, proverbs.
  • ha (破) “detach”, “digress” — breaking with tradition — detachment from the illusions of self.
  • ri (離) “leave”, “separate” — transcendence — there are no techniques or proverbs, all moves are natural, becoming one with spirit alone without clinging to forms; transcending the physical.

Teams performing “Agile Theater” are in the “shu” step of mastery, which is a necessary step on the path towards “ha” and “ri”. 


“The score takes care of itself” approach to DEI

Can we make faster progress by measuring less?

Source: diversity.google

This past week, I got to do something that brought me great joy: integrating and combining several old blog posts in a novel way that led to a new insight.

In my 2020 wrap-up post, I highlighted “building human connection” as an area of interest of mine for 2021, clarifying further that:

I slot some of the more interesting challenges of distributed work into this category and the maturing DEI space which is finally generating some balanced, evidence-based approaches and practices.

The second half of that sentence is the topic for today. I want to argue that if you are a small-to-midsize company (say, under 1,000 employees) keen on advancing DEI initiatives, you are probably trying to measure too much rather than too little. This inclination to make progress through measurement is deeply encoded in the modern business ethos and further amplified by imitating larger companies, where quantitative measurement actually is a useful tool. I’ve written more about this broader pattern in ending the tyranny of the measurable — this is just a more specific application.

Allow me to illustrate it with an example: suppose a company wants to improve the diversity in its hiring pipeline and reduce bias in its hiring process. Its first inclination would be to analyze its recruiting funnel and see whether different segments of candidates are treated differently.

In theory, that makes total sense. But then reality comes in and bursts our bubble. Legally, candidates cannot be required to provide segment data (gender, race, etc.) as part of their application process, leaving us with data that’s not just partial but also biased. There’s selection bias in the people who opted to volunteer this information (there’s also selection bias in the people who opted to apply to our open role to begin with). Even if we figured out how to overcome these challenges, we would run into the “small N” challenge. Because not many people go through our recruiting process, even if our analysis yields extreme or seemingly different results, they’re not likely to be statistically significant (more on this in surveys: exploring statistical significance). Our analysis is likely to suggest bias in the process even when there isn’t any, or to find no evidence of bias even when bias exists.
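
To make the “small N” challenge concrete, here is a minimal sketch with made-up funnel numbers (not real data), comparing the pass-through rates of two candidate segments at a single stage. Even a two-to-one difference in rates doesn’t come close to clearing the conventional significance bar at these sample sizes.

```python
# Minimal sketch of the "small N" challenge, using made-up funnel numbers.
# Segment A: 40 candidates reached the phone screen, 8 advanced (20%).
# Segment B: 20 candidates reached the phone screen, 2 advanced (10%).
from scipy.stats import fisher_exact

advanced_a, rejected_a = 8, 32
advanced_b, rejected_b = 2, 18

table = [[advanced_a, rejected_a],
         [advanced_b, rejected_b]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

print(f"Pass-through rate A: {advanced_a / (advanced_a + rejected_a):.0%}")
print(f"Pass-through rate B: {advanced_b / (advanced_b + rejected_b):.0%}")
print(f"Fisher's exact p-value: {p_value:.2f}")
# At these sample sizes the p-value is far above the usual 0.05 threshold,
# so we can neither credibly claim bias nor credibly rule it out.
```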

So we’re going to cut through a lot of red tape to get the data, bend over backward to somewhat credibly analyze it, only to come up with results that at best are easy to poke holes in and at worst will be misinterpreted.

There’s got to be a better way. And there is.

I started unpacking an alternative way in if we know in which direction we want to go, does it matter where we are? and a few months later in the score takes care of itself.

All recruiting processes have some bias in them by the sheer fact that humans are involved in them. And there’s plenty of evidence that humans are biased. If our company thinks that we’re all some special bias-resistant snowflakes, no amount of data and analysis will save us.

The gist of the “score takes care of itself” approach is to hold people accountable for the behaviors that eventually lead to long-term success, rather than for a defined outcome with a fixed time horizon (like a diversity metric, for example). And as outlined in inclusive organizations change their systems, not just train their people, organizations are better off changing their systems to be more bias-resistant. It’s a lot easier to do than making people more bias-resistant.

Fortunately, again, there’s a substantial body of evidence available to us on how to build bias-resistant processes. The “bias interrupters” model is outlined in the link above, and specifically in the recruiting context, I’ve aggregated additional pieces in inclusive hiring: a short primer.

Back to our example. An alternative approach would be to audit our existing recruiting process against one, or both, of these benchmarks and start implementing a plan to close the gaps. We can be confident that we’re moving in the right direction since we’re basing our changes on battle-tested interventions.

To wrap up, a few words of caution and important caveats:

First, when picking evidence-based interventions, know how to tell the difference between “popular” and “effective”. There are a lot of popular but ineffective practices out there. “Because Google is doing it” is not a good enough reason to adopt a practice. Do your due diligence.

Second, measurement can be useful when used in the right context and in the right way. I’m not saying “measurement is evil, don’t ever do it”. I’m just inviting you, when you’re quoting Drucker (“you can’t manage what you can’t measure”), to also keep Goodhart in mind (“when a measure becomes a target, it stops being a good measure”).


Corporate benefits - what’s equitable?

Photo by Tingey Injury Law Firm on Unsplash

A recent conversation with a colleague about a company’s 401k contribution matching strategy triggered an interesting reflection on how equity (fairness) is handled in different company benefits. 

Many companies, caring for their employees’ long-term financial well-being, incentivize employee contributions to their 401k plan by offering to match their contributions up to a certain percentage of their salary (3% and 6% seem to be the most common ones). 

That incentive can lead to some interesting outcomes. Consider 3 employees of AcmeCorp, making $75K, $150K and $300K respectively. AcmeCorp matches its employees’ 401k contributions up to 6% of their salary. All three are hard-working and care deeply about their long-term financial well-being, so they contribute the maximum amount they are allowed to contribute to their 401k each year — currently $19,500. 

  • Employee A ($75k) will receive a $4,500 match from AcmeCorp.
  • Employee B ($150K) will receive a $9,000 match from AcmeCorp. 
  • Employee C ($300K) will receive an $18,000 match from AcmeCorp.  

Same contributions. Very different matches. A cynic may say that this incentive scheme acts as a regressive tax — the more you earn, the more you benefit from it. The less you earn, the less you benefit from it. 

Ironically, the reason many companies opt for this matching scheme is IRS regulations that require companies to perform annual tests to ensure that the 401k plan does not discriminate in favor of high earners in the company. Failing the test has some pretty painful implications. However, companies that opt to use one of three Safe Harbor plan designs are not required to perform the non-discrimination test. The full match up to x% of salary is one of those three designs…

Side note: the combined contribution limit (employee + employer) at the time of writing these lines is $57,000/year. So assuming an employee maxes out their portion, the employer can contribute up to an extra $37.5K/year. Imagine the impact that this can have on long-term financial well-being. 
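
For completeness, here is a small sketch of the arithmetic behind the example and the side note above, using the figures quoted in the text (these limits change yearly):

```python
# Sketch of the 401k match arithmetic above, using the figures quoted in the text.
# Contribution limits change yearly; these are the ones referenced here.
EMPLOYEE_LIMIT = 19_500   # employee contribution limit
COMBINED_LIMIT = 57_000   # combined employee + employer limit
MATCH_RATE = 0.06         # AcmeCorp matches up to 6% of salary

def employer_match(salary: float, employee_contribution: float) -> float:
    """The employer matches the employee's contribution, capped at 6% of salary."""
    return min(employee_contribution, MATCH_RATE * salary)

for salary in (75_000, 150_000, 300_000):
    match = employer_match(salary, EMPLOYEE_LIMIT)
    print(f"${salary:>7,} salary -> ${match:>6,.0f} match")
# $ 75,000 salary -> $ 4,500 match
# $150,000 salary -> $ 9,000 match
# $300,000 salary -> $18,000 match

# Side note: headroom left for extra employer contributions once the
# employee maxes out their own portion.
print(f"Employer headroom: ${COMBINED_LIMIT - EMPLOYEE_LIMIT:,}")  # $37,500
```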

Zooming out

What’s even more interesting is that when we zoom out, benefits are not handled according to a consistent equity standard. 

Some benefits are regressive. Like the 401k example above.  

Some benefits are progressive/needs-based. Like paid family leave beyond what’s legally required by law. 

Some benefits are neutral. Everyone gets exactly the same thing. Healthcare premium participation usually falls into this category, as do defined PTO plans to some extent (though the dollar value of a day off differs). 

That insight brought me back to an idea I explored a year and a half ago — the Service-as-a-Benefit (SaaB) platform.

In addition to the advantages I outlined in the original post, it can also enable companies to create a more coherent benefits strategy from an equity perspective, whichever way they choose to define it. Companies can set a consistent equitable allocation of funds at the overall budget level, instead of at the individual benefit level as they do today. That budget can be set as a % of salary, a fixed sum per employee, or a more complex formula taking individual needs into account. From there, each employee can allocate those funds to the benefits that matter to them the most: a higher 401k match, a lower health insurance premium, a longer family leave, or a sitter for their cat.
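
As a hypothetical sketch (the policy names and numbers below are invented for illustration, not a recommendation), setting that budget under the three approaches mentioned above could look something like this:

```python
# Hypothetical sketch of the three budget-setting approaches mentioned above.
# Policy names and numbers are invented for illustration only.
def benefits_budget(salary: float, dependents: int, policy: str) -> float:
    if policy == "percent_of_salary":   # scales with pay (regressive-leaning)
        return 0.10 * salary
    if policy == "flat_per_employee":   # neutral: the same for everyone
        return 12_000
    if policy == "needs_adjusted":      # progressive: base amount + per-dependent bump
        return 10_000 + 2_500 * dependents
    raise ValueError(f"unknown policy: {policy}")

for policy in ("percent_of_salary", "flat_per_employee", "needs_adjusted"):
    budget = benefits_budget(salary=150_000, dependents=2, policy=policy)
    print(f"{policy:<20} -> ${budget:,.0f}")

# Each employee then allocates their budget across the benefits that matter
# most to them: 401k match, insurance premiums, family leave, pet care, etc.
```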

Having realized that, I’m even keener about a SaaB platform today than I was more than a year ago. 


Interviewing for values alignment

With as little bias as humanly possible

source: repeatforever.substack.com

As I’ve defined elsewhere:

Culture is our shared set of beliefs and mindsets, reflected through our behaviors and supported by our organizational systems (processes, protocols, etc.)

A cohesive organizational culture is honest, clear and reinforced.

A cohesive culture imposes constraints on our decisions. Therefore, it comes with a cost that we must pay in order to maintain it. One of those costs is not hiring candidates who don’t share our values and don’t exhibit the behaviors that we view as essential for our success. 

I’m not saying that we should hire people who are exactly like us in every way. The value of organizational diversity is unquestionable. But that diversity needs to be supported by a cohesive, non-negotiable core. I’m arguing that this core needs to be wider than “acting in accordance with the law”, and that anything falling outside of the core should be viewed as a welcome and valued difference. 

But how do we interview for alignment with that core? 

As Rich Paret points out in the essay this post revolves around, The Career Story Interview, companies spend about six times more time evaluating technical skills than evaluating organizational skills (such as values alignment). Those who make the extra effort to create a dedicated interview for organizational skills often leave it unstructured, resulting in questions like “if you could be any animal, which animal would you be?” and evaluation criteria that can be summed up as “would I enjoy having a drink with that person?”. 

There is a better way. 

In his essay, Rich lays out the foundations for a structured career story interview that credibly evaluates a candidate’s alignment with the company culture. It pulls a lot of good ideas from the Topgrading interview method (which I first covered in 2014), and modularizes the interview so it can be plugged into any recruiting process. A lot of the advice in the essay is applicable to interviewing well in general. 

Before the interview

What you’re listening for: a set of competencies or values, broken down into a set of observable behaviors. For example, “communication” can be broken down into: listens to understand, clear and concise in speech and writing, etc. 6 ± 2 competencies/values is a reasonable amount of ground to cover in a single interview. 

What questions to ask: the career story interview uses a set of predefined questions for every career chapter:

  • What are you most proud of accomplishing during this time?
  • What were the trouble spots, the things you struggled with, during this time?
  • What’s your most significant memory about the people you worked with during this time?
  • Who would be the person most familiar with your work during this time, and will you set up a time for us to talk with them if we both want to move forward after this interview?
  • What closed this episode of your career and opened the next one?

Rich makes the more general case for favoring past-behavior questions (PBQ) over future/hypothetical situational questions (SQ), which is very much in line with the distinction between MSA and PSQ questions I covered here. As he points out, the questions above work better than the typical “tell me about a time…” questions since they allow the interviewer to see behavioral patterns emerge rather than map a single question to a single competency. 

What you need/like/don’t want to hear: even companies that are mindful of the benefit of structured interviews tend to skip this crucial step. To be effective, the evaluation needs to be as structured as the interview. I like Rich’s language of need/like/don’t want to hear, which corresponds to must-have/nice-to-have/red flags. 
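
One way to make this concrete (a hypothetical sketch, not the format from Rich’s essay) is to capture each competency’s observable behaviors and its need/like/don’t-want-to-hear signals in a simple rubric that every interviewer scores against:

```python
# Hypothetical rubric sketch: one entry per competency/value, broken down into
# observable behaviors and need / like / don't-want-to-hear signals.
# The content is illustrative only and not taken from Rich's essay.
rubric = {
    "communication": {
        "observable_behaviors": [
            "listens to understand",
            "clear and concise in speech and writing",
        ],
        "need_to_hear": ["adapts their message to different audiences"],
        "like_to_hear": ["proactively writes things down for others"],
        "dont_want_to_hear": ["blames misunderstandings on others"],
    },
    # ... aim for 6 +/- 2 competencies/values per interview
}

# After the interview, each interviewer records which signals they observed,
# so the same bar is applied to every candidate.
candidate_scorecard = {
    "communication": {
        "need_to_hear": True,        # must-have: observed
        "like_to_hear": False,       # nice-to-have: not observed
        "dont_want_to_hear": False,  # red flag: not observed
    },
}
```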

Pre-communications with the candidate: unless you’re intentionally trying to assess the candidate’s ability to think on their feet, or their memory recall abilities, it’s crucial to give the candidate as much context as possible ahead of the interview. Failing to do so means that you’ll be evaluating them on those two things instead of on what you actually want to evaluate them on. The specific language that Rich suggests at a few points in the essay is a great starting point (I’ve paraphrased it a bit below): 

In our interview, we’ll be talking about your career from the beginning up until now. To start, we’ll divide your career into a series of episodes. For each episode, I have a set of things I’d like to talk about. It’s your story, so how many episodes there are is up to you. We’ll at least talk about how you started out, what you’ve been doing most recently, and what happened in between.

When you’re telling the story of each episode of your career, it can be helpful to structure the story using the acronym STAR:

Situation: Set the context for the story.

Task: Describe what your responsibility was in that situation.

Action: Explain in detail what you did in the story.

Result: Share what outcomes your actions achieved.

During the interview 

Some good pointers: 

  • A career story interview for a post-entry-level candidate usually takes 60 mins to be done right. Personally, I found it easier to work backward in time and start with the candidate’s most recent chapter. It’s the easiest for them to recall and makes it easier for me to transition to candidate Q&A/selling the role when we’re getting close to time, knowing that I covered the most relevant time period. 
  • Withhold judgment and decision-making until after the interview. 
  • Focus on listening mindfully. Taking detailed notes can help you stay in that mindset. 
  • Don’t interject with your own assumptions. Ask open-ended follow-up questions: Who/What/Why/Can you elaborate? Lou Adler’s SMARTe follow-up questions can be another good tool here. 

After the interview

Review your notes, and evaluate whether the candidate exhibited each of the observable behaviors you were looking for. The need/like/don’t want to hear list should set a consistent bar across all candidates. 


Principled corporate activism

Be clear on where you stand, regardless of where that is 

Corporate activism, or companies taking a stand and acting on social issues, has been a hot discussion topic in the US since the BLM protests in the summer of 2020. 

There is no single right answer to whether and to what extent organizations should take a public stand and act on various social issues (though some disagree even with this statement). But what all right answers have in common is that organizations should be clear and consistent in the stance they take. Not taking a stance is also a stance. 

When organizations don’t clarify where they stand, they foster a false, short-term sense of alignment. Different members can make different assumptions about where the organization stands, and they’ll all be correct. But that false sense of alignment quickly bursts when an external event forces the company to take action (or inaction) and reveal its actual stance. When the misalignment is revealed during a crisis, the damage is far greater. 

Clarifying where you stand leads to short-term pain, as some members have to grapple with the realization that the organization may see things differently than they do. Yet it pays dividends in the long run by fostering true alignment. 

The now infamous “Coinbase is a mission focused company” blog post was a commendable attempt at doing just that. Readers directed most of the attention (and criticism) at the stance Coinbase took. Still, regardless of whether you agree or disagree with that stance, they deserve credit for taking it and enduring the short-term pain on the path to true alignment.

In my reflections and conversations with peers, I tried to extrapolate from that incident (and others) how an organization can more clearly and consistently describe its stance, so when the next big external event happens, it can respond rather than react. 

Two key insights emerged: 

  1. The stance an organization takes on an issue is partially captured by the “sphere” in which it takes action on it: starting with sheer compliance, through actions impacting its members, to actions impacting the world outside the organization at the local, federal, or global level. 
  2. Organizations routinely take different stances on different issues. Some organizations care more about issue X, while others care more about issue Y. 

Putting those two insights together allows us to create a map of the organization’s stance on different issues: 
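
As a hypothetical sketch of the map’s structure (the issues and sphere placements below are invented for illustration and don’t represent any real company’s stance):

```python
# Hypothetical stance map: each issue is placed in the "sphere" where the
# organization takes action on it. Issues and placements are invented for
# illustration and do not represent any real company's stance.
SPHERES = [
    "compliance_only",      # do what the law requires, nothing more
    "internal_to_members",  # policies and programs for employees
    "external_local",       # action in the local community
    "external_national",    # public statements, lobbying, donations
    "external_global",
]

stance_map = {
    "employee_wellbeing":   "internal_to_members",
    "environmental_impact": "external_national",
    "political_advocacy":   "compliance_only",
    "local_education":      "external_local",
}

for issue, sphere in stance_map.items():
    print(f"{issue:<22} -> {sphere}")
```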

When I re-read the Coinbase post and built the Coinbase version of this map, I ended up with something that looks like this: 

Agree or disagree with the stance itself; it is clearer than what I’ve seen from most organizations.

There are many more nuances and refinements that can and should be captured here, perhaps in future iterations. 

My hope is that experimenting with putting together a version of this map can help organizations avoid yet another knee-jerk reaction the next time a notable event takes place.


Addressing the shortcomings of the division of labor

Is organizational structure the best solution? 

Source: Amazon.com

A recent post I read and shared about the challenges in using the Chief of Staff role in companies brought me back, full circle, to a post I wrote more than five years (and 300 posts) ago about the VP of Business and People Operations role.

I intentionally invited several colleagues holding Chief-of-Staff (CoS) roles in companies to critique Kovacevich’s post, to learn from their collective experience. I learned from those conversations that some of Kovacevich’s arguments can be dismissed as issues with specific implementations of the role, or issues that can exist in any corporate role and therefore don’t stem from the CoS role definition in particular. Arguments around the lack of clarity in the role’s responsibilities, or the role’s allegiance to its manager rather than to the benefit of the company, fall into that category. But a few arguments remained standing and salient: 

  1. There is a gap in the organization’s ability to collaborate cross-functionally well that the CoS role is often meant to address. 
  2. The role’s positioning outside of the formal reporting lines makes it more challenging to accomplish #1. 
  3. The label used to describe the role (“CoS”) often adds, rather than removes, ambiguity — also getting in the way of #1. 

The root cause for the gap described in #1 is Conway’s Law — the unfortunate downside of the function-based division of labor on the executive team. The quote from Rich Mironov I included in that post is still one of the most concise and clear ways to describe the phenomenon:

Organizations must have some division of labor … [and] every division of labor creates the potential for narrower thinking, boundary skirmishes, and inefficient resource allocation.

Specifically, in this case, the five core systems that drive business results are owned by everyone and no one at all. 

Creating a role in the organization that owns all or some of those systems attempts to solve those organizational challenges using organizational structure. 

Similar patterns exist with the VP of People role — a role that in the five-systems model is given some authority over the people, incentive, and org structure systems:

  • It does not own its domain to the same extent that the VP of Sales owns sales, or that the VP of Engineering owns engineering. While in those domains meeting the function’s objectives relies 70% on the function and 30% on effective collaboration with other functions, those ratios are reversed in the People function.
  • A big part of that challenge is due to the org structure. Employees report into their functional orgs, not the People org. The People org’s ability to influence their experiences and actions is secondary to the influence exerted by their organic reporting chains. A person’s manager will always be more influential than the most influential People program. 
  • The label (“People”) doesn’t help. It invites comparisons to other labels (“Sales”, “Engineering”) even though the underlying responsibility is structurally different. 

Where does all of this leave me? With better questions than answers. That’s ok for now. 

Organizations are trying to tackle some of the core shortcomings of their functionally-oriented division of labor using org structure solutions (CoS, VP of People) with minor success at best. 

Is an org structure solution the best way to solve this challenge? 

If so, what are the attributes of an effective org structure solution? How do we avoid some of the pitfalls mentioned above? 

I don’t have good answers to these questions. Yet. I have a hunch that looking at this challenge as a part & whole polarity to be navigated, rather than a problem to be solved, can yield exciting insights. 

To be continued… 


Structures as scaffoldings / structures as shoes

Reinforcing behavior without dependence 

Photo by Kirill Sharkovski on Unsplash

I’ve written quite a bit about the critical role that the organizational environment, and specifically its processes and structures, plays in driving behavior change and reinforcing the culture.

Recently, I came across an interesting edge case that I’m not sure how to solve yet. I hope that a clearer articulation here, and any dialogue that may ensue as a result, may help shed some more light on the shape of the potential solutions. 

It happens when the behavior that we want to reinforce becomes dependent on the structure that we’ve put in place. Let me give a couple of examples. 

  1. Situation: we want our team to take time off to celebrate holidays that are important to them. → Solution: we identify a subset of the holidays that most team members view as important and make them “official company holidays”. → Complication: people take the company holidays but are more reluctant to take other holidays off, assuming that if they are not company holidays, they are not important enough to justify taking time off for. 
  2. Situation: we want our team to take time off when they are dealing with distressing events in their lives. → Solution: when events that are broadly distressing take place (let’s say, a national tragedy), we remind and encourage our team to take time off if they need to. → Complication: team members use the company-wide communication as a barometer for which events are distressing enough to justify taking time off. If we haven’t communicated about it, it must not be distressing enough to justify time off. 

I’m not sure that there’s a broader pattern here, given that the two examples most present for me are focused on taking time off work. Or perhaps the complications are not as common as I believe them to be. Nonetheless, these examples raise a question with broader applicability: 

How do we design structures that are less likely to generate behavioral dependency?

The best metaphor that I was able to come up with is the difference between scaffoldings and shoes. Both are structures that we commonly use. Scaffoldings are temporary: when we remove them, the building, in its new form, stands freely on its own. Shoes, on the other hand, weaken us in some ways. While they allow us to traverse challenging surfaces with more ease, once we remove them, our ability to traverse even standard surfaces on our own is diminished compared to someone who’s been walking barefoot their entire life. 


Increasing relational productivity

What management *should* be doing

Photo by christie greene on Unsplash

Time to pay down some “writing debt” and cover some good articles I came across last year that stood the (short) test of time between now and then. Starting with this BCG piece, which I made reference to in my 2020 wrap-up:

How the lockdown unlocked real work

I have my qualms with some of the framing and word choices. Specifically, the careless use of terms like complexity and complicatedness just because they sound less judgmental than what they’re really trying to describe: bureaucracy. But I digress. 

The core piece of value in the article is its “side box”, which highlights “complementarities” — when one factor of production increases the contribution of the other factor of production to the overall outcome — as a core reason for the existence of organizations in the first place. It then introduces a distinction between two types of complementarities: proximity and relational: 

  • Proximity complementarities — complementarities that accrue directly from physical co-location, such as the scale economies made possible by the steam engine or the assembly line, both of which required grouping people together. The discipline of management, and the close control of work it made possible, developed in this context. 
  • Relational complementarities — complementarities that accrue from the alignment and coherence of the actual behaviors of each node and the way it increases the contribution of every other node. This is articulated visually in a post I covered a couple of years ago titled “People as Vectors” and more abstractly in “Bounded Specialization”.

Most business models rely on both types of complementarities, but because the former is far more visible, organizations tend to over-rely on it, assuming that proximity will make up for any gaps in the productive content of the relationships. Proximity functions as a misleading proxy for working effectively together.

For example, avoiding the hard work of creating trust and explicit commitments around deliverables, knowing that if anything doesn’t go according to plan, we can just walk down the hall and talk it out. 

The pandemic has abruptly swept away organizations’ ability to (over)rely on proximity. The pace of change prevented organizations from applying a bureaucratic procedural response. Instead, they had to strengthen relational productivity by increasing the degree to which people interact effectively and work together in the service of a collective task. Three levers play a role in this endeavor: leadership, engagement and cooperation. 

Leadership: managers add value by getting people to do what they wouldn’t do spontaneously in the absence of interaction with them. Specifically, by maximizing alignment between the individual’s actions and what the organization as a whole is trying to accomplish: navigating across multiple priorities, and identifying and removing blockers. Tactically, that requires having conversations with a broader set of nodes in the network (not just direct reports) and having those conversations focused on creating clarity around a core set of questions: “What are your current priorities, the things you must accomplish, the battles you must win?”, “What are you worried about?”, “Do you feel like you know what you need to know?”, “If not, do you know who to go to get what you need?”.

Engagement: defined here as the degree and intensity of individuals’ connectedness to the organization and its goals, to the roles they occupy in the organization, and to the tasks they perform. The tactics called out here included anchoring the organization’s day-to-day activities to a higher goal, organizing work around time-limited and iterative sprints — focusing people on the immediate priorities that they can control and completing commitments they’ve made to the team, rather than on the uncertainties they can’t control. 

Cooperation: defined here as the process by which people put their autonomy, initiative, and judgment in the service of a collective purpose or task — which sometimes means compromising their own goals or needs for the greater good. The tactics mentioned here included explicitly building trust among far-flung team members, by giving everyone on the team enough air time, and identifying and proactively managing interdependencies.

The evidence presented for decomposing relational productivity into these three levers was somewhat lacking, and the tactical examples under each seemed partial at best. 

Nonetheless, the distinction between proximity and relational complementarities, and the interaction between the two, is a powerful lens for looking at and making sense of organizational behavior, and the initial levers for strengthening relational productivity are as good a starting point as any for further exploration down that path. 


OrgHacking 2020 wrap-up

This is my 6th full year of posting on OrgHacking and continuing my annual tradition of writing an end-of-year wrap-up post (previous reviews: 2019, 2018, 2017: 1 & 2, 2016, 2015). I’m going to stick with last year’s 3-part format in a slightly abbreviated form. 

And what a year it was. Pair the global pandemic that’s been raging since March upending traditional ways of working, with a couple of major life events: starting a new full-time job and getting married — and 2020 became a year of many “firsts”. 

Part 1: 2020 reflection 

At the end of last year, I highlighted org governance and distributing power, value generation (performance) and value allocation (comp), and distributed work as areas of interest for 2020. Those remained top-of-mind for me throughout the year, with the last of these in particular becoming a lot more real due to COVID-19. I got to not just think about them, but also write about them, with the relevant posts captured in the “organization fundamentals” and “distributed work” sections of Part 3. 

It was fascinating to see the “two sides of the same coin” relationship between performance and compensation a lot more clearly (the value you generate for the group, and your share of the collective value) and to see how they fit into broader frameworks of the fundamental challenges of organizations. 

The big a-ha moment around distributed work came from shifting my language from talking about “remote work”, which stands in contrast to “co-located work”, to “distributed work”, which is more of a spectrum. Companies start to work in a distributed fashion a lot earlier than they used to (it’s hard to find a 100-person tech company that’s 100% co-located), and while many have hit “peak distributed” during the pandemic, some will stay there, and some will go back to a lower level of distribution that’ll still be higher than their pre-pandemic level. It highlighted that many of us have been working distributedly for quite a while, but perhaps not in a very skilled way. It also highlighted the role that the human element plays in that dynamic: from the challenge of attempting to collaborate in a way that goes beyond our evolutionary wiring, to the possibility of mitigating the relational biases that we tend to rely on heavily while working co-located. This takes me to intentional community design, but I’ll skip that rabbit hole for now. 

My personal life events (and perhaps pandemic-related fatigue build-up) have impacted my writing in the last couple of months of the year, and I did not stick to my weekly cadence. There are a few pieces of content I came across but did not have the time or mindshare to write about. 

I’m hoping to get back to them at the beginning of the new year, but I’m also planning to hold my intention of writing a weekly post a bit more lightly in 2021. We’ll see how it goes. 

Part 2: 2021 areas of interest 

My 2021 areas of interest seem to be a refinement and evolution of my 2020 themes:

  • Intentional governance/design of productive communities/organizations in general and the value generation (performance)/value distribution (compensation) challenge in particular. 
  • Collective sense-making — an emerging category this year, that I hope to continue to explore further. Most likely, through the work of Dave Snowden and the Cognitive Edge. 
  • Building human connection and addressing human needs in collaborative efforts. I slot some of the more interesting challenges of distributed work into this category and the maturing DEI space which is finally generating some balanced, evidence-based approaches and practices. 

Part 3: 2020 posts by emergent categories 

About mid-year I moved away from the old naming convention of using [] in summary posts. Therefore, original (synthesis) posts are indicated below with an *. 

People practices

Organization fundamentals

Sense-making

Distributed work

Behavior change

Small-group dynamics

Large-group dynamics

DEI

Knowledge Management

Misc
