Combines two of my favorite things: community and operating rhythm.
Being part of the Occupy movement in NZ has been a formative experience for Richard and greatly shaped his professional journey. Among other things, it led him to create Enspiral, one of the most inspiring organizations out there, in my opinion, and one of my regular “go-to”s whenever I’m looking for a progressive solution to an organizational challenge. While still involved in Enspiral and continuing to make amazing contributions in this domain (like this one), more recently he’s been exploring how to generalize some of the “secret sauce” of Enspiral under the banner of microsolidarity — a small group of people supporting each other to do more meaningful work.
In essence, it expands and generalizes one of my favorite Enspiral practices, care pods, into a broader operating rhythm. Looking at it through the lens of the “community canvas”, it fleshes out a blueprint for the core set of rituals, shared experiences, and content that such a community should have.
I decided to title this post borrowing the more inspirational title of a different post of Richard’s since “People, Practices, Place” seemed way too generic.
The key design principle here is to focus on a predictable, steady rhythm to minimize the distraction of scheduling.
2 in-person gatherings a year, coinciding with the full moon to build connections and introduce new potential members to the community.
10 video calls on the other full moons where small pods (3–6 people) meet to help each other with their outward-facing work using “case clinic” or peer-coaching methodologies.
12 video calls on new moons for partners (see below) to work on the community — the inward-facing governance and admin work.
The synchronous meetings are used for sense-making and deliberation, while decisions are made asynchronously (using Loomio).
There are two membership categories to explicitly acknowledge the all-too-common participation inequality.
Partners are members with more commitment to the community and the capacity to do work on the container.
Friends get to participate as individuals and not worry too much about the big picture.
Partners choose who to invite as Friends, and their first interaction with the community must be through attending a gathering. Partners can invite Friends to become Partners after they’ve attended their second gathering (or later) and if they’re willing to take on the additional responsibilities. There’s no expectation that all Friends will become Partners eventually and no shame in staying a Friend.
Everyone is part of a Home Group (a small pod of 3–6 people), is encouraged to share leads and opportunities for working together, can be a member of a temporary working group on an internal project, and is expected to make regular financial contributions to support the internal work.
The community should be organized around a physical place to simplify coordination and make it easy to increase the density of relationships.
While the 2 gatherings take place in person, the community engages virtually through an instant messaging platform for informal interactions, an asynchronous discussion platform for long-lasting information (knowledge, decisions, etc.), and a regular newsletter to maintain a consistent baseline of shared context.
I hope the minimalist and holistic nature of this skeletal community comes across clearly. The only thing that I would also want to add as a core component of the community that I’m part of is a care pods-like rhythm for personal development. But that’s certainly not a required component. Just something that I personally long for.
One of the biggest organizational conundrums out there
Discussing performance may very well be my favorite organizational can of worms to open. It’s one of the things that make work organizations exponentially more complex than other forms of organizing, and I’ve been noodling on it quite a bit over the years.
Human performance is a fuzzy concept, especially in a knowledge work context. But the way it’s utilized is a bit more straightforward. The assessment of performance is used to distribute the collective value that the organization has generated to the members of the organization. More concretely, the assessment drives decisions such as an increase in compensation, a promotion (more responsibilities and more compensation), a termination, or simply maintaining the status quo.
The silver lining here is that understanding that performance is a distributional issue gives us criteria for assessing different ways of managing it. If this is essentially a distributional issue, then what we should care about are procedural justice/fairness and distributive justice/fairness.
If you’re reading this hoping for a big “a-ha” moment and a robust solution, I’m going to disappoint you. I haven’t found a good solution. Yet. And it may very well be that the current way we’re managing performance is “the worst form, except for all the others that have been tried from time to time”, to blatantly misquote Churchill.
What follows below is my messy attempt to dissect this issue. My hope is that by doing so, some smaller pieces of the issue will turn out to be non-issues at all, while other pieces may have better point solutions emerge over time. I found it helpful to break down performance management into three sub-issues: What are we evaluating? Who is doing the evaluation? When are they doing so?
What are we evaluating?
Machine performance is fairly straightforward: both outputs and inputs are visible and predictable, and the dimensions of evaluation are easily defined: speed, quality, efficiency.
Human performance, in a professional setting, is more challenging. If we look at performance as a distributive challenge, we can define it as the gap between the value that we generate for the organization and the value we take out of the organization. Some simplistic examples for illustration: if I’m suddenly doing poor work and the value that I’m adding to the organization is lower, while I’m still taking the same salary, we’d say that my performance has declined. Similarly, if I’ve just been promoted and am receiving a higher salary, but am still making the same contribution to the organization that I made prior to being promoted, we’d say that I’m not meeting the new performance expectations.
A big part of the challenge in fairly managing performance stems from the gap between output and outcome, which makes both measuring and attributing value difficult.
The value I take out of the organization certainly includes my salary, as well as costs incurred by the organization such as health care premiums or the fractional cost of renting the office. But it should probably also include harder-to-measure elements such as the time and energy I take from others. What if I get the job done, but I do it in a way that alienates and demotivates others, perhaps even to a point that causes them to leave the organization?
Unless I’m a salesperson, the value that I generate for the organization is even harder to measure since my monetary contribution is indirect. And the same component of interacting with others, now in the positive case of supporting and helping them succeed, is just as difficult to measure as the negative case.
Attributing value is challenging across two dimensions. The first has to do with the collaborative nature of work. If I was a traveling salesman selling shoe polish door-to-door, you could argue that whether I’m successful in selling the shoe polish is mostly up to me (we’ll touch on an important caveat in a moment) and therefore I should get the full credit for every sale I make. But what if I’m selling enterprise software, and the sales process required multiple conversations, including a lot of heavy lifting by the sales engineer; both the product manager, the marketing manager and the CEO had to make guest appearances and address specific concerns in their respective domains; the engineering team had to make a small tweak in the product to make the integration work, and the support team had to commit to a non-standard SLA? How much of the credit for making the sale should I get then?
The second attribution challenge has to do with separating skill from luck or any other element that impacted the outcome and was out of our control. I’ve been obsessing recently over this thought experiment from “Thinking in Bets”:
Take a minute to imagine your best decision last year. Now take a minute to imagine your worst decision.
It’s very likely that your best decision preceded a good outcome and your worst decision preceded a bad outcome. That was definitely the case for me, and a classic case of “outcome bias”. We tend to conflate good outcomes and good decision-making (skill). Which one should we get credit for? Is it fair to get credit/be blamed for things that were out of our control?
A holistic solution may be found by integrating some of the imperfect pieces below, or by taking a completely different approach:
Goals — successful execution against (SMART) goals is one of the most common approaches to measuring performance. While this approach has many flaws, some can be rectified, for example by using relative goals rather than absolute ones.
Career ladders — thoughtfully designed career ladders define a connection between a certain level of contribution to the business and a certain level of compensation. Good rubrics span “being good at what you do”, “being a good teammate”, and “having an impact on the business”. In addition to avoiding the short-termism often associated with goals-based performance management, they also partially mitigate some of the side-effects of exclusively focusing on just one of the three categories. However, this “balanced scorecard” approach amplifies the “who?” challenge in the performance management process, which we’ll cover in the next section.
Track record — if you beat me in one hand of poker, it’ll be hard to tell if you’re a better player than me, or you just got lucky. But if you beat me in 90 out of 100 hands, it’d be fair to say that skill must’ve had something to do with it. It’s incredibly difficult to measure a track record in a knowledge work setting, but that has not discouraged companies like Bridgewater Associates from trying. Some may even say, quite successfully.
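To put a number on the poker intuition: under the simplifying assumption that two equally skilled players each win a hand with probability 0.5, we can compute how likely a 90-out-of-100 record is by pure luck. This is a sketch of the underlying statistics, not anything from Bridgewater’s actual methodology:

```python
from math import comb

def tail_prob(wins, hands, p=0.5):
    """P(X >= wins) for X ~ Binomial(hands, p): the chance that a player
    with no skill edge (win probability p) racks up this record by luck."""
    return sum(comb(hands, k) * p**k * (1 - p)**(hands - k)
               for k in range(wins, hands + 1))

# Winning a single hand says nothing: a 50% coin flip.
print(tail_prob(1, 1))        # 0.5
# Winning 90 of 100 hands by pure luck is essentially impossible,
# so skill must be doing most of the work.
print(f"{tail_prob(90, 100):.1e}")
```

The larger the sample of outcomes, the more confidently we can separate skill from luck, which is exactly why a track record beats a single data point.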
Learning — in some ways, we can think of learning as the derivative, in the mathematical sense, of the value that we create, and as a leading indicator of the value we will create. Similar to the way that velocity is the derivative of the distance we’re traveling and a leading indicator of the distance that we will travel. A focus on learning helps address some of the value attribution issues, but it does not simplify its measurement by much.
Avoiding definition — perhaps the purest and most chaotic solution. The case for it is best articulated here. But in a nutshell, since this is a multi-party distribution issue, we may not need a universal definition of performance or fairness. As long as all parties agree that the distributive process and outcome are fair, we are good. Even if they’ve reached those conclusions using different evaluation processes. Deloitte’s “I would always want this person on my team” and “I would give this person the highest possible compensation” sit better with me than the more standard expectations-based rubric (meets, exceeds, etc.).
Who should be evaluating performance?
The standard approach to performance management tasks managers with evaluating the performance of their team members. A big part of that makes sense: it’s the manager’s job to align the effort of their team with what the organization as a whole is trying to accomplish, so they’re in the best position to evaluate how an individual contribution supports those efforts.
However, research suggests that:
Managers overrating their team is an enduring, scientifically proven fact in companies. It’s most pronounced where performance ratings are used to determine compensation, where it’s difficult to assess an employee’s true competence, and where the manager and employee have a strong relationship. (references here)
The collection of peer feedback as input to the manager’s evaluation and the use of a cross-manager calibration exercise help mitigate some of this effect, but not all of it, and they significantly increase the level of effort in the overall process.
Having individuals evaluate themselves is certainly an option, but here as well, research suggests that it may not lead us to a process and outcome that would be considered fair:
Self-perceptions correlated with objective performance roughly .29 — a correlation that is hardly useless, but still far from perfection (source)
Our immediate team is the next usual suspect, as the people with the most visibility into our work and contribution, though with a more limited view of the impact we have on the organization at large. Furthermore, if some skills-based elements are integrated into our definition of performance (career ladders), then it doesn’t make much sense to have a more junior/inexperienced member of a team assess the skill of a more senior/experienced member.
This, among other things, is what led Google down the path of having a standalone committee of more senior members evaluate the performance of more junior members of the organization. However, while they are able to evaluate the skill elements of performance more accurately, being so disconnected from the actual context in which those skills are applied makes it harder to assess the impact components, increasing the risk of “confusing motion for progress”.
For team members that are not in an individual contributor role, another option becomes available: under a servant leadership paradigm, the people the leader is serving should be the ones evaluating their performance. However, the typical way these roles are designed tends to include responsibilities beyond the team that they serve, which the team may not have visibility into. Power dynamics will also add some distortion to the evaluations. And the same competency question exists — using a more extreme analogy: as patients, are we able to evaluate how good our doctor is, or just their bedside manner?
We can also look at a more generalizable case of the servant leadership model and argue that we all have customers/stakeholders that we serve, whether internally or externally. Therefore, they should be the ones evaluating our performance since, at the end of the day, the value we generate goes to them. We can think of our manager as a customer (in their alignment of efforts capacity), we can think of our teammates as customers (utilizing our advice, feedback, and expertise), and of course, the people who are using our work product, be it another engineering team, the organization at large, or external customers. Having multiple customers/stakeholders provide their evaluation on the pieces of value that matter to them, certainly complicates things, but not in an insurmountable way. Yet the patient/doctor challenge still applies, to an extent, in this setting as well.
When should we be evaluating?
Perhaps less contentious than the first two sections, the timing and synchronization of the evaluation process are not without challenge either.
Performance evaluations are typically done in a synchronized cycle once a year. The synchronicity allows for calibration and a fair allocation of budget, and the annual cadence supports the observation of long-term patterns of performance (it smooths out short-term fluctuations) and keeps the overall process effort in check. The annual cadence, in particular, has drawn a lot of criticism. Some of it justified, like the impact recency bias has on the process, and the obligation to address significant changes in performance, in either direction, more quickly than once a year. Some of it, unjustified. You can and should provide and receive developmental feedback more frequently than once a year. And you can and should set, modify and review goals more frequently than once a year. Neither has anything to do with the cadence by which you evaluate performance. If we stick to a synchronous, cadence-based approach, a review every six months seems to be a better anchor point.
However, there is a growing body of alternatives to the synchronous, cadence-based approach that happen more ad hoc, based on a natural trigger or an emerging need. Deloitte’s project-based work lends itself well to an evaluation that’s triggered at the end of the project (projects usually take 3–9 months). The Bridgewater Associates approach mentioned above essentially conducts a micro-evaluation at the end of every meeting/interaction to construct a track record that’s rich enough for analysis. In some self-managed organizations, the process that can lead to a team member’s termination can be triggered at any time by any member of the team. And in others, team members can decide when they want to initiate the process for reviewing and updating their own salary. A key concern here is that unintended bias will be introduced into the process. For example, someone with stronger self-advocacy skills or a more positive perception of their performance will trigger the process more frequently than someone who is just as worthy of a raise but is more humble or not as confident in their skills. Some of those concerns can be mitigated by putting in place some cadence-based guardrails (an automatic review X months since the last review) and having a strong coaching and support culture.
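Such a cadence-based guardrail can be sketched in a few lines. The 6-month default and the whole-calendar-month granularity below are my own illustrative assumptions, not something the approaches above prescribe:

```python
from datetime import date

def review_due(last_review: date, today: date, max_gap_months: int = 6) -> bool:
    """Cadence-based guardrail: trigger an automatic review once max_gap_months
    have passed since the last one, so people who never self-advocate still get
    reviewed regularly. The 6-month default and the whole-calendar-month
    granularity are illustrative assumptions, not prescriptions."""
    months_elapsed = (today.year - last_review.year) * 12 \
                     + (today.month - last_review.month)
    return months_elapsed >= max_gap_months

# Someone who last had a review in mid-January is auto-reviewed by July.
print(review_due(date(2024, 1, 15), date(2024, 7, 1)))   # True
print(review_due(date(2024, 1, 15), date(2024, 4, 1)))   # False
```

The point of the guardrail is the floor, not the ceiling: anyone can still trigger their own review earlier, but nobody falls through the cracks.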
So where does this 2,000+ word essay leave us? Definitely not with a comprehensive solution. But that wasn’t the intent either.
Understanding that performance is a distributive challenge, balancing value generated and value taken, was illuminating to me. It then allowed me to understand the core drivers behind the difficulties in assessing those elements, and how different solutions either mitigate or amplify them. While it’s even clearer to me now that there may not be a perfect solution, it does seem like some solutions are better than others. At least in my mind.
But perhaps most importantly, it made me realize how much of the core beliefs and assumptions that underlie this foundational organizational practice are often left implicit and uncommunicated. The simple act of making them explicit, coupled with painting the whole landscape of alternative solutions and the trade-offs that they entail, can go a long way in driving a stronger collective sense of fairness in any organization, regardless of the assumptions/beliefs they have, the trade-offs they choose, and the solutions they implement.
And the enhancement of the collective sense of fairness is exactly what we were trying to do, to begin with.
Congratulations, VC partner. One of your portfolio companies just found product-market fit!
They’ve graduated from a startup, a temporary organization designed to search for a repeatable and scalable business model, to an early-stage growth company. As you know, they’ll probably spend the next couple of years and hire their next 250 employees in this leg of their journey.
There’s just one problem. They are not really set up to scale.
They’ve spent all their time, as they should have, iterating on a product that solves a real need. But now they need to build an organization that will continue to build and evolve that product in the long run. The culture is completely informal and heavily reliant on the founder being in the room. The small management team has limited managerial experience. The recruiting process is a mess and riddled with bias. Compensation is bespoke and out-of-whack with the market. Career ladders? Professional development? Forget about it! Should I go on?
On their journey to being set up for scale, they’ll be clawing their way, by hook or by crook, flying by the seat of their pants, hoping that the tidal wave of organizational debt that they have accrued and will continue accruing along the way doesn’t come crashing down on their head. They’ll be stretching their office manager way beyond the reasonable boundaries of their job, asking them to own benefits, immigration and compliance in addition to their day job. They’ll look to their sole recruiter to not only flawlessly execute the hiring plan but also design a bias-free recruiting process and train everyone in the company on their part in it. And both of us already feel bad for their first Head of People, tasked with the superhuman endeavor of being the subject matter expert on L&D, compensation, benefits, culture, org design, performance management, compliance and everything else in between, seamlessly context-switching across the domains and from getting whatever needs to get done today, to thinking strategically about the needs of the organization 1–2 years from now.
You’ve seen this pattern time and time again, so you know this means that they’ll be entering the uncanny valley of employee experience. No longer a startup, the appeal to the early-stage folks is lost. With the existential risk to the business mostly addressed, the financial upside is still substantial but not what it used to be. And there’s no way they can compete with the kind of support and comfort that the later-stage unicorns are able to offer their staff. Recruiting will become a grueling endeavor on a whole new level, and regrettable attrition will be notably higher than what you’d like it to be.
But hey, this is really out of anyone’s hands, right? Building a world-class People team really doesn’t make any economic sense at their current scale. Not to mention that the challenge they can offer to the people with the necessary expertise is not big and interesting enough to attract them. So there’s really nothing else that you or anyone else can do other than clench your jaw, grit your teeth, and hope they make it through this adolescent, growing-pains phase of the business.
Or is there?
In a couple of years and another 200 employees, all the puzzle pieces will fall into place. They’ll have the scale to justify a world-class People team and challenges that are sufficiently interesting to attract that kind of talent. If only there was a time machine that would allow us to bring that future into the present…
If you’ve continued reading this far, I have some great news to share: we don’t need a time machine. We just need to zoom out and change our perspective. The key to this conundrum is this: while depth of expertise is required from day 1, in some cases it is not needed as frequently or intensely as it will be on day 750. And in other cases, simply using a best-in-class solution, while not a custom-fit one, is better than what they can cobble together on their own. Take manager training, for example. Skilled managers are needed from day 1, but the company may not have the volume at first to run frequent training cohorts. Great instructional design coupled with evidence-based content will likely be good enough for building a solid foundation, even if it’s not 100% fitted to the (still evolving) organizational values. Or consider the recruiting process. Adopting a battle-tested blueprint will likely yield better outcomes than an emergent design which (unknowingly) repeats old/known mistakes.
These simpler needs mean that we can look for a solution to the scale gap across companies. And no person is in a better position to implement such a cross-organizational solution than you, VC partner. You’ve already staked a large sum of money on their success, and on several other companies at a similar growth stage with similar challenges. And since most VCs rarely make competing bets in the same portfolio, you have a vested interest in helping as many of your portfolio companies as possible succeed. It’ll look something like this:
The shared people services team, housing the subject-matter experts in the various people disciplines, starts off being provided entirely by VC staff, supporting more junior, generalist points of contact in the portfolio companies. Training and development programs are offered across the portfolio. Over time, as portfolio companies continue to grow, they gradually hire their own experts and bring the programs and services in-house, eventually reaching the standard end-state of having a completely decoupled, standalone people function.
Making this shift stops the accrual of organizational debt sooner in the company lifecycle, and perhaps even starts its repayment. It creates a better experience for early growth-stage employees, closing a big chunk of the competitive talent gap between early-stage and late-stage companies. In sum, it allows the company to reach late-stage organizational maturity quicker, following a safer, less volatile path and wasting less talent, energy, and resources in the process. A win for the employees. A win for the companies. A win for the VCs.
Once product-market fit is found, it really comes down to the people bringing the business model to life. This may be a great way to invest directly in their success.
The post outlines the way a shared map of enabling constraints, at different abstraction levels, can be used to maintain alignment and enable autonomy at the same time.
It covers a lot of ground, and addresses a few subtle issues towards the end of the post that I’m deliberately omitting from this summary. It’s a fairly visual piece and I’m going to play a bit with the way that I’m presenting the content to make it even more so.
Goals, used in this context to mean “what I/we want to achieve”, can be defined at various levels of abstraction. By asking ourselves “why?” we move to a goal at a higher level of abstraction. By asking ourselves “how?” we move to a goal at a lower level of abstraction.
Attaining a higher abstraction goal requires either a longer timeframe or a bigger scope to complete it, or both. A standardized set of units of time, and a standardized set of units of scope create a calibrated canvas to which goals can be anchored. Different organizations may choose different units of time and scope for their alignment efforts.
Each goal we define constrains goals further down and to the left. The constraint enables autonomy at the level below by providing clarity on the boundary between what we’re doing and not doing. Boundaries don’t have to be precise, just clear enough so everyone has a sufficient understanding of where they are.
Alignment efforts, therefore, focus on the diagonal area in the middle of the canvas. The area at the top left is too detailed, and the area at the bottom right is too vague to merit defining and aligning on.
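One way to picture this map of goals is as a simple tree, where asking “why?” walks up toward higher abstraction and asking “how?” walks down. A minimal sketch follows; the class design and the example goals are my own invention, not from the post:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """One node in the map of goals; names and examples are invented."""
    description: str
    parent: "Goal | None" = None                          # the answer to "why?"
    children: "list[Goal]" = field(default_factory=list)  # the answers to "how?"

    def how(self, description: str) -> "Goal":
        """Define a lower-abstraction goal, constrained by this one."""
        child = Goal(description, parent=self)
        self.children.append(child)
        return child

    def why(self) -> "Goal | None":
        """Move one level up in abstraction."""
        return self.parent

# Walking down the ladder of abstraction with "how?":
mission = Goal("Help teams align without sacrificing autonomy")
outcome = mission.how("Ship a shared goal map this quarter")
output = outcome.how("Publish v1 of the canvas template")
# And back up with "why?": output.why() is outcome, outcome.why() is mission.
```

The parent link is what makes each goal an enabling constraint: any goal you define at a lower level is, by construction, bounded by the goal it answers “why?” to.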
Goal types fall on a spectrum between the aspirational and the practical, and differ in the mechanisms that are used to formulate them:
Intentions describe what we aspire to achieve. Typically formulated as a vision (what is the future we are trying to create) and a mission (what we do and for whom).
Outcomes describe what we effect through what we do. Typically formulated as objectives in OKRs, or more fine-grained user stories in a product backlog.
Outputs describe what we are going to produce. Typically formulated using specifications.
Inputs describe how much time and effort we want to invest. Typically formulated using timeboxes like 2-week iterations or the number of hours that’ll be spent investigating a bug.
More practical goal categories can be used to describe goals at lower levels of abstraction. The different levels of abstraction also create natural boundaries for assigning ownership:
Foundation: The reason we are all here and contribute our efforts. The founders of an organization usually define this.
Direction: This is the top-level direction from the most senior leaders. It is solely bounded by the foundation and not by any higher-level desired outcomes.
Coordination: This is where desired outcomes across organizational scopes are coordinated.
Autonomy: The part fully left to the teams who execute it. Teams break down desired outcomes into outputs and inputs.
The heart of the alignment effort takes place in the coordination zone. Perfect alignment on a desired outcome requires clarity not only on the outcome itself but also on:
How will progress be measured?
What will be produced to make progress? (outputs)
What will be needed to make progress? (conditions)
A healthy process strives to maximize outcome and progress while keeping outputs and conditions flexible.
After the initial inception, where the map of goals inside the zone of alignment is defined, ongoing alignment takes place at the edges of the relevant time-boxes.
At the end of a time-box, the map is consulted to review what goals were achieved, as well as what progress was made towards the goals at a higher level. If the map doesn’t match reality, opt for adjusting the scope and keeping timeboxes fixed. A goal may be discontinued altogether if it is no longer desirable.
At the beginning of a time-box, the goals for the time-box are defined (planned). This is another place to check the relevance of the goals one abstraction level above, to make sure that progress is made towards a goal that still matters.
The rest of the post adds some additional color around the choice of timeframes and units of scope, how efficiency and effectiveness can be described using these building blocks, at which level of accuracy to maintain the map, the order of describing output ingredients, leadership and accountability, expectations and happiness, and the timing of committing to a goal (or deferring commitment).
While examination and reflection are, sadly, not common staples of most operating rhythms in most organizations, there is a fairly wide consensus that continuously evolving the way we work together is important to the long-term success of the business and its ability to continue to attract and keep its talent. Efforts on this front are usually labeled as managing “employee engagement” (or one of its derivatives), though I like the simpler label of “Working on Work” (WoW) to describe the efforts undertaken to improve the way we work, as opposed to efforts that are doing the work itself.
The traditional approach tends to follow a semi-annual or annual cycle of running a survey, compiling the results, and defining initiatives to address the gaps/opportunities identified. My goal in this piece is to highlight four big opportunities to make this approach significantly more effective.
One aspect that I’m intentionally leaving out of my analysis is the frequency of the cycle. I believe that shorter, more frequent reflections should be integrated into the operating rhythm in places where they don’t exist today, but I don’t think that they are a full replacement for this cycle. Some patterns take longer to be observed and some changes take longer to be effected. Therefore, there is value in this form of macro reflection at this cadence, plus or minus a quarter. So with this short disclaimer, let’s jump in.
Approach: From “one and done” to “continuous improvement”
WoW is a never-ending, “continuous improvement” effort. And yet, it is often approached as if it were a problem that can and should be solved with a one-time effort. There is an absolute benchmark for what “good enough” looks like, often on a 5-point scale, and a view of “success” as showing an improvement in scores from one period to the next. As if once we score all 5s we’ll be done and can fire half of our HR staff…
A continuous improvement approach starts at a different point: accepting the perpetual nature of the effort, it defines a distribution of our overall capacity between “doing work” and “working on work” that can be refined or adjusted from one period to the next. The systems for managing work and managing WoW will be integrated, whereas at the moment the latter seems to mostly fall outside the capacity and direction efforts used for the work itself (OKRs, budgets, etc.). The sensing/reflecting piece of the cycle will be oriented more towards setting the right direction for the efforts rather than measuring progress. More on that shortly.
Sensing: From ”false precision” to “focus and patterns”
Current sensing efforts collect evaluations using a 5-point Likert scale, and analysis consists of comparing the scores either across demographics or time periods. The absolute numerical score opens the door for false precision and misinterpretations of the scores. Many organizations tend to walk through that door.
On a more tactical level, it shows up as over-reaction to changes in the scores that are not statistically significant. On a more strategic level, it shows up as inferring causality between intervention and outcome where only correlation exists. Did our new employee recognition program lead to the increase in scores since last time? Or was it the change in the dynamics of our employee base due to all the new hires? Or perhaps the announcement we made yesterday about winning that big multi-million dollar contract? Unless we have a way to control for everything else that changed or happened in the same period, it's unlikely we'll be able to determine causality with any degree of certainty.
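To illustrate the tactical point, here is a back-of-the-envelope sketch (all scores are invented) of checking whether a period-over-period change in average survey scores is even distinguishable from noise, using only the standard library:

```python
import math
import statistics

# Hypothetical 1-5 Likert engagement scores from two survey periods.
before = [3, 4, 3, 4, 3, 3, 4, 3, 4, 3]
after = [4, 3, 4, 4, 3, 4, 3, 4, 4, 3]

diff = statistics.mean(after) - statistics.mean(before)

# Standard error of the difference in means (Welch-style).
se = math.sqrt(statistics.variance(before) / len(before)
               + statistics.variance(after) / len(after))

# Rule of thumb: a difference within roughly 2 standard errors is
# indistinguishable from noise and shouldn't trigger a reaction.
meaningful = abs(diff) > 2 * se
print(f"diff={diff:+.2f}, 2*SE={2 * se:.2f}, meaningful={meaningful}")
```

Here a +0.2 bump in the average looks encouraging, but with only ten respondents it sits well inside the noise band.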
Sensing oriented towards "focus and patterns" avoids the absolutes and reduces the risk of misinterpretation. It seeks to rank the potential areas of focus from "the one we should focus on the most" to "the one we should focus on the least," since the goal is now limited to figuring out what we should do next. It also places more weight on a different attribute of the data: its variability, examined both quantitatively and qualitatively, to inform how to target a potential change or intervention. Low variability in the top area of focus suggests an organization-wide opportunity that should be matched with an organization-wide change. High variability suggests that things are working well in some areas but not others, calling for a more local change and pointing to places where potential answers might already be found.
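A minimal sketch of this approach, with invented area names, scores, and spread threshold: rank areas by mean score to pick the focus, then use the spread of responses to choose between an organization-wide and a local intervention:

```python
import statistics

# Hypothetical 1-5 Likert responses per potential focus area.
responses = {
    "recognition": [2, 3, 2, 3, 2, 3, 2, 2],      # low mean, low spread
    "collaboration": [1, 5, 2, 5, 1, 4, 2, 5],    # low-ish mean, high spread
    "tooling": [4, 4, 5, 4, 4, 5, 4, 4],          # high mean
}

def profile(scores):
    """Return (mean, sample standard deviation) for one area."""
    return statistics.mean(scores), statistics.stdev(scores)

# Rank areas from "focus on most" (lowest mean) to "focus on least".
ranked = sorted(responses, key=lambda area: profile(responses[area])[0])

SPREAD_THRESHOLD = 1.0  # arbitrary cut-off between "uniform" and "varied"
for area in ranked:
    mean, spread = profile(responses[area])
    scope = "org-wide change" if spread < SPREAD_THRESHOLD else "local change"
    print(f"{area}: mean={mean:.2f} spread={spread:.2f} -> {scope}")
```

Note that no absolute benchmark appears anywhere: the output is an ordering plus a targeting hint, not a pass/fail score.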
Implementation: From “initiatives and projects” to “systems and mindsets”
Reactions to insights surfaced in the sensing phase tend to take the shape of initiatives and projects, often as part of the HR team roadmap, at best in collaboration with the executive team and managers. But those tend to ignore the power of existing organizational systems in shaping behaviors and perceptions. Efforts to improve collaboration will likely fail as long as individual performance bonuses are in place. Efforts to improve quality will likely fail as long as targets/goals only measure throughput and cost. The more tangible will always trump the less tangible. Furthermore, efforts tend to focus on the external environment, ignoring the powerful impact that mindsets and internal beliefs have on driving change. Yes, my manager has a part to play in me "knowing what's expected of me in my role" (a common engagement question). But so do I. Have I sought out clarity when expectations were unclear to me? If I haven't, why? What underlying beliefs led to my inaction? How can I test them and weaken their hold on me?
More effective courses of action will focus on long-lasting changes to both systems and mindsets over temporary initiatives or the addition of yet-another-program.
Ownership: From “not my job” to “everyone’s job”
We like to say that culture, a fuzzy label for the thing we change when we're working on work, is "everyone's job". Yet that is hardly reflected in the way traditional cycles are run, perpetuating the dichotomy observed by Chris Argyris 25 years ago: "Employees must tell the truth as they see it; leaders must modify their own and the company's behavior. In other words, employees educate, and managers act." If only HR has capacity allocated towards WoW, real change is unlikely to happen.
An alternative posits that everyone has "skin in the game", both in things being the way they are right now and in changing them. That means everyone must have the opportunity both to recognize their part in causing the current tension and to play an active part in addressing it. It does not mean that everyone should be involved in WoW to the same extent or in the same way. Specialization is the secret sauce of effective collaboration, but it needs to be bounded: pushed to an extreme, rigid boundary, it becomes detrimental. Working on work should never be an extracurricular activity bolted on top of an already full plate of the work itself, for any role in the organization.
As readers of this blog already know, I'm constantly on the lookout for innovative compensation approaches. "How do we redistribute the value generated by the organization to the people who created it?" is one of the most profound organizational questions, and financial compensation is one of the most tangible indicators of our values and belief systems. Any attempt to shift to a new operating paradigm without taking these two issues into account is bound to fail.
Over the last several years, Nicolas di Tada and the team at Manas Tech, a 30-person Buenos Aires-based dev shop, have carefully evolved their process for allocating pay raises. Not only did they document and share their process in:
But Nicolas was also kind enough to hop on a call with me a few weeks back and clarify some of the points that were not clear to me at first read.
In a traditional compensation review process, an autocratic decision-maker (the manager) uses quantitative inputs from a performance review to set a new salary according to a predefined salary ladder. The team at Manas sees challenges, biases, and limitations in all three of these "design elements", so they set out to design a compensation review process without them. The key design principle underlying their system is that the "wisdom of the team" will lead to a superior outcome compared to a process built on those three elements.
Their process currently works as follows:
Every 4 months, the team reviews its automated financial model to determine the portion of profit that should be allocated as salary increases. If confidence in future billable hours is lower than desired, the same amount is allocated as one-time bonuses rather than permanent salary increases.
The process runs in 3 to 5 rounds (exact number determined at the beginning of the cycle).
In each round, each team member sees the base salaries of all other team members, and the total pool of salary increases that can be allocated. They can then allocate it across team members in any way they see fit. Team members cannot give themselves a raise.
At the end of each round, the average increase that each team member received gets permanently allocated to them and subtracted from the overall pool.
The next round follows the same steps with team members also being able to see the cumulative salary increases that were already permanently allocated to each team member, and the updated (lower) total salary increase pool still remaining to incrementally allocate.
An interesting challenge from a technical/algorithmic perspective has been dealing with the "fuzzy" relationship between an individual team member's recommendation for a salary increase and the resulting increase, since the recommendations of all other team members are factored in as well. This creates an incentive to provide an input that's different from the outcome you think is appropriate, in an attempt to account for the impact of the other inputs.
In order to get as close as possible to the desired outcome, the solution that the Manas team landed on after multiple iterations is to only permanently allocate the average recommended increase and run several rounds of the process.
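A toy sketch of a single round, under my reading of the description above (all names and amounts are invented; in particular, I assume each recommender may distribute up to, but not necessarily all of, the remaining pool):

```python
# Toy model of one round of the multi-round averaging process.
def run_round(members, pool, recommendations):
    """Allocate to each member the average of the others' recommendations.

    recommendations: {recommender: {recipient: amount}}.
    Members cannot recommend a raise for themselves.
    """
    allocated = {}
    for person in members:
        # Average over every recommender except the recipient themselves.
        recs = [r.get(person, 0.0)
                for giver, r in recommendations.items() if giver != person]
        allocated[person] = sum(recs) / len(recs)
    remaining = pool - sum(allocated.values())
    return allocated, remaining

members = ["ana", "bruno", "carla"]
pool = 900.0
# Round 1 inputs: each member splits part of the pool across teammates.
recs = {
    "ana": {"bruno": 400.0, "carla": 100.0},
    "bruno": {"ana": 300.0, "carla": 300.0},
    "carla": {"ana": 500.0, "bruno": 100.0},
}
raises, remaining = run_round(members, pool, recs)
```

Because only the average sticks, disagreement between recommenders (here, ana and carla differ sharply on bruno) leaves part of the pool unallocated for the subsequent rounds.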
While the process does offer a unique solution to some of the greatest challenges of the more conventional approaches, it does pose its own set of challenges. The two that immediately come to mind are scalability and market dynamics.
The solution works well for Manas now at ~30 people, where most people know most others well enough. But what happens when there are 200 people in the org? Simply averaging the increase recommendations in each round will require many more rounds, since a smaller portion of teammates will have a non-zero recommendation for each individual. Potential solutions include finding a more permissive "discounting function" that requires fewer rounds, or following a tiered process where execs allocate the overall pool across departments, managers allocate the departmental pool across teams, and individuals allocate the team pools across individuals. Each of these comes with its own set of advantages and disadvantages.
The market dynamics tension is more challenging to resolve, and the Manas team hasn't found a one-time systemic solution to it. If we evaluate the Manas approach through a "compensation polarity" lens, it falls very close to the "internal fairness" pole. Since compensation market data and market seniority definitions (levels) don't play a part in the process, salaries may drift over time from their market comparables, leaving someone paid either significantly above or below what they would get for doing a similar role at a different company.
In sum, I'm grateful to Nicolas and the folks at Manas for taking a pretty big leap, redesigning a compensation system from scratch and breaking many of the challenging assumptions of the more conventional system. It is not perfect and not without its shortcomings, but neither is the existing system, which makes it a viable alternative offering a plausible trade-off.
This one was tough to parse but contained some solid gold nuggets that made it worth it. This post is a synthesis of two posts by Joost Minnaar, one of the two founders of Corporate Rebels:
To the best of my understanding, they are the product of Joost's PhD thesis, in which he explores how progressive organizations choose to organize in ways that minimize hierarchy and bureaucracy, which he dubs "middle managerless organizations", or MMLOs for short. While I wish the research methodology were more rigorous than case studies, Joost did aim to extend the robustness of the sample of organizations studied beyond the mid-sized, US-based core to also include large and international organizations.
The label "middle managerless" still seems rather hyperbolic to me, but both the problem framing and the patterns/archetypes he identifies in the applied solutions are quite interesting.
The organizing problem
Building on his academic literature review, Joost defines “organizing” as solving three intertwined problems:
Strategy — the problem of organizing the strategic direction of the company and related objectives. This was traditionally done by a top-management team defining short-term (monetary) goals, and now while the structure is unchanged, the management teams tend to be smaller and focus more on defining long term objectives and curating the organizational culture as a means for steering strategy.
Division of Labor — organizing "vertically" in the company. This is broken down into Organizational Structure — the problem of decomposing the objectives set by top management into tasks and roles, traditionally solved by introducing hierarchy (functional departments, etc.); and Task Allocation — the problem of assigning these tasks and roles to employees, traditionally the responsibility of middle management.
Integration of Effort — organizing "horizontally" across the company. This is broken down into Coordination — the problem of providing employees with the information they need to coordinate their actions with peers, traditionally solved by middle management through a set of rules and procedures ("bureaucracy"); and Motivation — the problem of monitoring employees' performance and distributing rewards for the tasks they have performed, traditionally done by middle management, which assesses performance and allocates rewards.
I like this construct a lot. The one thing that doesn't fully sit right with me is grouping "motivation" under "integration of effort", but it may just be the behavioristic language used to describe it. When I use simpler, more succinct language it seems to fit better:
Organizing means figuring out:
* The shared direction we want to head in together (organizing “direction”)
* Who is doing what (organizing “vertically”)
* How to complete coupled pieces of work and ensure that all work gets done (organizing “horizontally”)
Since the focus of the research is on the role that middle managers play in organizing and its alternatives, the solutions focus on the “vertically” and “horizontally” aspects of organizing where middle managers play a key role.
In addressing the "vertical" organization problem, Joost identified two key approaches that differ in what they treat as the smallest organizational building block. The first views individuals as the basic building block; they form ad-hoc teams that emerge and dissolve organically, and individuals can be part of one or more of those teams at the same time. Most Holacratic/Sociocratic systems that champion the separation of "role and soul" adopt this approach. The second views teams as the basic building block: people self-organize into permanent teams, and individuals can only be part of a single stable team at a time.
In addressing the "horizontal" organization problem, Joost also identified two key approaches, which differ in how the "rules of the game" for coordination between building blocks are defined. The first adopts a more collaborative approach, emphasizing a sense of community, belonging, and a shared mission. The second adopts a more competitive approach, introducing an internal market system for coordination, oftentimes placing a financial value on transactions and services rendered by one building block to another.
The combination of the “vertical” and “horizontal” organization approaches creates 4 organizational model archetypes:
European Model — Permanent teams, collaborative dynamics. Companies like NER Group, FAVI, and Buurtzorg are the primary examples, though it's also easy to see how Spotify fits this model.
Asian Model — Permanent teams, competitive dynamics. Haier being the leading example.
American Model — Ad-hoc teams, competitive dynamics. Companies like W.L. Gore, Morningstar and Valve.
Digital Model — Ad-hoc teams, collaborative dynamics. Mostly common in open-source projects where the collaborative motivation is non-financial like: Wikipedia, Linux and (former) GitHub.
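The 2×2 above can be captured as a small lookup table. This is a hypothetical encoding of my own: the axis labels are mine, while the archetype names and company examples come from the post.

```python
# (building block, coordination dynamics) -> (archetype, example companies)
ARCHETYPES = {
    ("permanent teams", "collaborative"): ("European Model",
                                           ["NER Group", "FAVI", "Buurtzorg"]),
    ("permanent teams", "competitive"): ("Asian Model", ["Haier"]),
    ("ad-hoc teams", "competitive"): ("American Model",
                                      ["W.L. Gore", "Morningstar", "Valve"]),
    ("ad-hoc teams", "collaborative"): ("Digital Model",
                                        ["Wikipedia", "Linux"]),
}

def classify(building_block: str, dynamics: str) -> str:
    """Return the archetype name for a vertical/horizontal combination."""
    name, _examples = ARCHETYPES[(building_block, dynamics)]
    return name

print(classify("permanent teams", "competitive"))  # Asian Model
```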
Just like with any 2×2, the real world is more complex, with multiple "hybrid" companies falling in various places between these extremes. Companies may also move along the spectrum over time, Zappos being one interesting example, moving from the "American Model" to the "Asian Model" and breaking the naming convention in the process 🙂 Nonetheless, this seems like a super useful taxonomy for codifying different organizational patterns.