“A secondment (sometimes referred to as a ‘job rotation’) is a chance to temporarily work on a different team within your organization, or in some cases, for a different organization entirely. Think of secondments as the on-the-job equivalent of exchange student programs. And just like exchange programs, they’re an excellent learning opportunity.”
A while ago, during my Opower tenure, I set up a similar program inside our software engineering department, modeled after Facebook’s Hackamonth program.
Despite their appeal from a learning and development perspective, secondments are rarely institutionalized as a formal People program. However, setting up an ad-hoc secondment rotation may not be as difficult as it first seems: it primarily comes down to a conversation with your manager and the other team’s manager to flesh out a win-win-win rotation. Atlassian suggests working through the following reflection questions to get to that alignment more quickly:
Is there a business need?
Is it beneficial to both teams?
What is the intent of the secondment?
What is the skillset the new team member will learn?
What will the benefit be to the team with the secondment member?
What will happen to the team and the individual after the secondment is over?
As far as mechanics are concerned, Atlassian recommends setting a secondment’s duration at 6 to 12 months and keeping existing pay and benefits as-is whenever possible.
There are a few ideas from the Facebook program that can be integrated into the program design as well:
Define a tenure-based eligibility criterion (“you are eligible to participate in a secondment rotation after being on your home team for x months”) and perhaps even nudge managers and teammates to consider a secondment rotation when that threshold is reached and every x months after that.
Build a shared backlog of secondment-friendly projects that either require less context to be done successfully or can effectively leverage a skill set that the team doesn’t currently have. That may enable secondment opportunities shorter than the 6–12 month range, which is a non-trivial time commitment, while still being highly beneficial for all parties involved.
One of the key challenges with setting up formal secondment programs is that they require a certain organizational scale, both to offer a diverse portfolio of secondment opportunities and to sustain teammates going on rotation for those periods of time. This often excludes smaller companies from running a secondment program, and yet those companies are often the ones that stand to benefit the most: they gain access to knowledge and expertise they don’t currently have, and they retain more of their existing teammates by offering better learning and development opportunities.
This is another example of where venture capital firms can potentially play a pivotal role, by creating a secondment marketplace across their portfolio companies and facilitating temporary, reciprocal talent exchanges that benefit all parties involved in the exchange.
Combines two of my favorite things: community and operating rhythm.
Being part of the Occupy movement in NZ was a formative experience for Richard and greatly shaped his professional journey. Among other things, it led him to create Enspiral, one of the most inspiring organizations out there, in my opinion, and one of my regular go-tos whenever I’m looking for a progressive solution to an organizational challenge. While still involved in Enspiral and continuing to make amazing contributions in this domain (like this one), more recently he’s been exploring how to generalize some of Enspiral’s “secret sauce” under the banner of microsolidarity — a small group of people supporting each other to do more meaningful work.
In essence, it expands and generalizes one of my favorite Enspiral practices, care pods, into a broader operating rhythm. Looking at it through the lens of the “community canvas”, it fleshes out a blueprint for the core set of rituals, shared experiences, and content that such a community should have.
The Community Canvas framework
I decided to title this post borrowing the more inspirational title of a different post of Richard’s, since “People, Practices, Place” seemed way too generic.
Periodic practices
The key design principle here is to focus on a predictable, steady rhythm to minimize the distraction of scheduling.
2 in-person gatherings a year, coinciding with full moons, to build connections and introduce potential new members to the community.
10 video calls on the other full moons where small pods (3–6 people) meet to help each other with their outward-facing work using “case clinic” or peer-coaching methodologies.
12 video calls on new moons for partners (see below) to work on the community — the inward-facing governance and admin work.
The synchronous meetings are used for sense-making and deliberation, while decisions are made asynchronously (using Loomio).
People
There are two membership categories to explicitly acknowledge the all-too-common participation inequality.
Partners are members with more commitment to the community and the capacity to do work on the container.
Friends get to participate as individuals and not worry too much about the big picture.
Partners choose who to invite as Friends, and their first interaction with the community must be through attending a gathering. Partners can invite Friends to become Partners after they’ve attended their second gathering (or later) and if they’re willing to take on the additional responsibilities. There’s no expectation that all Friends will become Partners eventually and no shame in staying a Friend.
Everyone is part of a Home Group (a small pod of 3–6 people), is encouraged to share leads and opportunities for working together, can be a member of a temporary working group on an internal project, and is expected to make regular financial contributions to support the internal work.
Place
The community should be organized around a physical place to simplify coordination and make it easy to increase the density of relationships.
While the 2 gatherings take place in person, the community engages virtually through an instant messaging platform for informal interactions, an asynchronous discussion platform for long-lasting information (knowledge, decisions, etc.), and a regular newsletter to maintain a consistent baseline of shared context.
I hope the minimalist and holistic nature of this skeletal community comes across clearly. The only thing I would add as a core component of a community I’m part of is a care-pods-like rhythm for personal development. But that’s certainly not a required component. Just something that I personally long for.
Discussing performance may very well be my favorite organizational can of worms to open. It’s one of the things that make work organizations exponentially more complex than other forms of organizing, and I’ve been noodling on it quite a bit over the years.
Human performance is a fuzzy concept, especially in a knowledge work context. But the way it’s utilized is a bit more straightforward. The assessment of performance is used to distribute the collective value that the organization has generated to the members of the organization. More concretely, the assessment drives decisions such as an increase in compensation, a promotion (more responsibilities and more compensation), a termination, or simply maintaining the status quo.
The silver lining here is that understanding that performance is a distributional issue gives us criteria for assessing different ways of managing it. If this is essentially a distributional issue, then what we should care about are procedural justice/fairness and distributive justice/fairness.
If you’re reading this hoping for a big “a-ha” moment and a robust solution, I’m going to disappoint you. I haven’t found a good solution. Yet. And it may very well be that the current way we’re managing performance is “the worst form, except from all the others that have been tried from time to time”, to blatantly misquote Churchill.
What follows below is my messy attempt to dissect this issue. My hope is that by doing so, some smaller pieces of the issue will turn out to be non-issues at all, while other pieces may have better point solutions emerge over time. I found it helpful to break down performance management into three sub-issues: What are we evaluating? Who is doing the evaluation? When are they doing so?
Let’s go.
What are we evaluating?
Machine performance is fairly straightforward: both outputs and inputs are visible and predictable, and the dimensions of evaluation are easily defined: speed, quality, efficiency.
Human performance, in a professional setting, is more challenging. If we look at performance as a distributive challenge, we can define it as the gap between the value we generate for the organization and the value we take out of the organization. Some simplistic examples for illustration: if I’m suddenly doing poor work and the value I’m adding to the organization is lower, while I’m still taking the same salary, we’d say that my performance has declined. Similarly, if I just got promoted and am receiving a higher salary, but am still making the same contribution I made prior to being promoted, we’d say that I’m not meeting the new performance expectations.
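In rough notation (mine, and purely illustrative, since neither side can truly be priced in shared units), the framing looks like this:

$$\text{performance gap} \;=\; V_{\text{generated}} - V_{\text{taken}}$$

In the first example above, $V_{\text{generated}}$ drops while $V_{\text{taken}}$ stays flat; in the second, $V_{\text{taken}}$ rises while $V_{\text{generated}}$ stays flat. Either way, the gap shrinks.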
A big part of the challenge in fairly managing performance stems from the gap between output and outcome. Which makes both measuring and attributing value difficult.
The value I take out of the organization certainly includes my salary, as well as costs incurred by the organization such as health care premiums or the fractional cost of renting the office. But it should probably also include harder-to-measure elements such as the time and energy I take from others. What if I get the job done, but I do it in a way that alienates and demotivates others, perhaps even to a point that causes them to leave the organization?
Unless I’m a salesperson, the value I generate for the organization is even harder to measure, since my monetary contribution is indirect. And the same interpersonal component, now in the positive case of supporting others and helping them succeed, is just as difficult to measure as the negative case.
Attributing value is challenging across two dimensions. The first has to do with the collaborative nature of work. If I were a traveling salesman selling shoe polish door-to-door, you could argue that whether I’m successful in selling the shoe polish is mostly up to me (we’ll touch on an important caveat in a moment) and therefore I should get the full credit for every sale I make. But what if I’m selling enterprise software, and the sales process required multiple conversations, including a lot of heavy lifting by the sales engineer; the product manager, the marketing manager, and the CEO all had to make guest appearances and address specific concerns in their respective domains; the engineering team had to make a small tweak in the product to make the integration work; and the support team had to commit to a non-standard SLA? How much of the credit for making the sale should I get then?
The second attribution challenge has to do with separating skill from luck or any other element that impacted the outcome and was out of our control. I’ve been obsessing recently over this thought experiment from “Thinking in Bets”:
Take a minute to imagine your best decision last year. Now take a minute to imagine your worst decision.
It’s very likely that your best decision preceded a good outcome and your worst decision preceded a bad outcome. That was definitely true for me, and it’s a classic case of “outcome bias”: we tend to conflate good outcomes with good decision-making (skill). Which one should we get credit for? Is it fair to get credit, or be blamed, for things that were out of our control?
A holistic solution may be found by integrating some of the imperfect pieces below, or by taking a completely different approach:
Goals — successful execution against (SMART) goals is one of the most common approaches to measuring performance. While this approach has many flaws, some can be rectified, for example by using relative goals rather than absolute ones.
Career ladders — thoughtfully designed career ladders define a connection between a certain level of contribution to the business and a certain level of compensation. Good rubrics span “being good at what you do”, “being a good teammate”, and “having an impact on the business”. In addition to avoiding the short-termism often associated with goals-based performance management, they also partially mitigate some of the side effects of exclusively focusing on just one of the three categories. However, this “balanced scorecard” approach amplifies the “who?” challenge in the performance management process, which we’ll cover in the next section.
Track record — if you beat me in one hand of poker, it’ll be hard to tell whether you’re a better player than me or you just got lucky. But if you beat me in 90 out of 100 hands, it’d be fair to say that skill must’ve had something to do with it (see the quick sketch after this list). It’s incredibly difficult to measure a track record in a knowledge work setting, but that has not discouraged companies like Bridgewater Associates from trying. Some may even say, quite successfully.
Learning — in some ways, we can think of learning as the derivative, in the mathematical sense, of the value that we create, and as a leading indicator of the value we will create. Similar to the way that velocity is the derivative of the distance we’re traveling and a leading indicator of the distance that we will travel. A focus on learning helps address some of the value attribution issues, but it does not simplify its measurement by much.
Avoiding definition — perhaps the purest and most chaotic solution. The case for it is best articulated here. But in a nutshell, since this is a multi-party distribution issue, we may not need a universal definition of performance or fairness. As long as all parties agree that the distributive process and outcome are fair, we are good. Even if they’ve reached those conclusions using different evaluation processes. Deloitte’s “I would always want this person on my team” and “I would give this person the highest possible compensation” sit better with me than the more standard expectations-based rubric (meets, exceeds, etc.).
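To make the track-record intuition concrete, here’s a minimal sketch (my own illustration, not drawn from any of the sources above) that models each hand as a fair coin flip between two equally skilled players and asks how likely a 90-out-of-100 record would be on luck alone:

```python
from math import comb

def prob_at_least(wins: int, hands: int, p: float = 0.5) -> float:
    """Probability of winning at least `wins` of `hands` hands
    when every hand is pure luck (per-hand win probability `p`)."""
    return sum(comb(hands, k) * p**k * (1 - p)**(hands - k)
               for k in range(wins, hands + 1))

print(prob_at_least(1, 1))     # 0.5      -- a single hand tells us nothing
print(prob_at_least(90, 100))  # ~1.5e-17 -- luck alone is wildly implausible
```

A single hand is a coin flip, while a 90/100 record is roughly a one-in-10^17 event under pure luck. The catch, of course, is that knowledge work rarely produces 100 cleanly comparable “hands”.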
Who should be evaluating performance?
The standard approach to performance management tasks managers with evaluating the performance of their team members. A big part of that makes sense: it’s the manager’s job to align the effort of their team with what the organization as a whole is trying to accomplish, so they’re in the best position to evaluate how an individual contribution supports those efforts.
However, research suggests that:
Managers overrating their team is an enduring, scientifically proven fact in companies. It’s most pronounced where performance ratings are used to determine compensation, where it’s difficult to assess an employee’s true competence, and where the manager and employee have a strong relationship. (references here)
The collection of peer feedback as input to the manager’s evaluation and the use of a cross-managers calibration exercise help mitigate some of this effect but not all of it, and significantly increase the level of effort in the overall process.
Having individuals evaluate themselves is certainly an option, but here as well, research suggests that it may not lead us to a process and outcome that would be considered fair:
Self-perceptions correlated with objective performance roughly .29 — a correlation that is hardly useless, but still far from perfection (source)
Our immediate team is the next usual suspect, as the people with the most visibility into our work and contribution, though with a more limited view of the impact we have on the organization at large. Furthermore, if some skills-based elements are integrated into our definition of performance (career ladders), then it doesn’t make much sense to have a more junior/inexperienced member of a team assess the skill of a more senior/experienced one.
This, among other things, is what led Google down the path of having a standalone committee of more senior members evaluate the performance of more junior members of the organization. However, while they are able to evaluate the skill elements of performance more accurately, being so disconnected from the actual context in which those skills are applied makes it harder to assess the impact components, increasing the risk of “confusing motion for progress”.
For team members who are not in an individual contributor role, another option becomes available: under a servant leadership paradigm, the people the leader is serving should be the ones evaluating their performance. However, these roles are typically designed to include responsibilities beyond the team being served, which that team may not have visibility into. Power dynamics will also add some distortion to the evaluations. And the same competency question exists — using a more extreme analogy: as patients, are we able to evaluate how good our doctor is, or just their bedside manner?
We can also look at a more generalizable case of the servant leadership model and argue that we all have customers/stakeholders that we serve, whether internally or externally. Therefore, they should be the ones evaluating our performance since, at the end of the day, the value we generate goes to them. We can think of our manager as a customer (in their alignment-of-efforts capacity), we can think of our teammates as customers (utilizing our advice, feedback, and expertise), and of course, the people who are using our work product, be it another engineering team, the organization at large, or external customers. Having multiple customers/stakeholders provide their evaluation on the pieces of value that matter to them certainly complicates things, but not in an insurmountable way. Yet the patient/doctor challenge still applies, to an extent, in this setting as well.
When should we be evaluating?
Perhaps less contentious than the first two sections, the timing and synchronization of the evaluation process are not without challenge either.
Performance evaluations are typically done in a synchronized cycle once a year. The synchronicity allows for calibration and a fair allocation of budget, while the annual cadence supports the observation of long-term patterns of performance (it smooths out short-term fluctuations) and keeps the overall process effort in check. The annual cadence, in particular, has drawn a lot of criticism. Some of it is justified, like the impact recency bias has on the process, and the obligation to address significant changes in performance, in either direction, more quickly than once a year. Some of it is unjustified: you can and should provide and receive developmental feedback more frequently than once a year, and you can and should set, modify, and review goals more frequently than once a year. Neither has anything to do with the cadence by which you evaluate performance. If we stick to a synchronous, cadence-based approach, a review every six months seems to be a better anchor point.
However, there is a growing body of alternatives to the synchronous, cadence-based approach that happen ad-hoc, based on a natural trigger or an emerging need. Deloitte’s project-based work lends itself well to an evaluation that’s triggered at the end of the project (projects usually take 3–9 months). The Bridgewater Associates approach mentioned above essentially conducts a micro-evaluation at the end of every meeting/interaction to construct a track record that’s rich enough for analysis. In some self-managed organizations, the process that can lead to a team member’s termination can be triggered at any time by any member of the team. And in others, team members can decide when they want to initiate the process for reviewing and updating their own salary. A key concern here is that unintended bias will be introduced into the process: someone with stronger self-advocacy skills or a more positive perception of their performance will trigger the process more frequently than someone who is just as worthy of a raise but is more humble or less confident in their skills. Some of those concerns can be mitigated by putting in place some cadence-based guardrails (an automatic review X months since the last review) and having a strong coaching and support culture.
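As a minimal sketch of what such a guardrail could look like (the function, names, and the 6-month default are all hypothetical, and the original’s “X months” deliberately stays a parameter):

```python
from datetime import date, timedelta

def review_due(last_review: date, today: date, max_gap_months: int = 6) -> bool:
    """Guardrail: flag an automatic review when more than `max_gap_months`
    have passed since the last review, regardless of who triggers it.
    A month is approximated as 30 days to keep the sketch dependency-free."""
    return today - last_review > timedelta(days=30 * max_gap_months)

# The quieter teammate who never self-triggers still gets reviewed on cadence.
print(review_due(date(2020, 1, 15), today=date(2020, 9, 1)))  # True: overdue
print(review_due(date(2020, 6, 1), today=date(2020, 9, 1)))   # False: within the window
```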
In Sum
So where does this 2,000+ word essay leave us? Definitely not with a comprehensive solution. But that wasn’t the intent either.
Understanding that performance is a distributive challenge, balancing value generated and value taken, was illuminating to me. It then allowed me to understand the core drivers behind the difficulties in assessing those elements, and how different solutions either mitigate or amplify them. While it’s even clearer to me now that there may not be a perfect solution, it does seem like some solutions are better than others. At least in my mind.
But perhaps most importantly, it made me realize how many of the core beliefs and assumptions that underlie this foundational organizational practice are left implicit and uncommunicated. The simple act of making them explicit, coupled with painting the whole landscape of alternative solutions and the trade-offs that they entail, can go a long way in driving a stronger collective sense of fairness in any organization, regardless of the assumptions/beliefs it holds, the trade-offs it chooses, and the solutions it implements.
And the enhancement of the collective sense of fairness is exactly what we were trying to do, to begin with.
Congratulations, VC partner. One of your portfolio companies just found product-market fit!
They’ve graduated from a startup, a temporary organization designed to search for a repeatable and scalable business model, to an early-stage growth company. As you know, they’ll probably spend the next couple of years, and hire their next 250 employees, in this leg of their journey.
There’s just one problem. They are not really set up to scale.
They’ve spent all their time, as they should have, iterating on a product that solves a real need. But now they need to build an organization that will continue to build and evolve that product in the long run. The culture is completely informal and heavily reliant on the founder being in the room. The small management team has limited managerial experience. The recruiting process is a mess and riddled with bias. Compensation is bespoke and out-of-whack with the market. Career ladders? Professional development? Forget about it! Should I go on?
On their journey to being set up for scale, they’ll be clawing their way, by hook or by crook, flying by the seat of their pants, hoping that the tidal wave of organizational debt that they have accrued, and will continue accruing along the way, doesn’t come crashing down on their heads. They’ll be stretching their office manager way beyond the reasonable boundaries of their job, asking them to own benefits, immigration, and compliance on top of their day job. They’ll look to their sole recruiter not only to flawlessly execute the hiring plan but also to design a bias-free recruiting process and train everyone in the company on their part in it. And both of us already feel bad for their first Head of People, tasked with the superhuman endeavor of being the subject matter expert on L&D, compensation, benefits, culture, org design, performance management, compliance, and everything in between, seamlessly context-switching across domains and from getting whatever needs to get done today to thinking strategically about the needs of the organization 1–2 years from now.
You’ve seen this pattern time and time again, so you know this means they’ll be entering the uncanny valley of employee experience. No longer a startup, the appeal to the early-stage folks is lost. With the existential risk to the business mostly mitigated, the financial upside is still substantial but not what it used to be. And there’s no way they can compete with the kind of support and comfort that the later-stage unicorns are able to offer their staff. Recruiting will become a grueling endeavor on a whole new level, and regrettable attrition will be notably higher than you’d like it to be.
But hey, this is really out of anyone’s hands, right? Building a world-class People team really doesn’t make any economic sense at their current scale. Not to mention that the challenge they can offer to the people with the necessary expertise is not big and interesting enough to attract them. So there’s really nothing that you or anyone else can do other than grit your teeth and hope they make it through this adolescent, growing-pains phase of the business.
Or is there?
Zooming out
In a couple of years and another 200 employees, all the puzzle pieces will fall into place. They’ll have the scale to justify a world-class People team and challenges that are sufficiently interesting to attract that kind of talent. If only there were a time machine that would allow us to bring that future into the present…
If you’ve read this far, I have some great news to share: we don’t need a time machine. We just need to zoom out and change our perspective. The key to this conundrum is this: while depth of expertise is required from day 1, in some cases it is not needed as frequently or as intensely as it will be on day 750. And in other cases, using a best-in-class solution, even if not a custom-fit one, is better than what they can cobble together on their own. Take manager training, for example. Skilled managers are needed from day 1, but the company may not have the volume at first to run frequent training cohorts. Great instructional design coupled with evidence-based content will likely be good enough for building a solid foundation, even if it’s not 100% fitted to the (still evolving) organizational values. Or consider the recruiting process. Adopting a battle-tested blueprint will likely yield better outcomes than an emergent design that (unknowingly) repeats old, known mistakes.
These simpler needs mean that we can look for a solution to the scale gap across companies. And no one is in a better position to implement such a cross-organizational solution than you, VC partner. You’ve already staked a large sum of money on their success, and on several other companies at a similar growth stage with similar challenges. And since VCs rarely make competing bets in the same portfolio, you have a vested interest in helping as many of your portfolio companies as possible succeed. It’ll look something like this:
The shared people services team, housing the subject-matter experts in the various people disciplines, starts off being provided entirely by VC staff, supporting more junior, generalist points of contact in the portfolio companies. Training and development programs are offered across the portfolio. Over time, as portfolio companies continue to grow, they gradually hire their own experts and bring the programs and services in-house, eventually reaching the standard end-state of having a completely decoupled, standalone people function.
[Diagram] Start state: VC staff provides shared people services, and training and development programs span the portfolio. End state: the entire shared people services team is in-house, and training and development programs span departments.
Making this shift stops the accrual of organizational debt sooner in the company lifecycle, and perhaps even starts its repayment. It creates a better experience for early growth-stage employees, closing a big chunk of the competitive talent gap between early-stage and late-stage companies. In sum, it allows the company to reach late-stage organizational maturity quicker, following a safer, less volatile path and wasting less talent, energy, and resources in the process. A win for the employees. A win for the companies. A win for the VCs.
Once product-market fit is found, it really comes down to the people refining the business model and bringing it to life. This may be a great way to invest directly in their success.