OWKRs (not a typo)

Pictured: the Great Auk penguin (inspired by the O’Reilly “Effective AWK Programming” book cover)

Goals are a core practice in many organizations and therefore the topic of several blog posts in this publication, from “Goals: connecting strategy and execution”, through “Why setting ambitious goals backfires” and “Goals gone wild”, to “How we align our goals”. 

The challenge with goals is captured beautifully when we look at them through the framework outlined by Donald Sull in the first piece above, which identifies 4 different uses for goals: 

  1. Improve individual performance
  2. Drive strategic alignment
  3. Foster organizational agility
  4. Enable members of a networked organization to self-organize their activities 

#1, in particular, is rife with pitfalls and tends to draw most of the heat when a case against goals is made. Yet if we think about goals less as a target to be hit and more as an intent to align on — it’s clear that they play a critical role in supporting #2. 

More on this here

Abandoning goals altogether is probably a no-go. So how can we shift the way we set and articulate goals to be more supportive of alignment?

Much has been written about OKRs, the most popular goal structure in use today, and in recent years more nuanced pieces have addressed some of the common pitfalls in how they are phrased and set. For example, avoiding the “OKR cascade”. However, none that I know of has suggested any changes to the OKR structure itself. Which is what I’m intending to do today. 

If we intend to use OKRs primarily as an alignment mechanism, the structural gap becomes clear: the “objective” describes the goal that we’re working towards, but it doesn’t connect it to the broader strategy. It doesn’t help answer the most meaningful question that a conversation should be centered around: 

Why is this goal the best thing you could do to advance our strategy?  

It is in answering this question that the biggest assumptions and interpretations are being made and the risk of meaningful misalignment is highest. Yet, we leave the answer to that question implicit, hoping that all parties involved are skilled enough to uncover it on their own. 

No more. Introducing: OWKRs. 

A small, but meaningful tweak to the traditional OKR structure: 

  • Objective
  • Why? (new) — a short (2–3 sentences) explanation of why this goal is the best thing that you could do to advance the strategy. 
  • Key Results 
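
To make it concrete, here’s a hypothetical example (the objective and numbers below are made up, for an imaginary product team): 

  • Objective: make onboarding fully self-serve for new customers.
  • Why? Our strategy bets on product-led growth in the SMB segment, and onboarding is currently our biggest drop-off point. Making it self-serve removes the largest barrier between sign-up and realized value.
  • Key Results: 80% of new customers complete onboarding without contacting support; median time-to-first-value drops from 14 days to 3.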

My hypothesis is that making the “Why?” explicit in the structure will shift the focus of the O(W)KR setting conversation to discussing the underlying assumptions in selecting the objective and catching any critical misalignments sooner. 

And as a bonus point, OWKR is an anagram for “work”… 🙂 


Visualizing the voice of the employee [Coolen]


Patrick Coolen is the Global Head of People Analytics, Strategic Workforce Planning and HR Survey Management at ABN AMRO, the third-largest bank in the Netherlands. Recently he penned a great piece about one of my favorite topics:

Visualizing the voice of the employee

In this piece, Coolen outlines how they conduct and digest engagement survey data at ABN AMRO. 

Data collection

The engagement survey is SUPER simple and light-weight, containing only 3 questions: 

  1. How likely are you to recommend our organization to a friend or relative as an organization to work for? (quantitative, NPS-like question)
  2. What is our organization doing well as an employer? (qualitative, “Top” question)
  3. What could our organization do better as an employer? (qualitative, “Tip” question)

To get a more continuous view of the data while avoiding survey fatigue, they leverage ABN AMRO’s size: the survey runs monthly, but only 1/12 of the employees are asked to take it each time, using a stratified sampling approach to ensure that each monthly sample is representative. 
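
A minimal sketch of what such a sampling scheme could look like, assuming a pandas DataFrame of employees and a single business-line column to stratify on (the real stratification variables are surely richer):

```python
import pandas as pd

def monthly_stratified_sample(employees: pd.DataFrame, month: int,
                              strata_col: str = "business_line",
                              frac: float = 1 / 12) -> pd.DataFrame:
    """Draw ~1/12 of employees, proportionally per stratum, so each
    monthly sample mirrors the organization's composition."""
    return (employees
            .groupby(strata_col, group_keys=False)
            .sample(frac=frac, random_state=month))  # new draw each month
```

A production version would likely partition employees into 12 disjoint cohorts up front, so that no one is surveyed twice in the same year.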

I LOVE the lightweight approach and the balance of a single quantitative question and the two “top & tip” open-ended qualitative questions, as well as leveraging the size of the organization to reduce survey fatigue without jeopardizing the quality of insights.

My one nit is that I’m not a huge fan of the NPS-like quantitative question and would probably replace it with a different quantitative metric that has a causal link to performance. 

Data analysis

The extreme simplicity of the survey and the open-endedness of the qualitative questions do create some non-trivial data analysis challenges in classifying the responses, which Coolen’s team did a brilliant job of overcoming. 

First, they “normalized” the responses: translating them all to a single language (English), splitting responses with multiple subjects, lower-casing all text, removing punctuation, and lemmatizing key words. 

Then, they evaluated several machine learning classification algorithms, landing on Support Vector Machine as the best candidate, and refined its precision further using a supervision process. 
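
For the curious, a minimal sketch of such a pipeline using scikit-learn; the training examples and topic labels below are made up, and the real feature engineering and training set are certainly far more elaborate:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labeled set of already-normalized responses. In practice there
# would be thousands of labeled examples spread across ~150 topics.
responses = ["great learning opportunities", "laptop is slow",
             "flexible working hours", "training budget too small"]
topics = ["L&D", "IT", "Work-life balance", "L&D"]

# TF-IDF features feeding a linear-kernel SVM classifier.
classifier = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    LinearSVC(),
)
classifier.fit(responses, topics)

print(classifier.predict(["more budget for courses please"]))  # likely ['L&D']
```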

The output of the data analysis phase is the classification of all responses into one of 150 topics, which, in turn, roll up to a smaller set of “expert domains” (Recruiting, L&D, IT, etc.). 

Data visualization

The data is then presented and made available to the entire organization using the bubble chart below where each bubble represents a topic: 

source: Patrick Coolen
  • The more responses map to a topic, the larger its bubble.
  • The more a topic showed up in “top” responses rather than “tip” responses, the higher its bubble. 
  • The more positive the responses to the quantitative question were when the topic was brought up in the qualitative questions, the further to the right its bubble. 

The area of the chart can be segmented into 4 quadrants driving different actions: 

  • Topics (bubbles) in the top-right — Celebrate — things that the organization does well and are positively correlated with the quantitative measure. 
  • Topics (bubbles) in the bottom-left — Focus Areas — things that the organization does not do well, and are negatively correlated with the quantitative measure. Therefore, they are the areas where the opportunity for impactful change is the highest. 
  • Topics (bubbles) at the bottom-right — Suggestions — things that the organization does not do well, but are not negatively correlated with the quantitative measure. 
  • Topics (bubbles) at the top-left — Investigate — things that the organization does well but are still negatively correlated with the quantitative measure. Since this is an anomalous pattern, it is worthy of further investigation. 
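
Putting the encoding and the quadrants together, here’s a minimal matplotlib sketch with made-up topic aggregates (the real chart is an interactive dashboard with much richer filtering):

```python
import matplotlib.pyplot as plt

# Made-up per-topic aggregates: (x, y, size) where
#   x    = avg. quantitative score of respondents mentioning the topic
#   y    = balance of "top" vs. "tip" mentions (positive = mostly "top")
#   size = number of responses mapped to the topic
topics = {
    "Culture":      (8.6,  0.6, 410),  # top-right: celebrate
    "Job security": (5.0,  0.4, 200),  # top-left: investigate
    "IT systems":   (4.8, -0.5, 540),  # bottom-left: focus area
    "Facilities":   (7.8, -0.3, 150),  # bottom-right: suggestions
}

fig, ax = plt.subplots()
for name, (x, y, size) in topics.items():
    ax.scatter(x, y, s=size, alpha=0.5)
    ax.annotate(name, (x, y), ha="center")

# Dashed lines mark the quadrant boundaries.
ax.axhline(0, linestyle="--", color="grey")
ax.axvline(6.5, linestyle="--", color="grey")
ax.set_xlabel("Avg. quantitative score when topic is mentioned")
ax.set_ylabel('"Top" vs. "tip" balance')
plt.show()
```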

The chart can also be filtered by time, business line, role, etc. to draw more refined insights which are then reviewed and acted upon in quarterly business reviews. 


Net-net I think this comes pretty darn close to the best way to surface insights out of a “working on work” exercise. Acting on them effectively will be the next hurdle to overcome. 


The evolution of Cynefin [Snowden, Corrigan]

I first learned about Dave Snowden’s Cynefin model at a Lean-Kanban conference circa 2015–16 and have made references to it in a handful of blog posts in the past [1, 2]. 

It first received broad recognition in a 2007 HBR piece titled A Leader’s Framework for Decision Making. On March 1 (St. David’s Day) 2019, Snowden took it upon himself to write a series of blog posts (5 in total) covering updates to the model, and on this year’s St. David’s Day, he decided to turn it into an annual ritual. 

Cynefin St David’s Day 2020 (1 of 5)

Chris Corrigan took it upon himself to aggregate the model and the key changes here:

A tour around the latest Cynefin iteration

And I am going to attempt to distill it even further. This is going to be a challenging post to write and I know the end product is not going to be great, both because the subject matter is difficult, and because I have yet to master the framework. But that’s exactly the point of writing about it…

First, a quick orientation: the Cynefin model is designed to aid decision-making and inform actions, recognizing that the decision-making process leading to the best action is different based on the context (domain) — the environment/situation — in which the action needs to be taken. 

The model discerns between 5 different domains. The two on the right (Clear, Complicated) are “ordered” domains where the environment is mostly knowable and predictable and problems are solvable. The distinction between those two domains is more nuanced and is a function of the number of parts in the system/situation: the higher the number, the deeper we go into the Complicated domain, and the higher the level of expertise required to know the right answer. 

The two on the left (Complex, Chaotic) are “unordered” domains where the environment is mostly unknowable and unpredictable. In the Complex domain, phenomena such as emergence and self-organization exist, but those are enabled by some constraints. In the Chaotic domain, there are no meaningful constraints, leading to semi-random behavior. 

Going counter-clockwise (Clear -> Complicated -> Complex -> Chaotic), constraints are removed and the situation becomes more unordered and unstable. Going clockwise, constraints are added and the situation becomes more ordered and stable. 

In the middle is the Confusion domain, broken down into “Aporetic” (“at a loss”), where the confusion is unresolved or paradoxical, and “Confused”, where we just haven’t fully understood the situation yet (a more temporary state). 

I’m going to keep the green sections indicating liminality out of the scope of this post for the time being. 

Putting the framework to action

Almost any situation that requires a response has multiple aspects, each mapping to a domain. 

Step 1 is decomposing the situation into its various aspects. 

Step 2 is mapping each aspect to its respective domain: 

  • A clear and obvious aspect where things are tightly connected and there is a best practice → Clear
  • An aspect with a knowable answer or a solution, which has an endpoint, but requires an expert to solve it for you → Complicated.
  • An aspect with many different possible approaches, and uncertainty around which is going to work → Complex.
  • An aspect that is a total crisis, which completely overwhelms you → Chaotic.
  • Aspects whose domain is still unclear should be left in the middle, in the “Confused” domain. 

Step 3 is applying the appropriate approach to the aspects in each domain: 

  • Clear (Sense → Categorize → Respond): just do them.
  • Complicated (Sense → Analyze → Respond): research using literature and experts, make a plan, and execute.
  • Complex (Probe → Sense → Respond): get a sense of the possibilities, try something, and watch what happens. As you learn things, document practices and principles that guide in making decisions. If rules are too tight, loosen them. If rules are too loose, tighten them. 
  • Chaotic (Act → Sense → Respond): apply constraints quickly and maintain them until the situation stabilizes. 
  • Confusion: monitor those aspects and re-evaluate as new information becomes available and may help classify them into the appropriate domain. 

Key changes in the framework 

  • Renaming the first domain as “Clear” instead of “Simple” (or “Obvious”)
  • Highlighting the roles that constraints play in each of the domains: fixed constraints (Clear), governing constraints (Complicated), enabling constraints (Complex), no constraints (Chaotic). 
  • Renaming the middle domain to “Confusion” (from “Disordered”) and decomposing it into “Aporetic” and “Confused”. 
  • Adding liminal boundaries around the Complex domain. 
  • Adding approach “labels” in addition to approach sequences: best practice (Clear), good practice (Complicated), exaptive discovery (Complex), novelty under stress (Chaotic). 

Hill Charts [Basecamp]

Source: Basecamp

A few weeks ago I wrote a piece in The Ready publication titled “Ending the Tyranny of the Measurable”, describing the price we pay for our obsession with the quantitatively measurable and offering alternatives for some common use cases. 

This week, I want to add another tool to the toolbox, courtesy of the Basecamp team: 

Hill Charts 

Oddly enough, this post is not even new. It’s 2 years old by now but just got on my radar this past week. 

The premise is very simple: numerical progress tracking is not very insightful. What can we learn from knowing that a project is 42% complete? 

The path towards progress is different depending on what the blocker might be, not to mention that the scope may still be evolving given that unknowns exist. 

Hill charts use the metaphor of a hill to discern between two phases in every problem-solving task. The uphill part is the divergent phase, where we’re still figuring out different approaches to the solution, and the downhill part is the convergent phase, where we’ve figured out a solution and it’s mostly a matter of execution. 

Source: Basecamp
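
The chart itself is simple enough to sketch. Here’s a minimal matplotlib version with made-up tasks and positions (Basecamp’s implementation is an interactive widget, not a static plot):

```python
import numpy as np
import matplotlib.pyplot as plt

# Position runs 0..100: 0-50 is uphill (figuring it out),
# 50-100 is downhill (mostly execution).
x = np.linspace(0, 100, 200)
hill = np.sin(np.pi * x / 100)  # the hill itself

# Made-up tasks and their subjective positions on the hill.
tasks = {"API design": 30, "Data migration": 55, "UI polish": 85}

fig, ax = plt.subplots()
ax.plot(x, hill, color="grey")
for name, pos in tasks.items():
    height = np.sin(np.pi * pos / 100)
    ax.scatter(pos, height, zorder=3)
    ax.annotate(name, (pos, height),
                textcoords="offset points", xytext=(0, 10), ha="center")

ax.set_yticks([])
ax.set_xticks([0, 50, 100])
ax.set_xticklabels(["Start", "Top of the hill", "Done"])
plt.show()
```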

Hill charts offer a more qualitative, subjective way to reflect progress by positioning a task at a certain point on the hill. Not only does this avoid the false precision of numerical progress tracking, it also captures relative progress across tasks directly, through their different positions on the hill, avoiding the proxy of numerical comparisons. Furthermore, reflecting on progress through a hill chart can help direct our attention to the more appropriate strategy for removing blockers or making more progress depending on the problem-solving stage, and act as a trigger for decomposing tasks when we realize that two different pieces are in different places on the hill. And lastly, it helps us avoid misleading numerical aggregation when we zoom out to look at the portfolio level, because it’s clear that the underlying project-level assessments are subjective. 

Taking a snapshot of the hill every time we move a task around can serve as a powerful retrospection tool when we look back and aim to learn from our experience completing the project. 

At its core, Hill Charts shift progress tracking from a one-dimensional to a two-dimensional concept, and from a discrete to a continuous one, which brings it closer to its true essence in our complex reality. 

As I was learning about Hill Charts, I was immediately reminded of the double-diamond design process, so my only suggested tweak would be to turn them into Double Hill Charts, capturing the pre-engineering phases as well. 

The double-diamond design process

What makes a team effective?

What makes a team effective is a question I’ve asked myself and thought about multiple times. I’ve also written a bit about it here and here. More recently, I found myself revisiting this topic as I was preparing for an executive workshop aimed at helping the team work better together. 

The team is the sum of its parts

The more common approach to this question takes the perspective that the team is the sum of its parts: the individuals that are on the team, their traits, strengths, weaknesses, and preferences, and the way those interact with one another. This then leads to utilizing some sort of an individual assessment, such as Insights, Hogan, or even Enneagrams, as a tool for capturing a simplified representation of the individuals on the team, understanding them in isolation, and then looking at the team aggregate to understand their interplay: the areas in which the team as a whole is particularly strong, or likely to have blind spots. Personally I prefer exercises that don’t reduce people to a “type”, for a whole set of reasons I’ve listed here, but regardless of the method you choose, there’s definitely value in looking at the team through such a lens. 

But a team is much more than the sum of its parts

However, to really understand teams we also need to look at them holistically, as teams. If we think of teams as complex systems (any human system is a complex system), some attributes of the system will only manifest themselves at a certain level of the system and not at others, because those attributes are a result of the interactions between the parts, not of the parts themselves. I know this sounds pretty abstract but hopefully, the more concrete examples below will make it more tangible. 

So I started looking for frameworks that’ll help the team diagnose where they currently are and what they should focus on first. Focused action is critical to making progress. My key criteria were a framework whose elements are as MECE (mutually exclusive, collectively exhaustive) as possible, and that’s granular enough to drive focused action. Building on my own experience and additional research, I ended up with 5 candidate frameworks. 

Runners-up

I considered Lencioni’s “5 Dysfunctions of a Team”, Wageman and Hackman’s “What makes a team leadable?”, Google’s “Project Aristotle” and Atlassian’s “Team Health monitor for leadership teams”. All frameworks rang true but didn’t fully pass the comprehensiveness test. Atlassian’s was my strongest runner-up but still seemed to have some fuzzy overlap between the attributes — not as mutually exclusive as I’d wanted it to be. 

Lencioni’s “5 Dysfunctions of a Team”
Wageman and Hackman’s “What makes a team leadable?”
Google’s “Project Aristotle”
Atlassian’s “Team Health Monitor for Leadership teams”

The winner

Oddly enough I ended up going with a framework of unknown origin. It was used in a leadership team assessment that I took 3 years ago, but I wasn’t able to track down its source. 

The top-level distinction between impact, governance, and interaction really resonated with me. It creates a clear separation between the work that the team is doing together (impact) and how it’s getting done, separating the latter into the more mechanical/procedural pieces (governance) and the more relational pieces (interaction). The next-level-down attributes are also helpful in zeroing in on the issue that’s most critical to tackle first. The team will likely end up with different solutions for tackling “clarity and alignment” issues vs. “escalation/resolution” issues. Issues around “information flow” will require a different course of action than issues around “decision-making process”. 


Doing surveys right (and some remote work insights)

Long-time readers of this blog know that I hold some pretty strong opinions about the ill-use of surveys. More recently I wrote about it in “Working on Work” and in my guest contribution to The Ready magazine, “Ending the Tyranny of the Measurable”. At a more tactical level, the overuse and abuse of the Likert scale have a prominent spot on my rather short pet peeves list. 

I had a chance to put my $$$ where my mouth is, working on a recent project where surveying was essential to getting the insights I was looking for. It was a short (17-question) survey about remote work where I wanted to learn from long-time practitioners about their perspective on the key advantages, challenges, and essential practices of remote work. This was in early Feb 2020, mind you, pre-COVID-19.

I was looking for relative insights: which advantages do practitioners find to be the biggest? Which challenges do they find to be the biggest? Etc. Rather than use a series of Likert scale questions, where respondents would have to rate each advantage on a 1-to-5 scale from “not important” to “very important”, I used a single question asking respondents to sort/rank the advantages from the one most meaningful to them to the one least meaningful to them.

The end results, courtesy of SurveyGizmo’s beautiful reporting feature, looked like this: 

Top-3 advantages: 

Top-3 challenges: 

In addition to immediately getting the relative ranking, which was what I was looking for, it was also super easy to glean some high-level statistical insights from the responses. The advantages had more differentiated winners (larger differences in the overall score between #1, #2, and #3) and responses were more consistent: most people chose the same advantages to be the top ones, as indicated by the predominantly positive “rank distribution”. The top challenges, on the other hand, won by a very small margin and responses were more polarized: some chose the winners as the top challenges while others didn’t, as indicated by the rather neutral/even “rank distribution”. 
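
Under the hood, rank questions are typically scored with a Borda-style count: with N items, a 1st-place rank earns N points, 2nd place earns N-1, and so on, summed across respondents. I can’t verify SurveyGizmo’s exact formula, so treat the sketch below as illustrative:

```python
from collections import Counter

def borda_scores(rankings: list[list[str]]) -> Counter:
    """Sum Borda points across respondents: with N items, rank 1
    earns N points, rank 2 earns N-1, ..., rank N earns 1."""
    scores = Counter()
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position
    return scores

# Made-up responses: each inner list is one respondent's full ranking.
responses = [
    ["flexibility", "no commute", "focus time"],
    ["no commute", "flexibility", "focus time"],
    ["flexibility", "focus time", "no commute"],
]
print(borda_scores(responses).most_common())
# -> [('flexibility', 8), ('no commute', 6), ('focus time', 4)]
```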

Top-3 mastered practices: 

Top-3 critical practices: 

The gap between the practices practitioners have mastered and the practices they find most critical for remote work was the core insight I was looking for, as it highlights the areas where tools, programs, and support can be most beneficial. While the #1 item is shared across both lists, the #2 and #3 most critical practices didn’t make the top-3 most mastered practices. 

Full survey results can be found here for those curious. 

Conclusion and important disclaimer

While this survey was an awesome methodological validation (using a sort/rank question, rather than a series of Likert scale questions, to surface the insights I was looking for), it did not provide me with the clarity and directional confidence I was hoping for. This was due to the low number of total respondents (<100), leading to results that are not very robust/statistically significant and preventing any further analysis using the demographic/psychographic data I collected. 


Remote Work Canvas

In recent weeks, and most likely in the upcoming months, many organizations are finding themselves in the precarious situation of transitioning their entire staff to working remotely. 

Many understand that a successful transition requires more than just making sure that their IT stack functions properly and staff can access the tools and data they need to do their jobs. Working well remotely also means working in a completely different context and collaborating in different ways. 

Beyond being a knowledge challenge, it’s a far greater behavioral challenge. 10 minutes of googling will turn up lots of good resources on how to work well remotely. But without creating the time and space for “meaning-making”, much of their value will be lost. We need to think through “what does this mean for me, in my unique context?” and “how might I implement this piece of advice?” for that value to be realized. 

I created a simple, remote-friendly, tool/exercise to help facilitate individual meaning-making reflection and peer dialogue around this challenging transition, and I’m making it publicly available below: 

Remote Work Canvas

Remote Work Canvas — Instructions

  • Fill in name + date, and choose a remote partner.
  • Complete a first draft of the canvas, filling the columns in the following order: 
  1. Transitioning to working remotely blurs the lines between the personal and the professional, so start by strengthening that boundary, filling the middle column first.
  2. Then “put your oxygen mask first” and ensure that you have a good personal care plan in place, by filling the middle-left column.
  3. Next, take care of your loved ones and address any outstanding personal items, by filling the far-left column.
  4. Transitioning to the professional, create your individual game plan, by filling the middle-right column.
  5. Finally, consider your team and the way this new setup will impact the way you work together, and address any outstanding professional items, by filling the far-right column.
  • Review your canvas with your remote partner. Solicit feedback and advice on boxes that were more challenging, and make any necessary changes based on the conversation.
  • Set up a date and time a few weeks out with your remote partner to reflect on your lived experience and make any changes to the canvas accordingly.
  • Share your canvas with your team, and invite them to create their own versions.

The six fundamental problems of organizing [Martela]

Photo by Josh Calabrese on Unsplash

This paper has been open in a tab for a few weeks now so I can’t recall how I stumbled upon it:

What makes self-managing organizations novel? Comparing how Weberian bureaucracy, Mintzberg’s adhocracy, and self-organizing solve six fundamental problems of organizing

To answer this question, the author, Frank Martela, has been building on work by Puranam et al. First, he adopted their definition of an organization:

(1) a multiagent system with (2) identifiable boundaries and (3) system-level goals (purpose) toward which (4) the constituent agents’ efforts are expected to make a contribution.

Neat.

Then, he expanded their taxonomy of four universal problems that any form of organization, by definition, must solve, into six. These fundamental problems of organizing are something that I’ve grappled with lately, and I like what Martela has to offer (mapping to the Corporate Rebels taxonomy in parentheses):

  1. Division of labor: task division (CR: Org structure)
  2. Division of labor: task allocation (CR: Task allocation)
  3. Provision of reward: rewarding desired behavior (CR: Motivation)
  4. Provision of reward: eliminating freeriding (CR: Motivation)
  5. Provision of information: direction setting (CR: Strategy)
  6. Provision of information: coordination of interdependent tasks (CR: Coordination)

The two “division of labor” problems are identical to the Corporate Rebels taxonomy.

Compensation receives its appropriate place as a standalone fundamental problem, broken down into rewarding the desired behaviors and eliminating the undesired behaviors (freeriding). My one tweak, in light of my recent explorations, would be to use a slightly less behaviorist label, such as “provision of value” or perhaps “motivation”.

While at first it seemed strange to group strategy and coordination together, as one seems more strategic (pun intended) than the other, it actually makes sense, since they both have to do with creating shared context: knowing how to orient my work by understanding where we are trying to go together and what everyone else is doing to get us there.

And, if you were curious about how bureaucracy, adhocracy and self-managing organizations differ in their approaches to solving the six fundamental problems of organizing, here’s the answer:

Source: Martela (2019)

The competing values framework & Culture Contract [Quinn & NOBL] 

A roadmap for navigating cultural polarities

The competing values framework was developed by Robert E. Quinn and John Rohrbaugh as they searched for criteria that predict if an organization performs effectively. Their empirical studies identified two dimensions that enabled them to classify various organizations’ “theory of effectiveness”. The first dimension (flexible/focused) differentiates an emphasis on flexibility, discretion, and dynamism from an emphasis on stability, order, and control. The second dimension (internal/external) differentiates an internal orientation with a focus on integration, collaboration, and unity from an external orientation with a focus on differentiation, competition, and rivalry. Together these dimensions form four quadrants, each representing a distinct set of organizational and individual factors.

Astute readers will likely see some similarities to Wilber’s 4Q model and the organizational model archetypes I covered here.  

Each dimension highlights two values that are opposite to one another, building on different sets of assumptions, resulting in quadrants that are also opposite to one another on the diagonal. Each of those quadrants represents a different type of organization, with a different type of culture, orientation, leader type, value drivers, and theory of effectiveness. 

Source: The CFG Group
Source: NOBL

The Culture Contract 

While many organizations subscribe implicitly to one of these archetypes, they rarely make that implicit shared agreement between teammates explicit. Which is exactly what the team at NOBL suggests we should do in: 

The Culture Contract 

It starts off explicitly calling out the archetype that they subscribe to (Elephant Herd/Collaborate in this case), clarifying the core values tension and the choice they’re making (using an even-over statement), explaining the “Why?” behind that choice, and detailing the emblematic behaviors. 

Source: NOBL

It then explores the values tension in more detail, looking at it through two different lenses: what the organization owes the individual and what the individual owes the organization, again utilizing even-over statements. 

Source: NOBL

Lastly, it addresses failure modes: calling out the early warning signs that we may not be living up to the culture contract, and offering a process to manage the concern that the contract might have been broken. 

Source: NOBL

Taking it to the next level

The act of undertaking an organizational self-reflection exercise and making the implicit culture contract explicit is already a massive step forward in building a strong, coherent organizational culture. The next level realization is that while the four values are competing with one another, we need to integrate all four to build a truly robust, long-lasting organization. And that requires viewing the values paradox as a polarity to be managed, rather than a choice to be made. The good news is that the culture contract can still be an invaluable aid in doing so, with a couple of minor tweaks:

  1. Transitioning from a single reflection exercise that creates the document to a recurring reflection exercise, probably every 6–12 months in which we determine “which pole (quadrant) do we need to get closer to next?”
  2. Since the polarity perspective introduces the concept of straying too far from one pole or getting too close to another, we need to add a section to the culture contract doc that highlights the early warning signs that we’ve gotten too close to a given pole. 

Secondments [Atlassian]

The on-the-job equivalent of exchange student programs

I found the “puppet master” aspect of this visual a bit awkward but it was too funny to not use

It’s always awesome to come across innovative people programs, so I was delighted to stumble upon the following post from the team at Atlassian:

Secondments: the most powerful job training you’ve never heard of

In a nutshell: 

“A secondment (sometimes referred to as a “job rotation”) is a chance to temporarily work on a different team within your organization, or in some cases, for a different organization entirely. Think of secondments as the on-the-job equivalent of exchange student programs. And just like exchange programs, they’re an excellent learning opportunity.”

A while ago, during my Opower tenure, I set up a similar program inside our software engineering department that was modeled after Facebook’s hackmonth program.

Despite their compelling appeal from a learning and development perspective, secondments are rarely institutionalized as a formal People program. However, setting up an ad-hoc secondment rotation may not be as difficult as it seems at first; it primarily involves having a conversation with your manager and the other team’s manager to flesh out a win-win-win rotation. Atlassian suggests going through the following set of reflection questions to help get to that alignment quicker: 

  1. Is there a business need?
  2. Is it beneficial to both teams?
  3. What is the intent of the secondment?
  4. What is the skillset the new team member will learn?
  5. What will the benefit be to the team with the secondment member?
  6. What will happen to the team and the individual after the secondment is over?

As far as mechanics are concerned, Atlassian recommends setting a secondment’s duration to between 6 and 12 months and keeping existing pay and benefits as-is whenever possible. 

There are a few ideas from the Facebook program that can be integrated into the program design as well: 

  • Define a tenure-based eligibility criterion (“you are eligible to participate in a secondment rotation after being on your home team for x months”) and perhaps even nudge managers and teammates to consider a secondment rotation when that threshold is reached and every x months after that. 
  • Build a shared backlog of secondment-friendly projects that either require less context to be done successfully or can effectively leverage a skill set that the team doesn’t currently have. That may enable creating secondment opportunities that are shorter than the non-trivial 6–12 month commitment, while still being highly beneficial for all parties involved. 

One of the key challenges with setting up formal secondment programs is that they require a certain level of organizational scale in order to offer a diverse portfolio of secondment opportunities and be able to sustain teammates going on rotation for those periods of time. This often excludes smaller companies from running a secondment program and yet those companies are often the ones that can benefit from such programs the most, both in getting access to knowledge and expertise that they currently don’t have, and in successfully retaining more of their existing teammates by offering them better learning and development opportunities. 

This is another example of where venture capital companies can potentially play a pivotal role, by creating a secondment marketplace across their portfolio companies and facilitating temporary and reciprocal talent exchanges that benefit all parties involved in the exchange. 
