I’ll be the first to admit that there’s nothing ground-breaking in this operating rhythm. Some may even say it’s rather conservative or traditional. But I’ve seen almost identical rhythms (same “building blocks,” slightly different cadences) emerge “the hard way” in multiple companies, so hopefully this can save some folks time and help them avoid some “old mistakes” by serving as a starting point to iterate on.
Matt’s post includes more detailed explanations of what each building block entails, as well as some of the nuances around using them that are specific to Skillshare.
The only thing the post was really missing was a simple visual showing how all the different pieces of the puzzle fit together across the four cadences, so I went ahead and created one at the top of this post.
In full disclosure, I’ve been a long-time hater of the MBTI. As a matter of fact, one of my favorite pastimes is responding with this 2-minute Adam Ruins Everything episode about the MBTI whenever someone asks for advice about an MBTI facilitator.
But this initial hate turned, over time, into curiosity around questions like: Why do people find this assessment so compelling? Why do workplaces keep using it? And are there better alternatives out there?
As I noodled on these questions, I realized that they extend to a broader category of assessments that can best be grouped under the psychometrics label: “the science of measuring mental capacities and processes.” It’s a bit of a generous label for some of these assessments, as many don’t meet the “science” qualifier (MBTI, for example), but their intent aligns with the second part of that definition.
The appeal of “clustering tests”
Most people don’t like to take tests. And not everyone views feedback as a gift. Yet a subset of psychometric tests seems to be highly popular, and some people even enjoy taking them. Why?
This subset of tests is part of a category I’d call “clustering tests” — their aim is to group human behavior into clusters and then figure out which cluster you belong to, often referred to as your “style” or “type”. Wilson and Walton’s theory provides one of the best explanations for the appeal of “clustering tests”: we are all meaning-making machines, and interpret a given situation in a way that serves three underlying needs:
The need to understand — make sense of things around us in a way that allows us to predict behavior and guide our own action effectively.
The need for self-integrity — view ourselves positively and believe that we are adequate, moral, competent and coherent.
The need for belonging — feel connected to others, accepted, included and valued.
We find “clustering tests” appealing because they check all three boxes: they make it seem easier to understand ourselves and others by reducing a complex set of behaviors into a simpler, more predictable “type.” They strengthen self-integrity by highlighting all the positives in our “type.” And they satisfy our need for belonging by connecting us to a tribe: the INTJs, the ENTPs — a group of people who are “like us.”
But what do we find when we dig below simply satisfying our meaning-making cravings?
The bright side
The core benefit of psychometrics comes from acknowledging that other meaning-making strategies, such as self-reflection or the feedback and perspectives of others, are not without their shortcomings and limitations either. They, too, are often inaccurate, subjective, and incomplete. Therefore, when used properly, psychometrics can be a valuable complement to those other strategies.
Furthermore, grappling with complexity is hard. And sometimes, though definitely not always, grappling with a simpler challenge helps us better understand the more complex one: the simpler version can serve as a starting point, with complexity gradually layered on in later stages. Therefore, the simplicity that psychometrics offer, when held loosely, can be helpful.
The dark side
Picking up where the bright side left off: holding on to simplicity too tightly creates problems. A simplistic model of a complex phenomenon will often yield false predictions, which is especially harmful when the assessment is used to predict behavior.
Furthermore, the way “clustering tests” create a sense of belonging can also lead to othering: there’s us, the INTJs, and there’s them, the ENTPs…
Finally, coming full circle to the beginning of this post, many of the assessments do not sit on solid scientific foundations. While I’m not sure that this is intrinsically problematic, it is certainly a serious issue where psychometrics are meant to support fair and objective decision-making. More on this shortly.
How to best use (and not use) psychometrics
The attributes underlying both the bright and dark sides of psychometrics (meaning-making power, reductionism, scientific credibility, belonging, etc.) are often at odds with one another, making this a very situational optimization problem. In different circumstances, some of these attributes matter more, or less, than others.
This pithy advice from a Forbes piece captures the sentiment fairly well:
First distinguish between real tests and masquerading fortune cookies. Select an assessment that is psychometrically sound, situationally relevant and provides actionable insights.
Let’s unpack this a little further, given the different situations in which psychometrics are often used.
Personal/Professional Development
Perhaps the least contentious use-case of the three. The dominant factor here is the assessment’s meaning-making ability, as the value comes not just from highlighting the opportunity for growth but also from building the motivation to pursue it.
Since scientific credibility has a positive impact on my personal motivation, I tend to prefer assessments such as Hogan and the Leadership Circle Profile, but I don’t think it’s a hard requirement. If you find the Enneagram narrative more meaningful, use that.
Organizational decision-making
Under this use-case, psychometrics are used as data to support organizational decision-making in the hiring or advancement/promotion processes.
The key factor, outweighing all others in this situation, is the scientific validity of the assessment. I’ll also repeat a somewhat controversial statement that I’ve made before: if the scientific basis is sound, evidence of systemic bias should not automatically lead to excluding the assessment. It is almost certainly the case that other data points used in the decision-making process (interview team scores, manager’s evaluation, etc.) are also biased; it’s just that the bias there is more erratic and difficult to quantify. The “upside,” so to speak, of systemic bias is that it can be systematically corrected for.
Team development

Here, psychometrics are often used to validate/normalize the different work and communication styles of team members, and to serve as the basis for a discussion of strategies that will allow the team to collaborate effectively despite those differences.
However, I would argue that the dark side of psychometrics is perhaps most dominant here: both reducing people to their “type” and creating a sense of belonging through tribalism have their most negative effects in this context.
It’s the self-reflection and team dialogue that the assessment triggers that are most valuable. So instead of using an assessment, I’d advocate for other self/team exercises that accomplish the same goal. The NY Times Kickoff Kit outlines three such exercises (“muppet analysis,” “how we work best,” “hopes, dreams, and non-negotiables”), offering a much better alternative.
Long-time readers may know that while the focus of this publication is primarily on the human side of the business, I sometimes dabble in strategy and other “business fundamentals”.
This topic has been of particular interest to me over the years since there seems to be a large “knowing vs. doing” gap around it. While a moat (we’ll get to the definition in a second) is essential to the longevity of any business, it seems to be an afterthought at best in most startups. “What moat are we creating?” does not seem to drive any of the critical business decisions that most startups make.
One of the things that made exploring this issue challenging in the past was the blurry distinction between some key terms: barriers to entry, network effects, marketplaces, economies of scale — all seem to have a lot in common.
Neumann reconciles this issue in the opening paragraph:
Value is created through innovation, but how much of that value accrues to the innovator depends partly on how quickly their competitors imitate the innovation. Innovators must deter competition to get some of the value they created. These ways of deterring competition are called, in various contexts, barriers to entry, sustainable competitive advantages, or, colloquially, moats.
Yes! Moats are barriers to imitating innovation that stem from structural causes, as opposed to talent, vision, and the like.
The rest of the post lays out a comprehensive taxonomy of moats organized by the 4 key structural causes that generate them:
State Granted (patents, tariffs, regulation, etc.)
Special Know-how (tacit knowledge, customer insights, etc.)
Scale (network effects, sunk costs, willingness to experiment, etc.)
System Rigidity (business model innovation, brand, complementary assets, etc.)
In addition to explaining each category in more detail and providing concrete examples of each moat in action, Neumann also looks at moats specifically through the lens of a startup/early-stage company, assessing the viability of building each one as an initial strategy. Most of that is also summarized in the final paragraphs:
Some startups have a moat they start with. These moats are generally fungible: they have the same or greater value if they are sold to an existing company as they would if they were incorporated into a new company… Other startups develop moats over time. No company can have a moat from returns to scale, for instance, before they have scale. No company can generate collective tacit knowledge in their organization until they have an organization. And building links from the product into the surrounding system takes time and, usually, a working product… Startups can avoid the competition of better-resourced incumbents for some period of time by using system rigidity against them through disruptive innovation or value chain innovation… [but] developing a moat based on system rigidity also takes time.
But it’s the next logical leap that Neumann takes that is truly mind-blowing:
Uncertainty can be seen everywhere in the startup process: in the people, in the technology, in the product, and in the market. This analysis shows something more interesting though: uncertainty is not just a nuisance startup founders can’t avoid, it is an integral part of what allows startups to be successful. Startups that aim to create value can’t have a moat when they begin, uncertainty is what protects them from competition until a proper moat can be built. Uncertainty becomes their moat.
I love operating rhythms. From the organizational to the individual, they help us balance the tactical and the strategic, the urgent and the important, by pre-allocating our time and deliberately carving out space for the things that we’re likely going to drop in the heat of the moment.

Ajahne’s post defines five such recurring meeting types for managers:
“Regular” one-on-ones — To provide support, coaching, and candor that helps your direct report grow, succeed, and excel.
Skip-level meetings — To help your managers become better bosses, build a rapport with your teammates, and get organizational/team feedback to improve the work environment.
Career conversations — To further get to know your direct reports, learn their aspirations, and plan how to help them reach those dreams.
Goals review — To review your direct report’s current goals and ensure they are accurately tracking towards them.
Performance reviews — To improve your direct report’s performance.
For each meeting type, Ajahne defines the purpose, provides a more detailed overview, outlines the standard agenda and topics covered, and recommends an appropriate frequency and length. Each meeting section also includes links to additional resources that can provide more clarity and improve mastery of the particular format.
While I have minor qualms with some of the frequencies (for the career conversations and performance reviews, in particular), they do not take away from these definitions and this schedule being a fantastic starting point for each manager to customize and tweak to make their own.