Hunter Walk published a short post titled:
Hiring for Fit Gets You to IPO, but Over Time, Hiring for Potential Wins
It’s based on a 2002 research paper by Baron and Hannan exploring the impact of “employment blueprints” on companies’ success.
I was excited to read the 25-page paper as it covers one of the topics that I’m most passionate about: the impact of organizational choices and decisions on company performance. And on a more meta level, it also seems to corroborate a broader pattern that I’ve observed: we seem to have a serious knowledge management problem when it comes to organizational design. This paper was published in 2002. Thirteen years later it is being rediscovered, but during that time, the debate about some of its insights (for example: hiring for fit vs. potential) raged on, uninformed by data as always…
Reading the paper left me with mixed feelings. On the one hand, the “blueprint” approach seems to resonate and yield some pretty interesting insights. On the other hand, the opacity around the authors’ analysis casts serious doubt on the validity of those insights, at least in my opinion.
The team looked at a 200-company data set from the “Stanford Project on Emerging Companies”. They classified the companies’ employment decisions along three dimensions: basis of attachment and retention, criterion for selection, and means of control and coordination:
They then clustered the 36 possible permutations into five common blueprints (archetypes):
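The dimensional arithmetic behind the 36 permutations can be sketched in a few lines. The category labels below are my paraphrase of the Baron–Hannan taxonomy as it is commonly summarized (three bases of attachment, three selection criteria, four means of control), so treat them as an illustrative assumption rather than the paper’s exact wording:

```python
from itertools import product

# Dimension values paraphrased from the Baron-Hannan taxonomy;
# the labels are illustrative, not quotes from the paper.
attachment = ["love", "work", "money"]           # basis of attachment and retention
selection = ["potential", "skills", "fit"]       # criterion for selection
control = ["peer/cultural", "professional",
           "formal procedures", "direct oversight"]  # means of control & coordination

permutations = list(product(attachment, selection, control))
print(len(permutations))  # 3 * 3 * 4 = 36 possible blueprints
```

The clustering step then collapses these 36 combinations into the five archetypes the authors observe in practice.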
After looking at the impact blueprint selection has on softer aspects, such as other choices in organization-building (timing of bringing in specialized HR capacity, sequence of hiring compared to other business milestones, level of early attention to organizational concerns), and exploring some initial attributes of blueprint switching (who switches, why, and to what), the authors turn their attention to the main research question:
Once a certain blueprint was chosen, do the benefits of switching outweigh its costs?
To answer that question, the authors first explore the intrinsic costs and benefits of each blueprint. The costs are measured in the level of administrative overhead over time, and the benefits are measured through the impact on three performance indicators:
- Likelihood of Failure
- Likelihood of IPO
- Annual growth rate in market capitalization (post IPO)
The “commitment” blueprint seems to be the clear winner on the first two, while the “star” blueprint wins on the last.
Then they turn to explore the impact of changing the blueprint, and find that while it slightly increases the chance of IPO, it has more profound effects in increasing the likelihood of failure, increasing employee turnover and reducing yearly growth in market cap. Perhaps the most interesting finding w/r/t employee turnover was the following:
“It turns out that CEO succession does have a strong effect on turnover. However, this effect appears to be due entirely to the tendency for CEO succession to be accompanied by changes in HR blueprints”
As I’ve alluded to in the intro, the opacity around the statistical analysis done by the authors is an issue that kept bothering me throughout the paper. The authors mention several times that they’ve taken other factors into account while performing the analysis but don’t reveal any of the data. A subsequent working paper sheds some additional light, but not enough to alleviate my concerns. This is particularly troubling for several reasons:
- Small data set – only 156 companies
- Selection bias – only 42 of the companies went public, so any analysis of post-IPO performance is particularly selection-prone
- Unclear statistical significance of the results – for example, w/r/t the information shown in figures 6 & 7, the authors acknowledge in the working paper that: “The differences among models (aside from the contrast vis-à-vis Commitment) are not jointly significant”
- Unclear explanatory power – I’m probably not using the right statistical term here, but here’s the issue: almost all the information is presented in relative terms looking at the impact of one blueprint compared to another, typically using the “engineering” blueprint as the default. However, what portion of the overall variability in the performance indicators is explained by blueprint selection (and whether that insight is statistically significant) is never discussed. Put differently: it seems unlikely in the heavily-scrutinized post-IPO environment that if switching from the “engineering” blueprint to the “star” blueprint would have yielded an 80% improvement in the annual growth in market cap, only 11% of companies would switch blueprints…
- Unique time bias – the boom-and-bust period of the late ’90s and early ’00s
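The “explanatory power” concern can be made concrete with a toy simulation (entirely synthetic numbers, nothing to do with the SPEC data): even when one group’s mean outcome is 80% higher than another’s in relative terms, group membership can still explain only a small share of the overall variance if firm-to-firm noise dominates.

```python
import random

random.seed(0)

# Two hypothetical "blueprints" with an 80% relative difference in mean
# annual growth, but large firm-to-firm noise (all numbers invented).
def simulate(mean, sd, n):
    return [random.gauss(mean, sd) for _ in range(n)]

engineering = simulate(10.0, 40.0, 500)  # mean growth 10%
star = simulate(18.0, 40.0, 500)         # 80% higher mean, same noise

all_vals = engineering + star
grand_mean = sum(all_vals) / len(all_vals)

# R^2-style decomposition: between-group vs. total sum of squares
ss_total = sum((x - grand_mean) ** 2 for x in all_vals)
ss_between = sum(
    len(g) * (sum(g) / len(g) - grand_mean) ** 2
    for g in (engineering, star)
)
print(f"share of variance explained by blueprint: {ss_between / ss_total:.1%}")
```

With this much within-group noise the between-group share comes out tiny, which is exactly why a headline “80% improvement” can coexist with blueprint choice explaining very little of overall performance, and why the absolute variance-explained figure matters.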
The subsequent working paper exposes another facet of this paper, which may help explain some of the analytical challenges with it. In it, the authors state that:
“The main focus of this research was to learn whether changing initial blueprints destabilized the SPEC companies”
And that is indeed the less controversial part of the research. I really like the idea of organizational blueprints/archetypes and think it merits further exploration. Beyond that, I personally find the analytical challenges too great to make a call on whether one blueprint is better than the others.