Patrick Coolen is the Global Head of People Analytics, Strategic Workforce Planning and HR Survey Management at ABN AMRO, the third-largest bank in the Netherlands. Recently, he penned a great piece about one of my favorite topics.
In this piece, Coolen outlines how they conduct and digest engagement survey data at ABN AMRO.
The engagement survey is SUPER simple and lightweight, containing only three questions:
- How likely are you to recommend our organization to a friend or relative as an organization to work for? (quantitative, NPS-like question)
- What is our organization doing well as an employer? (qualitative, “Top” question)
- What could our organization do better as an employer? (qualitative, “Tip” question)
To get a more continuous view of the data while avoiding survey fatigue, they run the survey monthly, but ask only 1/12 of the employees to take it each time. Since ABN AMRO is a large enough organization, this still yields sizable samples, and a stratified sampling approach ensures that each sample is representative.
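The monthly-cohort idea can be sketched in a few lines. This is a minimal illustration, not ABN AMRO's actual implementation: the employee fields, the stratification key, and the fixed seed are all assumptions.

```python
import random
from collections import defaultdict

def monthly_sample(employees, month_index, strata_key, months=12, seed=0):
    """Split each stratum into `months` roughly equal cohorts and return
    the cohort surveyed in month `month_index` (0-based).

    `employees` is a list of dicts; `strata_key` names the attribute used
    for stratification (e.g. business line). Deterministic shuffling means
    every employee lands in exactly one monthly cohort per year.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for e in employees:
        strata[e[strata_key]].append(e)
    cohort = []
    for members in strata.values():
        members = sorted(members, key=lambda e: e["id"])  # stable base order
        rng.shuffle(members)
        # take every `months`-th member, offset by the current month
        cohort.extend(members[month_index::months])
    return cohort
```

Because each stratum is sliced the same way, every cohort mirrors the composition of the whole workforce, which is the point of the stratification.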
I LOVE the lightweight approach and the balance of a single quantitative question and the two “top & tip” open-ended qualitative questions, as well as leveraging the size of the organization to reduce survey fatigue without jeopardizing the quality of insights.
My one nit is that I’m not a huge fan of the NPS-like quantitative question and would probably replace it with a different quantitative metric that has a causal link to performance.
The extreme simplicity of the survey and the open-endedness of the qualitative questions do create some non-trivial data-analysis challenges in classifying the responses, which Coolen’s team did a brilliant job of overcoming.
First, they “normalized” the responses by translating all responses into a single language (English), splitting responses with multiple subjects, lower-casing all text, removing punctuation, and lemmatizing key words.
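A stdlib-only sketch of those normalization steps, assuming translation happens upstream; the tiny lemma map stands in for a real lemmatizer (e.g. spaCy), and the subject-splitting heuristic is purely illustrative:

```python
import re
import string

# Toy lemma map standing in for a real lemmatizer; entries are invented.
LEMMAS = {"managers": "manager", "salaries": "salary", "benefits": "benefit"}

def normalize(response: str) -> list[str]:
    """Lower-case, strip punctuation, tokenize, and lemmatize a response."""
    text = response.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = re.split(r"\s+", text.strip())
    return [LEMMAS.get(t, t) for t in tokens]

def split_subjects(response: str) -> list[str]:
    """Naively split a multi-subject response on common connectives."""
    parts = re.split(r"\band also\b|;", response)
    return [p.strip() for p in parts if p.strip()]
```

Each resulting fragment can then be classified independently, so a single response mentioning both IT and compensation contributes to both topics.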
Then, they evaluated several machine-learning classification algorithms, landing on a Support Vector Machine as the best candidate, and refined its precision further using a human supervision process.
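The SVM step might look something like the scikit-learn sketch below. The training texts and topic labels are entirely made up; the post does not publish ABN AMRO's model, features, or data.

```python
# Hypothetical topic classifier: TF-IDF features fed into a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented training examples, already normalized as described above.
train_texts = [
    "great onboarding and training courses",
    "more learning budget please",
    "laptop is slow and software outdated",
    "better it support needed",
    "hiring process took too long",
    "recruiters were very responsive",
]
train_topics = ["L&D", "L&D", "IT", "IT", "Recruiting", "Recruiting"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_texts, train_topics)

# Classify a new response fragment (no expected label asserted here).
prediction = model.predict(["training courses budget"])[0]
```

In practice the "supervision process" would mean humans reviewing low-confidence or sampled predictions and feeding corrections back into the training set.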
The output of the data-analysis phase is the classification of all responses into one of 150 topics, which, in turn, roll up to a smaller set of “expert domains” (Recruiting, L&D, IT, etc.).
The data is then presented and made available to the entire organization using the bubble chart below where each bubble represents a topic:
- The more responses map to a topic, the larger its bubble.
- The more a topic showed up in “top” responses rather than “tip” responses, the higher its bubble.
- The more positive the responses to the quantitative question were when the topic was brought up in the qualitative questions, the further to the right its bubble.
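The three bubble dimensions above are straightforward aggregations. This sketch assumes each classified response carries its topic, whether it came from the "top" or "tip" question, and the respondent's 0–10 recommendation score; the field names and scales are my assumptions, not Coolen's.

```python
from collections import defaultdict

def bubble_coordinates(responses):
    """Compute (size, y, x) for each topic's bubble.

    `responses` is a list of (topic, kind, score) tuples, where kind is
    "top" or "tip" and score is the 0-10 recommendation answer given by
    the same respondent.
    """
    by_topic = defaultdict(list)
    for topic, kind, score in responses:
        by_topic[topic].append((kind, score))
    chart = {}
    for topic, items in by_topic.items():
        size = len(items)                                     # bubble size
        top_share = sum(k == "top" for k, _ in items) / size  # y-axis
        avg_score = sum(s for _, s in items) / size           # x-axis
        chart[topic] = {"size": size, "y": top_share, "x": avg_score}
    return chart
```

A topic mentioned mostly in "tip" responses by low scorers would thus sink toward the bottom-left, exactly where the "Focus Areas" quadrant sits.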
The area of the chart can be segmented into 4 quadrants driving different actions:
- Topics (bubbles) in the top-right — Celebrate — things that the organization does well and are positively correlated with the quantitative measure.
- Topics (bubbles) in the bottom-left — Focus Areas — things that the organization does not do well, and are negatively correlated with the quantitative measure. Therefore, they are the areas where the opportunity for impactful change is the highest.
- Topics (bubbles) at the bottom-right — Suggestions — things that the organization does not do well, but are not negatively correlated with the quantitative measure.
- Topics (bubbles) at the top-left — Investigate — things that the organization does well but are still negatively correlated with the quantitative measure. Since this is an anomalous pattern, it is worthy of further investigation.
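Mapping a bubble's position to one of the four action quadrants is a simple threshold check. The midpoint values here are illustrative placeholders; the post does not say where ABN AMRO draws the quadrant boundaries.

```python
def quadrant(x, y, x_mid=7.0, y_mid=0.5):
    """Assign a bubble at (x, y) to an action quadrant.

    x is the average quantitative score when the topic was mentioned,
    y is the share of "top" (vs. "tip") mentions. Thresholds are
    assumed midpoints, not values from the source.
    """
    if y >= y_mid:
        return "Celebrate" if x >= x_mid else "Investigate"
    return "Suggestions" if x >= x_mid else "Focus Areas"
```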
The chart can also be filtered by time, business line, role, etc. to draw more refined insights, which are then reviewed and acted upon in quarterly business reviews.
Net-net, I think this comes pretty darn close to the best way of surfacing insights from a “working on work” exercise. Taking effective action on them will be the next hurdle to overcome.