A little over a year ago, I published my first post on this topic.
This second post was triggered by the following First Round Review article, covering the work of Dave Zwieback:
The article itself is a mixed bag. Some parts cover topics that are fairly trivial, but one part stood out clearly above the rest, and was good enough in and of itself to justify a post. It’s the part that outlines the core principles behind Dave’s approach to learning from failures and conducting “learning reviews” (which is a much better name for postmortems):
- The purpose of the learning review is to learn so that we can improve our systems and organizations. No one will be blamed, shamed, demoted, fired, or punished in any way for providing a full account of what happened. Going beyond blame and punishment is the only way to gather full accounts of what happened—to fully hold people accountable.
- We’re likely working within complex, adaptive systems, and thus cannot apply the simplistic, linear, cause-and-effect models to investigating trouble within such systems. (See A Leader’s Framework for Decision Making by David J. Snowden and Mary E. Boone)
- Failure is a normal part of the functioning of complex systems. All systems fail—it’s just a matter of time. (See How Complex Systems Fail by Richard I. Cook, MD.)
- We seek not only to understand the few things that go wrong, but also the many things that go right, in order to make our systems more resilient. (See From Safety-I to Safety-II: A White Paper by Erik Hollnagel, et al.)
- The root cause for both the functioning and malfunctions in all complex systems is impermanence (i.e., the fact that all systems are changeable by nature). Knowing the root cause, we no longer seek it, and instead look for the many conditions that allowed a particular situation to manifest. We accept that not all conditions are knowable or fixable.
- Human error is a symptom—never the cause—of trouble deeper within the system (e.g., the organization). We accept that no person wants to do a bad job, and we reject the “few bad apples” theory. We seek to understand why it made sense for people to do what they did, given the information they had at the time. (See The Field Guide to Understanding Human Error by Sidney Dekker.)
- While conducting the learning review, we will fall under the influence of cognitive biases. The most common ones are hindsight, outcome, and availability biases, and the fundamental attribution error. We may not notice that we’re under the influence, so we ask participants to help us become aware of biases during the review. (See Thinking, Fast and Slow by Daniel Kahneman.)
The beauty of these principles is that they synthesize ideas from both systems thinking and behavioral psychology into a set of guidelines for how to operate in a complex-adaptive human system (aka “organization”). They avoid the typical failure modes of traditional “root cause analyses” and “5 whys” exercises by naming the root cause (impermanence) in advance, focusing the exploration on understanding the conditions that allowed the situation to manifest, and treating “human error” as nothing more than a symptom of a much deeper problem in the system.