When documenting a root cause analysis, a risk assessment, or any of the myriad other technical reports, we are making a logical argument. In this post, I want to examine six common pitfalls to avoid in your writing.
Claiming to follow logically: Non Sequiturs and Genetic Fallacies
Non-sequiturs and genetic fallacies involve statements that are offered in a way that suggests they follow logically from one another, when in fact no such link exists.
Non-sequiturs (meaning ‘that which does not follow’) often happen when we make connective explanations without justification. Genetic fallacies occur when we draw conclusions about something by tracing its origins, even though no necessary link can be made between the present situation and the claimed original one.
This is a very common mistake and usually stems from poor causal thinking. The best way to address it in an organization is to keep building discipline in thought processes and to document the connections and why things are connected.
Making Assumptions: Begging the Question
Begging the question, assuming the very point at issue, happens a lot in investigations. One of the best ways to avoid it is to ensure a proper problem statement.
Restricting the Options to Two: ‘Black and White’ Thinking
In black and white thinking, or the false dichotomy, the arguer offers only two options when other alternatives are possible.
Being Unclear: Equivocation and Ambiguity
Equivocation and ambiguity come in three forms:
Lexical: Refers to individual words
Referential: Occurs when the context is unclear
Syntactical: Results from grammatical confusions
Just think of all the various meanings of ‘validation’ and you can understand this problem.
Good problem-solving will drive down the tendency to assume conclusions, but these pitfalls probably exist in every organization.
All three are right on the nose, and I’ve posted a bunch on the topics. Definitely go and read the post.
What I want to delve deeper into is Stephanie’s point that “Deviation systems should also be built to triage events into risk-based categories with sufficient time allocated to each category to drive risk-based investigations and focus the most time and effort on the highest risk and most complex events.”
That is an accurate breakdown, and exactly what regulators are asking for. However, I think the implementation of risk-based categories can sometimes lead to confusion, and we can spend some time unpacking the concept.
Risk is the effect of uncertainty on objectives. Risk is often described in terms of risk sources, potential events, their consequences, and their likelihoods (which is where the familiar likelihood × severity formulation comes from).
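To make the likelihood × severity formulation concrete, here is a minimal scoring sketch. The 1–5 scales, the category cut-offs, and the names are my own illustrative assumptions for the sketch, not a regulatory standard:

```python
# Illustrative only: a minimal likelihood x severity scoring sketch.
# The 1-5 scales and the category thresholds are assumptions.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "frequent": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}

def risk_score(likelihood: str, severity: str) -> int:
    """Combine likelihood and severity into a single score."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def risk_category(score: int) -> str:
    """Bucket a score into action categories (thresholds are illustrative)."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"
```

So a ‘likely’ event with ‘major’ severity scores 16 and lands in the high bucket, while a ‘rare’, ‘minor’ one scores 2 and lands in the low bucket. The multiplication itself is the least interesting part; the discipline is in defining the scales.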
But there are many types of uncertainty. IEC 31010, “Risk management – Risk assessment techniques,” lists the following examples:
uncertainty as to the truth of assumptions, including presumptions about how people or systems might behave
variability in the parameters on which a decision is to be based
uncertainty in the validity or accuracy of models which have been established to make predictions about the future
events (including changes in circumstances or conditions) whose occurrence, character or consequences are uncertain
uncertainty associated with disruptive events
the uncertain outcomes of systemic issues, such as shortages of competent staff, that can have wide-ranging impacts which cannot be clearly defined
lack of knowledge which arises when uncertainty is recognized but not fully understood
uncertainty arising from the limitations of the human mind, for example in understanding complex data, predicting situations with long-term consequences or making bias-free judgments.
Most of these are, at best, only obliquely relevant to risk-categorizing deviations.
So it is important to first build the risk categories on consequences. At the end of the day, these are the consequences that matter in the pharmaceutical/medical device world:
harm to the safety, rights, or well-being of patients, subjects or participants (human or non-human)
compromised data integrity so that confidence in the results, outcome, or decision dependent on the data is impacted
These are some pretty hefty areas, and really hard for the average user to get their mind around. This is why building good requirements, and understanding how systems work, is so critical. Building breadcrumbs into our procedures to let folks know which deviations fall into which category is a best practice.
There is nothing wrong with recognizing that different areas have different decision trees. Harm to safety in GMP can mean different things than safety in a GLP study.
The second place I’ve seen this go wrong has to do with likelihood, and folks confusing symptom with problem with cause.
All deviations begin with a situation that is different in some way from expected results. Deviations start with the symptom and, through analysis, end up with a root cause. So when building your decision tree, ensure it looks at symptoms and how the symptom is observed. That is surprisingly hard to do, which is why a lot of deviation criticality scales tend to focus only on severity.
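As a sketch of what a symptom-first triage decision tree might look like: the two consequence areas follow the list above, but the rules and category names here are hypothetical illustrations, not a standard.

```python
# Illustrative triage decision tree for a potential deviation.
# It works only from what is observable at triage time: which
# consequence area the symptom touches and how it was caught.
# The categories and rules are hypothetical, not a standard.

def triage(consequence: str, caught_before_disposition: bool) -> str:
    """Assign a risk-based category to a deviation symptom."""
    if consequence == "patient_safety":
        # Any symptom touching patient safety gets the most scrutiny.
        return "critical"
    if consequence == "data_integrity":
        # Caught before the data drove a decision -> major;
        # otherwise confidence in past decisions is impacted.
        return "major" if caught_before_disposition else "critical"
    # Everything else starts minor; analysis can escalate it later.
    return "minor"
```

The point of the sketch is that the inputs are symptoms and how they were observed, not causes; the root cause is only known after analysis, so it cannot be an input to triage.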
Gemba, as a term, is here to stay. We’re told that gemba comes from the Japanese for “the actual place”, and people who know more than me say it probably should be romanized as “genba”, but phonetically it uses an “m” sound, and as a result it’s commonly written as gemba – so that’s how it is used. Someday I’ll see a good linguistic study of loan words in quality circles, and I have been known to fight against some of the “buzz-terminess” of adopting words from Japanese. But gemba is a term that seems to have settled in, and heck, English is a borrowing language.
Just don’t subject me to any more hour-long talks about how we’re all doing lean wrong because we misunderstood a Japanese written character (I can assure you I don’t know any Japanese written characters). The Lean practitioner community sometimes reminds me of 80s Ninja movies, and can be problematic in all the same ways – you start with “Enter the Ninja” and before long it’s Remo Williams baby!
So let’s pretend that gemba is an English word now; we’ve borrowed it and it means “where the work happens.” It also seems to be both a noun and a verb.
And if you know any good studies on the heady blend of Japanophobia mixed with Japanophilia from the 80s and 90s that saturated quality and management thinking, send them my way.
I think we can draw from ethnography more in our methodology.
The Importance of the Gemba Walk
Gemba is a principle from the lean methodology that says “go and see” something happening for real – you need to go and see how the process really works. This principle rightly belongs as one of the center points of quality thinking. These may be fighting words, but I think it is the strongest of the principles from Lean because of the straightforward “no duh” quality of the concept. Any quality idea that feels so straightforward and radical at the same time is powerful.
You can think of a gemba through the PDCA lifecycle: you plan, you do it, you decide on the learnings, you follow through.
This is all about building a shared understanding of problems we all face together by:
Observation of specific issues where things don’t go as intended, listening to the people who do the work.
Discussion of what those issues mean both in the details of operations but also on a wider strategic level.
Commitment to problem solving in order to investigate further – not to fix the issue but to have the time to delve deeper. The assumption is that if people understand better what they do, they perform better at every aspect of their job.
Gemba walks demonstrate visible commitment from the leadership to all members of the organization. They allow leadership to spread clear messages using open and honest dialogue and get a real indication of the progress of behavioral change at all levels. They empower employees because their contributions to site results are recognized and their ideas for continuous improvements heard.
Elements of a Successful Gemba
Define your goal
What is it that you want to do a gemba walk for? What do you hope to find out? What would make this activity a success? A successful walk stresses discovery.
Set a scope
Which areas will you observe? A specific process? Team? This will allow you to zoom into more detail and get the most out of the activity.
Set a theme
What challenges or topics will you focus on? Specific and targeted gemba walks are the most effective – for example, a gemba focusing on data integrity, area clearance, or error reduction.
Picking the right challenge is critical. Workplaces are complex and confusing; a gemba walk can help find concrete problems and drive improvement linked to strategy.
Find additional viewpoints
Who else can help you? Who could add a “fresh pair of eyes” to see the big questions that are left unasked? Bringing in additional people will result in a richer output and can build buy-in from your stakeholders.
Bring visibility and sponsorship for your gemba. Ensure all stakeholders are aware and on board.
Plan the Logistics
Identify Suitable Time
Find a suitable time from the process’ perspective. Be sure to also consider times of day, days of the week and any other time-based variations that occur in the process.
Find right location
Where should you see the process? Also, do you need to consider visiting multiple sites or areas?
Map what you’ll see
Define the process steps that you expect to see.
Build an agenda
What parts of the process will you see in what order? Are there any time sensitive processes to observe?
Share that agenda
Share your agenda to get help from the operational owner and other subject matter experts.
Doing the Gemba Walk
Explain what you are doing
Put people at ease when you’re observing the process.
When you are on the walk you need to challenge in a productive yet safe manner to create a place where everyone feels they’ve learned something useful and problems can be resolved. It pays to communicate both the purpose and overall approach by explaining the why, the who, and the when.
Use your agenda
Keep some flexibility but also make sure to cover everything.
Ask open questions
Open the discussion and explore the process challenges.
Ask closed questions
Use this to check your understanding of the process.
Capture reality with notes
Take notes as soon as possible to make sure you recall the reality of the situation.
As a coach, your objective is not to obtain results – that’s the role of the person you’re coaching – but to keep them striving to improve. Take a step back and focus on dismantling barriers.
What did you learn
What did you expect to see but didn’t? Also, what did you not expect to happen?
The ask questions, coach, learn aspect can be summarized as:
Visualize the ideal performance with your inner eye
Spot the specific difficulty the person is having (they’ll tell you – just listen)
Explain that (though sometimes they won’t want to hear it)
Spell out a simple exercise to practice overcoming the difficulty.
After the Gemba Walk
What did you learn?
Were challenges widespread or just one-offs? Review challenges with a critical eye. The best way I’ve heard this explained is “helicopter” thinking – start on a very detailed operational point, ascend to the big picture, and then return to the ground.
Resolve challenges with a critical eye
Define next steps and agree which are highest priority. It is a good outcome when what is observed on the gemba walk leads to a project that can transform the organization.
Follow through on the agreed-upon actions. Make them visible. In order to avoid being seen only as a critic, you need to contribute firsthand.
Hold yourself to account
Share your recommendations with others. Engage in knowledge management and ensure actions are complete and effective.
Key points for executing a successful gemba
Gemba Walks as Standard Work
You can standardize a lot of the preparation of a gemba walk by creating standard work. I’ve seen this successfully done for data integrity, safety, material management, and other topics.
Build a frequency, make sure the walks happen often, and then hold leaders accountable.
Role | Best Practice Frequency | Minimum Recommended Frequency
First line supervisors | Each shift, multiple times |
Team leaders in individual units | Daily covering different shifts | 2 per week
| 1 per day | 1 per week
| 1 per day | 1 per month
Internal customers and support (e.g. purchasing, finance, HR) | 1 per month | 1 per quarter
Frequency recommendation example
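One way to make standard work like the frequency recommendations above auditable is to capture the targets as data and check adherence against them. A minimal sketch, where the roles, themes, and weekly targets are hypothetical assumptions for illustration:

```python
# Illustrative standard-work record for gemba walks; the roles,
# themes, and weekly targets below are assumptions for the sketch.
from datetime import date, timedelta

STANDARD_WORK = {
    "first_line_supervisor": {"walks_per_week": 5, "theme": "area clearance"},
    "site_leadership": {"walks_per_week": 1, "theme": "data integrity"},
}

def walks_short_of_target(role: str, walk_dates: list, week_start: date) -> int:
    """How many walks a role still owes for the week starting week_start."""
    week_end = week_start + timedelta(days=7)
    done = sum(1 for d in walk_dates if week_start <= d < week_end)
    return max(0, STANDARD_WORK[role]["walks_per_week"] - done)
```

However you record it, the design point is the same one made above: the frequency is part of the standard work, so falling short of it is visible and leaders can be held accountable.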
Going to the Gemba for a Deviation and Root Cause Analysis
These same principles can apply to golden-hour deviation triage and root cause analysis. This form of gemba means convening a cross-functional team at the place where a potential deviation event occurred. Going to the gemba and “freezing the scene” as close as possible to the time the event occurred will yield valuable clues about the environment that existed at the time – and fresher memories will provide higher quality interviews. This gemba has specific objectives:
Obtain a common understanding of the event: what happened, when and where it happened, who observed it, who was involved – all the facts surrounding the event. Is it a deviation?
Clearly describe actions taken, or that need to be taken, to contain impact from the event: product quarantine, physical or mechanical interventions, management or regulatory notifications, etc.
Interview involved operators: ask open-ended questions, like how the event unfolded or was discovered, from their perspective, or how the event could have been prevented, in their opinion – insights from personnel experienced with the process can prove invaluable during an investigation.
You will gain plenty of investigational leads from your observations and interviews at the gemba – which documents to review, which personnel to interview, which equipment history to inspect, and more. The gemba is such an invaluable experience that, for many minor events, root cause and CAPA can be determined fairly easily from information gathered solely at the gemba.
Improvement is a process, and sometimes it can feel like a one-step-forward-two-steps-back sort of shuffle. And just like any dance, knowing the steps to avoid can be critical. Here are some important ones to consider. In many ways they can be considered an onion: we systematically address one layer of a problem and then work our way to the next.
The vague, ambiguous, and poorly defined bucket concept called human error is just a mess. Human error is never the root cause; it is a category, an output that needs to be understood. Why did the human error occur? Was it because the technology was difficult to use, or because the procedure was confusing? Those answers are things that are “actionable” – you can address them with a corrective action.
The only action you can take when you say “human error” is to get rid of the people. As an explanation, the concept is widely misused and abused.
Human error has been a focus for a long time, and many companies have been building programmatic approaches to avoiding this pitfall. But we still have others to grapple with.
We like to build domino cascades that imply a linear ordering of cause and effect – look no further than the ubiquitous presence of the 5 Whys. Causal chains force people to think of complex systems by reducing them, when we often need to grapple with systems’ tendencies toward non-linearity, transience of influence, and emergence.
This is where taking risk into consideration and having robust problem-solving with adaptive techniques is critical. Approach everything like a simple problem and nothing will ever get fixed. Similarly, if every problem is considered to need a full-on approach you are paralyzed. As we mature we need to have the mindset of types of problems and the ability to easily differentiate and move between them.
We remove human error and stop overly relying on causal chains – the next layer of the onion is to take a hard look at the concept of a root cause. The idea of a root cause “that, if removed, prevents recurrence” is pretty nonsensical. Novice practitioners of root cause analysis usually go right to the heart of the problem when they ask, “How do I know I reached the root cause?” The oft-used stopping point of “a cause that management can control” is, quite frankly, fairly absurd. The concept encourages the idea of a single root cause, ignoring multiple, jointly necessary, contributory causes, let alone causal loops and emergent, synergistic, or holistic effects. The idea of a root cause is just an efficiency-thoroughness trade-off, and we are better off understanding that and applying risk-based thinking when deciding between thoroughness and resource constraints.
Our problem solving needs to strive to drive out monolithic explanations, which act as proxies for real understanding, in the form of big ideas wrapped in simple labels. The labels are ill-defined and come in and out of fashion – poor/lack of quality culture, lack of process, human error – that tend to give some reassurance and allow the problem to be passed on and ‘managed’, for instance via training or “transformations”. And yes, maybe there is some irony in that I tend to think of the problems of problem solving in light of these ways of problem solving.
An appropriate level of root cause analysis should be applied during the investigation of deviations, suspected product defects and other problems. This can be determined using Quality Risk Management principles. In cases where the true root cause(s) of the issue cannot be determined, consideration should be given to identifying the most likely root cause(s) and to addressing those. Where human error is suspected or identified as the cause, this should be justified having taken care to ensure that process, procedural or system based errors or problems have not been overlooked, if present.
Appropriate corrective actions and/or preventative actions (CAPAs) should be identified and taken in response to investigations. The effectiveness of such actions should be monitored and assessed, in line with Quality Risk Management principles.
EU Guidelines for Good Manufacturing Practice for Medicinal Products for Human and Veterinary Use, Chapter 1 Pharmaceutical Quality System, 1.4(xiv)
The MHRA cited 210 companies in 2019 for failure to conduct good root cause analysis and develop appropriate CAPAs. Six of those citations were critical and 100 were major.
My guess is that if I had asked those 210 companies in 2018 how their root cause analysis and CAPAs were doing, 85% would have said “great!” We tend to overestimate our capabilities on the fundamentals (which root cause analysis and CAPA are) and not continuously invest in improvement.
Of course, without good benchmarking, it’s really easy to say “good enough” and not be. There can be a tendency to say, “Well, we’ve never had a problem here, so we’re good,” when in reality the problem has just never been seen in an inspection or has never gone critical.
The FDA has fairly similar observations around root cause analysis. As does anyone who shares their metrics in any way. Bad root cause and bad CAPAs are pretty widespread.
This comes up a lot because the quality of CAPAs (and quantity) are considered key indicators of an organization’s health. CAPAs demonstrate that issues are acknowledged, tracked and remediated in an effective manner to eliminate or reduce the risk of a recurrence. The timeliness and robustness of these processes and records indicate whether an organization demonstrates effective planning and has sufficient resources to manage, resolve and correct past issues and prevent future issues.
A good CAPA system covers problem identification (which can be, and usually is, a few different processes), root cause analysis, corrective and preventive actions, CAPA effectiveness, metrics, and governance. It is a house of cards: remove one and the whole structure will fall down around you, often when you least need it to.
We can’t freeze our systems with superglue. If we are not continually improving, then we are going backwards. There is no steady state when it comes to quality.