In the continual saga of companies making fundamental GMP mistakes, Gilead has recalled two lots of its coronavirus treatment drug Remdesivir because of the “presence of glass particulates.”
If only there existed international standards on visual inspection and a solid set of best practices on lyophilization. Oh wait, there are.
But then, Gilead has a multi-year track record of deficiencies in its testing and manufacturing processes. In all fairness, they contract manufacturing to Pfizer’s McPherson site… oh wait, that site got an FDA 483 in 2018 citing significant violations of good manufacturing practices, such as an inadequate investigation into the detected presence of cardboard in vial samples.
We deserve better manufacturers. Companies need to take the quality of their products seriously. We are either always improving or always one step away from the sort of press Gilead is getting.
The Is/Is-Not matrix (proper name: the Kepner-Tregoe Problem Specification Statement) is a great problem-solving tool that I usually recommend for framing the problem. It builds on the 5W2H methodology and then asks what is different between the problem and what has been going right. A sketch of how the matrix might be captured as a simple data structure follows the table below.
What
IS: What specific objects have the deviation? What is the specific deviation?
IS NOT: What similar object(s) could reasonably have the deviation, but do not? What other deviations could reasonably be observed, but are not?

Where
IS: Where is the object when the deviation is observed (geographically)? Where is the deviation on the object?
IS NOT: Where else could the object be when the deviation is observed, but is not? Where else could the deviation be located on the object, but is not?

When
IS: When was the deviation first observed (in clock and calendar time)? When since that time has the deviation been observed? Is there a pattern? When, in the object’s history or life cycle, was the deviation first observed?
IS NOT: When else could the deviation have been observed, but was not? When since that time could the deviation have been observed, but was not? When else, in the object’s history or life cycle, could the deviation have first been observed, but was not?

Extent
IS: How many objects have the deviation? What is the size of a single deviation? How many deviations are on each object? What is the trend (in the object, in the number of occurrences of the deviation, in the size of the deviation)?
IS NOT: How many objects could have the deviation, but do not? What other size could the deviation be, but is not? How many deviations could there be on each object, but are not? What could the trend be, but is not (in the object, in the number of occurrences, in the size)?

Who
IS: Who is involved (avoid blame; stick to roles, shifts, etc.)? To whom, by whom, or near whom does this occur?
IS NOT: Who is not involved? Is there a trend of a specific role, shift, or other distinguishing factor?
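To make the matrix easier to use in practice, here is a minimal sketch of how it might be captured as a simple data structure during an investigation. The structure and the partially filled glass-particulate entries are hypothetical illustrations, not a prescribed format.

```python
# Hypothetical sketch: capturing an Is / Is-Not problem specification in code.
# The example entries (loosely inspired by a glass-particulate deviation) are
# illustrative only and intentionally incomplete.

DIMENSIONS = ["what", "where", "when", "extent", "who"]

def new_specification() -> dict:
    """Return an empty Is/Is-Not matrix across the five dimensions."""
    return {dim: {"is": [], "is_not": []} for dim in DIMENSIONS}

spec = new_specification()
spec["what"]["is"].append("Glass particulates observed in lyophilized vials of lot A")
spec["what"]["is_not"].append("No particulates in lot B filled on the same line")
spec["when"]["is"].append("First observed at visual inspection after lyophilization")
spec["when"]["is_not"].append("Not observed during in-process checks at filling")

# The distinctions worth chasing are where the IS and IS NOT answers differ:
for dim in DIMENSIONS:
    if spec[dim]["is"] and spec[dim]["is_not"]:
        print(f"{dim}: compare {spec[dim]['is']} with {spec[dim]['is_not']}")
```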
In the latest edition of “Executive utilizes Harvard Business Review to whitewash their activities,” we have Hubert Joly, CEO of Best Buy, who offers three lessons:
Making meaningful purpose a genuine priority of business operations
The “human magic” of empowered and self-directed employees
Admitting you don’t have all the answers is a sign of strong leadership.
Let’s see how Best Buy puts those practices in place.
It is hard to take the editors of HBR seriously on what a good company culture looks like when they whitewash corporate leaders with this sort of track record.
Best Buy probably paid a lot of money for the reputation bump just before Christmas.
All three of Stephanie’s points are right on the nose, and I’ve posted a bunch on these topics. Definitely go and read her post.
What I want to delve deeper into is Stephanie’s point that “Deviation systems should also be built to triage events into risk-based categories with sufficient time allocated to each category to drive risk-based investigations and focus the most time and effort on the highest risk and most complex events.”
That is an accurate breakdown, and exactly what regulators are asking for. However, I think the implementation of risk-based categories can sometimes lead to confusion, so it is worth spending some time unpacking the concept.
Risk is the effect of uncertainty on objectives. Risk is often described in terms of risk sources, potential events, their consequences, and their likelihoods (which is where we get likelihood × severity from).
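As a quick illustration of what that likelihood × severity shorthand usually means in practice, here is a minimal sketch of a generic scoring function. The 1-to-5 scales and the category cut-offs are assumptions for the example, not values from any standard.

```python
# Illustrative only: a generic likelihood x severity risk score.
# The 1-5 scales and cut-offs below are assumptions, not from ISO 31000 / IEC 31010.

def risk_category(likelihood: int, severity: int) -> str:
    """Map a 1-5 likelihood and 1-5 severity to a coarse risk category."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must each be between 1 and 5")
    score = likelihood * severity
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_category(likelihood=2, severity=5))  # medium (2 x 5 = 10)
```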
There are, however, many types of uncertainty. IEC 31010 “Risk management – Risk assessment techniques” lists the following examples:
uncertainty as to the truth of assumptions, including presumptions about how people or systems might behave
variability in the parameters on which a decision is to be based
uncertainty in the validity or accuracy of models which have been established to make predictions about the future
events (including changes in circumstances or conditions) whose occurrence, character or consequences are uncertain
uncertainty associated with disruptive events
the uncertain outcomes of systemic issues, such as shortages of competent staff, that can have wide-ranging impacts which cannot be clearly defined
lack of knowledge which arises when uncertainty is recognized but not fully understood
unpredictability
uncertainty arising from the limitations of the human mind, for example in understanding complex data, predicting situations with long-term consequences or making bias-free judgments.
Most of these are, at best, only obliquely relevant to risk-categorizing deviations.
So it is important to first build the risk categories on consequences. At the end of the day, these are the consequences that matter in the pharmaceutical/medical device world:
harm to the safety, rights, or well-being of patients, subjects or participants (human or non-human)
compromised data integrity so that confidence in the results, outcome, or decision dependent on the data is impacted
These are some pretty hefty areas, and they are hard for the average user to get their mind around. This is why building good requirements and understanding how systems work are so critical. Building breadcrumbs into our procedures that tell folks which deviations fall into which category is a best practice.
There is nothing wrong with recognizing that different areas have different decision trees. Harm to safety in GMP can mean different things than safety in a GLP study.
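To illustrate what those breadcrumbs could look like, here is a minimal sketch of a consequence-based lookup. The deviation types and category assignments are hypothetical examples, not a recommended taxonomy, and each area (GMP manufacturing, a GLP study) would keep its own table.

```python
# Hypothetical sketch: consequence-based "breadcrumbs" for deviation triage.
# Deviation types and category assignments are illustrative examples only.

CONSEQUENCE_CATEGORIES = {
    "patient_safety": "Harm to the safety, rights, or well-being of patients, subjects or participants",
    "data_integrity": "Compromised confidence in results, outcomes, or decisions dependent on the data",
    "other": "No credible impact on patient safety or data integrity",
}

# Each area keeps its own table; this one is a made-up GMP example.
GMP_BREADCRUMBS = {
    "foreign particulate in finished product": "patient_safety",
    "uninvestigated out-of-specification result": "data_integrity",
    "housekeeping finding in the warehouse": "other",
}

def triage(deviation_type: str, breadcrumbs: dict) -> str:
    """Return the consequence category for a deviation, or flag it for manual review."""
    return breadcrumbs.get(deviation_type, "needs manual categorization")

print(triage("foreign particulate in finished product", GMP_BREADCRUMBS))  # patient_safety
```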
The second place I’ve seen this go wrong has to do with likelihood, and with folks confusing symptom with problem and problem with cause.
(Image: a bridge with a gap)
All deviations start as a situation that is different in some way from expected results. Deviations begin with a symptom and, through analysis, end at a root cause. So when building your decision tree, make sure it looks at symptoms and at how the symptom is observed. That is surprisingly hard to do, which is why a lot of deviation criticality scales tend to focus only on severity.
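Here is a minimal sketch of what a symptom-first decision tree might look like: it assigns an initial criticality from what was actually observed and the worst credible consequence, before the root cause is known. The questions and tiers are illustrative assumptions, not a standard scale.

```python
# Hypothetical sketch: a symptom-first decision tree for initial deviation triage.
# The questions and criticality tiers are illustrative assumptions, not a standard.

def initial_criticality(symptom_in_product: bool,
                        worst_credible_consequence: str,
                        detected_before_release: bool) -> str:
    """Assign an initial criticality from the observed symptom, before root cause is known."""
    if worst_credible_consequence == "patient_safety":
        # Observed in product and potentially patient-reaching: treat as critical.
        if symptom_in_product and not detected_before_release:
            return "critical"
        return "major"
    if worst_credible_consequence == "data_integrity":
        return "major"
    return "minor"

# A symptom like "glass particulates seen at visual inspection of filled vials"
# enters as symptom_in_product=True, worst_credible_consequence="patient_safety".
print(initial_criticality(True, "patient_safety", detected_before_release=True))  # major
```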
The SIPOC is a great tool for understanding data and fits nicely into a larger data process mapping initiative.
By understanding where the data comes from (Supplier), what it is used for (Customer), and what is done to the data on its trip from supplier to customer (Process), you can:
Understand the requirements that the customer has for the data
Understand the rules governing how the data is provided
Determine the gap between what is required and what is provided
Track the root cause of data failures – both of type and of quality
Create requirements for modifying the processes that move the data
The SIPOC can be applied at many levels of detail. At a high level, for example, batch data is used to determine supply. At a detailed level, a rule for calculating a data element can result in an unexpected number because of a condition that was not anticipated.
SIPOC for manufacturing data utilizing the MES (high level)
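As a rough illustration of how such a SIPOC could be captured alongside a data process map, here is a minimal sketch. The system names, data elements, and requirements are hypothetical examples, not a description of any particular MES.

```python
# Hypothetical sketch of a SIPOC record for a single data flow.
# System names, data elements, and requirements are illustrative examples only.

from dataclasses import dataclass, field

@dataclass
class SIPOC:
    supplier: str                                  # where the data comes from
    inputs: list = field(default_factory=list)     # the data elements provided
    process: str = ""                              # what is done to the data on its trip
    outputs: list = field(default_factory=list)    # the data elements produced
    customer: str = ""                             # who uses the data, and for what decision
    customer_requirements: list = field(default_factory=list)

# High-level example: batch data from the MES used to determine supply.
batch_data_flow = SIPOC(
    supplier="MES (manufacturing execution system)",
    inputs=["batch ID", "yield", "disposition status"],
    process="Aggregate released batches and calculate available inventory",
    outputs=["available inventory by product"],
    customer="Supply planning",
    customer_requirements=["complete set of released batches", "refreshed daily"],
)

# Comparing customer_requirements against what the supplier actually provides is
# where the gap analysis and the root causes of data failures come from.
print(batch_data_flow.customer_requirements)
```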