Requirement: Changes meet their intended objectives and pre-defined effectiveness criteria. Any deviations from those criteria are adequately assessed, accepted and managed/justified. Whenever possible, quantitative data are leveraged to objectively determine change effectiveness (e.g. statistical confidence and coverage).
Important Points: CQV activities can tell you if the intended objective is met. Effectiveness reviews must include:
· Sufficient data points, as described in the implementation plan, gathered to a described timeline, before an assessment of the change is made.
· The success criteria should be achieved. If not, the reasons they were not achieved should be assessed, along with mitigation steps to address them, including reverting to the previous operating state where appropriate. This may require the proposal of a subsequent change or an amendment of the implementation plan to ensure success.
· Data and knowledge gathered from implementation of the change should be shared with the development function and other locations, as appropriate, to ensure that learning can be applied to products under development or to similar products manufactured at the same or other locations.

Requirement: As part of the quality risk management activities, residual risks are assessed and managed to acceptable levels, and appropriate adaptations of procedures and controls are implemented.
Important Points: These are action items in the change control. As part of the closure activities, revise the risk assessment, clearly delineating the pre- and post-implementation phases of the assessment.

Requirement: Any unintended consequences or risks introduced as a result of changes are evaluated, documented, accepted and handled adequately, and are subject to a pre-defined monitoring timeframe.
Important Points: Leverage the deviation system, prior to or after change closure.

Requirement: Any post-implementation actions needed (including those for deviations from pre-defined acceptance criteria and/or CAPAs) are identified and adequately completed.
Important Points: If you waterfall into a CAPA system, it is important to include effectiveness reviews that are tied to the change, and not just to the root cause.

Requirement: Relevant risk assessments are updated post-effectiveness assessment. New product/process knowledge resulting from those risk assessments is captured in the appropriate Quality and Operations documents (e.g. SOPs, Reports, Product Control Strategy documents, etc.).

Requirement: Changes are monitored via ongoing monitoring systems to ensure maintenance of a state of control, and lessons learned are captured and shared/communicated.
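The "statistical confidence and coverage" called out above can be made concrete. One common sketch (one of several valid approaches, not the only acceptable one) is the zero-failure success-run relationship, n ≥ ln(1−C)/ln(R): the number of consecutive conforming data points needed to claim, with confidence C, that at least a proportion R of output conforms. The function name below is illustrative, not drawn from any cited standard.

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Smallest n of consecutive passing results such that, with the stated
    confidence, at least `reliability` of the population conforms (zero
    failures allowed). Derived from reliability**n <= 1 - confidence."""
    if not (0 < confidence < 1 and 0 < reliability < 1):
        raise ValueError("confidence and reliability must be in (0, 1)")
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# 95% confidence that at least 90% of units conform -> 29 conforming points
print(success_run_sample_size(0.95, 0.90))  # 29
```

This gives a defensible, pre-defined number of data points for the implementation plan; if failures are to be tolerated, or the data are variable rather than attribute, binomial or tolerance-interval methods would be used instead.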
ICH Q12 “Technical and Regulatory Considerations for Pharmaceutical Product Lifecycle Management” was adopted by the ICH in Singapore, which means Q12 is now in Stage 5, Implementation. Implementation should be interesting as concepts like “established conditions” and “product lifecycle management” which sit at the core of Q12 are still open for interpretation as Q12 is implemented in specific regulatory markets.
This draft guidance is now in a review period by regulatory agencies, which means no public comments; it will be applied on a 6-month trial basis by PIC/S participating authorities, which include the US Food and Drug Administration and other regulators across Europe, Australia, Canada, South Africa, Turkey, Iran, Argentina and more.
This document is aligned to ICH Q10, and there should be few surprises in this. Given the PIC/S concern that “ongoing continual improvement has probably not been realised to a meaningful extent. The PIC/S QRM Expert Circle, being well-placed to focus on the QRM concepts of the GMPs and of ICH Q10, is seeking to train GMP inspectors on what a good risk-based change management system can look like within the PQS, and how to assess the level of effectiveness of the PQS in this area”, it is a good idea to start aligning now to stay ahead of the curve.
“Changes typically have an impact assessment performed within the change control system. However, an impact assessment is often not as comprehensive as a risk assessment for the proposed change.”
This is a critical thing that agencies have been discussing for years. There are a few key takeaways.
The difference between impact and risk is critical. Impact is best thought of as “What do I need to do to make the change.” Risk is “What could go wrong in making this change?” Impact focuses on assessing the impact of the proposed change on various things such as on current documentation, equipment cleaning processes, equipment qualification, process validation, training, etc. While these things are very important to assess, asking the question about what might go wrong is also important as it is an opportunity for companies to try to prevent problems that might be associated with the proposed change after its implementation.
This 8-page document really focuses on the absence of clear links between risk assessments, proposed control strategies and the design of validation protocols.
The guidance is very concerned about appropriately classifying changes and using product data to drive decisions. While not specifying it in so many words, one of the first things that popped to my mind was how we designate changes as like-for-like in the absence of supporting data. Changes that are assigned a like-for-like classification are often not risk-assessed and receive limited oversight from a GMP perspective. These can sometimes result in major problems for companies, and it is a classification I think people are way too quick to rush to.
It is fascinating to look at appendix 1, which really lays out some critical goals of this draft guidance: better risk management, real time release, and innovative approaches to process validation. This is sort of the journey we are all on.
Gilbert’s Behavior Engineering Model (BEM) presents a concise way to consider both the environmental and the individual influences on a person’s behavior. The model suggests that a person’s environment shapes behavior through information, instrumentation, and motivation – examples include feedback, tools, and financial incentives, respectively. It also suggests that an individual’s behavior is influenced by their knowledge, capacity, and motives – examples include training/education, physical or emotional limitations, and what drives them, respectively. Let’s look at some further examples to better understand the variability of individual behavioral influences and see how they may negatively impact data integrity.
Good article in Pharmaceutical Online last week. It cannot be stated enough, and it is good that folks like Kip keep saying it: to understand data integrity we need to understand behavior – what people do and say – and realize it is a means to an end. It is very easy to focus on behaviors, the observable acts that can be seen and heard by management, auditors and other stakeholders, but what is more critical is to design systems that drive the behaviors we want, and to recognize that behavior and its causes are extremely valuable as the signal for improvement efforts to anticipate, prevent, catch, or recover from errors.
We do this by realizing that error-provoking aspects of design, procedures, processes, and human nature exist throughout our organizations – and that people cannot perform better than the organization supporting them.
Define the Scope of Work
Design Consideration:
· Identify the critical steps.
· Consider the possible errors associated with each critical step and the likely consequences.
· Ponder the "worst that could happen."
· Consider the appropriate human performance tool(s) to use.
· Identify other controls, contingencies, and relevant operating experience.
Human Error Considerations: When tasks are identified and prioritized, and resources are properly allocated (e.g. supervision, tools, equipment, work control, engineering support, training), human performance can flourish. These organizational factors create a unique array of job-site conditions – a good work environment – that sets people up for success. Human error increases when expectations are not set, tasks are not clearly identified, and resources are not available to carry out the job.

Manage Controls
Design Consideration: The error precursors – conditions that provoke error – are reduced. This includes things such as:
· Unexpected conditions
· Workarounds
· Departures from the routine
· Unclear standards
· Need to interpret requirements
Human Error Considerations: Properly managing controls depends on eliminating the error precursors that challenge the integrity of controls and allow human error to become consequential.

Apply Proactive Risk Management
Design Consideration: When risk is properly analyzed we can take appropriate action to mitigate it. Include criteria such as the following in risk assessments:
· Adverse environmental conditions (e.g. impact of gowning, noise, temperature, etc.)
· Unclear roles/responsibilities
· Time pressures
· High workload
· Confusing displays or controls
Addressing risk through engineering and administrative controls is a cornerstone of a quality system.
Human Error Considerations: Strong administrative and cultural controls can withstand human error. Controls are weakened when conditions are present that provoke error. Eliminating error precursors in the workplace reduces the incidence of active errors.

Perform Work
Design Consideration: Utilize error reduction tools as part of all work. Examples include:
· Self-checking
o Questioning attitude
o Stop when unsure
o Effective communication
o Procedure use and adherence
o Peer-checking
o Second-person verifications
o Turnovers
Engineering controls can often take the place of some of these; for example, second-person verifications can be replaced by automation.
Human Error Considerations: Appropriate processes and tools are in place to ensure that the organizational processes and values adequately support performance. Because people err and make mistakes, it is all the more important that controls are implemented and properly maintained.

Feedback and Improvement
Design Consideration: Continuous improvement is critical. Topics should include:
· Surprises or unexpected outcomes
· Usability and quality of work documents
· Knowledge and skill shortcomings
· Minor errors during the activity
· Unanticipated workplace conditions
· Adequacy of tools and resources
· Quality of work planning/scheduling
· Adequacy of supervision
Errors during work are inevitable. If we strive to understand and address even inconsequential acts, we can strengthen controls and make future performance better.
Human Error Considerations: Vulnerabilities with controls can be found and corrected when management decides it is important enough to devote resources to the effort. The fundamental aim of oversight is to improve resilience to significant events triggered by active errors in the workplace – that is, to minimize the severity of events. Oversight controls provide opportunities to see what is happening, to identify specific vulnerabilities or performance gaps, to take action to address them, and to verify that they have been resolved.
The decisions we make are often complex and uncertain. A good decision-making process is critical to success – knowing how we make decisions, and how to confirm we are making good decisions, allows us to bring quality to our decisions. To do this we need to understand what a quality decision looks like and how to obtain it.
There is no universal best process or set of steps to follow in making good decisions. However, any good decision process needs to have the idea of decision-quality as the measurable destination.
Decisions do not come ready to be made. They must be shaped, starting by declaring the decision that must be made. All decisions have one thing in common – the best choice creates the best possibility of what you truly want. To find that best choice, you need decision-quality, and you must recognize it as the destination when you get there. You cannot reach a good decision – achieve decision-quality – if you are unable to visualize or describe it. Nor can you say you have accomplished it if you cannot recognize it when it is achieved.
What makes a Good Decision?
The six requirements for a good decision are: (1) an
appropriate frame, (2) creative alternatives, (3) relevant and reliable
information, (4) clear values and trade-offs, (5) sound reasoning, and (6)
commitment to action. To judge the quality of any decision before you act, each
requirement must be met and addressed with quality. I like representing it as a
chain, because a decision is no better than the weakest link.
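The chain metaphor can be sketched in a few lines of code: score each of the six links and take the minimum, not the average. The link names and the 0-100 scale below are my own illustration, not a formal decision-quality standard.

```python
# The six links of decision quality; a decision is no better than its weakest link.
LINKS = ("frame", "alternatives", "information", "values_and_tradeoffs",
         "sound_reasoning", "commitment_to_action")

def decision_quality(scores: dict) -> tuple:
    """Return (weakest link, its score) given a 0-100 score per link."""
    missing = [link for link in LINKS if link not in scores]
    if missing:
        raise ValueError(f"unscored links: {missing}")
    weakest = min(LINKS, key=lambda link: scores[link])
    return weakest, scores[weakest]

# A decision with brilliant analysis but a shaky frame is still a shaky decision:
print(decision_quality({"frame": 40, "alternatives": 85, "information": 90,
                        "values_and_tradeoffs": 80, "sound_reasoning": 95,
                        "commitment_to_action": 75}))  # ('frame', 40)
```

Scoring the links this way also shows where to put attention: the lowest-scoring link is the gap to close.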
The frame specifies the problem or opportunity you are
tackling, asking what is to be decided. It has three parts: purpose in making the decision; scope of what
will be included and left out; and your perspective including your point of
view, how you want to approach the decision, what conversations will be needed,
and with whom. Agreement on framing is essential, especially when more than one
party is involved in decision making. What is important is to find the frame
that is most appropriate for the situation. If you get the frame wrong, you will
be solving the wrong problem or not dealing with the opportunity in the correct
way.
The next three links are: alternatives – defining what you
can do; information – capturing what you know and believe (but cannot control),
and values – representing what you want and hope to achieve. These are the
basis of the decision and are combined using sound reasoning, which guides you
to the best choice (the alternative that gets you the most of what you want in
light of what you know). With sound reasoning, you reach clarity of
intention and are ready for the final element – commitment to action.
Asking: “What is the decision I should be making?” is not a
simple question. Furthermore, asking the question “On what decision should I be
focusing?” is particularly challenging. It is a question, however, that is
important to be asked, because you must know what decision you are making. It
defines the range within which you have creative and compelling alternatives.
It defines constraints. It defines what is possible. Many organizations fail to
create a rich set of alternatives and simply debate whether to accept or reject
a proposal. The problem with this approach is that people frequently latch on
to ideas that are easily accessible, familiar or aligned directly with their
experiences.
Exploring alternatives is a combination of analysis, rigor, technology and judgement. This is about the past and present – requiring additional judgement to anticipate future consequences. What we know about the future is uncertain and therefore needs to be described with possibilities and probabilities. Questions like: “What might happen?” and “How likely is it to happen?” are difficult and often compound. To produce reliable judgements about future outcomes and probabilities you must gather facts, study trends and interview experts while avoiding distortions from biases and decision traps. When one alternative provides everything desired, the choice among alternatives is not difficult. Trade-offs must be made when alternatives do not provide everything desired. You must then decide how much of one value you are willing to give up to receive more of another.
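One way to make such trade-offs explicit is a simple weighted-sum comparison, where the weights encode how much of one value you will give up for another. The alternatives, attributes and weights below are invented purely for illustration.

```python
def weighted_score(alternative: dict, weights: dict) -> float:
    """Weighted sum of attribute scores (0-1 each); weights encode trade-offs."""
    return sum(w * alternative.get(attr, 0.0) for attr, w in weights.items())

# Hypothetical change decision: trade risk reduction against cost and speed.
weights = {"risk_reduction": 0.5, "cost_avoidance": 0.3, "speed": 0.2}
alternatives = {
    "upgrade_equipment": {"risk_reduction": 0.9, "cost_avoidance": 0.4, "speed": 0.3},
    "revise_procedure":  {"risk_reduction": 0.5, "cost_avoidance": 0.8, "speed": 0.9},
}
best = max(alternatives, key=lambda name: weighted_score(alternatives[name], weights))
print(best)  # revise_procedure (0.67 vs 0.63)
```

The value of the exercise is less the arithmetic than the conversation the weights force: agreeing how much risk reduction is worth relative to cost and speed is exactly the "clear values and trade-offs" link.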
Commitment to action is reached by involving the right
people in the decision efforts. The right people must include individuals who
have the authority and resources to commit to the decision and to make it stick
(the decision makers) and those who will be asked to execute the decided-upon
actions (the implementers). Decision makers are frequently not the implementers
and much of a decision’s value can be lost in the handoff to implementers. It
is important to always consider the resource requirements and challenges for
implementation.
These six requirements of decision-quality can be used to
judge the quality of the decision at the time it is made. There is no need to
wait six months or six years to assess its outcome before declaring the
decision’s quality. By meeting the six requirements you know at the time of the
decision you made a high-quality choice. You cannot simply say: “I did all the
right steps.” You have got to be able to judge the decision itself, not just
how you got to that decision. When you ask, “How good is this decision if we
make it now?” the answer must be a central part of your process. The piece
missing in the process may well be in the underlying material and research, and
that is a piece that must go right.
Decision-quality is all about reducing comfort zone bias – when people do what they know how to do, rather than what is needed to make a strong, high-quality decision. You overcome the comfort zone bias by figuring out where there are gaps. Let us say the gap is with alternatives. Your process then becomes primarily a creative process to generate alternatives instead of gathering a great deal more data. Maybe we are awash in a sea of information, but we just have not done the reasoning and modelling and understanding of the consequences. This becomes more of an analytical effort. The specific gaps define where you should put your attention to improve the quality of the decision.
Leadership needs to have clearly defined decision rights and
understand that the role of leadership is assembling the right people to make
quality decisions. Once you know how to recognize decision-quality, you need an
effective and efficient process to get there and that process involves many
things including structured interactions between decision maker and decision staff,
remembering that productive discussions result when multiple parties are
involved in the decision process and differences in judgement are present.
Beware Advocacy
The most common decision process tends to be an advocacy
decision process – you are asking somebody to sell you an answer. Once you are
in advocacy mode, you are no longer in a decision-quality mode and you cannot
get the best choice out of an advocacy decision process. Advocacy suppresses
alternatives. Advocacy forces confirming evidence bias and means selective
attention to what supports your position. Once in advocacy mode, you are really
in a sales mode and it becomes a people competition.
When you want quality in a decision, you want the alternatives to compete, not the people. From the decision board’s perspective, when you are making a decision, you want to have multiple alternatives in front of you and you want to figure out which of these alternatives beats the others in terms of understanding the full consequences in risk, uncertainty and return. Among the alternatives, one will show up better. If you can make this happen, then it is not the advocate selling it; it is you looking at which of these alternatives gives the most value for the investment.
The role outcomes play in the measuring of decision quality
Always think of decisions and outcomes as separate because
when you make decisions in an uncertain world, you cannot fully control the
outcomes. When looking back from an outcome to a decision, the only thing you
can really tell is if you had a good outcome or a bad outcome. Hindsight bias
is strong, and once triggered, it is hard to put yourself back into
understanding what decisions should have been made with what you knew, or could
have known, at the time.
In understanding how we use outcomes in terms of evaluating
decisions, you need to understand the importance of documenting the decision
and the decision quality at the time of the decision. Ask yourself, if you were
going to look back two years from now, what about this decision file answers
the questions: “Did we make a decision that was good?” and “What can we learn
about the things about which we had some questions?” This kind of documentation
is different from what people usually do. What is usually documented is the
approval and the working process. There is usually no documentation answering
the question: “If we are going to look back in the future, what would we need
to know to be able to learn about making better decisions?”
The reason you want to look back is because that is the way
you learn and improve the whole decision process. It is not for blaming; in the
end, what you are trying to show in documentation is: “We made the best
decision we could then. Here is what we thought about the uncertainties. Here
is what we thought were the driving factors.” It’s about having a learning
culture.
When decision makers and individuals understand the
importance of reaching quality in each of the six requirements, they feel
meeting those requirements is a decision-making right and should be demanded as
part of the decision process. To be in a position to make a good decision, they
know they need a good frame and significantly different alternatives; without
these, they cannot reach a powerful, correct conclusion and
make a decision. From a decision-maker’s perspective, these are indeed needs and
rights to be thought about. From a decision support perspective, these needs and
rights are required to be able to position the decision maker to make a good
choice.
Building decision-quality enables measurable value creation, and its framework can be learned, implemented and measured. Decision-quality helps you navigate the complexity and uncertainty of significant and strategic choices, and avoid mega-biases and big decision traps.