Maybe you’ve been there too: you need to take a risk-based approach to environmental monitoring, so you reach for HACCP or FMEA and realize those tools simply do not provide the information you need to distribute monitoring in a way that best verifies processes are operating under control.
What you want to do instead is build a heat map showing the relative probability of contamination in a defined area or room, covering areas including:
Amenability of equipment and surfaces to cleaning and sanitization
Personnel presence and flow
Proximity to open product or exposed direct product-contact material
Interventions/operations by personnel and their complexity
Frequency of interventions/process operations.
This approach builds on the design activities and is part of a set of living risk assessments that inform the environmental monitoring portion of your contamination control strategy.
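As a sketch of how such a heat map score might be computed, here is a minimal example. The factor names, the 1–5 ratings, the example locations, and the multiplicative weighting are all illustrative assumptions, not a prescribed method:

```python
# Relative contamination-risk score per sampling location, assuming each
# factor is rated 1 (low) to 5 (high) by SMEs. Factor names loosely follow
# the areas listed above; locations and ratings are invented for illustration.

FACTORS = [
    "cleanability",              # how hard the surface is to clean/sanitize
    "personnel",                 # personnel presence and flow
    "proximity",                 # proximity to open product / direct product-contact material
    "intervention_complexity",   # complexity of interventions by personnel
    "intervention_frequency",    # frequency of interventions/process operations
]

def location_score(ratings: dict) -> int:
    """Multiply factor ratings so that locations where risks compound rank highest."""
    score = 1
    for factor in FACTORS:
        score *= ratings[factor]
    return score

locations = {
    "filling needle area": dict(cleanability=4, personnel=2, proximity=5,
                                intervention_complexity=4, intervention_frequency=3),
    "transfer hatch":      dict(cleanability=2, personnel=3, proximity=2,
                                intervention_complexity=2, intervention_frequency=4),
}

# Rank locations to drive where monitoring sites are placed
for name, ratings in sorted(locations.items(),
                            key=lambda kv: location_score(kv[1]), reverse=True):
    print(f"{name}: {location_score(ratings)}")
```

The multiplicative scoring is one design choice among several; an additive or weighted scheme works the same way, as long as the ranking drives where monitoring effort goes.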
The FDA recently released a Form 483 it handed to Catalent Belgium following an inspection of its 265,000 square-foot facility in Brussels in October 2021. Catalent is a pretty sizable entity, so it is very valuable to see what we can learn from their observations.
Failure to adequately assess an unexplained discrepancy or deviation
“Standard Operating Procedure STB-QA-0010, Deviation Management, v21 classifies deviations as minor, major or critical based on the calculation of a risk priority number, with a HEPA filter failure within a Grade A environment often classified as minor. Specifically, Deviation 327567 (Date of occurrence 04 March 2021) was for a HEPA filter failure on the <redacted> fill line, with a breach at the HEPA filter frame.”
This one is more common than it should be. I’ve recently written about categorization and criticality of events. I want to stress the term “potential” when addressing impact in the classification of events.
Control barriers exist for a reason. If you breach a control barrier in any way, you have the potential to impact product or environment. It is easy for experienced SMEs to say “But this has never had any real impact before” and downgrade the deviation classification. Before long it becomes the norm that HEPA filter failures are minor because they never have impact. And then one does. Then there are shortages, or worse.
It is important to avoid that complacency and treat every control barrier failure with the same level of investigation, based on its potential to impact.
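One way to build that principle into a classification scheme is to floor the severity of any control-barrier breach, regardless of what the risk priority number says. This is an illustrative sketch, not Catalent’s actual SOP logic; the RPN thresholds are invented:

```python
# Classify a deviation from a risk priority number (RPN), but never let a
# breach of a Grade A control barrier fall below "major" -- potential impact,
# not historical impact, drives the floor. Thresholds here are illustrative.

def classify(rpn: int, breaches_control_barrier: bool) -> str:
    if rpn >= 100:
        level = "critical"
    elif rpn >= 40:
        level = "major"
    else:
        level = "minor"
    # A HEPA frame breach with a low RPN still gets investigated as major.
    if breaches_control_barrier and level == "minor":
        level = "major"
    return level

print(classify(rpn=24, breaches_control_barrier=True))   # a HEPA frame breach
```

The point of the floor is exactly the complacency trap described above: a history of “no real impact” can drag the calculated RPN down, but it cannot drag the classification below the level the barrier’s purpose demands.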
The other problem here is failure to identify trends and deal with them. I can honestly say that the last thing I ever want anyone, especially an inspector, to write about something where I have quality oversight is a failure to investigate multiple control barrier events.
“Other GMP manufacturing areas have a similar elevated level of HEPA filter failures, with the root cause of the HEPA filter failures unknown. There is no CAPA in support of correction action. Your firm failed to ensure your investigations identify appropriate root causes and you failed to implement sustainable corrective action and preventive action (CAPA).”
Contamination Control function
Observations 2 and 3 are doozies, but there is probably a lack of expertise involved here. The site is using out-of-date and inadequate methods in its validation. Hire a strong contamination control expert and leverage them. Build expertise in the organization through a robust training program. Connect this to all relevant quality systems and processes.
Corrective Maintenance and Troubleshooting
“Equipment and facilities used in the manufacture of drug product are not adequately maintained or appropriately designed to facilitate operations for their intended use.“
This is starting to feel a lot like my upcoming presentation at the 2022 ISPE Aseptic Conference, where I will be speaking on “Contamination Control, Risk and the Quality Management System”:
“Contamination Control is a fairly wide term used to mean “getting microbiologists out of the lab” and involved in risk management and the quality management system. This presentation will evaluate best practices in building a contamination control strategy and ensuring its use throughout the quality system. Leveraging a House of Quality approach, participants will learn how to: Create targeted/ risk based measures of contamination avoidance; Implement Key performance indicators to assess status of contamination control; and ensure a defined strategy for deviation management (investigations), CAPA and change management.”
When I teach an introductory risk management class, I usually use an icebreaker: “What is the riskiest activity you can think of doing?” Inevitably you get some version of skydiving, swimming with sharks, or jumping off bridges. This activity is great because it starts conversations around likelihood and severity. At heart, the question brings out the concept of risk important activities and the nature of controls.
The things people think of, such as skydiving, are great examples of activities surrounded by activities that control risk. The activity itself is based on reducing risk as low as possible and then proceeding along the safest possible pathway. These risk important activities are the mechanisms just before a critical step that:
Ensure the appropriate transfer of information and skill
Ensure the appropriate number of actions to reduce risk
Influence the presence or effectiveness of barriers
Influence the ability to maintain positive control of the moderation of hazards
Risk important activities are a concept central to safety thinking and are at the heart of many human error reduction tools and practices. Risk important activities are all about thinking through the right set of controls, building them into the procedure, and successfully executing them before reaching the critical step of no return. Checklists are a great example of this mindset at work, but there are many ways of doing them.
Hospitals use a great thought process, the “Five Rights of Safe Medication Practices”: 1) right patient, 2) right drug, 3) right dose, 4) right route, and 5) right time. Next time you are getting medication in the doctor’s office or hospital, evaluate just what your caregiver is doing and how it fits into that process. Those are examples of risk important activities.
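The checklist logic above can be sketched as a gate before the irreversible step. This is a toy illustration of the pattern, not clinical software; the names are assumptions:

```python
# A checklist as a risk important activity: every "right" must be positively
# verified before the critical, irreversible step (administering the dose)
# is allowed to proceed. An unverified or missing item blocks the step.

FIVE_RIGHTS = ["patient", "drug", "dose", "route", "time"]

def may_administer(verified: dict) -> bool:
    """Proceed only when every right has been positively verified."""
    return all(verified.get(r, False) for r in FIVE_RIGHTS)

checks = {"patient": True, "drug": True, "dose": True, "route": True, "time": False}
print(may_administer(checks))  # False: stop and resolve before proceeding
```

Note the default of `False` for anything not explicitly checked: the gate fails safe, which is the essence of a risk important activity placed before a point of no return.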
Assessing controls during risk assessment
Risk is affected by the overall effectiveness of any controls that are in place.
The key aspects of controls are:
the mechanism by which the controls are intended to modify risk
whether the controls are in place, are capable of operating as intended, and are achieving the expected results
whether there are shortcomings in the design of controls or the way they are applied
whether there are gaps in controls
whether controls function independently, or if they need to function collectively to be effective
whether there are factors, conditions, vulnerabilities or circumstances that can reduce or eliminate control effectiveness including common cause failures
A risk can have more than one control and controls can affect more than one risk.
We always want to distinguish between controls that change likelihood, consequences, or both, and controls that change how the burden of risk is shared between stakeholders.
Any assumptions made during risk analysis about the actual effect and reliability of controls should be validated where possible, with a particular emphasis on individual or combinations of controls that are assumed to have a substantial modifying effect. This should take into account information gained through routine monitoring and review of controls.
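One way to make this assessment concrete is a simple control register that records the aspects listed above and flags controls whose assumed effect is not backed by monitoring data. A minimal sketch, with illustrative control names:

```python
# Record the key aspects of each control and flag those whose assumed
# modifying effect has not been demonstrated in practice.
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    in_place: bool                   # is the control actually in place?
    operating_as_intended: bool      # capable of operating as intended?
    achieving_expected_results: bool # achieving the expected results?
    independent: bool                # effective alone, or only collectively?
    assumed_effect: str              # "likelihood", "consequence", or "both"

def needs_validation(c: Control) -> bool:
    """A control assumed to modify risk but not demonstrably working needs review."""
    return not (c.in_place and c.operating_as_intended and c.achieving_expected_results)

controls = [
    Control("HEPA filtration", True, True, False,
            independent=False, assumed_effect="likelihood"),
    Control("Second-person verification", True, True, True,
            independent=True, assumed_effect="likelihood"),
]
for c in controls:
    if needs_validation(c):
        print(f"Revisit assumptions for: {c.name}")
```

The `independent` flag matters when several controls are assumed to have a substantial combined effect: a common-cause failure can take out a whole set of collectively functioning controls at once.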
Risk Important Activities, Critical Steps and Process
Critical steps are the way we meet our critical-to-quality requirements: the activities that ensure our product or service meets the needs of the organization.
These critical steps are points of no return, where the work product is transformed into something else. Risk important activities are what we do to remove the danger of executing that critical step.
Beyond that critical step, you have rejection or rework. When I am cooking there is a lot of prep work, which can be a mixture of critical steps from which there is no return. If I break an egg wrong and get eggshell in my batter, some rework is necessary. This is true for all our processes.
The risk-based approach is to understand the critical steps and put mitigating controls in place.
We are thinking through the following:
Critical Step: The action that triggers irreversibility. Think in terms of critical-to-quality attributes.
Output: The desired result (positive) or the possible difficulty (negative)
Preconditions: Technical conditions that must exist before the critical step
Resources: What is needed for the critical step to be completed
Local factors: Things that could influence the critical step. When human beings are involved, this is usually what can influence the performer’s thinking and actions before and during the critical step
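The framing above can be sketched as a simple model: a critical step does not start until its preconditions and resources are confirmed. The cooking example and field names are illustrative:

```python
# Model a critical step with the elements above and gate execution on its
# preconditions and resources being satisfied before the irreversible action.
from dataclasses import dataclass, field

@dataclass
class CriticalStep:
    action: str                  # the action that triggers irreversibility
    output: str                  # the desired result (or possible difficulty)
    preconditions: list = field(default_factory=list)
    resources: list = field(default_factory=list)
    local_factors: list = field(default_factory=list)

    def ready(self, satisfied: set) -> bool:
        """Proceed only when every precondition and resource is confirmed."""
        return all(item in satisfied for item in self.preconditions + self.resources)

step = CriticalStep(
    action="Add eggs to batter",
    output="Batter with no shell fragments",
    preconditions=["eggs cracked into separate bowl", "bowl inspected"],
    resources=["separate bowl"],
    local_factors=["time pressure", "distraction"],
)
print(step.ready({"eggs cracked into separate bowl", "bowl inspected", "separate bowl"}))
```

Cracking eggs into a separate bowl first is the risk important activity: it converts an irreversible mistake (shell in the batter) into a cheap, recoverable one.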
One cannot control risk, or even successfully identify it, unless a system is able to flexibly monitor both its own performance (what happens inside the system’s boundary) and what happens in the environment (outside the system’s boundary). Monitoring improves the ability to cope with possible risks.
When performing the risk assessment, challenge existing monitoring and ensure that the right indicators are in place. But remember, monitoring itself is a low-effectiveness control.
Ensure that there are leading indicators, which can be used as valid precursors for changes and events that are about to happen.
For each monitoring control, ask yourself the following:
How have the indicators been defined? (By analysis, by tradition, by industry consensus, by the regulator, by international standards, etc.)
When was the list created? How often is it revised? On which basis is it revised? Who is responsible for maintaining the list?
How many of the indicators are of the ‘leading’ type, and how many are ‘lagging’? Do indicators refer to single or aggregated measurements?
How is the validity of an indicator established (regardless of whether it is leading or lagging)? Do indicators refer to an articulated process model, or just to ‘common sense’?
For lagging indicators, how long is the typical lag? Is it acceptable?
What is the nature of the measurements? Qualitative or quantitative? (If quantitative, what kind of scaling is used?)
How often are the measurements made? (Continuously, regularly, every now and then?)
What is the delay between measurement and analysis/interpretation? How many of the measurements are directly meaningful and how many require analysis of some kind? How are the results communicated and used?
Are the measured effects transient or permanent?
Is there a regular inspection scheme or schedule? Is it properly resourced? Where does this measurement fit into the management review?
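The questions above lend themselves to a simple indicator register. This sketch, with invented indicator names and an assumed acceptable-lag threshold, flags lagging indicators whose lag is too long to act on:

```python
# Track each monitoring indicator with attributes drawn from the questions
# above, and flag lagging indicators whose lag makes them poor precursors.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    leading: bool        # leading (precursor) vs. lagging
    quantitative: bool   # qualitative or quantitative measurement
    lag_days: int        # typical lag; 0 for leading indicators
    defined_by: str      # analysis, tradition, regulator, standard, ...

def acceptable(ind: Indicator, max_lag_days: int = 30) -> bool:
    """Leading indicators pass; lagging ones must act within the allowed lag."""
    return ind.leading or ind.lag_days <= max_lag_days

indicators = [
    Indicator("gowning qualification pass rate", leading=True,
              quantitative=True, lag_days=0, defined_by="analysis"),
    Indicator("annual sterility failures", leading=False,
              quantitative=True, lag_days=365, defined_by="tradition"),
]
for ind in indicators:
    if not acceptable(ind):
        print(f"Lag too long to act on: {ind.name}")
```

Capturing `defined_by` makes the "by analysis, by tradition, by the regulator" question auditable: indicators kept purely by tradition are the first candidates when the list is revised.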
Last night, speaking at the DFW Audit SIG, one of the topics I wish I had gone a little deeper on was controls and how to gauge their strength.
As I am preparing to interview candidates for a records management position, I thought I would flesh out controls specific to the storage of and access to completed or archived paper records, such as forms, as an example.
These controls are applied at the record or system level and are meant to prevent a potential data integrity issue from occurring.
Generation and Reconciliation of Documents

For each record, consider who performs controlled issuance (strongest to weakest):
Individuals authorized by the quality unit from a designated unit (limited, centralized)
Individuals authorized by the quality unit (limited, decentralized)
Anyone (unlimited, decentralized), often the user of the record

Reconciliation of records (strongest to weakest):
Full reconciliation of records and pages based on a unique identifier
Full reconciliation of records and pages based on quantity issued
By a controlled process

Destruction of blank forms (strongest to weakest):
Performed by the issuing unit, quality unit oversight required (high level of evidence)
Performed by the operating or issuing unit, quality unit oversight required
Performed by the individual, quality unit oversight required (periodic walkthroughs, self-inspections and audits)
Storage and Access to completed and archived paper records

Office retention location

How records are removed and returned (strongest to weakest):
Limited conditions for removal (e.g., regulatory inspections), with a method of recording the removal and return of the record (e.g., archive management system, logbook); most use of documents is either in a controlled reading area or by scans
Method of recording the removal and return of the record (e.g., archive management system, logbook)
Method (e.g., logbook) of recording documents checked in/checked out

Physical access (strongest to weakest):
Card key access with entry and exit documented (stronger tiers)
Limited key access (weakest)

Periodic user access review: every 2 years
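A periodic access review is easy to let slip; a small sketch like this, with an assumed two-year cycle matching the interval above, can flag overdue accounts:

```python
# Flag archive user accounts overdue for the periodic access review,
# assuming a two-year review cycle. Dates are illustrative.
from datetime import date, timedelta

REVIEW_CYCLE = timedelta(days=730)  # every 2 years

def overdue(last_review: date, today: date) -> bool:
    """True when the time since the last review exceeds the review cycle."""
    return today - last_review > REVIEW_CYCLE

print(overdue(date(2019, 6, 1), date(2022, 3, 1)))  # True: review is overdue
```

The same check generalizes to any of the controls above that carry a periodicity, such as walkthroughs and self-inspections on the weaker destruction tier.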
There is also the need to consider controls for paper-to-electronic, electronic-to-paper, and my favorite beast, the true copy.
For paper records, a true copy is a picture of the original that keeps everything: a scan. The regulations state that you can get rid of the paper if you have a true copy. Many things called a true copy are probably not; to ensure an accurate true copy, add two more controls.
Documented verification of the scan (strongest to weakest):
Documented review by a second person from the quality unit for legibility, accuracy, and completeness
Documented review by a second person (not necessarily from the quality unit) for legibility, accuracy, and completeness
Documented verification by the person performing the scan for legibility, accuracy, and completeness
Discard of original allowed (strongest to weakest):
Yes, as defined by quality unit oversight, unless there is a seal, watermark, or other identifier that can’t be accurately reproduced electronically
Yes, performed by the operating unit, unless there is a seal, watermark, or other identifier that can’t be accurately reproduced electronically; quality unit oversight required
Yes, the individual can discard the original; quality unit oversight required
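Once the original is gone, the verified scan is the record, so it helps to be able to prove later that the stored file is the one that was reviewed. A common supporting control is a checksum recorded at scan time; a minimal sketch with an illustrative file name:

```python
# Compute a SHA-256 checksum of a scanned true copy. Recording this value at
# verification time lets a later reviewer confirm the stored file is the same
# one that was checked for legibility, accuracy, and completeness.
import hashlib

def file_sha256(path: str) -> str:
    """Hash the file in chunks so large scans don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# At scan time:        recorded = file_sha256("batch_record_scan.pdf")
# At any later review: file_sha256("batch_record_scan.pdf") == recorded
```

This does not replace the documented human review above; it only protects the reviewed artifact from silent alteration afterward.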
Gilbert’s Behavior Engineering Model (BEM) presents a concise way to consider both the environmental and the individual influences on a person’s behavior. The model suggests that a person’s environment supports impact to one’s behavior through information, instrumentation, and motivation. Examples include feedback, tools, and financial incentives (respectively), to name a few. The model also suggests that an individual’s behavior is influenced by their knowledge, capacity, and motives. Examples include training/education, physical or emotional limitations, and what drives them (respectively), to name a few. Let’s look at some further examples to better understand the variability of individual behavioral influences to see how they may negatively impact data integrity.
Good article in Pharmaceutical Online last week. It cannot be stated enough, and it is good that folks like Kip keep saying it: to understand data integrity we need to understand behavior, what people do and say, and realize it is a means to an end. It is very easy to focus on behaviors, the observable acts that can be seen and heard by management, auditors, and other stakeholders, but what is more critical is to design systems that drive the behaviors we want, and to recognize that behavior and its causes are extremely valuable as signals for improvement efforts to anticipate, prevent, catch, or recover from errors.
We do this by recognizing that error-provoking aspects of design, procedures, processes, and human nature exist throughout our organizations, and that people cannot perform better than the organization supporting them.
Human Error Considerations
Define the Scope of Work
·Identify the critical steps
·Consider the possible errors associated with each critical step and the likely consequences
·Ponder the “worst that could happen”
·Consider the appropriate human performance tool(s) to use
·Identify other controls, contingencies, and relevant operating experience
When tasks are identified and prioritized, and resources are properly allocated (e.g., supervision, tools, equipment, work control, engineering support, training), human performance can flourish. These organizational factors create a unique array of job-site conditions, a good work environment, that sets people up for success. Human error increases when expectations are not set, tasks are not clearly identified, and resources are not available to carry out the job.
The error precursors, conditions that provoke error, are reduced. This includes things such as:
·Departures from the routine
·Need to interpret requirements

Properly managing controls depends on eliminating the error precursors that challenge the integrity of controls and allow human error to become consequential.
Apply Proactive Risk Management

When risk is properly analyzed we can take appropriate action to mitigate it. Include criteria such as the following in risk assessments:
·Adverse environmental conditions (e.g., impact of gowning, noise, temperature)
·Confusing displays or controls

Addressing risk through engineering and administrative controls is a cornerstone of a quality system.
Strong administrative and cultural controls can withstand human error, but controls are weakened when conditions are present that provoke error. Eliminating error precursors in the workplace reduces the incidence of active errors.

Utilize error reduction tools as part of all work. Engineering controls can often take the place of some of these; for example, second-person verifications can be replaced by automation.

Put appropriate processes and tools in place to ensure that organizational processes and values adequately support the people doing the work. Because people err and make mistakes, it is all the more important that controls are implemented and properly maintained.
Feedback and Improvement

Continuous improvement is critical. Topics should include:
·Surprises or unexpected outcomes
·Usability and quality of work documents
·Knowledge and skill shortcomings
·Minor errors during the activity
·Unanticipated workplace conditions
·Adequacy of tools and resources
·Quality of work planning/scheduling
·Adequacy of supervision
Errors during work are inevitable. If we strive to understand and address even inconsequential acts, we can strengthen controls and make future events less likely. Vulnerabilities in controls can be found and corrected when management decides it is important enough to devote resources to the effort.

The fundamental aim of oversight is to improve resilience to significant events triggered by active errors in the workplace, that is, to minimize the severity of events.
Oversight controls provide opportunities to see what is happening, to identify specific vulnerabilities or performance gaps, to take action to address them, and to verify that they have been resolved.