Health of the Validation Program

The Metrics Plan for Facility, Utility, System and Equipment that is being developed focuses on effective commissioning, qualification, and validation (CQV) processes.

To demonstrate the success of a CQV program, we might brainstorm the following metrics (a brief computation sketch follows the list).

Deviation and Non-Conformance Rates

  • Track the number and severity of deviations related to commissioned, qualified, and validated processes and FUSE elements
  • Measure the effectiveness of CAPAs that involve CQV elements

Change Control Effectiveness

  • Measure the number of successful changes implemented without issues
  • Track the time taken to implement and qualify/validate changes

Risk Reduction

  • Quantify the reduction in high and medium risks identified during risk assessments as a result of CQV activities
  • Monitor the effectiveness of risk mitigation strategies

Training and Competency

  • Measure the percentage of personnel with up-to-date training on CQV procedures
  • Track competency assessment scores for key validation personnel

Documentation Quality

  • Measure the number of validation discrepancies found during reviews
  • Track the time taken to approve validation documents

Supplier Performance

  • Monitor supplier audit results related to validated systems or components
  • Track supplier-related deviations or non-conformances

Regulatory Inspection Outcomes

  • Track the number and severity of validation-related observations during inspections
  • Measure the time taken to address and close out regulatory findings

Cost and Efficiency Metrics

  • Measure the time and resources required to complete validation activities
  • Track cost savings achieved through optimized CQV approaches
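
As a quick illustration of how a couple of these brainstormed metrics might be computed, here is a minimal Python sketch; the counts are invented, not real data:

```python
# Hypothetical counts pulled from change control and deviation systems.
changes_implemented = 40        # total changes implemented in the period
changes_without_issues = 36     # changes that closed with no post-implementation issues

cqv_related_deviations = 12     # deviations with a CQV-related root cause
total_deviations = 80           # all deviations in the period

change_success_rate = changes_without_issues / changes_implemented * 100
cqv_deviation_share = cqv_related_deviations / total_deviations * 100

print(f"Change success rate: {change_success_rate:.1f}%")              # 90.0%
print(f"CQV-related share of deviations: {cqv_deviation_share:.1f}%")  # 15.0%
```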

By tracking these metrics, we might be able to demonstrate a comprehensive and effective CQV program that aligns with regulatory expectations. Or we might just spend time measuring stuff that may not be tailored to our individual company’s processes, products, and risk profile. And quite frankly, will they influence the system the way we want? It’s time to pull out an IMPACT key behavior analysis to help us tailor a right-sized set of metrics.

The first thing to do is to go to first principles, to take a big step back and ask – what do I really want to improve?

The purpose of a CQV program is to provide documented evidence that facilities, systems, equipment and processes have been designed, installed and operate in accordance with predetermined specifications and quality attributes:

  • To verify that critical aspects of a facility, utility system, equipment or process meet approved design specifications and quality attributes.
  • To demonstrate that processes, equipment and systems are fit for their intended use and perform as expected to consistently produce a product meeting its quality attributes.
  • To establish confidence that the manufacturing process is capable of consistently delivering quality product.
  • To identify and understand sources of variability in the process to better control it.
  • To detect potential problems early in development and prevent issues during routine production.

The ultimate measure of success is demonstrating and maintaining a validated state that ensures consistent production of safe and effective products meeting all quality requirements. 

Focusing on the Impact is important. What are we truly concerned about for our CQV program? Based on that, we come up with two main factors:

  1. The level of deviations that stem from root causes associated with our CQV program
  2. The readiness of FUSE elements for use (project adherence)

Reducing Deviations from CQV Activities

First, we gather data: what deviations are we looking for? These are the types of root causes that we will evaluate. Of course, your use of the 7Ms may vary; this list is to start the conversation (a tallying sketch follows the list).

  • Means – Automation or Interface Design Inadequate/Defective: Validated machine or computer system interface or automation failed to meet specification due to inadequate/defective design.
  • Means – Preventative Maintenance Inadequate: The preventive maintenance performed on the equipment was insufficient or not performed as required.
  • Means – Preventative Maintenance Not Defined: No preventive maintenance is defined for the equipment used.
  • Means – Equipment Defective/Damaged/Failure: The equipment used was defective or a specific component failed to operate as intended.
  • Means – Equipment Incorrect: Equipment required for the task was set up or used incorrectly, or the wrong equipment was used for the task.
  • Means – Equipment Design Inadequate/Defective: The equipment was not designed or qualified to perform the task required, or the equipment was defective, which prevented its normal operation.
  • Media – Facility Design: Improper or inadequate layout or construction of facility, area, or work station.
  • Methods – Calibration Frequency Not Sufficient/Deficient: Calibration interval is too long and/or calibration schedule is lacking.
  • Methods – Calibration/Validation Problem: An error occurred because of a data collection-related issue regarding calibration or validation.
  • Methods – System/Process Not Defined: The system/tool or the defined process to perform the task does not exist.
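
As a minimal sketch of the data-gathering step, here is how one might tally deviations by 7M category and root cause; the deviation records below are purely hypothetical:

```python
from collections import Counter

# Hypothetical deviation records as (7M category, root cause) pairs,
# as they might be exported from a deviation management system.
deviations = [
    ("Means", "Equipment Defective/Damaged/Failure"),
    ("Means", "Preventative Maintenance Inadequate"),
    ("Methods", "Calibration/Validation Problem"),
    ("Means", "Equipment Defective/Damaged/Failure"),
    ("Media", "Facility Design"),
]

# Tally by category and by specific root cause to see where
# CQV-related deviations cluster.
by_category = Counter(category for category, _ in deviations)
by_root_cause = Counter(deviations)

for category, count in by_category.most_common():
    print(f"{category}: {count}")
for (category, cause), count in by_root_cause.most_common():
    print(f"{category} / {cause}: {count}")
```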

Based on analysis of what is going on, we can use a why-why technique to look at our layers.

Why 1: Why are deviations stemming from CQV events not at 0%?
Because unexpected issues or discrepancies arise after the commissioning, qualification, or validation processes.

Success factor needed for this step: Effectiveness of the CQV program
Metric for this step: Adherence to CQV requirements

Why 2 (a): Why are unexpected issues arising after CQV?
Because of inadequate planning and resource constraints in the CQV process.

Success factor needed for this step: Appropriate project and resource planning
Metric for this step: Resource allocation

Why 3 (a): Why are we not performing adequate resource planning?
Because of tight project timelines and the involvement of multiple stakeholders with different areas of expertise.

Success factor needed for this step: Cross-functional governance to implement risk methodologies to focus efforts on critical areas
Metric for this step: Risk Coverage Ratio, measuring the percentage of identified critical risks that have been properly assessed and mitigated through the cross-functional risk management process. This metric helps evaluate how effectively the governance structure is addressing the most important risks facing the organization (a calculation sketch follows this analysis).

Why 2 (b): Why are unexpected issues arising after CQV?
Because of poorly executed elements of the CQV process stemming from poorly written procedures and under-qualified staff.

Success factor needed for this step: Process improvements and training/qualification
Metric for this step: Performance to Maturity Plan
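
Here is a minimal sketch of the Risk Coverage Ratio calculation mentioned above; the risk records and field names are illustrative assumptions, not a prescribed format:

```python
# Hypothetical critical-risk records; field names are illustrative.
critical_risks = [
    {"id": "R-001", "assessed": True,  "mitigated": True},
    {"id": "R-002", "assessed": True,  "mitigated": False},
    {"id": "R-003", "assessed": False, "mitigated": False},
]

# A risk counts as "covered" only if it has been both assessed and mitigated.
covered = sum(1 for r in critical_risks if r["assessed"] and r["mitigated"])
risk_coverage_ratio = covered / len(critical_risks) * 100

print(f"Risk Coverage Ratio: {risk_coverage_ratio:.0f}%")  # 33% for this toy data
```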

There were some things I definitely glossed over, and forgive me for not providing numbers, but I think you get the gist.

So now I’ve identified the I: how do we improve the reliability of our CQV program, measured by reducing deviations? Let’s break out the rest.

IDENTIFY – The desired quality or process improvement goal (the top-level goal)
Executed for CQV: Improve the effectiveness of the CQV program by taking actions to reduce deviations stemming from verification of FUSE and process.

MEASURE – Establish the existing Measure (KPI) used to confirm and report achievement of the goal
Executed for CQV: Set a target reduction of deviations related to CQV activities.

PINPOINT – Pinpoint the “desired” behaviors necessary to deliver the goal (behaviors that contribute to successes and failures)
Executed for CQV:

  • Drive good project planning and project adherence.
  • Promote and coach for enhanced attention to detail where “quality is everyone’s job.”
  • Encourage a speak-up culture where concerns, issues, or suggestions are shared in a timely manner in a neutral, constructive forum.

ACTIVATE THE CONSEQUENCES – Activate the consequences to motivate the delivery of the goal (4:1 positive to negative actionable consequences)
Executed for CQV:

  • Organize team briefings on consequences
  • Review outcomes of project health
  • Senior leadership celebrates/acknowledges progress
  • Acknowledge and recognize improvements
  • Motivate the team through team awards
  • Measure success on individual deliverables through a rubric

TRANSFER – Transfer the knowledge across the organization to sustain the performance improvement
Executed for CQV:

  • Create learning teams
  • Document and share lessons learned
  • Hold lunch-and-learn sessions
  • Create improvement case studies

From these two exercises I’ve now identified my lagging and leading indicators at the KPI and KBI levels.

Metrics Scoring

As I develop metrics for FUSE, it is important to have a method for rating a metric’s effectiveness. Here’s the rubric I’ll be using; a scoring sketch follows the table.

Each metric is scored from 1 to 5 against five weighted criteria:

Relevance (weight: 25%) – How strongly does this metric connect to business objectives?

  5 – Empirically Direct: Data proves the metric directly supports at least one business objective
  4 – Logically Direct: Clear logic shows how the metric directly supports at least one business objective
  3 – Empirically Indirect: Data proves the metric indirectly supports at least one business objective
  2 – Logically Indirect: Clear logic shows how the metric indirectly supports at least one business objective
  1 – Unclear: Connection to business objective is unclear

Measurability (weight: 20%) – How much effort would it take to track this metric?

  5 – Almost None: Data already collected and visualized in a centralized system
  4 – Low: Data collected and measured consistently, but not aggregated in a central system
  3 – Medium: Data exists but in local systems; minor collection or measurement challenges may exist
  2 – High: Inconsistent measurements across sites; data not being collected regularly
  1 – Potentially Prohibitive: No defined measurement or collection method in place

Precision (weight: 20%) – How often and by what margin does the metric change?

  5 – Highly Predictable: Metric fluctuates narrowly and infrequently
  4 – Somewhat Predictable: Metric fluctuates either narrowly or infrequently
  3 – Neither Volatile nor Predictable
  2 – Somewhat Volatile: Metric fluctuates either widely or frequently
  1 – Highly Volatile: Metric fluctuates widely and frequently

Actionability (weight: 25%) – Can we clearly articulate actions we would take in response to this metric?

  5 – Clear consensus on action, and capability currently exists to take action
  4 – Some consensus on action, and capability currently exists to take action
  3 – Some consensus on action, and capability to take action expected in the future
  2 – Some consensus on action, but no current or expected future capability to take action
  1 – No consensus on action

Presence of Baseline (weight: 10%) – Does internal or external baseline data exist to indicate good/poor performance for this metric?

  5 – Baseline can be based on both internal and external data
  4 – Baseline can be based on either internal or external data
  3 – Baseline must be based on incomplete or directional data
  2 – No data exists to establish baseline, but data can be generated within six months
  1 – No data exists to establish baseline, and data needed will take more than a year to generate
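
To make the weighting concrete, here is a minimal sketch of how a weighted rubric score might be computed; the ratings are made up for a hypothetical metric:

```python
# Rubric weights from the table above.
weights = {
    "Relevance": 0.25,
    "Measurability": 0.20,
    "Precision": 0.20,
    "Actionability": 0.25,
    "Presence of Baseline": 0.10,
}

# Made-up 1-5 ratings for a hypothetical metric under evaluation.
ratings = {
    "Relevance": 4,
    "Measurability": 3,
    "Precision": 4,
    "Actionability": 5,
    "Presence of Baseline": 2,
}

# Weighted score, still on the 1-5 scale.
score = sum(weights[criterion] * ratings[criterion] for criterion in weights)
print(f"Weighted metric score: {score:.2f} / 5")  # 3.85 for these ratings
```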

Metrics Plan for Facility, Utility, System and Equipment

As October rolls around I am focusing on 3 things: finalizing a budget; organization design and talent management; and a 2025 metrics plan. One can expect those three things to be the focus of a lot of my blog posts in October.

Go and read my post on metrics plans. Like many aspects of a quality management system, we don’t spend nearly enough time planning for metrics.

So over the next month I’m going to develop the strategy for a metrics plan to ensure the optimal performance, safety, and compliance of our biotech manufacturing facility, with a focus on:

  1. Facility and utility systems efficiency
  2. Equipment reliability and performance
  3. Effective commissioning, qualification, and validation processes
  4. Robust quality risk management
  5. Stringent contamination control measures

Following the recommended structure of a metrics plan, here is the plan:

Rationale and Desired Outcomes

Implementing this metrics plan will enable us to:

  • Improve overall facility performance and product quality
  • Reduce downtime and maintenance costs
  • Ensure regulatory compliance
  • Minimize contamination risks
  • Optimize resource allocation

Metrics Framework

Our metrics framework will be based on the following key areas:

  1. Facility and Utility Systems
  2. Equipment Performance
  3. Commissioning, Qualification, and Validation (CQV)
  4. Quality Risk Management (QRM)
  5. Contamination Control

Success Criteria

Success will be measured by:

  • Reduction in facility downtime
  • Improved equipment reliability
  • Faster CQV processes
  • Decreased number of quality incidents
  • Reduced contamination events

Implementation Plan

Steps, Timelines & Milestones

  1. Develop detailed metrics for each key area (Month 1)
  2. Implement data collection systems (Month 2)
  3. Train personnel on metrics collection and analysis (Month 3)
  4. Begin data collection and initial analysis (Month 4)
  5. Review and refine metrics (Month 9)
  6. Full implementation and ongoing analysis (Month 12 onwards)

This plan gets me ready to evaluate these metrics as part of governance in January of next year.

In October I will break down some metrics, explaining each one, providing the rationale, and demonstrating how to collect it. I’ll be striving to break these metrics into key performance indicators (KPIs), key behavior indicators (KBIs), and key risk indicators (KRIs).

When Your Deviation/CAPA Program Runs Smoothly Expect a Period of Increased Deviations

One reason to invest in the CAPA program is that you will see fewer deviations over time as you fix issues. That is true, but it takes time. Yes, you’ve dealt with your backlog, improved your investigations, integrated risk management, built problem-solving into your processes, and are truly driving preventative actions. And yet your deviations remain high. What is going on?

It’s because you are getting good at these things and working your way through the bolus of problems:

  1. Improved Detection and Reporting: As a CAPA program matures, it enhances an organization’s ability to detect and report deviations. Employees become more adept at identifying and documenting deviations due to better training and awareness, leading to a temporary increase in reported deviations.
  2. Thorough Root Cause Analysis: A well-functioning CAPA program emphasizes thorough root cause analysis. This process often uncovers previously unnoticed issues and identifies additional deviations that need to be addressed.
  3. Increased Scrutiny and Compliance: As the CAPA program gains momentum, management usually scrutinizes it more, which can lead to the discovery of more deviations. Organizations become more vigilant in maintaining compliance, resulting in more deviations being reported and documented.
  4. Systematic Process Improvements: The CAPA process often leads to systemic improvements in processes and procedures. As these improvements are implemented, any deviations from the new standards are more likely to be identified and recorded, contributing to an initial rise in deviation reports.
  5. Cultural Shift Towards Quality: A successful CAPA program fosters a culture of quality and continuous improvement. Employees may feel more empowered and responsible for reporting deviations, increasing the number of deviations captured.

Expect these changes and build your metric program around them. Avoid introducing a metric like a reduction in deviations in the first year, as such a metric will drive bad behavior. Instead, focus on metrics that demonstrate the success of the changes and, over time, introduce metrics to see the overall benefits.

Risk Based Thinking

Risk-based thinking is a crucial component of modern quality management systems and consists of four key aspects: anticipate, monitor, respond, and learn. Each aspect ensures an organization can effectively manage and mitigate risks, enhancing overall performance and reliability.

Anticipate

Anticipating risks involves proactively identifying and analyzing potential risks that could impact the organization’s operations or objectives. This step is about foreseeing problems before they occur and planning how to address them. It requires a thorough understanding of the organization’s processes, the external and internal factors that could affect these processes, and the potential consequences of various risks. By anticipating risks, organizations can prepare more effectively and prevent many issues from occurring.

Monitor

Monitoring involves continuously observing and tracking the operational environment to detect risk indicators early. This ongoing process helps catch deviations from expected outcomes or standards, which could indicate the emergence of a risk. Effective monitoring relies on establishing metrics that help to quickly and accurately identify when things are starting to veer off course. This real-time data collection is crucial for enabling timely responses to potential threats.
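
As a minimal illustration (the limits and counts are invented), monitoring often reduces to comparing a tracked indicator against predefined alert and action limits:

```python
# Hypothetical alert/action limits for a monthly deviation count.
ALERT_LIMIT = 5    # warrants a closer look
ACTION_LIMIT = 8   # triggers a planned response

monthly_deviation_counts = [2, 3, 4, 6, 9]  # invented trend data

for month, count in enumerate(monthly_deviation_counts, start=1):
    if count >= ACTION_LIMIT:
        print(f"Month {month}: {count} deviations - action limit exceeded, respond")
    elif count >= ALERT_LIMIT:
        print(f"Month {month}: {count} deviations - alert limit exceeded, investigate")
    else:
        print(f"Month {month}: {count} deviations - within expected range")
```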

Respond

Responding to risks is about taking appropriate actions to manage or mitigate identified risks based on their severity and potential impact. This step involves implementing the planned risk responses that were developed during the anticipation phase. The effectiveness of these responses often depends on the speed and decisiveness of the actions taken. Responses can include adjusting processes, reallocating resources, or activating contingency plans. The goal is to minimize the negative impact on the organization and its stakeholders.

Learn

Learning from the management of risks is a critical component that closes the loop of risk-based thinking. This aspect involves analyzing the outcomes of risk responses and understanding what worked well and what did not. Learning from these experiences is essential for continuous improvement. It helps organizations refine risk management processes, improve response strategies, and better prepare for future risks. This iterative learning process ensures that risk management efforts are increasingly effective over time.

The four aspects of risk-based thinking—anticipate, monitor, respond, and learn—form a continuous cycle that helps organizations manage uncertainties proactively. This approach protects the organization from potential downsides and enables it to seize opportunities that arise from a well-understood risk landscape. Organizations can enhance their resilience and adaptability by embedding these practices into everyday operations.

Implementing Risk-Based Thinking

1. Understand the Concept of Risk-Based Thinking

Risk-based thinking involves a proactive approach to identifying, analyzing, and addressing risks. This mindset should be ingrained in the organization’s culture and used as a basis for decision-making.

2. Identify Risks and Opportunities

Identify potential risks and opportunities. This can be achieved through various methods such as SWOT analysis, brainstorming sessions, and process mapping. It’s crucial to involve people at all levels of the organization since they can provide diverse perspectives on potential risks and opportunities.

3. Analyze and Prioritize Risks

Once risks and opportunities are identified, they should be analyzed to understand their potential impact and likelihood. This analysis will help prioritize which risks need immediate attention and which opportunities should be pursued.
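
One common way to operationalize this analysis is a simple impact-times-likelihood score; here is a minimal sketch with invented risks and 1-5 ratings:

```python
# Rank hypothetical risks by impact x likelihood (both on 1-5 scales).
risks = [
    ("Single-source supplier for a critical component", 5, 3),
    ("Aging HVAC unit serving a classified area", 4, 4),
    ("New analyst cohort not yet fully qualified", 3, 2),
]

prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, impact, likelihood in prioritized:
    print(f"score {impact * likelihood:>2}: {name} "
          f"(impact={impact}, likelihood={likelihood})")
```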

4. Plan and Implement Responses

After prioritizing, develop strategies to address these risks and opportunities. Plans should include preventive measures for risks and proactive steps to seize opportunities. Integrating these plans into the organization’s overall strategy and daily operations is important to ensure they are effective.

5. Monitor and Review

Implementing risk-based thinking is not a one-time activity but an ongoing process. Regular monitoring and reviewing of risks, opportunities, and the effectiveness of responses are crucial. This can be done through regular audits, performance evaluations, and feedback mechanisms. Adjustments should be made based on these reviews to improve the risk management process.

6. Learn and Improve

Organizations should learn from their experiences in managing risks and opportunities. This involves analyzing what worked well and what didn’t and using this information to improve future risk management efforts. Continuous improvement should be a key goal, aligning with the Plan-Do-Check-Act (PDCA) cycle.

7. Documentation and Compliance

Maintaining proper documentation is essential for tracking and managing risk-based thinking activities. Documents such as risk registers, action plans, and review reports should be updated and readily available.
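
As a minimal sketch of what a risk register entry might capture (the fields are an illustrative assumption, not a prescribed format):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskRegisterEntry:
    """One row of a hypothetical risk register."""
    risk_id: str
    description: str
    impact: int           # 1-5 scale
    likelihood: int       # 1-5 scale
    response_plan: str
    owner: str
    review_date: date
    status: str = "Open"

entry = RiskRegisterEntry(
    risk_id="R-001",
    description="Single-source supplier for a critical raw material",
    impact=5,
    likelihood=3,
    response_plan="Qualify a second supplier",
    owner="Supply Chain",
    review_date=date(2025, 3, 1),
)
print(entry)
```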

8. Training and Culture

Training and cultural adaptation are necessary to implement risk-based thinking effectively. All employees should be trained on the principles of risk-based thinking and how to apply them in their roles. Creating a culture that encourages open communication about risks and supports risk-taking within defined limits is also vital.