Health of the Validation Program

In the Metrics Plan for Facility, Utility, System and Equipment (FUSE) now being developed, a key focus is on effective commissioning, qualification, and validation (CQV) processes.

To demonstrate the success of a CQV program, we might brainstorm the following metrics.

Deviation and Non-Conformance Rates

  • Track the number and severity of deviations related to commissioned, qualified, and validated processes and FUSE elements.
  • Measure the effectiveness of CAPAs that involve CQV elements.

Change Control Effectiveness

  • Measure the number of successful changes implemented without issues
  • Track the time taken to implement and qualify/validate changes

Risk Reduction

  • Quantify the reduction in high and medium risks identified during risk assessments as a result of CQV activities
  • Monitor the effectiveness of risk mitigation strategies

Training and Competency

  • Measure the percentage of personnel with up-to-date training on CQV procedures
  • Track competency assessment scores for key validation personnel

Documentation Quality

  • Measure the number of validation discrepancies found during reviews
  • Track the time taken to approve validation documents

Supplier Performance

  • Monitor supplier audit results related to validated systems or components
  • Track supplier-related deviations or non-conformances

Regulatory Inspection Outcomes

  • Track the number and severity of validation-related observations during inspections
  • Measure the time taken to address and close out regulatory findings

Cost and Efficiency Metrics

  • Measure the time and resources required to complete validation activities
  • Track cost savings achieved through optimized CQV approaches
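Several of the metrics above reduce to simple ratios over deviation and batch records. As a minimal sketch of how a CQV deviation rate might be computed (the record structure and field names here are hypothetical, not taken from any particular QMS):

```python
from dataclasses import dataclass

@dataclass
class Deviation:
    id: str
    severity: str      # e.g. "minor", "major", "critical"
    cqv_related: bool  # root cause traced to a CQV/FUSE element

def cqv_deviation_rate(deviations: list[Deviation], batches_released: int) -> float:
    """CQV-related deviations per 100 batches released."""
    cqv_count = sum(1 for d in deviations if d.cqv_related)
    return 100.0 * cqv_count / batches_released

# Two of three deviations trace back to CQV root causes across 50 batches
devs = [
    Deviation("DEV-001", "major", True),
    Deviation("DEV-002", "minor", False),
    Deviation("DEV-003", "critical", True),
]
print(cqv_deviation_rate(devs, 50))  # 4.0 deviations per 100 batches
```

Trended period over period, a ratio like this is the sort of lagging indicator the exercises below aim to pair with leading ones.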

By tracking these metrics, we might be able to demonstrate a comprehensive and effective CQV program that aligns with regulatory expectations. Or we might just spend time measuring things that are not tailored to our individual company’s processes, products, and risk profile. And quite frankly, will they influence the system the way we want? It’s time to pull out an IMPACT key behavior analysis to help us tailor a right-sized set of metrics.

The first thing to do is to go to first principles, to take a big step back and ask – what do I really want to improve?

The purpose of a CQV program is to provide documented evidence that facilities, systems, equipment and processes have been designed, installed and operate in accordance with predetermined specifications and quality attributes:

  • To verify that critical aspects of a facility, utility system, equipment or process meet approved design specifications and quality attributes.
  • To demonstrate that processes, equipment and systems are fit for their intended use and perform as expected to consistently produce a product meeting its quality attributes.
  • To establish confidence that the manufacturing process is capable of consistently delivering quality product.
  • To identify and understand sources of variability in the process to better control it.
  • To detect potential problems early in development and prevent issues during routine production.

The ultimate measure of success is demonstrating and maintaining a validated state that ensures consistent production of safe and effective products meeting all quality requirements. 

Focusing on the Impact is important. What are we truly concerned about for our CQV program? Based on that, we come up with two main factors:

  1. The level of deviations that stem from root causes associated with our CQV program
  2. The readiness of FUSE elements for use (project adherence)

Reducing Deviations from CQV Activities

First, we gather data: what deviations are we looking for? These are the types of root causes that we will evaluate. Of course, your use of the 7Ms may vary; this list is meant to start a conversation.

  • Automation or Interface Design Inadequate/Defective (Means): Validated machine or computer system interface or automation failed to meet specification due to inadequate/defective design.
  • Preventative Maintenance Inadequate (Means): The preventive maintenance performed on the equipment was insufficient or not performed as required.
  • Preventative Maintenance Not Defined (Means): No preventive maintenance is defined for the equipment used.
  • Equipment Defective/Damaged/Failure (Means): The equipment used was defective, or a specific component failed to operate as intended.
  • Equipment Incorrect (Means): Equipment required for the task was set up or used incorrectly, or the wrong equipment was used for the task.
  • Equipment Design Inadequate/Defective (Means): The equipment was not designed or qualified to perform the task required, or the equipment was defective, which prevented its normal operation.
  • Facility Design (Media): Improper or inadequate layout or construction of the facility, area, or work station.
  • Calibration Frequency Not Sufficient/Deficient (Methods): Calibration interval is too long and/or calibration schedule is lacking.
  • Calibration/Validation Problem (Methods): An error occurred because of a data collection-related issue regarding calibration or validation.
  • System/Process Not Defined (Methods): The system/tool or the defined process to perform the task does not exist.
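Once deviations are coded against a 7M list like this, a simple Pareto tally shows where CQV effort should focus. A sketch, using hypothetical coded records:

```python
from collections import Counter

# Hypothetical (category, root_cause) pairs pulled from deviation records
deviation_root_causes = [
    ("Means", "Equipment Defective/Damaged/Failure"),
    ("Means", "Preventative Maintenance Inadequate"),
    ("Methods", "Calibration/Validation Problem"),
    ("Means", "Equipment Incorrect"),
    ("Media", "Facility Design"),
]

def pareto_by_category(records):
    """Count deviations per 7M category, most frequent first."""
    return Counter(cat for cat, _ in records).most_common()

print(pareto_by_category(deviation_root_causes))
# [('Means', 3), ('Methods', 1), ('Media', 1)]
```

The top of that list is where the why-why analysis below earns its keep.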

Based on analysis of what is going on, we can move into a why-why technique to look at our layers.

Why 1: Why are deviations stemming from CQV events not at 0%?
Because unexpected issues or discrepancies arise after the commissioning, qualification, or validation processes.

Success factor needed for this step: Effectiveness of the CQV program

Metric for this step: Adherence to CQV requirements

Why 2 (a): Why are unexpected issues arising after CQV?
Because of inadequate planning and resource constraints in the CQV process.

Success factor needed for this step: Appropriate project and resource planning

Metric for this step: Resource allocation

Why 3 (a): Why are we not performing adequate resource planning?
Because of tight project timelines and the involvement of multiple stakeholders with different areas of expertise.

Success factor needed for this step: Cross-functional governance to implement risk methodologies that focus efforts on critical areas

Metric for this step: Risk Coverage Ratio, measuring the percentage of identified critical risks that have been properly assessed and mitigated through the cross-functional risk management process. This metric helps evaluate how effectively the governance structure is addressing the most important risks facing the organization.

Why 2 (b): Why are unexpected issues arising after CQV?
Because of poorly executed elements of the CQV process stemming from poorly written procedures and under-qualified staff.

Success factor needed for this step: Process improvements and training/qualification

Metric for this step: Performance to Maturity Plan
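The Risk Coverage Ratio from Why 3 (a) is straightforward to compute once critical risks are tracked with their assessment and mitigation status. A sketch (the record fields are hypothetical):

```python
def risk_coverage_ratio(critical_risks: list[dict]) -> float:
    """Percentage of identified critical risks that are both assessed and mitigated."""
    if not critical_risks:
        return 0.0
    covered = sum(1 for r in critical_risks if r["assessed"] and r["mitigated"])
    return 100.0 * covered / len(critical_risks)

# Hypothetical risk register entries for a CQV project
risks = [
    {"id": "R1", "assessed": True,  "mitigated": True},
    {"id": "R2", "assessed": True,  "mitigated": False},
    {"id": "R3", "assessed": False, "mitigated": False},
    {"id": "R4", "assessed": True,  "mitigated": True},
]
print(risk_coverage_ratio(risks))  # 2 of 4 covered -> 50.0
```

A ratio like this only means something if the risk register is honest, which is exactly what the cross-functional governance is there to enforce.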

There were some things I definitely glossed over, and forgive me for not providing numbers, but I think you get the gist.

So now I’ve identified the I: how do we improve the reliability of our CQV program, measured by reducing deviations? Let’s break out the rest.

IDENTIFY the desired quality or process improvement goal (the top-level goal):
Improve the effectiveness of the CQV program by taking actions to reduce deviations stemming from verification of FUSE and process.

MEASURE: establish the existing measure (KPI) used to confirm and report achievement of the goal:
Set a target reduction of deviations related to CQV activities.

PINPOINT the “desired” behaviors necessary to deliver the goal (behaviors that contribute to successes and failures):

  • Drive good project planning and project adherence.
  • Promote and coach for enhanced attention to detail where “quality is everyone’s job.”
  • Encourage a speak-up culture where concerns, issues or suggestions are shared in a timely manner in a neutral, constructive forum.

ACTIVATE the CONSEQUENCES to motivate the delivery of the goal (4:1 positive to negative actionable consequences):

  • Organize team briefings on consequences
  • Review outcomes of project health
  • Senior leadership celebrates/acknowledges successes
  • Acknowledge and recognize improvements
  • Motivate the team through team awards
  • Measure success on individual deliverables through a rubric

TRANSFER the knowledge across the organization to sustain the performance improvement:

  • Create learning teams
  • Document and share lessons learned
  • Hold lunch-and-learn sessions
  • Create improvement case studies

From these two exercises I’ve now identified my lagging and leading indicators at the KPI and the KBI level.

Profound Knowledge

In his System of Profound Knowledge, Deming provides a framework based on a deep and comprehensive understanding of a subject or system that goes beyond surface-level information to provide a holistic approach to leadership and management.

Profound knowledge is central to a quality understanding as it is the ability to deeply understand an organization or its critical processes, delving beneath surface-level observations to uncover fundamental principles and truths. This knowledge is a guiding force for daily living, shaping one’s thinking and values, ultimately manifesting in their conduct. It embodies wisdom, morality, and deep insight, offering a comprehensive framework for understanding complex systems and making informed decisions. Profound knowledge goes beyond mere facts or data, encompassing a holistic view that allows individuals to navigate challenges and drive meaningful improvements within their organizations and personal lives.

Components of Deming’s System of Profound Knowledge

Deming’s SoPK consists of four interrelated components:

  1. Appreciation for a System: Understanding how different parts of an organization interact and work together as a whole system.
  2. Knowledge about Variation: Recognizing that variation exists in all processes and systems, and understanding how to interpret and manage it.
  3. Theory of Knowledge: Understanding how we learn and gain knowledge, including the importance of prediction and testing theories.
  4. Psychology: Understanding human behavior, motivation, and interactions within an organization.

Applications of Profound Knowledge

  • Organizational Transformation: Profound knowledge provides a framework for improving and transforming systems.
  • Decision Making: It helps leaders make more informed decisions by providing a comprehensive lens through which to view organizational issues.
  • Continuous Improvement: The SoPK promotes ongoing learning and refinement of processes.
  • Leadership Development: It transforms managers into leaders by providing a new perspective on organizational management.

Profound knowledge, as conceptualized by Deming, provides a comprehensive framework for understanding and improving complex systems, particularly in organizational and management contexts. It encourages a holistic view that goes beyond subject-matter expertise to foster true transformation and continuous improvement.

Depth and Comprehensiveness

Profound knowledge goes beyond surface-level understanding or mere subject matter expertise. It provides a deep, fundamental understanding of systems, principles, and underlying truths. While regular knowledge might focus on facts or specific skills, profound knowledge seeks to understand the interconnections and root causes within a system.

Holistic Perspective

Profound knowledge takes a holistic approach to understanding and improving systems. It consists of four interrelated components:

  1. Appreciation for a system
  2. Knowledge about variation
  3. Theory of knowledge
  4. Psychology

These components work together to provide a comprehensive framework for understanding complex systems, especially in organizational contexts.

Interdisciplinary Nature

Profound knowledge often transcends traditional disciplinary boundaries. It combines insights from various fields, such as systems thinking, psychology, and epistemology, to create a more comprehensive understanding of complex phenomena.

Focus on Improvement and Optimization

While regular knowledge might be sufficient for maintaining the status quo, profound knowledge is geared towards improvement and optimization of systems. It provides a framework for understanding how to make meaningful changes and improvements in organizations and processes.

Knowledge as Object or Social Action

Deming’s System of Profound Knowledge can be easily seen as an application of knowledge as social action.

The concept of knowledge as object versus knowledge as social action represents two distinct perspectives on the nature and function of knowledge in society. This dichotomy, rooted in sociological theory, offers contrasting views on how knowledge is created, understood, and utilized. Knowledge as object refers to the traditional view of knowledge as a static, codified entity that can be possessed, stored, and transferred independently of social context. In contrast, knowledge as social action emphasizes the dynamic, socially constructed nature of knowledge, viewing it as an active process embedded in social interactions and practices. This distinction, largely developed through the work of sociologists like Karl Mannheim, challenges us to consider how our understanding of knowledge shapes our approach to learning, decision-making, and social organization.

Knowledge as Object

Knowledge as object refers to knowledge as a static, codified entity that can be possessed, stored, and transferred. Key aspects include:

  • Knowledge is seen as propositional or factual information that can be articulated and recorded. For example, knowledge stored in documents or expert systems.
  • It involves an awareness of facts, familiarity with situations, or practical skills that an individual possesses.
  • Knowledge is often characterized as justified true belief – a belief that is both true and justified.
  • It can be understood as a cognitive state of an individual person.
  • Knowledge as object aligns with more traditional, rationalist views of knowledge as something that can be objectively defined and measured.

Knowledge as Social Action

Knowledge as social action views knowledge as an active, dynamic process that is socially constructed. Key aspects include:

  • Knowledge is produced through social interactions, relationships and collective processes rather than being a static entity.
  • It emphasizes how knowledge is created, shared and applied in social contexts.
  • Social action theories examine the motives and meanings of individuals as they engage in knowledge-related behaviors.
  • Knowledge is seen as emerging from and being shaped by social, cultural and historical contexts.
  • It focuses on knowledge as a process of knowing rather than a fixed object.
  • This view aligns with social constructivist and pragmatist perspectives on knowledge.

Key Differences

  • Static vs. Dynamic: Knowledge as object is fixed and stable, while knowledge as social action is fluid and evolving.
  • Individual vs. Collective: The object view focuses on individual cognition, while the social action view emphasizes collective processes.
  • Product vs. Process: Knowledge as object treats knowledge as an end product, while social action views it as an ongoing process.
  • Context-independent vs. Context-dependent: The object view assumes knowledge can be decontextualized, while social action emphasizes situatedness.
  • Possession vs. Practice: Knowledge as object can be possessed, while knowledge as social action is enacted through practices.

Knowledge as object reflects a more traditional, cognitive view of knowledge as factual information possessed by individuals. In contrast, knowledge as social action emphasizes the dynamic, socially constructed nature of knowledge as it is created and applied in social contexts. Both perspectives offer valuable insights into the nature of knowledge, with the social action view gaining prominence in fields like sociology of knowledge and science studies.

Knowledge sharing as a form of social action plays a crucial role in modern organizations, influencing various aspects of organizational life and performance. Here’s an analysis of how knowledge as social action manifests in contemporary organizations:

Knowledge Sharing as a Social Process

In organizations, knowledge sharing is increasingly viewed as a social process rather than a simple transfer of information. This perspective emphasizes:

  • The interactive nature of knowledge exchange
  • The importance of relationships and trust in facilitating sharing
  • The role of organizational culture in promoting or hindering knowledge flow

Knowledge sharing becomes a form of social action when employees actively engage in exchanging ideas, experiences, and expertise with their colleagues.

Impact on Organizational Culture

Knowledge sharing as social action can significantly shape organizational culture by:

  • Fostering a climate of openness and collaboration
  • Encouraging continuous learning and innovation
  • Building trust and strengthening interpersonal relationships

Organizations that successfully implement knowledge sharing practices often see a shift towards a more transparent and cooperative work environment.

Enhancing Employee Engagement

When knowledge sharing is embraced as a social action, it can boost employee engagement by:

  • Making employees feel valued for their expertise and contributions
  • Increasing their sense of belonging and connection to the organization
  • Empowering them with information to make better decisions

Engaged employees are more likely to participate in knowledge sharing activities, creating a virtuous cycle of engagement and collaboration.

Driving Innovation and Performance

Knowledge as social action can be a powerful driver of innovation and organizational performance:

  • It facilitates the cross-pollination of ideas across departments
  • It helps in identifying and solving problems more efficiently
  • It reduces duplication of efforts and promotes best practices

By leveraging collective knowledge through social interactions, organizations can enhance their problem-solving capabilities and competitive advantage.

Challenges and Considerations

While knowledge sharing as social action offers numerous benefits, organizations may face challenges in implementing and sustaining such practices:

  • Overcoming knowledge hoarding behaviors
  • Addressing power dynamics that may hinder open sharing
  • Ensuring equitable access to knowledge across the organization

Leaders play a crucial role in addressing these challenges by modeling knowledge sharing behaviors and creating supportive structures.

Technology as an Enabler

Modern organizations often leverage technology to facilitate knowledge sharing as a social action:

  • Knowledge management systems
  • Collaborative platforms and social intranets
  • Virtual communities of practice

These tools can help break down geographical and hierarchical barriers to knowledge flow, enabling more dynamic and inclusive sharing practices.

Psychological Safety and Knowledge Sharing

The concept of psychological safety is closely tied to knowledge sharing as social action:

  • A psychologically safe environment encourages risk-taking in interpersonal interactions
  • It reduces fear of negative consequences for sharing ideas or admitting mistakes
  • It promotes open communication and collective learning

Organizations that foster psychological safety are more likely to see robust knowledge sharing practices among their employees.

Viewing knowledge sharing as a form of social action in organizations highlights its transformative potential. It goes beyond mere information exchange to become a catalyst for cultural change, employee engagement, and organizational innovation. By recognizing and nurturing the social aspects of knowledge sharing, organizations can create more dynamic, adaptive, and high-performing work environments.

Good Engineering Practices Under ASTM E2500

ASTM E2500 recognizes that Good Engineering Practices (GEP) are essential for pharmaceutical companies to ensure the consistent and reliable design, delivery, and operation of engineered systems in a manner suitable for their intended purpose.

Key Elements of Good Engineering Practices

  1. Risk Management: Applying systematic processes to identify, assess, and control risks throughout the lifecycle of engineered systems. This includes quality risk management focused on product quality and patient safety.
  2. Cost Management: Estimating, budgeting, monitoring and controlling costs for engineering projects and operations. This helps ensure projects deliver value and stay within budget constraints.
  3. Organization and Control: Establishing clear organizational structures, roles and responsibilities for engineering activities. Implementing monitoring and control mechanisms to track performance.
  4. Innovation and Continual Improvement: Fostering a culture of innovation and continuous improvement in engineering processes and systems.
  5. Lifecycle Management: Applying consistent processes for change management, issue management, and document control throughout a system’s lifecycle from design to decommissioning.
  6. Project Management: Following structured approaches for planning, executing and controlling engineering projects.
  7. Design Practices: Applying systematic processes for requirements definition, design development, review and qualification.
  8. Operational Support: Implementing asset management, calibration, maintenance and other practices to support systems during routine operations.

Key Steps for Implementation

  • Develop and document GEP policies, procedures and standards tailored to the company’s needs
  • Establish an Engineering Quality Process (EQP) to link GEP to the overall Pharmaceutical Quality System
  • Provide training on GEP principles and procedures to engineering staff
  • Implement risk-based approaches to focus efforts on critical systems and processes
  • Use structured project management methodologies for capital projects
  • Apply change control and issue management processes consistently
  • Maintain engineering documentation systems with appropriate controls
  • Conduct periodic audits and reviews of GEP implementation
  • Foster a culture of quality and continuous improvement in engineering
  • Ensure appropriate interfaces between engineering and quality/regulatory functions

The key is to develop a systematic, risk-based approach to GEP that is appropriate for the company’s size, products and operations. When properly implemented, GEP provides a foundation for regulatory compliance, operational efficiency and product quality in pharmaceutical manufacturing.

Invest in a Living, Breathing Engineering Quality Process (EQP)

The EQP establishes the formal connection between GEP and the Pharmaceutical Quality System it resides within, serving as the boundary between Quality oversight and engineering activities, particularly for implementing Quality Risk Management (QRM) based integrated Commissioning and Qualification (C&Q).

It should also provide an interface between engineering activities and other systems like business operations, health/safety/environment, or other site quality systems.

Here is a suggested table of contents for an Engineering Quality Process (EQP):

Table of Contents – Engineering Quality Process (EQP)

  1. Introduction
    1.1 Purpose
    1.2 Scope
    1.3 Definitions
  2. Application and Context
    2.1 Relationship to Pharmaceutical Quality System (PQS)
    2.2 Relationship to Good Engineering Practice (GEP)
    2.3 Interface with Quality Risk Management (QRM)
  3. EQP Elements
    3.1 Policies and Procedures for the Asset Lifecycle and GEPs
    3.2 Risk Assessment
    3.3 Change Management
    3.4 Document Control
    3.5 Training
    3.6 Auditing
  4. Deliverables
    4.1 GEP Documentation
    4.2 Risk Assessments
    4.3 Change Records
    4.4 Training Records
    4.5 Audit Reports
  5. Roles and Responsibilities
    5.1 Engineering
    5.2 Quality
    5.3 Operations
    5.4 Other Stakeholders
  6. EQP Implementation
    6.1 Establishing the EQP
    6.2 Maintaining the EQP
    6.3 Continuous Improvement
  7. References
  8. Appendices

Risk Assessments as part of Design and Verification

Facility design and manufacturing processes are complex, multi-stage operations, fraught with difficulty. Ensuring the facility meets Good Manufacturing Practice (GMP) standards and other regulatory requirements is a major challenge. The complex regulations around biomanufacturing facilities require careful planning and documentation from the earliest design stages. 

Which is why consensus standards like ASTM E2500 exist.

Central to these approaches is risk assessment, which has three primary components:

  • An understanding of the uncertainties in the design (which includes materials, processing, equipment, personnel, environment, detection systems, feedback control)
  • An identification of the hazards and failure mechanisms
  • An estimation of the risks associated with each hazard and failure

Folks often get tied up on what tool to use. Frankly, this is a phased approach: we start with a PHA for design, an FMEA for verification, and a HACCP/Layers of Control Analysis for acceptance. Throughout, we use a bow-tie for communication.

Comparison of the four tools (each row lists Bow-Tie | PHA (Preliminary Hazard Analysis) | FMEA (Failure Mode and Effects Analysis) | HACCP (Hazard Analysis and Critical Control Points)):

  • Primary Focus: Visualizing risk pathways | Early hazard identification | Potential failure modes | Systematically identifying, evaluating, and controlling hazards that could compromise product safety
  • Timing in Process: Any stage | Early development | Any stage, often design | Throughout production
  • Approach: Combines causes and consequences | Top-down | Bottom-up | Systematic prevention
  • Complexity: Moderate | Low to moderate | High | Moderate
  • Visual Representation: Central event with causes and consequences | Tabular format | Tabular format | Flow diagram with CCPs
  • Risk Quantification: Can include, not required | Basic risk estimation | Risk Priority Number (RPN) | Not typically quantified
  • Regulatory Alignment: Less common in pharma | Aligns with ISO 14971 | Widely accepted in pharma | Less common in pharma
  • Critical Points: Identifies barriers | Does not specify | Identifies critical failure modes | Identifies Critical Control Points (CCPs)
  • Scope: Specific hazardous event | System-level hazards | Component or process-level failures | Process-specific hazards
  • Team Requirements: Cross-functional | Less detailed knowledge needed | Detailed system knowledge | Food safety expertise
  • Ongoing Management: Can be used for monitoring | Often updated periodically | Regularly updated | Continuous monitoring of CCPs
  • Output: Visual risk scenario | List of hazards and initial risk levels | Prioritized list of failure modes | HACCP plan with CCPs
  • Typical Use in Pharma: Risk communication | Early risk identification | Detailed risk analysis | Product safety/contamination control
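The FMEA column’s Risk Priority Number is the one quantified output in the comparison: RPN is the product of severity, occurrence, and detection scores, each commonly rated on a 1-10 scale. A sketch with hypothetical failure modes:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number = severity x occurrence x detection (each 1-10)."""
    for name, value in (("severity", severity),
                        ("occurrence", occurrence),
                        ("detection", detection)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be on a 1-10 scale, got {value}")
    return severity * occurrence * detection

# Hypothetical failure modes, ranked highest risk first
failure_modes = {
    "Sensor drift":    rpn(7, 4, 6),  # 168
    "Seal leak":       rpn(9, 2, 3),  # 54
    "HMI input error": rpn(5, 6, 2),  # 60
}
for mode, score in sorted(failure_modes.items(), key=lambda kv: -kv[1]):
    print(mode, score)
```

The ranking, not the absolute number, is what drives where verification effort goes.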

At BOSCON this year I’ll be talking about this in fascinating, perhaps excessive, detail.

Retrospective Validation Doesn’t Really Exist

A recent FDA Warning Letter really drove home a good point about the perils of ‘retrospective validation’ and how that term normally doesn’t mean what folks want it to mean.

“In lieu of process validation studies, you attempted to retrospectively review past batches without scientifically establishing blend uniformity and other critical process performance indicators. You do not commit to conduct further process performance qualification studies that scientifically establish the ability of your manufacturing process to consistently yield finished products that meet their quality attributes.”

The FDA’s response here is important for three truths:

  1. Validation needs to be done against critical quality attributes and critical process parameters to scientifically establish that the manufacturing process is consistent.
  2. Batch data on its own is rather useless.
  3. Validation is a continuous exercise; it is not once-and-done (or, in most people’s view, thrice-and-done).

I don’t think the current GMPs really allow for the concept of retrospective validation as most people want it to mean (including the recipient of that warning letter). It’s probably a term we should toss into the big box of Nope.


Retrospective validation, as most people mean it, is an approach to process validation that evaluates historical data and records to demonstrate that an existing process consistently produces products meeting predetermined specifications.

The problem here is that this really just tells you what you were already hoping was true.

Retrospective validation has some major flaws:

  1. Limited control over data quality and completeness: Since retrospective validation relies on historical data, there may be gaps or inconsistencies in the available information. The data may not have been collected with validation in mind, leading to missing critical parameters or measurements. It rather throws out most of the principles of science.
  2. Potential bias in existing data: Historical data may be biased or incomplete, as it was not collected specifically for validation purposes. This can make it difficult to draw reliable conclusions about process performance and consistency.
  3. Difficulty in identifying and addressing hidden flaws: Since the process has been in use for some time, there may be hidden flaws or issues that have not been identified or challenged. These could potentially lead to non-conforming products or hazardous operating conditions.
  4. Difficulty in recreating original process conditions: It may be challenging to accurately recreate or understand the original process conditions under which the historical data was generated, potentially limiting the validity of conclusions drawn from the data.

What is truly called for is to perform concurrent validation.