Retrospective Validation Doesn’t Really Exist

A recent FDA Warning Letter drove home a good point about the perils of ‘retrospective validation’ and how it rarely means what folks want it to mean.

“In lieu of process validation studies, you attempted to retrospectively review past batches without scientifically establishing blend uniformity and other critical process performance indicators. You do not commit to conduct further process performance qualification studies that scientifically establish the ability of your manufacturing process to consistently yield finished products that meet their quality attributes.”

The FDA’s response here highlights three truths:

  1. Validation needs to be done against critical quality attributes and critical process parameters to scientifically establish that the manufacturing process is consistent.
  2. Batch data on its own is rather useless.
  3. Validation is a continuous exercise; it is not once-and-done (or, in most people’s view, thrice-and-done).

I don’t think the current GMPs really allow for the concept of retrospective validation as most people want to use it (including the recipient of that warning letter). It’s probably a term we should toss into the big box of Nope.


Retrospective validation, as most people mean it, is a type of process validation that evaluates historical data and records to demonstrate that an existing process consistently produces products meeting predetermined specifications.

The problem here is that this really just tells you what you were already hoping was true.

Retrospective validation has some major flaws:

  1. Limited control over data quality and completeness: Since retrospective validation relies on historical data, there may be gaps or inconsistencies in the available information. The data may not have been collected with validation in mind, leading to missing critical parameters or measurements. It rather throws out most of the principles of science.
  2. Potential bias in existing data: Historical data may be biased or incomplete, as it was not collected specifically for validation purposes. This can make it difficult to draw reliable conclusions about process performance and consistency.
  3. Difficulty in identifying and addressing hidden flaws: Since the process has been in use for some time, there may be hidden flaws or issues that have not been identified or challenged. These could potentially lead to non-conforming products or hazardous operating conditions.
  4. Difficulty in recreating original process conditions: It may be challenging to accurately recreate or understand the original process conditions under which the historical data was generated, potentially limiting the validity of conclusions drawn from the data.
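To make the first two flaws concrete, here is a minimal sketch of what a retrospective review of blend-uniformity data often runs into. All batch numbers and assay values are invented for illustration; the point is that data not collected with validation in mind routinely has gaps that no amount of after-the-fact analysis can close.

```python
from statistics import mean, stdev

# Hypothetical historical batch records (all values invented).
# Each batch should have blend-uniformity assay results (% label claim),
# but -- typical of data not collected with validation in mind -- some
# batches are missing the measurement or the full sampling plan.
batches = {
    "B001": [98.2, 99.1, 97.8, 98.5, 99.0, 98.7],
    "B002": [97.5, 98.9, 98.1, 97.9, 98.4, 98.8],
    "B003": None,            # blend uniformity never sampled
    "B004": [99.2, 98.6],    # only 2 of the expected locations sampled
    "B005": [98.0, 98.3, 97.7, 98.9, 98.2, 98.6],
}

EXPECTED_LOCATIONS = 6  # sampling plan a prospective protocol would require

usable, gaps = [], []
for batch_id, results in batches.items():
    if results is None or len(results) < EXPECTED_LOCATIONS:
        gaps.append(batch_id)
    else:
        usable.append((batch_id, mean(results), stdev(results)))

print(f"Usable batches: {len(usable)} of {len(batches)}")
print(f"Batches with missing/incomplete data: {gaps}")
# With gaps like these, no retrospective review can "scientifically
# establish" blend uniformity -- the data needed was never collected.
```

The surviving batches may even look well-behaved, which is exactly the trap: the analysis tells you what you were already hoping was true, not whether the process is capable.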

What is truly called for is concurrent validation.

Navigating the Evolving Landscape of Validation in Biotech: Challenges and Opportunities

The biotech industry is experiencing a significant transformation in validation processes, driven by rapid technological advancements, evolving regulatory standards, and the development of novel therapies.

The 2024 State of Validation report, authored by Jonathan Kay and funded by Kneat, provides an overview of trends and challenges in the validation industry. Here are some of the key findings:

  1. Compliance and efficiency are top priorities: Creating process efficiencies and ensuring audit readiness have become the primary goals for validation programs.
    • Compliance burden emerged as the top validation challenge in 2024, replacing the shortage of human resources, which was the top concern in 2022–2023.
  2. Digital transformation is accelerating: 83% of respondents are either using or planning to adopt digital validation systems. The top benefits include improved data integrity, continuous audit readiness, and global standardization.
    • 79% of those using digital validation rely on third-party software providers
      • Does this mean that 21% of respondents are in companies that have created their own bespoke systems? Or is something else going on there?
    • 63% reported that ROI from digital validation systems met or exceeded expectations
  3. Artificial intelligence and machine learning are on the rise: 70% of respondents believe AI and ML will play a pivotal role in the future of validation.
  4. Remote audits are becoming more common: 75% of organizations conducted at least some remote regulatory audits in the past year.
  5. Challenges persist: The industry faces ongoing challenges in balancing costs, attracting talent, and keeping pace with technological advancements.
    • 61% reported an increase in validation workload over the past 12 months
  6. Industry 4.0 adoption is growing: 60% of organizations are in the early stages or actively implementing Industry/Pharma 4.0 technologies.

As highlighted in the 2024 State of Validation report and my previous blog post on “Challenges in Validation,” several key trends and challenges are shaping the future of validation in biotech:

  1. Technological Integration: The integration of AI, machine learning, and automation into validation processes presents both opportunities and challenges. While these technologies offer the potential for increased efficiency and accuracy, they also require new validation frameworks and methodologies.
  2. Regulatory Compliance: Keeping pace with evolving regulatory standards remains a significant challenge. Regulatory bodies are continuously updating guidelines to address technological advancements, requiring companies to stay vigilant and adaptable.
  3. Data Management and Integration: With the increasing use of digital tools and platforms, managing and integrating vast amounts of data has become a critical challenge. The industry is moving towards more robust data analytics and machine learning tools to handle this data efficiently.
  4. Resource Constraints: Particularly for smaller biotech companies, resource limitations in funding, personnel, and expertise can hinder the implementation of advanced validation techniques.
  5. Risk Management: Adopting a risk-based approach to validation is essential but challenging. Companies must develop effective strategies to identify and mitigate risks throughout the product lifecycle.
  6. Collaboration and Knowledge Sharing: Ensuring effective communication and data sharing among various stakeholders is crucial for streamlining validation efforts and aligning goals.
  7. Digital Transformation: The industry is witnessing a shift from traditional, paper-heavy validation methods to more dynamic, data-driven, and digitalized processes. This transformation promises enhanced efficiency, compliance, and collaboration.
  8. Workforce Development: We are a heavily experience-driven field. With 38% of validation professionals having 16 or more years of experience, there’s a critical need for knowledge transfer and training to equip newer entrants with necessary skills.
  9. Adoption of Computer Software Assurance (CSA): The industry is gradually embracing CSA processes, driven by recent FDA guidance, though there’s still considerable room for further adoption. I always find this showing up in surveys disappointing, as CSA is a bit of a racket: it is basically a repackaging of existing validation principles. But consultants have got to consult.
  10. Focus on Efficiency and Audit Readiness: Creating process efficiencies and ensuring audit readiness have emerged as top priorities for validation programs.

As the validation landscape continues to evolve, it’s crucial for biotech companies to embrace these changes proactively. By leveraging new technologies, fostering collaboration, and focusing on continuous improvement, the industry can overcome these challenges and drive innovation in validation processes.

The future of validation in biotech lies in striking a balance between technological advancement and regulatory compliance, all while maintaining a focus on product quality and patient safety. As we move forward, it’s clear that the validation field will continue to be dynamic and exciting, offering numerous opportunities for innovation and growth.

Conducting A Hazard and Operability Study (HAZOP)

A Hazard and Operability Study (HAZOP) is a structured and systematic examination of a complex planned or existing process or operation to identify and evaluate problems that may represent risks to product, personnel or equipment. The primary goal of a HAZOP is to ensure that risks are managed effectively by identifying potential hazards and operability problems and developing appropriate mitigation strategies.

Why Use HAZOP?

Biotech facilities involve intricate processes that can be prone to various risks, including contamination, equipment failure, and process deviations. Implementing a HAZOP provides:

  • Risk Identification and Mitigation: HAZOPs help identify potential hazards associated with biotech processes, such as contamination risks, equipment malfunctions, and deviations from standard operating procedures. By identifying these risks, facilities can implement mitigation strategies to prevent accidents and ensure safety.
  • Process Optimization: Through the systematic analysis of processes, HAZOPs can identify inefficiencies and areas for improvement, leading to optimized operations and enhanced productivity.

Part of a Continuum of Risk Tools

A HAZOP (Hazard and Operability) study differs from other risk assessment methods in a few key ways:

  1. Systematic examination of process deviations: HAZOP uses a very structured approach of examining potential deviations from the intended design and operation of a process, using guidewords like “more”, “less”, “no”, “reverse”, etc. This systematic approach helps identify hazards that may be missed by other methods.
  2. Focus on operability issues: The HAZOP examines operability problems that could impact process efficiency or product quality.
  3. Node-by-node analysis: The process is broken down into nodes or sections that are analyzed individually, allowing for very thorough examination.
  4. Qualitative analysis: Unlike quantitative risk assessment methods, HAZOP is primarily qualitative, focusing on identifying potential hazards rather than quantifying risk levels. HAZOPs do not typically assign numerical scores or rankings to risks.
  5. Consideration of causes and consequences: For each deviation, the team examines possible causes, consequences, and existing safeguards before recommending additional actions.
  6. Applicable to complex processes: The structured approach makes HAZOP well-suited for analyzing complex processes with many variables and potential interactions.
| Method | Description | Strengths | Limitations |
| --- | --- | --- | --- |
| HAZOP (Hazard and Operability Study) | Systematic examination of a process/operation to identify potential hazards and operability problems | Very thorough and structured approach; examines deviations from design intent; team-based | Time consuming; primarily qualitative |
| FMEA (Failure Mode and Effects Analysis) | Systematic method to identify potential failure modes and their effects | Quantitative risk prioritization; proactive approach; can be used on products and processes | Does not consider combinations of failures; can be subjective |
| HACCP (Hazard Analysis and Critical Control Points) | Systematic approach to food safety hazards | Focus on prevention; identifies critical control points | Requires prerequisite programs in place |
| PHA (Preliminary Hazard Analysis) | Early stage hazard identification technique | Can be used early in design process; relatively quick to perform; identifies major hazards | Not very detailed; qualitative only; may miss some hazards |
| Bow-Tie Analysis | Combines fault tree and event tree analysis | Visual representation of risk pathways; shows preventive and mitigative controls; good communication tool | Does not show detailed failure logic; can oversimplify complex scenarios; time consuming for multiple hazards |

Key differences:

  • HAZOP focuses on deviations from design intent, while FMEA looks at potential failure modes
  • HACCP is specific to food safety hazards, while the others are more general risk assessment tools
  • PHA is used early in design, while the others are typically used on existing systems
  • Bow-Tie provides a visual risk pathway, while the others use more tabular formats
  • FMEA and HAZOP tend to be the most thorough and time-intensive methods

The choice of method depends on the specific application, stage of design, and level of detail required. Often a combination of methods may be used.

Instructions for Conducting a HAZOP

Preparation

    • Assemble a multidisciplinary team comprising appropriate experts.
    • Define the scope of the HAZOP study, including the specific processes or operations to be analyzed.
    • Gather and review all relevant documentation, such as process flow diagrams, piping and instrumentation diagrams, and standard operating procedures.

Execution

      • Divide the Process into Nodes: Break down the process into manageable sections or nodes. Each node typically represents a specific part of the process, such as a piece of equipment or a process step.
      • Identify Deviations: For each node, guidewords are applied to identify potential deviations from the intended design or operation. Common guidewords include:
        • No: Complete absence of a process parameter (e.g., no flow).
        • More: Quantitative increase (e.g., more pressure).
        • Less: Quantitative decrease (e.g., less temperature).
        • As well as: Presence of additional elements (e.g., contamination).
        • Part of: Partial completion of an action (e.g., partial mixing).
        • Reverse: Logical opposite of the intended action (e.g., reverse flow).
      • Analyze Causes and Consequences: Determine the possible causes of each deviation and analyze the potential consequences on safety, environment, and operations. This involves considering various factors such as equipment failure, human error, environmental conditions, or procedural issues that could lead to the deviation.
        • Use of Experience and Knowledge: The team relies on their collective experience and knowledge of the process, equipment, and industry standards to hypothesize potential causes. This may include reviewing historical data, previous incidents, and near misses.
      • Recommend Actions: Develop recommendations for mitigating identified risks, such as changes to the process, additional controls, or procedural modifications.
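The node-by-node mechanics above can be sketched as a simple data structure. This is an illustrative sketch only (the nodes, parameters, and the `WorksheetRow` dataclass are my own invention, not a standard tool): the point is that systematically pairing every parameter at every node with every guideword is what guarantees no deviation gets skipped.

```python
from itertools import product
from dataclasses import dataclass, field

# Standard HAZOP guidewords, as listed above
GUIDEWORDS = ["No", "More", "Less", "As well as", "Part of", "Reverse"]

@dataclass
class WorksheetRow:
    node: str
    parameter: str
    guideword: str
    deviation: str
    causes: list = field(default_factory=list)          # filled in by the team
    consequences: list = field(default_factory=list)
    safeguards: list = field(default_factory=list)
    recommendations: list = field(default_factory=list)

# Node -> process parameters worth examining at that node (hypothetical)
nodes = {
    "Buffer transfer line": ["Flow", "Pressure"],
    "Bioreactor": ["Temperature", "pH"],
}

# Pair every parameter with every guideword so no deviation is skipped --
# the core discipline of the method.
worksheet = [
    WorksheetRow(node, param, gw, f"{gw} {param}".lower())
    for node, params in nodes.items()
    for param, gw in product(params, GUIDEWORDS)
]

print(len(worksheet))           # 4 parameters x 6 guidewords = 24 rows
print(worksheet[0].deviation)   # "no flow"
```

Each generated row is then worked through by the team: causes, consequences, existing safeguards, and recommendations get filled in during the session, and rows that make no physical sense (e.g., “reverse temperature”) are dispositioned and documented rather than silently dropped.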

Documentation and Follow-Up

        • Document all findings, including identified hazards, potential consequences, and recommended actions.
        • Assign responsibilities for implementing recommendations and establish timelines for completion.
        • Conduct follow-up reviews to ensure that recommended actions have been implemented effectively and that the process remains safe and operable.

Review and Update

          • Regularly review and update the HAZOP study to account for changes in processes, equipment, or regulations.
          • Ensure continuous improvement by incorporating lessons learned from past incidents or near misses.
          • Iterative Process: The process is iterative, with the team revisiting and refining their analysis as more information becomes available or as the understanding of the process deepens.
| Worksheet Column | What to Record |
| --- | --- |
| Node | Specific section or equipment being analyzed |
| Guideword | Guideword applied (e.g. No, More, Less, Reverse, etc.) |
| Parameter | Process parameter being examined (e.g. Flow, Temperature, Pressure, etc.) |
| Deviation | How the parameter deviates from design intent when the guideword is applied |
| Cause | Possible reasons for the deviation |
| Consequence | Potential results if the deviation occurs |
| Safeguards | Existing measures to prevent or mitigate the deviation |
| Recommendations | Suggested additional measures to control the risk |
| Actions | Specific tasks assigned to implement recommendations |

Inappropriate Uses of Quality Risk Management

Quality Risk Management (QRM) is a vital aspect of pharmaceutical and biotechnology manufacturing, aimed at ensuring product quality and safety. I write a lot about risk management because it is so central to what I do. However, inappropriate uses of QRM can lead to significant negative consequences, and it is a fairly common refrain in my day that an intended use is not an appropriate use of risk management. Let us explore these inappropriate uses, their potential consequences, and some examples so folks know what to avoid.

1. Justifying Non-Compliance

Inappropriate Use: Using QRM to justify deviations from Good Practices (GxP) or regulatory standards.

Consequences: This can lead to regulatory non-compliance, resulting in action from regulatory bodies, such as warnings, fines, or even shutdowns. Every time I read a Warning Letter I imagine that there was some poorly thought out risk assessment behind it. Using risk management this way undermines the integrity of manufacturing processes and can compromise product safety and efficacy.

Example: A company might use risk assessments to justify not adhering to environmental controls, claiming the risk is minimal. This can lead to contamination issues, as seen in cases where inadequate environmental monitoring led to microbial contamination of products.

2. Substituting for Scientific Evidence

Inappropriate Use: Relying on QRM as a substitute for robust scientific data and empirical evidence.

Consequences: Decisions made without scientific backing can lead to ineffective risk mitigation strategies, resulting in product failures or recalls.

Example: A manufacturer might use QRM to decide on process parameters without sufficient scientific validation, leading to inconsistent product quality. For example, inadequate scientific evaluation of raw materials can lead to variability in cell culture media performance.

3. Supporting Predetermined Conclusions

Inappropriate Use: Manipulating QRM to support conclusions that have already been decided.

Consequences: This biases the risk management process, potentially overlooking significant risks and leading to inadequate risk controls.

Example: In a biopharmaceutical facility, QRM might be used to support the continued use of outdated equipment, despite known risks of cross-contamination, leading to product recalls.

4. Rationalizing Workarounds

Inappropriate Use: Using QRM to justify workarounds that bypass standard procedures or controls.

Consequences: This can introduce new risks into the manufacturing process, potentially leading to product contamination or failure.

Example: A facility might use QRM to justify a temporary fix for a malfunctioning piece of equipment instead of addressing the root cause, leading to repeated equipment failures and production delays.

5. Ignoring Obvious Issues

Inappropriate Use: Conducting risk assessments instead of addressing clear and evident problems directly.

Consequences: This can delay necessary corrective actions, exacerbating the problem and potentially leading to regulatory actions.

Example: A company might conduct a lengthy risk assessment instead of immediately addressing a known contamination source, resulting in multiple batches being compromised.

Inappropriate uses of Quality Risk Management can have severe implications for product quality, regulatory compliance, and patient safety. It is crucial for organizations to apply QRM objectively, supported by scientific evidence, and aligned with regulatory standards to ensure its effectiveness in maintaining high-quality manufacturing processes.

The Attributes of Good Procedure

Good documentation practices when documenting Work as Prescribed stress the clarity, accuracy, thoroughness, and control of the procedural instruction being written.

Clarity and Accuracy: Documentation should be clear and free from errors, ensuring that instructions are understood and followed correctly. This aligns with the concept of being precise in documentation.

Thoroughness: All relevant activities impacting quality should be recorded and controlled, indicating a need for comprehensive documentation practices.

Control and Integrity: Strict control over documentation is needed to maintain integrity, accuracy, and availability throughout its lifecycle.

To meet these requirements we leverage three writing principles: precise, comprehensive, and rigid.

| Type of Instruction | Definition | Attributes | When Needed | Why | Differences | Example |
| --- | --- | --- | --- | --- | --- | --- |
| Precise | Exact and accurate, leaving little room for interpretation. | Specific; detailed; unambiguous | When accuracy is critical, such as in scientific experiments or programming. | Regulatory agencies require precise documentation to ensure tasks are performed consistently and correctly. | Focuses on exactness and clarity, ensuring tasks are performed without deviation. | Instructions for assembling a computer, specifying exact components and steps. |
| Comprehensive | Complete and covering all necessary aspects of a task. | Thorough; inclusive; exhaustive | When a task is complex and requires understanding of all components, such as in training manuals. | Comprehensive SOPs are crucial for ensuring all aspects of a process are covered and compliant with regulatory requirements. | Provides a full overview, ensuring no part of the task is overlooked. | Employee onboarding manual covering company policies, procedures, and culture. |
| Rigid | Strict and inflexible, not allowing for changes. | Fixed; inflexible; consistent | When safety and compliance are paramount, such as batch records. | Rigid instructions ensure compliance with strict regulatory standards. | Ensures consistency and adherence to specific protocols, minimizing risks. | Safety procedures for operating heavy machinery, with no deviations allowed. |

When writing documents based on cognitive principles these three are often excellent for detailed task design, but there are significant trade-offs inherent in these attributes when we codify knowledge:

• The more comprehensive the instructions, the less likely they can be absorbed, understood, and remembered by those responsible for execution, which is why it is important these instructions are followed at time of execution. Comprehensive instructions can also dilute the sense of responsibility felt by the person executing.
• The more precise the instructions, the less they allow for customization or the exercise of employee initiative.
• The more rigid the instructions, the less they can evolve spontaneously as circumstances change. They require rigorous change management.

This means these tools are really good for complicated executions that must follow a specific set of steps: ideal for equipment operations, testing, and batch records. But as we shade into complex processes, which rely on domain knowledge, we start decreasing the rigidity, lowering the degree of precision, and walking a fine line on comprehensiveness.

Where organizations continue to struggle is in understanding that this is not one size fits all. Every procedure sits on a continuum, and the level of comprehensiveness, precision, and rigidity changes as a result. Processes involving human judgement, customization for specific needs, or adaptations for changing circumstances should be written to a different standard than those involving execution of a test. It is also important to remember that a document may require high comprehensiveness, medium precision, and low rigidity (for example, a validation process).

Remember to use these principles with other tools for document writing. The goal is to write documents that are usable to reach the necessary outcome.