Maturity Models, Utilizing the Validation Program as an Example

Maturity models offer significant benefits to organizations by providing a structured framework for benchmarking and assessment. Organizations can clearly understand their strengths and weaknesses by evaluating their current performance and maturity level in specific areas or processes. This assessment helps identify areas for improvement and sets a baseline for measuring progress over time. Benchmarking against industry standards or best practices also allows organizations to see how they compare to their peers, fostering a competitive edge.

One of the primary advantages of maturity models is their role in fostering a culture of continuous improvement. They provide a roadmap for growth and development, encouraging organizations to strive for higher maturity levels. This continuous improvement mindset helps organizations stay agile and adaptable in a rapidly changing business environment. By setting clear goals and milestones, maturity models guide organizations in systematically addressing deficiencies and enhancing their capabilities.

Standardization and consistency are also key benefits of maturity models. They help establish standardized practices across teams and departments, ensuring that processes are executed with the same level of quality and precision. This standardization reduces variability and errors, leading to more reliable and predictable outcomes. Maturity models create a common language and framework for communication, fostering collaboration and alignment toward shared organizational goals.

The use of maturity models significantly enhances efficiency and effectiveness. Organizations can increase productivity and make better use of their resources by identifying opportunities to streamline operations and optimize workflows. This leads to reduced errors, minimized rework, and improved process efficiency. The focus on continuous improvement also means that organizations are constantly seeking ways to refine and enhance their operations, leading to sustained gains in efficiency.

Maturity models play a crucial role in risk reduction and compliance. They assist organizations in identifying potential risks and implementing measures to mitigate them, ensuring compliance with relevant regulations and standards. This proactive approach to risk management helps organizations avoid costly penalties and reputational damage. Moreover, maturity models improve strategic planning and decision-making by providing a data-backed foundation for setting priorities and making informed choices.

Finally, maturity models improve communication and transparency within organizations. Providing a common communication framework increases transparency and builds trust among employees. This improved communication fosters a sense of shared purpose and collaboration, essential for achieving organizational goals. Overall, maturity models serve as valuable tools for driving continuous improvement, enhancing efficiency, and fostering a culture of excellence within organizations.

Business Process Maturity Model (BPMM)

The Business Process Maturity Model is a structured framework used to assess and improve the maturity of an organization’s business processes. It provides a systematic methodology to evaluate the effectiveness, efficiency, and adaptability of processes within an organization, guiding continuous improvement efforts.

Key Characteristics of BPMM

Assessment and Classification: BPMM helps organizations understand their current process maturity level and identify areas for improvement. It classifies processes into different maturity levels, each representing a progressive improvement in process management.

Guiding Principles: The model emphasizes a process-centric approach focusing on continuous improvement. Key principles include aligning improvements with business goals, standardization, measurement, stakeholder involvement, documentation, training, technology enablement, and governance.

Incremental Levels

BPMM typically consists of five levels, each building on the previous one:

1. Initial: Processes are ad hoc and chaotic, with little control or consistency.
2. Managed: Basic processes are established and documented, but results may vary.
3. Standardized: Processes are well-documented, standardized, and consistently executed across the organization.
4. Predictable: Processes are quantitatively measured and controlled, with data-driven decision-making.
5. Optimizing: Continuous process improvement is ingrained in the organization’s culture, focusing on innovation and optimization.
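The five levels above can be captured in a small data structure so that assessment results are comparable across processes. A minimal sketch in Python — the process names and assessed levels are hypothetical, and treating the organization's baseline as its weakest process is one simple convention, not part of the BPMM specification:

```python
from enum import IntEnum

class BpmmLevel(IntEnum):
    """The five BPMM maturity levels, ordered so they can be compared."""
    INITIAL = 1
    MANAGED = 2
    STANDARDIZED = 3
    PREDICTABLE = 4
    OPTIMIZING = 5

# Hypothetical assessment results for a few business processes.
assessed = {
    "change_control": BpmmLevel.MANAGED,
    "validation": BpmmLevel.STANDARDIZED,
    "deviation_handling": BpmmLevel.INITIAL,
}

# One simple convention: the organization's baseline is its weakest process.
baseline = min(assessed.values())
print(f"Baseline maturity: {baseline.name} (level {int(baseline)})")
```

Using an `IntEnum` makes the ordering explicit, so levels can be compared and aggregated directly.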

Benefits of BPMM

• Improved Process Efficiency: By standardizing and optimizing processes, organizations can achieve higher efficiency and consistency, leading to better resource utilization and reduced errors.
• Enhanced Customer Satisfaction: Mature processes lead to higher product and service quality, which improves customer satisfaction.
• Better Change Management: Higher process maturity increases an organization’s ability to navigate change and realize project benefits.
• Readiness for Technology Deployment: BPMM helps ensure organizational readiness for new technology implementations, reducing the risk of failure.

Usage and Implementation

1. Assessment: Organizations can conduct BPMM assessments internally or with the help of external appraisers. These assessments involve reviewing process documentation, interviewing employees, and analyzing process outputs to determine maturity levels.
2. Roadmap for Improvement: Organizations can develop a roadmap for progressing to higher maturity levels based on the assessment results. This roadmap includes specific actions to address identified deficiencies and improve process capabilities.
3. Continuous Monitoring: Regular evaluations are crucial to ensure that processes remain effective and improvements are sustained over time.

A BPMM Example: Validation Program based on ASTM E2500

To apply the Business Process Maturity Model (BPMM) to a validation program aligned with ASTM E2500, we need to evaluate the program’s maturity across the five levels of BPMM while incorporating the key principles of ASTM E2500. Here’s how this application might look:

Level 1: Initial

At this level, the validation program is ad hoc and lacks standardization:

• Validation activities are performed inconsistently across different projects or departments.
• There’s limited understanding of ASTM E2500 principles.
• Risk assessment and scientific rationale for validation activities are not systematically applied.
• Documentation is inconsistent and often incomplete.

Level 2: Managed

The validation program shows some structure but lacks organization-wide consistency:

• Basic validation processes are established but may not fully align with ASTM E2500 guidelines.
• Some risk assessment tools are used, but not consistently across all projects.
• Subject Matter Experts (SMEs) are involved, but their roles are unclear.
• There’s increased awareness of the need for scientific justification in validation activities.

Level 3: Standardized

The validation program is well-defined and consistently implemented:

• Validation processes are standardized across the organization and align with ASTM E2500 principles.
• Risk-based approaches are consistently used to determine the scope and extent of validation activities.
• SMEs are systematically involved in the design review and verification processes.
• The concept of “verification” replaces traditional IQ/OQ/PQ, focusing on critical aspects that impact product quality and patient safety.
• Quality risk management tools (e.g., impact assessments) are routinely used to identify critical quality attributes and process parameters.

Level 4: Predictable

The validation program is quantitatively managed and controlled:

• Key Performance Indicators (KPIs) for validation activities are established and regularly monitored.
• Data-driven decision-making is used to continually improve the efficiency and effectiveness of validation processes.
• Advanced risk management techniques are employed to predict and mitigate potential issues before they occur.
• There’s a strong focus on leveraging supplier documentation and expertise to streamline validation efforts.
• Engineering procedures for quality activities (e.g., vendor technical assessments and installation verification) are formalized and consistently applied.

Level 5: Optimizing

The validation program is characterized by continuous improvement and innovation:

• There’s a culture of continuous improvement in validation processes, aligned with the latest industry best practices and regulatory expectations.
• Innovation in validation approaches is encouraged, always maintaining alignment with ASTM E2500 principles.
• The organization actively contributes to developing industry standards and best practices in validation.
• Validation activities are seamlessly integrated with other quality management systems, supporting a holistic approach to product quality and patient safety.
• Advanced technologies (e.g., artificial intelligence, machine learning) may be leveraged to enhance risk assessment and validation strategies.

Key Considerations for Implementation

1. Risk-Based Approach: At higher maturity levels, the validation program should fully embrace the risk-based approach advocated by ASTM E2500, focusing efforts on aspects critical to product quality and patient safety.
2. Scientific Rationale: As maturity increases, there should be a stronger emphasis on scientific understanding and justification for validation activities, moving away from a checklist-based approach.
3. SME Involvement: Higher maturity levels should see increased and earlier involvement of SMEs in the validation process, from equipment selection to verification.
4. Supplier Integration: More mature programs will leverage supplier expertise and documentation effectively, reducing redundant testing and improving efficiency.
5. Continuous Improvement: At the highest maturity level, the validation program should have mechanisms in place for continuous evaluation and improvement of processes, always aligned with ASTM E2500 principles and the latest regulatory expectations.

Process and Enterprise Maturity Model (PEMM)

The Process and Enterprise Maturity Model (PEMM), developed by Dr. Michael Hammer, is a comprehensive framework designed to help organizations assess and improve their process maturity. It is a corporate roadmap and benchmarking tool for companies aiming to become process-centric enterprises.

Key Components of PEMM

PEMM is structured around two main dimensions: Process Enablers and Organizational Capabilities. Each dimension is evaluated on a scale to determine the maturity level.

Process Enablers

These elements directly impact the performance and effectiveness of individual processes. They include:

• Design: The structure and documentation of the process.
• Performers: The individuals or teams executing the process.
• Owner: The person responsible for the process.
• Infrastructure: The tools, systems, and resources supporting the process.
• Metrics: The measurements used to evaluate process performance.

Organizational Capabilities

These capabilities create an environment that supports and sustains high-performance processes. They include:

• Leadership: The commitment and support from top management.
• Culture: The organizational values and behaviors that promote process excellence.
• Expertise: The skills and knowledge required to manage and improve processes.
• Governance: The mechanisms to oversee and guide process management activities.

Maturity Levels

Both Process Enablers and Organizational Capabilities are assessed on a scale from P0 to P4 (for processes) and E0 to E4 (for enterprise capabilities):

• P0/E0: Non-existent or ad hoc processes and capabilities.
• P1/E1: Basic, but inconsistent and poorly documented.
• P2/E2: Defined and documented, but not fully integrated.
• P3/E3: Managed and measured, with consistent performance.
• P4/E4: Optimized and continuously improved.
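A useful property of Hammer's model is that a process is only as mature as its weakest enabler, so an overall process score can be taken as the minimum across the five enablers. A minimal sketch of that aggregation rule (the enabler scores below are hypothetical):

```python
def process_maturity(enabler_scores: dict[str, int]) -> int:
    """Overall P-level is capped by the weakest enabler, per Hammer's PEMM."""
    expected = {"design", "performers", "owner", "infrastructure", "metrics"}
    if set(enabler_scores) != expected:
        raise ValueError(f"Expected scores for all five enablers: {expected}")
    return min(enabler_scores.values())

# Hypothetical assessment of a validation process.
scores = {
    "design": 3,
    "performers": 2,
    "owner": 3,
    "infrastructure": 2,
    "metrics": 1,  # metrics lag behind, so they cap the overall level
}
print(f"Overall process maturity: P{process_maturity(scores)}")
```

In this example the process sits at P1 despite strong design and ownership, which is exactly the signal the model is built to surface: the lagging enabler is where improvement effort goes first.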

Benefits of PEMM

• Self-Assessment: PEMM is designed to be simple enough for organizations to conduct their own assessments without needing external consultants.
• Empirical Evidence: It encourages the collection of data to support process improvements rather than relying on intuition.
• Engagement: Involves all levels of the organization in the process journey, turning employees into advocates for change.
• Roadmap for Improvement: Provides a clear path for organizations to follow in their process improvement efforts.

Application of PEMM

PEMM can be applied to any type of process within an organization, whether customer-facing or internal, core or support, transactional or knowledge-intensive. It helps organizations:

• Assess Current Maturity: Identify the current state of process and enterprise capabilities.
• Benchmark: Compare against industry standards and best practices.
• Identify Improvements: Pinpoint areas that need enhancement.
• Track Progress: Monitor the implementation and effectiveness of process improvements.

A PEMM Example: Validation Program based on ASTM E2500

To apply the Process and Enterprise Maturity Model (PEMM) to an ASTM E2500 validation program, we can evaluate the program’s maturity across the five process enablers and four enterprise capabilities defined in PEMM. Here’s how this application might look:

Process Enablers

Design:

• P-1: Basic ASTM E2500 approach implemented, but not consistently across all projects
• P-2: ASTM E2500 principles applied consistently, with clear definition of requirements, specifications, and verification activities
• P-3: Risk-based approach fully integrated into design process, with SME involvement from the start
• P-4: Continuous improvement of ASTM E2500 implementation based on lessons learned and industry best practices

Performers:

• P-1: Some staff trained on ASTM E2500 principles
• P-2: All relevant staff trained and understand their roles in the ASTM E2500 process
• P-3: Staff proactively apply risk-based thinking and scientific rationale in validation activities
• P-4: Staff contribute to improving the ASTM E2500 process and mentor others

Owner:

• P-1: Validation program has a designated owner, but role is not well-defined
• P-2: Clear ownership of the ASTM E2500 process with defined responsibilities
• P-3: Owner actively manages and improves the ASTM E2500 process
• P-4: Owner collaborates across departments to optimize the validation program

Infrastructure:

• P-1: Basic tools in place to support ASTM E2500 activities
• P-2: Integrated systems for managing requirements, risk assessments, and verification activities
• P-3: Advanced tools for risk management and data analysis to support decision-making
• P-4: Cutting-edge technology leveraged to enhance efficiency and effectiveness of the validation program

Metrics:

• P-1: Basic metrics tracked for validation activities
• P-2: Comprehensive set of metrics established to measure ASTM E2500 process performance
• P-3: Metrics used to drive continuous improvement of the validation program
• P-4: Predictive analytics used to anticipate and prevent issues in validation activities

Enterprise Capabilities

Leadership:

• E-1: Leadership aware of ASTM E2500 principles
• E-2: Leadership actively supports ASTM E2500 implementation
• E-3: Leadership drives cultural change to fully embrace risk-based validation approach
• E-4: Leadership promotes ASTM E2500 principles beyond the organization, influencing industry standards

Culture:

• E-1: Some recognition of the importance of risk-based validation
• E-2: Culture of quality and risk-awareness developing across the organization
• E-3: Strong culture of scientific thinking and continuous improvement in validation activities
• E-4: Innovation in validation approaches encouraged and rewarded

Expertise:

• E-1: Basic understanding of ASTM E2500 principles among key staff
• E-2: Dedicated team of ASTM E2500 experts established
• E-3: Deep expertise in risk-based validation approaches across multiple departments
• E-4: Organization recognized as thought leader in ASTM E2500 implementation

Governance:

• E-1: Basic governance structure for validation activities in place
• E-2: Clear governance model aligning ASTM E2500 with overall quality management system
• E-3: Cross-functional governance ensuring consistent application of ASTM E2500 principles
• E-4: Governance model that adapts to changing regulatory landscape and emerging best practices

To use this PEMM assessment:

1. Evaluate your validation program against each enabler and capability, determining the current maturity level (P-1 to P-4 for process enablers, E-1 to E-4 for enterprise capabilities).
2. Identify areas for improvement based on gaps between current and desired maturity levels.
3. Develop action plans to address these gaps, focusing on moving to the next maturity level for each enabler and capability.
4. Regularly reassess the program to track progress and adjust improvement efforts as needed.
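Steps 1 through 3 above amount to a gap analysis: compare the current level of each enabler or capability against a target and rank the largest gaps first. A minimal sketch (the names, current levels, and target below are hypothetical):

```python
# Hypothetical current maturity levels (P- or E- scale, 1-4).
current = {"design": 3, "performers": 2, "owner": 3, "infrastructure": 2,
           "metrics": 1, "leadership": 2, "culture": 2, "expertise": 1,
           "governance": 2}
target = 3  # desired level for every enabler and capability

# Gap = how many levels each lagging area must climb to reach the target.
gaps = {name: target - level for name, level in current.items() if level < target}

# Largest gaps first, so improvement planning starts where it matters most.
for name, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: close a gap of {gap} level(s)")
```

Reassessing periodically (step 4) is then just re-running the same comparison against fresh assessment data.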

Comparison Table

Aspect             | BPMM                                                                  | PEMM
Creator            | Object Management Group (OMG)                                         | Dr. Michael Hammer
Purpose            | Assess and improve business process maturity                          | Roadmap and benchmarking for process-centricity
Structure          | Five levels: Initial, Managed, Standardized, Predictable, Optimizing  | Two components: Process Enablers (P0-P4), Organizational Capabilities (E0-E4)
Focus              | Process-centric, incremental improvement                              | Process enablers and organizational capabilities
Assessment Method  | Often requires external appraisers                                    | Designed for self-assessment
Guiding Principles | Standardization, measurement, continuous improvement                  | Empirical evidence, simplicity, organizational engagement
Applications       | Enterprise systems, business process improvement, benchmarking        | Process reengineering, organizational engagement, benchmarking

In summary, while both BPMM and PEMM aim to improve business processes, BPMM is more structured and detailed, often requiring external appraisers, and focuses on incremental process improvement across organizational boundaries. In contrast, PEMM is designed for simplicity and self-assessment, emphasizing the role of process enablers and organizational capabilities to foster a supportive environment for process improvement. Both have advantages, and keeping both in mind while developing processes is key.

Quality Book Shelf: Mastering Safety Risk Management for Medical and In Vitro Devices

Disclaimer: I have had the privilege of being a former colleague of Jayet’s, and hold him in immense regard.

Mastering Safety Risk Management for Medical and In Vitro Devices by Jayet Moon and Arun Mathew is a comprehensive guide that addresses the critical aspects of risk management in medical and in vitro devices. This book is an essential resource for professionals involved in medical device design, production, and post-market phases, providing a structured approach to ensure product safety and regulatory compliance.

Starting with a solid overview of risk management principles that apply not only to medical devices under ISO 13485 but also have much to teach pharmaceutical folks following ICH Q9, this book delivers a heavy dose of knowledge and the benefit of wisdom in applying it.

The book then goes deep into the design assurance process, which is crucial for identifying, understanding, analyzing, and mitigating risks associated with healthcare product design. This foundational approach ensures that practitioners can perform a favorable benefit-risk assessment, which is vital for the safety and efficacy of medical devices.

Strengths

• Regulatory Compliance: The authors provide detailed guidance on conforming to major international standards such as ISO 13485:2016, ISO 14971:2019, the European Union Medical Device Regulation (MDR), In Vitro Diagnostic Regulation (IVDR), and the US FDA regulations, including the new FDA Quality Management System Regulation (QMSR).
• Risk Management Tools: The book offers a variety of tools and methodologies for effective risk management. These include risk analysis techniques, risk evaluation methods, and risk control measures, which are explained clearly and practically.
• Lifecycle Approach: One of the standout features of this book is its lifecycle approach to risk management. It emphasizes that risk management does not end with product design but continues through production and into the post-market phase, ensuring ongoing safety and performance.

The authors, Jayet Moon and Arun Mathew, bring their extensive experience in the field to bear, providing real-world examples and case studies that illustrate the application of risk management principles in various scenarios. This practical approach helps readers to understand how to implement the theoretical concepts discussed in the book. This book is essential for anyone working in medical devices and a good read for other quality life sciences professionals, as there is much to draw on here.

Shipping Container Validation

Validating a shipping container follows a systematic process built around the traditional three main stages: Design Qualification (DQ), Operational Qualification (OQ), and Performance Qualification (PQ).

Design Qualification (DQ)

The DQ stage involves establishing that the shipping container design meets the user requirements and regulatory standards. Key steps include:

1. Define user requirement specifications (URS) for the container, including temperature range, duration of transport, and product-specific needs.
2. Review the container design specifications provided by the manufacturer.
3. Assess the container’s compatibility with the pharmaceutical product and its storage requirements.
4. Evaluate the container’s compliance with relevant regulatory guidelines and standards.

Operational Qualification (OQ)

OQ involves testing the container under controlled conditions to ensure it operates as intended. This stage includes:

1. Conducting empty container tests to verify basic functionality.
2. Testing temperature control systems and monitoring devices.
3. Evaluating the container’s ability to maintain required conditions under various environmental scenarios.
4. Assessing the ease of use and any potential operational issues.

Performance Qualification (PQ)

PQ is the most critical stage, involving real-world testing to ensure the container performs as required under actual shipping conditions. Steps include:

1. Develop a detailed PQ protocol that outlines test conditions, acceptance criteria, and data collection methods.
2. Conduct shipping trials using actual or simulated product loads.
3. Test the container under worst-case scenarios, including extreme temperature conditions and extended shipping durations.
4. Monitor and record temperature data throughout the shipping process.
5. Assess the impact of various handling conditions (e.g., vibration, shock) on container performance.
6. Evaluate the container’s performance across different shipping lanes and modes of transport.
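The temperature data collected in step 4 is ultimately evaluated against the protocol's acceptance criteria, which is easy to do programmatically. A minimal sketch for a 2-8 °C cold-chain lane — the readings, logging interval, and excursion limit below are hypothetical, since real protocols define their own acceptance criteria:

```python
# Hypothetical logger readings (°C) taken at a fixed interval during a trial.
readings = [5.1, 5.3, 6.0, 8.4, 8.9, 7.2, 4.8, 3.9, 5.0]
interval_min = 15          # minutes between readings
low, high = 2.0, 8.0       # hypothetical 2-8 °C acceptance range
max_excursion_min = 30     # hypothetical allowed total time out of range

# Flag readings outside the range and estimate total excursion time.
out_of_range = [t for t in readings if not (low <= t <= high)]
excursion_min = len(out_of_range) * interval_min

print(f"Excursions: {out_of_range} ({excursion_min} min out of range)")
print("PASS" if excursion_min <= max_excursion_min else "FAIL")
```

The same check, run over every lane and season tested, gives an auditable summary of step 6's lane-by-lane evaluation.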

Additional Considerations

• Associated Materials and Equipment: Ensure all associated materials (e.g., coolants, packaging materials) and monitoring equipment are also qualified.
• Re-qualification: For reusable containers, establish a process for periodic re-qualification to ensure ongoing performance.
• Documentation: Maintain comprehensive documentation of all qualification stages, including test results, data analysis, and conclusions.
• Risk Assessment: Conduct a risk assessment to identify potential failure modes and mitigation strategies.

Best Practices

1. Use a risk-based approach to determine the extent of testing required for each container type and shipping scenario.
2. Consider seasonal variations in ambient temperature profiles when designing qualification studies.
3. Utilize pre-qualified containers from reputable suppliers when possible to streamline the validation process.
4. Implement a robust change control process to manage any container or shipping process modifications post-validation.
5. Regularly review and update validation documentation to reflect any changes in regulatory requirements or shipping conditions.

By following this comprehensive approach, you can ensure that your shipping containers are properly validated for pharmaceutical transport, maintaining product quality and integrity throughout the supply chain. Validation is an ongoing process, and containers should be periodically reassessed to ensure continued compliance and performance.

Extractables and Leachables (E&L)

An effective program for managing extractables and leachables (E&L) in biotech involves a comprehensive approach that ensures product safety and compliance with regulatory standards. As single-use technologies have become more prevalent in biopharmaceutical manufacturing, leachables from bags, tubing, and other plastic components have become an area of concern. This has led to more rigorous supplier qualification and leachables risk assessment for single-use systems.

Extractables are chemical compounds that can be extracted from materials (like single-use systems, packaging, or manufacturing equipment) under exaggerated conditions such as elevated temperature, extended contact time, or use of strong solvents. They represent a “worst-case” scenario of chemicals potentially migrating into a drug product. Extractables are specific to the tested material and are independent of the drug product.

Leachables are chemical compounds that actually migrate from materials into the drug product under normal conditions of use or storage. They are specific to the combination of the material and the particular drug substance or product, representing the contaminants that may be present in the final drug formulation. Leachables are typically a subset of extractables that migrate under real-world conditions.

The accumulation of extractables and leachables in a process fluid is governed by thermodynamic factors (the extent to which the materials would migrate) and kinetic factors (the rate at which they would migrate), as well as the amount of time during which such migration can occur. Higher temperatures increase the migration rate of leachables from the bulk of the plastic to the surface in contact with the process stream or formulation.
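The temperature dependence of migration rate is often approximated with an Arrhenius-type relationship, where the rate scales exponentially with absolute temperature. A minimal sketch of that scaling — the activation energy and temperatures below are illustrative values, not data for any specific polymer:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_ratio(t_high_c: float, t_low_c: float, ea_j_mol: float) -> float:
    """Arrhenius ratio k(T_high)/k(T_low) for a given activation energy."""
    t_high = t_high_c + 273.15  # convert °C to K
    t_low = t_low_c + 273.15
    return math.exp(ea_j_mol / R * (1 / t_low - 1 / t_high))

# Illustrative: with Ea of 80 kJ/mol, raising storage from 5 °C to 40 °C
# speeds up migration by roughly a factor of fifty.
ratio = rate_ratio(40.0, 5.0, 80_000)
print(f"Migration rate increases ~{ratio:.0f}x from 5 °C to 40 °C")
```

This is why accelerated (high-temperature) extraction studies can stand in as a worst case for long, cooler real-world contact times.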

Key points

• Extractables studies are performed on materials using exaggerated conditions.
• Leachables studies are performed on the actual drug product under normal conditions.
• Extractables represent potential contaminants, while leachables are actual contaminants.
• Both are critical for assessing product safety and quality in biotech manufacturing.

Proper evaluation of extractables and leachables is essential for regulatory compliance and ensuring patient safety in biopharmaceutical products.

Program Objectives

• Safety Assurance: Ensure that any chemicals leached from materials into the product do not pose a risk to patient safety.
• Regulatory Compliance: Meet all relevant regulatory requirements and guidelines.
• Quality Control: Maintain the integrity and quality of the biopharmaceutical product.

Regulatory Requirements

• Compliance with USP <661> Plastic Packaging Systems and Their Materials of Construction, and USP <381> Elastomeric Closures for Injections
• Compliance with USP <87> Biological Reactivity, In Vitro and USP <88> Biological Reactivity, In Vivo
• Compliance with European Pharmacopoeia (EP) requirements for materials used in containers, including EP General Chapter 3.1 Materials Used for the Manufacture of Containers and EP 3.2.9 Rubber Closures
• Compliance with Japanese Pharmacopoeia (JP) chapter 7.03 Test for Rubber Closures for Aqueous Infusions
• Compliance with EU Commission Decision 97/534/EC for animal-derived stearates
• Adherence to ICH Q8, Q9, and Q10 guidelines for quality risk management
• Leverage ISO 10993-1:2018 Biological evaluation of medical devices

Program Components

Design Space

The starting point should be a review of the supplier’s data. These studies should be performed on materials at the component level under standardized conditions of temperature, time, surface area, etc., so that the data is representative of intended use, including sterilization techniques. Using this data, the end-user can calculate the worst-case amount of extractables based on surface area and other conditions. Consider the impact of dilution and clearance over the complete process through risk assessment, and then complement with targeted studies.

These studies should be developed based on Quality-by-Design principles described in ICH Q8 to gather all the attributes and parameters used to determine a design space. Scientific variables should be identified to set up the Design of Experiment (DoE) for the testing plan.
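The surface-area scaling described above can be sketched as a simple worst-case calculation: take the supplier's extractables result per unit area, multiply by the wetted contact area, and divide by the batch volume to estimate a maximum concentration. All figures below are hypothetical, and a real assessment would also account for dilution, clearance, and dose-based safety thresholds:

```python
# Hypothetical supplier data and process parameters.
extractable_ug_per_cm2 = 0.5   # supplier's worst-case result for one compound
contact_area_cm2 = 12_000      # wetted surface area of the single-use assembly
batch_volume_l = 200           # volume of process fluid in contact

# Worst case: assume everything extractable migrates into the batch.
total_ug = extractable_ug_per_cm2 * contact_area_cm2
conc_ug_per_ml = total_ug / (batch_volume_l * 1000)  # L to mL

print(f"Worst-case concentration: {conc_ug_per_ml:.4f} µg/mL")
```

Comparing this estimate against a toxicological threshold for the compound then tells you whether targeted leachables studies are warranted for that component.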

                      Risk Assessment

                      • Material Selection: Evaluate materials used for their potential to release harmful substances.
                      • Process Understanding: Understand the process conditions (e.g., temperature, pH, solvents) that might affect the leaching of chemicals.
                      • Risk Prioritization: Prioritize materials and processes based on their risk of contributing harmful leachables. Consider factors like stage of manufacturing, contact time, and proximity to final product.

                      The risk assessment needs to sit within the overall context of process performance and product safety and efficacy; it should not be a standalone risk assessment. You will dive deeper with more specific risk questions, but the hazard identification starts at the process level. In evaluating risks, the following factors should be considered:

                      • Proximity of the process steps undergoing a change to the final product. Polymeric components in process steps closer to the drug substance (DS) or drug product (DP) will carry a higher risk rating than those in upstream process steps. For example, a bag or filter used for the final filtration of bulk drug substance (BDS) will have a much higher risk rating than components used in upstream process steps, since there are no purification steps post-UF/DF.
                      • Storage and processing conditions (e.g., duration of exposure, temperature, pressure, pH extremes, buffer extraction propensity)
                      • The type of process fluid (e.g., purification buffer versus formulated drug substance, presence of solubilizing agents)
                      • Construction materials
                      • Potential adverse events, including synergistic and additive effects
                      • Drug dose, mode, and frequency of administration
                      • Therapeutic necessity
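
One way to turn factors like these into a prioritization is a simple multiplicative score, so that a high rating on any single factor escalates the total. The sketch below is illustrative only: the factor names, scales, and weights are hypothetical placeholders that a real program would justify against ICH Q9 severity and probability definitions.

```python
# Sketch of a simple risk-prioritization score for polymeric components.
# Scales and category labels are illustrative placeholders, not a validated scheme.

FACTOR_SCALES = {
    "proximity": {"upstream": 1, "downstream": 3, "final_fill": 5},
    "contact_time": {"minutes": 1, "hours": 3, "days": 5},
    "fluid_type": {"buffer": 1, "intermediate": 3, "formulated_dp": 5},
}

def risk_score(proximity: str, contact_time: str, fluid_type: str) -> int:
    # Multiplicative scoring: any single high-risk factor escalates the total.
    return (FACTOR_SCALES["proximity"][proximity]
            * FACTOR_SCALES["contact_time"][contact_time]
            * FACTOR_SCALES["fluid_type"][fluid_type])

# Final filtration component in contact with formulated DS for days,
# versus an upstream buffer bag used for hours:
print(risk_score("final_fill", "days", "formulated_dp"))  # 125
print(risk_score("upstream", "hours", "buffer"))          # 3
```

The two example scores show the proximity principle above: the post-UF/DF component outranks the upstream one by more than an order of magnitude.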

                      Your risk assessment will drive study design and should consider:

                      Analytical challenges

                      • Detecting and quantifying trace levels of leachables, which are often present at extremely low concentrations
                      • Developing analytical methods capable of detecting and quantifying a wide range of potential extractables/leachables
                      • Interference from formulation components or degradation products

                      Determining appropriate extraction conditions

                      • Selecting solvents and conditions that adequately simulate or exaggerate real-world use conditions
                      • Balancing the need for aggressive extraction (to identify potential leachables) with realistic use conditions

                      Toxicological assessment

                      • Evaluating the safety impact of identified extractables/leachables, especially for novel compounds
                      • Determining appropriate safety thresholds and analytical evaluation thresholds
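
An analytical evaluation threshold (AET) is commonly derived from a safety concentration threshold (SCT) in the style of the PQRI approach. The sketch below shows one simple form of that conversion; the SCT value, dose volume, and uncertainty factor are illustrative assumptions, and actual thresholds must come from the toxicological assessment.

```python
# Sketch of an Analytical Evaluation Threshold (AET) calculation, loosely in the
# style of the PQRI safety concentration threshold (SCT) approach. All numbers
# are hypothetical.

def aet_ug_per_ml(sct_ug_per_day: float,
                  daily_dose_volume_ml: float,
                  uncertainty_factor: float = 1.0) -> float:
    """Convert a per-day safety threshold into a per-mL analytical threshold,
    optionally tightened by an uncertainty factor for analytical
    response-factor variation."""
    return (sct_ug_per_day / daily_dose_volume_ml) / uncertainty_factor

# Hypothetical parenteral product: 0.15 ug/day SCT, 10 mL/day dose,
# uncertainty factor of 2.
print(f"AET = {aet_ug_per_ml(0.15, 10.0, 2.0):.4f} ug/mL")  # 0.0075 ug/mL
```

Anything detected above the AET would then be identified and carried into the toxicological assessment.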

                      Regulatory expectations

                      • Meeting evolving regulatory requirements and expectations, which can vary between regions
                      • Justifying the extent of E&L studies performed based on risk assessment

                      Unexpected interactions

                      • Leachables causing unexpected effects, such as oxidation of preservatives or formation of protein-leachable adducts
                      • Interactions between leachables and the drug product that were not predicted by extractables studies

                      Time and resource constraints

                      • E&L studies can be time-consuming and resource-intensive, potentially impacting development timelines

                      Absorption issues

                      • Adsorption or absorption of drug product components by single-use materials, potentially affecting product stability or efficacy

                      Stability considerations

                      • Leachables appearing during stability studies that were not identified in initial extractables screening
                      • Changes in leachables profile over time or under different storage conditions

                      Material variability

                      • Inconsistencies in extractables/leachables profiles between different lots of materials or components

                      Biopharmaceutical-specific challenges

                      • Potential impact of leachables on sensitive cell lines or biological processes
                      • Interference of leachables with bioassays or analytical methods specific to biologics

                      Extractables Studies

                      • Objective: Identify potential extractables from materials under exaggerated conditions.
                      • Methodology:
                        • Use a range of solvents that mimic the process fluids.
                        • Apply exaggerated conditions such as elevated temperatures and extended contact times.
                        • Analyze the extracts using techniques like GC-MS, LC-MS, and ICP-MS.
                      • Data Review: Compare supplier-provided extractable data with the intended use to determine the need for specific studies.

                      Leachables Studies

                      • Objective: Identify and quantify leachables under actual process conditions.
                      • Methodology:
                        • Conduct studies during the development stages and monitor during stability studies.
                        • Use appropriate solvent systems and conditions that mimic the actual process.
                        • Analyze the product for leachables using validated analytical methods.
                      • Toxicological Assessment: Assess the toxicological impact of identified leachables to ensure they are within safe limits.

                      Migration Studies

                      • Objective: Evaluate the migration of chemicals from materials into the product over time.
                      • Methodology:
                        • Perform studies during the development phase.
                        • Monitor leachables during formal stability studies under normal and accelerated conditions.

                      Absorption Studies

                      • Objective: Assess the potential for adsorption or absorption of product components.
                      • Methodology:
                        • Conduct studies if stability issues are observed during hold time studies.
                        • Evaluate the impact on product stability and quality.

                      Stability Studies

                      • Objective: Ensure the stability of the product in contact with materials.
                      • Methodology:
                        • Conduct real-time and accelerated stability studies.
                        • Monitor product quality attributes such as potency, purity, and safety.

                      Implementation and Validation

                      Supplier Qualification

                      • Supplier Evaluation: Assess suppliers’ ability to provide materials that meet E&L requirements.
                      • Documentation Review: Ensure suppliers provide comprehensive extractables data and compliance certificates.

                      In-House Testing

                      • Validation: Validate the findings from supplier data with in-house testing.
                      • Protocol Development: Develop protocols for E&L testing specific to the product and process conditions.
                      • Acceptance Criteria: Establish acceptance criteria based on regulatory guidelines and risk assessments.

                      Toxicological Assessment and Risk Mitigation

                      Assess the toxicological impact of identified leachables to ensure they are within safe limits, then perform risk mitigation to:

                      • Implement appropriate controls based on risk assessment results
                      • Consider factors like materials selection, process parameters, and analytical testing
                      • Develop strategies to minimize leachables impact on product quality and safety

                      Continuous Monitoring

                      • Routine Testing: Implement routine testing of leachables during production.
                      • Change Management: Re-evaluate E&L profiles when there are changes in materials, suppliers, or processes.

                      Training and Education

                      Staff Training

                      • Awareness: Train staff on the importance of E&L studies and their impact on product safety.
                      • Technical Training: Provide technical training on conducting E&L studies and interpreting results.

                      Supplier Collaboration

                      • Engagement: Work closely with suppliers to ensure they understand and meet E&L requirements.
                      • Feedback: Provide feedback to suppliers based on study results to improve material quality.

                      Conclusion

                      A robust E&L program in biotech is essential for ensuring product safety, regulatory compliance, and maintaining high-quality standards. By implementing a comprehensive approach that includes risk assessment, thorough testing, supplier qualification, continuous monitoring, and staff training, biotech companies can effectively manage the risks associated with extractables and leachables.

                      Data Quality, Data Bias, and the Risk Assessment

                      I’ve seen my fair share of risk assessments listing data quality or bias as hazards. I tend to think that is pretty sloppy. I especially see this a lot in conversations around AI/ML. Data quality is not a risk. It is a causal factor that contributes to the failure or drives its severity.

                      Data Quality and Data Bias

                      Data Quality

                      Data quality refers to how well a dataset meets certain criteria that make it fit for its intended use. The key dimensions of data quality include:

                      1. Accuracy – The data correctly represents the real-world entities or events it’s supposed to describe.
                      2. Completeness – The dataset contains all the necessary information without missing values.
                      3. Consistency – The data is uniform and coherent across different systems or datasets.
                      4. Timeliness – The data is up-to-date and available when needed.
                      5. Validity – The data conforms to defined business rules and parameters.
                      6. Uniqueness – There are no duplicate records in the dataset.

                      High-quality data is crucial for making informed quality decisions, conducting accurate analyses, and developing reliable AI/ML models. Poor data quality can lead to operational issues, inaccurate insights, and flawed strategies.
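
Several of these dimensions lend themselves to automated checks. The sketch below scores completeness, uniqueness, and validity for a small record set; the field names, rules, and example data are hypothetical.

```python
# Minimal sketch of automated checks for three data quality dimensions:
# completeness, uniqueness, and validity. Field names and rules are hypothetical.

records = [
    {"id": 1, "lot": "A100", "ph": 7.1},
    {"id": 2, "lot": "A101", "ph": None},   # incomplete: missing pH
    {"id": 2, "lot": "A101", "ph": 6.9},    # duplicate id
    {"id": 3, "lot": "A102", "ph": 14.5},   # invalid: pH out of range
]

def quality_report(rows):
    ids = [r["id"] for r in rows]
    return {
        # Fraction of records with no missing values.
        "completeness": sum(all(v is not None for v in r.values()) for r in rows) / len(rows),
        # Fraction of ids that are distinct.
        "uniqueness": len(set(ids)) / len(ids),
        # Fraction of records passing the business rule 0 <= pH <= 14.
        "validity": sum(r["ph"] is not None and 0 <= r["ph"] <= 14 for r in rows) / len(rows),
    }

report = quality_report(records)
print(report)  # completeness 0.75, uniqueness 0.75, validity 0.5
```

Accuracy, consistency, and timeliness typically need reference data or timestamps and are harder to score from a single dataset.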

                      Data Bias

                      Data bias refers to systematic errors or prejudices present in the data that can lead to inaccurate or unfair outcomes, especially in machine learning and AI applications. Some common types of data bias include:

                      1. Sampling bias – When the data sample doesn’t accurately represent the entire population.
                      2. Selection bias – When certain groups are over- or under-represented in the dataset.
                      3. Reporting bias – When the frequency of events in the data doesn’t reflect real-world frequencies.
                      4. Measurement bias – When the data collection method systematically skews the results.
                      5. Algorithmic bias – When the algorithms or models introduce biases in the results.

                      Data bias can lead to discriminatory outcomes and produce inaccurate predictions or classifications.
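
Sampling and selection bias can often be surfaced with a simple representation check against a known reference population. In the sketch below, the group labels, counts, and the 25% deviation threshold are all illustrative assumptions.

```python
# Sketch of a sampling-bias check: compare each group's share of a training
# sample against its share of a reference population. Labels, counts, and the
# flagging threshold are hypothetical.

def representation_ratios(sample_counts, population_shares):
    """Ratio of each group's sample share to its population share.
    Values far from 1.0 flag over- or under-representation."""
    total = sum(sample_counts.values())
    return {g: (n / total) / population_shares[g] for g, n in sample_counts.items()}

sample = {"group_a": 800, "group_b": 150, "group_c": 50}
population = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}

ratios = representation_ratios(sample, population)
flagged = {g: round(r, 2) for g, r in ratios.items() if abs(r - 1) > 0.25}
print(flagged)  # group_a over-represented; group_b and group_c under-represented
```

A statistical test (e.g., chi-squared goodness of fit) would add rigor, but even this ratio view makes skewed sampling visible early.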

                      Relationship between Data Quality and Bias

                      While data quality and bias are distinct concepts, they are closely related:

                      • Poor data quality can introduce or exacerbate biases. For example, incomplete or inaccurate data may disproportionately affect certain groups.
                      • High-quality data doesn’t necessarily mean unbiased data. A dataset can be accurate, complete, and consistent but still contain inherent biases.
                      • Addressing data bias often involves improving certain aspects of data quality, such as completeness and representativeness.

                      Organizations must implement robust data governance practices to ensure high-quality and unbiased data, regularly assess their data for quality issues and potential biases, and use techniques like data cleansing, resampling, and algorithmic debiasing.

                      Identifying the Hazards and the Risks

                      It is critical to remember the difference between a hazard and a risk. Data quality is a causal factor that can contribute to a hazard; it is not itself a hazard or a harm.

                      Hazard Identification

                      Think of it like a fever. An open wound is a causal factor for the fever, which has a root cause of poor wound hygiene. I can have the factor (the wound), but without the presence of the root cause (poor wound hygiene), the event (fever) would not develop (okay, there may be other root causes in play as well; remember there is never really just one root cause).

                      Potential Issues of Poor Data Quality and Inadequate Data Governance

                      The risks associated with poor data quality and inadequate data governance can significantly impact organizations. Here are the key areas where risks can develop:

                      Decreased Data Quality

                      • Inaccurate, incomplete, or inconsistent data leads to flawed decision-making
                      • Errors in customer information, product details, or financial data can cause operational issues
                      • Poor quality data hinders effective analysis and forecasting

                      Compliance Failures

                      • Non-compliance with regulations can result in regulatory actions
                      • Legal complications and reputational damage from failing to meet regulatory requirements
                      • Increased scrutiny from regulatory bodies

                      Security Breaches

                      • Inadequate data protection increases vulnerability to cyberattacks and data breaches
                      • Financial costs associated with breach remediation, legal fees, and potential fines
                      • Loss of customer trust and long-term reputational damage

                      Operational Inefficiencies

                      • Time wasted on manual data cleaning and correction
                      • Reduced productivity due to employees working with unreliable data
                      • Inefficient processes resulting from poor data integration or inconsistent data formats

                      Missed Opportunities

                      • Failure to identify market trends or customer insights due to unreliable data
                      • Missed sales leads or potential customers because of inaccurate contact information
                      • Inability to capitalize on business opportunities due to lack of trustworthy data

                      Poor Decision-Making

                      • Decisions based on inaccurate or incomplete data leading to suboptimal outcomes, including deviations and product/study impact
                      • Misallocation of resources due to flawed insights from poor quality data
                      • Inability to effectively measure and improve performance

                      Potential Issues of Data Bias

                      Data bias presents significant risks across various domains, particularly when integrated into machine learning (ML) and artificial intelligence (AI) systems. These risks can manifest in several ways, impacting both individuals and organizations.

                      Discrimination and Inequality

                      Data bias can lead to discriminatory outcomes, systematically disadvantaging certain groups based on race, gender, age, or socioeconomic status. For example:

                      • Judicial Systems: Biased algorithms used in risk assessments for bail and sentencing can result in harsher penalties for people of color compared to their white counterparts, even when controlling for similar circumstances.
                      • Healthcare: AI systems trained on biased medical data may provide suboptimal care recommendations for minority groups, potentially exacerbating health disparities.

                      Erosion of Trust and Reputation

                      Organizations that rely on biased data for decision-making risk losing the trust of their customers and stakeholders. This can have severe reputational consequences:

                      • Customer Trust: If customers perceive that an organization’s AI systems are biased, they may lose trust in the brand, leading to a decline in customer loyalty and revenue.
                      • Reputation Damage: High-profile cases of AI bias, such as discriminatory hiring practices or unfair loan approvals, can attract negative media attention and public backlash.

                      Legal and Regulatory Risks

                      There are significant legal and regulatory risks associated with data bias:

                      • Compliance Issues: Organizations may face legal challenges and fines if their AI systems violate anti-discrimination laws.
                      • Regulatory Scrutiny: Increasing awareness of AI bias has led to calls for stricter regulations to ensure fairness and accountability in AI systems.

                      Poor Decision-Making

                      Biased data can lead to erroneous decisions that negatively impact business operations:

                      • Operational Inefficiencies: AI models trained on biased data may make poor predictions, leading to inefficient resource allocation and operational mishaps.
                      • Financial Losses: Incorrect decisions based on biased data can result in financial losses, such as extending credit to high-risk individuals or mismanaging inventory.

                      Amplification of Existing Biases

                      AI systems can perpetuate and even amplify existing biases if not properly managed:

                      • Feedback Loops: Biased AI systems can create feedback loops where biased outcomes reinforce the biased data, leading to increasingly skewed results over time.
                      • Entrenched Inequities: Over time, biased AI systems can entrench societal inequities, making it harder to address underlying issues of discrimination and inequality.
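
The feedback-loop dynamic can be shown with a toy simulation: a system that preferentially samples cases matching its current belief, then retrains on that skewed sample, drifts further from the true rate with each cycle. All parameters below are purely illustrative.

```python
# Toy simulation of a bias feedback loop. A selection strength above 1.0 means
# each retraining cycle amplifies the gap between belief and reality.
# Numbers are purely illustrative.

def simulate_feedback(true_rate: float, initial_belief: float,
                      selection_strength: float, cycles: int) -> list:
    """Each cycle, the observed rate is pulled toward the current belief by the
    biased sampling, and the next belief is trained on that skewed observation."""
    beliefs = [initial_belief]
    for _ in range(cycles):
        b = beliefs[-1]
        observed = true_rate + selection_strength * (b - true_rate)
        beliefs.append(observed)
    return beliefs

trajectory = simulate_feedback(true_rate=0.30, initial_belief=0.50,
                               selection_strength=1.1, cycles=5)
print([round(b, 3) for b in trajectory])  # monotonically drifts away from 0.30
```

With a selection strength below 1.0 the same loop would converge back toward the true rate, which is exactly what debiasing interventions aim to achieve.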

                      Ethical and Moral Implications

                      The ethical implications of data bias are profound:

                      • Fairness and Justice: Biased AI systems challenge the principles of fairness and justice, raising moral questions about using such technologies in critical decision-making processes.
                      • Human Rights: There are concerns that biased AI systems could infringe on human rights, particularly in areas like surveillance, law enforcement, and social services.

                      Perform the Risk Assessment

                      ICH Q9 (r1) Risk Management Process

                      Risk Management happens at the system/process level, where an AI/ML solution will be used. As appropriate, it drills down to the technology level. Never start with the technology level.

                      Hazard Identification

                      It is important to identify product quality hazards that may ultimately lead to patient harm. What is the hazard that the bad decision leads to? What is the hazard that bad-quality data leads to? The bad decision and the bad data are not hazards themselves; they are causes.

                      Hazard identification, the first step of a risk assessment, begins with a well-defined question stating why the risk assessment is being performed. It helps define the system and the appropriate scope of what will be studied. It addresses the “What might go wrong?” question, including identifying the possible consequences of hazards. The output of the hazard identification step is the set of possibilities (i.e., hazards) through which the risk event (e.g., impact to product quality) can happen.

                      The risk question takes the form of “What is the risk of using AI/ML solution for <Process/System> to <purpose of AI/ML solution>?” For example, “What is the risk of using AI/ML to identify deviation recurrence and help prioritize CAPAs?” or “What is the risk of using AI/ML to monitor real-time continuous manufacturing to determine the need to evaluate for a potential diversion?”

                      Process maps, data maps, and knowledge maps are critical here.

                      We can now identify the specific failure modes associated with AI/ML. This may involve deep-dive risk assessments. A failure mode is the specific way a failure occurs; in this case, the specific way that bad data or bad decision-making can happen. Multiple failure modes can, and usually do, lead to the same hazardous situation.

                      Make sure you drill down on failure causes. If more than five potential causes can be identified for a proposed failure mode, it is too broad and probably written at too high a level for the process or item being risk assessed. It should be broken down into several more specific failure modes, each with fewer potential causes and a more manageable scope.
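
The five-cause rule is easy to enforce mechanically when failure modes are captured as structured records. The data class, example failure mode, and cause list below are hypothetical.

```python
# Sketch enforcing the rule above: a failure mode with more than five identified
# potential causes is flagged as too broad and should be split. The structure
# and example entries are hypothetical.

from dataclasses import dataclass, field

@dataclass
class FailureMode:
    description: str
    causes: list = field(default_factory=list)

    def too_broad(self, max_causes: int = 5) -> bool:
        # Flag for splitting into narrower, more specific failure modes.
        return len(self.causes) > max_causes

fm = FailureMode(
    "Model makes bad disposition decision",
    causes=["training data drift", "label errors", "sampling bias",
            "feature leakage", "stale model version", "threshold misconfigured"],
)
print(fm.too_broad())  # True: six causes, so split this failure mode
```

A flagged entry like this one would be rewritten as several narrower failure modes, e.g., one for data-drift-driven errors and one for configuration errors.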

                      Start with an outline of how the process works and a description of the AI/ML (special technology) used in the process. Then, interrogate the following for potential failure modes:

                      • The steps in the process or item under study in which AI/ML interventions occur;
                      • The process/procedure documentation, for example, master batch records, SOPs, protocols, etc.;
                        • Current and proposed process/procedure in sufficient detail to facilitate failure mode identification;
                      • Critical Process Controls