The Product Lifecycle Management Document: Pharmaceutical Quality’s Central Repository for Managing Post-Approval Reality

Pharmaceutical regulatory frameworks have evolved substantially over the past two decades, moving from fixed-approval models—where products remained frozen in approved specifications after authorization—toward dynamic lifecycle management approaches that acknowledge manufacturing reality. Products don’t remain static across their commercial life. Manufacturing sites scale up. Suppliers introduce new materials. Analytical technologies improve. Equipment upgrades occur. Process understanding deepens through continued manufacturing experience. Managing these inevitable changes while maintaining product quality and regulatory compliance has historically required regulatory submission and approval for nearly every meaningful post-approval modification, regardless of risk magnitude or scientific foundation.

This traditional submission-for-approval model reflected regulatory frameworks designed when pharmaceutical manufacturing was less understood, analytical capabilities were more limited, and standardized post-approval change procedures were the best available mechanism for regulatory oversight. Organizations would develop products, conduct manufacturing validation, obtain market approval, then essentially operate within a frozen state of approval—any meaningful change required regulatory notification and frequently required prior approval before distribution of product made under the changed conditions.

The limitations of this approach became increasingly apparent over the 2000s. Regulatory approval cycles extended as the volume of submitted changes increased. Organizations deferred beneficial improvements to avoid submission burden. Supply chain disruptions couldn’t be addressed quickly because qualified alternative suppliers required prior approval supplements with multi-year review timelines. Manufacturing facilities accumulated technical debt—aging equipment, suboptimal processes, outdated analytical methods—because upgrading would trigger regulatory requirements disproportionate to the quality impact. Quality culture inadvertently incentivized resistance to change rather than continuous improvement.

Simultaneously, the pharmaceutical industry’s scientific understanding evolved. Quality by Design (QbD) principles, implemented through ICH Q8 guidance on pharmaceutical development, enabled organizations to develop products with comprehensive process understanding and characterized design spaces. ICH Q10 on pharmaceutical quality systems introduced systematic approaches to knowledge management and continual improvement. Risk management frameworks (ICH Q9) provided scientific methods to evaluate change impact with quantitative rigor. This growing scientific sophistication created opportunity for more nuanced, risk-informed post-approval change management than the binary approval/no approval model permitted.

ICH Q12 “Technical and Regulatory Considerations for Pharmaceutical Product Lifecycle Management” represents the evolution toward scientific, risk-based lifecycle management frameworks. Rather than treating all post-approval changes as equivalent regulatory events, Q12 provides a comprehensive toolbox: Established Conditions (designating which product elements warrant regulatory oversight if changed), Post-Approval Change Management Protocols (enabling prospective agreement on how anticipated changes will be implemented), categorized reporting approaches (aligning regulatory oversight intensity with quality risk), and the Product Lifecycle Management (PLCM) document as central repository for this lifecycle strategy.

The PLCM document itself represents this evolutionary mindset. Where traditional regulatory submissions distribute CMC information across dozens of sections following Common Technical Document structure, the PLCM document consolidates lifecycle management strategy into a central location accessible to regulatory assessors, inspectors, and internal quality teams. The document serves “as a central repository in the marketing authorization application for Established Conditions and reporting categories for making changes to Established Conditions”. It outlines “the specific plan for product lifecycle management that includes the Established Conditions, reporting categories for changes to Established Conditions, PACMPs (if used), and any post-approval CMC commitments”.

This approach doesn’t abandon regulatory oversight. Rather, it modernizes oversight mechanisms by aligning regulatory scrutiny with scientific understanding and risk assessment. High-risk changes warrant prior approval. Moderate-risk changes warrant notification to maintain regulators’ awareness. Low-risk changes can be managed through pharmaceutical quality systems without regulatory notification—though the robust quality system remains subject to regulatory inspection.

The shift from fixed-approval to lifecycle management represents maturation in how the pharmaceutical industry approaches quality. Instead of assuming that quality emerges from regulatory permission, the evolved approach recognizes that quality emerges from robust understanding, effective control systems, and systematic continuous improvement. Regulatory frameworks support this quality assurance by maintaining oversight appropriate to risk, enabling efficient improvement implementation, and incentivizing investment in product and process understanding that justifies flexibility.

For pharmaceutical organizations, this evolution creates both opportunity and complexity. The opportunity is substantial: post-approval flexibility enabling faster response to supply chain challenges, incentives for continuous improvement no longer penalized by submission burden, manufacturing innovation supported by risk-based change management rather than constrained by regulatory caution. The complexity emerges from requirements to build the organizational capability, scientific understanding, and quality system infrastructure supporting this more sophisticated approach.

The PLCM document is the central planning and communication tool that makes this evolution operational. Understanding what PLCM documents are, how they’re constructed, and how they connect control strategy development to commercial lifecycle management is essential for organizations navigating the transition from fixed-approval models toward dynamic, evidence-based lifecycle management.

Established Conditions: The Foundation Underlying PLCM Documents

The PLCM document cannot be understood without first understanding Established Conditions—the regulatory construct that forms the foundation for modern lifecycle management approaches. Established Conditions (ECs) are elements in a marketing application considered necessary to assure product quality and therefore requiring regulatory submission if changed post-approval. This definition appears straightforward until you confront the judgment required to distinguish “necessary to assure product quality” from the extensive supporting information submitted in regulatory applications that doesn’t meet this threshold.

The pharmaceutical development process generates enormous volumes of data. Formulation screening studies. Process characterization experiments. Analytical method development. Stability studies. Scale-up campaigns. Manufacturing experience from clinical trial material production. Much of this information appears in regulatory submissions because it supports and justifies the proposed commercial manufacturing process and control strategy. But not all submitted information constitutes an Established Condition.

Consider a monoclonal antibody purification process submitted in a biologics license application. The application describes the chromatography sequence: Protein A capture, viral inactivation, anion exchange polish, cation exchange polish. For each step, the application provides:

  • Column resin identity and supplier
  • Column dimensions and bed height
  • Load volume and load density
  • Buffer compositions and pH
  • Flow rates
  • Gradient profiles
  • Pool collection criteria
  • Development studies showing how these parameters were selected
  • Process characterization data demonstrating parameter ranges that maintain product quality
  • Viral clearance validation demonstrating step effectiveness

Which elements are Established Conditions requiring regulatory submission if changed? Which are supportive information that can be managed through the Pharmaceutical Quality System (PQS) without regulatory notification?

The traditional regulatory approach made everything potentially an EC through conservative interpretation—any element described in the application might require submission if changed. This created perverse incentives against thorough process description (more detail creates more constraints) and against continuous improvement (changes trigger submission burden regardless of quality impact). ICH Q12 explicitly addresses this problem by distinguishing ECs from supportive information and providing frameworks for identifying ECs based on product and process understanding, quality risk management, and control strategy design.

The guideline describes three approaches to identifying process parameters as ECs:

Minimal parameter-based approach: Critical process parameters (CPPs) and other parameters whose impact on product quality cannot be reasonably excluded are identified as ECs. This is the default position when process understanding is limited: if you haven’t demonstrated that a parameter doesn’t impact quality, assume it’s critical and designate it an EC. For our chromatography example, this approach would designate most process parameters as ECs: resin type, column dimensions, load parameters, buffer compositions, flow rates, gradient profiles. Only clearly non-impactful variables (e.g., specific pump model, tubing lengths within reasonable ranges) would be excluded.

Enhanced parameter-based approach: Leveraging extensive process characterization and understanding of parameter impacts on Critical Quality Attributes (CQAs), the organization identifies which parameters are truly critical versus those demonstrated to have minimal quality impact across realistic operational ranges. Process characterization studies using Design of Experiments (DoE), prior knowledge from similar products, and mechanistic understanding support justifications that certain parameters, while described in the application for completeness, need not be ECs because their quality impact has been demonstrated to be negligible. For our chromatography process, enhanced understanding might demonstrate that precise column dimensions matter less than maintaining appropriate bed height and superficial velocity within characterized ranges, that gradient slope variations within the defined design space don’t measurably impact product quality, and that flow rate variations of ±20% from nominal don’t meaningfully affect separation performance when other parameters compensate appropriately.

Performance-based approach: Rather than designating input parameters (process settings) as ECs, this approach designates output performance criteria—in-process or release specifications that assure quality regardless of how specific parameters vary. For chromatography, this might mean the EC is aggregate purity specification rather than specific column operating parameters. As long as the purification process delivers aggregates below specification limits, variation in how that outcome is achieved doesn’t require regulatory notification. This provides maximum flexibility but requires robust process understanding, appropriate performance specifications representing quality assurance, and effective pharmaceutical quality system controls.
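
To make the contrast concrete, the sketch below maps the chromatography example onto each approach. It is a hypothetical illustration: the element names, the groupings, and the decision of which elements fall where are assumptions made for illustration, not designations from ICH Q12 or from any specific application.

    # Hypothetical illustration of how the same chromatography step might map to
    # Established Conditions under the three ICH Q12 approaches.
    # Element names and groupings are illustrative assumptions, not guidance text.

    chromatography_elements = [
        "resin type and supplier",
        "column dimensions",
        "load density",
        "buffer composition and pH",
        "flow rate",
        "gradient profile",
        "pool collection criteria",
        "aggregate level in pooled eluate",   # an output (performance) attribute
    ]

    ec_designation = {
        # Default position: most inputs are ECs because non-impact has not been demonstrated.
        "minimal_parameter_based": {
            "resin type and supplier", "column dimensions", "load density",
            "buffer composition and pH", "flow rate", "gradient profile",
            "pool collection criteria",
        },
        # Characterization justifies excluding parameters shown to have negligible impact.
        "enhanced_parameter_based": {
            "resin type and supplier", "load density", "pool collection criteria",
        },
        # Output performance criteria become the ECs; input settings are managed in the PQS.
        "performance_based": {
            "aggregate level in pooled eluate",
        },
    }

    for approach, ecs in ec_designation.items():
        managed_in_pqs = [e for e in chromatography_elements if e not in ecs]
        print(f"{approach}: {len(ecs)} ECs, {len(managed_in_pqs)} elements managed in the PQS")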

The choice among these approaches depends on product and process understanding available at approval and organizational lifecycle management strategy. Products developed with minimal Quality by Design (QbD) application, limited process characterization, and traditional “recipe-based” approaches default toward minimal parameter-based EC identification—describing most elements as ECs because insufficient knowledge exists to justify alternatives. Products developed with extensive QbD, comprehensive process characterization, and demonstrated design spaces can justify enhanced or performance-based approaches that provide greater post-approval flexibility.

This creates strategic implications. Organizations implementing ICH Q12 for legacy products often confront applications describing processes in detail without the underlying characterization studies that would support enhanced EC approaches. The submitted information implies everything might be critical because nothing was systematically demonstrated non-critical. Retrofitting ICH Q12 concepts requires either accepting conservative EC designation (reducing post-approval flexibility) or conducting characterization studies to generate understanding supporting more nuanced EC identification. The latter option represents significant investment but potentially generates long-term value through reduced regulatory submission burden for routine lifecycle changes.

For new products, the strategic decision occurs during pharmaceutical development. QbD implementation, process characterization investment, and design space establishment aren’t simply about demonstrating understanding to reviewers—they create the foundation for efficient lifecycle management by enabling justified EC identification that balances quality assurance with operational flexibility.

The PLCM Document Structure: Central Repository for Lifecycle Strategy

The PLCM document consolidates this EC identification and associated lifecycle management planning into a central location within the regulatory application. ICH Q12 describes the PLCM document as serving “as a central repository in the marketing authorization application for ECs and reporting categories for making changes to ECs”. The document “outlines the specific plan for product lifecycle management that includes the ECs, reporting categories for changes to ECs, PACMPs (if used) and any post-approval CMC commitments”.

The functional purpose is transparency and predictability. Regulatory assessors reviewing a marketing application can locate the PLCM document and immediately understand:

  • Which elements the applicant considers Established Conditions (versus supportive information)
  • The reporting category the applicant believes appropriate if each EC changes (prior approval, notification, or managed solely in PQS)
  • Any Post-Approval Change Management Protocols (PACMPs) proposed for planned future changes
  • Specific post-approval CMC commitments made during regulatory negotiations

This consolidation addresses a persistent challenge in regulatory assessment and inspection. Traditional applications distribute CMC information across dozens of sections following Common Technical Document (CTD) structure. Critical process parameters appear in section 3.2.S.2.2 or 3.2.P.3.3. Specifications appear in 3.2.S.4.1 or 3.2.P.5.1. Analytical procedures scatter across multiple sections. Control strategy discussions appear in pharmaceutical development sections. Regulatory commitments might exist in scattered communications, meeting minutes, and approval letters accumulated over the years.

When post-approval changes arise, determining what requires submission involves archeology through historical submissions, approval letters, and regional regulatory guidance. Different regional regulatory authorities might interpret submission requirements differently. Change control groups debate whether a manufacturing site’s change in mixing speed from 150 RPM to 180 RPM triggers prior approval (if RPM was specified in the approved application) or represents routine optimization (if only “appropriate mixing” was specified).

The PLCM document centralizes this information and makes commitments explicit. When properly constructed and maintained, the PLCM becomes the primary reference for change management decisions and regulatory inspection discussions about lifecycle management approach.

Core Elements of the PLCM Document

ICH Q12 specifies that the PLCM document should contain several key elements:

Summary of product control strategy: A high-level summary clarifying and highlighting which control strategy elements should be considered ECs versus supportive information. This summary addresses the fundamental challenge that control strategies contain extensive elements—material controls, in-process testing, process parameter monitoring, release testing, environmental monitoring, equipment qualification requirements, cleaning validation—but not all control strategy elements necessarily rise to EC status requiring regulatory submission if changed. The control strategy summary in the PLCM document maps this landscape, distinguishing legally binding commitments from quality system controls.

Established Conditions listing: The proposed ECs for the product should be listed comprehensively with references to detailed information located elsewhere in the CTD/eCTD structure. A tabular format is recommended though not mandatory. The table typically includes columns for: CTD section reference, EC description, justification for EC designation, current approved state, and reporting category for changes.

Reporting category assignments: For each EC, the reporting category indicates whether changes require prior approval (major changes with high quality risk), notification to regulatory authority (moderate changes with manageable risk), or can be managed solely within the PQS without regulatory notification (minimal or no quality risk). These categorizations should align with regional regulatory frameworks (21 CFR 314.70 in the US, EU variation regulations, equivalent frameworks in other ICH regions) while potentially proposing justified deviations based on product-specific risk assessment.

Post-Approval Change Management Protocols: If the applicant has developed PACMPs for anticipated future changes, these should be referenced in the PLCM document with location of the detailed protocols elsewhere in the submission. PACMPs represent prospective agreements with regulatory authorities about how specific types of changes will be implemented, what studies will support implementation, and what reporting category will apply when acceptance criteria are met. The PLCM document provides the index to these protocols.

Post-approval CMC commitments: Any commitments made to regulatory authorities during assessment—additional validation studies, monitoring programs, method improvements, process optimization plans—should be documented in the PLCM with timelines and expected completion. This addresses the common problem of commitments made during approval negotiations becoming lost or forgotten without systematic tracking.
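
A simple data model can show how these core elements relate to one another. The sketch below is hypothetical: the class and field names (EstablishedCondition, reporting_category, and so on) and the sample entries are assumptions chosen to mirror the tabular format described above, not a prescribed schema.

    # Hypothetical data model mirroring the core PLCM elements described above.
    # Class names, field names, and the sample entries are illustrative assumptions.
    from dataclasses import dataclass, field
    from enum import Enum

    class ReportingCategory(Enum):
        PRIOR_APPROVAL = "prior approval"
        NOTIFICATION = "notification"
        PQS_ONLY = "managed in PQS, not reported"

    @dataclass
    class EstablishedCondition:
        ctd_reference: str          # where the detailed information lives in the eCTD
        description: str            # what the EC is
        justification_ref: str      # pointer to the justification, not duplicated text
        current_approved_state: str
        reporting_category: ReportingCategory

    @dataclass
    class PostApprovalCommitment:
        description: str
        due: str
        fulfilled: bool = False

    @dataclass
    class PLCMDocument:
        control_strategy_summary: str
        established_conditions: list[EstablishedCondition] = field(default_factory=list)
        pacmp_references: list[str] = field(default_factory=list)
        commitments: list[PostApprovalCommitment] = field(default_factory=list)

    plcm = PLCMDocument(
        control_strategy_summary="High-level summary distinguishing ECs from supportive information.",
        established_conditions=[
            EstablishedCondition(
                ctd_reference="3.2.S.4.1",
                description="Drug substance aggregate specification",
                justification_ref="3.2.S.2.6",
                current_approved_state="Aggregate limit per approved specification (value hypothetical)",
                reporting_category=ReportingCategory.PRIOR_APPROVAL,
            ),
        ],
        pacmp_references=["PACMP-01: addition of a second manufacturing site"],
        commitments=[PostApprovalCommitment("Complete commercial-scale hold-time study", due="2025-Q4")],
    )
    open_commitments = sum(not c.fulfilled for c in plcm.commitments)
    print(f"{len(plcm.established_conditions)} EC(s), {len(plcm.pacmp_references)} PACMP reference(s), {open_commitments} open commitment(s)")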

The document is submitted initially with the marketing authorization application or via supplement/variation for marketed products when defining ECs. Following approval, the PLCM document should be updated in post-approval submissions for CMC changes, capturing how ECs have evolved and whether commitments have been fulfilled.

Location and Format Within Regulatory Submissions

The PLCM document can be located in eCTD Module 1 (regional administrative information), Module 2 (summaries), or Module 3 (quality information) based on regional regulatory preferences. The flexibility in location reflects that the PLCM document functions somewhat differently than traditional CTD sections—it’s a cross-reference and planning document rather than detailed technical information.

Module 3 placement (likely section 3.2.P.2 or 3.2.S.2 as part of pharmaceutical development discussions) positions the PLCM document alongside control strategy descriptions and process development narratives. This co-location makes logical sense—the PLCM represents the regulatory management strategy for the control strategy and process described in those sections.

Module 2 placement (within quality overall summary sections) positions the PLCM as a summary-level strategic document, which aligns with its function as a high-level map rather than a detailed specification.

Module 1 placement reflects that the PLCM document contains primarily regulatory process information (reporting categories, commitments) rather than scientific/technical content.

In practice, consultation with regional regulatory authorities during development or pre-approval meetings can clarify preferred location. The critical requirement is consistency and findability—inspectors and assessors need to locate the PLCM document readily.

The tabular format recommended for key PLCM elements facilitates comprehension and maintenance. ICH Q12 Annex IF provides an illustrative example showing how ECs, reporting categories, justifications, PACMPs, and commitments might be organized in a tabular structure. While this example shouldn’t be treated as a prescriptive template, it demonstrates organizational principles: grouping by drug substance versus drug product, clustering related parameters, and referencing detailed justifications in development sections rather than duplicating extensive text in the table.

Control Strategy: The Foundation From Which ECs Emerge

The PLCM document’s Established Conditions emerge from the control strategy developed during pharmaceutical development and refined through technology transfer and commercial manufacturing experience. Understanding how PLCM documents relate to control strategy requires understanding what control strategies are, how they evolve across the lifecycle, and which control strategy elements become ECs versus remaining internal quality system controls.

ICH Q10 defines control strategy as “a planned set of controls, derived from current product and process understanding, that assures process performance and product quality”. This deceptively simple definition encompasses extensive complexity. The “planned set of controls” includes multiple layers:

  • Controls on material attributes: Specifications and acceptance criteria for starting materials, excipients, drug substance, intermediates, and packaging components. These controls ensure incoming materials possess the attributes necessary for the manufacturing process to perform as designed and the final product to meet quality standards.
  • Controls on the manufacturing process: Process parameter ranges, operating conditions, sequence of operations, and in-process controls that govern how materials are transformed into drug product. These include both parameters that operators actively control (temperatures, pressures, mixing speeds, flow rates) and parameters that are monitored to verify process state (pH, conductivity, particle counts).
  • Controls on drug substance and drug product: Release specifications, stability monitoring programs, and testing strategies that verify the final product meets all quality requirements before distribution and maintains quality throughout its shelf life.
  • Controls implicit in process design: Elements like the sequence of unit operations, order of addition, and purification step selection that aren’t necessarily “controlled” in real time but represent design decisions that assure quality. A viral inactivation step positioned after affinity chromatography but before polishing steps exemplifies implicit control: the sequence matters for process performance but isn’t a parameter operators adjust batch-to-batch.
  • Environmental and facility controls: Clean room classifications, environmental monitoring programs, utilities qualification, equipment maintenance, and calibration that create the context within which manufacturing occurs.

The control strategy is not a single document. It’s distributed across process descriptions, specifications, SOPs, batch records, validation protocols, equipment qualification protocols, environmental monitoring programs, stability protocols, and analytical methods. What makes these disparate elements a “strategy” is that they collectively and systematically address how Critical Quality Attributes are ensured within appropriate limits throughout manufacturing and shelf life.

Control Strategy Development During Pharmaceutical Development

Control strategies don’t emerge fully formed at the end of development. They evolve systematically as product and process understanding grows.

Early development focuses on identifying what quality attributes matter. The Quality Target Product Profile (QTPP) articulates intended product performance, dosage form, route of administration, strength, stability, and quality characteristics necessary for safety and efficacy. From QTPP, potential Critical Quality Attributes are identified—the physical, chemical, biological, or microbiological properties that should be controlled within appropriate limits to ensure product quality.

For a monoclonal antibody therapeutic, potential CQAs might include: protein concentration, high molecular weight species (aggregates), low molecular weight species (fragments), charge variants, glycosylation profile, host cell protein levels, host cell DNA levels, viral safety, endotoxin levels, sterility, particulates, container closure integrity. Not all initially identified quality attributes prove critical upon investigation, but systematic evaluation determines which attributes genuinely impact safety or efficacy versus which can vary without meaningful consequence.

Risk assessment identifies which formulation components and process steps might impact these CQAs. For attributes confirmed as critical, development studies characterize how material attributes and process parameters affect CQA levels. Design of Experiments (DoE), mechanistic models, scale-down models, and small-scale studies explore parameter space systematically.

This characterization reveals Critical Material Attributes (CMAs)—characteristics of input materials that impact CQAs when varied—and Critical Process Parameters (CPPs)—process variables that affect CQAs. For our monoclonal antibody, CMAs might include cell culture media glucose concentration (affects productivity and glycosylation), excipient sources (affect aggregation propensity), and buffer pH (affects stability). CPPs might include bioreactor temperature, pH control strategy, harvest timing, chromatography load density, viral inactivation pH and duration, and ultrafiltration/diafiltration concentration factors.

The control strategy emerges from this understanding. CMAs become specifications on incoming materials. CPPs become controlled process parameters with defined operating ranges in batch records. CQAs become specifications with appropriate acceptance criteria. Process analytical technology (PAT) or in-process testing provides real-time verification that process state aligns with expectations. Design spaces, when established, define multidimensional regions where input variables and process parameters consistently deliver quality.
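
The structure of that mapping, controlled inputs linked to the CQAs they can affect, with defined acceptable ranges, can be sketched simply. The parameter names, ranges, and linkages below are hypothetical assumptions loosely based on the antibody example; they show the shape of a control strategy mapping, not actual values.

    # Hypothetical sketch of a control strategy mapping: which controlled inputs
    # (CPPs/CMAs) are linked to which CQAs, and what ranges are considered acceptable.
    # All names, ranges, and linkages are illustrative assumptions.

    control_strategy = {
        "bioreactor temperature (°C)": {
            "range": (36.0, 37.5),
            "linked_cqas": ["glycosylation profile", "aggregates"],
        },
        "viral inactivation pH": {
            "range": (3.4, 3.7),
            "linked_cqas": ["viral safety", "aggregates"],
        },
        "chromatography load density (g/L resin)": {
            "range": (20.0, 35.0),
            "linked_cqas": ["host cell protein", "aggregates"],
        },
        "media glucose concentration (g/L)": {
            "range": (4.0, 8.0),
            "linked_cqas": ["glycosylation profile"],
        },
    }

    def check_batch(parameters: dict[str, float]) -> list[str]:
        """Return alerts for parameters recorded outside their defined ranges."""
        alerts = []
        for name, value in parameters.items():
            low, high = control_strategy[name]["range"]
            if not (low <= value <= high):
                cqas = ", ".join(control_strategy[name]["linked_cqas"])
                alerts.append(f"{name} = {value} outside ({low}, {high}); potential impact on: {cqas}")
        return alerts

    batch_record = {
        "bioreactor temperature (°C)": 36.8,
        "viral inactivation pH": 3.9,   # hypothetical excursion
        "chromatography load density (g/L resin)": 28.0,
        "media glucose concentration (g/L)": 6.2,
    }
    for alert in check_batch(batch_record):
        print(alert)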

Control Strategy Evolution Through Technology Transfer and Commercial Manufacturing

The control strategy at approval represents best understanding achieved during development and clinical manufacturing. Technology transfer to commercial manufacturing sites tests whether that understanding transfers successfully—whether commercial-scale equipment, commercial facility environments, and commercial material sourcing produce equivalent product quality when operating within the established control strategy.

Technology transfer frequently reveals knowledge gaps. Small-scale bioreactors used for clinical supply might achieve adequate oxygen transfer through simple impeller agitation; commercial-scale 20,000L bioreactors require sparging strategy design considering bubble size, gas flow rates, and pressure control that weren’t critical at smaller scale. Heat transfer dynamics differ between 200L and 2000L vessels, affecting cooling/heating rates and potentially impacting CQAs sensitive to temperature excursions. Column packing procedures validated on 10cm diameter columns at development scale might not translate directly to 80cm diameter columns at commercial scale.

These discoveries during scale-up, process validation, and early commercial manufacturing build on development knowledge. Process characterization at commercial scale, continued process verification, and manufacturing experience over initial production batches refine understanding of which parameters truly drive quality versus which development-scale sensitivities don’t manifest at commercial scale.

The control strategy should evolve to reflect this learning. Parameters initially controlled tightly based on limited understanding might be relaxed when commercial experience demonstrates wider ranges maintain quality. Parameters not initially recognized as critical might be added when commercial-scale phenomena emerge. In-process testing strategies might shift from extensive sampling to targeted critical points when process capability is demonstrated.

ICH Q10 explicitly envisions this evolution, describing pharmaceutical quality system objectives that include “establishing and maintaining a state of control” and “facilitating continual improvement”. The state of control isn’t static—it’s dynamic equilibrium where process understanding, monitoring, and control mechanisms maintain product quality while enabling adaptation as knowledge grows.

Connecting Control Strategy to PLCM Document: Which Elements Become Established Conditions?

The control strategy contains far more elements than should be Established Conditions. This is where the conceptual distinction between control strategy (comprehensive quality assurance approach) and Established Conditions (regulatory commitments requiring submission if changed) becomes critical.

Not all controls necessary to assure quality need regulatory approval before changing. Organizations should continuously improve control strategies based on growing knowledge, without regulatory approval creating barriers to enhancement. The challenge is determining which controls are so fundamental to quality assurance that regulatory oversight of changes is appropriate versus which controls can be managed through pharmaceutical quality systems without regulatory involvement.

ICH Q12 guidance indicates that EC designation should consider:

  • Criticality to product quality: Controls directly governing CQAs or CPPs/CMAs with demonstrated impact on CQAs are candidates for EC status. Release specifications for CQAs clearly merit EC designation—changing acceptance criteria for aggregates in a protein therapeutic affects patient safety and product efficacy directly. Similarly, critical process parameters with demonstrated CQA impact warrant EC consideration.
  • Level of quality risk: High-risk controls where inappropriate change could compromise patient safety should be ECs with prior approval reporting category. Moderate-risk controls might be ECs with notification reporting category. Low-risk controls might not need EC designation.
  • Product and process understanding: Greater understanding enables more nuanced EC identification. When extensive characterization demonstrates certain parameters have minimal quality impact, justification exists for excluding them from ECs. Conversely, limited understanding argues for conservative EC designation until further characterization enables refinement.
  • Regulatory expectations and precedent: While ICH Q12 harmonizes approaches, regional regulatory expectations still influence EC identification strategy. Conservative regulators might expect more extensive EC designation; progressive regulators comfortable with risk-based approaches might accept narrower EC scope when justified.

Consider our monoclonal antibody purification process control strategy. The comprehensive control strategy includes:

  • Column resin specifications (purity, dynamic binding capacity, lot-to-lot variability limits)
  • Column packing procedures (compression force, bed height uniformity testing, packing SOPs)
  • Buffer preparation procedures (component specifications, pH verification, bioburden limits)
  • Equipment qualification status (chromatography skid IQ/OQ/PQ, automated systems validation)
  • Process parameters (load density, flow rates, gradient slopes, pool collection criteria)
  • In-process testing (pool purity analysis, viral clearance sample retention)
  • Environmental monitoring in manufacturing suite
  • Operator training qualification
  • Cleaning validation for equipment between campaigns
  • Batch record templates documenting execution
  • Investigation procedures when deviations occur

Which elements become ECs in the PLCM document?

Using an enhanced parameter-based approach with substantial process understanding: Resin specifications for critical attributes (dynamic binding capacity range, leachables below limits) likely merit EC designation, since changing resin characteristics affects purification performance and CQA delivery. Load density ranges and pool collection criteria tied to specific quality specifications probably merit EC status given their direct connection to product purity and yield. Critical buffer component specifications affecting pH and conductivity (which influence protein behavior on resins) warrant EC consideration.

Buffer preparation SOPs, equipment qualification procedures, environmental monitoring program details, operator training qualification criteria, cleaning validation acceptance criteria, and batch record templates likely don’t require EC designation despite being essential control strategy elements. These controls matter for quality, but changes can be managed through pharmaceutical quality system change control with appropriate impact assessment, validation where needed, and implementation without regulatory notification.

The PLCM document makes these distinctions explicit. The control strategy summary section acknowledges that comprehensive controls exist beyond those designated ECs. The EC listing table specifies which elements are ECs, referencing detailed justifications in development sections. The reporting category column indicates whether EC changes require prior approval (drug substance concentration specification), notification (resin dynamic binding capacity specification range adjustment based on additional characterization), or PQS management only (parameters within approved design space).

How ICH Q12 Tools Integrate Into Overall Lifecycle Management

The PLCM document serves as integrating framework for ICH Q12’s lifecycle management tools: Established Conditions, Post-Approval Change Management Protocols, reporting category assignments, and pharmaceutical quality system enablement.

Post-Approval Change Management Protocols: Planning Future Changes Prospectively

PACMPs address a fundamental lifecycle management challenge: regulatory authorities assess change appropriateness when changes are proposed, but this reactive assessment creates timeline uncertainty and resource inefficiency. Organizations proposing manufacturing site additions, analytical method improvements, or process optimizations submit change supplements, then wait months or years for assessment and approval while maintaining existing less-optimal approaches.

PACMPs flip this dynamic by obtaining prospective agreement on how anticipated changes will be implemented and assessed. The PACMP submitted in the original application or post-approval supplement describes:

  • The change intended for future implementation (e.g., manufacturing site addition, scale-up to larger bioreactors, analytical method improvement)
  • Rationale for the change (capacity expansion, technology improvement, continuous improvement)
  • Studies and validation work that will support change implementation
  • Acceptance criteria that will demonstrate the change maintains product quality
  • Proposed reporting category when acceptance criteria are met

If regulatory authorities approve the PACMP, the organization can implement the described change when studies meet acceptance criteria, reporting results per the agreed category rather than defaulting to conservative prior approval submission. This dramatically improves predictability—the organization knows in advance what studies will suffice and what reporting timeline applies.

For example, a PACMP might propose adding manufacturing capacity at a second site using identical equipment and procedures. The protocol specifies: three engineering runs demonstrating equipment performs comparably; analytical comparability studies showing product quality matches reference site; process performance qualification demonstrating commercial batches meet specifications; stability studies confirming comparable stability profiles. When these acceptance criteria are met, implementation proceeds via notification rather than prior approval supplement.
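
Internally, tracking such a protocol can be as simple as a checklist with one decision rule: implementation proceeds via the agreed category only when every acceptance criterion is met. The sketch below assumes hypothetical criterion names mirroring the site-addition example; it illustrates the internal bookkeeping, not a regulatory mechanism.

    # Hypothetical tracker for a site-addition PACMP.
    # Criterion names mirror the example above; the structure is an assumption.

    pacmp_criteria = {
        "three engineering runs with comparable equipment performance": True,
        "analytical comparability to reference site demonstrated": True,
        "process performance qualification batches meet specifications": True,
        "stability profile comparable to reference site": False,   # still in progress
    }

    AGREED_CATEGORY = "notification"               # category agreed in the approved PACMP
    DEFAULT_CATEGORY = "prior approval supplement"

    def reporting_pathway(criteria: dict[str, bool]) -> str:
        """Implementation proceeds via the agreed category only if every criterion is met."""
        if all(criteria.values()):
            return AGREED_CATEGORY
        outstanding = [name for name, met in criteria.items() if not met]
        return f"{DEFAULT_CATEGORY} (or wait); outstanding: {'; '.join(outstanding)}"

    print(reporting_pathway(pacmp_criteria))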

The PLCM document references approved PACMPs, providing the index to these prospectively planned changes. During regulatory inspections or when implementing changes, the PLCM document directs inspectors and internal change control teams to the relevant protocol describing the agreed implementation approach.

Reporting Categories: Risk-Based Regulatory Oversight

Reporting category assignment represents ICH Q12’s mechanism for aligning regulatory oversight intensity with quality risk. Not all changes merit identical regulatory scrutiny. Changes with high potential patient impact warrant prior approval before implementation. Changes with moderate impact might warrant notification so regulators are aware but don’t need to approve prospectively. Changes with minimal quality risk can be managed through pharmaceutical quality systems without regulatory notification (though inspection verification remains possible).

ICH Q12 encourages risk-based categorization aligned with regional regulatory frameworks while enabling flexibility when justified by product/process understanding and robust PQS. The PLCM document makes categorization explicit and provides justification.

The traditional US framework defines three reporting categories per 21 CFR 314.70:

  • Major changes (prior approval supplement): Changes requiring FDA approval before distribution of product made using the change. Examples include formulation changes affecting bioavailability, new manufacturing sites, significant manufacturing process changes, specification relaxations for CQAs. These changes present high quality risk; regulatory assessment verifies that proposed changes maintain safety and efficacy.
  • Moderate changes (Changes Being Effected supplements): Changes implemented after submission without awaiting FDA approval, either 30 days after FDA receives the supplement (CBE-30) or immediately upon submission (CBE-0). Examples include certain analytical method changes, minor formulation adjustments, and supplier changes for non-critical materials. Quality risk is manageable; notification ensures regulatory awareness while avoiding unnecessary delay.
  • Minor changes (annual report): Changes reported annually without prior notification. Examples include editorial corrections, equipment replacement with comparable equipment, supplier changes for non-critical non-functional components. Quality risk is minimal; annual aggregation reduces administrative burden while maintaining regulatory visibility.

European variation regulations provide a comparable framework with Type IA variations (minor changes notified after implementation), Type IB (changes notified before implementation, with a default waiting period before they can be made), and Type II (major changes requiring prior approval).

ICH Q12 enables movement beyond default categorization through justified proposals based on product understanding, process characterization, and PQS effectiveness. A change that would traditionally require prior approval might justify notification category when:

  • Extensive process characterization demonstrates the change remains within validated design space
  • Comparability studies show equivalent product quality
  • Robust PQS ensures appropriate impact assessment and validation before implementation
  • PACMP established prospectively agreed acceptance criteria

The PLCM document records these justified categorizations alongside conservative defaults, creating transparency about the lifecycle management approach. When organizations propose that specific EC changes merit notification rather than prior approval based on process understanding, the PLCM provides the location for that proposal and cross-references to the supporting justification in development sections.
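
The down-categorization logic described above can be sketched as a simple decision rule. The flags and category labels below are hypothetical placeholders; a real proposal would rest on documented risk assessment and cross-referenced data rather than boolean inputs.

    # Hypothetical sketch of justified down-categorization for a proposed EC change.
    # Flags and default categories are illustrative assumptions.

    def proposed_reporting_category(
        default_category: str,                 # e.g. "prior approval" from the regional framework
        within_validated_design_space: bool,
        comparability_demonstrated: bool,
        pqs_change_management_robust: bool,
        covered_by_approved_pacmp: bool,
    ) -> str:
        """Return the category an applicant might justify proposing for this change."""
        if covered_by_approved_pacmp:
            return "category agreed in the PACMP"
        if default_category == "prior approval" and (
            within_validated_design_space
            and comparability_demonstrated
            and pqs_change_management_robust
        ):
            return "notification (justified in the PLCM document with cross-references)"
        return default_category

    print(proposed_reporting_category(
        default_category="prior approval",
        within_validated_design_space=True,
        comparability_demonstrated=True,
        pqs_change_management_robust=True,
        covered_by_approved_pacmp=False,
    ))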

Pharmaceutical Quality System: The Foundation Enabling Flexibility

None of the ICH Q12 tools—ECs, PACMPs, reporting categories, PLCM documents—function effectively without robust pharmaceutical quality system foundation. The PQS provides the infrastructure ensuring that changes not requiring regulatory notification are nevertheless managed with appropriate rigor.

ICH Q10 describes PQS as the comprehensive framework spanning the entire lifecycle from pharmaceutical development through product discontinuation, with objectives including achieving product realization, establishing and maintaining state of control, and facilitating continual improvement. The PQS elements—process performance monitoring, corrective and preventive action, change management, management review—provide systematic mechanisms for managing all changes (not just those notified to regulators).

When the PLCM document indicates that certain parameters can be adjusted within design space without regulatory notification, the PQS change management system ensures those adjustments undergo appropriate impact assessment, scientific justification, implementation with validation where needed, and effectiveness verification. When parameters are adjusted within specification ranges based on process optimization, CAPA systems ensure changes address identified opportunities while monitoring systems verify maintained quality.

Regulatory inspectors assessing ICH Q12 implementation evaluate PQS effectiveness as much as PLCM document content. An impressive PLCM document with sophisticated EC identification and justified reporting categories means little if the PQS change management system can’t demonstrate appropriate rigor for changes managed internally. Conversely, organizations with robust PQS can justify greater regulatory flexibility because inspectors have confidence that internal management substitutes effectively for regulatory oversight.

The Lifecycle Perspective: PLCM Documents as Living Infrastructure

The PLCM document concept fails if treated as a static submission artifact: a form populated during regulatory preparation, then filed away after approval. The document’s value emerges from functioning as living infrastructure maintained throughout the commercial lifecycle.

Pharmaceutical Development Stage: Establishing Initial PLCM

During pharmaceutical development (ICH Q10’s first lifecycle stage), the focus is designing products and processes that consistently deliver intended performance. Development activities using QbD principles, risk management, and systematic characterization generate the product and process understanding that enables initial control strategy design and EC identification.

At this stage, the PLCM document represents the lifecycle management strategy proposed to regulatory authorities. Development teams compile:

  • Control strategy summary articulating how CQAs will be ensured through material controls, process controls, and testing strategy
  • Proposed EC listing based on available understanding and chosen approach (minimal, enhanced parameter-based, or performance-based)
  • Reporting category proposals justified by development studies and risk assessment
  • Any PACMPs for changes anticipated during commercialization (site additions, scale-up, method improvements)
  • Commitments for post-approval work (additional validation studies, monitoring programs, process characterization to be completed commercially)

The quality of this initial PLCM document depends heavily on development quality. Products developed with minimal process characterization and traditional empirical approaches produce conservative PLCM documents—extensive ECs, default prior approval reporting categories, limited justification for flexibility. Products developed with extensive QbD, comprehensive characterization, and demonstrated design spaces produce strategic PLCM documents—targeted ECs, risk-based reporting categories, justified flexibility.

This creates powerful incentive alignment. QbD investment during development isn’t merely about satisfying reviewers or demonstrating scientific sophistication—it’s infrastructure investment enabling lifecycle flexibility that delivers commercial value through reduced regulatory burden, faster implementation of improvements, and supply chain agility.

Technology Transfer Stage: Testing and Refining PLCM Strategy

Technology transfer represents critical validation of whether development understanding and proposed control strategy transfer successfully to commercial manufacturing. This stage tests the PLCM strategy implicitly—do the identified ECs actually ensure quality at commercial scale? Are proposed reporting categories appropriate for the change types that emerge during scale-up?

Technology transfer frequently reveals refinements needed. Parameters identified as critical at development scale might prove less sensitive commercially due to different equipment characteristics. Parameters not initially critical might require tighter control at larger scale due to heat/mass transfer limitations, longer processing times, or equipment-specific phenomena.

These discoveries should inform PLCM document updates submitted with first commercial manufacturing supplements or variations. The EC listing might be refined based on scale-up learning. Reporting category proposals might be adjusted when commercial-scale validation provides different risk perspectives. PACMPs initially proposed might require modification when commercial manufacturing reveals implementation challenges not apparent from development-scale thinking.

Organizations treating the PLCM as static approval-time artifact miss this refinement opportunity. The PLCM document approved initially reflected best understanding available during development. Commercial manufacturing generates new understanding that should enhance the PLCM, making it more accurate and strategic.

Commercial Manufacturing Stage: Maintaining PLCM as Living Document

Commercial manufacturing represents the longest lifecycle stage, potentially spanning decades. During this period, the PLCM document should evolve continuously as the product evolves.

Post-approval changes occur constantly in pharmaceutical manufacturing. Supplier discontinuations force raw material changes. Equipment obsolescence requires replacement. Analytical methods improve as technology advances. Process optimizations based on manufacturing experience enhance efficiency or robustness. Regulatory standard evolution necessitates updated validation approaches or expanded testing.

Each change potentially affects the PLCM document. If an EC changes, the PLCM document should be updated to reflect the new approved state. If a PACMP is executed and the change implemented, the PLCM should document completion and remove that protocol from active status while adding the implemented change to the EC listing if it becomes a new EC. If post-approval commitments are fulfilled, the PLCM should document completion.

The PLCM document becomes the central change management reference. When change controls propose manufacturing modifications, the first question is: “Does this affect an Established Condition in our PLCM document?” If yes, what’s the reporting category? Do we have an approved PACMP covering this change type? If we’re proposing this change doesn’t require regulatory notification despite affecting described elements, what’s our justification based on design space, process understanding, or risk assessment?
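
Those first questions amount to a short triage routine. The sketch below is hypothetical, reusing the simple EC and PACMP structures assumed earlier in this piece; a real change control system would layer risk assessment, approvals, and documentation on top of this logic.

    # Hypothetical change-control triage against the PLCM document.
    # The lookup structures and entries are illustrative assumptions.

    established_conditions = {
        "drug substance aggregate specification": "prior approval",
        "chromatography load density range": "notification",
        "parameters within approved design space": "PQS only",
    }
    approved_pacmps = {"addition of a second manufacturing site"}

    def triage_change(description: str, affected_element: str | None) -> str:
        """Answer the first change-control questions: EC affected? Category? PACMP available?"""
        if description in approved_pacmps:
            return "Implement per the approved PACMP; report per the agreed category."
        if affected_element is None or affected_element not in established_conditions:
            return "No EC affected: manage through PQS change control and document the justification."
        category = established_conditions[affected_element]
        if category == "PQS only":
            return "EC affected but within approved flexibility: manage in the PQS and update records."
        return f"EC affected: submit per reporting category '{category}' before implementation."

    print(triage_change("widen load density range", "chromatography load density range"))
    print(triage_change("addition of a second manufacturing site", None))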

Annual Product Reviews, Management Reviews, and change management metrics should assess PLCM document currency. How many changes implemented last year affected ECs? What reporting categories were used? Were reporting category assignments appropriate retrospectively based on actual quality impact? Are there patterns suggesting EC designation should be refined—parameters initially identified as critical that commercial experience shows have minimal impact, or vice versa?

This dynamic maintenance transforms the PLCM document from regulatory artifact into operational tool for lifecycle management strategy. The document evolves from initial approval state toward increasingly sophisticated representation of how the organization manages quality through knowledge-based, risk-informed change management rather than rigid adherence to initial approval conditions.

Practical Implementation Challenges: PLCM-as-Done Versus PLCM-as-Imagined

The conceptual elegance of PLCM documents—central repository for lifecycle management strategy, transparent communication with regulators, strategic enabler for post-approval flexibility—confronts implementation reality in pharmaceutical organizations struggling with resource constraints, competing priorities, and cultural inertia favoring traditional approaches.

The Knowledge Gap: Insufficient Understanding to Support Enhanced EC Approaches

Many pharmaceutical organizations implementing ICH Q12 confront applications containing limited process characterization. Products approved years or decades ago described manufacturing processes in detail without the underlying DoE studies, mechanistic models, or design space characterization that would support enhanced EC identification.

The submitted information implies everything might be critical because systematic demonstrations of non-criticality don’t exist. Implementing PLCM documents for these legacy products forces uncomfortable choice: designate extensive ECs based on conservative interpretation (accepting reduced post-approval flexibility), or invest in retrospective characterization studies generating understanding needed to justify refined EC identification.

The latter option represents significant resource commitment. Process characterization at commercial scale requires manufacturing capacity allocation, analytical testing resources, statistical expertise for DoE design and interpretation, and time for study execution and assessment. For products with mature commercial manufacturing, this investment competes with new product development, existing product improvements, and operational firefighting.

Organizations often default to conservative EC designation for legacy products, accepting reduced ICH Q12 benefits rather than making the characterization investment. This creates a two-tier environment: new products developed with QbD approaches achieve ICH Q12 flexibility, while legacy products remain constrained by limited understanding despite being commercially mature.

The strategic question is whether retrospective characterization investment pays back through avoided regulatory submission costs, faster implementation of supply chain changes, and enhanced resilience during material shortages or supplier disruptions. For high-value products with long remaining commercial life, the investment frequently justifies itself. For products approaching patent expiration or with declining volumes, the business case weakens.

The Cultural Gap: Change Management as Compliance Versus Strategic Capability

Traditional pharmaceutical change management culture treats post-approval changes as compliance obligations requiring regulatory permission rather than strategic capabilities enabling continuous improvement. This mindset manifests in change control processes designed to document what changed and ensure regulatory notification rather than optimize change implementation efficiency.

ICH Q12 requires cultural shift from “prove we complied with regulatory notification requirements” toward “optimize lifecycle management strategy balancing quality assurance with operational agility”. This shift challenges embedded assumptions.

The assumption that “more regulatory oversight equals better quality” must confront evidence that excessive regulatory burden can harm quality by preventing necessary improvements, forcing workarounds when optimal changes can’t be implemented due to submission timelines, and creating perverse incentives against process optimization. Quality emerges from robust understanding, effective control, and systematic improvement—not from regulatory permission slips for every adjustment.

The assumption that “regulatory submission requirements are fixed by regulation” must acknowledge that ICH Q12 explicitly encourages justified proposals for risk-based reporting categories differing from traditional defaults. Organizations can propose that specific changes merit notification rather than prior approval based on process understanding, comparability demonstrations, and PQS rigor. But proposing non-default categorization requires confidence to articulate justification and defend during regulatory assessment—confidence many organizations lack.

Building this capability requires training quality professionals, regulatory affairs teams, and change control reviewers in ICH Q12 concepts and their application. It requires developing organizational competency in risk assessment connecting change types to quality impact with quantitative or semi-quantitative justification. It requires quality systems that can demonstrate to inspectors that internally managed changes undergo appropriate rigor even without regulatory oversight.

The Maintenance Gap: PLCM Documents as Static Approval Artifacts Versus Living Systems

Perhaps the largest implementation gap exists between PLCM documents as living lifecycle management infrastructure versus PLCM documents as one-time regulatory submission artifacts. Pharmaceutical organizations excel at generating documentation for regulatory submissions. We struggle with maintaining dynamic documents that evolve with the product.

The PLCM document submitted at approval captures understanding and strategy at that moment. Absent systematic maintenance processes, the document fossilizes. Post-approval changes occur but the PLCM document isn’t updated to reflect current EC state. PACMPs are executed but completion isn’t documented in updated PLCM versions. Commitments are fulfilled but the PLCM document continues listing them as pending.

Within several years, the PLCM document submitted at approval no longer accurately represents current product state or lifecycle management approach. When inspectors request the PLCM document, organizations scramble to reconstruct current state from change control records, approval letters, and variation submissions rather than maintaining the PLCM proactively.

This failure emerges from treating PLCM documents as regulatory submission deliverables (owned by regulatory affairs, prepared for submission, then archived) rather than operational quality system documents (owned by quality systems, maintained continuously, used routinely for change management decisions). The latter requires infrastructure:

  • Document management systems with version control and change history
  • Assignment of PLCM document maintenance responsibility to specific quality system roles
  • Integration of PLCM updates into change control workflows (every approved change affecting ECs triggers a PLCM update; see the sketch after this list)
  • Periodic PLCM review during annual product reviews or management reviews to verify currency
  • Training for quality professionals in using PLCM documents as operational references rather than dusty submission artifacts
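
As one concrete illustration of integrating PLCM updates into change control, here is a minimal, hypothetical hook: when an approved change affects an EC, a new PLCM revision entry is appended rather than leaving the document at its approval-time state. The record structure and field names are assumptions.

    # Hypothetical change-control hook that keeps the PLCM document current:
    # every approved change affecting an EC appends a new PLCM revision entry.
    # Structures and field names are illustrative assumptions.
    from datetime import date

    plcm_revisions = [
        {"version": 1, "date": "2023-04-01", "note": "Initial PLCM submitted with the application"},
    ]

    def on_change_approved(change_id: str, affects_ec: bool, ec_description: str = "") -> None:
        """Append a PLCM revision when an approved change touches an Established Condition."""
        if not affects_ec:
            return   # managed in the PQS; PLCM content is unchanged
        plcm_revisions.append({
            "version": plcm_revisions[-1]["version"] + 1,
            "date": date.today().isoformat(),
            "note": f"{change_id}: updated EC '{ec_description}' to reflect the newly approved state",
        })

    on_change_approved("CC-2024-117", affects_ec=True, ec_description="chromatography load density range")
    print(f"PLCM at version {plcm_revisions[-1]['version']} with {len(plcm_revisions)} revision entries")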

Organizations implementing ICH Q12 successfully build these infrastructure elements deliberately. They recognize that PLCM document value requires maintenance investment comparable to batch record maintenance, specification maintenance, or validation protocol maintenance—not one-time preparation then neglect.

Strategic Implications: PLCM Documents as Quality System Maturity Indicators

The quality and maintenance of PLCM documents reveals pharmaceutical quality system maturity. Organizations with immature quality systems produce PLCM documents that check regulatory boxes—listing ECs comprehensively with conservative reporting categories, acknowledging required elements, fulfilling submission expectations. But these PLCM documents provide minimal strategic value because they reflect compliance obligation rather than lifecycle management strategy.

Organizations with mature quality systems produce PLCM documents demonstrating sophisticated lifecycle thinking: targeted EC identification justified by process understanding, risk-based reporting category proposals supported by characterization data and PQS capabilities, PACMPs anticipating future manufacturing evolution, and maintained currency through systematic update processes integrated into quality system operations.

This maturity manifests in tangible outcomes. Mature organizations implement post-approval improvements faster because PLCM planning anticipated change types and established appropriate reporting categories. They navigate supplier changes and material shortages more effectively because EC scope acknowledges design space flexibility rather than rigid specification adherence. They demonstrate regulatory inspection resilience because inspectors reviewing PLCM documents find coherent lifecycle strategy supported by robust PQS rather than afterthought compliance artifacts.

The PLCM document, implemented authentically, becomes what it was intended to be: central infrastructure connecting product understanding, control strategy design, risk management, quality systems, and regulatory strategy into integrated lifecycle management capability. Not another form to complete during regulatory preparation, but the strategic framework enabling pharmaceutical organizations to manage commercial manufacturing evolution over decades while assuring consistent product quality and maintaining regulatory compliance.

That’s what ICH Q12 envisions. That’s what the pharmaceutical industry needs. The gap between vision and reality—between PLCM-as-imagined and PLCM-as-done—determines whether these tools transform pharmaceutical lifecycle management or become another layer of regulatory theater generating compliance artifacts without operational value.

Closing that gap requires the same fundamental shift quality culture always requires: moving from procedure compliance and documentation theater toward genuine capability development grounded in understanding, measurement, and continuous improvement. PLCM documents that work emerge from organizations committed to product understanding, lifecycle strategy, and quality system maturity—not from organizations populating templates because ICH Q12 says we should have these documents.

Which type of organization are we building? The answer appears not in the eloquence of our PLCM document prose, but in whether our change control groups reference these documents routinely, whether our annual product reviews assess PLCM currency systematically, whether our quality professionals can articulate EC rationale confidently, and whether our post-approval changes implement predictably because lifecycle planning anticipated them rather than treating each change as crisis requiring regulatory archeology.

PLCM documents are falsifiable quality infrastructure. They make specific predictions: that identified ECs capture elements necessary for quality assurance, that reporting categories align with actual quality risk, that PACMPs enable anticipated changes efficiently, that PQS provides appropriate rigor for internally managed changes. These predictions can be tested through change implementation experience, regulatory inspection outcomes, supply chain resilience during disruptions, and cycle time metrics for post-approval changes.

Organizations serious about pharmaceutical lifecycle management should test these predictions systematically. If PLCM strategies prove ineffective—if supposedly non-critical parameters actually impact quality when changed, if reporting categories prove inappropriate, if PQS rigor proves insufficient for internally managed changes—that’s valuable information demanding revision. If PLCM strategies prove effective, that validates the lifecycle management approach and builds confidence for further refinement.

Most organizations won’t conduct this rigorous testing. PLCM documents will become another compliance artifact, accepted uncritically as required elements without empirical validation of effectiveness. This is exactly the kind of unfalsifiable quality system I’ve critiqued throughout this blog. Genuine commitment to lifecycle management requires honest measurement of whether ICH Q12 tools actually improve lifecycle management outcomes.

The pharmaceutical industry deserves better. Patients deserve better. We can build lifecycle management infrastructure that actually manages lifecycles—or we can generate impressive documents that impress nobody except those who’ve never tried using them for actual change management decisions.

A 2025 Retrospective for Investigations of a Dog

If the history of pharmaceutical quality management were written as a geological timeline, 2025 would hopefully mark the end of the Holocene of Compliance—a long, stable epoch where “following the procedure” was sufficient to ensure survival—and the beginning of the Anthropocene of Complexity.

For decades, our industry has operated under a tacit social contract. We agreed to pretend that “compliance” was synonymous with “quality.” We agreed to pretend that a validated method would work forever because we proved it worked once in a controlled protocol three years ago. We agreed to pretend that “zero deviations” meant “perfect performance,” rather than “blind surveillance.” We agreed to pretend that if we wrote enough documents, reality would conform to them.

If I had my wish, 2025 would be the year that contract finally dissolved.

Throughout the year—across dozens of posts, technical analyses, and industry critiques on this blog—I have tried to dismantle the comfortable illusions of “Compliance Theater” and show how this theater collides violently with the unforgiving reality of complex systems.

The connecting thread running through every one of these developments is the concept I have returned to obsessively this year: Falsifiable Quality.

This Year in Review is not merely a summary of blog posts. It is an attempt to synthesize the fragmented lessons of 2025 into a coherent argument. The argument is this: A quality system that cannot be proven wrong is a quality system that cannot be trusted.

If our systems—our validation protocols, our risk assessments, our environmental monitoring programs—are designed only to confirm what we hope is true (the “Happy Path”), they are not quality systems at all. They are comfort blankets. And 2025 was the year we finally started pulling the blanket off.

The Philosophy of Doubt

(Reflecting on: The Effectiveness Paradox, Sidney Dekker, and Gerd Gigerenzer)

Before we dissect the technical failures of 2025, let me first establish the philosophical framework that defined this year’s analysis.

In August, I published “The Effectiveness Paradox: Why ‘Nothing Bad Happened’ Doesn’t Prove Your Quality System Works.” It became one of the most discussed posts of the year because it attacked the most sacred metric in our industry: the trend line that stays flat.

We are conditioned to view stability as success. If Environmental Monitoring (EM) data shows zero excursions for six months, we throw a pizza party. If a method validation passes all acceptance criteria on the first try, we commend the development team. If a year goes by with no Critical deviations, we pay out bonuses.

But through the lens of Falsifiable Quality—a concept heavily influenced by the philosophy of Karl Popper, the challenging insights of Deming, and the safety science of Sidney Dekker, whom we discussed in November—these “successes” look suspiciously like failures of inquiry.

The Problem with Unfalsifiable Systems

Karl Popper famously argued that a scientific theory is only valid if it makes predictions that can be tested and proven false. “All swans are white” is a scientific statement because finding one black swan falsifies it. “God is love” is not, because no empirical observation can disprove it.

In 2025, I argued that most Pharmaceutical Quality Systems (PQS) are designed to be unfalsifiable.

  • The Unfalsifiable Alert Limit: We set alert limits based on historical averages + 3 standard deviations. This ensures that we only react to statistical outliers, effectively blinding us to gradual drift or systemic degradation that remains “within the noise” (a sketch of this blind spot follows the list).
  • The Unfalsifiable Robustness Study: We design validation protocols that test parameters we already know are safe (e.g., pH +/- 0.1), avoiding the “cliff edges” where the method actually fails. We prove the method works where it works, rather than finding where it breaks.
  • The Unfalsifiable Risk Assessment: We write FMEAs where the conclusion (“The risk is acceptable”) is decided in advance, and the RPN scores are reverse-engineered to justify it.
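
To make the first bullet concrete, here is a minimal sketch (plain Python with invented numbers) of how a mean-plus-three-standard-deviations alert limit and a simple EWMA chart respond to the same slowly drifting monitoring data; the baseline, drift rate, and smoothing weight are illustrative assumptions, not a recommended configuration.

```python
import random
import statistics

random.seed(1)

# Historical baseline used to set the alert limit (illustrative counts).
baseline = [random.gauss(10, 2) for _ in range(50)]
mean = statistics.mean(baseline)
sd = statistics.pstdev(baseline)
alert_limit = mean + 3 * sd  # the classic "historical average + 3 SD" limit

# New data with a slow upward drift that single-point limits are slow to catch.
drifting = [random.gauss(10 + 0.08 * i, 2) for i in range(60)]

# Rule 1: react only to individual points above the 3-sigma alert limit.
sigma_flags = [i for i, x in enumerate(drifting) if x > alert_limit]

# Rule 2: EWMA of the same data with the usual asymptotic control limit.
lam = 0.2  # smoothing weight, chosen for illustration only
ewma, z = [], mean
for x in drifting:
    z = lam * x + (1 - lam) * z
    ewma.append(z)
ewma_limit = mean + 3 * sd * (lam / (2 - lam)) ** 0.5
ewma_flags = [i for i, v in enumerate(ewma) if v > ewma_limit]

print(f"3-sigma limit {alert_limit:.1f}: individual points flagged at {sigma_flags}")
print(f"EWMA limit {ewma_limit:.1f}: drift first signalled at point "
      f"{ewma_flags[0] if ewma_flags else 'none'}")
```

The specific statistic matters less than the structure: the individual-point limit is built to stay quiet while the process degrades inside the noise.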

This is “Safety Theater,” a term Dekker uses to describe the rituals organizations perform to look safe rather than be safe.

Safety-I vs. Safety-II

In November’s post Sidney Dekker: The Safety Scientist Who Influences How I Think About Quality, I explored Dekker’s distinction between Safety-I (minimizing things that go wrong) and Safety-II (understanding how things usually go right).

Traditional Quality Assurance is obsessed with Safety-I. We count deviations. We count OOS results. We count complaints. When those counts are low, we assume the system is healthy.
But as the LeMaitre Vascular warning letter showed us this year (discussed in Part III), a system can have “zero deviations” simply because it has stopped looking for them. LeMaitre had excellent water data—because they were cleaning the valves before they sampled them. They were measuring their ritual, not their water.

Falsifiable Quality is the bridge to Safety-II. It demands that we treat every batch record not as a compliance artifact, but as a hypothesis test.

  • Hypothesis: “The contamination control strategy is effective.”
  • Test: Aggressive monitoring in worst-case locations, not just the “representative” center of the room.
  • Result: If we find nothing, the hypothesis survives another day. If we find something, we have successfully falsified the hypothesis—which is a good thing because it reveals reality.

The shift from “fearing the deviation” to “seeking the falsification” is a cultural pivot point of 2025.

The Epistemological Crisis in the Lab (Method Validation)

(Reflecting on: USP <1225>, Method Qualification vs. Validation, and Lifecycle Management)

Nowhere was the battle for Falsifiable Quality fought more fiercely in 2025 than in the analytical laboratory.

The proposed revision to USP <1225> Validation of Compendial Procedures (published in Pharmacopeial Forum 51(6)) arrived late in the year, but it serves as the perfect capstone to the arguments I’ve been making since January.

For forty years, analytical validation has been the ultimate exercise in “Validation as an Event.” You develop a method. You write a protocol. You execute the protocol over three days with your best analyst and fresh reagents. You print the report. You bind it. You never look at it again.

This model is unfalsifiable. It assumes that because the method worked in the “Work-as-Imagined” conditions of the validation study, it will work in the “Work-as-Done” reality of routine QC for the next decade.

The Reportable Result: Validating Decisions, Not Signals

The revised USP <1225>—aligned with ICH Q14 (Analytical Procedure Development) and USP <1220> (The Lifecycle Approach)—destroys this assumption. It introduces concepts that force falsifiability into the lab.

The most critical of these is the Reportable Result.

Historically, we validated “the instrument” or “the measurement.” We proved that the HPLC could inject the same sample ten times with < 1.0% RSD.

But the Reportable Result is the final value used for decision-making—the value that appears on the Certificate of Analysis. It is the product of a complex chain: Sampling -> Transport -> Storage -> Preparation -> Dilution -> Injection -> Integration -> Calculation -> Averaging.

Validating the injection precision (the end of the chain) tells us nothing about the sampling variability (the beginning of the chain).

By shifting focus to the Reportable Result, USP <1225> forces us to ask: “Does this method generate decisions we can trust?”

The Replication Strategy: Validating “Work-as-Done”

The new guidance insists that validation must mimic the replication strategy of routine testing.
If your SOP says “We report the average of 3 independent preparations,” then your validation must evaluate the precision and accuracy of that average, not of the individual preparations.

This seems subtle, but it is revolutionary. It prevents the common trick of “averaging away” variability during validation to pass the criteria, only to face OOS results in routine production because the routine procedure doesn’t use the same averaging scheme.

It forces the validation study to mirror the messy reality of the “Work-as-Done,” making the validation data a falsifiable predictor of routine performance, rather than a theoretical maximum capability.
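
As a rough numerical illustration of that chain, the sketch below propagates hypothetical standard deviations for sampling, preparation, and injection into the precision of a reportable result defined as the mean of three preparations injected twice each. Every number is invented; the only point is that the injection repeatability we traditionally validate is a small slice of the variability the reportable result actually carries.

```python
import math

# Hypothetical standard deviations for each stage, in % of label claim.
# These values are invented for illustration, not taken from any real method.
s_sampling = 0.8   # lot heterogeneity carried by the sample that was taken
s_prep = 0.6       # weighing, extraction, dilution
s_injection = 0.3  # what a classic "replicate injections, RSD < 1%" study sees

def reportable_sd(n_preps: int, n_inj_per_prep: int) -> float:
    """SD of a reportable result defined as the mean of n_preps independent
    preparations, each injected n_inj_per_prep times, from one sampling event."""
    var = (s_sampling ** 2
           + s_prep ** 2 / n_preps
           + s_injection ** 2 / (n_preps * n_inj_per_prep))
    return math.sqrt(var)

print(f"Injection repeatability alone:             {s_injection:.2f}")
print(f"Reportable result, 1 prep x 1 injection:   {reportable_sd(1, 1):.2f}")
print(f"Reportable result, 3 preps x 2 injections: {reportable_sd(3, 2):.2f}")
# The sampling term never shrinks, no matter how we replicate downstream.
```

It also makes the replication-strategy requirement obvious: if routine testing reports the mean of three preparations, that third line is the precision the validation has to defend, not the first.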

Method Qualification vs. Validation: The June Distinction

In June, I wrote “Method Qualification and Validation,” clarifying a distinction that often confuses the industry.

  • Qualification is the “discovery phase” where we explore the method’s limits. It is inherently falsifiable—we want to find where the method breaks.
  • Validation has traditionally been the “confirmation phase” where we prove it works.

The danger, as I noted in that post, is when we skip the falsifiable Qualification step and go straight to Validation. We write the protocol based on hope, not data.

USP <1225> essentially argues that Validation must retain the falsifiable spirit of Qualification. It is not a coronation; it is a stress test.

The Death of “Method Transfer” as We Know It

In a Falsifiable Quality system, a method is never “done.” The Analytical Target Profile (ATP)—a concept from ICH Q14 that permeates the new thinking—is a standing hypothesis: “This method measures Potency within +/- 2%.”

Every time we run a system suitability check, every time we run a control standard, we are testing that hypothesis.

If the method starts drifting—even if it still passes broad system suitability limits—a falsifiable system flags the drift. An unfalsifiable system waits for the OOS.

The draft revision of USP <1225> is a call to arms. It asks us to stop treating validation as a “ticket to ride”—a one-time toll we pay to enter GMP compliance—and start treating it as a “ticket to doubt.” Validation gives us permission to use the method, but only as long as the data continues to support the hypothesis of fitness.

The Reality Check (The “Unholy Trinity” of Warning Letters)

Philosophy and guidelines are fine, but in 2025, reality kicked in the door. The regulatory year was defined by three critical warning letters—Sanofi, LeMaitre, and Rechon—that collectively dismantled the industry’s illusions of control.

It began, as these things often do, with a ghost from the past.

Sanofi Framingham: The Pendulum Swings Back

(Reflecting on: Failure to Investigate Critical Deviations and The Sanofi Warning Letter)

The year opened with a shock. On January 15, 2025, the FDA issued a warning letter to Sanofi’s Framingham facility—the sister site to the legacy Genzyme Allston Landing plant, whose consent decree defined an entire generation of biotech compliance and much of my own career.

In my January analysis (Failure to Investigate Critical Deviations: A Cautionary Tale), I noted that the FDA’s primary citation was a failure to “thoroughly investigate any unexplained discrepancy.”

This is the cardinal sin of Falsifiable Quality.

An “unexplained discrepancy” is a signal from reality. It is the system telling you, “Your hypothesis about this process is wrong.”

  • The Falsifiable Response: You dive into the discrepancy. You assume your control strategy missed something. You use Causal Reasoning (the topic of my May post) to find the mechanism of failure.
  • The Sanofi Response: As the warning letter detailed, they frequently attributed failures to “isolated incidents” or superficial causes without genuine evidence.

This is the “Refusal to Falsify.” By failing to investigate thoroughly, the firm protects the comfortable status quo. They choose to believe the “Happy Path” (the process is robust) over the evidence (the discrepancy).

The Pendulum of Compliance

In my companion post (“Sanofi Warning Letter”), I discussed the “pendulum of compliance.” The Framingham site was supposed to be the fortress of quality, built on the lessons of the Genzyme crisis.

The failure at Sanofi wasn’t a lack of SOPs; it was a lack of curiosity.

The investigators likely had checklists, templates, and timelines (Compliance Theater), but they lacked the mandate—or perhaps the Expertise—to actually solve the problem.

This set the thematic stage for the rest of 2025. Sanofi showed us that “closing the deviation” is not the same as fixing the problem. This insight led directly into my August argument in The Effectiveness Paradox: You can close 100% of your deviations on time and still have a manufacturing process that is spinning out of control.

If Sanofi was the failure of investigation (looking back), Rechon and LeMaitre were failures of surveillance (looking forward). Together, they form a complete picture of why unfalsifiable systems fail.

Reflecting on: Rechon Life Science and LeMaitre Vascular

Two warning letters in 2025—Rechon Life Science (September) and LeMaitre Vascular (August)—provided brutal case studies in what happens when “representative sampling” is treated as a buzzword rather than a statistical requirement.

Rechon Life Science: The Map vs. The Territory

The Rechon Life Science warning letter was a significant regulatory signal of 2025 regarding sterile manufacturing. It wasn’t just a list of observations; it was an indictment of unfalsifiable Contamination Control Strategies (CCS).

We spent 2023 and 2024 writing massive CCS documents to satisfy Annex 1. Hundreds of pages detailing airflows, gowning procedures, and material flows. We felt good about them. We felt “compliant.”

Then the FDA walked into Rechon and essentially asked: “If your CCS is so good, why does your smoke study show turbulence over the open vials?”

The warning letter highlighted a disconnect I’ve called “The Map vs. The Territory.”

  • The Map: The CCS document says the airflow is unidirectional and protects the product.
  • The Territory: The smoke study video shows air eddying backward from the operator to the sterile core.

In an unfalsifiable system, we ignore the smoke study (or film it from a flattering angle) because it contradicts the CCS. We prioritize the documentation (the claim) over the observation (the evidence).

In a falsifiable system, the smoke study is the test. If the smoke shows turbulence, the CCS is falsified. We don’t defend the CCS; we rewrite it. We redesign the line.

The FDA’s critique of Rechon’s “dynamic airflow visualization” was devastating because it showed that Rechon was using the smoke study as a marketing video, not a diagnostic tool. They filmed “representative” operations that were carefully choreographed to look clean, rather than the messy reality of interventions.

LeMaitre Vascular: The Sin of “Aspirational Data”

If Rechon was about air, LeMaitre Vascular (analyzed in my August post When Water Systems Fail) was about water. And it contained an even more egregious sin against falsifiability.

The FDA observed that LeMaitre’s water sampling procedures required cleaning and purging the sample valves before taking the sample.

Let’s pause and consider the epistemology of this.

  • The Goal: To measure the quality of the water used in manufacturing.
  • The Reality: Manufacturing operators do not purge and sanitize the valve for 10 minutes before filling the tank. They open the valve and use the water.
  • The Sample: By sanitizing the valve before sampling, LeMaitre was measuring the quality of the sampling process, not the quality of the water system.

I call this “Aspirational Data.” It is data that reflects the system as we wish it existed, not as it actually exists. It is the ultimate unfalsifiable metric. You can never find biofilm in a valve if you scrub the valve with alcohol before you open it.

The FDA’s warning letter was clear: “Sampling… must include any pathway that the water travels to reach the process.”

LeMaitre also performed an unauthorized “Sterilant Switcheroo,” changing their sanitization agent without change control or biocompatibility assessment. This is the hallmark of an unfalsifiable culture: making changes based on convenience, assuming they are safe, and never designing the study to check if that assumption is wrong.

The “Representative” Trap

Both warning letters pivot on the misuse of the word “representative.”

Firms love to claim their EM sampling locations are “representative.” But representative of what? Usually, they are representative of the average condition of the room—the clean, empty spaces where nothing happens.

But contamination is not an “average” event. It is a specific, localized failure. A falsifiable EM program places probes in the “worst-case” locations—near the door, near the operator’s hands, near the crimping station. It tries to find contamination. It tries to falsify the claim that the zone is sterile, aseptic, or bioburden-reducing.

When Rechon and LeMaitre failed to justify their sampling locations, they were guilty of designing an unfalsifiable experiment. They placed the “microscope” where they knew they wouldn’t find germs.
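
A crude way to see the trap in numbers: if contamination is localized to a couple of sites and probe locations are chosen without reference to risk, detection is just a sampling-fraction lottery. The room size and counts below are purely illustrative.

```python
from math import comb

def detection_probability(total_sites: int, contaminated: int, sampled: int) -> float:
    """Chance that at least one of `sampled` randomly chosen locations
    lands on one of the `contaminated` locations (hypergeometric)."""
    p_miss = comb(total_sites - contaminated, sampled) / comb(total_sites, sampled)
    return 1 - p_miss

# Illustrative room: 40 candidate locations, contamination localized to 2 of them.
for n_probes in (4, 8, 12):
    p = detection_probability(40, 2, n_probes)
    print(f"{n_probes} probes placed without regard to risk: {p:.0%} chance of ever seeing it")
# Probes placed deliberately at the known worst-case sites are not playing
# this lottery at all; that is the whole argument for risk-based locations.
```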

2025 taught us that regulators are no longer impressed by the thickness of the CCS binder. They are looking for the logic of control. They are testing your hypothesis. And if you haven’t tested it yourself, you will fail.

The Investigation as Evidence

(Reflecting on: The Golden Start to a Deviation Investigation, Causal Reasoning, Take-the-Best Heuristics, and The Catalent Case)

If Rechon, LeMaitre, and Sanofi teach us anything, it is that the quality system’s ability to discover failure is more important than its ability to prevent failure.

A perfect manufacturing process that no one is looking at is indistinguishable from a collapsing process disguised by poor surveillance. But a mediocre process that is rigorously investigated, understood, and continuously improved is a path toward genuine control.

The investigation itself—how we respond to a deviation, how we reason about causation, how we design corrective actions—is where falsifiable quality either succeeds or fails.

The Golden Day: When Theory Meets Work-as-Done

In April, I published “The Golden Start to a Deviation Investigation,” which made a deceptively simple argument: The first 24 hours after a deviation is discovered are where your quality system either commits to discovering truth or retreats into theater.

This argument sits at the heart of falsifiable quality.

When a deviation occurs, you have a narrow window—what I call the “Golden Day”—where evidence is fresh, memories are intact, and the actual conditions that produced the failure still exist. If you waste this window with vague problem statements and abstract discussions, you permanently lose the ability to test causal hypotheses later.

The post outlined a structured protocol:

First, crystallize the problem. Not “potency was low”—but “Lot X234, potency measured at 87% on January 15th at 14:32, three hours after completion of blending in Vessel C-2.” Precision matters because only specific, bounded statements can be falsified. A vague problem statement can always be “explained away.”

Second, go to the Gemba. This is the antidote to “work-as-imagined” investigation. The SOP says the temperature controller should maintain 37°C +/- 2°C. But the Gemba walk reveals that the probe is positioned six inches from the heating element, the data logger is in a recessed pocket where humidity accumulates, and the operator checks it every four hours despite a requirement to check hourly. These are the facts that predict whether the deviation will recur.

Third, interview with cognitive discipline. Most investigations fail not because investigators lack information, but because they extract information poorly. Cognitive interviewing—developed by research psychologists and adopted by investigators from the FBI to the National Transportation Safety Board—uses mental reinstatement, multiple perspectives, and sequential reordering to access accurate recall rather than confabulated narrative. The investigator asks the operator to walk through the event in different orders, from different viewpoints, each time triggering different memory pathways. This is not a “soft” technique; it is a mechanism for generating falsifiable evidence.

The Golden Day post makes it clear: You do not investigate deviations to document compliance. You investigate deviations to gather evidence about whether your understanding of the process is correct.

Causal Reasoning: Moving Beyond “What Was Missing”

Most investigation tools fail not because they are flawed, but because they are applied with the wrong mindset. In my May post “Causal Reasoning: A Transformative Approach to Root Cause Analysis,” I argued that pharmaceutical investigations are often trapped in “negative reasoning.”

Negative reasoning asks: “What barrier was missing? What should have been done but wasn’t?” This mindset leads to unfalsifiable conclusions like “Procedure not followed” or “Training was inadequate.” These are dead ends because they describe the absence of an ideal, not the presence of a cause.

Causal reasoning flips the script. It asks: “What was present in the system that made the observed outcome inevitable?”

Instead of settling for “human error,” causal reasoning demands we ask: What environmental cues made the action sensible to the operator at that moment? Were the instructions ambiguous? Did competing priorities make compliance impossible? Was the process design fragile?

This shift transforms the investigation from a compliance exercise into a scientific inquiry.

Consider the LeMaitre example:

  • Negative Reasoning: “Why didn’t they sample the true condition?” Answer: “Because they didn’t follow the intent of the sampling plan.”
  • Causal Reasoning: “What made the pre-cleaning practice sensible to them?” Answer: “They believed it ensured sample validity by removing valve residue.”

By understanding the why, we identify a knowledge gap that can be tested and corrected, rather than a negligence gap that can only be punished.

In September, “Take-the-Best Heuristic for Causal Investigation” provided a practical framework for this. Instead of listing every conceivable cause—a process that often leads to paralysis—the “Take-the-Best” heuristic directs investigators to focus on the most information-rich discriminators. These are the factors that, if different, would have prevented the deviation. This approach focuses resources where they matter most, turning the investigation into a targeted search for truth.

CAPA: Predictions, Not Promises

The Sanofi warning letter—analyzed in January—showed the destination of unfalsifiable investigation: CAPAs that exist mainly as paperwork.

Sanofi had investigation reports. They had “corrective actions.” But the FDA noted that deviations recurred in similar patterns, suggesting that the investigation had identified symptoms, not mechanisms, and that the “corrective” action had not actually addressed causation.

This is the sin of treating CAPA as a promise rather than a hypothesis.

A falsifiable CAPA is structured as an explicit prediction: “If we implement X change, then Y undesirable outcome will not recur under conditions Z.”

This can be tested. If it fails the test, the CAPA itself becomes evidence—not of failure, but of incomplete causal understanding. Which is valuable.
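
One way to make that operational is to write the CAPA down as a prediction with an explicit effectiveness check, rather than as free text. The sketch below is a toy structure with invented names, dates, and thresholds, not a claim about any particular eQMS; the probe example borrows from the Golden Day discussion above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FalsifiableCapa:
    """A CAPA stated as a testable prediction: 'If we implement X,
    then outcome Y will not recur under conditions Z.'"""
    change: str                # X: the implemented change
    predicted_outcome: str     # Y: what should no longer happen
    conditions: str            # Z: the scope where the prediction applies
    check_after: date          # when the effectiveness check falls due
    recurrence_threshold: int  # how many recurrences falsify the prediction

    def evaluate(self, recurrences_observed: int, today: date) -> str:
        if today < self.check_after:
            return "prediction not yet testable"
        if recurrences_observed > self.recurrence_threshold:
            return "prediction falsified: causal understanding incomplete, reopen"
        return "prediction survived this test window"

capa = FalsifiableCapa(
    change="reposition temperature probe away from the heating element",
    predicted_outcome="no low-potency result attributable to blend temperature",
    conditions="all lots blended in Vessel C-2",
    check_after=date(2026, 6, 30),
    recurrence_threshold=0,
)
print(capa.evaluate(recurrences_observed=1, today=date(2026, 7, 15)))
```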

In the Rechon analysis, this showed up concretely: The FDA’s real criticism was not just that contamination was found; it was that Rechon’s Contamination Control Strategy had no mechanism to falsify itself. If the CCS said “unidirectional airflow protects the product,” and smoke studies showed bidirectional eddies, the CCS had been falsified. But Rechon treated the falsification as an anomaly to be explained away, rather than evidence that the CCS hypothesis was wrong.

A falsifiable organization would say: “Our CCS predicted that Grade A in an isolator with this airflow pattern would remain sterile. The smoke study proves that prediction wrong. Therefore, the CCS is false. We redesign.”

Instead, they filmed from a different angle and said the aerodynamics were “acceptable.”

Knowledge Integration: When Deviations Become the Curriculum

The final piece of falsifiable investigation is what I call “knowledge integration.” A single deviation is a data point. But across the organization, deviations should form a curriculum about how systems actually fail.

Sanofi’s failure was not that they investigated each deviation badly (though they did). It was that they investigated them in isolation. Each deviation closed on its own. Each CAPA addressed its own batch. There was no organizational learning—no mechanism for a pattern of similar deviations to trigger a hypothesis that the control strategy itself was fundamentally flawed.

This is where the Catalent case study, analyzed in September’s “When 483s Reveal Zemblanity,” becomes instructive. Zemblanity is the opposite of serendipity: the seemingly random recurrence of the same failure through different paths. Catalent’s 483 observations were not isolated mistakes; they formed a pattern that revealed a systemic assumption (about equipment capability, about environmental control, about material consistency) that was false across multiple products and locations.

A falsifiable quality system catches zemblanity early by:

  1. Treating each deviation as a test of organizational hypotheses, not as an isolated incident.
  2. Trending deviation patterns to detect when the same causal mechanism is producing failures across different products, equipment, or operators.
  3. Revising control strategies when patterns falsify the original assumptions, rather than tightening parameters at the margins.

The Digital Hallucination (CSA, AI, and the Expertise Crisis)

(Reflecting on: CSA: The Emperor’s New Clothes, Annex 11, and The Expertise Crisis)

While we battled microbes in the cleanroom, a different battle was raging in the server room. 2025 was the year the industry tried to “modernize” validation through Computer Software Assurance (CSA) and AI, and in many ways, it was the year we tried to automate our way out of thinking.

CSA: The Emperor’s New Validation Clothes

In September, I published “Computer System Assurance: The Emperor’s New Validation Clothes,” a critique of the contortions being made around the FDA’s guidance. The narrative sold by consultants for years was that traditional Computer System Validation (CSV) was “broken”—too much documentation, too much testing—and that CSA was a revolutionary new paradigm of “critical thinking.”

My analysis showed that this narrative is historically illiterate.

The principles of CSA—risk-based testing, leveraging vendor audits, focusing on intended use—are not new. They are the core principles of GAMP5 and have been applied for decades now.

The industry didn’t need a new guidance to tell us to use critical thinking; we had simply chosen not to use the critical thinking tools we already had. We had chosen to apply “one-size-fits-all” templates because they were safe (unfalsifiable).

The CSA guidance is effectively the FDA saying: “Please read the GAMP5 guide you claimed to be following for the last 15 years.”

The danger of the “CSA Revolution” narrative is that it encourages a swing to the opposite extreme: “Unscripted Testing” that becomes “No Testing.”

In a falsifiable system, “unscripted testing” is highly rigorous—it is an expert trying to break the software (“Ad Hoc testing”). But in an unfalsifiable system, “unscripted testing” becomes “I clicked around for 10 minutes and it looked fine.”

The Expertise Crisis: AI and the Death of the Apprentice

This leads directly to the Expertise Crisis. In September, I wrote “The Expertise Crisis: Why AI’s War on Entry-Level Jobs Threatens Quality’s Future.” This was perhaps the most personal topic I covered this year, because it touches on the very survival of our profession.

We are rushing to integrate Artificial Intelligence (AI) into quality systems. We have AI writing deviations, AI drafting SOPs, AI summarizing regulatory changes. The efficiency gains are undeniable. But the cost is hidden, and it is epistemological.

Falsifiability requires expertise.
To falsify a claim—to look at a draft investigation report and say, “No, that conclusion doesn’t follow from the data”—you need deep, intuitive knowledge of the process. You need to know what a “normal” pH curve looks like so you can spot the “abnormal” one that the AI smoothed over.

Where does that intuition come from? It comes from the “grunt work.” It comes from years of reviewing batch records, years of interviewing operators, years of struggling to write a root cause analysis statement.

The Expertise Crisis is this: If we give all the entry-level work to AI, where will the next generation of Quality Leaders come from?

  • The Junior Associate doesn’t review the raw data; the AI summarizes it.
  • The Junior Associate doesn’t write the deviation; the AI generates the text.
  • Therefore, the Junior Associate never builds the mental models necessary to critique the AI.

The Loop of Unfalsifiable Hallucination

We are creating a closed loop of unfalsifiability.

  1. The AI generates a plausible-sounding investigation report.
  2. The human reviewer (who has been “de-skilled” by years of AI reliance) lacks the deep expertise to spot the subtle logical flaw or the missing data point.
  3. The report is approved.
  4. The “hallucination” becomes the official record.

In a falsifiable quality system, the human must remain the adversary of the algorithm. The human’s job is to try to break the AI’s logic, to check the citations, to verify the raw data.
But in 2025, we saw the beginnings of a “Compliance Autopilot”—a desire to let the machine handle the “boring stuff.”

My warning in September remains urgent: Efficiency without expertise is just accelerated incompetence. If we lose the ability to falsify our own tools, we are no longer quality professionals; we are just passengers in a car driven by a statistical model that doesn’t know what “truth” is.

My post “The Missing Middle in GMP Decision Making: How Annex 22 Redefines Human-Machine Collaboration in Pharmaceutical Quality Assurance” goes a lot deeper here.

Annex 11 and Data Governance

In August, I analyzed the draft Annex 11 (Computerised Systems) in the post “Data Governance Systems: A Fundamental Shift.”

The Europeans are ahead of the FDA here. While the FDA talks about “Assurance” (testing less), the EU is talking about “Governance” (controlling more). The new Annex 11 makes it clear: You cannot validate a system if you do not control the data lifecycle. Validation is not a test script; it is a state of control.

This aligns perfectly with USP <1225> and <1220>. Whether it’s a chromatograph or an ERP system, the requirement is the same: Prove that the data is trustworthy, not just that the software is installed.

The Process as a Hypothesis (CPV & Cleaning)

(Reflecting on: Continuous Process Verification and Hypothesis Formation)

The final frontier of validation we explored in 2025 was the manufacturing process itself.

CPV: Continuous Falsification

In March, I published “Continuous Process Verification (CPV) Methodology and Tool Selection.”
CPV is the ultimate expression of Falsifiable Quality in manufacturing.

  • Traditional Validation (3 Batches): “We made 3 good batches, therefore the process is perfect forever.” (Unfalsifiable extrapolation).
  • CPV: “We made 3 good batches, so we have a license to manufacture, but we will statistically monitor every subsequent batch to detect drift.” (Continuous hypothesis testing).

The challenge with CPV, as discussed in the post, is that it requires statistical literacy. You cannot implement CPV if your quality unit doesn’t understand the difference between Cpk and Ppk, or between control limits and specification limits.

This circles back to the Expertise Crisis. We are implementing complex statistical tools (CPV software) at the exact moment we are de-skilling the workforce. We risk creating a “CPV Dashboard” that turns red, but no one knows why or what to do about it.
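
To put a concrete face on that statistical-literacy point, here is a sketch that computes Cpk from within-batch variation and Ppk from overall variation for a synthetic, slowly drifting process. The specification limits, subgroup size, and drift are invented; the takeaway is the gap that opens between the two indices when a process drifts, which is exactly the signal a CPV program exists to catch.

```python
import math
import random
import statistics

random.seed(7)

LSL, USL = 95.0, 105.0  # illustrative specification limits

# 30 batches of 4 results each, with a slow upward drift across batches.
subgroups = [[random.gauss(98 + 0.15 * b, 1.0) for _ in range(4)] for b in range(30)]
all_values = [x for sg in subgroups for x in sg]

# Cpk uses within-subgroup (short-term) variation; Ppk uses overall variation.
within_sd = math.sqrt(statistics.mean([statistics.variance(sg) for sg in subgroups]))
overall_sd = statistics.stdev(all_values)
process_mean = statistics.mean(all_values)

cpk = min(USL - process_mean, process_mean - LSL) / (3 * within_sd)
ppk = min(USL - process_mean, process_mean - LSL) / (3 * overall_sd)

print(f"Cpk (within-batch spread only):    {cpk:.2f}")
print(f"Ppk (overall spread, incl. drift): {ppk:.2f}")
# A Cpk that still looks comfortable while Ppk sags is the dashboard turning
# red; knowing what to do about it requires knowing where the variation lives.
```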

Cleaning Validation: The Science of Residue

In August, I tried to apply falsifiability to one of the most stubborn areas of dogma: Cleaning Validation.

In Building Decision-Making with Structured Hypothesis Formation, I argued that cleaning validation should not be about “proving it’s clean.” It should be about “understanding why it gets dirty.”

  • Traditional Approach: Swab 10 spots. If they pass, we are good.
  • Hypothesis Approach: “We hypothesize that the gasket on the bottom valve is the hardest to clean. We predict that if we reduce rinse time by 1 minute, that gasket will fail.”

By testing the boundaries—by trying to make the cleaning fail—we understand the Design Space of the cleaning process.

We discussed the “Visual Inspection” paradox in cleaning: If you can see the residue, it failed. But if you can’t see it, does it pass?

Only if you have scientifically determined the Visible Residue Limit (VRL). Using “visually clean” without a validated VRL is—you guessed it—unfalsifiable.

The Plastic Paradox (Single-Use Systems and the E&L Mirage)

If the Rechon and LeMaitre warning letters were about the failure to control biological contaminants we can find, the industry’s struggle with Single-Use Systems (SUS) in 2025 was about the chemical contaminants we choose not to find.

We have spent the last decade aggressively swapping stainless steel for plastic. The value proposition was irresistible: Eliminate cleaning validation, eliminate cross-contamination, increase flexibility. We traded the “devil we know” (cleaning residue) for the “devil we don’t” (Extractables and Leachables).

But in 2025, with the enforcement reality of USP <665> (Plastic Components and Systems) settling in, we had to confront the uncomfortable truth: Most E&L risk assessments are unfalsifiable.

The Vendor Data Trap

The standard industry approach to E&L is the ultimate form of “Compliance Theater.”

  1. We buy a single-use bag.
  2. We request the vendor’s regulatory support package (the “Map”).
  3. We see that the vendor extracted the film with aggressive solvents (ethanol, hexane) for 7 days.
  4. We conclude: “Our process uses water for 24 hours; therefore, we are safe.”

This logic is epistemologically bankrupt. It assumes that the Vendor’s Model (aggressive solvents/short time) maps perfectly to the User’s Reality (complex buffers/long duration/specific surfactants).

It ignores the fact that plastics are dynamic systems. Polymers age. Gamma irradiation initiates free radical cascades that evolve over months. A bag manufactured in January might have a different leachable profile than a bag manufactured in June, especially if the resin supplier made a “minor” change that didn’t trigger a notification.

By relying solely on the vendor’s static validation package, we are choosing not to falsify our safety hypothesis. We are effectively saying, “If the vendor says it’s clean, we will not look for dirt.”

USP <665>: A Baseline, Not a Ceiling

The full adoption of USP <665> was supposed to bring standardization. And it has—it provides a standard set of extraction conditions. But standards can become ceilings.

In 2025, I observed a troubling trend of “Compliance by Citation.” Firms are citing USP <665> compliance as proof of absence of risk, stopping the inquiry there.

A Falsifiable E&L Strategy goes further. It asks:

  • “What if the vendor data is irrelevant to my specific surfactant?”
  • “What if the gamma irradiation dose varied?”
  • “What if the interaction between the tubing and the connector creates a new species?”

The Invisible Process Aid

We must stop viewing Single-Use Systems as inert piping. They are active process components. They are chemically reactive vessels that participate in our reaction kinetics.

When we treat them as inert, we are engaging in the same “Aspirational Thinking” that LeMaitre used on their water valves. We are modeling the system we want (pure, inert plastic), not the system we have (a complex soup of antioxidants, slip agents, and degradants).

The lesson of 2025 is that Material Qualification cannot be a paper exercise. If you haven’t done targeted simulation studies that mimic your actual “Work-as-Done” conditions, you haven’t validated the system. You’ve just filed the receipt.

The Mandate for 2026

As we look toward 2026, the path is clear. We cannot go back to the comfortable fiction of the pre-2025 era.

The regulatory environment (Annex 1, ICH Q14, USP <1225>, Annex 11) is explicitly demanding evidence of control, not just evidence of compliance. The technological environment (AI) is demanding that we sharpen our human expertise to avoid becoming obsolete. The physical environment (contamination, supply chain complexity) is demanding systems that are robust, not just rigid.

The mandate for the coming year is to build Falsifiable Quality Systems.

What does that look like practically?

  1. In the Lab: Implement USP <1225> logic now. Don’t wait for the official date. Validate your reportable results. Add “challenge tests” to your routine monitoring.
  2. In the Plant: Redesign your Environmental Monitoring to hunt for contamination, not to avoid it. If you have a “perfect” record in a Grade C area, move the plates until you find the dirt.
  3. In the Office: Treat every investigation as a chance to falsify the control strategy. If a deviation occurs that the control strategy said was impossible, update the control strategy.
  4. In the Culture: Reward the messenger. The person who finds the crack in the system is not a troublemaker; they are the most valuable asset you have. They just falsified a false sense of security.
  5. In Design: Embrace the Elegant Quality System (discussed in May). Complexity is the enemy of falsifiability. Complex systems hide failures; simple, elegant systems reveal them.

2025 was the year we stopped pretending. 2026 must be the year we start building. We must build systems that are honest enough to fail, so that we can build processes that are robust enough to endure.

Thank you for reading, challenging, and thinking with me this year. The investigation continues.

Beyond Malfunction Mindset: Normal Work, Adaptive Quality, and the Future of Pharmaceutical Problem-Solving

Beyond the Shadow of Failure

Problem-solving is too often shaped by the assumption that the system is perfectly understood and fully specified. If something goes wrong—a deviation, a batch out-of-spec, or a contamination event—our approach is to dissect what “failed” and fix that flaw, believing this will restore order. This way of thinking, which I call the malfunction mindset, is as ingrained as it is incomplete. It assumes that successful outcomes are the default, that work always happens as written in SOPs, and that only failure deserves our scrutiny.

But here’s the paradox: most of the time, our highly complex manufacturing environments actually succeed—often under imperfect, shifting, and not fully understood conditions. If we only study what failed, and never question how our systems achieve their many daily successes, we miss the real nature of pharmaceutical quality: it is not the absence of failure, but the presence of robust, adaptive work. Taking this broader, more nuanced perspective is not just an academic exercise—it’s essential for building resilient operations that truly protect patients, products, and our organizations.

Drawing from my thinking through zemblanity (the predictable but often overlooked negative outcomes of well-intentioned quality fixes), the effectiveness paradox (why “nothing bad happened” isn’t proof your quality system works), and the persistent gap between work-as-imagined and work-as-done, this post explores why the malfunction mindset persists, how it distorts investigations, and what future-ready quality management should look like.

The Allure—and Limits—of the Failure Model

Why do we reflexively look for broken parts and single points of failure? It is, as Sidney Dekker has argued, both comforting and defensible. When something goes wrong, you can always point to a failed sensor, a missed checklist, or an operator error. This approach—introducing another level of documentation, another check, another layer of review—offers a sense of closure and regulatory safety. After all, as long as you can demonstrate that you “fixed” something tangible, you’ve fulfilled investigational due diligence.

Yet this fails to account for how quality is actually produced—or lost—in the real world. The malfunction model treats systems like complicated machines: fix the broken gear, oil the creaky hinge, and the machine runs smoothly again. But, as Dekker reminds us in Drift Into Failure, such linear thinking ignores the drift, adaptation, and emergent complexity that characterize real manufacturing environments. The truth is, in complex adaptive systems like pharmaceutical manufacturing, it often takes more than one “error” for failure to manifest. The system absorbs small deviations continuously, adapting and flexing until, sometimes, a boundary is crossed and a problem surfaces.

W. Edwards Deming’s wisdom rings truer than ever: “Most problems result from the system itself, not from individual faults.” A sustainable approach to quality is one that designs for success—and that means understanding the system-wide properties enabling robust performance, not just eliminating isolated malfunctions.

Procedural Fundamentalism: The Work-as-Imagined Trap

One of the least examined, yet most impactful, contributors to the malfunction mindset is procedural fundamentalism—the belief that the written procedure is both a complete specification and an accurate description of work. This feels rigorous and provides compliance comfort, but it is a profound misreading of how work actually happens in pharmaceutical manufacturing.

Work-as-imagined, as elucidated by Erik Hollnagel and others, represents an abstraction: it is how distant architects of SOPs visualize the “correct” execution of a process. Yet, real-world conditions—resource shortages, unexpected interruptions, mismatched raw materials, shifting priorities—force adaptation. Operators, supervisors, and Quality professionals do not simply “follow the recipe”: they interpret, improvise, and—crucially—adjust on the fly.

When we treat procedures as authoritative descriptions of reality, we create the proxy problem: our investigations compare real operations against an imagined baseline that never fully existed. Deviations become automatically framed as problem points, and success is redefined as rigid adherence, regardless of context or outcome.

Complexity, Performance Variability, and Real Success

So, how do pharmaceutical operations succeed so reliably despite the ever-present complexity and variability of daily work?

The answer lies in embracing performance variability as a feature of robust systems, not a flaw. In high-reliability environments—from aviation to medicine to pharmaceutical manufacturing—success is routinely achieved not by demanding strict compliance, but by cultivating adaptive capacity.

Consider environmental monitoring in a sterile suite: The procedure may specify precise times and locations, but a seasoned operator, noticing shifts in people flow or equipment usage, might proactively sample a high-risk area more frequently. This adaptation—not captured in work-as-imagined—actually strengthens data integrity. Yet, traditional metrics would treat this as a procedural deviation.

This is the paradox of the malfunction mindset: in seeking to eliminate all performance variability, we risk undermining precisely those adaptive behaviors that produce reliable quality under uncertainty.

Why the Malfunction Mindset Persists: Cognitive Comfort and Regulatory Reinforcement

Why do organizations continue to privilege the malfunction mindset, even as evidence accumulates of its limits? The answer is both psychological and cultural.

Component breakdown thinking is psychologically satisfying—it offers a clear problem, a specific cause, and a direct fix. For regulatory agencies, it is easy to measure and audit: did the deviation investigation determine the root cause, did the CAPA address it, does the documentation support this narrative? Anything that doesn’t fit this model is hard to defend in audits or inspections.

Yet this approach offers, at best, a partial diagnosis and, at worst, the illusion of control. It encourages organizations to catalog deviations while blindly accepting a much broader universe of unexamined daily adaptations that actually determine system robustness.

Complexity Science and the Art of Organizational Success

To move toward a more accurate—and ultimately more effective—model of quality, pharmaceutical leaders must integrate the insights of complexity science. Drawing from the work of Stuart Kauffman and others at the Santa Fe Institute, we understand that the highest-performing systems operate not at the edge of rigid order, but at the “edge of chaos,” where structure is balanced with adaptability.

In these systems, success and failure both arise from emergent properties—the patterns of interaction between people, procedures, equipment, and environment. The most meaningful interventions, therefore, address how the parts interact, not just how each part functions in isolation.

This explains why traditional root cause analysis, focused on the parts, often fails to produce lasting improvements; it cannot account for outcomes that emerge only from the collective dynamics of the system as a whole.

Investigating for Learning: The Take-the-Best Heuristic

A key innovation needed in pharmaceutical investigations is a shift to what Hollnagel calls Safety-II thinking: focusing on how things go right as well as why they occasionally go wrong.

Here, the take-the-best heuristic becomes crucial. Instead of compiling lists of all deviations, ask: Among all contributing factors, which one, if addressed, would have the most powerful positive impact on future outcomes, while preserving adaptive capacity? This approach ensures investigations generate actionable, meaningful learning, rather than feeding the endless paper chase of “compliance theater.”

Building Systems That Support Adaptive Capability

Taking complexity and adaptive performance seriously requires practical changes to how we design procedures, train, oversee, and measure quality.

  • Procedure Design: Make explicit the distinction between objectives and methods. Procedures should articulate clear quality goals, specify necessary constraints, but deliberately enable workers to choose methods within those boundaries when faced with new conditions.
  • Training: Move beyond procedural compliance. Develop adaptive expertise in your staff, so they can interpret and adjust sensibly—understanding not just “what” to do, but “why” it matters in the bigger system.
  • Oversight and Monitoring: Audit for adaptive capacity. Don’t just track “compliance” but also whether workers have the resources and knowledge to adapt safely and intelligently. Positive performance variability (smart adaptations) should be recognized and studied.
  • Quality System Design: Build systematic learning from both success and failure. Examine ordinary operations to discern how adaptive mechanisms work, and protect these capabilities rather than squashing them in the name of “control.”

Leadership and Systems Thinking

Realizing this vision depends on a transformation in leadership mindset—from one seeking control to one enabling adaptive capacity. Deming’s profound knowledge and the principles of complexity leadership remind us that what matters is not enforcing ever-stricter compliance, but cultivating an organizational context where smart adaptation and genuine learning become standard.

Leadership must:

  • Distinguish between complicated and complex: Apply detailed procedures to the former (e.g., calibration), but support flexible, principles-based management for the latter.
  • Tolerate appropriate uncertainty: Not every problem has a clear, single answer. Creating psychological safety is essential for learning and adaptation during ambiguity.
  • Develop learning organizations: Invest in deep understanding of operations, foster regular study of work-as-done, and celebrate insights from both expected and unexpected sources.

Practical Strategies for Implementation

Turning these insights into institutional practice involves a systematic, research-inspired approach:

  • Start procedure development with observation of real work before specifying methods. Small scale and mock exercises are critical.
  • Employ cognitive apprenticeship models in training, so that experience, reasoning under uncertainty, and systems thinking become core competencies.
  • Begin investigations with appreciative inquiry—map out how the system usually works, not just how it trips up.
  • Measure leading indicators (capacity, information flow, adaptability) not just lagging ones (failures, deviations).
  • Create closed feedback loops for corrective actions—insisting every intervention be evaluated for impact on both compliance and adaptive capacity.

Scientific Quality Management and Adaptive Systems: No Contradiction

The tension between rigorous scientific quality management (QbD, process validation, risk management frameworks) and support for adaptation is a false dilemma. Indeed, genuine scientific quality management starts with humility: the recognition that our understanding of complex systems is always partial, our controls imperfect, and our frameworks provisional.

A falsifiable quality framework embeds learning and adaptation at its core—treating deviations as opportunities to test and refine models, rather than simply checkboxes to complete.

The best organizations are not those that experience the fewest deviations, but those that learn fastest from both expected and unexpected events, and apply this knowledge to strengthen both system structure and adaptive capacity.

Embracing Normal Work: Closing the Gap

Normal pharmaceutical manufacturing is not the story of perfect procedural compliance; it’s the story of people, working together to achieve quality goals under diverse, unpredictable, and evolving conditions. This is both more challenging—and more rewarding—than any plan prescribed solely by SOPs.

To truly move the needle on pharmaceutical quality, organizations must:

  • Embrace performance variability as evidence of adaptive capacity, not just risk.
  • Investigate for learning, not blame; study success, not just failure.
  • Design systems to support both structure and flexible adaptation—never sacrificing one entirely for the other.
  • Cultivate leadership that values humility, systems thinking, and experimental learning, creating a culture comfortable with complexity.

This approach will not be easy. It means questioning decades of compliance custom, organizational habit, and intellectual ease. But the payoff is immense: more resilient operations, fewer catastrophic surprises, and, above all, improved safety and efficacy for the patients who depend on our products.

The challenge—and the opportunity—facing pharmaceutical quality management is to evolve beyond compliance theater and malfunction thinking into a new era of resilience and organizational learning. Success lies not in the illusory comfort of perfectly executed procedures, but in the everyday adaptations, intelligent improvisation, and system-level capabilities that make those successes possible.

The call to action is clear: Investigate not just to explain what failed, but to understand how, and why, things so often go right. Protect, nurture, and enhance the adaptive capacities of your organization. In doing so, pharmaceutical quality can finally become more than an after-the-fact audit; it will become the creative, resilient capability that patients, regulators, and organizations genuinely want to hire.

Recent Podcast Appearance: Risk Revolution

I’m excited to share that I recently had the opportunity to appear on the Risk Revolution podcast, joining host Valerie Mulholland for what turned out to be a provocative and deeply engaging conversation about the future of pharmaceutical quality management.

The episode, titled “Quality Theatre to Quality Science – Jeremiah Genest’s Playbook,” aired on September 28, 2025, and dives into one of my core arguments: that quality systems should be designed to fail predictably so we can learn purposefully. This isn’t about celebrating failure; it’s about building systems intelligent enough to fail in ways that generate learning, rather than hiding weaknesses in the shadows until a catastrophic breakdown occurs.

Why This Conversation Matters

Valerie and I spent over an hour exploring what I call “intelligent failure”—a concept that challenges the feel-good metrics that dominate our industry dashboards. You know the ones I’m talking about: those green lights celebrating zero deviations that make everyone feel accomplished while potentially masking the unknowns lurking beneath the surface. As I argued in the episode, these metrics can hide systemic problems rather than prove actual control.

This discussion connects directly to themes I’ve been developing here on Investigations of a Dog, particularly my thoughts on the effectiveness paradox and the dangerous comfort of “nothing bad happened” thinking. The podcast gave me a chance to explore how zemblanity—the patterned recurrence of unfortunate events that we should have anticipated—manifests in quality systems that prioritize the appearance of control over genuine understanding.

The Perfect Platform for These Ideas

Risk Revolution proved to be the ideal venue for this conversation. Valerie brings over 25 years of hands-on experience across biopharmaceutical, pharmaceutical, medical device, and blood transfusion industries, but what sets her apart is her unique combination of practical expertise and cutting-edge research.

The podcast’s monthly format allows for the kind of deep, nuanced discussions that advance risk management maturity rather than recycling conference presentations. When I wrote about Valerie’s work on the GI Joe Bias, I noted how her emphasis on systematic interventions rather than individual awareness represents exactly the kind of sophisticated thinking our industry needs. This podcast appearance let us explore these concepts in real-time conversation.

What made the discussion particularly engaging was Valerie’s ability to challenge my thinking while building on it. Her research-backed insights into cognitive bias management created a perfect complement to my practical experience with system failures and investigation patterns. We explored how quality professionals—precisely because of our expertise—become vulnerable to specific blind spots that systematic design can address.

Looking Forward

This Risk Revolution appearance represents more than just a podcast interview—it’s part of a broader conversation about advancing pharmaceutical quality management beyond surface-level compliance toward genuine excellence. The episode includes references to my blog work, the Deming philosophy, and upcoming industry conferences where these ideas will continue to evolve.

If you’re interested in how quality systems can be designed for intelligent learning rather than elegant hiding, this conversation offers both provocative challenges and practical frameworks. Fair warning: you might never look at a green dashboard the same way again.

The episode is available now, and I’d love to hear your thoughts on how we might move from quality theatre toward quality science in your own organization.

Applying Jobs-to-Be-Done to Risk Management

In my recent exploration of the Jobs-to-Be-Done (JTBD) tool for process improvement, I examined how this customer-centric approach could revolutionize our understanding of deviation management. I want to extend that analysis to another fundamental challenge in pharmaceutical quality: risk management.

As we grapple with increasing regulatory complexity, accelerating technological change, and the persistent threat of risk blindness, most organizations remain trapped in what I call “compliance theater”—performing risk management activities that satisfy auditors but fail to build genuine organizational resilience. JTBD is a useful tool as we move beyond this theater toward risk management that actually creates value.

The Risk Management Jobs Users Actually Hire

When quality professionals, executives, and regulatory teams engage with risk management processes, what job are they really trying to accomplish? The answer reveals a profound disconnect between organizational intent and actual capability.

The Core Functional Job

“When facing uncertainty that could impact product quality, patient safety, or business continuity, I want to systematically understand and address potential threats, so I can make confident decisions and prevent surprise failures.”

This job statement immediately exposes the inadequacy of most risk management systems. They focus on documentation rather than understanding, assessment rather than decision enablement, and compliance rather than prevention.
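
For readers who like to see the grammar of these statements laid bare, here is a trivial sketch of the three-part JTBD structure (situation, motivation, desired outcome). The function is my own illustration, not part of the framework:

```python
def job_statement(situation: str, motivation: str, outcome: str) -> str:
    """Compose a JTBD-style job statement from its three parts."""
    return f"When {situation}, I want to {motivation}, so I can {outcome}."

print(job_statement(
    "facing uncertainty that could impact product quality, patient safety, or business continuity",
    "systematically understand and address potential threats",
    "make confident decisions and prevent surprise failures",
))
```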

The Consumption Jobs: The Hidden Workload

Risk management involves numerous consumption jobs that organizations often ignore:

  • Evaluation and Selection: “I need to choose risk assessment methodologies that match our operational complexity and regulatory environment.”
  • Implementation and Training: “I need to build organizational risk capability without creating bureaucratic overhead.”
  • Maintenance and Evolution: “I need to keep our risk approach current as our business and threat landscape evolves.”
  • Integration and Communication: “I need to ensure risk insights actually influence business decisions rather than gathering dust in risk registers.”

These consumption jobs represent the difference between risk management systems that organizations grudgingly tolerate and those they genuinely want to “hire.”

The Eight-Step Risk Management Job Map

Applying JTBD’s universal job map to risk management reveals where current approaches systematically fail (a small illustrative sketch of the full map follows the eight steps below):

1. Define: Establishing Risk Context

What users need: Clear understanding of what they’re assessing, why it matters, and what decisions the risk analysis will inform.

Current reality: Risk assessments often begin with template completion rather than context establishment, leading to generic analyses that don’t support actual decision-making.

2. Locate: Gathering Risk Intelligence

What users need: Access to historical data, subject matter expertise, external intelligence, and tacit knowledge about how things actually work.

Current reality: Risk teams typically work from documentation rather than engaging with operational reality, missing the pattern recognition and apprenticeship dividend that experienced practitioners possess.

3. Prepare: Creating Assessment Conditions

What users need: Diverse teams, psychological safety for honest risk discussions, and structured approaches that challenge rather than confirm existing assumptions.

Current reality: Risk assessments often involve homogeneous teams working through predetermined templates, perpetuating the GI Joe fallacy—believing that knowledge of risk frameworks prevents risky thinking.

4. Confirm: Validating Assessment Readiness

What users need: Confidence that they have sufficient information, appropriate expertise, and clear success criteria before proceeding.

Current reality: Risk assessments proceed regardless of information quality or team readiness, driven by schedule rather than preparation.

5. Execute: Conducting Risk Analysis

What users need: Systematic identification of risks, analysis of interconnections, scenario testing, and development of robust mitigation strategies.

Current reality: Risk analysis often becomes risk scoring—reducing complex phenomena to numerical ratings that provide false precision rather than genuine insight.

6. Monitor: Tracking Risk Reality

What users need: Early warning systems that detect emerging risks and validate the effectiveness of mitigation strategies.

Current reality: Risk monitoring typically involves periodic register updates rather than active intelligence gathering, missing the dynamic nature of risk evolution.

7. Modify: Adapting to New Information

What users need: Responsive adjustment of risk strategies based on monitoring feedback and changing conditions.

Current reality: Risk assessments often become static documents, updated only during scheduled reviews rather than when new information emerges.

8. Conclude: Capturing Risk Learning

What users need: Systematic capture of risk insights, pattern recognition, and knowledge transfer that builds organizational risk intelligence.

Current reality: Risk analysis conclusions focus on compliance closure rather than learning capture, missing opportunities to build the organizational memory that prevents risk blindness.
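
As a thought experiment, the eight steps can be treated as a checklist a team might use to audit its own risk process. The sketch below takes only the step names from the map above; everything else (the audit function, the yes/no scoring, the Python 3.9+ type hints) is an illustrative assumption, not a prescribed method:

```python
# The eight job-map steps from above; the audit structure itself is illustrative.
JOB_MAP_STEPS = [
    "Define",   # establish the decision the analysis will inform
    "Locate",   # gather data, expertise, and tacit operational knowledge
    "Prepare",  # assemble a diverse team with psychological safety
    "Confirm",  # check information quality and readiness before proceeding
    "Execute",  # analyze risks and interconnections, not just score them
    "Monitor",  # maintain early-warning signals, not periodic register edits
    "Modify",   # adjust strategy when new information arrives
    "Conclude", # capture learning and feed organizational memory
]

def audit_assessment(steps_satisfied: set[str]) -> dict[str, bool]:
    """Map each job-map step to whether a given assessment addressed it."""
    return {step: step in steps_satisfied for step in JOB_MAP_STEPS}

# Example: a template-driven assessment that jumped straight to scoring
result = audit_assessment({"Execute", "Conclude"})
gaps = [step for step, done in result.items() if not done]
print("Underserved steps:", ", ".join(gaps))
```

Run against a template-driven assessment like the one in the example, most of the map shows up as a gap, which is the point the eight steps above make in prose.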

The Emotional and Social Dimensions

Risk management involves profound emotional and social jobs that traditional approaches ignore:

  • Confidence: Risk practitioners want to feel genuinely confident that significant threats have been identified and addressed, not just that procedures have been followed.
  • Intellectual Satisfaction: Quality professionals are attracted to rigorous analysis and robust reasoning—risk management should engage their analytical capabilities, not reduce them to form completion.
  • Professional Credibility: Risk managers want to be perceived as strategic enablers rather than bureaucratic obstacles—as trusted advisors who help organizations navigate uncertainty rather than create administrative burden.
  • Organizational Trust: Executive teams want assurance that their risk management capabilities are genuinely protective, not merely compliant.

What’s Underserved: The Innovation Opportunities

JTBD analysis reveals four critical areas where current risk management approaches systematically underserve user needs:

Risk Intelligence

Current systems document known risks but fail to develop early warning capabilities, pattern recognition across multiple contexts, or predictive insights about emerging threats. Organizations need risk management that builds institutional awareness, not just institutional documentation.

Decision Enablement

Risk assessments should create confidence for strategic decisions, enable rapid assessment of time-sensitive opportunities, and provide scenario planning that prepares organizations for multiple futures. Instead, most risk management creates decision paralysis through endless analysis.

Organizational Capability

Effective risk management should build risk literacy across all levels, create cultural resilience that enables honest risk conversations, and develop adaptive capacity to respond when risks materialize. Current approaches often centralize risk thinking rather than distributing risk capability.

Stakeholder Trust

Risk management should enable transparent communication about threats and mitigation strategies, demonstrate competence in risk anticipation, and provide regulatory confidence in organizational capabilities. Too often, risk management creates opacity rather than transparency.

Canvas representation of the JTBD

Moving Beyond Compliance Theater

The JTBD framework helps us address a key challenge in risk management: many organizations place excessive emphasis on “table stakes” such as regulatory compliance and documentation requirements, while neglecting the underserved dimensions described above (risk intelligence, decision enablement, organizational capability, and stakeholder trust) that build genuine resilience.

This represents a classic case of process myopia—becoming so focused on risk management activities that we lose sight of the fundamental job those activities should accomplish. Organizations perfect their risk registers while remaining vulnerable to surprise failures, not because they lack risk management processes, but because those processes fail to serve the jobs users actually need accomplished.

Design Principles for User-Centered Risk Management

  • Context Over Templates: Begin risk analysis with clear understanding of decisions to be informed rather than forms to be completed.
  • Intelligence Over Documentation: Prioritize systems that build organizational awareness and pattern recognition rather than risk libraries.
  • Engagement Over Compliance: Create risk processes that attract rather than burden users, recognizing that effective risk management requires active intellectual participation.
  • Learning Over Closure: Structure risk activities to build institutional memory and capability rather than simply completing assessment cycles.
  • Integration Over Isolation: Ensure risk insights flow naturally into operational decisions rather than remaining in separate risk management systems (a brief sketch combining this principle with the first one follows this list).
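
One way to read the first and last principles together is to make the decision being informed a required part of the risk record, and to make the record’s insights land where that decision is actually made. The sketch below is a minimal illustration under those assumptions; the class, its fields, and the in-memory “decision log” are hypothetical, not a reference to any particular system:

```python
from dataclasses import dataclass, field

@dataclass
class RiskAssessment:
    """Illustrative record: it cannot exist without naming the decision it informs."""
    title: str
    decision_to_inform: str                         # Context Over Templates
    insights: list[str] = field(default_factory=list)

    def __post_init__(self):
        if not self.decision_to_inform.strip():
            raise ValueError("A risk assessment must name the decision it informs.")

    def publish_to(self, decision_log: list[dict]) -> None:
        """Integration Over Isolation: push insights into the decision-making record."""
        decision_log.append({
            "decision": self.decision_to_inform,
            "risk_insights": list(self.insights),
        })

# Example use
decision_log: list[dict] = []
ra = RiskAssessment(
    title="Alternate supplier qualification",
    decision_to_inform="Approve supplier B as a second source for excipient X",
    insights=["Lead-time risk drops; comparability data needed for grade variance"],
)
ra.publish_to(decision_log)
```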

Hiring Risk Management for Real Jobs

The most dangerous risk facing pharmaceutical organizations may be risk management systems that create false confidence while building no real capability. JTBD analysis reveals why: these systems optimize for regulatory approval rather than user needs, creating elaborate processes that nobody genuinely wants to “hire.”

True risk management begins with understanding what jobs users actually need accomplished: building confidence for difficult decisions, developing organizational intelligence about threats, creating resilience against surprise failures, and enabling rather than impeding business progress. Organizations that design risk management around these jobs will develop competitive advantages in an increasingly uncertain world.

The choice is clear: continue performing compliance theater, or build risk management systems that organizations genuinely want to hire. In a world where zemblanity—the tendency to encounter negative, foreseeable outcomes—threatens every quality system, only the latter approach offers genuine protection.

Risk management should not be something organizations endure. It should be something they actively seek because it makes them demonstrably better at navigating uncertainty and protecting what matters most.