The November 2025 FDA Warning Letter to Catalent Indiana, LLC reads like an autopsy report—a detailed dissection of how contamination hazards aren’t discovered but rather engineered into aseptic operations through a constellation of decisions that individually appear defensible yet collectively create what I’ve previously termed the “zemblanity field” in pharmaceutical quality. Section 2, addressing failures under 21 CFR 211.113(b), exposes contamination hazards that didn’t emerge from random misfortune but from deliberate choices about decontamination strategies, sampling methodologies, intervention protocols, and investigation rigor.
What makes this warning letter particularly instructive isn’t the presence of contamination events—every aseptic facility battles microbial ingress—but rather the systematic architectural failures that allowed contamination hazards to persist unrecognized, uninvestigated, and unmitigated despite multiple warning signals spanning more than 20 deviations and customer complaints. The FDA’s critique centers on three interconnected contamination hazard categories: VHP decontamination failures involving occluded surfaces, inadequate environmental monitoring methods that substituted convenience for detection capability, and intervention risk assessments that ignored documented contamination routes.
For those of us responsible for contamination control in aseptic manufacturing, this warning letter demands we ask uncomfortable questions: How many of our VHP cycles are validated against surfaces that remain functionally occluded? How often have we chosen contact plates over swabs because they’re faster, not because they’re more effective? When was the last time we terminated a media fill and treated it with the investigative rigor of a batch contamination event?
The Occluded Surface Problem: When Decontamination Becomes Theatre
The FDA’s identification of occluded surfaces as contamination sources during VHP decontamination represents a failure mode I’ve observed with troubling frequency across aseptic facilities. The fundamental physics are unambiguous: vaporized hydrogen peroxide achieves sporicidal efficacy through direct surface contact at validated concentration-time profiles. Any surface the vapor doesn’t contact—or contacts at insufficient concentration—remains a potential contamination reservoir regardless of cycle completion indicators showing “successful” decontamination.
The Catalent situation involved two distinct occluded surface scenarios, each revealing different architectural failures in contamination hazard assessment. First, equipment surfaces occluded during VHP decontamination that subsequently became contamination sources during atypical interventions involving equipment changes. The FDA noted that “the most probable root cause” of an environmental monitoring failure was equipment surfaces occluded during VHP decontamination, with contamination occurring during execution of an atypical intervention involving changes to components integral to stopper seating.
This finding exposes a conceptual error I frequently encounter: treating VHP decontamination as a universal solution that overcomes design deficiencies rather than as a validated process with specific performance boundaries. The Catalent facility’s own risk assessments advised against interventions that could disturb potentially occluded surfaces, yet these interventions continued—creating the precise contamination pathway their risk assessments identified as unacceptable.
The second occluded surface scenario involved wrapped components within the filling line where insufficient VHP exposure allowed potential contamination. The FDA cited “occluded surfaces on wrapped [components] within the [equipment] as the potential cause of contamination”. This represents a validation failure: if wrapping materials prevent adequate VHP penetration, either the wrapping must be eliminated, the decontamination method must change, or these surfaces must be treated through alternative validated processes.
The literature on VHP decontamination is explicit about occluded surface risks. As Sandle notes, surfaces must be “designed and installed so that operations, maintenance, and repairs can be performed outside the cleanroom” and, where that is unavoidable, all surfaces needing decontamination must be explicitly identified. The PIC/S guidance is similarly unambiguous: “Continuously occluded surfaces do not qualify for such trials as they cannot be exposed to the process and should have been eliminated”. Yet facilities continue to validate VHP cycles that demonstrate biological indicator kill on readily accessible flat coupons while ignoring the complex geometries, wrapped items, and recessed surfaces actually present in their filling environments.
What does a robust approach to occluded surface assessment look like? Based on the regulatory expectations and technical literature, facilities should:
Conduct comprehensive occluded surface mapping during design qualification. Every component introduced into VHP-decontaminated spaces must undergo geometric analysis to identify surfaces that may not receive adequate vapor exposure. This includes crevices, threaded connections, wrapped items, hollow spaces, and any surface shadowed by another object. The mapping should document not just that surfaces exist but their accessibility to vapor flow based on the specific VHP distribution characteristics of the equipment.
Validate VHP distribution using chemical and biological indicators placed on identified occluded surfaces. Flat coupon placement on readily accessible horizontal surfaces tells you nothing about vapor penetration into wrapped components or recessed geometries. Biological indicators should be positioned specifically where vapor exposure is questionable—inside wrapped items, within threaded connections, under equipment flanges, in dead-legs of transfer lines. If biological indicators in these locations don’t achieve the validated log reduction, the surfaces are occluded and require design modification or alternative decontamination methods.
Establish clear intervention protocols that distinguish between “sterile-to-sterile” and “potentially contaminated” surface contact. The Catalent finding reveals that atypical interventions involving equipment changes exposed the Grade A environment to surfaces not reliably exposed to VHP. Intervention risk assessments must explicitly categorize whether the intervention involves only VHP-validated surfaces or introduces components from potentially occluded areas. The latter category demands heightened controls: localized Grade A air protection, pre-intervention surface swabbing and disinfection, real-time environmental monitoring during the intervention, and post-intervention investigation if environmental monitoring shows any deviation.
Implement post-decontamination surface monitoring that targets historically occluded locations. If your facility has identified occluded surfaces that cannot be designed out, these become critical sampling locations for post-VHP environmental monitoring. Trending of these specific locations provides early detection of decontamination effectiveness degradation before contamination reaches product-contact surfaces.
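To make the mapping and post-decontamination monitoring concrete, here is a minimal Python sketch of an occluded-surface register. The structure, field names, and the 6-log threshold are illustrative assumptions for demonstration, not values drawn from the warning letter or from any specific VHP validation protocol.

```python
from dataclasses import dataclass

# Hypothetical register entry for a surface identified during occluded-surface mapping.
# Field names and the required log reduction are illustrative assumptions.
@dataclass
class SurfaceRecord:
    location: str                     # e.g. "stopper bowl, threaded connection"
    vapor_accessible: bool            # judged from geometry and the VHP distribution study
    bi_log_reduction: float | None    # biological indicator result at this location, if tested
    required_log_reduction: float = 6.0

    def is_occluded(self) -> bool:
        """Treat a surface as occluded if vapor access is doubtful or the BI placed
        there failed to reach the validated log reduction."""
        if not self.vapor_accessible:
            return True
        return self.bi_log_reduction is None or self.bi_log_reduction < self.required_log_reduction


def occluded_surfaces(register: list[SurfaceRecord]) -> list[SurfaceRecord]:
    """Surfaces requiring design modification, an alternative validated decontamination
    method, or designation as critical post-VHP monitoring locations."""
    return [s for s in register if s.is_occluded()]


if __name__ == "__main__":
    register = [
        SurfaceRecord("isolator wall, flat panel", True, 6.2),
        SurfaceRecord("wrapped component, inner wrap face", False, None),
        SurfaceRecord("stopper bowl, threaded connection", True, 4.8),
    ]
    for s in occluded_surfaces(register):
        print(f"OCCLUDED: {s.location}")
```

The point of keeping such a register machine-readable is that the same list can drive both the biological indicator placement plan and the post-decontamination sampling plan, rather than the two being maintained separately.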
The FDA’s remediation demand is appropriately comprehensive: “a review of VHP exposure to decontamination methods as well as permitted interventions, including a retrospective historical review of routine interventions and atypical interventions to determine their risks, a comprehensive identification of locations that are not reliably exposed to VHP decontamination (i.e., occluded surfaces), your plan to reduce occluded surfaces where feasible, review of currently permitted interventions and elimination of high-risk interventions entailing equipment manipulations during production campaigns that expose the ISO 5 environment to surfaces not exposed to a validated decontamination process, and redesign of any intervention that poses an unacceptable contamination risk”.
This remediation framework represents best practice for any aseptic facility using VHP decontamination. The occluded surface problem isn’t limited to Catalent—it’s an industry-wide vulnerability wherever VHP validation focuses on demonstrating sporicidal activity under ideal conditions rather than confirming adequate vapor contact across all surfaces within the validated space.
Contact Plates Versus Swabs: The Detection Capability Trade-Off
The FDA’s critique of Catalent’s environmental monitoring methodology exposes a decision I’ve challenged repeatedly throughout my career: the use of contact plates for sampling irregular, product-contact surfaces in Grade A environments. The technical limitations are well-established, yet contact plates persist because they’re faster and operationally simpler—prioritizing workflow convenience over contamination detection capability.
The specific Catalent deficiency involved sampling filling line components using “contact plate, sampling [surfaces] with one sweeping sampling motion.” The FDA identified two fundamental inadequacies: “With this method, you are unable to attribute contamination events to specific [locations]” and “your firm’s use of contact plates is not as effective as using swab methods”. These limitations aren’t novel discoveries—they’re inherent to contact plate methodology and have been documented in the microbiological literature for decades.
Contact plates—rigid agar surfaces pressed against the area to be sampled—were designed for flat, smooth surfaces where complete agar-to-surface contact can be achieved with uniform pressure. They perform adequately on stainless steel benchtops, isolator walls, and other horizontal surfaces. But filling line components—particularly those identified in the warning letter—present complex geometries: curved surfaces, corners, recesses, and irregular topographies where rigid agar cannot conform to achieve complete surface contact.
The microbial recovery implications are significant. When a contact plate fails to achieve complete surface contact, microorganisms in uncontacted areas remain unsampled. The result is a false-negative environmental monitoring reading that suggests contamination control while actual contamination persists undetected. Worse, the “sweeping sampling motion” described in the warning letter—moving a single contact plate across multiple locations—creates the additional problem the FDA identified: inability to attribute any recovered contamination to a specific surface. Was the contamination on the first component contacted? The third? Somewhere in between? This sampling approach provides data too imprecise for meaningful contamination source investigation.
The alternative—swab sampling—addresses both deficiencies. Swabs conform to irregular surfaces, accessing corners, recesses, and curved topographies that contact plates cannot reach. Swabs can be applied to specific, discrete locations, enabling precise attribution of any contamination recovered to a particular surface. The trade-off is operational: swab sampling requires more time, involves additional manipulative steps within Grade A environments, and demands different operator technique validation.
Yet the Catalent warning letter makes clear that this operational inconvenience doesn’t justify compromised detection capability for critical product-contact surfaces. The FDA’s expectation—acknowledged in Catalent’s response—is swab sampling “to replace use of contact plates to sample irregular surfaces”. This represents a fundamental shift from convenience-optimized to detection-optimized environmental monitoring.
What should a risk-based surface sampling strategy look like? The differentiation should be based on surface geometry and criticality:
Contact plates remain appropriate for flat, smooth, readily accessible surfaces where complete agar contact can be verified and where contamination risk is lower (Grade B floors, isolator walls, equipment external surfaces). The speed and simplicity advantages of contact plates justify their continued use in these applications.
Swab sampling should be mandatory for product-contact surfaces, irregular geometries, recessed areas, and any location where contact plate conformity is questionable. This includes filling needles, stopper bowls, vial transport mechanisms, crimping heads, and the specific equipment components cited in the Catalent letter. The additional time required for swab sampling is trivial compared to the contamination risk from inadequate monitoring.
Surface sampling protocols must specify the exact location sampled, not general equipment categories. Rather than “sample stopper bowl,” protocols should identify “internal rim of stopper bowl,” “external base of stopper bowl,” “stopper agitation mechanism interior surfaces.” This specificity enables contamination source attribution during investigations and ensures sampling actually reaches the highest-risk surfaces.
Swab technique must be validated to ensure consistent recovery from target surfaces. Simply switching from contact plates to swabs doesn’t guarantee improved detection unless swab technique—pressure applied, surface area contacted, swab saturation, transfer to growth media—is standardized and demonstrated to achieve adequate microbial recovery from the specific materials and geometries being sampled.
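The risk-based differentiation above can be expressed as a simple decision rule. The sketch below is illustrative only; the attributes and the classification logic are assumptions for demonstration, not a validated sampling taxonomy.

```python
from dataclasses import dataclass

@dataclass
class SamplingSite:
    # Illustrative attributes for a monitored surface; not a validated taxonomy.
    name: str                  # discrete location, e.g. "stopper bowl, internal rim"
    product_contact: bool
    irregular_geometry: bool   # curves, recesses, threads, corners
    grade: str                 # "A", "B", ... (context only, not part of the rule here)

def select_method(site: SamplingSite) -> str:
    """Choose swab vs. contact plate along the lines the section describes:
    swabs for product-contact or irregular surfaces, contact plates only for
    flat, smooth, lower-risk surfaces where full agar contact is achievable."""
    if site.product_contact or site.irregular_geometry:
        return "swab (single discrete location, validated recovery technique)"
    return "contact plate (verify full agar-to-surface contact)"

sites = [
    SamplingSite("stopper bowl, internal rim", True, True, "A"),
    SamplingSite("Grade B floor, position 7", False, False, "B"),
]
for s in sites:
    print(f"{s.name}: {select_method(s)}")
```

Note that each `SamplingSite` names a discrete location, mirroring the protocol-specificity point above: the data model itself prevents the “one sweeping motion across several components” ambiguity.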
The EU GMP Annex 1 and FDA guidance documents emphasize detection capability over convenience in environmental monitoring. The expectation isn’t perfect contamination prevention—that’s impossible in aseptic processing—but rather monitoring systems sensitive enough to detect contamination events when they occur, enabling investigation and corrective action before product impact. Contact plates on irregular surfaces fail this standard by design, not because of operator error or inadequate validation but because the fundamental methodology cannot access the surfaces requiring monitoring.
The Intervention Paradox: When Risk Assessments Identify Hazards But Operations Ignore Them
Perhaps the most troubling element of the Catalent contamination hazards section isn’t the presence of occluded surfaces or inadequate sampling methods but rather the intervention management failure that reveals a disconnect between risk assessment and operational decision-making. Catalent’s risk assessments explicitly “advised against interventions that can disturb potentially occluded surfaces,” yet these high-risk interventions continued during production campaigns.
This represents what I’ve termed “investigation theatre” in previous posts—creating the superficial appearance of risk-based decision-making while actual operations proceed according to production convenience rather than contamination risk mitigation. The risk assessment identified the hazard. The environmental monitoring data confirmed the hazard when contamination occurred during the intervention. Yet the intervention continued as an accepted operational practice.
The specific intervention involved equipment changes to components “integral to stopper seating in the [filling line]”. These components operate at the critical interface between the sterile stopper and the vial—precisely the location where any contamination poses direct product impact risk. The intervention occurred during production campaigns rather than between campaigns when comprehensive decontamination and validation could occur. The intervention involved surfaces potentially occluded during VHP decontamination, meaning their microbiological state was unknown when introduced into the Grade A filling environment.
Every element of this scenario screams “unacceptable contamination risk,” yet it persisted as accepted practice until FDA inspection. How does this happen? Based on my experience across multiple aseptic facilities, the failure mode follows a predictable pattern:
Production scheduling drives intervention timing rather than contamination risk assessment. Stopping a campaign for equipment maintenance creates schedule disruption, yield loss, and capacity constraints. The pressure to maintain campaign continuity overwhelms contamination risk considerations that appear theoretical compared to the immediate, quantifiable production impact.
Risk assessments become compliance artifacts disconnected from operational decision-making. The quality unit conducts a risk assessment, documents that certain interventions pose unacceptable contamination risk, and files the assessment. But when production encounters the situation requiring that intervention, the actual decision-making process references production need, equipment availability, and batch schedules—not the risk assessment that identified the intervention as high-risk.
Interventions become “normalized deviance”—accepted operational practices despite documented risks. After performing a high-risk intervention successfully (meaning without detected contamination) multiple times, it transitions from “high-risk intervention requiring exceptional controls” to “routine intervention” in operational thinking. The absence of detected contamination gets inverted into evidence that the intervention isn’t actually high-risk.
Environmental monitoring provides false assurance when contamination goes undetected. If a high-risk intervention occurs and subsequent environmental monitoring shows no contamination, operations interprets this as validation that the intervention is acceptable. But as discussed in the contact plate section, inadequate sampling methodology may fail to detect contamination that actually occurred. The absence of detected contamination becomes “proof” that contamination didn’t occur, reinforcing the normalization of high-risk interventions.
The EU GMP Annex 1 requirements for intervention management represent regulatory recognition of these failure modes. Annex 1 Section 8.16 requires “the list of interventions evaluated via risk analysis” and Section 9.36 requires that aseptic process simulations include “interventions and associated risks”. The framework is explicit: identify interventions, assess their contamination risk, validate that operators can perform them aseptically through media fills, and eliminate interventions that cannot be performed without unacceptable contamination risk.
What does robust intervention risk management look like in practice?
Categorize interventions by contamination risk based on specific, documented criteria. The categorization should consider: surfaces contacted (sterile-to-sterile vs. potentially contaminated), duration of exposure, proximity to open product, operator actions required, first air protection feasibility, and frequency. This creates a risk hierarchy that enables differentiated control strategies rather than treating all interventions equivalently.
Establish clear decision authorities for different intervention risk levels. Routine interventions (low contamination risk, validated through media fills, performed regularly) can proceed under operator judgment following standard procedures. High-risk interventions (those involving occluded surfaces, extended exposure, or proximity to open product) should require quality unit pre-approval including documented risk assessment and enhanced controls specification. Interventions identified as posing unacceptable risk should be prohibited until equipment redesign or process modification eliminates the contamination hazard.
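A minimal sketch of the classification and decision-authority gate described in the two preceding paragraphs might look like the following. The criteria, risk levels, and logic are simplified assumptions; a real program would weight surfaces contacted, duration, first-air protection, and frequency against documented acceptance criteria.

```python
from enum import Enum

class RiskLevel(Enum):
    ROUTINE = "routine"            # validated through media fills, proceeds per SOP
    HIGH = "high"                  # quality unit pre-approval plus enhanced controls
    UNACCEPTABLE = "unacceptable"  # prohibited pending redesign

def classify_intervention(
    touches_unvalidated_surfaces: bool,   # e.g. potentially occluded during VHP
    near_open_product: bool,
    extended_exposure: bool,
    media_fill_validated: bool,
) -> RiskLevel:
    """Toy classification of the criteria listed above."""
    if touches_unvalidated_surfaces and near_open_product:
        return RiskLevel.UNACCEPTABLE
    if touches_unvalidated_surfaces or near_open_product or extended_exposure:
        return RiskLevel.HIGH
    # Conservatively treat anything not yet simulated in media fills as high risk.
    return RiskLevel.ROUTINE if media_fill_validated else RiskLevel.HIGH

def may_proceed(level: RiskLevel, qa_preapproved: bool) -> bool:
    """Decision-authority gate: routine proceeds per SOP, high risk needs documented
    quality unit pre-approval, unacceptable risk never proceeds."""
    if level is RiskLevel.UNACCEPTABLE:
        return False
    if level is RiskLevel.HIGH:
        return qa_preapproved
    return True
```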
Validate intervention execution through media fills that specifically simulate the intervention’s contamination challenges. Generic media fills demonstrating overall aseptic processing capability don’t validate specific high-risk interventions. If your risk assessment identifies a particular intervention as posing contamination risk, your media fill program must include that intervention, performed by the operators who will execute it, under the conditions (campaign timing, equipment state, environmental conditions) where it will actually occur.
Implement intervention-specific environmental monitoring that targets the contamination pathways identified in risk assessments. If the risk assessment identifies that an intervention may expose product to surfaces not reliably decontaminated, environmental monitoring immediately following that intervention should specifically sample those surfaces and adjacent areas. Trending this intervention-specific monitoring data separately from routine environmental monitoring enables detection of intervention-associated contamination patterns.
Conduct post-intervention investigations when environmental monitoring shows any deviation. The Catalent warning letter describes an environmental monitoring failure whose “most probable root cause” was an atypical intervention involving equipment changes. This temporal association between intervention and contamination should trigger automatic investigation even if environmental monitoring results remain within action levels. The investigation should assess whether intervention protocols require modification or whether the intervention should be eliminated.
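One way to operationalize intervention-specific monitoring and the investigation trigger is to tag environmental monitoring samples with the intervention they follow and flag any recovery for investigation even when counts stay within action levels. The sketch below is a hypothetical illustration; the data model and fields are assumptions, not any facility’s actual EM program.

```python
from dataclasses import dataclass

@dataclass
class EMSample:
    location: str
    cfu: int
    action_level: int
    intervention_id: str | None = None  # set when the sample follows a specific intervention

def investigation_triggers(samples: list[EMSample]) -> list[str]:
    """Flag intervention-associated recoveries for investigation even when the
    count stays inside the action level, per the recommendation above."""
    reasons = []
    for s in samples:
        if s.cfu >= s.action_level:
            reasons.append(f"{s.location}: action level reached ({s.cfu} CFU)")
        elif s.intervention_id and s.cfu > 0:
            reasons.append(
                f"{s.location}: recovery ({s.cfu} CFU) temporally associated with "
                f"intervention {s.intervention_id}; investigate despite being within limits"
            )
    return reasons
```

Trending the tagged samples separately from routine EM data is what makes intervention-associated contamination patterns visible at all.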
The FDA’s remediation demand addresses this gap directly: “review of currently permitted interventions and elimination of high-risk interventions entailing equipment manipulations during production campaigns that expose the ISO 5 environment to surfaces not exposed to a validated decontamination process”. This requirement forces facilities to confront the intervention paradox: if your risk assessment identifies an intervention as high-risk, you cannot simultaneously permit it as routine operational practice. Either modify the intervention to reduce risk, validate enhanced controls that mitigate the risk, or eliminate the intervention entirely.
Media Fill Terminations: When Failures Become Invisible
The Catalent warning letter’s discussion of media fill terminations exposes an investigation failure mode that reveals deeper quality system inadequacies. Since November 2023, Catalent has terminated more than five media fill batches representing the filling line. Following two terminations for stoppering issues and extrinsic particle contamination, the facility “failed to open a deviation or an investigation at the time of each failure, as required by your SOPs”.
Read that again. Media fills—the fundamental aseptic processing validation tool, the simulation specifically designed to challenge contamination control—were terminated due to failures, and no deviation was opened, no investigation initiated. The failures simply disappeared from the quality system, becoming invisible until FDA inspection revealed their existence.
The rationalization is predictable: “there was no impact to the SISPQ (Safety, Identity, Strength, Purity, Quality) of the terminated media batches or to any customer batches” because “these media fills were re-executed successfully with passing results”. This reasoning exposes a fundamental misunderstanding of media fill purpose that I’ve encountered with troubling frequency across the industry.
A media fill is not a “test” that you pass or fail with product consequences. It is a simulation—a deliberate challenge to your aseptic processing capability using growth medium instead of product specifically to identify contamination risks without product impact. When a media fill is terminated due to a processing failure, that termination is itself the critical finding. The termination reveals that your process is vulnerable to exactly the failure mode that caused termination: stoppering problems that could occur during commercial filling, extrinsic particles that could contaminate product.
The FDA’s response is appropriately uncompromising: “You do not provide the investigations with a root cause that justifies aborting and re-executing the media fills, nor do you provide the corrective actions taken for each terminated media fill to ensure effective CAPAs were promptly initiated”. The regulatory expectation is clear: media fill terminations require investigation identical in rigor to commercial batch failures. Why did the stoppering issue occur? What equipment, material, or operator factors contributed? How do we prevent recurrence? What commercial batches may have experienced similar failures that went undetected?
The re-execution logic is particularly insidious. By immediately re-running the media fill and achieving passing results, Catalent created the appearance of successful validation while ignoring the process vulnerability revealed by the termination. The successful re-execution proved only that under ideal conditions—now with heightened operator awareness following the initial failure—the process could be executed successfully. It provided no assurance that commercial operations, without that heightened awareness and under the same conditions that caused the initial termination, wouldn’t experience identical failures.
What should media fill termination management look like?
Treat every media fill termination as a critical deviation requiring immediate investigation initiation. The investigation should identify the root cause of the termination, assess whether the failure mode could occur during commercial manufacturing, evaluate whether previous commercial batches may have experienced similar failures, and establish corrective actions that prevent recurrence. This investigation must occur before re-execution, not instead of investigation.
Require quality unit approval before media fill re-execution. The approval should be based on documented investigation findings demonstrating that the termination cause is understood, corrective actions are implemented, and re-execution will validate process capability under conditions that include the corrective actions. Re-execution without investigation approval perpetuates the “keep running until we get a pass” mentality that defeats media fill purpose.
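The re-execution gate can be reduced to a simple check: no deviation record, no documented root cause, no CAPA, no quality approval means no re-execution. The following sketch assumes a hypothetical termination record; the field names are illustrative, not drawn from any actual quality system.

```python
from dataclasses import dataclass

@dataclass
class MediaFillTermination:
    media_fill_id: str
    deviation_id: str | None      # every termination must open a deviation
    root_cause_documented: bool
    capa_implemented: bool
    qa_approved_reexecution: bool

def may_reexecute(t: MediaFillTermination) -> bool:
    """Gate re-execution on the sequence described above: deviation opened,
    root cause identified, CAPA in place, quality unit approval documented."""
    return (
        t.deviation_id is not None
        and t.root_cause_documented
        and t.capa_implemented
        and t.qa_approved_reexecution
    )
```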
Implement media fill termination trending as a critical quality indicator. A facility terminating “more than five media fill batches” over roughly two years should recognize this as a signal of fundamental process capability problems, not as a series of unrelated events requiring re-execution. Trending should identify common factors: specific operators, equipment states, intervention types, campaign timing.
Ensure deviation tracking systems cannot exclude media fill terminations. The Catalent situation arose partly because “you failed to initiate a deviation record to capture the lack of an investigation for each of the terminated media fills, resulting in an undercounting of the deviations”. Quality metrics that exclude media fill terminations from deviation totals create perverse incentives to avoid formal deviation documentation, rendering media fill findings invisible to quality system oversight.
The broader issue extends beyond media fill terminations to how aseptic processing validation integrates with quality systems. Media fills should function as early warning indicators—detecting aseptic processing vulnerabilities before product impact occurs. But this detection value requires that findings from media fills drive investigations, corrective actions, and process improvements with the same rigor as commercial batch deviations. When media fill failures can be erased through re-execution without investigation, the entire validation framework becomes performative rather than protective.
The Stopper Supplier Qualification Failure: Accepting Contamination at the Source
The stopper contamination issues discussed throughout the warning letter—mammalian hair found in or around stopper regions of vials from nearly 20 batches across multiple products—reveal a supplier qualification and incoming inspection failure that compounds the contamination hazards already discussed. The FDA’s critique focuses on Catalent’s “inappropriate reliance on pre-shipment samples (tailgate samples)” and failure to implement “enhanced or comparative sampling of stoppers from your other suppliers”.
The pre-shipment or “tailgate” sample approach represents a fundamental violation of GMP sampling principles. Under this approach, the stopper supplier—not Catalent—collected samples from lots prior to shipment and sent these samples directly to Catalent for quality testing. Catalent then made accept/reject decisions for incoming stopper lots based on testing of supplier-selected samples that never passed through Catalent’s receiving or storage processes.
Why does this matter? Because representative sampling requires that samples be selected from the material population actually received by the facility, stored under facility conditions, and handled through facility processes. Supplier-selected pre-shipment samples bypass every opportunity to detect contamination introduced during shipping, storage transitions, or handling. They enable a supplier to selectively sample from cleaner portions of production lots while shipping potentially contaminated material in the same lot to the customer.
The FDA guidance on this issue is explicit and has been for decades: samples for quality attribute testing “are to be taken at your facility from containers after receipt to ensure they are representative of the components in question”. This isn’t a new expectation emerging from enhanced regulatory scrutiny—it’s a baseline GMP requirement that Catalent systematically violated through reliance on tailgate samples.
But the tailgate sample issue represents only one element of broader supplier qualification failures. The warning letter notes that “while stoppers from [one supplier] were the primary source of extrinsic particles, they were not the only source of foreign matter.” Yet Catalent implemented a “limited, enhanced sampling strategy for one of your suppliers” while failing to “increase sampling oversight” for other suppliers. This selective enhancement—focusing remediation only on the most problematic supplier while ignoring systemic contamination risks across the stopper supply base—predictably failed to resolve ongoing contamination issues.
What should stopper supplier qualification and incoming inspection look like for aseptic filling operations?
Eliminate pre-shipment or tailgate sampling entirely. All quality testing must be conducted on samples taken from received lots, stored in facility conditions, and selected using documented random sampling procedures. If suppliers require pre-shipment testing for their internal quality release, that’s their process requirement—it doesn’t substitute for the purchaser’s independent incoming inspection using facility-sampled material.
Implement risk-based incoming inspection that intensifies sampling when contamination history indicates elevated risk. The warning letter notes that Catalent recognized stoppers as “a possible contributing factor for contamination with mammalian hairs” in July 2024 but didn’t implement enhanced sampling until May 2025—a ten-month delay. The inspection enhancement should be automatic and immediate when contamination events implicate incoming materials. The sampling intensity should remain elevated until trending data demonstrates sustained contamination reduction across multiple lots.
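The automatic, sustained escalation described above amounts to a small state machine: escalate the moment a contamination event implicates the material, and de-escalate only after a demonstrated clean trend. The sketch below uses an illustrative threshold of ten consecutive clean lots; that number is an assumption, not a regulatory requirement.

```python
def next_sampling_level(
    current_level: str,             # "normal" or "tightened"
    new_contamination_event: bool,  # contamination event implicating this material
    consecutive_clean_lots: int,
    clean_lots_required: int = 10,  # illustrative threshold, not a regulatory number
) -> str:
    """Escalate immediately on any contamination event implicating the material;
    de-escalate only after sustained clean trending across multiple received lots."""
    if new_contamination_event:
        return "tightened"
    if current_level == "tightened" and consecutive_clean_lots >= clean_lots_required:
        return "normal"
    return current_level
```

The ten-month delay cited in the warning letter is exactly what this kind of rule prevents: the escalation decision is made by the trigger, not by a later risk review.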
Apply visual inspection with reject criteria specific to the defect types that create product contamination risk. Generic visual inspection looking for general “defects” fails to detect the specific contamination types—embedded hair, extrinsic particles, material fragments—that create sterile product risks. Inspection protocols must specify mammalian hair, fiber contamination, and particulate matter as reject criteria with sensitivity adequate to detect single-particle contamination in sampled stoppers.
Require supplier process changes—not just enhanced sampling—when contamination trends indicate process capability problems. The warning letter acknowledges Catalent “worked with your suppliers to reduce the likelihood of mammalian hair contamination events” but notes that despite these efforts, “you continued to receive complaints from customers who observed mammalian hair contamination in drug products they received from you”. Enhanced sampling detects contamination; it doesn’t prevent it. Suppliers demonstrating persistent contamination require process audits, environmental control improvements, and validated contamination reduction demonstrated through process capability studies—not just promises to improve quality.
Implement finished product visual inspection with heightened sensitivity for products using stoppers from suppliers with contamination history. The FDA notes that Catalent indicated “future batches found during visual inspection of finished drug products would undergo a re-inspection followed by tightened acceptable quality limit to ensure defective units would be removed” but didn’t provide the re-inspection procedure. This two-stage inspection approach—initial inspection followed by re-inspection with enhanced criteria for lots from high-risk suppliers—provides additional contamination detection but must be validated to demonstrate adequate defect removal.
The broader lesson extends beyond stoppers to supplier qualification for any component used in sterile manufacturing. Components introduce contamination risks—microbial bioburden, particulate matter, chemical residues—that cannot be fully mitigated through end-product testing. Supplier qualification must function as a contamination prevention tool, ensuring that materials entering aseptic operations meet microbiological and particulate quality standards appropriate for their role in maintaining sterility. Reliance on tailgate samples, delayed sampling enhancement, and acceptance of persistent supplier contamination all represent failures to recognize suppliers as critical contamination control points requiring rigorous qualification and oversight.
The Systemic Pattern: From Contamination Hazards to Quality System Architecture
Stepping back from individual contamination hazards—occluded surfaces, inadequate sampling, high-risk interventions, media fill terminations, supplier qualification failures—a systemic pattern emerges that connects this warning letter to the broader zemblanity framework I’ve explored in previous posts. These aren’t independent, unrelated deficiencies that coincidentally occurred at the same facility. They represent interconnected architectural failures in how the quality system approaches contamination control.
The pattern reveals itself through three consistent characteristics:
Detection systems optimized for convenience rather than capability. Contact plates instead of swabs for irregular surfaces. Pre-shipment samples instead of facility-based incoming inspection. Generic visual inspection instead of defect-specific contamination screening. Each choice prioritizes operational ease and workflow efficiency over contamination detection sensitivity. The result is a quality system that generates reassuring data—passing environmental monitoring, acceptable incoming inspection results, successful visual inspection—while actual contamination persists undetected.
Risk assessments that identify hazards without preventing their occurrence. Catalent’s risk assessments advised against interventions disturbing potentially occluded surfaces, yet these interventions continued. The facility recognized stoppers as contamination sources in July 2024 but delayed enhanced sampling until May 2025. Media fill terminations revealed aseptic processing vulnerabilities but triggered re-execution rather than investigation. Risk identification became separated from risk mitigation—the assessment process functioned as compliance theatre rather than decision-making input.
Investigation systems that erase failures rather than learn from them. Media fill terminations occurred without deviation initiation. Mammalian hair contamination events were investigated individually without recognizing the trend across 20+ deviations. Root cause investigations concluded “no product impact” based on passing sterility tests rather than addressing the contamination source enabling future events. The investigation framework optimized for batch release justification rather than contamination prevention.
These patterns don’t emerge from incompetent quality professionals or inadequate resource allocation. They emerge from quality system design choices that prioritize production efficiency, workflow continuity, and batch release over contamination detection, investigation rigor, and source elimination. The system delivers what it was designed to deliver: maximum throughput with minimum disruption. It fails to deliver what patients require: contamination control capable of detecting and eliminating sterility risks before product impact.
Recommendations: Building Contamination Hazard Detection Into System Architecture
What does effective contamination hazard management look like at the quality system architecture level? Based on the Catalent failures and broader industry patterns, several principles should guide aseptic operations:
Design decontamination validation around worst-case geometries, not ideal conditions. VHP validation using flat coupons on horizontal surfaces tells you nothing about vapor penetration into the complex geometries, wrapped components, and recessed surfaces actually present in your filling line. Biological indicator placement should target occluded surfaces specifically—if you can’t achieve validated kill on these locations, they’re contamination hazards requiring design modification or alternative decontamination methods.
Select environmental monitoring methods based on detection capability for the surfaces and conditions actually requiring monitoring. Contact plates are adequate for flat, smooth surfaces. They’re inadequate for irregular product-contact surfaces, recessed areas, and complex geometries. Swab sampling takes more time but provides contamination detection capability that contact plates cannot match. The operational convenience sacrifice is trivial compared to the contamination risk from monitoring methods incapable of detecting contamination when it occurs.
Establish intervention risk classification with decision authorities proportional to contamination risk. Routine low-risk interventions validated through media fills can proceed under operator judgment. High-risk interventions—those involving occluded surfaces, extended exposure, or proximity to open product—require quality unit pre-approval with documented enhanced controls. Interventions identified as posing unacceptable risk should be prohibited pending equipment redesign.
Treat media fill terminations as critical deviations requiring investigation before re-execution. The termination reveals process vulnerability—the investigation must identify root cause, assess commercial batch risk, and establish corrective actions before validation continues. Re-execution without investigation perpetuates the failures that caused termination.
Implement supplier qualification with facility-based sampling, contamination-specific inspection criteria, and automatic sampling enhancement when contamination trends emerge. Tailgate samples cannot provide representative material assessment. Visual inspection must target the specific contamination types—mammalian hair, particulate matter, material fragments—that create product risks. Enhanced sampling should be automatic and sustained when contamination history indicates elevated risk.
Build investigation systems that learn from contamination events rather than erasing them through re-execution or “no product impact” conclusions. Contamination events represent failures in contamination control regardless of whether subsequent testing shows product remains within specification. The investigation purpose is preventing recurrence, not justifying release.
The FDA’s comprehensive remediation demands represent what quality system architecture should look like: independent assessment of investigation capability, CAPA effectiveness evaluation, contamination hazard risk assessment covering material flows and equipment placement, detailed remediation with specific improvements, and ongoing management oversight throughout the manufacturing lifecycle.
The Contamination Control Strategy as Living System
The Catalent warning letter’s contamination hazards section serves as a case study in how quality systems can simultaneously maintain surface-level compliance while allowing fundamental contamination control failures to persist. The facility conducted VHP decontamination cycles, performed environmental monitoring, executed media fills, and inspected incoming materials—checking every compliance box. Yet contamination hazards proliferated because these activities optimized for operational convenience and batch release justification rather than contamination detection and source elimination.
The EU GMP Annex 1 Contamination Control Strategy requirement represents regulatory recognition that contamination control cannot be achieved through isolated compliance activities. It requires integrated systems where facility design, decontamination processes, environmental monitoring, intervention protocols, material qualification, and investigation practices function cohesively to detect, investigate, and eliminate contamination sources. The Catalent failures reveal what happens when these elements remain disconnected: decontamination cycles that don’t reach occluded surfaces, monitoring that can’t detect contamination on irregular geometries, interventions that proceed despite identified risks, and investigations that erase failures through re-execution.
For those of us responsible for contamination control in aseptic manufacturing, the question isn’t whether our facilities face similar vulnerabilities—they do. The question is whether our quality systems are architected to detect these vulnerabilities before regulators discover them. Are your VHP validations addressing actual occluded surfaces or ideal flat coupons? Are you using contact plates because they detect contamination effectively or because they’re operationally convenient? Do your intervention protocols prevent the high-risk activities your risk assessments identify? When media fills terminate, do investigations occur before re-execution?
The Catalent warning letter provides a diagnostic framework for assessing contamination hazard management. Use it. Map your own decontamination validation against the occluded surface criteria. Evaluate your environmental monitoring method selection against detection capability requirements. Review intervention protocols for alignment with risk assessments. Examine media fill termination handling for investigation rigor. Assess supplier qualification for facility-based sampling and contamination-specific inspection.
The contamination hazards are already present in your aseptic operations. The question is whether your quality system architecture can detect them.
On August 7, 2025, FDA Commissioner Marty Makary announced a program that, on its surface, appears to be a straightforward effort to strengthen domestic pharmaceutical manufacturing. The FDA PreCheck initiative promises “regulatory predictability” and “streamlined review” for companies building new U.S. drug manufacturing facilities. It arrives wrapped in the language of national security—reducing dependence on foreign manufacturing, securing critical supply chains, ensuring Americans have access to domestically-produced medicines.
This is the story the press release tells.
But if you read PreCheck through the lens of falsifiable quality systems, a different narrative emerges. PreCheck is not merely an economic incentive program or a supply chain security measure. It is, more fundamentally, a confession.
It is the FDA admitting that the current Pre-Approval Inspection (PAI) and Pre-License Inspection (PLI) model—the high-stakes, eleventh-hour facility audit conducted weeks before the PDUFA date—is a profoundly inefficient mechanism for establishing trust. It is an acknowledgment that evaluating a facility’s “GMP compliance” only in the context of a specific product application, only after the facility is built, only when the approval clock is ticking, creates a system where failures are discovered at the moment when corrections are most expensive and most disruptive.
PreCheck proposes, instead, that the FDA should evaluate facilities earlier, more frequently, and independent of the product approval timeline. It proposes that manufacturers should be able to earn regulatory confidence in their facility design (Phase 1: Facility Readiness) before they ever file a product application, and that this confidence should carry forward into the application review (Phase 2: CMC streamlining).
What is revolutionary—at least for the FDA—is the implicit admission that a manufacturing facility is not a binary state (compliant/non-compliant) evaluated at a single moment in time, but rather a developmental system that passes through stages of maturity, and that regulatory oversight should be calibrated to those stages.
This is not a cheerleading piece for PreCheck. It is an analysis of what PreCheck reveals about the epistemology of regulatory inspection, and a call for a more explicit, more testable framework for what it means for a facility to be “ready.” I also have concerns about the FDA’s ability to carry this out, and about the dangers of ongoing regulatory capture, though I won’t cover those here.
Anatomy of PreCheck—What the Program Actually Proposes
The Two-Phase Structure
PreCheck is built on two complementary phases:
Phase 1: Facility Readiness

This phase focuses on early engagement between the manufacturer and the FDA during the facility’s design, construction, and pre-production stages. The manufacturer is encouraged—though not required, as the program is voluntary—to submit a Type V Drug Master File (DMF) containing:
Site operations layout and description
Pharmaceutical Quality System (PQS) elements
Quality Management Maturity (QMM) practices
Equipment specifications and process flow diagrams
This Type V DMF serves as a “living document” that can be incorporated by reference into future drug applications. The FDA will review this DMF and provide feedback on facility design, helping to identify potential compliance issues before construction is complete.
Michael Kopcha, Director of the FDA’s Office of Pharmaceutical Quality (OPQ), clarified at the September 30 public meeting that if a facility successfully completes the Facility Readiness Phase, an inspection may not be necessary when a product application is later filed.
This is the core innovation: decoupling facility assessment from product application.
Phase 2: Application Submission

Once a product application (NDA, ANDA, or BLA) is filed, the second phase focuses on streamlining the Chemistry, Manufacturing, and Controls (CMC) section of the application. The FDA offers:
Pre-application meetings
Early feedback on CMC data needs
Facility readiness and inspection planning discussions
Because the facility has already been reviewed in Phase 1, the CMC review can proceed with greater confidence that the manufacturing site is capable of producing the product as described in the application.
Importantly, Kopcha also clarified that only the CMC portion of the review is expedited—clinical and non-clinical sections follow the usual timeline. This is a critical limitation that industry stakeholders noted with some frustration, as it means PreCheck does not shorten the overall approval timeline as much as initially hoped.
What PreCheck Is Not
To understand what PreCheck offers, it is equally important to understand what it does not offer:
It is not a fast-track program. PreCheck does not provide priority review or accelerated approval pathways. It is a facility-focused engagement model, not a product-focused expedited review.
It is not a GMP certificate. Unlike the European system, where facilities can obtain a GMP certificate independent of any product application, PreCheck still requires a product application to trigger Phase 2. The Facility Readiness Phase (Phase 1) provides early engagement, but does not result in a standalone “facility approval” that can be referenced by multiple products or multiple sponsors.
It is not mandatory. PreCheck is voluntary. Manufacturers can continue to follow the traditional PAI/PLI pathway if they prefer.
It does not apply to existing facilities (yet). PreCheck is designed for new domestic manufacturing facilities. Industry stakeholders have requested expansion to include existing facility expansions and retrofits, but the FDA has not committed to this.
It does not decouple facility inspections from product approvals. Despite industry’s strong push for this—Big Pharma executives from Eli Lilly, Merck, and others explicitly requested at the public meeting that the FDA adopt the EMA model of decoupling GMP inspections from product applications—the FDA has not agreed to this. Phase 1 provides early feedback, but Phase 2 still ties the facility assessment to a specific product application.
The Type V DMF as the Backbone of PreCheck
The Type V Drug Master File is the operational mechanism through which PreCheck functions.
Historically, Type V DMFs have been a catch-all category for “FDA-accepted reference information” that doesn’t fit into the other DMF types (Type II for drug substances, Type III for packaging, Type IV for excipients). They have been used primarily for device constituent parts in combination products.
PreCheck repurposes the Type V DMF as a facility-centric repository. Instead of focusing on a material or a component, the Type V DMF in the PreCheck context contains:
Equipment and utilities: Specifications, qualification status, maintenance programs
The idea is that this DMF becomes a reusable asset. If a manufacturer builds a facility and completes the PreCheck Facility Readiness Phase, that facility’s Type V DMF can be referenced by multiple product applications from the same sponsor. This reduces redundant submissions and allows the FDA to build institutional knowledge about a facility over time.
However—and this is where the limitations become apparent—the Type V DMF is sponsor-specific. If the facility is a Contract Manufacturing Organization (CMO), the FDA has not clarified how the DMF ownership works or whether multiple API sponsors using the same CMO can leverage the same facility DMF. Industry stakeholders raised this as a significant concern at the public meeting, noting that CMOs account for approximately 50% of all facility-related CRLs.
The Type V DMF vs. Site Master File: Convergent Evolutions in Facility Documentation
The Type V DMF requirement in PreCheck bears a striking resemblance—and some critical differences—to the Site Master File (SMF) required under EU GMP and PIC/S guidelines. Understanding this comparison reveals both the potential of PreCheck and its limitations.
What is a Site Master File?
The Site Master File is a GMP documentation requirement in the EU, mandated under Chapter 4 of the EU GMP Guideline. PIC/S provides detailed guidance on SMF preparation in document PE 008-4. The SMF is:
A facility-centric document prepared by the pharmaceutical manufacturer
Typically 25-30 pages plus appendices, designed to be “readable when printed on A4 paper”
A living document that is part of the quality management system, updated regularly (recommended every 2 years)
Submitted to regulatory authorities to demonstrate GMP compliance and facilitate inspection planning
The purpose of the SMF is explicit: to provide regulators with a comprehensive overview of the manufacturing operations at a named site, independent of any specific product. It answers the question: “What GMP activities occur at this location?”
Required SMF Contents (per PIC/S PE 008-4 and EU guidance):
General Information: Company name, site address, contact information, authorized manufacturing activities, manufacturing license copy
Quality Management System: QA/QC organizational structure, key personnel qualifications, training programs, release procedures for Qualified Persons
Personnel: Number of employees in production, QC, QA, warehousing; reporting structure
Premises and Equipment: Site layouts, room classifications, pressure differentials, HVAC systems, major equipment lists
Documentation: Description of documentation systems (batch records, SOPs, specifications)
Production: Brief description of manufacturing operations, in-process controls, process validation policy
Quality Control: QC laboratories, test methods, stability programs, reference standards
Distribution, Complaints, and Product Recalls: Systems for handling complaints, recalls, and distribution controls
Self-Inspection: Internal audit programs and CAPA systems
Critically, the SMF is product-agnostic. It describes the facility’s capabilities and systems, not specific product formulations or manufacturing procedures. An appendix may list the types of products manufactured (e.g., “solid oral dosage forms,” “sterile injectables”), but detailed product-specific CMC information is not included.
How the Type V DMF Differs from the Site Master File
The FDA’s Type V DMF in PreCheck serves a similar purpose but with important distinctions:
Similarities:
Both are facility-centric documents describing site operations, quality systems, and GMP capabilities
Both include site layouts, equipment specifications, and quality management elements
Both are intended to facilitate regulatory review and inspection planning
Both are living documents that can be updated as the facility changes
Critical Differences:
| Dimension | Site Master File (EU/PIC/S) | Type V DMF (FDA PreCheck) |
| --- | --- | --- |
| Regulatory Status | Mandatory for an EU manufacturing license | Voluntary (PreCheck is a voluntary program) |
| Independence from Products | Fully independent: a facility can be certified without any product application | Partially independent: Phase 1 allows early review, but Phase 2 still ties to a product application |
| Ownership | Facility owner (manufacturer or CMO) | Sponsor-specific; unclear for CMO facilities with multiple clients |
| Regulatory Outcome | Can support a GMP certificate or manufacturing license independent of product approvals | Does not result in a standalone facility approval; only facilitates product application review |
| Scope | Describes all manufacturing operations at the site | Focused on the specific facility being built, intended to support future product applications from that sponsor |
| International Recognition | Harmonized internationally; PIC/S member authorities recognize each other's SMF-based inspections | FDA-specific; no provision for accepting EU GMP certificates or SMFs in lieu of PreCheck participation |
| Length and Detail | 25–30 pages plus appendices, designed for conciseness | No specified page limit; the QMM practices component could be extensive |
The Critical Gap: Product-Specificity vs. Facility Independence
The most significant difference lies in how the documents relate to product approvals.
In the EU system, a manufacturer submits the SMF to the National Competent Authority (NCA) as part of obtaining or maintaining a manufacturing license. The NCA inspects the facility and, if compliant, grants a GMP certificate that is valid across all products manufactured at that site.
When a Marketing Authorization Application (MAA) is later filed for a specific product, the CHMP can reference the existing GMP certificate and decide whether a pre-approval inspection is needed. If the facility has been recently inspected and found compliant, no additional inspection may be required. The facility’s GMP status is decoupled from the product approval.
The FDA’s Type V DMF in PreCheck does not create this decoupling. While Phase 1 allows early FDA review of the facility design, the Type V DMF is still tied to the sponsor’s product applications. It is not a standalone “facility certificate.” Multiple products from the same sponsor can reference the same Type V DMF, but the FDA has not clarified whether:
The DMF reduces the need for PAIs/PLIs on second, third, and subsequent products from the same facility
The DMF serves any function outside of the PreCheck program (e.g., for routine surveillance inspections)
At the September 30 public meeting, industry stakeholders explicitly requested that the FDA adopt the EU GMP certificate model, where facilities can be certified independent of product applications. The FDA acknowledged the request but did not commit to this approach.
Confidentiality: DMFs Are Proprietary
The Type V DMF operates under FDA’s DMF confidentiality rules (21 CFR 314.420). The DMF holder (the manufacturer) authorizes the FDA to reference the DMF when reviewing a specific sponsor’s application, but the detailed contents are not disclosed to the sponsor or to other parties. This protects proprietary manufacturing information, especially important for CMOs who serve competing sponsors.
However, PreCheck asks manufacturers to include Quality Management Maturity (QMM) practices in the Type V DMF—information that goes beyond what is typically in a DMF and beyond what is required in an SMF. As discussed earlier, industry is concerned that disclosing advanced quality practices could create new regulatory expectations or vulnerabilities. This tension does not exist with SMFs, which describe only what is required by GMP, not what is aspirational.
Could the FDA Adopt a Site Master File Model?
The comparison raises an obvious question: Why doesn’t the FDA simply adopt the EU Site Master File requirement?
Several barriers exist:
1. U.S. Legal Framework
The FDA does not issue facility manufacturing licenses the way EU NCAs do. In the U.S., a facility is “approved” only in the context of a specific product application (NDA, ANDA, BLA). The FDA has establishment registration (Form FDA 2656), but registration does not constitute approval—it is merely notification that a facility exists and intends to manufacture drugs.
To adopt the EU GMP certificate model, the FDA would need either:
Statutory authority to issue facility licenses independent of product applications, or
A regulatory framework that allows facilities to earn presumption of compliance that carries across multiple products
Neither currently exists in U.S. law.
2. FDA Resource Model
The FDA’s inspection system is application-driven. PAIs and PLIs are triggered by product applications, and the cost is implicitly borne by the applicant through user fees. A facility-centric certification system would require the FDA to conduct routine facility inspections on a 1-3 year cycle (as the EMA/PIC/S model does), independent of product filings.
This would require:
Significant increases in FDA inspector workforce
A new fee structure (facility fees vs. application fees)
Coordination across CDER, CBER, and Office of Inspections and Investigations (OII)
PreCheck sidesteps this by keeping the system voluntary and sponsor-initiated. The FDA does not commit to routine re-inspections; it merely offers early engagement for new facilities.
3. CDMO Business Model Complexity
Approximately 50% of facility-related CRLs involve Contract Development and Manufacturing Organizations. CDMOs manufacture products for dozens or hundreds of sponsors. In the EU, the CMO has one GMP certificate that covers all its operations, and each sponsor references that certificate in their MAAs.
In the U.S., each sponsor’s product application is reviewed independently. If the FDA were to adopt a facility certificate model, it would need to resolve:
Who pays for the facility inspection—the CMO or the sponsors?
How are facility compliance issues (Official Action Indicated classifications, warning letters) communicated across sponsors?
Can a facility certificate be revoked without blocking all pending product applications?
These are solvable problems—the EU has solved them—but they require systemic changes to the FDA’s regulatory framework.
The Path Forward: Incremental Convergence
The Type V DMF in PreCheck is a step toward the Site Master File model, but it is not yet there. For PreCheck to evolve into a true facility-centric system, the FDA would need to:
Decouple Phase 1 (Facility Readiness) from Phase 2 (Product Application), allowing facilities to complete Phase 1 and earn a facility certificate or presumption of compliance that applies to all future products from any sponsor using that facility.
Standardize the Type V DMF content to align with PIC/S SMF guidance, ensuring international harmonization and reducing duplicative submissions for facilities operating in multiple markets.
Implement routine surveillance inspections (every 1-3 years) for facilities that have completed PreCheck, with inspection frequency adjusted based on compliance history (the PIC/S risk-based model). The main difference from the PIC/S model would likely be that PreCheck facilities are not yet engaged in commercial manufacturing when the cycle begins.
Enhance participation in PIC/S inspection reliance, accepting EU GMP certificates and SMFs for facilities that have been recently inspected by PIC/S member authorities, and allowing U.S. Type V DMFs to be recognized internationally.
The industry’s message at the PreCheck public meeting was clear: adopt the EU model. Whether the FDA is willing—or able—to make that leap remains to be seen.
Quality Management Maturity (QMM): The Aspirational Component
QMM is an FDA initiative (led by CDER) that aims to promote quality management practices that go beyond CGMP minimum requirements. The FDA’s QMM program evaluates manufacturers on a maturity scale across five practice areas.
The QMM assessment uses a pre-interview questionnaire and interactive discussion to evaluate how effectively a manufacturer monitors and manages quality. The maturity levels range from Undefined (reactive, ad hoc) to Optimized (proactive, embedded quality culture).
The FDA ran two QMM pilot programs between October 2020 and March 2022 to test this approach. The goal is to create a system where the FDA—and potentially the market—can recognize and reward manufacturers with mature quality systems that focus on continuous improvement rather than reactive compliance.
PreCheck asks manufacturers to include QMM practices in their Type V DMF. This is where the program becomes aspirational.
At the September 30 public meeting, industry stakeholders described submitting QMM information as “risky”. Why? Because QMM is not fully defined. The assessment protocol is still in development. The maturity criteria are not standardized. And most critically, manufacturers fear that disclosing information about their quality systems beyond what is required by CGMP could create new expectations or new vulnerabilities during inspections.
One attendee noted that “QMS information is difficult to package, usually viewed on inspection”. In other words, quality maturity is something you demonstrate through behavior, not something you document in a binder.
The FDA’s inclusion of QMM in PreCheck reveals a tension: the agency wants to move beyond compliance theater—beyond the checkbox mentality of “we have an SOP for that”—and toward evaluating whether manufacturers have the organizational discipline to maintain control over time. But the FDA has not yet figured out how to do this in a way that feels safe or fair to industry.
This is the same tension I discussed in my August 2025 post on “The Effectiveness Paradox“: how do you evaluate a quality system’s capability to detect its own failures, not just its ability to pass an inspection when everything is running smoothly?
The Current PAI/PLI Model and Why It Fails
To understand why PreCheck is necessary, we must first understand why the current Pre-Approval Inspection (PAI) and Pre-License Inspection (PLI) model is structurally flawed.
The High-Stakes Inspection at the Worst Possible Time
Under the current system, the FDA conducts a PAI (for drugs under CDER) or PLI (for biologics under CBER) to verify that a manufacturing facility is capable of producing the drug product as described in the application. This inspection is risk-based—the FDA does not inspect every application. But when an inspection is deemed necessary, the timing is brutal.
As one industry executive described at the PreCheck public meeting: “We brought on a new U.S. manufacturing facility two years ago and the PAI for that facility was weeks prior to our PDUFA date. At that point, we’re under a lot of pressure. Any questions or comments or observations that come up during the PAI are very difficult to resolve in that time frame”.
This is the structural flaw: the FDA evaluates the facility after the facility is built, after the application is filed, and as close as possible to the approval decision. If the inspection reveals deficiencies—data integrity failures, inadequate cleaning validation, contamination control gaps, equipment qualification issues—the manufacturer has very little time to correct them before the PDUFA clock expires.
The result? Complete Response Letters (CRLs).
The CRL Epidemic: Facility Failures Blocking Approvals
The data on inspection-related CRLs is stark.
In a 2024 analysis of BLA outcomes, researchers found that BLAs were issued CRLs nearly half the time in 2023—the highest rate ever recorded. Of these CRLs, approximately 20% were due to facility inspection failures.
Breaking this down further:
Foreign manufacturing sites are associated with more CRs, proportionate to the number of PLIs conducted.
Approximately 50% of facility deficiencies are for Contract Development Manufacturing Organizations (CDMOs).
Approximately 75% of Applicant-Site CRs are for biosimilars.
The five most-cited facilities (each with ≥5 CRs) account for ~35% of all CR deficiencies.
In a separate analysis of CRL drivers from 2020–2024, Manufacturing/CMC deficiencies and Facility Inspection Failures together account for over 60% of all CRLs. This includes:
Inadequate control of production processes
Unstable manufacturing
Data gaps in CMC
GMP site inspections revealing uncontrolled processes, document gaps, hygiene issues
The pattern is clear: facility issues discovered late in the approval process are causing massive delays.
Why the Late-Stage Inspection Model Creates Failure
The PAI/PLI model creates failure for three reasons:
1. The Inspection Evaluates “Work-as-Done” When It’s Too Late to Change It
When the FDA arrives for a PAI/PLI, the facility is already built. The equipment is already installed. The processes are already validated (or supposed to be). The SOPs are already written.
If the inspector identifies a fundamental design flaw—say, inadequate segregation between manufacturing suites, or an HVAC system that cannot maintain differential pressure during interventions—the manufacturer cannot easily fix it. Redesigning cleanroom airflow or adding airlocks requires months of construction and re-qualification. The PDUFA clock does not stop.
This is analogous to the Rechon Life Science warning letter I analyzed in September 2025, where the smoke studies revealed turbulent airflow over open vials, contradicting the firm’s Contamination Control Strategy. The CCS claimed unidirectional flow protected the product. The smoke video showed eddies. But by the time this was discovered, the facility was operational, the batches were made, and the “fix” required redesigning the isolator.
2. The Inspection Creates Adversarial Pressure Instead of Collaborative Learning
Because the PAI occurs weeks before the PDUFA date, the inspection becomes a pass/fail exam rather than a learning opportunity. The manufacturer is under intense pressure to defend their systems rather than interrogate them. Questions from inspectors are perceived as threats, not invitations to improve.
This is the opposite of the falsifiable quality mindset. A falsifiable system would welcome the inspection as a chance to test whether the control strategy holds up under scrutiny. But the current timing makes this psychologically impossible. The stakes are too high.
3. The Inspection Conflates “Facility Capability” with “Product-Specific Compliance”
The PAI/PLI is nominally about verifying that the facility can manufacture the specific product in the application. But in practice, inspectors evaluate general GMP compliance—data integrity, quality unit independence, deviation investigation rigor, cleaning validation adequacy—not just product-specific manufacturing steps.
The FDA does not give “facility certificates” like the EMA does. Every product application triggers a new inspection (or waiver decision) based on the facility’s recent inspection history. This means a facility with a poor inspection outcome on one product will face heightened scrutiny on all subsequent products—creating a negative feedback loop.
Comparative Regulatory Philosophy—EMA, WHO, and PIC/S
To understand whether PreCheck is sufficient, we must compare it to how other regulatory agencies conceptualize facility oversight.
The EMA Model: Decoupling and Delegation
The European Medicines Agency (EMA) operates a decentralized inspection system. The EMA itself does not conduct inspections; instead, National Competent Authorities (NCAs) in EU member states perform GMP inspections on behalf of the EMA.
The key structural differences from the FDA:
1. Facility Inspections Are Decoupled from Product Applications
In the EU, a manufacturing facility can be inspected and receive a GMP certificate from the NCA independent of any specific product application. This certificate attests that the facility complies with EU GMP and is capable of manufacturing medicinal products according to its authorized scope.
When a Marketing Authorization Application (MAA) is filed, the CHMP (Committee for Medicinal Products for Human Use) can request a GMP inspection if needed, but if the facility has a recent GMP certificate in good standing, a new inspection may not be necessary.
This means the facility’s “GMP status” is assessed separately from the product’s clinical and CMC review. Facility issues do not automatically block product approval—they are addressed through a separate remediation pathway.
2. Risk-Based and Reliance-Based Inspection Planning
The EMA employs a risk-based approach to determine inspection frequency. Facilities are inspected on a routine re-inspection program (typically every 1-3 years depending on risk), with the frequency adjusted based on:
Previous inspection findings (critical, major, or minor deficiencies)
Product type and patient risk
Manufacturing complexity
Company compliance history
Additionally, the EMA participates in PIC/S inspection reliance (discussed below), meaning it may accept inspection reports from other competent authorities without conducting its own inspection.
3. Mutual Recognition Agreement (MRA) with the FDA
The U.S. and EU have a Mutual Recognition Agreement for GMP inspections. Under this agreement, the FDA and EMA recognize each other’s inspection outcomes for human medicines, reducing duplicate inspections.
Importantly, the EMA has begun accepting FDA inspection reports proactively during the pre-submission phase. Applicants can provide FDA inspection reports to support their MAA, allowing the EMA to make risk-based decisions about whether an additional inspection is needed.
This is the inverse of what the FDA is attempting with PreCheck. The EMA is saying: “We trust the FDA’s inspection, so we don’t need to repeat it.” The FDA, with PreCheck, is saying: “We will inspect early, so we don’t need to repeat it later.” Both approaches aim to reduce redundancy, but the EMA’s reliance model is more mature.
WHO Prequalification: Phased Inspections and Leveraging SRAs
The WHO Prequalification (PQ) program provides an alternative model for facility assessment, particularly relevant for manufacturers in low- and middle-income countries (LMICs).
Key features:
1. Inspection Occurs During the Dossier Assessment, Not After
Unlike the FDA’s PAI (which occurs near the end of the review), WHO PQ conducts inspections within 6 months of dossier acceptance for assessment. This means the facility inspection happens in parallel with the technical review, not at the end.
If the inspection reveals deficiencies, the manufacturer submits a Corrective and Preventive Action (CAPA) plan, and WHO conducts a follow-up inspection within 6-9 months. The prequalification decision is not made until the inspection is closed.
This phased approach reduces the “all-or-nothing” pressure of the FDA’s late-stage PAI.
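To make the sequencing concrete, here is a minimal sketch of the phased timeline as I read it from the figures above. The function name, the month arithmetic, and the worst-case assumption are mine, not WHO’s; treat it as an illustration of the ordering, nothing more.

```python
def who_pq_timeline(deficiencies_found: bool) -> dict:
    """Rough month-by-month sketch of the WHO PQ phased model described above.

    Assumes the timelines cited in the text: inspection within ~6 months of
    dossier acceptance, CAPA follow-up within 6-9 months if deficiencies are
    found. Names and numbers are illustrative, not WHO terminology.
    """
    timeline = {"dossier_accepted": 0}
    # The inspection runs in parallel with the technical review, not at the end.
    timeline["initial_inspection"] = 6
    if deficiencies_found:
        # Manufacturer submits a CAPA plan; WHO re-inspects within 6-9 months.
        timeline["capa_follow_up_inspection"] = timeline["initial_inspection"] + 9
        inspection_closed = timeline["capa_follow_up_inspection"]
    else:
        inspection_closed = timeline["initial_inspection"]
    # Prequalification is decided only once the inspection is closed, but the
    # dossier assessment has been progressing the whole time.
    timeline["earliest_pq_decision"] = inspection_closed
    return timeline


print(who_pq_timeline(deficiencies_found=True))
# {'dossier_accepted': 0, 'initial_inspection': 6,
#  'capa_follow_up_inspection': 15, 'earliest_pq_decision': 15}
```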
2. Routine Inspections Every 1-3 Years
Once a product is prequalified, WHO conducts routine inspections every 1-3 years to verify continued compliance. This aligns with the Continued Process Verification concept in FDA’s Stage 3 validation—the idea that a facility is not “validated forever” after one inspection, but must demonstrate ongoing control.
3. Reliance on Stringent Regulatory Authorities (SRAs)
WHO PQ may leverage inspection reports from Stringent Regulatory Authorities (SRAs) or WHO-Listed Authorities (WLAs). If the facility has been recently inspected by an SRA (e.g., FDA, EMA, Health Canada) and the scope is appropriate, WHO may waive the onsite inspection and rely on the SRA’s findings.
This is a trust-based model: WHO recognizes that conducting duplicate inspections wastes resources, and that a well-documented inspection by a competent authority provides sufficient assurance.
The FDA’s PreCheck program does not include this reliance mechanism. PreCheck is entirely FDA-centric—there is no provision for accepting EMA or WHO inspection reports to satisfy Phase 1 or Phase 2 requirements.
PIC/S: Risk-Based Inspection Planning and Classification
The Pharmaceutical Inspection Co-operation Scheme (PIC/S) is an international framework for harmonizing GMP inspections across member authorities.
Two key PIC/S documents are relevant to this discussion:
1. PI 037-1: Risk-Based Inspection Planning
PIC/S provides a qualitative risk management tool to help inspectorates prioritize inspections. The model assigns each facility a risk rating (A, B, or C) based on:
Intrinsic Risk: Product type, complexity, patient population
Compliance Risk: Previous inspection outcomes, deficiency history
The risk rating determines inspection frequency:
A (Low Risk): Reduced frequency (2-3 years)
B (Moderate Risk): Moderate frequency (1-2 years)
C (High Risk): Increased frequency (<1 year, potentially multiple times per year)
Critically, PIC/S assumes that every manufacturer will be inspected at least once within the defined period. There is no such thing as “perpetual approval” based on one inspection.
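To show how different this is from an application-triggered system, here is a minimal sketch of a risk-rated scheduler. The numeric scoring and the exact intervals are my simplifying assumptions; the actual PI 037-1 tool is qualitative.

```python
from enum import Enum

class RiskRating(Enum):
    A = "low"       # reduced inspection frequency (roughly every 2-3 years)
    B = "moderate"  # moderate frequency (roughly every 1-2 years)
    C = "high"      # increased frequency (less than a year between visits)

def assign_rating(intrinsic_risk: int, compliance_risk: int) -> RiskRating:
    """Toy combination of intrinsic risk (product type, complexity, patient
    population) and compliance risk (previous findings, deficiency history),
    each scored here from 1 (low) to 3 (high). The real PI 037-1 tool is
    qualitative; this additive scoring is purely an assumption for illustration.
    """
    combined = intrinsic_risk + compliance_risk
    if combined <= 2:
        return RiskRating.A
    if combined <= 4:
        return RiskRating.B
    return RiskRating.C

def max_months_to_next_inspection(rating: RiskRating) -> int:
    # Every facility gets inspected at least once within the defined period;
    # there is no "perpetual approval" tier.
    return {RiskRating.A: 36, RiskRating.B: 24, RiskRating.C: 12}[rating]

# Example: a complex sterile product with a clean compliance history.
rating = assign_rating(intrinsic_risk=3, compliance_risk=1)
print(rating, max_months_to_next_inspection(rating))  # RiskRating.B 24
```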
2. PI 048-1: GMP Inspection Reliance
PIC/S introduced a guidance on inspection reliance in 2018. This guidance provides a framework for desktop assessment of GMP compliance based on the inspection activities of other competent authorities.
The key principle: if another PIC/S member authority has recently inspected a facility and found it compliant, a second authority may accept that finding without conducting its own inspection.
This reliance is conditional—the accepting authority must verify that:
The scope of the original inspection covers the relevant products and activities
The original inspection was recent (typically within 2-3 years)
The original authority is a trusted PIC/S member
There have been no significant changes or adverse events since the inspection
This is the most mature version of the trust-based inspection model. It recognizes that GMP compliance is not a static state that can be certified once, but also that redundant inspections by multiple authorities waste resources and delay market access.
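At its core, the reliance decision reduces to a handful of verifiable conditions. A minimal sketch follows, with the caveat that the field names are mine and the real guidance requires a documented desktop assessment rather than a single yes/no check.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PriorInspection:
    authority_is_trusted_pics_member: bool
    scope_covers_relevant_activities: bool
    inspection_date: date
    significant_changes_since: bool

def can_rely_on(prior: PriorInspection, today: date, max_age_years: int = 3) -> bool:
    """Sketch of the reliance conditions listed above: a trusted PIC/S member,
    adequate scope, a recent inspection (typically within 2-3 years), and no
    significant changes or adverse events since. Reducing this to a boolean is
    a simplification; the guidance calls for a documented desktop assessment.
    """
    recent_enough = (today - prior.inspection_date).days <= max_age_years * 365
    return (prior.authority_is_trusted_pics_member
            and prior.scope_covers_relevant_activities
            and recent_enough
            and not prior.significant_changes_since)

# Example: a mid-2023 inspection by a PIC/S member authority with matching scope.
prior = PriorInspection(True, True, date(2023, 6, 1), significant_changes_since=False)
print(can_rely_on(prior, today=date(2025, 11, 1)))  # True
```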
Comparative Summary
| Dimension | FDA (Current PAI/PLI) | FDA PreCheck (Proposed) | EMA/EU | WHO PQ | PIC/S Framework |
| --- | --- | --- | --- | --- | --- |
| Timing of Inspection | Late (near PDUFA) | Early (design phase) + Late (application) | Variable, risk-based | Early (during assessment) | Risk-based (1-3 years) |
| Facility vs. Product Focus | Product-specific | Facility (Phase 1) → Product (Phase 2) | Facility-centric (GMP certificate) | Product-specific with facility focus | Facility-centric |
| Decoupling | No | Partial (Phase 1 early feedback) | Yes (GMP certificate independent) | No, but phased | Yes (risk-based frequency) |
| Reliance on Other Authorities | No | No | Yes (MRA, PIC/S) | Yes (SRA reliance) | Yes (core principle) |
| Frequency | Per-application | Phase 1 (once) → Phase 2 (per-application) | Routine re-inspection (1-3 years) | Routine (1-3 years) | Risk-based (A/B/C) |
| Consequence of Failure | CRL, approval blocked | Phase 1: design guidance; Phase 2: potential CRL | CAPA, may not block approval | CAPA, follow-up inspection | Remediation, increased frequency |
The striking pattern: the FDA is the outlier. Every other major regulatory system has moved toward:
Decoupling facility inspections from product applications
Risk-based, routine inspection frequencies
Reliance mechanisms to avoid duplicate inspections
Facility-centric GMP certificates or equivalent
PreCheck is the FDA’s first step toward this model, but it is not yet there. Phase 1 provides early engagement, but Phase 2 still ties facility assessment to a specific product. PreCheck does not create a standalone “facility approval” that can be referenced across products or shared among CMO clients.
Potential Benefits of PreCheck (When It Works)
Despite its limitations, PreCheck could offer real benefits over the status quo—if it is implemented effectively.
Benefit 1: Early Detection of Facility Design Flaws
The most obvious benefit of PreCheck is that it allows the FDA to review facility design during construction, rather than after the facility is operational.
As one industry expert noted at the public meeting: “You’re going to be able to solve facility issues months, even years before they occur”.
Consider the alternative. Under the current PAI/PLI model, if the FDA inspector discovers during a pre-approval inspection that the cleanroom differential pressure cannot be maintained during material transfer, the manufacturer faces a choice:
Redesign the HVAC system (months of construction, re-commissioning, re-qualification)
Withdraw the application
Argue that the deficiency is not critical and hope the FDA agrees
All of these options are expensive and delay the product launch.
PreCheck, by contrast, allows the FDA to flag this issue during the design review (Phase 1), when the HVAC system is still on the engineering drawings. The manufacturer can adjust the design before pouring concrete.
This is the principle of Design Qualification (DQ) applied to the regulatory inspection timeline. Just as equipment must pass DQ before moving to Installation Qualification (IQ), the facility should pass regulatory design review before moving to construction and operation.
Benefit 2: Reduced Uncertainty and More Predictable Timelines
The current PAI/PLI system creates uncertainty about whether an inspection will be scheduled, when it will occur, and what the outcome will be.
Manufacturers described this uncertainty as one of the biggest pain points at the PreCheck public meeting. One executive noted that PAIs are often scheduled with short notice, and manufacturers struggle to align their production schedules (especially for seasonal products like vaccines) with the FDA’s inspection availability.
PreCheck introduces structure to this chaos. If a manufacturer completes Phase 1 successfully, the FDA has already reviewed the facility and provided feedback. The manufacturer knows what the FDA expects. When Phase 2 begins (the product application), the CMC review can proceed with greater confidence that facility issues will not derail the approval.
This does not eliminate uncertainty entirely—Phase 2 still involves an inspection (or inspection waiver decision), and deficiencies can still result in CRLs. But it shifts the uncertainty earlier in the process, when corrections are cheaper.
Benefit 3: Building Institutional Knowledge at the FDA
One underappreciated benefit of PreCheck is that it allows the FDA to build institutional knowledge about a manufacturer’s quality systems over time.
Under the current model, a PAI inspector arrives at a facility for 5-10 days, reviews documents, observes operations, and leaves. The inspection report is filed. If the same facility files a second product application two years later, a different inspector may conduct the PAI, and the process starts from scratch.
The PreCheck Type V DMF, by contrast, is a living document that accumulates information about the facility over its lifecycle. The FDA reviewers who participate in Phase 1 (design review) can provide continuity into Phase 2 (application review) and potentially into post-approval surveillance.
This is the principle behind the EMA’s GMP certificate model: once the facility is certified, subsequent inspections build on the previous findings rather than starting from zero.
Industry stakeholders explicitly requested this continuity at the PreCheck meeting, asking the FDA to “keep the same reviewers in place as the process progresses”. The implication: trust is built through relationships and institutional memory, not one-off inspections.
Benefit 4: Incentivizing Quality Management Maturity
By including Quality Management Maturity (QMM) practices in the Type V DMF, PreCheck encourages manufacturers to invest in advanced quality systems beyond CGMP minimums.
This is aspirational, not transactional. The FDA is not offering faster approvals or reduced inspection frequency in exchange for QMM participation—at least not yet. But the long-term vision is that manufacturers with mature quality systems will be recognized as lower-risk, and this recognition could translate into regulatory flexibility (e.g., fewer post-approval inspections, faster review of post-approval changes).
This aligns with the philosophy I have argued for throughout 2025: a quality system should not be judged by its compliance on the day of the inspection, but by its ability to detect and correct failures over time. A mature quality system is one that is designed to falsify its own assumptions—to seek out the cracks before they become catastrophic failures.
The QMM framework is the FDA’s attempt to operationalize this philosophy. Whether it succeeds depends on whether the FDA can develop a fair, transparent, and non-punitive assessment protocol—something industry is deeply skeptical about.
Challenges and Industry Concerns
The September 30, 2025 public meeting revealed that while industry welcomes PreCheck, the program as proposed has significant gaps.
Challenge 1: PreCheck Does Not Decouple Facility Inspections from Product Approvals
The single most consistent request from industry was: decouple GMP facility inspections from product applications.
Executives from Eli Lilly, Merck, Johnson & Johnson, and others explicitly called for the FDA to adopt the EMA model, where a facility can be inspected and certified independent of a product application, and that certification can be referenced by multiple products.
Why does this matter? Because under the current system (and under PreCheck as proposed), if a facility has a compliance issue, all product applications relying on that facility are at risk.
Consider a CMO that manufactures API for 10 different sponsors. If the CMO fails a PAI for one sponsor’s product, the FDA may place the entire facility under heightened scrutiny, delaying approvals for all 10 sponsors. This creates a cascade failure where one product’s facility issue blocks the market access of unrelated products.
The EMA’s GMP certificate model avoids this by treating the facility as a separate regulatory entity. If the facility has compliance issues, the NCA works with the facility to remediate them independent of pending product applications. The product approvals may be delayed, but the remediation pathway is separate.
The FDA’s Michael Kopcha acknowledged the request but did not commit: “Decoupling, streamlining, and more up-front communication is helpful… We will have to think about how to go about managing and broadening the scope”.
Challenge 2: PreCheck Only Applies to New Facilities, Not Existing Ones
PreCheck is designed for new domestic manufacturing facilities. But the majority of facility-related CRLs involve existing facilities—either because they are making post-approval changes, transferring manufacturing sites, or adding new products.
Industry stakeholders requested that PreCheck be expanded to include:
Existing facility expansions and retrofits
Post-approval changes (e.g., adding a new production line, changing a manufacturing process)
Site transfers (moving production from one facility to another)
The FDA did not commit to this expansion, but Kopcha noted that the agency is “thinking about how to broaden the scope”.
The challenge here is that the FDA lacks a facility lifecycle management framework. The current system treats each product application as a discrete event, with no mechanism for a facility to earn cumulative credit for good performance across multiple products over time.
This is what the PIC/S risk-based inspection model provides: a facility with a strong compliance history moves to reduced inspection frequency (e.g., every 3 years instead of annually). A facility with a poor history moves to increased frequency (e.g., multiple inspections per year). The inspection burden is proportional to risk.
PreCheck Phase 1 could serve this function—if it were expanded to existing facilities. A CMO that completes Phase 1 and demonstrates mature quality systems could earn presumption of compliance for future product applications, reducing the need for repeated PAIs/PLIs.
But as currently designed, PreCheck is a one-time benefit for new facilities only.
Challenge 3: Confidentiality and Intellectual Property Concerns
Manufacturers expressed significant concern about what information the FDA will require in the Type V DMF and whether that information will be protected from Freedom of Information Act (FOIA) requests.
The concern is twofold:
1. Proprietary Manufacturing Details
The Type V DMF is supposed to include facility layouts, equipment specifications, and process flow diagrams. For some manufacturers—especially those with novel technologies or proprietary processes—this information is competitively sensitive.
If the DMF is subject to FOIA disclosure (even with redactions), competitors could potentially reverse-engineer the manufacturing strategy.
2. CDMO Relationships
For Contract Development and Manufacturing Organizations (CDMOs), the Type V DMF creates a dilemma. The CDMO owns the facility, but the sponsor owns the product. Who submits the DMF? Who controls access to it? If multiple sponsors use the same CDMO facility, can they all reference the same DMF, or must each sponsor submit a separate one?
Industry requested clarity on these ownership and confidentiality issues, but the FDA has not yet provided detailed guidance.
This is not a trivial concern. Approximately 50% of facility-related CRLs involve CDMOs. If PreCheck cannot accommodate the CDMO business model, its utility is limited.
The Confidentiality Paradox: Good for Companies, Uncertain for Consumers
The confidentiality protections embedded in the DMF system—and by extension, in PreCheck’s Type V DMF—serve a legitimate commercial purpose. They allow manufacturers to protect proprietary manufacturing processes, equipment specifications, and quality system innovations from competitors. This protection is particularly critical for Contract Manufacturing Organizations (CMOs) who serve multiple competing sponsors and cannot afford to have one client’s proprietary methods disclosed to another.
But there is a tension here that deserves explicit acknowledgment: confidentiality rules that benefit companies are not necessarily optimal for consumers. This is not an argument for eliminating trade secret protections—innovation requires some degree of secrecy. Rather, it is a call to examine where the balance is struck and whether current confidentiality practices are serving the public interest as robustly as they serve commercial interests.
What Confidentiality Hides from Public View
Under current FDA confidentiality rules (21 CFR 314.420 for DMFs, and broader FOIA exemptions for commercial information), the following categories of information are routinely shielded from public disclosure.
1. Detailed Manufacturing Processes and Facility Information
The detailed manufacturing procedures, equipment specifications, and process parameters submitted in Type II DMFs (drug substances) and Type V DMFs (facilities) are never disclosed to the public. They may not even be disclosed to the sponsor referencing the DMF—only the FDA reviews them.
This means that if a manufacturer is using a novel but potentially risky manufacturing technique—say, a continuous manufacturing process that has not been validated at scale, or a cleaning procedure that is marginally effective—the public has no way to know. The FDA reviews this information, but the public cannot verify the FDA’s judgment.
2. Drug Pricing Data and Financial Arrangements
Pharmaceutical companies have successfully invoked trade secret protections to keep drug prices, manufacturing costs, and financial arrangements (rebates, discounts) confidential. In the United States, transparency laws requiring companies to disclose drug pricing information have faced constitutional challenges on the grounds that such disclosure constitutes an uncompensated “taking” of trade secrets.
This opacity prevents consumers, researchers, and policymakers from understanding why drugs cost what they cost and whether those prices are justified by manufacturing expenses or are primarily driven by monopoly pricing.
3. Manufacturing Deficiencies and Inspection Findings
When the FDA conducts an inspection and issues a Form FDA 483 (Inspectional Observations), those observations are eventually made public. But the detailed underlying evidence—the batch records showing failures, the deviations that were investigated, the CAPA plans that were proposed—remain confidential as part of the company’s internal quality records.
This means the public can see that a deficiency occurred, but cannot assess how serious it was or whether the corrective action was adequate. We are asked to trust that the FDA’s judgment was sound, without access to the data that informed that judgment.
The Public Interest Argument for Greater Transparency
The case for reducing confidentiality protections—or at least creating exceptions for public health—rests on several arguments:
Argument 1: The Public Funds Drug Development
As health law scholars have noted, the public makes extraordinary investments in private companies’ drug research and development through NIH grants, tax incentives, and government contracts. Yet details of clinical trial data, manufacturing processes, and government contracts often remain secret, even though the public paid for the research.
During the COVID-19 pandemic, for example, the Johnson & Johnson vaccine contract explicitly allowed the company to keep secret “production/manufacturing know-how, trade secrets, [and] clinical data,” despite massive public funding of the vaccine’s development. European Commission vaccine contracts similarly included generous redactions of price per dose, amounts paid up front, and rollout schedules.
If the public is paying for innovation, the argument goes, the public should have access to the results.
Argument 2: Regulators Are Understaffed and Sometimes Wrong
The FDA is chronically understaffed and under pressure to approve medicines quickly. Regulators sometimes make mistakes. Without access to the underlying data—manufacturing details, clinical trial results, safety signals—independent researchers cannot verify the FDA’s conclusions or identify errors that might not be apparent to a time-pressured reviewer.
Clinical trial transparency advocates argue that summary-level data, study protocols, and even individual participant data can be shared in ways that protect patient privacy (through anonymization and redaction) while allowing independent verification of safety and efficacy claims.
The same logic applies to manufacturing data. If a facility has chronic contamination control issues, or a process validation that barely meets specifications, should that information remain confidential? Or should researchers, patient advocates, and public health officials have access to assess whether the FDA’s acceptance of the facility was reasonable?
Argument 3: Trade Secret Claims Are Often Overbroad
Legal scholars studying pharmaceutical trade secrecy have documented that companies often claim trade secret protection for information that does not meet the legal definition of a trade secret.
For example, “naked price” information—the actual price a company charges for a drug—has been claimed as a trade secret to prevent regulatory disclosure, even though such information provides minimal competitive advantage and is of significant public interest. Courts have begun to push back on these claims, recognizing that the public interest in transparency can outweigh the commercial interest in secrecy, especially in highly regulated industries like pharmaceuticals.
The concern is that companies use trade secret law strategically to suppress unwanted regulation, transparency, and competition—not to protect genuine innovations.
Argument 4: Secrecy Delays Generic Competition
Even after patent and data exclusivity periods expire, trade secret protections allow pharmaceutical companies to keep the precise composition or manufacturing process for medications confidential. This slows the release of generic competitors by preventing them from relying on existing engineering and manufacturing data.
For complex biologics, this problem is particularly acute. Biosimilar developers must reverse-engineer the manufacturing process without access to the originator’s process data, leading to delays of many years and higher costs.
If manufacturing data were disclosed after a defined exclusivity period—say, 10 years—generic and biosimilar developers could bring competition to market faster, reducing drug prices for consumers.
The Counter-Argument: Why Companies Need Confidentiality
It is important to acknowledge the legitimate reasons why confidentiality protections exist:
1. Protecting Innovation Incentives
If manufacturing processes were disclosed, competitors could immediately copy them, undermining the innovator’s investment in developing the process. This would reduce incentives for process innovation and potentially slow the development of more efficient, higher-quality manufacturing methods.
2. Preventing Misuse of Information
Detailed manufacturing data could, in theory, be used by bad actors to produce counterfeit drugs or to identify vulnerabilities in the supply chain. Confidentiality reduces these risks.
3. Maintaining Competitive Differentiation
For CMOs in particular, their manufacturing expertise is their product. If their processes were disclosed, they would lose competitive advantage and potentially business. This could consolidate the industry and reduce competition among manufacturers.
4. Protecting Collaborations
The DMF system enables collaborations between API suppliers, excipient manufacturers, and drug sponsors precisely because each party can protect its proprietary information. If all information had to be disclosed, vertical integration would increase (companies would manufacture everything in-house to avoid disclosure), reducing specialization and efficiency.
Where Should the Balance Be?
The tension is real, and there is no simple resolution. But several principles might guide a more consumer-protective approach to confidentiality:
Principle 1: Time-Limited Secrecy
Trade secrets currently have no expiration date—they can remain secret indefinitely, as long as they remain non-public. But public health interests might be better served by time-limited confidentiality. After a defined period (e.g., 10-15 years post-approval), manufacturing data could be disclosed to facilitate generic/biosimilar competition.
Principle 2: Public Interest Exceptions
Confidentiality rules should include explicit public health exceptions that allow disclosure when there is a compelling public interest—for example, during pandemics, public health emergencies, or when safety signals emerge. Oregon’s drug pricing transparency law includes such an exception: trade secrets are protected unless the public interest requires disclosure.
Principle 3: Independent Verification Rights
Researchers, patient advocates, and public health officials should have structured access to clinical trial data, manufacturing data, and inspection findings under conditions that protect commercial confidentiality (e.g., through data use agreements, anonymization, secure research environments). The goal is not to publish trade secrets on the internet, but to enable independent verification of regulatory decisions.
The FDA already does this in limited ways—for example, by allowing outside experts to review confidential data during advisory committee meetings under non-disclosure agreements. This model could be expanded.
Principle 4: Narrow Trade Secret Claims
Courts and regulators should scrutinize trade secret claims more carefully, rejecting overbroad claims that seek to suppress transparency without protecting genuine innovation. “Naked price” information, aggregate safety data, and high-level manufacturing principles should not qualify for trade secret protection, even if detailed process parameters do.
Implications for PreCheck
In the context of PreCheck, the confidentiality tension manifests in several ways:
For Type V DMFs: The facility information submitted in Phase 1—site layouts, quality systems, QMM practices—will be reviewed by the FDA but not disclosed to the public or even to other sponsors using the same CMO. If a facility has marginal quality practices but passes PreCheck Phase 1, the public will never know. We are asked to trust the FDA’s judgment without transparency into what was reviewed or what deficiencies (if any) were identified.
For QMM Disclosure: Industry is concerned that submitting Quality Management Maturity information is “risky” because it discloses advanced practices beyond CGMP requirements. But the flip side is: if manufacturers are not willing to disclose their quality practices, how can regulators—or the public—assess whether those practices are adequate?
QMM is supposed to reward transparency and maturity. But if the information remains confidential and is never subjected to independent scrutiny, it becomes another form of compliance theater—a document that the FDA reviews in secret, with no external verification.
For Inspection Reliance: If the FDA begins accepting EMA GMP certificates or PIC/S inspection reports (as industry has requested), will those international inspection findings be more transparent than U.S. inspections? In some jurisdictions, yes—the EU publishes more detailed inspection outcomes than the FDA does. But in other jurisdictions, confidentiality practices may be even more restrictive.
A Tension Worth Monitoring
I do not claim to have resolved this tension. Reasonable people can disagree on where the line should be drawn between protecting innovation and ensuring public accountability.
But what I will argue is this: the tension deserves ongoing attention. As PreCheck evolves, as QMM assessments become more detailed, as Type V DMFs accumulate facility data over years—we should ask, repeatedly:
Who benefits from confidentiality, and who bears the risk?
Are there ways to enable independent verification without destroying commercial incentives?
Is the FDA using its discretion to share data proactively, or defaulting to secrecy when transparency might serve the public interest?
The history of pharmaceutical regulation is, in part, a history of secrets revealed too late. Vioxx’s cardiovascular risks. Thalidomide’s teratogenicity. OxyContin’s addictiveness. In each case, information that was known or knowable earlier remained hidden—sometimes due to fraud, sometimes due to regulatory caution, sometimes due to confidentiality rules that prioritized commercial interests over public health.
PreCheck, if it succeeds, will create a new repository of confidential facility data held by the FDA. That data could be a public asset—enabling faster approvals, better-informed regulatory decisions, earlier detection of quality problems. Or it could become another black box, where the public is asked to trust that the system works without access to the evidence.
The choice is not inevitable. It is a design decision—one that regulators, legislators, and industry will make, explicitly or implicitly, in the years ahead.
We should make it explicitly, with full awareness of whose interests are being prioritized and what risks are being accepted on behalf of patients who have no seat at the table.
Challenge 4: QMM is Not Fully Defined, and Submission Feels “Risky”
As discussed earlier, manufacturers are wary of submitting Quality Management Maturity (QMM) information because the assessment framework is not fully developed.
One attendee at the public meeting described QMM submission as “risky” because:
The FDA has not published the final QMM assessment protocol
The maturity criteria are subjective and open to interpretation
Disclosing quality practices beyond CGMP requirements could create new expectations that the manufacturer must meet
The analogy is this: if you tell the FDA, “We use statistical process control to detect process drift in real-time,” the FDA might respond, “Great! Show us your SPC data for the last two years.” If that data reveals a trend that the manufacturer considered acceptable but the FDA considers concerning, the manufacturer has created a problem by disclosing the information.
This is the opposite of the trust-building that QMM is supposed to enable. Instead of rewarding manufacturers for advanced quality practices, the program risks punishing them for transparency.
Until the FDA clarifies that QMM participation is non-punitive and that disclosure of advanced practices will not trigger heightened scrutiny, industry will remain reluctant to engage fully with this component of PreCheck.
Challenge 5: Resource Constraints—Will PreCheck Starve Other FDA Programs?
Industry stakeholders raised a practical concern: if the FDA dedicates inspectors and reviewers to PreCheck, will that reduce resources for routine surveillance inspections, post-approval change reviews, and other critical programs?
The FDA has not provided a detailed resource plan for PreCheck. The program is described as voluntary, which implies it is additive to existing workload, not a replacement for existing activities.
But inspectors and reviewers are finite resources. If PreCheck becomes popular (which the FDA hopes it will), the agency will need to either:
Hire additional staff to support PreCheck (requiring Congressional appropriations)
Deprioritize other inspection activities (e.g., routine surveillance)
Limit the number of PreCheck engagements per year (creating a bottleneck)
One industry representative noted that the economic incentives for domestic manufacturing are weak—it takes 5-7 years to build a new plant, and generic drug margins are thin. Unless the FDA can demonstrate that PreCheck provides substantial time and cost savings, manufacturers may not participate at the scale needed to meet the program’s supply chain security goals.
The CRL Crisis—How Facility Deficiencies Are Blocking Approvals
To understand the urgency of PreCheck, we must examine the data on inspection-related Complete Response Letters (CRLs).
The Numbers: CRLs Are Rising, Facility Issues Are a Leading Cause
In 2023, BLAs were issued CRLs nearly half the time—an unprecedented rate. This represents a sharp increase from previous years, driven by multiple factors:
More BLA submissions overall (especially biosimilars under the 351(k) pathway)
Increased scrutiny of manufacturing and CMC sections
More for-cause inspections (nearly 25% of all inspection events in 2025, versus a historical baseline of ~10%)
Of the CRLs issued in 2023-2024, approximately 20% were due to facility inspection failures. This makes facility issues the third most common CRL driver, behind Manufacturing/CMC deficiencies (44%) and Clinical Evidence Gaps (44%).
Breaking down the facility-related CRLs:
Foreign manufacturing sites are associated with more CRLs proportionate to the number of PLIs conducted
50% of facility deficiencies involve Contract Manufacturing Organizations (CMOs)
75% of Applicant-Site CRs are for biosimilar applications
The five most-cited facilities account for ~35% of CR deficiencies
This last statistic is revealing: the CRL problem is concentrated among a small number of repeat offenders. These facilities receive CRLs on multiple products, suggesting systemic quality issues that are not being resolved between applications.
What Deficiencies Are Causing CRLs?
Analysis of FDA 483 observations and warning letters from FY2024 reveals the top inspection findings driving CRLs:
1. Data Integrity Failures (most common)
   - ALCOA+ principles not followed
   - Inadequate audit trails
   - 21 CFR Part 11 non-compliance
2. Quality Unit Failures
   - Inadequate oversight
   - Poor release decisions
   - Ineffective CAPA systems
   - Superficial root cause analysis
3. Inadequate Process/Equipment Qualification
   - Equipment not qualified before use
   - Process validation protocols deficient
   - Continued Process Verification not implemented
4. Contamination Control and Environmental Monitoring Issues
   - Inadequate monitoring locations (the “representative” trap discussed in my Rechon and LeMaitre analyses)
   - Failure to investigate excursions
   - Contamination Control Strategy not followed
5. Stability Program Deficiencies
   - Incomplete stability testing
   - Data does not support claimed shelf-life
These findings are not product-specific. They are systemic quality system failures that affect the facility’s ability to manufacture any product reliably.
This is the fundamental problem with the current PAI/PLI model: the FDA discovers general GMP deficiencies during a product-specific inspection, and those deficiencies block approval even though they are not unique to that product.
The Cascade Effect: One Facility Failure Blocks Multiple Approvals
The data on repeat offenders is particularly troubling. Facilities with ≥3 CRs are primarily biosimilar manufacturers or CMOs.
This creates a cascade: a CMO fails a PLI for Product A. The FDA places the CMO on heightened surveillance. Products B, C, and D—all unrelated to Product A—face delayed PAIs because the FDA prioritizes re-inspecting the CMO to verify corrective actions. By the time Products B, C, and D reach their PDUFA dates, the CMO still has not cleared the OAI classification, and all three products receive CRLs.
This is the opposite of a risk-based system. Products B, C, and D are being held hostage by Product A’s facility issues, even though the manufacturing processes are different and the sponsors are different.
The EMA’s decoupled model avoids this by treating the facility as a separate remediation pathway. If the CMO has GMP issues, the NCA works with the CMO to fix them. Product applications proceed on their own timeline. If the facility is not compliant, products cannot be approved, but the remediation does not block the application review.
For-Cause Inspections: The FDA Is Catching More Failures
One contributing factor to the rise in CRLs is the sharp increase in for-cause inspections.
In 2025, the FDA conducted for-cause inspections at nearly 25% of all inspection events, up from the historical baseline of ~10%. For-cause inspections are triggered by specific signals, such as recalls, complaints, and emerging safety concerns, rather than by routine scheduling.
For-cause inspections have a 33.5% OAI rate—5.6 times higher than routine inspections. And approximately 50% of OAI classifications lead to a warning letter or import alert.
This suggests that the FDA is increasingly detecting facilities with serious compliance issues that were not evident during prior routine inspections. These facilities are then subjected to heightened scrutiny, and their pending product applications face CRLs.
The problem: for-cause inspections are reactive. They occur after a failure has already reached the market (a recall, a complaint, a safety signal). By that point, patient harm may have already occurred.
PreCheck is, in theory, a proactive alternative. By evaluating facilities early (Phase 1), the FDA can identify systemic quality issues before the facility begins commercial manufacturing. But PreCheck only applies to new facilities. It does not solve the problem of existing facilities with poor compliance histories.
A Framework for Site Readiness—In Place, In Use, In Control
The current PAI/PLI model treats site readiness as a binary: the facility is either “compliant” or “not compliant” at a single moment in time.
PreCheck introduces a two-phase model, separating facility design review (Phase 1) from product-specific review (Phase 2).
But I propose that a more useful—and more falsifiable—framework for site readiness is three-stage:
In Place: Systems, procedures, equipment, and documentation exist and meet design specifications.
In Use: Systems and procedures are actively implemented in routine operations as designed.
In Control: Systems maintain validated state through continuous verification, trend analysis, and proactive improvement.
This framework maps directly onto:
The FDA’s process validation lifecycle (Stage 1: Process Design = In Place; Stage 2: Process Qualification = In Use; Stage 3: Continued Process Verification = In Control)
The ISPE/EU Annex 15 qualification stages (DQ/IQ = In Place; OQ/PQ = In Use; Ongoing monitoring = In Control)
The ICH Q10 “state of control” concept (In Control)
The advantage of this framework is that it explicitly separates three distinct questions that are often conflated:
Does the system exist? (In Place)
Is the system being used? (In Use)
Is the system working? (In Control)
A facility can be “In Place” without being “In Use” (e.g., SOPs are written but operators are not trained). A facility can be “In Use” without being “In Control” (e.g., operators follow procedures, but the process produces high variability and frequent deviations).
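For readers who think in code, here is a minimal sketch of the three stages as an ordered assessment; the boolean inputs are deliberately simplistic stand-ins for the evidence described in the stage definitions that follow.

```python
from enum import Enum
from typing import Optional

class ReadinessStage(Enum):
    IN_PLACE = "Systems exist and meet design specifications"
    IN_USE = "Systems are actively implemented as designed"
    IN_CONTROL = "Systems maintain a validated state over time"

def assess_site(sops_approved: bool, operators_trained: bool,
                process_capability_stable: bool) -> Optional[ReadinessStage]:
    """Toy assessment showing why the three questions have to be asked in
    order. The three boolean inputs stand in for the much larger evidence
    sets described in the text; they are not a real readiness checklist.
    """
    if not sops_approved:
        return None                      # not even structurally ready
    if not operators_trained:
        return ReadinessStage.IN_PLACE   # infrastructure exists, not implemented
    if not process_capability_stable:
        return ReadinessStage.IN_USE     # implemented, but control not sustained
    return ReadinessStage.IN_CONTROL

# SOPs written and operators trained, but the process runs with high
# variability and frequent deviations: In Use, not In Control.
print(assess_site(True, True, False))  # ReadinessStage.IN_USE
```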
Let me define each stage in detail.
Stage 1: In Place (Structural Readiness)
Definition: Systems, procedures, equipment, and documentation exist and meet design specifications.
This is the output of Design Qualification (DQ) and Installation Qualification (IQ). It answers the question: “Has the facility been designed and built according to GMP requirements?”
Key Elements:
Facility layout meets User Requirements Specification (URS) and regulatory expectations
Equipment installed per manufacturer specifications
SOPs written and approved
Quality systems documented (change control, deviation management, CAPA, training)
Utilities qualified (HVAC, water systems, compressed air, clean steam)
Alignment with PreCheck: This is what Phase 1 (Facility Readiness) evaluates. The Type V DMF submitted during Phase 1 contains evidence that systems are In Place.
Alignment with EMA: This corresponds to the initial GMP inspection conducted by the NCA before granting a manufacturing license.
Inspection Outcome: If a facility is “In Place,” it means the infrastructure exists. But it says nothing about whether the infrastructure is functional or effective.
Stage 2: In Use (Operational Readiness)
Definition: Systems and procedures are actively implemented in routine operations as designed.
This is the output of Operational and Performance Qualification (OQ/PQ) and process validation. It answers the question: “Can the facility execute its processes reliably?”
Key Elements:
Equipment operates within qualified parameters during production
Personnel trained and demonstrate competency
Process consistently produces batches meeting specifications
Environmental monitoring executed according to the contamination control strategy and generating data
Quality systems actively used (deviations documented, investigations completed, CAPA plans implemented)
Data integrity controls functioning (audit trails enabled, electronic records secure)
Work-as-Done matches Work-as-Imagined
Assessment Methods:
Observation of operations
Review of batch records and deviations
Interviews with operators and other staff
Trending of process data (yields, cycle times, in-process controls)
Audit of training records and competency assessments
Inspection of actual manufacturing runs (not simulations)
Alignment with PreCheck: This is what Phase 2 (Application Submission) evaluates, particularly during the PAI/PLI (if one is conducted). The FDA inspector observes operations, reviews batch records, and verifies that the process described in the CMC section is actually being executed.
Alignment with EMA: This corresponds to the pre-approval GMP inspection requested by the CHMP if the facility has not been recently inspected.
Inspection Outcome: If a facility is “In Use,” it means the systems are functional. But it does not guarantee that the systems will remain functional over time or that the organization can detect and correct drift.
Stage 3: In Control (Sustained Performance)
Definition: Systems maintain validated state through continuous verification, trend analysis, and proactive improvement.
Key Elements:
Statistical process control (SPC) implemented to detect trends and shifts
Routine monitoring identifies drift before it becomes deviation
Root cause analysis is rigorous and identifies systemic issues, not just proximate causes
CAPA effectiveness is verified—corrective actions prevent recurrence
Process capability is quantified and improving (Cp, Cpk trending upward)
Annual Product Reviews drive process improvements
Knowledge management systems capture learnings from deviations, investigations, and inspections
Quality culture is embedded—staff at all levels understand their role in maintaining control
The organization actively seeks to falsify its own assumptions (the core principle of this blog)
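To make the process-capability bullet above concrete, here is a minimal Python sketch of how a site might compute Cp and Cpk from routine in-process data; the fill-weight values and specification limits are hypothetical, not drawn from any cited facility.

```python
import statistics

def capability_indices(samples, lsl, usl):
    """Estimate Cp and Cpk against lower/upper specification limits.

    Cp  = (USL - LSL) / (6 * sigma)                  -- potential capability
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma)  -- actual, centering-aware
    """
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical fill weights (mg) against a 95-105 mg specification
weights = [99.8, 100.2, 100.5, 99.6, 100.1, 100.4, 99.9, 100.0]
cp, cpk = capability_indices(weights, lsl=95.0, usl=105.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # trend these values period over period
```

The point is not the calculation itself but the trending: these indices should be recomputed continuously and reviewed for direction, not resurrected once a year for the Annual Product Review.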
Assessment Methods:
Trending of process capability indices over time
Review of Annual Product Reviews and management review meetings
Audit of CAPA effectiveness (do similar deviations recur?)
Statistical analysis of deviation rates and types
Assessment of organizational culture (e.g., FDA’s QMM assessment)
Evaluation of how the facility responds to “near-misses” and “weak signals”
Alignment with PreCheck: This is not explicitly evaluated in PreCheck as currently designed. PreCheck Phase 1 and Phase 2 focus on facility design and process execution, but do not assess long-term performance or organizational maturity.
However, the inclusion of Quality Management Maturity (QMM) practices in the Type V DMF is an attempt to evaluate this dimension. A facility with mature QMM practices is, in theory, more likely to remain “In Control” over time.
This also corresponds to routine re-inspections conducted every 1-3 years. The purpose of these inspections is not to re-validate the facility (which is already licensed), but to verify that the facility has maintained its validated state and has not accumulated unresolved compliance drift.
Inspection Outcome: If a facility is “In Control,” it means the organization has demonstrated sustained capability to manufacture products reliably. This is the goal of all GMP systems, but it is the hardest state to verify because it requires longitudinal data and cultural assessment, not just a snapshot inspection.
Mapping the Framework to Regulatory Timelines
The three-stage framework provides a logic for when and how to conduct regulatory inspections.
What a snapshot inspection cannot reveal is precisely what matters most for sustained performance: process drift, CAPA ineffectiveness, organizational complacency, and systemic failures.
The current PAI/PLI model collapses “In Place,” “In Use,” and “In Control” into a single inspection event conducted at the worst possible time (near PDUFA). This creates the illusion that a facility’s compliance status can be determined in 5-10 days.
PreCheck separates “In Place” (Phase 1) from “In Use” (Phase 2), which is a significant improvement. But it still does not address the hardest question: how do we know a facility will remain “In Control” over time?
The answer is: you don’t. Not from a one-time inspection. You need continuous verification.
This is the insight embedded in the FDA’s 2011 process validation guidance: validation is not an event, it is a lifecycle. The validated state must be maintained through Stage 3 Continued Process Verification.
The same logic applies to facilities. A facility is not “validated” by passing a single PAI. It is validated by demonstrating control over time.
PreCheck needs to be part of a wider model at the FDA:
Allow facilities that complete Phase 1 to earn presumption of compliance for future product applications (reducing PAI frequency)
Implement more robust routine surveillance inspections on a 1-3 year cycle to verify “In Control” status; the agency’s own surveillance inspection backlog shows how far it currently is from that cadence.
Adjust inspection frequency dynamically based on the facility’s performance (low-risk facilities inspected less often, high-risk facilities more often)
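To illustrate the dynamic-frequency idea in the last bullet, here is a toy sketch of how performance signals could drive a re-inspection interval. The weights, thresholds, and inputs are assumptions for illustration; this is not the FDA’s actual site-selection model.

```python
def next_inspection_interval_months(recalls_3yr, critical_observations_3yr,
                                    repeat_deviation_rate, qmm_score):
    """Map simple performance signals to a surveillance interval (hypothetical weights).

    qmm_score: 0.0 (immature) to 1.0 (mature), e.g. from a QMM-style assessment.
    repeat_deviation_rate: fraction of deviations that recur after CAPA closure.
    """
    risk = (2.0 * recalls_3yr
            + 1.5 * critical_observations_3yr
            + 3.0 * repeat_deviation_rate
            - 2.0 * qmm_score)
    if risk >= 3.0:
        return 12   # high risk: inspect annually
    if risk >= 1.0:
        return 24   # moderate risk
    return 36       # low risk: stretch toward the 3-year end of the cycle

print(next_inspection_interval_months(recalls_3yr=0,
                                      critical_observations_3yr=1,
                                      repeat_deviation_rate=0.1,
                                      qmm_score=0.8))  # -> 36 months (low risk)
```

The specifics matter less than the principle: the interval becomes an output of demonstrated performance rather than a fixed calendar entry.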
This is the system the industry is asking for. It is the system the FDA could build on the foundation of PreCheck—if it commits to the long-term vision.
The Quality Experience Must Be Brought In at Design—And Most Companies Get This Wrong
PreCheck’s most important innovation is not its timeline or its documentation requirements. It is the implicit philosophical claim that facilities can be made better by involving quality experts at the design phase, not at the commissioning phase.
This is a radical departure from current practice. In most pharmaceutical manufacturing projects, the sequence is:
Engineering designs the facility (architecture, HVAC, water systems, equipment layout)
Procurement procures equipment based on engineering specs
Construction builds the facility
Commissioning and qualification begin (and quality suddenly becomes relevant)
Quality is brought in too late. By the time a quality professional reviews a facility design, the fundamental decisions—pipe routing, equipment locations, air handling unit sizing, cleanroom pressure differentials—have already been made. Suggestions to change the design are met with “we can’t change that now, we’ve already ordered the equipment” or “that’s going to add 3 months to the project and cost $500K.”
This is Quality-by-Testing (QbT): design first, test for compliance later, and hope the test passes.
PreCheck, by contrast, asks manufacturers to submit facility designs to the FDA during the design phase, while the designs are still malleable. The FDA can identify compliance gaps—inadequate environmental monitoring locations, cleanroom pressure challenges, segregation inadequacies, data integrity risks—before construction begins.
This is the beginning of Quality-by-Design (QbD) applied to facilities.
But for PreCheck to work—for Phase 1 to actually prevent facility disasters—manufacturers must embed quality expertise in the design process from the start. And most companies do not do this well.
The “Quality at the End” Trap
The root cause is organizational structure and financial incentives. In a typical pharmaceutical manufacturing project:
Engineering owns the timeline and the budget
Quality is invited to the party once the facility is built
Operations is waiting in the wings to take over once everything is “validated”
Each function optimizes locally:
Engineering optimizes for cost and schedule (build it fast, build it cheap)
Quality optimizes for compliance (every SOP written, every deviation documented)
Operations optimizes for throughput (run as many batches as possible per week)
Nobody optimizes for “Will this facility sustainably produce quality products?”—which is a different optimization problem entirely.
Bringing a quality professional into the design phase requires:
Allocating budget for quality consultation during design (not just during qualification)
Slowing the design phase to allow time for risk assessments and tradeoff discussions
Empowering quality to say “no” to designs that meet engineering requirements but fail quality risk management
Building quality leadership into the project from the kickoff, not adding it in Phase 3
Most companies treat this as optional. It is not optional if you want PreCheck to work.
Why Most Companies Fail to Do This Well
Despite the theoretical importance of bringing quality into design, most pharmaceutical companies still treat design-phase quality as a non-essential activity. Several reasons explain this:
1. Quality Does Not Own a Budget Line
In a manufacturing project, the Engineering team has a budget (equipment, construction, contingency). Operations has a budget (staffing, training). Quality typically has no budget allocation for the design phase. Quality professionals are asked to contribute their “expertise” without resources, timeline allocation, or accountability.
The result: quality advice is given in meetings but not acted upon, because there are no resources to implement it.
2. Quality Experience Is Scarce
The pharmaceutical industry has a shortage of quality professionals with deep experience in facility design, contamination control, data integrity architecture, and process validation. Many quality people come from a compliance background (inspections, audits, documentation) rather than a design background (risk management, engineering, systems thinking).
When a designer asks, “What should we do about data integrity?” the compliance-oriented quality person says, “We’ll need SOPs and training programs.” But the design-oriented quality person says, “We need to architect the IT infrastructure such that changes are logged and cannot be backdated. Here’s what that requires…”
The former approach adds cost and schedule. The latter approach prevents problems.
3. The Design Phase Is Urgent
Pharmaceutical companies operate under intense pressure to bring new facilities online as quickly as possible. The design phase is compressed—schedules are aggressive, meetings are packed, decisions are made rapidly.
Adding quality review to the design phase is perceived as slowing the project down. A quality person who carefully works through a contamination control strategy (“Wait, have we tested whether the airflow assumption holds at scale? Do we understand the failure modes?”) is seen as a bottleneck.
The company that brings in quality expertise early pays a perceived cost (delay, complexity) and receives a delayed benefit (better operations, fewer deviations, smoother inspections). In a pressure-cooker environment, the delayed benefit is not valued.
4. Quality Experience Is Not Integrated Across the Organization
In many pharmaceutical companies, quality expertise is fragmented:
Quality Assurance handles deviations and investigations
Quality Control runs the labs
Regulatory Affairs manages submissions
Process Validation leads qualification projects
None of these groups are responsible for facility design quality. So it falls to no one, and it ends up being everyone’s secondary responsibility—which means it is no one’s primary responsibility.
A company with an integrated quality culture would have a quality leader who is accountable for the design, and who has authority to delay the project if critical risks are not addressed. Most companies do not have this structure.
What PreCheck Requires: The Quality Experience in Design
For PreCheck to deliver its promised benefits, companies participating in Phase 1 must make a commitment that quality expertise is embedded throughout design.
Specifically:
1. Quality leadership is assigned early – Someone in quality (not engineering, not operations) is accountable for quality risk management in the facility design from Day 1.
2. Quality has authority to influence design – The quality leader can say “no” to designs that create unacceptable quality risks, even if the design meets engineering specifications.
3. Quality risk management is performed systematically – Not just “quality review of designs,” but structured risk management identifying critical quality risks and mitigation strategies.
4. Design Qualification includes quality experts – DQ is not just engineering verification that design meets specs; it includes quality verification that design enables quality control.
5. Contamination control is designed, not tested – Environmental monitoring strategies, microbial testing plans, and statistical approaches are designed into the facility, not bolted on during commissioning.
6. Data integrity is architected – IT systems are designed to prevent data manipulation, not as an afterthought.
7. The organization is aligned on what “quality” means – Not compliance (“checking boxes”), but the organizational discipline to sustain control and to detect and correct drift before it becomes a failure.
This is fundamentally a cultural commitment. It is about believing that quality is not something you add at the end; it is something you design in.
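Point 6 above, data integrity by architecture, is the one most often left abstract, so here is a minimal sketch of what it can mean in practice: an append-only audit trail in which every entry is hash-chained to its predecessor, so silent edits or back-dating break the chain on verification. This is an illustration of the design principle, not a substitute for a validated Part 11 system.

```python
import hashlib, json, datetime

class AuditTrail:
    """Append-only event log; each record's hash covers the previous record."""

    def __init__(self):
        self.records = []

    def append(self, user, action, details):
        prev_hash = self.records[-1]["hash"] if self.records else "GENESIS"
        payload = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "action": action,
            "details": details,
            "prev_hash": prev_hash,
        }
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(payload)

    def verify(self):
        """Recompute the chain; any edited or reordered record fails verification."""
        prev_hash = "GENESIS"
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
                return False
            prev_hash = rec["hash"]
        return True

trail = AuditTrail()
trail.append("jdoe", "modify_setpoint", {"parameter": "fill_volume_mL", "new": 5.2})
print(trail.verify())                      # True
trail.records[0]["details"]["new"] = 5.0   # simulated tampering
print(trail.verify())                      # False
```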
The FDA’s Unspoken Expectation in PreCheck Phase 1
When the FDA reviews a Type V DMF in PreCheck Phase 1, the agency is asking: “Did this manufacturer apply quality expertise to the design?”
How does the FDA assess this? By looking for:
Risk assessments that show systematic thinking, not checkbox compliance
Design decisions that are justified by quality risk management, not just engineering convenience
Contamination control strategies that are grounded in understanding the failure modes
Data integrity architectures that prevent (not just detect) problems
Quality systems that are designed to evolve and improve, not static and reactive
If the Type V DMF reads like it was prepared by an engineering firm that called quality for comments, the FDA will see it. If it reads like it was co-developed by quality and engineering with equal voice, the FDA will see that too.
PreCheck Phase 1 is not just a design review. It is a quality culture assessment.
And this is why most companies are not ready for PreCheck. Not because they lack the engineering capability to design a facility. But because they lack the quality experience, organizational structure, and cultural commitment to bring quality into the design process as a peer equal to engineering.
Companies that participate in PreCheck with a transactional mindset—”Let’s submit our designs to the FDA and get early feedback”—will get some benefit. They will catch some design issues early.
But companies that participate with a transformational mindset—”We are going to redesign how we approach facility development to embed quality from the start”—will get deeper benefits. They will build facilities that are easier to operate, that generate fewer deviations, that demonstrate sustained control over time, and that will likely pass future inspections without significant findings.
The choice is not forced on the company by PreCheck. PreCheck is voluntary; you can choose the transactional approach.
But if you want the regulatory trust that PreCheck is supposed to enable—if you want the FDA to accept your facility as “ready” with minimal re-inspection—you need to bring the quality experience in at design.
That is what Phase 1 actually measures.
The Epistemology of Trust
Regulatory inspections are not merely compliance checks. They are trust-building mechanisms.
When the FDA inspector walks into a facility, the question is not “Does this facility have an SOP for cleaning validation?” (It does. Almost every facility does.) The question is: “Can I trust that this facility will produce quality products consistently, even when I am not watching?”
Trust cannot be established in 5 days.
Trust is built through:
Repeated interactions over time
Demonstrated capability under varied conditions
Transparency when failures occur
Evidence of learning from those failures
The current PAI/PLI model attempts to establish trust through a single high-stakes audit. This is like trying to assess a person’s character by observing them for one hour during a job interview. It is better than nothing, but it is not sufficient.
PreCheck is a step toward a trust-building system. By engaging early (Phase 1) and providing continuity into the application review (Phase 2), the FDA can develop a relationship with the manufacturer rather than a one-off transaction.
But PreCheck as currently proposed is still transactional. It is a program for new facilities. It does not create a facility lifecycle framework. It does not provide a pathway for facilities to earn cumulative trust over multiple products.
The FDA could do this—if it commits to two principles:
1. Decouple facility inspections from product applications.
Facilities should be assessed independently and granted a facility certificate (or equivalent) that can be referenced by multiple products. This separates facility remediation from product approval timelines and prevents the cascade failures we see in the current system.
2. Recognize that “In Control” is not a state achieved once, but a discipline maintained continuously.
The FDA’s own process validation guidance says this explicitly: validation is a lifecycle, not an event. The same logic must apply to facilities. A facility is not “GMP compliant” because it passed one inspection. It is GMP compliant because it has demonstrated, over time, the organizational discipline to detect and correct failures before they reach patients.
PreCheck could be the foundation for this system. But only if the FDA is willing to embrace the full implication of what it has started: that regulatory trust is earned through sustained performance, and that the agency’s job is not to catch failures through surprise inspections, but to partner with manufacturers in building systems that are designed to reveal their own weaknesses.
This is the principle of falsifiable quality applied to regulatory oversight. A quality system that cannot be proven wrong is a quality system that cannot be trusted. A facility that fears inspection is a facility that has not internalized the discipline of continuous verification.
The facilities that succeed under PreCheck—and under any future evolution of this system—will be those that understand that “In Place, In Use, In Control” is not a checklist to complete, but a philosophy to embody.
U.S. Food and Drug Administration. FDA Public Meeting: Onshoring Manufacturing of Drugs and Biological Products – Agenda and materials. Silver Spring, MD: US Food and Drug Administration; 2025. Available at: https://www.fda.gov/media/189329/download. Accessed January 8, 2026.
U.S. Food and Drug Administration. CDER’s Quality Management Maturity (QMM) Program. Silver Spring, MD: US Food and Drug Administration; 2023. Available at: https://www.fda.gov/media/171705/download. Accessed January 8, 2026.
U.S. Food and Drug Administration. WHO Prequalification – FDA overview. Silver Spring, MD: US Food and Drug Administration; August 15, 2022. Available at: https://www.fda.gov/media/166136/download. Accessed January 8, 2026.
The October 2025 Warning Letter to Apotex Inc. is fascinating not because it reveals anything novel about FDA expectations, but because it exposes the chasm between what we know we should do and what we actually allow to happen on our watch. Evaluated alongside what we are seeing in Complete Response Letter (CRL) data, it shows that companies continue to struggle with the concept of equipment lifecycle management.
This isn’t about a few leaking gloves or deteriorated gaskets. This is about systemic failure in how we conceptualize, resource, and execute equipment management across the entire GMP ecosystem. Let me walk you through what the Apotex letter really tells us, where the FDA is heading next, and why your current equipment qualification program is probably insufficient.
The Apotex Warning Letter: A Case Study in Lifecycle Management Failure
The FDA’s Warning Letter to Apotex (WL: 320-26-12, October 31, 2025) reads like a checklist of every equipment lifecycle management failure I’ve witnessed in two decades of quality oversight. The agency cited 21 CFR 211.67(a) equipment maintenance failures, 21 CFR 211.192 inadequate investigations, and 21 CFR 211.113(b) aseptic processing deficiencies. But these citations barely scratch the surface of what actually went wrong.
The Core Failures: A Pattern of Deferral and Neglect
Between September 2023 and April 2025—a span of roughly a year and a half—Apotex experienced at least eight critical equipment failures during leak testing. Their personnel responded by retesting until they achieved passing results rather than investigating root causes. Think about that timeline. Eight failures over that span means a failure every two to three months, each one representing a signal that their equipment was degrading. When investigators finally examined the system, they found over 30 leaking areas. This wasn’t a single failure; this was systemic equipment deterioration that the organization chose to work around rather than address.
The letter documents white particle buildup on manufacturing equipment surfaces, particles along conveyor systems, deteriorated gasket seals, and discolored gloves. Investigators observed a six-millimeter glove breach that was temporarily closed with a cable tie before production continued. They found tape applied to “false covers” as a workaround. These aren’t just housekeeping issues—they’re evidence that Apotex had crossed from proactive maintenance into reactive firefighting, and then into dangerous normalization of deviation.
Most damning: Apotex had purchased upgraded equipment nearly a year before the FDA inspection but continued using the deteriorating equipment that was actively generating particles contaminating their nasal spray products. They had the solution in their possession. They chose not to implement it.
The Investigation Gap: Equipment Failures as Quality System Failures
The FDA hammered Apotex on their failure to investigate, but here’s what’s really happening: equipment failures are quality system failures until proven otherwise. When a leak happens, you don’t just replace whatever component leaked. You ask:
Why did this component fail when others didn’t?
Is this a batch-specific issue or a systemic supplier problem?
How many products did this breach potentially affect?
What does our environmental monitoring data tell us about the timeline of contamination?
Are our maintenance intervals appropriate?
Apotex’s investigators didn’t ask these questions. Their personnel retested until they got passing results—a classic example of “testing into compliance” that I’ve seen destroy quality cultures. The quality unit failed to exercise oversight, and management failed to resource proper root cause analysis. This is what happens when quality becomes a checkbox exercise rather than an operational philosophy.
BLA CRL Trends: The Facility Equipment Crisis Is Accelerating
The Apotex warning letter doesn’t exist in isolation. It’s part of a concerning trend in FDA enforcement that’s becoming impossible to ignore. Facility inspection concerns dominate CRL justifications. Manufacturing and CMC deficiencies account for approximately 44% of all CRLs. For biologics specifically, facility-related issues are even more pronounced.
The Biologics-Specific Challenge
Biologics license applications face unique equipment lifecycle scrutiny. The 2024-2025 CRL data shows multiple biosimilars rejected due to third-party manufacturing facility issues despite clean clinical data. Tab-cel (tabelecleucel) received a CRL citing problems at a contract manufacturing organization—the FDA rejected an otherwise viable therapy because the facility couldn’t demonstrate equipment control.
This should terrify every biotech quality leader. The FDA is telling us: your clinical data is worthless if your equipment lifecycle management is suspect. They’re not wrong. Biologics manufacturing depends on consistent equipment performance in ways small molecule chemistry doesn’t. A 0.2°C deviation in a bioreactor temperature profile, caused by a poorly maintained chiller, can alter glycosylation patterns and change the entire safety profile of your product. The agency knows this, and they’re acting accordingly.
The Top Facility Equipment Deficiencies Driving CRLs
Fire Protection and Hazardous Material Handling Deficiencies (equipment safety systems)
Critical Utility System Failures (WFI loops with dead legs, inadequate sanitization)
Environmental Monitoring System Gaps (manual data recording, lack of 21 CFR Part 11 compliance)
Container Closure and Packaging Validation Issues (missing extractables/leachables data, CCI testing gaps)
Inadequate Cleanroom Classification and Control (ISO 14644 and EU Annex 1 compliance failures)
Lack of Preventive Maintenance and Asset Management (missing calibration records, unclear maintenance responsibilities)
Inadequate Documentation and Change Control (HVAC setpoint changes without impact assessment)
Sustainability and Environmental Controls Overlooked (temperature/humidity excursions affecting product stability)
Notice what’s not on this list? Equipment selection errors. The FDA isn’t seeing companies buy the wrong equipment. They’re seeing companies buy the right equipment and then fail to manage it across its lifecycle. This is a crucial distinction. The problem isn’t capital allocation—it’s operational execution.
FDA’s Shift to “Equipment Lifecycle State of Control”
The FDA has introduced a significant conceptual shift in how they discuss equipment management. The Apotex Warning Letter is part of the agency’s new emphasis on “equipment lifecycle state of control.” This isn’t just semantic gamesmanship. It represents a fundamental understanding that discrete qualification events are not enough and that continuous lifecycle management is long overdue. In practice, a state of control requires:
Continuous monitoring of equipment performance parameters, not just periodic checks
Predictive maintenance based on performance data, not just manufacturer-recommended intervals
Real-time assessment of equipment degradation signals (particle generation, seal wear, vibration changes)
Integrated change management that treats equipment modifications as potential quality events
Traceable decision-making about when to repair, refurbish, or retire equipment
The FDA is essentially saying: qualification is a snapshot; state of control is a movie. And they want to see the entire film, not just the trailer.
This aligns perfectly with the agency’s broader push toward Quality Management Maturity. As I’ve previously written about QMM, the FDA is moving away from checking compliance boxes and toward evaluating whether organizations have the infrastructure, culture, and competence to manage quality dynamically. Equipment lifecycle management is the perfect test case for this shift because equipment degradation is inevitable, predictable, and measurable. If you can’t manage equipment lifecycle, you can’t manage quality.
Global Regulatory Convergence: WHO, EMA, and PIC/S Perspectives
The FDA isn’t operating in a vacuum. Global regulators are converging on equipment lifecycle management as a critical inspection focus, though their approaches differ in emphasis.
EMA: The Annex 15 Lifecycle Approach
EMA’s process validation guidance explicitly requires IQ, OQ, and PQ for equipment and facilities as part of the validation lifecycle. Unlike FDA’s three-stage process validation model, EMA frames qualification as ongoing throughout the product lifecycle. The 2015 revision of Annex 15 emphasizes:
Validation Master Plans that include equipment lifecycle considerations
Ongoing Process Verification that incorporates equipment performance data
Risk-based requalification triggered by changes, deviations, or trends
Integration with Product Quality Reviews (PQRs) to assess equipment impact on product quality
The EMA expects you to prove your equipment remains qualified through annual PQRs and continuous data review, and it has been explicit about this lifecycle approach for years.
PIC/S: The Change Management Imperative
PIC/S PI 054-1 on change management provides crucial guidance on equipment lifecycle triggers. The document explicitly identifies equipment upgrades as changes that require formal assessment, planning, and implementation controls. Critically, PIC/S emphasizes:
Interim controls when equipment issues are identified but not yet remediated
Post-implementation monitoring to ensure changes achieve intended risk reduction
Documentation of rejected changes, especially those related to quality/safety hazard mitigation
The Apotex case is a PIC/S textbook violation: they identified equipment deterioration (hazard), purchased upgraded equipment (change proposal), but failed to implement it with appropriate interim controls or timeline management. The result was continued production with deteriorating equipment—exactly what PIC/S guidance is designed to prevent.
WHO: The Resource-Limited Perspective
WHO’s equipment lifecycle guidance, while focused on medical equipment in low-resource settings, offers surprisingly relevant insights for GMP facilities. Their framework emphasizes:
Planning based on lifecycle cost, not just purchase price
Skill development and training as core lifecycle components
Decommissioning protocols that ensure data integrity and product segregation
The WHO model is refreshingly honest about resource constraints, a reality that applies to many GMP facilities facing budget pressure. Their key insight: proper lifecycle management dramatically reduces total cost of ownership; run-to-failure approaches can cost 3-10 times more over an asset’s life. This is the business case that quality leaders need to make to CFOs who view maintenance as a cost center.
The Six-System Inspection Model: Where Equipment Lifecycle Fits
FDA’s Six-System Inspection Model—particularly the Facilities and Equipment System—provides the structural framework for understanding equipment lifecycle requirements. As I’ve previously written, this system “ensures that facilities and equipment are suitable for their intended use and maintained properly” with focus on “design, maintenance, cleaning, and calibration.”
The Interconnectedness Problem
Here’s where many organizations fail: they treat the six systems as silos. Equipment lifecycle management bleeds across all of them:
Production System: Equipment performance directly impacts process capability
Laboratory Controls: Analytical equipment lifecycle affects data integrity
Materials System: Equipment changes can affect raw material compatibility
Packaging and Labeling: Equipment modifications require revalidation
Quality System: Equipment deviations trigger CAPA and change control
The Apotex warning letter demonstrates this interconnectedness perfectly. Their equipment failures (Facilities & Equipment) led to container-closure integrity issues (Packaging), which they failed to investigate properly (Quality), resulting in distributed product that was potentially adulterated (Production). The FDA’s response required independent assessments of investigations, CAPA, and change management—three separate systems all impacted by equipment lifecycle failures.
The “State of Control” Assessment Questions
If FDA inspectors show up tomorrow, here’s what they’ll ask about your equipment lifecycle management:
Design Qualification: Do your User Requirements Specifications include lifecycle maintenance requirements? Are you specifying equipment with modular upgrade paths, or are you buying disposable assets?
Change Management: When you purchase upgraded equipment, what triggers its implementation? Is there a formal risk assessment linking equipment deterioration to product quality? Or do you wait for failures?
Preventive Maintenance: Are your PM intervals based on manufacturer recommendations, or on actual performance data? Do you have predictive maintenance programs using vibration analysis, thermal imaging, or particle counting?
Decommissioning: When equipment reaches end-of-life, do you have formal retirement protocols that assess data integrity impact? Or does old equipment sit in corners of the cleanroom “just in case”?
Training: Do your operators understand equipment lifecycle concepts? Can they recognize early degradation signals? Or do they just call maintenance when something breaks?
These aren’t theoretical questions. They’re directly from recent 483 observations and CRL deficiencies.
The Business Case: Why Equipment Lifecycle Management Is an Economic Imperative
Let’s be blunt: the pharmaceutical industry has treated equipment as a capital expense to be minimized, not an asset to be optimized. This is catastrophically wrong. The Apotex warning letter shows the true cost of this mindset:
Product recalls: Multiple ophthalmic and oral solutions recalled
Production suspension: Sterile manufacturing halted
Independent assessments: Required third-party evaluation of entire quality system
Reputational damage: Public warning letter, potential import alert
Opportunity cost: Products stuck in regulatory limbo while competitors gain market share
Contrast this with the investment required for proper lifecycle management:
Predictive maintenance systems: $50,000-200,000 for sensors and software
Enhanced training programs: $10,000-30,000 annually
Total: Less than the cost of a single batch recall
The ROI is undeniable. Equipment lifecycle management isn’t a cost center—it’s risk mitigation with quantifiable financial returns.
The CFO Conversation
I’ve had this conversation with CFOs more times than I can count. Here’s what works:
Don’t say: “We need more maintenance budget.”
Say: “Our current equipment lifecycle risk exposure is $X million based on recent CRL trends and warning letters. Investing $Y in lifecycle management reduces that risk by Z% and extends asset utilization by 2-3 years, deferring $W million in capital expenditures.”
Bring data. Show them the Apotex letter. Show them the Tab-cel CRL. Show them the 51 CRLs driven by facility concerns. CFOs understand risk-adjusted returns. Frame equipment lifecycle management as portfolio risk management, not engineering overhead.
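To show the shape of that conversation, here is the arithmetic with deliberately hypothetical numbers; none of these figures come from Apotex, Tab-cel, or any cited CRL.

```python
# Hypothetical figures for illustration only
recall_cost = 8_000_000          # one batch recall incl. investigation and disposal
recall_probability_5yr = 0.30    # estimated over five years with run-to-fail maintenance
risk_reduction = 0.60            # assumed effect of a predictive maintenance program
program_cost_5yr = 5 * 150_000   # sensors, software, and training per year

expected_loss_avoided = recall_cost * recall_probability_5yr * risk_reduction
print(f"Expected loss avoided over 5 years: ${expected_loss_avoided:,.0f}")
print(f"Program cost over 5 years:          ${program_cost_5yr:,.0f}")
print(f"Risk-adjusted return:               {expected_loss_avoided / program_cost_5yr:.1f}x")
```

Even with conservative assumptions the program more than pays for itself, and the argument only strengthens once deferred capital expenditure and avoided downtime are added.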
Practical Framework: Building an Equipment Lifecycle Management Program
Enough theory. Here’s the practical framework I’ve implemented across multiple DS facilities, refined through inspections, and validated against regulatory expectations.
Phase 1: Asset Criticality Assessment
Not all equipment deserves equal lifecycle attention. Use a risk-based approach:
Criticality Class A (Direct Impact): Equipment whose failure directly impacts product quality, safety, or efficacy. Bioreactors, purification skids, sterile filling lines, environmental monitoring systems. These require full lifecycle management including continuous monitoring, predictive maintenance, and formal retirement protocols.
Criticality Class B (Indirect Impact): Equipment whose failure impacts GMP environment but not direct product attributes. HVAC units, WFI systems, clean steam generators. These require enhanced lifecycle management with robust PM programs and performance trending.
Criticality Class C (No Impact): Non-GMP equipment. Standard maintenance practices apply.
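A minimal sketch of how this triage can be encoded in an asset register, using two assumed screening questions rather than any official taxonomy:

```python
def criticality_class(direct_product_impact: bool,
                      gmp_environment_impact: bool) -> str:
    """Assign a lifecycle-management tier from two risk questions.

    Class A: failure directly affects product quality, safety, or efficacy.
    Class B: failure affects the GMP environment but not direct product attributes.
    Class C: no GMP impact; standard maintenance applies.
    """
    if direct_product_impact:
        return "A"
    if gmp_environment_impact:
        return "B"
    return "C"

# Hypothetical asset register entries
assets = {
    "Bioreactor BR-201": criticality_class(True, True),    # -> A
    "HVAC AHU-7":        criticality_class(False, True),   # -> B
    "Office printer":    criticality_class(False, False),  # -> C
}
print(assets)
```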
Phase 2: Lifecycle Documentation Architecture
Create a master equipment lifecycle file for each Class A and B asset containing:
User Requirements Specification with lifecycle maintenance requirements
Design Qualification including maintainability and upgrade path assessment
Commissioning Protocol (IQ/OQ/PQ) with acceptance criteria that remain valid throughout lifecycle
Maintenance Master Plan defining PM intervals, spare parts strategy, and predictive monitoring
Performance Trending Protocol specifying parameters to monitor, alert limits, and review frequency
Change Management History documenting all modifications with impact assessment
Retirement Protocol defining end-of-life triggers and data migration requirements
As I’ve written about in my posts on GMP-critical systems, these must be living documents that evolve with the asset, not static files that gather dust after qualification.
Phase 3: Predictive Maintenance Implementation
Move beyond manufacturer-recommended intervals to condition-based maintenance:
Vibration analysis for rotating equipment (pumps, agitators)
Thermal imaging for electrical systems and heat transfer equipment
Particle counting for cleanroom equipment and filtration systems
Pressure decay testing for sterile barrier systems
Oil analysis for hydraulic and lubrication systems
The goal is to detect degradation 6-12 months before failure, allowing planned intervention during scheduled shutdowns.
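What this looks like in code is unglamorous: a rolling trend check on a monitored parameter that flags drift while every individual reading still passes. The particle counts, window size, and slope threshold below are hypothetical.

```python
import statistics

def degradation_alert(history, window=6, slope_limit=50.0, action_limit=3520):
    """Flag equipment for planned intervention when a monitored parameter
    is trending upward, even though it is still below the action limit.

    history: chronological readings (e.g., monthly 0.5 um particle counts per m3).
    slope_limit: allowed average increase per period before flagging.
    """
    if len(history) < window:
        return False
    recent = history[-window:]
    # average period-over-period change across the window
    slope = statistics.mean(b - a for a, b in zip(recent, recent[1:]))
    still_passing = recent[-1] < action_limit
    return still_passing and slope > slope_limit

# Hypothetical monthly counts: every value "passes", but the trend is unmistakable
counts = [800, 820, 900, 1010, 1150, 1320, 1540]
print(degradation_alert(counts))  # True -> schedule seal replacement at next shutdown
```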
Phase 4: Integrated Change Control
Equipment changes must flow through formal change control with:
Technical assessment by engineering and quality
Risk evaluation using FMEA or similar tools
Regulatory assessment for potential prior approval requirements
Implementation planning with interim controls if needed
Post-implementation review to verify effectiveness
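For the risk-evaluation step in the list above, a minimal FMEA-style sketch; the scoring scales and action threshold are assumptions that a real program would define in a site SOP rather than borrow from a blog post.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int     # 1-10: impact on product quality / patient safety
    occurrence: int   # 1-10: likelihood given current equipment condition
    detection: int    # 1-10: 10 = failure would NOT be detected before release

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Stopper-bowl gasket wear sheds particles", 9, 6, 7),
    FailureMode("Glove pinhole during intervention",        8, 4, 5),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    action = "interim controls + expedite change" if m.rpn >= 200 else "monitor and trend"
    print(f"RPN {m.rpn:>3}  {m.description}: {action}")
```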
The Apotex case shows what happens when you skip the interim controls. They identified the need for upgraded equipment (change) but failed to implement the necessary bridge measures to ensure product quality while waiting for that equipment to come online. They allowed the “future state” (new equipment) to become an excuse for neglecting the “current state” (deteriorating equipment).
This is a failure of Change Management Logic. In a robust quality system, the moment you identify that equipment requires replacement due to performance degradation, you have acknowledged a risk. If you cannot replace it immediately—due to capital cycles, lead times, or qualification timelines—you must implement interim controls to mitigate that risk.
For Apotex, those interim controls should have been:
Reduced run durations to minimize stress on failing seals.
Shortened maintenance intervals (replacing gaskets every batch instead of every campaign).
Enhanced environmental monitoring focused specifically on the degraded zones.
Instead, they did nothing. They continued business as usual, likely comforting themselves with the purchase order for the new machine. The FDA’s response was unambiguous: A purchase order is not a CAPA. Until the new equipment is qualified and operational, your legacy equipment must remain in a state of control, or production must stop. There is no regulatory “grace period” for deteriorating assets.
Phase 5: The Cultural Shift—From “Repair” to “Reliability”
The final and most difficult phase of this framework is cultural. You cannot write a SOP for this; you have to lead it.
Most organizations operate on a “Break-Fix” mentality:
Equipment runs until it alarms or fails.
Maintenance fixes it.
Quality investigates (or papers over) the failure.
Production resumes.
The FDA’s “Lifecycle State of Control” demands a “Predict-Prevent” mentality:
Equipment is monitored for degradation signals (vibration, heat, particle counts).
Maintenance intervenes before failure limits are reached.
Quality reviews trends to confirm the intervention was effective.
Production continues uninterrupted.
To achieve this, you need to change how you incentivize your teams. Stop rewarding “heroic” fixes at 2 AM. Start rewarding the boring, invisible work of preventing the failure in the first place. As I’ve written before regarding Quality Management Maturity (QMM), mature quality systems are quiet systems. Chaos is not a sign of hard work; it’s a sign of lost control.
Conclusion: The Choice Before Us
The warning letter to Apotex Inc. and the rising tide of facility-related CRLs are not random compliance noise. They are signal flares. The regulatory expectations for equipment management have fundamentally shifted from static qualification (Is it validated?) to dynamic lifecycle management (Is it in a state of control right now?).
The FDA, EMA, and PIC/S have converged on a single truth: You cannot assure product quality if you cannot guarantee equipment performance.
We are at an inflection point. The industry’s aging infrastructure, combined with the increasing complexity of biologic processes and the unforgiving nature of residue control, has created a perfect storm. We can no longer treat equipment maintenance as a lower-tier support function. It is a core GMP activity, equal in criticality to batch record review or sterility testing.
As Quality Leaders, we have two choices:
The Apotex Path: Treat equipment upgrades as capital headaches to be deferred. Ignore the “minor” leaks and “insignificant” residues. Let the maintenance team bandage the wounds while we focus on “strategic” initiatives. This path leads to 483s, warning letters, CRLs, and the excruciating public failure of seeing your facility’s name in an FDA press release.
The Lifecycle Path: Embrace the complexity. Resource the predictive maintenance programs. Validate the residue removal. Treat every equipment change as a potential risk to patient safety. Build a system where equipment reliability is the foundation of your quality strategy, not an afterthought.
The second path is expensive. It is technically demanding. It requires fighting for budget dollars that don’t have immediate ROI. But it allows you to sleep at night, knowing that when—not if—the FDA investigator asks to see your equipment maintenance history, you won’t have to explain why you used a cable tie to fix a glove port.
You’ll simply show them the data that proves you’re in control.
How the Quality Industry Repackaged Existing Practices and Called Them Revolutionary
As someone who has spent decades implementing computer system validation practices across multiple regulated environments, I consistently find myself skeptical of the breathless excitement surrounding Computer System Assurance (CSA). The pharmaceutical quality community’s enthusiastic embrace of CSA as a revolutionary departure from traditional Computer System Validation (CSV) represents a troubling case study in how our industry allows consultants to rebrand established practices as breakthrough innovations, selling back to us concepts we’ve been applying for over two decades.
The truth is both simpler and more disappointing than the CSA evangelists would have you believe: there is nothing fundamentally new in computer system assurance that wasn’t already embedded in risk-based validation approaches, GAMP5 principles, or existing regulatory guidance. What we’re witnessing is not innovation, but sophisticated marketing—a coordinated effort to create artificial urgency around “modernizing” validation practices that were already fit for purpose.
The Historical Context: Why We Need to Remember Where We Started
To understand why CSA represents more repackaging than revolution, we must revisit the regulatory and industry context from which our current validation practices emerged. Computer system validation didn’t develop in a vacuum—it arose from genuine regulatory necessity in response to real-world failures that threatened patient safety and product quality.
The origins of systematic software validation in regulated industries trace back to military applications in the 1960s, specifically independent verification and validation (IV&V) processes developed for critical defense systems. The pharmaceutical industry’s adoption of these concepts began in earnest during the 1970s as computerized systems became more prevalent in drug manufacturing and quality control operations.
The regulatory foundation for what we now call computer system validation was established through a series of FDA guidance documents throughout the 1980s and 1990s. The 1983 FDA “Guide to Inspection of Computerized Systems in Drug Processing” represented the first systematic approach to ensuring the reliability of computer-based systems in pharmaceutical manufacturing. This was followed by increasingly sophisticated guidance, culminating in 21 CFR Part 11 in 1997 and the “General Principles of Software Validation” in 2002.
These regulations didn’t emerge from academic theory—they were responses to documented failures. The FDA’s analysis of 3,140 medical device recalls between 1992 and 1998 revealed that 242 (7.7%) were attributable to software failures, with 192 of those (79%) caused by defects introduced during software changes after initial deployment. Computer system validation developed as a systematic response to these real-world risks, not as an abstract compliance exercise.
The GAMP Evolution: Building Risk-Based Practices from the Ground Up
Perhaps no single development better illustrates how the industry has already solved the problems CSA claims to address than the evolution of the Good Automated Manufacturing Practice (GAMP) guidelines. GAMP didn’t start as a theoretical framework—it emerged from practical necessity when FDA inspectors began raising concerns about computer system validation during inspections of UK pharmaceutical facilities in 1991.
The GAMP community’s response was methodical and evidence-based. Rather than creating bureaucratic overhead, GAMP sought to provide a practical framework that would satisfy regulatory requirements while enabling business efficiency. Each revision of GAMP incorporated lessons learned from real-world implementations:
GAMP 1 (1994) focused on standardizing validation activities for computerized systems, addressing the inconsistency that characterized early validation efforts.
GAMP 2 and 3 (1995-1998) introduced early concepts of risk-based approaches and expanded scope to include IT infrastructure, recognizing that validation needed to be proportional to risk rather than uniformly applied.
GAMP 4 (2001) emphasized a full system lifecycle model and defined clear validation deliverables, establishing the structured approach that remains fundamentally unchanged today.
GAMP 5 (2008) represented a decisive shift toward risk-based validation, promoting scalability and efficiency while maintaining regulatory compliance. This version explicitly recognized that validation effort should be proportional to the system’s impact on product quality, patient safety, and data integrity.
The GAMP 5 software categorization system (Categories 1, 3, 4, and 5, with Category 2 eliminated as obsolete) provided the risk-based framework that CSA proponents now claim as innovative. A Category 1 infrastructure software requires minimal validation beyond verification of installation and version control, while a Category 5 custom application demands comprehensive lifecycle validation including detailed functional and design specifications. This isn’t just risk-based thinking—it’s risk-based practice that has been successfully implemented across thousands of systems for over fifteen years.
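A minimal sketch of how that categorization drives validation scope in practice; the deliverable lists are condensed and indicative, paraphrasing common GAMP-style practice rather than quoting the guide.

```python
# Indicative mapping of GAMP 5 software category to validation activities
GAMP_SCOPE = {
    1: ["record version", "verify installation", "control via IT change management"],
    3: ["risk assessment", "verify installation", "test against requirements (fitness for use)"],
    4: ["risk assessment", "supplier assessment", "test configured functions",
        "user acceptance testing", "ongoing change control"],
    5: ["risk assessment", "supplier/development assessment", "functional & design specifications",
        "code review where warranted", "full lifecycle testing", "ongoing change control"],
}

def validation_scope(category: int) -> list[str]:
    """Return the indicative validation activities for a GAMP software category.
    (Category 2 was retired in GAMP 5, hence its absence.)"""
    return GAMP_SCOPE[category]

print(validation_scope(4))  # e.g., a configured LIMS module
```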
The Risk-Based Spectrum: What GAMP Already Taught Us
One of the most frustrating aspects of CSA advocacy is how it presents risk-based validation as a novel concept. The pharmaceutical industry has been applying risk-based approaches to computer system validation since the early 2000s, not as a revolutionary breakthrough, but as basic professional competence.
The foundation of risk-based validation rests on a simple principle: validation rigor should be proportional to the potential impact on product quality, patient safety, and data integrity. This principle was explicitly articulated in ICH Q9 (Quality Risk Management) and embedded throughout GAMP 5, creating what is effectively a validation spectrum rather than a binary validated/not-validated state.
At the lower end of this spectrum, we find systems with minimal GMP impact—infrastructure software, standard office applications used for non-GMP purposes, and simple monitoring tools that generate no critical data. For these systems, validation consists primarily of installation verification and fitness-for-use confirmation, with minimal documentation requirements.
In the middle of the spectrum are configurable commercial systems—LIMS, ERP modules, and manufacturing execution systems that require configuration to meet specific business needs. These systems demand functional testing of configured elements, user acceptance testing, and ongoing change control, but can leverage supplier documentation and industry standard practices to streamline validation efforts.
At the high end of the spectrum are custom applications and systems with direct impact on batch release decisions, patient safety, or regulatory submissions. These systems require comprehensive validation including detailed functional specifications, extensive testing protocols, and rigorous change control procedures.
The elegance of this approach is that it scales validation effort appropriately while maintaining consistent quality outcomes. A risk assessment determines where on the spectrum a particular system falls, and validation activities align accordingly. This isn’t theoretical—it’s been standard practice in well-run validation programs for over a decade.
The 2003 FDA Guidance: The CSA Framework Hidden in Plain Sight
Perhaps the most damning evidence that CSA represents repackaging rather than innovation lies in the 2003 FDA guidance “Part 11, Electronic Records; Electronic Signatures — Scope and Application.” This guidance, issued over twenty years ago, contains virtually every principle that CSA advocates now present as revolutionary insights.
The 2003 guidance established several critical principles that directly anticipate CSA approaches:
Narrow Scope Interpretation: The FDA explicitly stated that Part 11 would only be enforced for records required to be kept where electronic versions are used in lieu of paper, avoiding the over-validation that characterized early Part 11 implementations.
Risk-Based Enforcement: Rather than treating Part 11 as a checklist, the FDA indicated that enforcement priorities would be risk-based, focusing on systems where failures could compromise data integrity or patient safety.
Legacy System Pragmatism: The guidance exercised discretion for systems implemented before 1997, provided they were fit for purpose and maintained data integrity.
Focus on Predicate Rules: Companies were encouraged to focus on fulfilling underlying regulatory requirements rather than treating Part 11 as an end in itself.
Innovation Encouragement: The guidance explicitly stated that “innovation should not be stifled” by fear of Part 11, encouraging adoption of new technologies provided they maintained appropriate controls.
These principles—narrow scope, risk-based approach, pragmatic implementation, focus on underlying requirements, and innovation enablement—constitute the entire conceptual framework that CSA now claims as its contribution to validation thinking. The 2003 guidance didn’t just anticipate CSA; it embodied CSA principles in FDA policy over two decades before the “Computer Software Assurance” marketing campaign began.
The EU Annex 11 Evolution: Proof That the System Was Already Working
The evolution of EU GMP Annex 11 provides another powerful example of how existing regulatory frameworks have continuously incorporated the principles that CSA now claims as innovations. The current Annex 11, dating from 2011, already included most elements that CSA advocates present as breakthrough thinking.
The original Annex 11 established several key principles that remain relevant today:
Risk-Based Validation: Clause 1 requires that “Risk management should be applied throughout the lifecycle of the computerised system taking into account patient safety, data integrity and product quality”—a clear articulation of risk-based thinking.
Supplier Assessment: The regulation required assessment of suppliers and their quality systems, anticipating the “trusted supplier” concepts that CSA emphasizes.
Lifecycle Management: Annex 11 required that systems be validated and maintained in a validated state throughout their operational life.
Change Control: The regulation established requirements for managing changes to validated systems.
Data Integrity: Electronic records requirements anticipated many of the data integrity concerns that now drive validation practices.
The 2025 draft revision of Annex 11 represents evolution, not revolution. While the document has expanded significantly, most additions address technological developments—cloud computing, artificial intelligence, cybersecurity—rather than fundamental changes in validation philosophy. The core principles remain unchanged: risk-based validation, lifecycle management, supplier oversight, and data integrity protection.
Importantly, the draft Annex 11 demonstrates regulatory convergence rather than divergence. The revision aligns more closely with FDA CSA guidance, GAMP 5 second edition, ICH Q9, and ISO 27001. This alignment doesn’t validate CSA as revolutionary—it demonstrates that global regulators recognize the maturity and effectiveness of existing validation approaches.
The FDA CSA Final Guidance: Official Release and the Repackaging of Established Principles
On September 24, 2025, the FDA officially published its final guidance on “Computer Software Assurance for Production and Quality System Software,” marking the culmination of a three-year journey from draft to final policy. This final guidance, while presented as a modernization breakthrough by consulting industry advocates, provides perhaps the clearest evidence yet that CSA represents sophisticated rebranding rather than genuine innovation.
The Official Position: Supplement, Not Revolution
The FDA’s own language reveals the evolutionary rather than revolutionary nature of CSA. The guidance explicitly states that it “supplements FDA’s guidance, ‘General Principles of Software Validation'” with one notable exception: “this guidance supersedes Section 6: Validation of Automated Process Equipment and Quality System Software of the Software Validation guidance”.
This measured approach directly contradicts the consulting industry narrative that positions CSA as a wholesale replacement for traditional validation approaches. The FDA is not abandoning established software validation principles—it is refining their application to production and quality system software while maintaining the fundamental framework that has served the industry effectively for over two decades.
What Actually Changed: Evolutionary Refinement
The final guidance incorporates several refinements that demonstrate the FDA’s commitment to practical implementation rather than theoretical innovation:
Risk-Based Framework Formalization: The guidance provides explicit criteria for determining “high process risk” versus “not high process risk” software functions, creating a binary classification system that simplifies risk assessment while maintaining proportionate validation effort. However, this risk-based thinking merely formalizes the spectrum approach that mature GAMP implementations have applied for years.
Cloud Computing Integration: The guidance addresses Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) deployments, providing clarity on when cloud-based systems require validation. This represents adaptation to technological evolution rather than philosophical innovation—the same risk-based principles apply regardless of deployment model.
Unscripted Testing Validation: The guidance explicitly endorses “unscripted testing” as an acceptable validation approach, encouraging “exploratory, ad hoc, and unscripted testing methods” when appropriate. This acknowledgment of testing methods that experienced practitioners have used for years represents regulatory catch-up rather than breakthrough thinking.
Digital Evidence Acceptance: The guidance states that “FDA recommends incorporating the use of digital records and digital signature capabilities rather than duplicating results already digitally retained,” providing regulatory endorsement for practices that reduce documentation burden. Again, this formalizes efficiency measures that sophisticated organizations have implemented within existing frameworks.
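The “high process risk” determination in the first refinement above reduces to a simple gate. The sketch below paraphrases the decision logic as I read the guidance; it is not the FDA’s wording, and a real assessment documents the rationale, not just the answer.

```python
def assurance_approach(directly_affects_product_quality_or_safety: bool) -> str:
    """Crude paraphrase of the CSA risk gate: high process risk functions get
    scripted, documented testing; everything else can rely on least-burdensome,
    unscripted assurance activities."""
    if directly_affects_product_quality_or_safety:
        return "high process risk: scripted testing, documented objective evidence"
    return "not high process risk: unscripted/exploratory testing, leverage supplier and digital records"

# e.g., an automated in-process weight check vs. a training-records dashboard
print(assurance_approach(True))
print(assurance_approach(False))
```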
The Definitional Games: CSA Versus CSV
The final guidance provides perhaps the most telling evidence of CSA’s repackaging nature through its definition of Computer Software Assurance: “a risk-based approach for establishing and maintaining confidence that software is fit for its intended use”. This definition could have been applied to effective computer system validation programs throughout the past two decades without modification.
The guidance emphasizes that CSA “follows a least-burdensome approach, where the burden of validation is no more than necessary to address the risk”. This principle was explicitly articulated in ICH Q9 (Quality Risk Management) published in 2005 and embedded in GAMP 5 guidance from 2008. The FDA is not introducing least-burdensome thinking—it is providing regulatory endorsement for principles that the industry has applied successfully for over fifteen years.
More significantly, the guidance acknowledges that CSA “establishes and maintains that the software used in production or the quality system is in a state of control throughout its life cycle (‘validated state’)”. The concept of maintaining validated state through lifecycle management represents core computer system validation thinking that predates CSA by decades.
Practical Examples: Repackaged Wisdom
The final guidance includes four detailed examples in Appendix A that demonstrate CSA application to real-world scenarios: Nonconformance Management Systems, Learning Management Systems, Business Intelligence Applications, and Software as a Service (SaaS) Product Life Cycle Management Systems. These examples provide valuable practical guidance, but they illustrate established validation principles rather than innovative approaches.
Consider the Nonconformance Management System example, which demonstrates risk assessment, supplier evaluation, configuration testing, and ongoing monitoring. Each element represents standard GAMP-based validation practice:
Risk Assessment: Determining that failure could impact product quality aligns with established risk-based validation principles
Supplier Evaluation: Assessing vendor development practices and quality systems follows GAMP supplier guidance
Configuration Testing: Verifying that system configuration meets business requirements represents basic user acceptance testing
Ongoing Monitoring: Maintaining validated state through change control and periodic review embodies lifecycle management concepts
The Business Intelligence Applications example similarly demonstrates established practices repackaged with CSA terminology. The guidance recommends focusing validation effort on “data integrity, accuracy of calculations, and proper access controls”—core concerns that experienced validation professionals have addressed routinely using GAMP principles.
The Regulatory Timing: Why Now?
The timing of the final CSA guidance publication reveals important context about regulatory motivation. The guidance development began in earnest in 2022, coinciding with increasing industry pressure to address digital transformation challenges, cloud computing adoption, and artificial intelligence integration in manufacturing environments.
However, the three-year development timeline suggests careful consideration rather than urgent need for wholesale validation reform. If existing validation approaches were fundamentally inadequate, we would expect more rapid regulatory response to address patient safety concerns. Instead, the measured development process indicates that the FDA recognized the adequacy of existing approaches while seeking to provide clearer guidance for emerging technologies.
The final guidance explicitly states that FDA “believes that applying a risk-based approach to computer software used as part of production or the quality system would better focus manufacturers’ quality assurance activities to help ensure product quality while helping to fulfill validation requirements”. This language acknowledges that existing approaches fulfill regulatory requirements—the guidance aims to optimize resource allocation rather than address compliance failures.
The Consulting Industry’s Role in Manufacturing Urgency
To understand why CSA has gained traction despite offering little genuine innovation, we must examine the economic incentives that drive consulting industry behavior. The computer system validation consulting market represents hundreds of millions of dollars annually, with individual validation projects ranging from tens of thousands to millions of dollars depending on system complexity and organizational scope.
This market faces a fundamental problem: mature practices don’t generate consulting revenue. If organizations understand that their current GAMP-based validation approaches are fundamentally sound and regulatory-compliant, they’re less likely to engage consultants for expensive “modernization” projects. CSA provides the solution to this problem by creating artificial urgency around practices that were already fit for purpose.
The CSA marketing campaign follows a predictable pattern that the consulting industry has used repeatedly across different domains:
Step 1: Problem Creation. Traditional CSV is portrayed as outdated, burdensome, and potentially non-compliant with evolving regulatory expectations. This creates anxiety among quality professionals who fear falling behind industry best practices.
Step 2: Solution Positioning. CSA is presented as the modern, efficient, risk-based alternative that leading organizations are already adopting. Early adopters are portrayed as innovative leaders, while traditional practitioners risk being perceived as laggards.
Step 3: Urgency Amplification. Regulatory changes (like the Annex 11 revision) are leveraged to suggest that traditional approaches may become non-compliant, requiring immediate action.
Step 4: Capability Marketing. Consulting firms position themselves as experts in the “new” CSA approach, offering training, assessment services, and implementation support for organizations seeking to “modernize” their validation practices.
This pattern is particularly insidious because it exploits legitimate professional concerns. Quality professionals genuinely want to ensure their practices remain current and effective. However, the CSA campaign preys on these concerns by suggesting that existing practices are inadequate when, in fact, they remain perfectly sufficient for regulatory compliance and business effectiveness.
The False Dichotomy: CSV Versus CSA
Perhaps the most misleading aspect of CSA promotion is the suggestion that organizations must choose between “traditional CSV” and “modern CSA” approaches. This creates a false dichotomy that obscures the reality: well-implemented GAMP-based validation programs already incorporate every principle that CSA advocates as innovative.
Consider the claimed distinctions between CSV and CSA:
Critical Thinking Over Documentation: CSA proponents suggest that traditional CSV focuses on documentation production rather than system quality. However, GAMP 5 has emphasized risk-based thinking and proportionate documentation for over fifteen years. Organizations producing excessive documentation were implementing GAMP poorly, not following its actual guidance.
Testing Over Paperwork: The claim that CSA prioritizes testing effectiveness over documentation completeness misrepresents both approaches. GAMP has always emphasized that validation should provide confidence in system performance, not just documentation compliance. The GAMP software categories explicitly scale testing requirements to risk levels.
Automation and Modern Technologies: CSA advocates present automation and advanced testing methods as CSA innovations. However, Annex 11 Clause 4.7 has required consideration of automated testing tools since 2011, and GAMP 5 second edition explicitly addresses agile development, cloud computing, and artificial intelligence.
Risk-Based Resource Allocation: The suggestion that CSA introduces risk-based resource allocation ignores decades of GAMP implementation where validation effort is explicitly scaled to system risk and business impact.
Supplier Leverage: CSA emphasis on leveraging supplier documentation and testing is presented as innovative thinking. However, GAMP has advocated supplier assessment and documentation leverage since its early versions, with detailed guidance on when and how to rely on supplier work.
The reality is that organizations with mature, well-implemented validation programs are already applying CSA principles without recognizing them as such. They conduct risk assessments, scale validation activities appropriately, leverage supplier documentation effectively, and focus resources on high-impact systems. They didn’t need CSA to tell them to think critically—they were already applying critical thinking to validation challenges.
The Spectrum Reality: Quality as a Continuous Variable
One of the most important concepts that both GAMP and effective validation practice have always recognized is that system quality exists on a spectrum, not as a binary state. Systems aren’t simply “validated” or “not validated”—they exist at various points along a continuum of validation rigor that corresponds to their risk profile and business impact.
This spectrum concept directly contradicts the CSA marketing message that suggests traditional validation approaches treat all systems identically. In reality, experienced validation professionals have always applied different approaches to different system types.
This spectrum approach enables organizations to allocate validation resources effectively while maintaining appropriate controls. A simple email archiving system doesn’t receive the same validation rigor as a batch manufacturing execution system—not because we’re cutting corners, but because the risks are fundamentally different.
CSA doesn’t introduce this spectrum concept—it restates principles that have been embedded in GAMP guidance for over a decade. The suggestion that traditional validation approaches lack risk-based thinking demonstrates either ignorance of GAMP principles or deliberate misrepresentation of current practices.
Regulatory Convergence: Proof of Existing Framework Maturity
The convergence of global regulatory approaches around risk-based validation principles provides compelling evidence that existing frameworks were already effective and didn’t require CSA “modernization.” The 2025 draft Annex 11 revision demonstrates this convergence clearly.
Key aspects of the draft revision align closely with established GAMP principles:
Risk Management Integration: Section 6 requires risk management throughout the system lifecycle, aligning with ICH Q9 and existing GAMP guidance.
Lifecycle Perspective: Section 4 emphasizes lifecycle management from planning through retirement, consistent with GAMP lifecycle models.
Supplier Oversight: Section 7 requires supplier qualification and ongoing assessment, building on existing GAMP supplier guidance.
Security Integration: Section 15 addresses cybersecurity as a GMP requirement, reflecting technological evolution rather than philosophical change.
Periodic Review: Section 14 mandates periodic system review, formalizing practices that mature organizations already implement.
This alignment doesn’t validate CSA as revolutionary—it demonstrates that global regulators recognize the effectiveness of existing risk-based validation approaches and are codifying them more explicitly. The fact that CSA principles align with regulatory evolution proves that these principles were already embedded in effective validation practice.
The finalized FDA guidance fits into this by providing educational clarity for validation professionals who have struggled to apply risk-based principles effectively. The detailed examples and explicit risk classification criteria offer practical guidance that can improve validation program implementation. This is not a call by the FDA for radical change; it is an educational restatement of the current consensus.
The Technical Reality: What Actually Drives System Quality
Beneath the consulting industry rhetoric about CSA lies a more fundamental question: what actually drives computer system quality in regulated environments? The answer has remained consistent across decades of validation practice and won’t change regardless of whether we call our approach CSV, CSA, or any other acronym.
System quality derives from several key factors that transcend validation methodology:
Requirements Definition: Systems must be designed to meet clearly defined user requirements that align with business processes and regulatory obligations. Poor requirements lead to poor systems regardless of validation approach.
Supplier Competence: The quality of the underlying software depends fundamentally on the supplier’s development practices, quality systems, and technical expertise. Validation can detect defects but cannot create quality that wasn’t built into the system.
Configuration Control: Proper configuration of commercial systems requires deep understanding of both the software capabilities and the business requirements. Poor configuration creates risks that no amount of validation testing can eliminate.
Change Management: System quality degrades over time without effective change control processes that ensure modifications maintain validated status. This requires ongoing attention regardless of initial validation approach.
User Competence: Even perfectly validated systems fail if users lack adequate training, motivation, or procedural guidance. Human factors often determine system effectiveness more than technical validation.
Operational Environment: Systems must be maintained within their designed operational parameters—appropriate hardware, network infrastructure, security controls, and environmental conditions. Environmental failures can compromise even well-validated systems.
These factors have driven system quality throughout the history of computer system validation and will continue to do so regardless of methodological labels. CSA doesn’t address any of these fundamental quality drivers differently than GAMP-based approaches—it simply rebrands existing practices with contemporary terminology.
The Economics of Validation: Why Efficiency Matters
One area where CSA advocates make legitimate points involves the economics of validation practice. Poor validation implementations can indeed create excessive costs and time delays that provide minimal risk reduction benefit. However, these problems result from poor implementation, not inherent methodological limitations.
Effective validation programs have always balanced several economic considerations:
Resource Allocation: Validation effort should be concentrated on systems with the highest risk and business impact. Organizations that validate all systems identically are misapplying GAMP principles, not following them.
Documentation Efficiency: Validation documentation should support business objectives rather than existing for its own sake. Excessive documentation often results from misunderstanding regulatory requirements rather than regulatory over-reach.
Testing Effectiveness: Validation testing should build confidence in system performance rather than simply following predetermined scripts. Effective testing combines scripted protocols with exploratory testing, automated validation, and ongoing monitoring.
Lifecycle Economics: The total cost of validation includes initial validation plus ongoing maintenance throughout the system lifecycle. Front-end investment in robust validation often reduces long-term operational costs (see the arithmetic sketch after this list).
Opportunity Cost: Resources invested in validation could be applied to other quality improvements. Effective validation programs consider these opportunity costs and optimize overall quality outcomes.
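As a purely illustrative piece of arithmetic, the lifecycle-economics point can be sketched in a few lines. The figures below are invented for illustration, not benchmarks.

```python
# Illustrative sketch only: lifecycle (total) cost of validation with
# hypothetical figures, showing why front-end rigor can pay back over time.
initial_validation_robust = 80_000    # one-time effort at deployment, thorough
initial_validation_minimal = 40_000   # one-time effort at deployment, cursory
annual_maintenance_robust = 10_000    # change control and periodic review after a robust start
annual_maintenance_minimal = 25_000   # rework, deviations, and repeat testing after a weak start
years = 7

tco_robust = initial_validation_robust + years * annual_maintenance_robust
tco_minimal = initial_validation_minimal + years * annual_maintenance_minimal

print(f"Robust initial validation, {years}-year total cost:  ${tco_robust:,}")
print(f"Minimal initial validation, {years}-year total cost: ${tco_minimal:,}")
```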
These economic principles aren’t CSA innovations—they’re basic project management applied to validation activities. Organizations experiencing validation inefficiencies typically suffer from poor implementation of established practices rather than inadequate methodological guidance.
The Agile Development Challenge: Old Wine in New Bottles
One area where CSA advocates claim particular expertise involves validating systems developed using agile methodologies, continuous integration/continuous deployment (CI/CD), and other modern software development approaches. This represents a more legitimate consulting opportunity because these development methods do create genuine challenges for traditional validation approaches.
However, the validation industry’s response to agile development demonstrates both the adaptability of existing frameworks and the consulting industry’s tendency to oversell new approaches as revolutionary breakthroughs.
GAMP 5 second edition, published in 2022, explicitly addresses agile development challenges and provides guidance for validating systems developed using modern methodologies. The core principles remain unchanged—validation should provide confidence that systems are fit for their intended use—but the implementation approaches adapt to different development lifecycles.
Key adaptations for agile development include:
Iterative Validation: Rather than conducting validation at the end of development, validation activities occur throughout each development sprint, allowing for earlier defect detection and correction.
Automated Testing Integration: Automated testing tools become part of the validation approach rather than separate activities, leveraging the automated testing that agile development teams already implement (see the example after this list).
Risk-Based Prioritization: User stories and system features are prioritized based on risk assessment, ensuring that high-risk functionality receives appropriate validation attention.
Continuous Documentation: Documentation evolves continuously rather than being produced as discrete deliverables, aligning with agile documentation principles.
Supplier Collaboration: Validation activities are integrated with supplier development processes rather than conducted independently, leveraging the transparency that agile methods provide.
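As a sketch of the automated-testing point above, the following pytest-style example shows the kind of check an agile team already runs on every commit and that a validation program could cite as objective evidence, provided the tool itself has a documented adequacy assessment. The function under test, the requirement identifier, and the tolerances are hypothetical.

```python
# Illustrative sketch only: an automated check maintained in CI that could be
# leveraged as validation evidence. Function, requirement ID, and values are
# hypothetical.
import pytest


def reconstituted_concentration(mass_mg: float, volume_ml: float) -> float:
    """Hypothetical production-code function: concentration in mg/mL."""
    if volume_ml <= 0:
        raise ValueError("volume must be positive")
    return mass_mg / volume_ml


@pytest.mark.parametrize(
    "mass_mg, volume_ml, expected",
    [
        (100.0, 10.0, 10.0),  # nominal case
        (50.0, 20.0, 2.5),    # fractional result
    ],
)
def test_concentration_calculation(mass_mg, volume_ml, expected):
    # Traceable to hypothetical requirement URS-042: calculation accuracy.
    assert reconstituted_concentration(mass_mg, volume_ml) == pytest.approx(expected)


def test_concentration_rejects_zero_volume():
    # Error handling and data limits, consistent with Annex 11 clause 4.7 expectations.
    with pytest.raises(ValueError):
        reconstituted_concentration(100.0, 0.0)
```

When such tests run on every commit and their results are retained, the validation program can reference them directly instead of re-executing equivalent steps manually, which is precisely the supplier- and tooling-leverage GAMP has long described.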
These adaptations represent evolutionary, and often modest, improvements in validation practice rather than revolutionary breakthroughs. They address genuine challenges created by modern development methods while maintaining the fundamental goal of ensuring system fitness for intended use.
The Cloud Computing Reality: Infrastructure Versus Application
Another area where CSA advocates claim particular relevance involves cloud-based systems and Software as a Service (SaaS) applications. This represents a more legitimate area of methodological development because cloud computing does create genuine differences in validation approach compared to traditional on-premises systems.
However, the core validation challenges remain unchanged: organizations must ensure that cloud-based systems are fit for their intended use, maintain data integrity, and comply with applicable regulations. The differences lie in implementation details rather than fundamental principles.
Key considerations for cloud-based system validation include:
Shared Responsibility Models: Cloud providers and customers share responsibility for different aspects of system security and compliance. Validation approaches must clearly delineate these responsibilities and ensure appropriate controls at each level (a minimal sketch follows this list).
Supplier Assessment: Cloud providers require more extensive assessment than traditional software suppliers because they control critical infrastructure components that customers cannot directly inspect.
Data Residency and Transfer: Cloud systems often involve data transfer across geographic boundaries and storage in multiple locations. Validation must address these data handling practices and their regulatory implications.
Service Level Agreements: Cloud services operate under different availability and performance models than on-premises systems. Validation approaches must adapt to these service models.
Continuous Updates: Cloud providers often update their services more frequently than traditional software suppliers. Change control processes must adapt to this continuous update model.
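A minimal sketch of the shared-responsibility point, assuming a hypothetical SaaS deployment: the value is not the code but the discipline of recording a named owner for every control area and flagging anything left unassigned. The control areas and owners below are examples, not a regulatory checklist.

```python
# Illustrative sketch only: a simple shared-responsibility record for a SaaS
# system, used to confirm every control area has a named owner.
from __future__ import annotations

SHARED_RESPONSIBILITY = {
    "physical infrastructure security": "cloud provider",
    "platform patching and availability": "cloud provider",
    "application configuration": "customer",
    "user access management": "customer",
    "audit trail review": "customer",
    "backup execution": "cloud provider",
    "backup restoration verification": "customer",
}

VALID_OWNERS = {"cloud provider", "customer", "shared"}


def unassigned_controls(matrix: dict[str, str]) -> list[str]:
    """Flag any control area without a clearly assigned, recognized owner."""
    return [area for area, owner in matrix.items() if owner not in VALID_OWNERS]


if __name__ == "__main__":
    gaps = unassigned_controls(SHARED_RESPONSIBILITY)
    print("Unassigned control areas:", gaps or "none")
```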
These considerations require adaptation of validation practices but don’t invalidate existing principles. Organizations can validate cloud-based systems using GAMP principles with appropriate modification for cloud-specific characteristics. CSA doesn’t provide fundamentally different guidance—it repackages existing adaptation strategies with cloud-specific terminology.
The Data Integrity Connection: Where Real Innovation Occurs
One area where legitimate innovation has occurred in pharmaceutical quality involves data integrity practices and their integration with computer system validation. The FDA’s data integrity guidance documents, EU data integrity guidelines, and industry best practices have evolved significantly over the past decade, creating genuine opportunities for improved validation approaches.
However, this evolution represents refinement of existing principles rather than replacement of established practices. Data integrity concepts build directly on computer system validation foundations:
ALCOA+ Principles: Attributable, Legible, Contemporaneous, Original, Accurate data requirements, plus Complete, Consistent, Enduring, and Available requirements, extend traditional validation concepts to address specific data handling challenges (see the sketch after this list).
Audit Trail Requirements: Enhanced audit trail capabilities build on existing Part 11 requirements while addressing modern data manipulation risks.
System Access Controls: Improved user authentication and authorization extend traditional computer system security while addressing contemporary threats.
Data Lifecycle Management: Systematic approaches to data creation, processing, review, retention, and destruction integrate with existing system lifecycle management.
Risk-Based Data Review: Proportionate data review approaches apply risk-based thinking to data integrity challenges.
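As an illustration of how ALCOA+ attributes map onto a concrete record, here is a minimal sketch of an append-only audit trail entry. The field names and the checksum scheme are hypothetical and are not presented as a Part 11 implementation.

```python
# Illustrative sketch only: an audit trail entry capturing ALCOA+ attributes.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json


@dataclass(frozen=True)
class AuditEntry:
    user_id: str    # Attributable: who performed the action
    action: str     # what was done
    old_value: str  # Original: prior value preserved, never overwritten
    new_value: str
    reason: str     # why the change was made
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )  # Contemporaneous: recorded at the time of the action

    def checksum(self) -> str:
        """Tamper evidence: hash of the entry contents (supports Enduring/Accurate)."""
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


if __name__ == "__main__":
    entry = AuditEntry("jdoe", "update specification limit", "95.0", "97.0", "CAPA-123 outcome")
    print(entry.timestamp, entry.checksum()[:12])
```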
These developments represent genuine improvements in validation practice that address real regulatory and business challenges. They demonstrate how existing frameworks can evolve to address new challenges without requiring wholesale replacement of established approaches.
The Training and Competence Reality: Where Change Actually Matters
Perhaps the area where CSA advocates make the most legitimate points involves training and competence development for validation professionals. Traditional validation training has often focused on procedural compliance rather than risk-based thinking, creating practitioners who can follow protocols but struggle with complex risk assessment and decision-making.
This competence gap creates real problems in validation practice:
Protocol-Following Over Problem-Solving: Validation professionals trained primarily in procedural compliance may miss system risks that don’t fit predetermined testing categories.
Documentation Focus Over Quality Focus: Emphasis on documentation completeness can obscure the underlying goal of ensuring system fitness for intended use.
Risk Assessment Limitations: Many validation professionals lack the technical depth needed for effective risk assessment of complex modern systems.
Regulatory Interpretation Challenges: Understanding the intent behind regulatory requirements rather than just their literal text requires experience and training that many practitioners lack.
Technology Evolution: Rapid changes in information technology create knowledge gaps for validation professionals trained primarily on traditional systems.
These competence challenges represent genuine opportunities for improvement in validation practice. However, they result from inadequate implementation of existing approaches rather than flaws in the approaches themselves. GAMP has always emphasized risk-based thinking and proportionate validation—the problem lies in how practitioners are trained and supported, not in the methodological framework.
Effective responses to these competence challenges include:
Risk-Based Training: Education programs that emphasize risk assessment and critical thinking rather than procedural compliance.
Technical Depth Development: Training that builds understanding of information technology principles rather than just validation procedures.
Regulatory Context Education: Programs that help practitioners understand the regulatory intent behind validation requirements.
Scenario-Based Learning: Training that uses complex, real-world scenarios rather than simplified examples.
Continuous Learning Programs: Ongoing education that addresses technology evolution and regulatory changes.
These improvements can be implemented within existing GAMP frameworks without requiring adoption of any ‘new’ paradigm. They address real professional development needs while building on established validation principles.
The Measurement Challenge: How Do We Know What Works?
One of the most frustrating aspects of the CSA versus CSV debate is the lack of empirical evidence supporting claims of CSA superiority. Validation effectiveness ultimately depends on measurable outcomes: system reliability, regulatory compliance, cost efficiency, and business enablement. However, CSA advocates rarely present comparative data demonstrating improved outcomes. Meaningful comparisons would rest on measures such as the following (a brief computation sketch follows the list):
System Reliability: Frequency of system failures, time to resolution, and impact on business operations provide direct measures of validation effectiveness.
Regulatory Compliance: Inspection findings, regulatory citations, and compliance costs indicate how well validation approaches meet regulatory expectations.
Cost Efficiency: Total cost of ownership including initial validation, ongoing maintenance, and change control activities reflects economic effectiveness.
Time to Implementation: Speed of system deployment while maintaining appropriate quality controls indicates process efficiency.
User Satisfaction: System usability, training effectiveness, and user adoption rates reflect practical validation outcomes.
Change Management Effectiveness: Success rate of system changes, time required for change implementation, and change-related defects indicate validation program maturity.
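A brief sketch, using invented data, of how a few of these measures can be computed so that any CSV-versus-CSA comparison rests on numbers rather than assertions:

```python
# Illustrative sketch only: computing a few outcome metrics from hypothetical
# change records. Record format: (change_id, implemented_ok, days_to_implement,
# defects_after_release).
from statistics import mean

changes = [
    ("CHG-001", True, 12, 0),
    ("CHG-002", True, 30, 1),
    ("CHG-003", False, 45, 3),
    ("CHG-004", True, 9, 0),
]

change_success_rate = sum(1 for _, ok, _, _ in changes if ok) / len(changes)
mean_days_to_implement = mean(days for _, _, days, _ in changes)
change_related_defects = sum(defects for _, _, _, defects in changes)

print(f"Change success rate:    {change_success_rate:.0%}")
print(f"Mean days to implement: {mean_days_to_implement:.1f}")
print(f"Change-related defects: {change_related_defects}")
```

Tracked over time, the same handful of numbers would show whether any methodological change, whatever its label, actually improved outcomes.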
Without comparative data on these metrics, claims of CSA superiority remain unsupported marketing assertions. Organizations considering CSA adoption should demand empirical evidence of improved outcomes rather than accepting theoretical arguments about methodological superiority.
The Global Regulatory Perspective: Why Consistency Matters
The pharmaceutical industry operates in a global regulatory environment where consistency across jurisdictions provides significant business value. Validation approaches that work effectively across multiple regulatory frameworks reduce compliance costs and enable efficient global operations.
GAMP-based validation approaches have demonstrated this global effectiveness through widespread adoption across major pharmaceutical markets:
FDA Acceptance: GAMP principles align with FDA computer system validation expectations and have been successfully applied in thousands of FDA-regulated facilities.
EMA/European Union Compatibility: GAMP approaches satisfy EU GMP requirements including Annex 11 and have been widely implemented across European pharmaceutical operations.
Other Regulatory Bodies: GAMP principles are compatible with Health Canada, TGA (Australia), PMDA (Japan), and other regulatory frameworks, enabling consistent global implementation.
Industry Standards Integration: GAMP integrates effectively with ISO standards, ICH guidelines, and other international frameworks that pharmaceutical companies must address.
This global consistency represents a significant competitive advantage for established validation approaches. CSA, despite alignment with FDA thinking, has not demonstrated equivalent acceptance across other regulatory frameworks. Organizations adopting CSA risk creating validation approaches that work well in FDA-regulated environments but require modification for other jurisdictions.
The regulatory convergence demonstrated by the draft Annex 11 revision suggests that global harmonization is occurring around established risk-based validation principles rather than newer CSA concepts. This convergence validates existing approaches rather than supporting wholesale methodological change.
The Practical Implementation Reality: What Actually Happens
Beyond the methodological debates and consulting industry marketing lies the practical reality of how validation programs actually function in pharmaceutical organizations. This reality demonstrates why existing GAMP-based approaches remain effective and why CSA adoption often creates more problems than it solves.
Successful validation programs, regardless of methodological label, share several common characteristics:
Senior Leadership Support: Validation programs succeed when senior management understands their business value and provides appropriate resources.
Cross-Functional Integration: Effective validation requires collaboration between quality assurance, information technology, operations, and regulatory affairs functions.
Appropriate Resource Allocation: Validation programs must be staffed with competent professionals and provided with adequate tools and budget.
Clear Procedural Guidance: Staff need clear, practical procedures that explain how to apply validation principles to specific situations.
Ongoing Training and Development: Validation effectiveness depends on continuous learning and competence development.
Metrics and Continuous Improvement: Programs must measure their effectiveness and adapt based on performance data.
These success factors operate independently of methodological labels.
The practical implementation reality also reveals why consulting industry solutions often fail to deliver promised benefits. Consultants typically focus on methodological frameworks and documentation rather than the organizational factors that actually drive validation effectiveness. An organization with poor cross-functional collaboration, inadequate resources, and weak senior management support won't solve these problems by adopting some consultant's version of CSA; it needs fundamental improvements in how it approaches validation as a business function.
The Future of Validation: Evolution, Not Revolution
Looking ahead, computer system validation will continue to evolve in response to technological change, regulatory development, and business needs. However, this evolution will likely occur within existing frameworks rather than through wholesale replacement of established approaches.
Several trends will shape validation practice over the coming decade:
Increased Automation: Automated testing tools, artificial intelligence applications, and machine learning capabilities will become more prevalent in validation practice, but they will augment rather than replace human judgment.
Cloud and SaaS Integration: Cloud computing and Software as a Service applications will require continued adaptation of validation approaches, but these adaptations will build on existing risk-based principles.
Data Analytics Integration: Advanced analytics capabilities will provide new insights into system performance and risk patterns, enabling more sophisticated validation approaches.
Regulatory Harmonization: Continued convergence of global regulatory approaches will simplify validation for multinational organizations.
Agile and DevOps Integration: Modern software development methodologies will require continued adaptation of validation practices, but the fundamental goals remain unchanged.
These trends represent evolutionary development rather than revolutionary change. They will require validation professionals to develop new technical competencies and adapt established practices to new contexts, but they don’t invalidate the fundamental principles that have guided effective validation for decades.
Organizations preparing for these future challenges will be best served by building strong foundational capabilities in risk assessment, technical understanding, and adaptability rather than adopting particular methodological labels. The ability to apply established validation principles to new challenges will prove more valuable than expertise in any specific framework or approach.
The Emperor’s New Validation Clothes
Computer System Assurance represents a textbook case of how the pharmaceutical consulting industry creates artificial innovation by rebranding established practices as revolutionary breakthroughs. Every principle that CSA advocates present as innovative thinking has been embedded in risk-based validation approaches, GAMP guidance, and regulatory expectations for over two decades.
The fundamental question is not whether CSA principles are sound—they generally are, because they restate established best practices. The question is whether the pharmaceutical industry benefits from treating existing practices as obsolete and investing resources in “modernization” projects that deliver minimal incremental value.
The answer should be clear to any quality professional who has implemented effective validation programs: we don’t need CSA to tell us to think critically about validation challenges, apply risk-based approaches to system assessment, or leverage supplier documentation effectively. We’ve been doing these things successfully for years using GAMP principles and established regulatory guidance.
What we do need is better implementation of existing approaches—more competent practitioners, stronger organizational support, clearer procedural guidance, and continuous improvement based on measurable outcomes. These improvements can be achieved within established frameworks without expensive consulting engagements or wholesale methodological change.
The computer system assurance emperor has no clothes—underneath the contemporary terminology and marketing sophistication lies the same risk-based, lifecycle-oriented, supplier-leveraging validation approach that mature organizations have been implementing successfully for over a decade. Quality professionals should focus their attention on implementation excellence rather than methodological fashion, building validation programs that deliver demonstrable business value regardless of what acronym appears on the procedure titles.
The choice facing pharmaceutical organizations is not between outdated CSV and modern CSA—it’s between poor implementation of established practices and excellent implementation of the same practices. Excellence is what protects patients, ensures product quality, and satisfies regulatory expectations. Everything else is just consulting industry marketing.
When a Cosmetic Is Actually a Drug: The Cosco International Warning Letter
Take the April 2025 Warning Letter to Cosco International, for example. One might quickly react with, “Holy cow! No process validation or cleaning validation—how is this even possible?” This could spark an exhaustive discussion about why these regulations have been in place for 30 years and why companies urgently need to comply. But frankly, that discussion offers little of value to a company that already understands it needs to do process validation.
Yet this Warning Letter highlights a fundamental misunderstanding among companies regarding the difference between a cosmetic and a drug. As someone who reads a lot of Warning Letters, I find this to be a fairly common problem.
Key Regulatory Distinctions
Cosmetics: Products intended solely for cleansing, beautifying, or altering the appearance without affecting bodily functions are regulated as cosmetics under the FDA. These are not required to undergo premarket approval, except for color additives.
Drugs: Products intended to diagnose, cure, mitigate, treat, or prevent disease or that affect the structure or function of the body (such as blocking sweat glands) are regulated as drugs. This includes antiperspirants, regardless of their application site.
So this is not especially interesting from a biotech perspective, but it offers a fascinating insight into some troubling trends for anyone on the consumer goods side of the profession.
But, as I discussed, there is value in reading these letters holistically, for what they tell us regulators are thinking. In this case, there is a nice little set of bullet points on what constitutes the bare minimum in cleaning validation.