The November 2025 FDA Warning Letter to Catalent Indiana, LLC reads like an autopsy report—a detailed dissection of how contamination hazards are not discovered so much as engineered into aseptic operations through a constellation of decisions that individually appear defensible yet collectively create what I’ve previously termed the “zemblanity field” in pharmaceutical quality. Section 2, addressing failures under 21 CFR 211.113(b), exposes contamination hazards that didn’t emerge from random misfortune but from deliberate choices about decontamination strategies, sampling methodologies, intervention protocols, and investigation rigor.
What makes this warning letter particularly instructive isn’t the presence of contamination events—every aseptic facility battles microbial ingress—but rather the systematic architectural failures that allowed contamination hazards to persist unrecognized, uninvestigated, and unmitigated despite multiple warning signals spanning more than 20 deviations and customer complaints. The FDA’s critique centers on three interconnected contamination hazard categories: VHP decontamination failures involving occluded surfaces, inadequate environmental monitoring methods that substituted convenience for detection capability, and intervention risk assessments that ignored documented contamination routes.
For those of us responsible for contamination control in aseptic manufacturing, this warning letter demands we ask uncomfortable questions: How many of our VHP cycles are validated against surfaces that remain functionally occluded? How often have we chosen contact plates over swabs because they’re faster, not because they’re more effective? When was the last time we terminated a media fill and treated it with the investigative rigor of a batch contamination event?
The Occluded Surface Problem: When Decontamination Becomes Theatre
The FDA’s identification of occluded surfaces as contamination sources during VHP decontamination represents a failure mode I’ve observed with troubling frequency across aseptic facilities. The fundamental physics are unambiguous: vaporized hydrogen peroxide achieves sporicidal efficacy through direct surface contact at validated concentration-time profiles. Any surface the vapor doesn’t contact—or contacts at insufficient concentration—remains a potential contamination reservoir regardless of cycle completion indicators showing “successful” decontamination.
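To make that exposure arithmetic concrete, here is a minimal sketch assuming simple first-order spore inactivation and a hypothetical D-value (the time needed for a 1-log reduction at the validated vapor concentration). The numbers are illustrative, not drawn from the warning letter or any validated cycle.

```python
# Illustrative only: first-order inactivation model for a biological indicator.
# The D-value below is hypothetical; real values come from cycle development data.

def log_reduction(exposure_min: float, d_value_min: float) -> float:
    """Log10 reduction achieved on a surface exposed for `exposure_min` minutes."""
    return exposure_min / d_value_min

def surviving_spores(initial_population: float, exposure_min: float, d_value_min: float) -> float:
    """Surviving population N = N0 * 10^(-t/D)."""
    return initial_population * 10 ** (-log_reduction(exposure_min, d_value_min))

bi_population = 1e6   # a 10^6-spore biological indicator
d_value = 2.0         # hypothetical D-value, minutes

for exposure in (0.0, 4.0, 12.0):   # minutes of effective vapor contact at the surface
    survivors = surviving_spores(bi_population, exposure, d_value)
    print(f"{exposure:>5.1f} min contact -> {log_reduction(exposure, d_value):.1f}-log reduction, "
          f"~{survivors:.0e} survivors")

# An occluded surface is, in this model, simply exposure = 0 at that location:
# it achieves a 0-log reduction no matter what the cycle completion indicators show.
```

The point is the zero-exposure case: a cycle that never contacts a surface delivers no reduction there, regardless of how the rest of the chamber performed.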
The Catalent situation involved two distinct occluded surface scenarios, each revealing different architectural failures in contamination hazard assessment. First, equipment surfaces occluded during VHP decontamination that subsequently became contamination sources during atypical interventions involving equipment changes. The FDA noted that “the most probable root cause” of an environmental monitoring failure was equipment surfaces occluded during VHP decontamination, with contamination occurring during execution of an atypical intervention involving changes to components integral to stopper seating.
This finding exposes a conceptual error I frequently encounter: treating VHP decontamination as a universal solution that overcomes design deficiencies rather than as a validated process with specific performance boundaries. The Catalent facility’s own risk assessments advised against interventions that could disturb potentially occluded surfaces, yet these interventions continued—creating the precise contamination pathway their risk assessments identified as unacceptable.
The second occluded surface scenario involved wrapped components within the filling line where insufficient VHP exposure allowed potential contamination. The FDA cited “occluded surfaces on wrapped [components] within the [equipment] as the potential cause of contamination”. This represents a validation failure: if wrapping materials prevent adequate VHP penetration, either the wrapping must be eliminated, the decontamination method must change, or these surfaces must be treated through alternative validated processes.
The literature on VHP decontamination is explicit about occluded surface risks. As Sandle notes, surfaces must be “designed and installed so that operations, maintenance, and repairs can be performed outside the cleanroom” and where unavoidable, “all surfaces needing decontaminated” must be explicitly identified. The PIC/S guidance is similarly unambiguous: “Continuously occluded surfaces do not qualify for such trials as they cannot be exposed to the process and should have been eliminated”. Yet facilities continue to validate VHP cycles that demonstrate biological indicator kill on readily accessible flat coupons while ignoring the complex geometries, wrapped items, and recessed surfaces actually present in their filling environments.
What does a robust approach to occluded surface assessment look like? Based on the regulatory expectations and technical literature, facilities should:
Conduct comprehensive occluded surface mapping during design qualification. Every component introduced into VHP-decontaminated spaces must undergo geometric analysis to identify surfaces that may not receive adequate vapor exposure. This includes crevices, threaded connections, wrapped items, hollow spaces, and any surface shadowed by another object. The mapping should document not just that surfaces exist but their accessibility to vapor flow based on the specific VHP distribution characteristics of the equipment. A minimal register sketch based on this mapping appears after this list.
Validate VHP distribution using chemical and biological indicators placed on identified occluded surfaces. Flat coupon placement on readily accessible horizontal surfaces tells you nothing about vapor penetration into wrapped components or recessed geometries. Biological indicators should be positioned specifically where vapor exposure is questionable—inside wrapped items, within threaded connections, under equipment flanges, in dead-legs of transfer lines. If biological indicators in these locations don’t achieve the validated log reduction, the surfaces are occluded and require design modification or alternative decontamination methods.
Establish clear intervention protocols that distinguish between “sterile-to-sterile” and “potentially contaminated” surface contact. The Catalent finding reveals that atypical interventions involving equipment changes exposed the Grade A environment to surfaces not reliably exposed to VHP. Intervention risk assessments must explicitly categorize whether the intervention involves only VHP-validated surfaces or introduces components from potentially occluded areas. The latter category demands heightened controls: localized Grade A air protection, pre-intervention surface swabbing and disinfection, real-time environmental monitoring during the intervention, and post-intervention investigation if environmental monitoring shows any deviation.
Implement post-decontamination surface monitoring that targets historically occluded locations. If your facility has identified occluded surfaces that cannot be designed out, these become critical sampling locations for post-VHP environmental monitoring. Trending of these specific locations provides early detection of decontamination effectiveness degradation before contamination reaches product-contact surfaces.
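As referenced above, here is a minimal sketch of what an occluded-surface register might look like. The field names, the example locations, and the 6-log acceptance criterion are assumptions chosen for illustration, not a prescribed or regulatory format.

```python
# Illustrative occluded-surface register: field names, example locations, and the
# 6-log acceptance criterion are assumptions for this sketch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurfaceRecord:
    location: str                      # a specific surface, not a general equipment category
    geometry: str                      # e.g. "wrapped", "threaded", "recessed", "flat"
    vapor_accessible: bool             # outcome of the design-stage mapping exercise
    bi_log_reduction: Optional[float]  # measured at this location; None if never challenged

def decontamination_gaps(register, required_log=6.0):
    """Surfaces never challenged, not vapor-accessible, or missing the target reduction."""
    return [
        s for s in register
        if not s.vapor_accessible
        or s.bi_log_reduction is None
        or s.bi_log_reduction < required_log
    ]

register = [
    SurfaceRecord("stopper bowl, internal rim", "recessed", True, 6.2),
    SurfaceRecord("wrapped change part, inner wrap face", "wrapped", False, None),
    SurfaceRecord("transfer line dead-leg", "recessed", True, 3.8),
]

for gap in decontamination_gaps(register):
    print(f"NOT RELIABLY DECONTAMINATED: {gap.location} ({gap.geometry})")
```

Anything this query returns is, by definition, a candidate for design modification, an alternative validated decontamination method, or targeted post-decontamination monitoring.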
The FDA’s remediation demand is appropriately comprehensive: “a review of VHP exposure to decontamination methods as well as permitted interventions, including a retrospective historical review of routine interventions and atypical interventions to determine their risks, a comprehensive identification of locations that are not reliably exposed to VHP decontamination (i.e., occluded surfaces), your plan to reduce occluded surfaces where feasible, review of currently permitted interventions and elimination of high-risk interventions entailing equipment manipulations during production campaigns that expose the ISO 5 environment to surfaces not exposed to a validated decontamination process, and redesign of any intervention that poses an unacceptable contamination risk”.
This remediation framework represents best practice for any aseptic facility using VHP decontamination. The occluded surface problem isn’t limited to Catalent—it’s an industry-wide vulnerability wherever VHP validation focuses on demonstrating sporicidal activity under ideal conditions rather than confirming adequate vapor contact across all surfaces within the validated space.
Contact Plates Versus Swabs: The Detection Capability Trade-Off
The FDA’s critique of Catalent’s environmental monitoring methodology exposes a decision I’ve challenged repeatedly throughout my career: the use of contact plates for sampling irregular, product-contact surfaces in Grade A environments. The technical limitations are well-established, yet contact plates persist because they’re faster and operationally simpler—prioritizing workflow convenience over contamination detection capability.
The specific Catalent deficiency involved sampling filling line components using “contact plate, sampling [surfaces] with one sweeping sampling motion.” The FDA identified two fundamental inadequacies: “With this method, you are unable to attribute contamination events to specific [locations]” and “your firm’s use of contact plates is not as effective as using swab methods”. These limitations aren’t novel discoveries—they’re inherent to contact plate methodology and have been documented in the microbiological literature for decades.
Contact plates—rigid agar surfaces pressed against the area to be sampled—were designed for flat, smooth surfaces where complete agar-to-surface contact can be achieved with uniform pressure. They perform adequately on stainless steel benchtops, isolator walls, and other horizontal surfaces. But filling line components—particularly those identified in the warning letter—present complex geometries: curved surfaces, corners, recesses, and irregular topographies where rigid agar cannot conform to achieve complete surface contact.
The microbial recovery implications are significant. When a contact plate fails to achieve complete surface contact, microorganisms in uncontacted areas remain unsampled. The result is a false-negative environmental monitoring reading that suggests contamination control while actual contamination persists undetected. Worse, the “sweeping sampling motion” described in the warning letter—moving a single contact plate across multiple locations—creates the additional problem the FDA identified: inability to attribute any recovered contamination to a specific surface. Was the contamination on the first component contacted? The third? Somewhere in between? This sampling approach provides data too imprecise for meaningful contamination source investigation.
The alternative—swab sampling—addresses both deficiencies. Swabs conform to irregular surfaces, accessing corners, recesses, and curved topographies that contact plates cannot reach. Swabs can be applied to specific, discrete locations, enabling precise attribution of any contamination recovered to a particular surface. The trade-off is operational: swab sampling requires more time, involves additional manipulative steps within Grade A environments, and demands different operator technique validation.
Yet the Catalent warning letter makes clear that this operational inconvenience doesn’t justify compromised detection capability for critical product-contact surfaces. The FDA’s expectation—acknowledged in Catalent’s response—is swab sampling “to replace use of contact plates to sample irregular surfaces”. This represents a fundamental shift from convenience-optimized to detection-optimized environmental monitoring.
What should a risk-based surface sampling strategy look like? The differentiation should be based on surface geometry and criticality:
Contact plates remain appropriate for flat, smooth, readily accessible surfaces where complete agar contact can be verified and where contamination risk is lower (Grade B floors, isolator walls, equipment external surfaces). The speed and simplicity advantages of contact plates justify their continued use in these applications.
Swab sampling should be mandatory for product-contact surfaces, irregular geometries, recessed areas, and any location where contact plate conformity is questionable. This includes filling needles, stopper bowls, vial transport mechanisms, crimping heads, and the specific equipment components cited in the Catalent letter. The additional time required for swab sampling is trivial compared to the contamination risk from inadequate monitoring. A short sketch of this selection logic appears after this list.
Surface sampling protocols must specify the exact location sampled, not general equipment categories. Rather than “sample stopper bowl,” protocols should identify “internal rim of stopper bowl,” “external base of stopper bowl,” “stopper agitation mechanism interior surfaces.” This specificity enables contamination source attribution during investigations and ensures sampling actually reaches the highest-risk surfaces.
Swab technique must be validated to ensure consistent recovery from target surfaces. Simply switching from contact plates to swabs doesn’t guarantee improved detection unless swab technique—pressure applied, surface area contacted, swab saturation, transfer to growth media—is standardized and demonstrated to achieve adequate microbial recovery from the specific materials and geometries being sampled.
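As noted above, here is a short sketch of the selection logic. The surface names, geometry categories, and the rule itself are assumptions for illustration; the point it encodes is that swabs are required wherever surfaces are product-contact or irregular, and that each recovery maps back to one named location.

```python
# Illustrative sampling-plan check: surface names and geometry categories are assumptions
# for this sketch, not a prescribed FDA or Annex 1 scheme.
from dataclasses import dataclass

IRREGULAR_GEOMETRIES = {"curved", "recessed", "threaded", "complex"}

@dataclass
class SamplePoint:
    location: str          # exact surface, e.g. "stopper bowl, internal rim"
    geometry: str          # "flat" or one of IRREGULAR_GEOMETRIES
    product_contact: bool
    method: str            # "contact_plate" or "swab"

def method_acceptable(point: SamplePoint) -> bool:
    """Require swabs for product-contact or irregular surfaces; allow plates only elsewhere."""
    if point.product_contact or point.geometry in IRREGULAR_GEOMETRIES:
        return point.method == "swab"
    return point.method in {"contact_plate", "swab"}

plan = [
    SamplePoint("isolator wall, left panel center", "flat", False, "contact_plate"),
    SamplePoint("stopper bowl, internal rim", "curved", True, "contact_plate"),   # will be flagged
    SamplePoint("filling needle mount, threaded collar", "threaded", True, "swab"),
]

for point in plan:
    if not method_acceptable(point):
        print(f"REVIEW METHOD: {point.location} ({point.geometry}) sampled by {point.method}")
```

The same record doubles as the attribution trail the FDA found missing: every result ties to one discrete, named surface rather than a sweeping motion across several.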
The EU GMP Annex 1 and FDA guidance documents emphasize detection capability over convenience in environmental monitoring. The expectation isn’t perfect contamination prevention—that’s impossible in aseptic processing—but rather monitoring systems sensitive enough to detect contamination events when they occur, enabling investigation and corrective action before product impact. Contact plates on irregular surfaces fail this standard by design, not because of operator error or inadequate validation but because the fundamental methodology cannot access the surfaces requiring monitoring.
The Intervention Paradox: When Risk Assessments Identify Hazards But Operations Ignore Them
Perhaps the most troubling element of the Catalent contamination hazards section isn’t the presence of occluded surfaces or inadequate sampling methods but rather the intervention management failure that reveals a disconnect between risk assessment and operational decision-making. Catalent’s risk assessments explicitly “advised against interventions that can disturb potentially occluded surfaces,” yet these high-risk interventions continued during production campaigns.
This represents what I’ve termed “investigation theatre” in previous posts—creating the superficial appearance of risk-based decision-making while actual operations proceed according to production convenience rather than contamination risk mitigation. The risk assessment identified the hazard. The environmental monitoring data confirmed the hazard when contamination occurred during the intervention. Yet the intervention continued as an accepted operational practice.
The specific intervention involved equipment changes to components “integral to stopper seating in the [filling line]”. These components operate at the critical interface between the sterile stopper and the vial—precisely the location where any contamination poses direct product impact risk. The intervention occurred during production campaigns rather than between campaigns when comprehensive decontamination and validation could occur. The intervention involved surfaces potentially occluded during VHP decontamination, meaning their microbiological state was unknown when introduced into the Grade A filling environment.
Every element of this scenario screams “unacceptable contamination risk,” yet it persisted as accepted practice until FDA inspection. How does this happen? Based on my experience across multiple aseptic facilities, the failure mode follows a predictable pattern:
Production scheduling drives intervention timing rather than contamination risk assessment. Stopping a campaign for equipment maintenance creates schedule disruption, yield loss, and capacity constraints. The pressure to maintain campaign continuity overwhelms contamination risk considerations that appear theoretical compared to the immediate, quantifiable production impact.
Risk assessments become compliance artifacts disconnected from operational decision-making. The quality unit conducts a risk assessment, documents that certain interventions pose unacceptable contamination risk, and files the assessment. But when production encounters the situation requiring that intervention, the actual decision-making process references production need, equipment availability, and batch schedules—not the risk assessment that identified the intervention as high-risk.
Interventions become “normalized deviance”—accepted operational practices despite documented risks. After performing a high-risk intervention successfully (meaning without detected contamination) multiple times, it transitions from “high-risk intervention requiring exceptional controls” to “routine intervention” in operational thinking. The fact that adequate controls prevented contamination detection gets inverted into evidence that the intervention isn’t actually high-risk.
Environmental monitoring provides false assurance when contamination goes undetected. If a high-risk intervention occurs and subsequent environmental monitoring shows no contamination, operations interprets this as validation that the intervention is acceptable. But as discussed in the contact plate section, inadequate sampling methodology may fail to detect contamination that actually occurred. The absence of detected contamination becomes “proof” that contamination didn’t occur, reinforcing the normalization of high-risk interventions. A short detection-probability calculation follows this list.
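The arithmetic behind that false assurance is worth making explicit. This is a minimal sketch with a hypothetical per-event detection probability: if the monitoring method recovers contamination only some of the time when it is truly present, a string of “clean” post-intervention results is weak evidence that the intervention is safe.

```python
# Illustrative arithmetic only: the per-event detection probabilities are hypothetical.

def p_all_missed(p_detect_per_event: float, n_contaminated_events: int) -> float:
    """Probability that n independent contaminated events all go undetected."""
    return (1.0 - p_detect_per_event) ** n_contaminated_events

for p in (0.2, 0.5, 0.8):
    for n in (3, 5, 10):
        print(f"p_detect={p:.1f}, contaminated runs={n}: "
              f"P(no detections at all) = {p_all_missed(p, n):.3f}")

# With a 20% per-event detection probability, five contaminated interventions still
# have roughly a one-in-three chance of producing a spotless monitoring record.
```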
The EU GMP Annex 1 requirements for intervention management represent regulatory recognition of these failure modes. Annex 1 Section 8.16 requires “the list of interventions evaluated via risk analysis” and Section 9.36 requires that aseptic process simulations include “interventions and associated risks”. The framework is explicit: identify interventions, assess their contamination risk, validate that operators can perform them aseptically through media fills, and eliminate interventions that cannot be performed without unacceptable contamination risk.
What does robust intervention risk management look like in practice?
Categorize interventions by contamination risk based on specific, documented criteria. The categorization should consider: surfaces contacted (sterile-to-sterile vs. potentially contaminated), duration of exposure, proximity to open product, operator actions required, first air protection feasibility, and frequency. This creates a risk hierarchy that enables differentiated control strategies rather than treating all interventions equivalently.
Establish clear decision authorities for different intervention risk levels. Routine interventions (low contamination risk, validated through media fills, performed regularly) can proceed under operator judgment following standard procedures. High-risk interventions (those involving occluded surfaces, extended exposure, or proximity to open product) should require quality unit pre-approval including documented risk assessment and enhanced controls specification. Interventions identified as posing unacceptable risk should be prohibited until equipment redesign or process modification eliminates the contamination hazard. A brief sketch of this classification logic appears after the list.
Validate intervention execution through media fills that specifically simulate the intervention’s contamination challenges. Generic media fills demonstrating overall aseptic processing capability don’t validate specific high-risk interventions. If your risk assessment identifies a particular intervention as posing contamination risk, your media fill program must include that intervention, performed by the operators who will execute it, under the conditions (campaign timing, equipment state, environmental conditions) where it will actually occur.
Implement intervention-specific environmental monitoring that targets the contamination pathways identified in risk assessments. If the risk assessment identifies that an intervention may expose product to surfaces not reliably decontaminated, environmental monitoring immediately following that intervention should specifically sample those surfaces and adjacent areas. Trending this intervention-specific monitoring data separately from routine environmental monitoring enables detection of intervention-associated contamination patterns.
Conduct post-intervention investigations when environmental monitoring shows any deviation. The Catalent warning letter describes an environmental monitoring failure whose “most probable root cause” was an atypical intervention involving equipment changes. This temporal association between intervention and contamination should trigger automatic investigation even if environmental monitoring results remain within action levels. The investigation should assess whether intervention protocols require modification or whether the intervention should be eliminated.
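As flagged above, here is a minimal classification sketch. The tier names, criteria, and example interventions are assumptions for illustration, not Annex 1 text or any firm’s actual procedure.

```python
# Illustrative classification logic: risk tiers, criteria, and examples are assumptions.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    ROUTINE = "operator judgment, standard procedure"
    HIGH = "quality unit pre-approval, enhanced controls, targeted EM"
    PROHIBITED = "not permitted until redesign eliminates the hazard"

@dataclass
class Intervention:
    name: str
    touches_occluded_surfaces: bool   # surfaces not reliably exposed to VHP
    near_open_product: bool
    validated_in_media_fill: bool
    breaks_first_air: bool

def classify(iv: Intervention) -> RiskTier:
    # In this sketch, anything exposing the Grade A zone to surfaces outside the
    # validated decontamination process near open product is treated as non-permissible.
    if iv.touches_occluded_surfaces and iv.near_open_product:
        return RiskTier.PROHIBITED
    if iv.touches_occluded_surfaces or iv.breaks_first_air or not iv.validated_in_media_fill:
        return RiskTier.HIGH
    return RiskTier.ROUTINE

examples = [
    Intervention("clear fallen stopper with sterile forceps", False, True, True, False),
    Intervention("swap stopper-seating change part mid-campaign", True, True, False, True),
]
for iv in examples:
    tier = classify(iv)
    print(f"{iv.name}: {tier.name} -> {tier.value}")
```

In practice this classification could live in the intervention list Annex 1 Section 8.16 expects, with the decision authority enforced through whatever batch record or permit system the site already uses.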
The FDA’s remediation demand addresses this gap directly: “review of currently permitted interventions and elimination of high-risk interventions entailing equipment manipulations during production campaigns that expose the ISO 5 environment to surfaces not exposed to a validated decontamination process”. This requirement forces facilities to confront the intervention paradox: if your risk assessment identifies an intervention as high-risk, you cannot simultaneously permit it as routine operational practice. Either modify the intervention to reduce risk, validate enhanced controls that mitigate the risk, or eliminate the intervention entirely.
Media Fill Terminations: When Failures Become Invisible
The Catalent warning letter’s discussion of media fill terminations exposes an investigation failure mode that reveals deeper quality system inadequacies. Since November 2023, Catalent has terminated more than five media fill batches representing the filling line. Following two terminations for stoppering issues and extrinsic particle contamination, the facility “failed to open a deviation or an investigation at the time of each failure, as required by your SOPs”.
Read that again. Media fills—the fundamental aseptic processing validation tool, the simulation specifically designed to challenge contamination control—were terminated due to failures, and no deviation was opened, no investigation initiated. The failures simply disappeared from the quality system, becoming invisible until FDA inspection revealed their existence.
The rationalization is predictable: “there was no impact to the SISPQ (Safety, Identity, Strength, Purity, Quality) of the terminated media batches or to any customer batches” because “these media fills were re-executed successfully with passing results”. This reasoning exposes a fundamental misunderstanding of media fill purpose that I’ve encountered with troubling frequency across the industry.
A media fill is not a “test” that you pass or fail with product consequences. It is a simulation—a deliberate challenge to your aseptic processing capability using growth medium instead of product specifically to identify contamination risks without product impact. When a media fill is terminated due to a processing failure, that termination is itself the critical finding. The termination reveals that your process is vulnerable to exactly the failure mode that caused termination: stoppering problems that could occur during commercial filling, extrinsic particles that could contaminate product.
The FDA’s response is appropriately uncompromising: “You do not provide the investigations with a root cause that justifies aborting and re-executing the media fills, nor do you provide the corrective actions taken for each terminated media fill to ensure effective CAPAs were promptly initiated”. The regulatory expectation is clear: media fill terminations require investigation identical in rigor to commercial batch failures. Why did the stoppering issue occur? What equipment, material, or operator factors contributed? How do we prevent recurrence? What commercial batches may have experienced similar failures that went undetected?
The re-execution logic is particularly insidious. By immediately re-running the media fill and achieving passing results, Catalent created the appearance of successful validation while ignoring the process vulnerability revealed by the termination. The successful re-execution proved only that under ideal conditions—now with heightened operator awareness following the initial failure—the process could be executed successfully. It provided no assurance that commercial operations, without that heightened awareness and under the same conditions that caused the initial termination, wouldn’t experience identical failures.
What should media fill termination management look like?
Treat every media fill termination as a critical deviation requiring immediate investigation initiation. The investigation should identify the root cause of the termination, assess whether the failure mode could occur during commercial manufacturing, evaluate whether previous commercial batches may have experienced similar failures, and establish corrective actions that prevent recurrence. This investigation must occur before re-execution, not instead of investigation.
Require quality unit approval before media fill re-execution. The approval should be based on documented investigation findings demonstrating that the termination cause is understood, corrective actions are implemented, and re-execution will validate process capability under conditions that include the corrective actions. Re-execution without investigation approval perpetuates the “keep running until we get a pass” mentality that defeats media fill purpose. A small workflow-gate sketch of this requirement appears after this list.
Implement media fill termination trending as a critical quality indicator. A facility terminating “more than five media fill batches” within roughly two years should recognize this as a signal of fundamental process capability problems, not as a series of unrelated events requiring re-execution. Trending should identify common factors: specific operators, equipment states, intervention types, campaign timing.
Ensure deviation tracking systems cannot exclude media fill terminations. The Catalent situation arose partly because “you failed to initiate a deviation record to capture the lack of an investigation for each of the terminated media fills, resulting in an undercounting of the deviations”. Quality metrics that exclude media fill terminations from deviation totals create perverse incentives to avoid formal deviation documentation, rendering media fill findings invisible to quality system oversight.
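As referenced above, here is a minimal workflow-gate sketch. The record fields and status values are assumptions, but the two rules it encodes come straight from the discussion: every termination is counted as a deviation, and re-execution waits for an approved investigation.

```python
# Illustrative workflow gate: field names and status values are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class MediaFillTermination:
    batch_id: str
    reason: str
    deviation_opened: bool
    investigation_status: str   # e.g. "not_started", "open", "approved"

def may_schedule_reexecution(term: MediaFillTermination) -> bool:
    """Re-execution is only schedulable once a deviation exists and its investigation is approved."""
    return term.deviation_opened and term.investigation_status == "approved"

terminations = [
    MediaFillTermination("MF-23-07", "stoppering issue", False, "not_started"),
    MediaFillTermination("MF-24-02", "extrinsic particle", True, "approved"),
]

# The terminations themselves belong in the deviation count used for quality trending.
deviation_count = sum(1 for t in terminations if t.deviation_opened)
print(f"terminations: {len(terminations)}, captured as deviations: {deviation_count}")

for t in terminations:
    if not may_schedule_reexecution(t):
        print(f"BLOCK re-execution of {t.batch_id}: investigation {t.investigation_status}, "
              f"deviation opened: {t.deviation_opened}")
```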
The broader issue extends beyond media fill terminations to how aseptic processing validation integrates with quality systems. Media fills should function as early warning indicators—detecting aseptic processing vulnerabilities before product impact occurs. But this detection value requires that findings from media fills drive investigations, corrective actions, and process improvements with the same rigor as commercial batch deviations. When media fill failures can be erased through re-execution without investigation, the entire validation framework becomes performative rather than protective.
The Stopper Supplier Qualification Failure: Accepting Contamination at the Source
The stopper contamination issues discussed throughout the warning letter—mammalian hair found in or around stopper regions of vials from nearly 20 batches across multiple products—reveal a supplier qualification and incoming inspection failure that compounds the contamination hazards already discussed. The FDA’s critique focuses on Catalent’s “inappropriate reliance on pre-shipment samples (tailgate samples)” and failure to implement “enhanced or comparative sampling of stoppers from your other suppliers”.
The pre-shipment or “tailgate” sample approach represents a fundamental violation of GMP sampling principles. Under this approach, the stopper supplier—not Catalent—collected samples from lots prior to shipment and sent these samples directly to Catalent for quality testing. Catalent then made accept/reject decisions for incoming stopper lots based on testing of supplier-selected samples that never passed through Catalent’s receiving or storage processes.
Why does this matter? Because representative sampling requires that samples be selected from the material population actually received by the facility, stored under facility conditions, and handled through facility processes. Supplier-selected pre-shipment samples bypass every opportunity to detect contamination introduced during shipping, storage transitions, or handling. They enable a supplier to selectively sample from cleaner portions of production lots while shipping potentially contaminated material in the same lot to the customer.
The FDA guidance on this issue is explicit and has been for decades: samples for quality attribute testing “are to be taken at your facility from containers after receipt to ensure they are representative of the components in question”. This isn’t a new expectation emerging from enhanced regulatory scrutiny—it’s a baseline GMP requirement that Catalent systematically violated through reliance on tailgate samples.
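For illustration, here is a minimal sketch of facility-based random container selection. The square-root-of-n-plus-one sample size is a common industry convention rather than a requirement cited in the letter; the essential feature is that containers are drawn at random from the lot as actually received and stored on site, not hand-picked by the supplier before shipment.

```python
# Illustrative only: the sqrt(n) + 1 convention is a common industry starting point,
# not a cited regulatory requirement. The key property is random selection from the
# received lot, the opposite of supplier-selected "tailgate" samples.
import math
import random

def containers_to_sample(n_containers_received: int) -> int:
    """Common square-root-of-n-plus-one convention, rounded up."""
    return math.ceil(math.sqrt(n_containers_received)) + 1

def select_containers(n_containers_received: int, seed: int = 0) -> list:
    rng = random.Random(seed)
    k = containers_to_sample(n_containers_received)
    return sorted(rng.sample(range(1, n_containers_received + 1), k))

# For a received lot of 60 containers, draw 9 at random after receipt at the facility.
print(select_containers(60, seed=42))
```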
But the tailgate sample issue represents only one element of broader supplier qualification failures. The warning letter notes that “while stoppers from [one supplier] were the primary source of extrinsic particles, they were not the only source of foreign matter.” Yet Catalent implemented “limited, enhanced sampling strategy for one of your suppliers” while failing to “increase sampling oversight” for other suppliers. This selective enhancement—focusing remediation only on the most problematic supplier while ignoring systemic contamination risks across the stopper supply base—predictably failed to resolve ongoing contamination issues.
What should stopper supplier qualification and incoming inspection look like for aseptic filling operations?
Eliminate pre-shipment or tailgate sampling entirely. All quality testing must be conducted on samples taken from received lots, stored in facility conditions, and selected using documented random sampling procedures. If suppliers require pre-shipment testing for their internal quality release, that’s their process requirement—it doesn’t substitute for the purchaser’s independent incoming inspection using facility-sampled material.
Implement risk-based incoming inspection that intensifies sampling when contamination history indicates elevated risk. The warning letter notes that Catalent recognized stoppers as “a possible contributing factor for contamination with mammalian hairs” in July 2024 but didn’t implement enhanced sampling until May 2025—a ten-month delay. The inspection enhancement should be automatic and immediate when contamination events implicate incoming materials. The sampling intensity should remain elevated until trending data demonstrates sustained contamination reduction across multiple lots. A simple switching-rule sketch appears after this list.
Apply visual inspection with reject criteria specific to the defect types that create product contamination risk. Generic visual inspection looking for general “defects” fails to detect the specific contamination types—embedded hair, extrinsic particles, material fragments—that create sterile product risks. Inspection protocols must specify mammalian hair, fiber contamination, and particulate matter as reject criteria with sensitivity adequate to detect single-particle contamination in sampled stoppers.
Require supplier process changes—not just enhanced sampling—when contamination trends indicate process capability problems. The warning letter acknowledges Catalent “worked with your suppliers to reduce the likelihood of mammalian hair contamination events” but notes that despite these efforts, “you continued to receive complaints from customers who observed mammalian hair contamination in drug products they received from you”. Enhanced sampling detects contamination; it doesn’t prevent it. Suppliers demonstrating persistent contamination require process audits, environmental control improvements, and validated contamination reduction demonstrated through process capability studies—not just promises to improve quality.
Implement finished product visual inspection with heightened sensitivity for products using stoppers from suppliers with contamination history. The FDA notes that Catalent indicated “future batches found during visual inspection of finished drug products would undergo a re-inspection followed by tightened acceptable quality limit to ensure defective units would be removed” but didn’t provide the re-inspection procedure. This two-stage inspection approach—initial inspection followed by re-inspection with enhanced criteria for lots from high-risk suppliers—provides additional contamination detection but must be validated to demonstrate adequate defect removal.
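As referenced above, here is a minimal sketch of an automatic switching rule. The threshold of consecutive clean lots is a hypothetical parameter each firm would need to justify from its own data; the behavior it encodes is that tightening is immediate on an implicating event and relaxation requires sustained evidence.

```python
# Illustrative switching rule: thresholds and level names are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class SupplierInspectionState:
    supplier: str
    level: str = "normal"            # "normal" or "tightened"
    consecutive_clean_lots: int = 0
    clean_lots_to_relax: int = 10    # hypothetical threshold

    def record_contamination_event(self) -> None:
        # Enhancement is automatic and immediate, not deferred for months.
        self.level = "tightened"
        self.consecutive_clean_lots = 0

    def record_lot_result(self, defects_found: int) -> None:
        if defects_found > 0:
            self.record_contamination_event()
            return
        self.consecutive_clean_lots += 1
        if self.level == "tightened" and self.consecutive_clean_lots >= self.clean_lots_to_relax:
            self.level = "normal"

state = SupplierInspectionState("stopper supplier A")
state.record_contamination_event()           # e.g. mammalian hair implicated in a complaint
for defects in [0, 0, 1, 0]:
    state.record_lot_result(defects)
print(state.level, state.consecutive_clean_lots)   # still "tightened" after the defect reset
```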
The broader lesson extends beyond stoppers to supplier qualification for any component used in sterile manufacturing. Components introduce contamination risks—microbial bioburden, particulate matter, chemical residues—that cannot be fully mitigated through end-product testing. Supplier qualification must function as a contamination prevention tool, ensuring that materials entering aseptic operations meet microbiological and particulate quality standards appropriate for their role in maintaining sterility. Reliance on tailgate samples, delayed sampling enhancement, and acceptance of persistent supplier contamination all represent failures to recognize suppliers as critical contamination control points requiring rigorous qualification and oversight.
The Systemic Pattern: From Contamination Hazards to Quality System Architecture
Stepping back from individual contamination hazards—occluded surfaces, inadequate sampling, high-risk interventions, media fill terminations, supplier qualification failures—a systemic pattern emerges that connects this warning letter to the broader zemblanity framework I’ve explored in previous posts. These aren’t independent, unrelated deficiencies that coincidentally occurred at the same facility. They represent interconnected architectural failures in how the quality system approaches contamination control.
The pattern reveals itself through three consistent characteristics:
Detection systems optimized for convenience rather than capability. Contact plates instead of swabs for irregular surfaces. Pre-shipment samples instead of facility-based incoming inspection. Generic visual inspection instead of defect-specific contamination screening. Each choice prioritizes operational ease and workflow efficiency over contamination detection sensitivity. The result is a quality system that generates reassuring data—passing environmental monitoring, acceptable incoming inspection results, successful visual inspection—while actual contamination persists undetected.
Risk assessments that identify hazards without preventing their occurrence. Catalent’s risk assessments advised against interventions disturbing potentially occluded surfaces, yet these interventions continued. The facility recognized stoppers as contamination sources in July 2024 but delayed enhanced sampling until May 2025. Media fill terminations revealed aseptic processing vulnerabilities but triggered re-execution rather than investigation. Risk identification became separated from risk mitigation—the assessment process functioned as compliance theatre rather than decision-making input.
Investigation systems that erase failures rather than learn from them. Media fill terminations occurred without deviation initiation. Mammalian hair contamination events were investigated individually without recognizing the trend across 20+ deviations. Root cause investigations concluded “no product impact” based on passing sterility tests rather than addressing the contamination source enabling future events. The investigation framework optimized for batch release justification rather than contamination prevention.
These patterns don’t emerge from incompetent quality professionals or inadequate resource allocation. They emerge from quality system design choices that prioritize production efficiency, workflow continuity, and batch release over contamination detection, investigation rigor, and source elimination. The system delivers what it was designed to deliver: maximum throughput with minimum disruption. It fails to deliver what patients require: contamination control capable of detecting and eliminating sterility risks before product impact.
Recommendations: Building Contamination Hazard Detection Into System Architecture
What does effective contamination hazard management look like at the quality system architecture level? Based on the Catalent failures and broader industry patterns, several principles should guide aseptic operations:
Design decontamination validation around worst-case geometries, not ideal conditions. VHP validation using flat coupons on horizontal surfaces tells you nothing about vapor penetration into the complex geometries, wrapped components, and recessed surfaces actually present in your filling line. Biological indicator placement should target occluded surfaces specifically—if you can’t achieve validated kill on these locations, they’re contamination hazards requiring design modification or alternative decontamination methods.
Select environmental monitoring methods based on detection capability for the surfaces and conditions actually requiring monitoring. Contact plates are adequate for flat, smooth surfaces. They’re inadequate for irregular product-contact surfaces, recessed areas, and complex geometries. Swab sampling takes more time but provides contamination detection capability that contact plates cannot match. The operational convenience sacrifice is trivial compared to the contamination risk from monitoring methods incapable of detecting contamination when it occurs.
Establish intervention risk classification with decision authorities proportional to contamination risk. Routine low-risk interventions validated through media fills can proceed under operator judgment. High-risk interventions—those involving occluded surfaces, extended exposure, or proximity to open product—require quality unit pre-approval with documented enhanced controls. Interventions identified as posing unacceptable risk should be prohibited pending equipment redesign.
Treat media fill terminations as critical deviations requiring investigation before re-execution. The termination reveals process vulnerability—the investigation must identify root cause, assess commercial batch risk, and establish corrective actions before validation continues. Re-execution without investigation perpetuates the failures that caused termination.
Implement supplier qualification with facility-based sampling, contamination-specific inspection criteria, and automatic sampling enhancement when contamination trends emerge. Tailgate samples cannot provide representative material assessment. Visual inspection must target the specific contamination types—mammalian hair, particulate matter, material fragments—that create product risks. Enhanced sampling should be automatic and sustained when contamination history indicates elevated risk.
Build investigation systems that learn from contamination events rather than erasing them through re-execution or “no product impact” conclusions. Contamination events represent failures in contamination control regardless of whether subsequent testing shows product remains within specification. The investigation purpose is preventing recurrence, not justifying release.
The FDA’s comprehensive remediation demands represent what quality system architecture should look like: independent assessment of investigation capability, CAPA effectiveness evaluation, contamination hazard risk assessment covering material flows and equipment placement, detailed remediation with specific improvements, and ongoing management oversight throughout the manufacturing lifecycle.
The Contamination Control Strategy as Living System
The Catalent warning letter’s contamination hazards section serves as a case study in how quality systems can simultaneously maintain surface-level compliance while allowing fundamental contamination control failures to persist. The facility conducted VHP decontamination cycles, performed environmental monitoring, executed media fills, and inspected incoming materials—checking every compliance box. Yet contamination hazards proliferated because these activities optimized for operational convenience and batch release justification rather than contamination detection and source elimination.
The EU GMP Annex 1 Contamination Control Strategy requirement represents regulatory recognition that contamination control cannot be achieved through isolated compliance activities. It requires integrated systems where facility design, decontamination processes, environmental monitoring, intervention protocols, material qualification, and investigation practices function cohesively to detect, investigate, and eliminate contamination sources. The Catalent failures reveal what happens when these elements remain disconnected: decontamination cycles that don’t reach occluded surfaces, monitoring that can’t detect contamination on irregular geometries, interventions that proceed despite identified risks, and investigations that erase failures through re-execution.
For those of us responsible for contamination control in aseptic manufacturing, the question isn’t whether our facilities face similar vulnerabilities—they do. The question is whether our quality systems are architected to detect these vulnerabilities before regulators discover them. Are your VHP validations addressing actual occluded surfaces or ideal flat coupons? Are you using contact plates because they detect contamination effectively or because they’re operationally convenient? Do your intervention protocols prevent the high-risk activities your risk assessments identify? When media fills terminate, do investigations occur before re-execution?
The Catalent warning letter provides a diagnostic framework for assessing contamination hazard management. Use it. Map your own decontamination validation against the occluded surface criteria. Evaluate your environmental monitoring method selection against detection capability requirements. Review intervention protocols for alignment with risk assessments. Examine media fill termination handling for investigation rigor. Assess supplier qualification for facility-based sampling and contamination-specific inspection.
The contamination hazards are already present in your aseptic operations. The question is whether your quality system architecture can detect them.
On August 7, 2025, FDA Commissioner Marty Makary announced a program that, on its surface, appears to be a straightforward effort to strengthen domestic pharmaceutical manufacturing. The FDA PreCheck initiative promises “regulatory predictability” and “streamlined review” for companies building new U.S. drug manufacturing facilities. It arrives wrapped in the language of national security—reducing dependence on foreign manufacturing, securing critical supply chains, ensuring Americans have access to domestically-produced medicines.
This is the story the press release tells.
But if you read PreCheck through the lens of falsifiable quality systems, a different narrative emerges. PreCheck is not merely an economic incentive program or a supply chain security measure. It is, more fundamentally, a confession.
It is the FDA admitting that the current Pre-Approval Inspection (PAI) and Pre-License Inspection (PLI) model—the high-stakes, eleventh-hour facility audit conducted weeks before the PDUFA date—is a profoundly inefficient mechanism for establishing trust. It is an acknowledgment that evaluating a facility’s “GMP compliance” only in the context of a specific product application, only after the facility is built, only when the approval clock is ticking, creates a system where failures are discovered at the moment when corrections are most expensive and most disruptive.
PreCheck proposes, instead, that the FDA should evaluate facilities earlier, more frequently, and independent of the product approval timeline. It proposes that manufacturers should be able to earn regulatory confidence in their facility design (Phase 1: Facility Readiness) before they ever file a product application, and that this confidence should carry forward into the application review (Phase 2: CMC streamlining).
What is revolutionary—at least for the FDA—is the implicit admission that a manufacturing facility is not a binary state (compliant/non-compliant) evaluated at a single moment in time, but rather a developmental system that passes through stages of maturity, and that regulatory oversight should be calibrated to those stages.
This is not a cheerleading piece for PreCheck. It is an analysis of what PreCheck reveals about the epistemology of regulatory inspection, and a call for a more explicit, more testable framework for what it means for a facility to be “ready.” I also have concerns about the FDA’s capacity to carry this out, and about the danger of ongoing regulatory capture, neither of which I will cover in depth here.
Anatomy of PreCheck—What the Program Actually Proposes
The Two-Phase Structure
PreCheck is built on two complementary phases:
Phase 1: Facility Readiness
This phase focuses on early engagement between the manufacturer and the FDA during the facility’s design, construction, and pre-production stages. The manufacturer is encouraged—though not required, as the program is voluntary—to submit a Type V Drug Master File (DMF) containing:
Site operations layout and description
Pharmaceutical Quality System (PQS) elements
Quality Management Maturity (QMM) practices
Equipment specifications and process flow diagrams
This Type V DMF serves as a “living document” that can be incorporated by reference into future drug applications. The FDA will review this DMF and provide feedback on facility design, helping to identify potential compliance issues before construction is complete.
Michael Kopcha, Director of the FDA’s Office of Pharmaceutical Quality (OPQ), clarified at the September 30 public meeting that if a facility successfully completes the Facility Readiness Phase, an inspection may not be necessary when a product application is later filed.
This is the core innovation: decoupling facility assessment from product application.
Phase 2: Application Submission
Once a product application (NDA, ANDA, or BLA) is filed, the second phase focuses on streamlining the Chemistry, Manufacturing, and Controls (CMC) section of the application. The FDA offers:
Pre-application meetings
Early feedback on CMC data needs
Facility readiness and inspection planning discussions
Because the facility has already been reviewed in Phase 1, the CMC review can proceed with greater confidence that the manufacturing site is capable of producing the product as described in the application.
Importantly, Kopcha also clarified that only the CMC portion of the review is expedited—clinical and non-clinical sections follow the usual timeline. This is a critical limitation that industry stakeholders noted with some frustration, as it means PreCheck does not shorten the overall approval timeline as much as initially hoped.
What PreCheck Is Not
To understand what PreCheck offers, it is equally important to understand what it does not offer:
It is not a fast-track program. PreCheck does not provide priority review or accelerated approval pathways. It is a facility-focused engagement model, not a product-focused expedited review.
It is not a GMP certificate. Unlike the European system, where facilities can obtain a GMP certificate independent of any product application, PreCheck still requires a product application to trigger Phase 2. The Facility Readiness Phase (Phase 1) provides early engagement, but does not result in a standalone “facility approval” that can be referenced by multiple products or multiple sponsors.
It is not mandatory. PreCheck is voluntary. Manufacturers can continue to follow the traditional PAI/PLI pathway if they prefer.
It does not apply to existing facilities (yet). PreCheck is designed for new domestic manufacturing facilities. Industry stakeholders have requested expansion to include existing facility expansions and retrofits, but the FDA has not committed to this.
It does not decouple facility inspections from product approvals. Despite industry’s strong push for this—Big Pharma executives from Eli Lilly, Merck, and others explicitly requested at the public meeting that the FDA adopt the EMA model of decoupling GMP inspections from product applications—the FDA has not agreed to this. Phase 1 provides early feedback, but Phase 2 still ties the facility assessment to a specific product application.
The Type V DMF as the Backbone of PreCheck
The Type V Drug Master File is the operational mechanism through which PreCheck functions.
Historically, Type V DMFs have been a catch-all category for “FDA-accepted reference information” that doesn’t fit into the other DMF types (Type II for drug substances, Type III for packaging, Type IV for excipients). They have been used primarily for device constituent parts in combination products.
PreCheck repurposes the Type V DMF as a facility-centric repository. Instead of focusing on a material or a component, the Type V DMF in the PreCheck context contains:
Equipment and utilities: Specifications, qualification status, maintenance programs
The idea is that this DMF becomes a reusable asset. If a manufacturer builds a facility and completes the PreCheck Facility Readiness Phase, that facility’s Type V DMF can be referenced by multiple product applications from the same sponsor. This reduces redundant submissions and allows the FDA to build institutional knowledge about a facility over time.
However—and this is where the limitations become apparent—the Type V DMF is sponsor-specific. If the facility is a Contract Manufacturing Organization (CMO), the FDA has not clarified how the DMF ownership works or whether multiple API sponsors using the same CMO can leverage the same facility DMF. Industry stakeholders raised this as a significant concern at the public meeting, noting that CMOs account for approximately 50% of all facility-related CRLs.
The Type V DMF vs. Site Master File: Convergent Evolutions in Facility Documentation
The Type V DMF requirement in PreCheck bears a striking resemblance—and some critical differences—to the Site Master File (SMF) required under EU GMP and PIC/S guidelines. Understanding this comparison reveals both the potential of PreCheck and its limitations.
What is a Site Master File?
The Site Master File is a GMP documentation requirement in the EU, mandated under Chapter 4 of the EU GMP Guideline. PIC/S provides detailed guidance on SMF preparation in document PE 008-4. The SMF is:
A facility-centric document prepared by the pharmaceutical manufacturer
Typically 25-30 pages plus appendices, designed to be “readable when printed on A4 paper”
A living document that is part of the quality management system, updated regularly (recommended every 2 years)
Submitted to regulatory authorities to demonstrate GMP compliance and facilitate inspection planning
The purpose of the SMF is explicit: to provide regulators with a comprehensive overview of the manufacturing operations at a named site, independent of any specific product. It answers the question: “What GMP activities occur at this location?”
Required SMF Contents (per PIC/S PE 008-4 and EU guidance):
General Information: Company name, site address, contact information, authorized manufacturing activities, manufacturing license copy
Quality Management System: QA/QC organizational structure, key personnel qualifications, training programs, release procedures for Qualified Persons
Personnel: Number of employees in production, QC, QA, warehousing; reporting structure
Premises and Equipment: Site layouts, room classifications, pressure differentials, HVAC systems, major equipment lists
Documentation: Description of documentation systems (batch records, SOPs, specifications)
Production: Brief description of manufacturing operations, in-process controls, process validation policy
Quality Control: QC laboratories, test methods, stability programs, reference standards
Distribution, Complaints, and Product Recalls: Systems for handling complaints, recalls, and distribution controls
Self-Inspection: Internal audit programs and CAPA systems
Critically, the SMF is product-agnostic. It describes the facility’s capabilities and systems, not specific product formulations or manufacturing procedures. An appendix may list the types of products manufactured (e.g., “solid oral dosage forms,” “sterile injectables”), but detailed product-specific CMC information is not included.
How the Type V DMF Differs from the Site Master File
The FDA’s Type V DMF in PreCheck serves a similar purpose but with important distinctions:
Similarities:
Both are facility-centric documents describing site operations, quality systems, and GMP capabilities
Both include site layouts, equipment specifications, and quality management elements
Both are intended to facilitate regulatory review and inspection planning
Both are living documents that can be updated as the facility changes
Critical Differences:
| Dimension | Site Master File (EU/PIC/S) | Type V DMF (FDA PreCheck) |
| --- | --- | --- |
| Regulatory Status | Mandatory for EU manufacturing license | Voluntary (PreCheck is a voluntary program) |
| Independence from Products | Fully independent—facility can be certified without any product application | Partially independent—Phase 1 allows early review, but Phase 2 still ties to a product application |
| Ownership | Facility owner (manufacturer or CMO) | Sponsor-specific—unclear for CMO facilities with multiple clients |
| Regulatory Outcome | Can support GMP certificate or manufacturing license independent of product approvals | Does not result in standalone facility approval; only facilitates product application review |
| Scope | Describes all manufacturing operations at the site | Focused on the specific facility being built, intended to support future product applications from that sponsor |
| International Recognition | Harmonized internationally—PIC/S member authorities recognize each other’s SMF-based inspections | FDA-specific—no provision for accepting EU GMP certificates or SMFs in lieu of PreCheck participation |
| Length and Detail | 25-30 pages plus appendices, designed for conciseness | No specified page limit; QMM practices component could be extensive |
The Critical Gap: Product-Specificity vs. Facility Independence
The most significant difference lies in how the documents relate to product approvals.
In the EU system, a manufacturer submits the SMF to the National Competent Authority (NCA) as part of obtaining or maintaining a manufacturing license. The NCA inspects the facility and, if compliant, grants a GMP certificate that is valid across all products manufactured at that site.
When a Marketing Authorization Application (MAA) is later filed for a specific product, the CHMP can reference the existing GMP certificate and decide whether a pre-approval inspection is needed. If the facility has been recently inspected and found compliant, no additional inspection may be required. The facility’s GMP status is decoupled from the product approval.
The FDA’s Type V DMF in PreCheck does not create this decoupling. While Phase 1 allows early FDA review of the facility design, the Type V DMF is still tied to the sponsor’s product applications. It is not a standalone “facility certificate.” Multiple products from the same sponsor can reference the same Type V DMF, but the FDA has not clarified whether:
The DMF reduces the need for PAIs/PLIs on second, third, and subsequent products from the same facility
The DMF serves any function outside of the PreCheck program (e.g., for routine surveillance inspections)
At the September 30 public meeting, industry stakeholders explicitly requested that the FDA adopt the EU GMP certificate model, where facilities can be certified independent of product applications. The FDA acknowledged the request but did not commit to this approach.
Confidentiality: DMFs Are Proprietary
The Type V DMF operates under FDA’s DMF confidentiality rules (21 CFR 314.420). The DMF holder (the manufacturer) authorizes the FDA to reference the DMF when reviewing a specific sponsor’s application, but the detailed contents are not disclosed to the sponsor or to other parties. This protects proprietary manufacturing information, especially important for CMOs who serve competing sponsors.
However, PreCheck asks manufacturers to include Quality Management Maturity (QMM) practices in the Type V DMF—information that goes beyond what is typically in a DMF and beyond what is required in an SMF. As discussed earlier, industry is concerned that disclosing advanced quality practices could create new regulatory expectations or vulnerabilities. This tension does not exist with SMFs, which describe only what is required by GMP, not what is aspirational.
Could the FDA Adopt a Site Master File Model?
The comparison raises an obvious question: Why doesn’t the FDA simply adopt the EU Site Master File requirement?
Several barriers exist:
1. U.S. Legal Framework
The FDA does not issue facility manufacturing licenses the way EU NCAs do. In the U.S., a facility is “approved” only in the context of a specific product application (NDA, ANDA, BLA). The FDA has establishment registration (Form FDA 2656), but registration does not constitute approval—it is merely notification that a facility exists and intends to manufacture drugs.
To adopt the EU GMP certificate model, the FDA would need either:
Statutory authority to issue facility licenses independent of product applications, or
A regulatory framework that allows facilities to earn presumption of compliance that carries across multiple products
Neither currently exists in U.S. law.
2. FDA Resource Model
The FDA’s inspection system is application-driven. PAIs and PLIs are triggered by product applications, and the cost is implicitly borne by the applicant through user fees. A facility-centric certification system would require the FDA to conduct routine facility inspections on a 1-3 year cycle (as the EMA/PIC/S model does), independent of product filings.
This would require:
Significant increases in FDA inspector workforce
A new fee structure (facility fees vs. application fees)
Coordination across CDER, CBER, and Office of Inspections and Investigations (OII)
PreCheck sidesteps this by keeping the system voluntary and sponsor-initiated. The FDA does not commit to routine re-inspections; it merely offers early engagement for new facilities.
3. CDMO Business Model Complexity
Approximately 50% of facility-related CRLs involve Contract Development and Manufacturing Organizations. CDMOs manufacture products for dozens or hundreds of sponsors. In the EU, the CMO has one GMP certificate that covers all its operations, and each sponsor references that certificate in their MAAs.
In the U.S., each sponsor’s product application is reviewed independently. If the FDA were to adopt a facility certificate model, it would need to resolve:
Who pays for the facility inspection—the CMO or the sponsors?
How are facility compliance issues (OAIs, warning letters) communicated across sponsors?
Can a facility certificate be revoked without blocking all pending product applications?
These are solvable problems—the EU has solved them—but they require systemic changes to the FDA’s regulatory framework.
The Path Forward: Incremental Convergence
The Type V DMF in PreCheck is a step toward the Site Master File model, but it is not yet there. For PreCheck to evolve into a true facility-centric system, the FDA would need to:
Decouple Phase 1 (Facility Readiness) from Phase 2 (Product Application), allowing facilities to complete Phase 1 and earn a facility certificate or presumption of compliance that applies to all future products from any sponsor using that facility.
Standardize the Type V DMF content to align with PIC/S SMF guidance, ensuring international harmonization and reducing duplicative submissions for facilities operating in multiple markets.
Implement routine surveillance inspections (every 1-3 years) for facilities that have completed PreCheck, with inspection frequency adjusted based on compliance history (the PIC/S risk-based model). The main wrinkle would be setting an appropriate cadence for facilities that have completed PreCheck but are not yet engaged in commercial manufacturing.
Enhance Participation in PIC/S inspection reliance, accepting EU GMP certificates and SMFs for facilities that have been recently inspected by PIC/S member authorities, and allowing U.S. Type V DMFs to be recognized internationally.
The industry’s message at the PreCheck public meeting was clear: adopt the EU model. Whether the FDA is willing—or able—to make that leap remains to be seen.
Quality Management Maturity (QMM): The Aspirational Component
QMM is an FDA initiative (led by CDER) that aims to promote quality management practices that go beyond CGMP minimum requirements. The FDA’s QMM program evaluates manufacturers on a maturity scale across five practice areas.
The QMM assessment uses a pre-interview questionnaire and interactive discussion to evaluate how effectively a manufacturer monitors and manages quality. The maturity levels range from Undefined (reactive, ad hoc) to Optimized (proactive, embedded quality culture).
The FDA ran two QMM pilot programs between October 2020 and March 2022 to test this approach. The goal is to create a system where the FDA—and potentially the market—can recognize and reward manufacturers with mature quality systems that focus on continuous improvement rather than reactive compliance.
PreCheck asks manufacturers to include QMM practices in their Type V DMF. This is where the program becomes aspirational.
At the September 30 public meeting, industry stakeholders described submitting QMM information as “risky”. Why? Because QMM is not fully defined. The assessment protocol is still in development. The maturity criteria are not standardized. And most critically, manufacturers fear that disclosing information about their quality systems beyond what is required by CGMP could create new expectations or new vulnerabilities during inspections.
One attendee noted that “QMS information is difficult to package, usually viewed on inspection”. In other words, quality maturity is something you demonstrate through behavior, not something you document in a binder.
The FDA’s inclusion of QMM in PreCheck reveals a tension: the agency wants to move beyond compliance theater—beyond the checkbox mentality of “we have an SOP for that”—and toward evaluating whether manufacturers have the organizational discipline to maintain control over time. But the FDA has not yet figured out how to do this in a way that feels safe or fair to industry.
This is the same tension I discussed in my August 2025 post on “The Effectiveness Paradox“: how do you evaluate a quality system’s capability to detect its own failures, not just its ability to pass an inspection when everything is running smoothly?
The Current PAI/PLI Model and Why It Fails
To understand why PreCheck is necessary, we must first understand why the current Pre-Approval Inspection (PAI) and Pre-License Inspection (PLI) model is structurally flawed.
The High-Stakes Inspection at the Worst Possible Time
Under the current system, the FDA conducts a PAI (for drugs under CDER) or PLI (for biologics under CBER) to verify that a manufacturing facility is capable of producing the drug product as described in the application. This inspection is risk-based—the FDA does not inspect every application. But when an inspection is deemed necessary, the timing is brutal.
As one industry executive described at the PreCheck public meeting: “We brought on a new U.S. manufacturing facility two years ago and the PAI for that facility was weeks prior to our PDUFA date. At that point, we’re under a lot of pressure. Any questions or comments or observations that come up during the PAI are very difficult to resolve in that time frame”.
This is the structural flaw: the FDA evaluates the facility after the facility is built, after the application is filed, and as close as possible to the approval decision. If the inspection reveals deficiencies—data integrity failures, inadequate cleaning validation, contamination control gaps, equipment qualification issues—the manufacturer has very little time to correct them before the PDUFA clock expires.
The result? Complete Response Letters (CRLs).
The CRL Epidemic: Facility Failures Blocking Approvals
The data on inspection-related CRLs is stark.
In a 2024 analysis of BLA outcomes, researchers found that BLAs were issued CRLs nearly half the time in 2023—the highest rate ever recorded. Of these CRLs, approximately 20% were due to facility inspection failures.
Breaking this down further:
Foreign manufacturing sites are associated with more CRs, proportionate to the number of PLIs conducted.
Approximately 50% of facility deficiencies are for Contract Development and Manufacturing Organizations (CDMOs).
Approximately 75% of Applicant-Site CRs are for biosimilars.
The five most-cited facilities (each with ≥5 CRs) account for ~35% of all CR deficiencies.
In a separate analysis of CRL drivers from 2020–2024, Manufacturing/CMC deficiencies and Facility Inspection Failures together account for over 60% of all CRLs. This includes:
Inadequate control of production processes
Unstable manufacturing
Data gaps in CMC
GMP site inspections revealing uncontrolled processes, document gaps, hygiene issues
The pattern is clear: facility issues discovered late in the approval process are causing massive delays.
Why the Late-Stage Inspection Model Creates Failure
The PAI/PLI model creates failure for three reasons:
1. The Inspection Evaluates “Work-as-Done” When It’s Too Late to Change It
When the FDA arrives for a PAI/PLI, the facility is already built. The equipment is already installed. The processes are already validated (or supposed to be). The SOPs are already written.
If the inspector identifies a fundamental design flaw—say, inadequate segregation between manufacturing suites, or an HVAC system that cannot maintain differential pressure during interventions—the manufacturer cannot easily fix it. Redesigning cleanroom airflow or adding airlocks requires months of construction and re-qualification. The PDUFA clock does not stop.
This is analogous to the Rechon Life Science warning letter I analyzed in September 2025, where the smoke studies revealed turbulent airflow over open vials, contradicting the firm’s Contamination Control Strategy. The CCS claimed unidirectional flow protected the product. The smoke video showed eddies. But by the time this was discovered, the facility was operational, the batches were made, and the “fix” required redesigning the isolator.
2. The Inspection Creates Adversarial Pressure Instead of Collaborative Learning
Because the PAI occurs weeks before the PDUFA date, the inspection becomes a pass/fail exam rather than a learning opportunity. The manufacturer is under intense pressure to defend their systems rather than interrogate them. Questions from inspectors are perceived as threats, not invitations to improve.
This is the opposite of the falsifiable quality mindset. A falsifiable system would welcome the inspection as a chance to test whether the control strategy holds up under scrutiny. But the current timing makes this psychologically impossible. The stakes are too high.
3. The Inspection Conflates “Facility Capability” with “Product-Specific Compliance”
The PAI/PLI is nominally about verifying that the facility can manufacture the specific product in the application. But in practice, inspectors evaluate general GMP compliance—data integrity, quality unit independence, deviation investigation rigor, cleaning validation adequacy—not just product-specific manufacturing steps.
The FDA does not give “facility certificates” like the EMA does. Every product application triggers a new inspection (or waiver decision) based on the facility’s recent inspection history. This means a facility with a poor inspection outcome on one product will face heightened scrutiny on all subsequent products—creating a negative feedback loop.
Comparative Regulatory Philosophy—EMA, WHO, and PIC/S
To understand whether PreCheck is sufficient, we must compare it to how other regulatory agencies conceptualize facility oversight.
The EMA Model: Decoupling and Delegation
The European Medicines Agency (EMA) operates a decentralized inspection system. The EMA itself does not conduct inspections; instead, National Competent Authorities (NCAs) in EU member states perform GMP inspections on behalf of the EMA.
The key structural differences from the FDA:
1. Facility Inspections Are Decoupled from Product Applications
In the EU, a manufacturing facility can be inspected and receive a GMP certificate from the NCA independent of any specific product application. This certificate attests that the facility complies with EU GMP and is capable of manufacturing medicinal products according to its authorized scope.
When a Marketing Authorization Application (MAA) is filed, the CHMP (Committee for Medicinal Products for Human Use) can request a GMP inspection if needed, but if the facility has a recent GMP certificate in good standing, a new inspection may not be necessary.
This means the facility’s “GMP status” is assessed separately from the product’s clinical and CMC review. Facility issues do not automatically block product approval—they are addressed through a separate remediation pathway.
2. Risk-Based and Reliance-Based Inspection Planning
The EMA employs a risk-based approach to determine inspection frequency. Facilities are inspected on a routine re-inspection program (typically every 1-3 years depending on risk), with the frequency adjusted based on:
Previous inspection findings (critical, major, or minor deficiencies)
Product type and patient risk
Manufacturing complexity
Company compliance history
Additionally, the EMA participates in PIC/S inspection reliance (discussed below), meaning it may accept inspection reports from other competent authorities without conducting its own inspection.
3. Mutual Recognition Agreement (MRA) with the FDA
The U.S. and EU have a Mutual Recognition Agreement for GMP inspections. Under this agreement, the FDA and EMA recognize each other’s inspection outcomes for human medicines, reducing duplicate inspections.
Importantly, the EMA has begun accepting FDA inspection reports proactively during the pre-submission phase. Applicants can provide FDA inspection reports to support their MAA, allowing the EMA to make risk-based decisions about whether an additional inspection is needed.
This is the inverse of what the FDA is attempting with PreCheck. The EMA is saying: “We trust the FDA’s inspection, so we don’t need to repeat it.” The FDA, with PreCheck, is saying: “We will inspect early, so we don’t need to repeat it later.” Both approaches aim to reduce redundancy, but the EMA’s reliance model is more mature.
WHO Prequalification: Phased Inspections and Leveraging SRAs
The WHO Prequalification (PQ) program provides an alternative model for facility assessment, particularly relevant for manufacturers in low- and middle-income countries (LMICs).
Key features:
1. Inspection Occurs During the Dossier Assessment, Not After
Unlike the FDA’s PAI (which occurs near the end of the review), WHO PQ conducts inspections within 6 months of dossier acceptance for assessment. This means the facility inspection happens in parallel with the technical review, not at the end.
If the inspection reveals deficiencies, the manufacturer submits a Corrective and Preventive Action (CAPA) plan, and WHO conducts a follow-up inspection within 6-9 months. The prequalification decision is not made until the inspection is closed.
This phased approach reduces the “all-or-nothing” pressure of the FDA’s late-stage PAI.
2. Routine Inspections Every 1-3 Years
Once a product is prequalified, WHO conducts routine inspections every 1-3 years to verify continued compliance. This aligns with the Continued Process Verification concept in FDA’s Stage 3 validation—the idea that a facility is not “validated forever” after one inspection, but must demonstrate ongoing control.
3. Reliance on Stringent Regulatory Authorities (SRAs)
WHO PQ may leverage inspection reports from Stringent Regulatory Authorities (SRAs) or WHO-Listed Authorities (WLAs). If the facility has been recently inspected by an SRA (e.g., FDA, EMA, Health Canada) and the scope is appropriate, WHO may waive the onsite inspection and rely on the SRA’s findings.
This is a trust-based model: WHO recognizes that conducting duplicate inspections wastes resources, and that a well-documented inspection by a competent authority provides sufficient assurance.
The FDA’s PreCheck program does not include this reliance mechanism. PreCheck is entirely FDA-centric—there is no provision for accepting EMA or WHO inspection reports to satisfy Phase 1 or Phase 2 requirements.
PIC/S: Risk-Based Inspection Planning and Classification
The Pharmaceutical Inspection Co-operation Scheme (PIC/S) is an international framework for harmonizing GMP inspections across member authorities.
Two key PIC/S documents are relevant to this discussion:
1. PI 037-1: Risk-Based Inspection Planning
PIC/S provides a qualitative risk management tool to help inspectorates prioritize inspections. The model assigns each facility a risk rating (A, B, or C) based on:
Intrinsic Risk: Product type, complexity, patient population
Compliance Risk: Previous inspection outcomes, deficiency history
The risk rating determines inspection frequency:
A (Low Risk): Reduced frequency (2-3 years)
B (Moderate Risk): Moderate frequency (1-2 years)
C (High Risk): Increased frequency (<1 year, potentially multiple times per year)
Critically, PIC/S assumes that every manufacturer will be inspected at least once within the defined period. There is no such thing as “perpetual approval” based on one inspection.
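To make the mechanics concrete, here is a minimal sketch of how a risk-rating model of this kind could be expressed in code. PI 037-1 is a qualitative tool, so the numeric scores, weights, and cut-offs below are my own illustrative assumptions, not the PIC/S algorithm.

```python
# Illustrative sketch only: PI 037-1 is qualitative, so the scores and
# cut-offs here are assumptions chosen for demonstration.

from dataclasses import dataclass

@dataclass
class Facility:
    intrinsic_risk: int    # 1 (simple, low-risk product) .. 5 (sterile/complex, vulnerable patients)
    compliance_risk: int   # 1 (clean inspection history) .. 5 (repeated critical deficiencies)

def risk_rating(f: Facility) -> str:
    """Combine intrinsic and compliance risk into an A/B/C rating."""
    score = f.intrinsic_risk + f.compliance_risk   # ranges 2..10
    if score <= 4:
        return "A"
    if score <= 7:
        return "B"
    return "C"

# The rating drives how often the facility is placed on the inspection schedule.
INSPECTION_INTERVAL_YEARS = {"A": 3.0, "B": 1.5, "C": 0.5}

sterile_cmo = Facility(intrinsic_risk=4, compliance_risk=4)   # e.g. sterile CMO with a recent OAI
rating = risk_rating(sterile_cmo)
print(rating, INSPECTION_INTERVAL_YEARS[rating])              # -> C 0.5
```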
2. PI 048-1: GMP Inspection Reliance
PIC/S introduced a guidance on inspection reliance in 2018. This guidance provides a framework for desktop assessment of GMP compliance based on the inspection activities of other competent authorities.
The key principle: if another PIC/S member authority has recently inspected a facility and found it compliant, a second authority may accept that finding without conducting its own inspection.
This reliance is conditional—the accepting authority must verify that:
The scope of the original inspection covers the relevant products and activities
The original inspection was recent (typically within 2-3 years)
The original authority is a trusted PIC/S member
There have been no significant changes or adverse events since the inspection
This is the most mature version of the trust-based inspection model. It recognizes that GMP compliance is not a static state that can be certified once, but also that redundant inspections by multiple authorities waste resources and delay market access.
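The reliance conditions above are essentially a conjunction of checks, which a short sketch can capture. The field names, the trusted-authority list, and the three-year recency window below are assumptions for illustration; PI 048-1 leaves these judgments to the accepting authority.

```python
# Sketch of a PI 048-1-style desktop reliance check. The conditions mirror the
# four bullets above; names and thresholds are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class InspectionRecord:
    authority: str
    years_since_inspection: float
    scope_covered: set = field(default_factory=set)     # e.g. {"sterile filling", "API synthesis"}
    significant_changes_since: bool = False

TRUSTED_PICS_MEMBERS = {"EMA/NCA", "MHRA", "TGA", "Health Canada"}   # illustrative list

def can_rely(record: InspectionRecord, required_scope: set, max_age_years: float = 3.0) -> bool:
    """Return True if a second authority could accept the prior inspection outcome."""
    return (
        record.authority in TRUSTED_PICS_MEMBERS
        and record.years_since_inspection <= max_age_years
        and required_scope <= record.scope_covered            # scope must cover the relevant activities
        and not record.significant_changes_since
    )

prior = InspectionRecord("MHRA", 1.5, {"sterile filling", "API synthesis"})
print(can_rely(prior, required_scope={"sterile filling"}))     # -> True
```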
Comparative Summary

| Dimension | FDA (Current PAI/PLI) | FDA PreCheck (Proposed) | EMA/EU | WHO PQ | PIC/S Framework |
| --- | --- | --- | --- | --- | --- |
| Timing of Inspection | Late (near PDUFA) | Early (design phase) + late (application) | Variable, risk-based | Early (during assessment) | Risk-based (1-3 years) |
| Facility vs. Product Focus | Product-specific | Facility (Phase 1) → product (Phase 2) | Facility-centric (GMP certificate) | Product-specific with facility focus | Facility-centric |
| Decoupling | No | Partial (Phase 1 early feedback) | Yes (GMP certificate independent) | No, but phased | Yes (risk-based frequency) |
| Reliance on Other Authorities | No | No | Yes (MRA, PIC/S) | Yes (SRA reliance) | Yes (core principle) |
| Frequency | Per application | Phase 1 (once) → Phase 2 (per application) | Routine re-inspection (1-3 years) | Routine (1-3 years) | Risk-based (A/B/C) |
| Consequence of Failure | CRL, approval blocked | Phase 1: design guidance; Phase 2: potential CRL | CAPA, may not block approval | CAPA, follow-up inspection | Remediation, increased frequency |
The striking pattern: the FDA is the outlier. Every other major regulatory system has moved toward:
Decoupling facility inspections from product applications
Risk-based, routine inspection frequencies
Reliance mechanisms to avoid duplicate inspections
Facility-centric GMP certificates or equivalent
PreCheck is the FDA’s first step toward this model, but it is not yet there. Phase 1 provides early engagement, but Phase 2 still ties facility assessment to a specific product. PreCheck does not create a standalone “facility approval” that can be referenced across products or shared among CMO clients.
Potential Benefits of PreCheck (When It Works)
Despite its limitations, PreCheck could offer real benefits over the status quo—if it is implemented effectively.
Benefit 1: Early Detection of Facility Design Flaws
The most obvious benefit of PreCheck is that it allows the FDA to review facility design during construction, rather than after the facility is operational.
As one industry expert noted at the public meeting: “You’re going to be able to solve facility issues months, even years before they occur”.
Consider the alternative. Under the current PAI/PLI model, if the FDA inspector discovers during a pre-approval inspection that the cleanroom differential pressure cannot be maintained during material transfer, the manufacturer faces a choice:
Redesign the HVAC system (months of construction, re-commissioning, re-qualification)
Withdraw the application
Argue that the deficiency is not critical and hope the FDA agrees
All of these options are expensive and delay the product launch.
PreCheck, by contrast, allows the FDA to flag this issue during the design review (Phase 1), when the HVAC system is still on the engineering drawings. The manufacturer can adjust the design before pouring concrete.
This is the principle of Design Qualification (DQ) applied to the regulatory inspection timeline. Just as equipment must pass DQ before moving to Installation Qualification (IQ), the facility should pass regulatory design review before moving to construction and operation.
Benefit 2: Reduced Uncertainty and More Predictable Timelines
The current PAI/PLI system creates uncertainty about whether an inspection will be scheduled, when it will occur, and what the outcome will be.
Manufacturers described this uncertainty as one of the biggest pain points at the PreCheck public meeting. One executive noted that PAIs are often scheduled with short notice, and manufacturers struggle to align their production schedules (especially for seasonal products like vaccines) with the FDA’s inspection availability.
PreCheck introduces structure to this chaos. If a manufacturer completes Phase 1 successfully, the FDA has already reviewed the facility and provided feedback. The manufacturer knows what the FDA expects. When Phase 2 begins (the product application), the CMC review can proceed with greater confidence that facility issues will not derail the approval.
This does not eliminate uncertainty entirely—Phase 2 still involves an inspection (or inspection waiver decision), and deficiencies can still result in CRLs. But it shifts the uncertainty earlier in the process, when corrections are cheaper.
Benefit 3: Building Institutional Knowledge at the FDA
One underappreciated benefit of PreCheck is that it allows the FDA to build institutional knowledge about a manufacturer’s quality systems over time.
Under the current model, a PAI inspector arrives at a facility for 5-10 days, reviews documents, observes operations, and leaves. The inspection report is filed. If the same facility files a second product application two years later, a different inspector may conduct the PAI, and the process starts from scratch.
The PreCheck Type V DMF, by contrast, is a living document that accumulates information about the facility over its lifecycle. The FDA reviewers who participate in Phase 1 (design review) can provide continuity into Phase 2 (application review) and potentially into post-approval surveillance.
This is the principle behind the EMA’s GMP certificate model: once the facility is certified, subsequent inspections build on the previous findings rather than starting from zero.
Industry stakeholders explicitly requested this continuity at the PreCheck meeting, asking the FDA to “keep the same reviewers in place as the process progresses”. The implication: trust is built through relationships and institutional memory, not one-off inspections.
Benefit 4: Incentivizing Quality Management Maturity
By including Quality Management Maturity (QMM) practices in the Type V DMF, PreCheck encourages manufacturers to invest in advanced quality systems beyond CGMP minimums.
This is aspirational, not transactional. The FDA is not offering faster approvals or reduced inspection frequency in exchange for QMM participation—at least not yet. But the long-term vision is that manufacturers with mature quality systems will be recognized as lower-risk, and this recognition could translate into regulatory flexibility (e.g., fewer post-approval inspections, faster review of post-approval changes).
This aligns with the philosophy I have argued for throughout 2025: a quality system should not be judged by its compliance on the day of the inspection, but by its ability to detect and correct failures over time. A mature quality system is one that is designed to falsify its own assumptions—to seek out the cracks before they become catastrophic failures.
The QMM framework is the FDA’s attempt to operationalize this philosophy. Whether it succeeds depends on whether the FDA can develop a fair, transparent, and non-punitive assessment protocol—something industry is deeply skeptical about.
Challenges and Industry Concerns
The September 30, 2025 public meeting revealed that while industry welcomes PreCheck, the program as proposed has significant gaps.
Challenge 1: PreCheck Does Not Decouple Facility Inspections from Product Approvals
The single most consistent request from industry was: decouple GMP facility inspections from product applications.
Executives from Eli Lilly, Merck, Johnson & Johnson, and others explicitly called for the FDA to adopt the EMA model, where a facility can be inspected and certified independent of a product application, and that certification can be referenced by multiple products.
Why does this matter? Because under the current system (and under PreCheck as proposed), if a facility has a compliance issue, all product applications relying on that facility are at risk.
Consider a CMO that manufactures API for 10 different sponsors. If the CMO fails a PAI for one sponsor’s product, the FDA may place the entire facility under heightened scrutiny, delaying approvals for all 10 sponsors. This creates a cascade failure where one product’s facility issue blocks the market access of unrelated products.
The EMA’s GMP certificate model avoids this by treating the facility as a separate regulatory entity. If the facility has compliance issues, the NCA works with the facility to remediate them independent of pending product applications. The product approvals may be delayed, but the remediation pathway is separate.
The FDA’s Michael Kopcha acknowledged the request but did not commit: “Decoupling, streamlining, and more up-front communication is helpful… We will have to think about how to go about managing and broadening the scope”.
Challenge 2: PreCheck Only Applies to New Facilities, Not Existing Ones
PreCheck is designed for new domestic manufacturing facilities. But the majority of facility-related CRLs involve existing facilities—either because they are making post-approval changes, transferring manufacturing sites, or adding new products.
Industry stakeholders requested that PreCheck be expanded to include:
Existing facility expansions and retrofits
Post-approval changes (e.g., adding a new production line, changing a manufacturing process)
Site transfers (moving production from one facility to another)
The FDA did not commit to this expansion, but Kopcha noted that the agency is “thinking about how to broaden the scope”.
The challenge here is that the FDA lacks a facility lifecycle management framework. The current system treats each product application as a discrete event, with no mechanism for a facility to earn cumulative credit for good performance across multiple products over time.
This is what the PIC/S risk-based inspection model provides: a facility with a strong compliance history moves to reduced inspection frequency (e.g., every 3 years instead of annually). A facility with a poor history moves to increased frequency (e.g., multiple inspections per year). The inspection burden is proportional to risk.
PreCheck Phase 1 could serve this function—if it were expanded to existing facilities. A CMO that completes Phase 1 and demonstrates mature quality systems could earn presumption of compliance for future product applications, reducing the need for repeated PAIs/PLIs.
But as currently designed, PreCheck is a one-time benefit for new facilities only.
Challenge 3: Confidentiality and Intellectual Property Concerns
Manufacturers expressed significant concern about what information the FDA will require in the Type V DMF and whether that information will be protected from Freedom of Information Act (FOIA) requests.
The concern is twofold:
1. Proprietary Manufacturing Details
The Type V DMF is supposed to include facility layouts, equipment specifications, and process flow diagrams. For some manufacturers—especially those with novel technologies or proprietary processes—this information is competitively sensitive.
If the DMF is subject to FOIA disclosure (even with redactions), competitors could potentially reverse-engineer the manufacturing strategy.
2. CDMO Relationships
For Contract Development and Manufacturing Organizations (CDMOs), the Type V DMF creates a dilemma. The CDMO owns the facility, but the sponsor owns the product. Who submits the DMF? Who controls access to it? If multiple sponsors use the same CDMO facility, can they all reference the same DMF, or must each sponsor submit a separate one?
Industry requested clarity on these ownership and confidentiality issues, but the FDA has not yet provided detailed guidance.
This is not a trivial concern. Approximately 50% of facility-related CRLs involve CDMOs. If PreCheck cannot accommodate the CDMO business model, its utility is limited.
The Confidentiality Paradox: Good for Companies, Uncertain for Consumers
The confidentiality protections embedded in the DMF system—and by extension, in PreCheck’s Type V DMF—serve a legitimate commercial purpose. They allow manufacturers to protect proprietary manufacturing processes, equipment specifications, and quality system innovations from competitors. This protection is particularly critical for Contract Manufacturing Organizations (CMOs) who serve multiple competing sponsors and cannot afford to have one client’s proprietary methods disclosed to another.
But there is a tension here that deserves explicit acknowledgment: confidentiality rules that benefit companies are not necessarily optimal for consumers. This is not an argument for eliminating trade secret protections—innovation requires some degree of secrecy. Rather, it is a call to examine where the balance is struck and whether current confidentiality practices are serving the public interest as robustly as they serve commercial interests.
What Confidentiality Hides from Public View
Under current FDA confidentiality rules (21 CFR 314.420 for DMFs, and broader FOIA exemptions for commercial information), the following categories of information are routinely shielded from public disclosure.
1. Detailed Manufacturing Process and Facility Information
The detailed manufacturing procedures, equipment specifications, and process parameters submitted in Type II DMFs (drug substances) and Type V DMFs (facilities) are never disclosed to the public. They may not even be disclosed to the sponsor referencing the DMF—only the FDA reviews them.
This means that if a manufacturer is using a novel but potentially risky manufacturing technique—say, a continuous manufacturing process that has not been validated at scale, or a cleaning procedure that is marginally effective—the public has no way to know. The FDA reviews this information, but the public cannot verify the FDA’s judgment.
2. Drug Pricing Data and Financial Arrangements
Pharmaceutical companies have successfully invoked trade secret protections to keep drug prices, manufacturing costs, and financial arrangements (rebates, discounts) confidential. In the United States, transparency laws requiring companies to disclose drug pricing information have faced constitutional challenges on the grounds that such disclosure constitutes an uncompensated “taking” of trade secrets.
This opacity prevents consumers, researchers, and policymakers from understanding why drugs cost what they cost and whether those prices are justified by manufacturing expenses or are primarily driven by monopoly pricing.
3. Manufacturing Deficiencies and Inspection Findings
When the FDA conducts an inspection and issues a Form FDA 483 (Inspectional Observations), those observations are eventually made public. But the detailed underlying evidence—the batch records showing failures, the deviations that were investigated, the CAPA plans that were proposed—remain confidential as part of the company’s internal quality records.
This means the public can see that a deficiency occurred, but cannot assess how serious it was or whether the corrective action was adequate. We are asked to trust that the FDA’s judgment was sound, without access to the data that informed that judgment.
The Public Interest Argument for Greater Transparency
The case for reducing confidentiality protections—or at least creating exceptions for public health—rests on several arguments:
Argument 1: The Public Funds Drug Development
As health law scholars have noted, the public makes extraordinary investments in private companies’ drug research and development through NIH grants, tax incentives, and government contracts. Yet details of clinical trial data, manufacturing processes, and government contracts often remain secret, even though the public paid for the research.
During the COVID-19 pandemic, for example, the Johnson & Johnson vaccine contract explicitly allowed the company to keep secret “production/manufacturing know-how, trade secrets, [and] clinical data,” despite massive public funding of the vaccine’s development. European Commission vaccine contracts similarly included generous redactions of price per dose, amounts paid up front, and rollout schedules.
If the public is paying for innovation, the argument goes, the public should have access to the results.
Argument 2: Regulators Are Understaffed and Sometimes Wrong
The FDA is chronically understaffed and under pressure to approve medicines quickly. Regulators sometimes make mistakes. Without access to the underlying data—manufacturing details, clinical trial results, safety signals—independent researchers cannot verify the FDA’s conclusions or identify errors that might not be apparent to a time-pressured reviewer.
Clinical trial transparency advocates argue that summary-level data, study protocols, and even individual participant data can be shared in ways that protect patient privacy (through anonymization and redaction) while allowing independent verification of safety and efficacy claims.
The same logic applies to manufacturing data. If a facility has chronic contamination control issues, or a process validation that barely meets specifications, should that information remain confidential? Or should researchers, patient advocates, and public health officials have access to assess whether the FDA’s acceptance of the facility was reasonable?
Argument 3: Trade Secret Claims Are Often Overbroad
Legal scholars studying pharmaceutical trade secrecy have documented that companies often claim trade secret protection for information that does not meet the legal definition of a trade secret.
For example, “naked price” information—the actual price a company charges for a drug—has been claimed as a trade secret to prevent regulatory disclosure, even though such information provides minimal competitive advantage and is of significant public interest. Courts have begun to push back on these claims, recognizing that the public interest in transparency can outweigh the commercial interest in secrecy, especially in highly regulated industries like pharmaceuticals.
The concern is that companies use trade secret law strategically to suppress unwanted regulation, transparency, and competition—not to protect genuine innovations.
Argument 4: Secrecy Delays Generic Competition
Even after patent and data exclusivity periods expire, trade secret protections allow pharmaceutical companies to keep the precise composition or manufacturing process for medications confidential. This slows the release of generic competitors by preventing them from relying on existing engineering and manufacturing data.
For complex biologics, this problem is particularly acute. Biosimilar developers must reverse-engineer the manufacturing process without access to the originator’s process data, leading to delays of many years and higher costs.
If manufacturing data were disclosed after a defined exclusivity period—say, 10 years—generic and biosimilar developers could bring competition to market faster, reducing drug prices for consumers.
The Counter-Argument: Why Companies Need Confidentiality
It is important to acknowledge the legitimate reasons why confidentiality protections exist:
1. Protecting Innovation Incentives
If manufacturing processes were disclosed, competitors could immediately copy them, undermining the innovator’s investment in developing the process. This would reduce incentives for process innovation and potentially slow the development of more efficient, higher-quality manufacturing methods.
2. Preventing Misuse of Information
Detailed manufacturing data could, in theory, be used by bad actors to produce counterfeit drugs or to identify vulnerabilities in the supply chain. Confidentiality reduces these risks.
3. Maintaining Competitive Differentiation
For CMOs in particular, their manufacturing expertise is their product. If their processes were disclosed, they would lose competitive advantage and potentially business. This could consolidate the industry and reduce competition among manufacturers.
4. Protecting Collaborations
The DMF system enables collaborations between API suppliers, excipient manufacturers, and drug sponsors precisely because each party can protect its proprietary information. If all information had to be disclosed, vertical integration would increase (companies would manufacture everything in-house to avoid disclosure), reducing specialization and efficiency.
Where Should the Balance Be?
The tension is real, and there is no simple resolution. But several principles might guide a more consumer-protective approach to confidentiality:
Principle 1: Time-Limited Secrecy
Trade secrets currently have no expiration date—they can remain secret indefinitely, as long as they remain non-public. But public health interests might be better served by time-limited confidentiality. After a defined period (e.g., 10-15 years post-approval), manufacturing data could be disclosed to facilitate generic/biosimilar competition.
Principle 2: Public Interest Exceptions
Confidentiality rules should include explicit public health exceptions that allow disclosure when there is a compelling public interest—for example, during pandemics, public health emergencies, or when safety signals emerge. Oregon’s drug pricing transparency law includes such an exception: trade secrets are protected unless the public interest requires disclosure.
Principle 3: Independent Verification Rights
Researchers, patient advocates, and public health officials should have structured access to clinical trial data, manufacturing data, and inspection findings under conditions that protect commercial confidentiality (e.g., through data use agreements, anonymization, secure research environments). The goal is not to publish trade secrets on the internet, but to enable independent verification of regulatory decisions.
The FDA already does this in limited ways—for example, by allowing outside experts to review confidential data during advisory committee meetings under non-disclosure agreements. This model could be expanded.
Principle 4: Narrow Trade Secret Claims
Courts and regulators should scrutinize trade secret claims more carefully, rejecting overbroad claims that seek to suppress transparency without protecting genuine innovation. “Naked price” information, aggregate safety data, and high-level manufacturing principles should not qualify for trade secret protection, even if detailed process parameters do.
Implications for PreCheck
In the context of PreCheck, the confidentiality tension manifests in several ways:
For Type V DMFs: The facility information submitted in Phase 1—site layouts, quality systems, QMM practices—will be reviewed by the FDA but not disclosed to the public or even to other sponsors using the same CMO. If a facility has marginal quality practices but passes PreCheck Phase 1, the public will never know. We are asked to trust the FDA’s judgment without transparency into what was reviewed or what deficiencies (if any) were identified.
For QMM Disclosure: Industry is concerned that submitting Quality Management Maturity information is “risky” because it discloses advanced practices beyond CGMP requirements. But the flip side is: if manufacturers are not willing to disclose their quality practices, how can regulators—or the public—assess whether those practices are adequate?
QMM is supposed to reward transparency and maturity. But if the information remains confidential and is never subjected to independent scrutiny, it becomes another form of compliance theater—a document that the FDA reviews in secret, with no external verification.
For Inspection Reliance: If the FDA begins accepting EMA GMP certificates or PIC/S inspection reports (as industry has requested), will those international inspection findings be more transparent than U.S. inspections? In some jurisdictions, yes—the EU publishes more detailed inspection outcomes than the FDA does. But in other jurisdictions, confidentiality practices may be even more restrictive.
A Tension Worth Monitoring
I do not claim to have resolved this tension. Reasonable people can disagree on where the line should be drawn between protecting innovation and ensuring public accountability.
But what I will argue is this: the tension deserves ongoing attention. As PreCheck evolves, as QMM assessments become more detailed, as Type V DMFs accumulate facility data over years—we should ask, repeatedly:
Who benefits from confidentiality, and who bears the risk?
Are there ways to enable independent verification without destroying commercial incentives?
Is the FDA using its discretion to share data proactively, or defaulting to secrecy when transparency might serve the public interest?
The history of pharmaceutical regulation is, in part, a history of secrets revealed too late. Vioxx’s cardiovascular risks. Thalidomide’s teratogenicity. OxyContin’s addictiveness. In each case, information that was known or knowable earlier remained hidden—sometimes due to fraud, sometimes due to regulatory caution, sometimes due to confidentiality rules that prioritized commercial interests over public health.
PreCheck, if it succeeds, will create a new repository of confidential facility data held by the FDA. That data could be a public asset—enabling faster approvals, better-informed regulatory decisions, earlier detection of quality problems. Or it could become another black box, where the public is asked to trust that the system works without access to the evidence.
The choice is not inevitable. It is a design decision—one that regulators, legislators, and industry will make, explicitly or implicitly, in the years ahead.
We should make it explicitly, with full awareness of whose interests are being prioritized and what risks are being accepted on behalf of patients who have no seat at the table.
Challenge 4: QMM is Not Fully Defined, and Submission Feels “Risky”
As discussed earlier, manufacturers are wary of submitting Quality Management Maturity (QMM) information because the assessment framework is not fully developed.
One attendee at the public meeting described QMM submission as “risky” because:
The FDA has not published the final QMM assessment protocol
The maturity criteria are subjective and open to interpretation
Disclosing quality practices beyond CGMP requirements could create new expectations that the manufacturer must meet
The analogy is this: if you tell the FDA, “We use statistical process control to detect process drift in real-time,” the FDA might respond, “Great! Show us your SPC data for the last two years.” If that data reveals a trend that the manufacturer considered acceptable but the FDA considers concerning, the manufacturer has created a problem by disclosing the information.
This is the opposite of the trust-building that QMM is supposed to enable. Instead of rewarding manufacturers for advanced quality practices, the program risks punishing them for transparency.
Until the FDA clarifies that QMM participation is non-punitive and that disclosure of advanced practices will not trigger heightened scrutiny, industry will remain reluctant to engage fully with this component of PreCheck.
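For readers unfamiliar with what “show us your SPC data” might look like in practice, here is a minimal sketch of one common run rule (consecutive points on one side of the centerline). The data, centerline, and run length are invented for illustration; real programs define their rules in validated trending procedures.

```python
# Minimal SPC run-rule check: flag points where `run_length` consecutive values
# fall on the same side of the centerline (one Western Electric-style rule).
# Data, centerline, and run length are invented for illustration.

def run_rule_violations(values, centerline, run_length=8):
    violations = []
    run, side = 0, 0
    for i, v in enumerate(values):
        current = 1 if v > centerline else (-1 if v < centerline else 0)
        if current != 0 and current == side:
            run += 1
        else:
            run = 1 if current != 0 else 0
        side = current
        if run >= run_length:
            violations.append(i)   # index where the run reaches the rule length
    return violations

# A slow upward drift in assay results around a target of 100.0
history = [99.8, 100.1, 99.9, 100.2, 100.3, 100.4, 100.2, 100.5, 100.6, 100.4, 100.7, 100.8]
print(run_rule_violations(history, centerline=100.0))   # -> [10, 11]
```

The point of the analogy stands either way: once this kind of data is disclosed, the interpretation of a flagged run becomes a regulatory conversation, not an internal one.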
Challenge 5: Resource Constraints—Will PreCheck Starve Other FDA Programs?
Industry stakeholders raised a practical concern: if the FDA dedicates inspectors and reviewers to PreCheck, will that reduce resources for routine surveillance inspections, post-approval change reviews, and other critical programs?
The FDA has not provided a detailed resource plan for PreCheck. The program is described as voluntary, which implies it is additive to existing workload, not a replacement for existing activities.
But inspectors and reviewers are finite resources. If PreCheck becomes popular (which the FDA hopes it will), the agency will need to either:
Hire additional staff to support PreCheck (requiring Congressional appropriations)
Deprioritize other inspection activities (e.g., routine surveillance)
Limit the number of PreCheck engagements per year (creating a bottleneck)
One industry representative noted that the economic incentives for domestic manufacturing are weak—it takes 5-7 years to build a new plant, and generic drug margins are thin. Unless the FDA can demonstrate that PreCheck provides substantial time and cost savings, manufacturers may not participate at the scale needed to meet the program’s supply chain security goals.
The CRL Crisis—How Facility Deficiencies Are Blocking Approvals
To understand the urgency of PreCheck, we must examine the data on inspection-related Complete Response Letters (CRLs).
The Numbers: CRLs Are Rising, Facility Issues Are a Leading Cause
In 2023, BLAs were issued CRLs nearly half the time—an unprecedented rate. This represents a sharp increase from previous years, driven by multiple factors:
More BLA submissions overall (especially biosimilars under the 351(k) pathway)
Increased scrutiny of manufacturing and CMC sections
More for-cause inspections (roughly 2.5 times the historical baseline share in 2025)
Of the CRLs issued in 2023-2024, approximately 20% were due to facility inspection failures. This makes facility issues the third most common CRL driver, behind Manufacturing/CMC deficiencies (44%) and Clinical Evidence Gaps (44%).
Breaking down the facility-related CRLs:
Foreign manufacturing sites are associated with more CRLs proportionate to the number of PLIs conducted
50% of facility deficiencies involve Contract Manufacturing Organizations (CMOs)
75% of Applicant-Site CRs are for biosimilar applications
The five most-cited facilities account for ~35% of CR deficiencies
This last statistic is revealing: the CRL problem is concentrated among a small number of repeat offenders. These facilities receive CRLs on multiple products, suggesting systemic quality issues that are not being resolved between applications.
What Deficiencies Are Causing CRLs?
Analysis of FDA 483 observations and warning letters from FY2024 reveals the top inspection findings driving CRLs:
1. Data Integrity Failures (most common)
   - ALCOA+ principles not followed
   - Inadequate audit trails
   - 21 CFR Part 11 non-compliance
2. Quality Unit Failures
   - Inadequate oversight
   - Poor release decisions
   - Ineffective CAPA systems
   - Superficial root cause analysis
3. Inadequate Process/Equipment Qualification
   - Equipment not qualified before use
   - Process validation protocols deficient
   - Continued Process Verification not implemented
4. Contamination Control and Environmental Monitoring Issues
   - Inadequate monitoring locations (the “representative” trap discussed in my Rechon and LeMaitre analyses)
   - Failure to investigate excursions
   - Contamination Control Strategy not followed
5. Stability Program Deficiencies
   - Incomplete stability testing
   - Data does not support claimed shelf-life
These findings are not product-specific. They are systemic quality system failures that affect the facility’s ability to manufacture any product reliably.
This is the fundamental problem with the current PAI/PLI model: the FDA discovers general GMP deficiencies during a product-specific inspection, and those deficiencies block approval even though they are not unique to that product.
The Cascade Effect: One Facility Failure Blocks Multiple Approvals
The data on repeat offenders is particularly troubling. Facilities with ≥3 CRs are primarily biosimilar manufacturers or CMOs.
This creates a cascade: a CMO fails a PLI for Product A. The FDA places the CMO on heightened surveillance. Products B, C, and D—all unrelated to Product A—face delayed PAIs because the FDA prioritizes re-inspecting the CMO to verify corrective actions. By the time Products B, C, and D reach their PDUFA dates, the CMO still has not cleared the OAI classification, and all three products receive CRLs.
This is the opposite of a risk-based system. Products B, C, and D are being held hostage by Product A’s facility issues, even though the manufacturing processes are different and the sponsors are different.
The EMA’s decoupled model avoids this by treating the facility as a separate remediation pathway. If the CMO has GMP issues, the NCA works with the CMO to fix them. Product applications proceed on their own timeline. If the facility is not compliant, products cannot be approved, but the remediation does not block the application review.
For-Cause Inspections: The FDA Is Catching More Failures
One contributing factor to the rise in CRLs is the sharp increase in for-cause inspections.
In 2025, the FDA conducted for-cause inspections at nearly 25% of all inspection events, up from the historical baseline of ~10%. For-cause inspections are triggered by signals such as recalls, product quality complaints, and emerging safety concerns.
For-cause inspections have a 33.5% OAI rate—5.6 times higher than routine inspections. And approximately 50% of OAI classifications lead to a warning letter or import alert.
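A quick back-of-the-envelope shows what these figures imply. The routine-inspection OAI rate is not stated directly, so I derive it from the 5.6× ratio; treat the numbers as rough estimates.

```python
# Rough arithmetic on the figures above. The routine OAI rate is implied by the
# 5.6x ratio rather than stated, so these are estimates.

oai_rate_for_cause = 0.335          # OAI rate for for-cause inspections
ratio_vs_routine = 5.6              # for-cause OAI rate vs. routine inspections
p_escalation_given_oai = 0.50       # share of OAI classifications leading to warning letter or import alert

oai_rate_routine = oai_rate_for_cause / ratio_vs_routine
p_escalation_for_cause = oai_rate_for_cause * p_escalation_given_oai

print(f"Implied routine OAI rate: {oai_rate_routine:.1%}")                                  # ~6.0%
print(f"For-cause inspection ending in warning letter/import alert: {p_escalation_for_cause:.1%}")  # ~16.8%
```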
This suggests that the FDA is increasingly detecting facilities with serious compliance issues that were not evident during prior routine inspections. These facilities are then subjected to heightened scrutiny, and their pending product applications face CRLs.
The problem: for-cause inspections are reactive. They occur after a failure has already reached the market (a recall, a complaint, a safety signal). By that point, patient harm may have already occurred.
PreCheck is, in theory, a proactive alternative. By evaluating facilities early (Phase 1), the FDA can identify systemic quality issues before the facility begins commercial manufacturing. But PreCheck only applies to new facilities. It does not solve the problem of existing facilities with poor compliance histories.
A Framework for Site Readiness—In Place, In Use, In Control
The current PAI/PLI model treats site readiness as a binary: the facility is either “compliant” or “not compliant” at a single moment in time.
PreCheck introduces a two-phase model, separating facility design review (Phase 1) from product-specific review (Phase 2).
But I propose that a more useful—and more falsifiable—framework for site readiness is three-stage:
In Place: Systems, procedures, equipment, and documentation exist and meet design specifications.
In Use: Systems and procedures are actively implemented in routine operations as designed.
In Control: Systems maintain validated state through continuous verification, trend analysis, and proactive improvement.
This framework maps directly onto:
The FDA’s process validation lifecycle (Stage 1: Process Design = In Place; Stage 2: Process Qualification = In Use; Stage 3: Continued Process Verification = In Control)
The ISPE/EU Annex 15 qualification stages (DQ/IQ = In Place; OQ/PQ = In Use; Ongoing monitoring = In Control)
The ICH Q10 “state of control” concept (In Control)
The advantage of this framework is that it explicitly separates three distinct questions that are often conflated:
Does the system exist? (In Place)
Is the system being used? (In Use)
Is the system working? (In Control)
A facility can be “In Place” without being “In Use” (e.g., SOPs are written but operators are not trained). A facility can be “In Use” without being “In Control” (e.g., operators follow procedures, but the process produces high variability and frequent deviations).
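A minimal sketch makes the separation of the three questions explicit. The fields and pass criteria below are placeholders I invented for illustration, not a regulatory checklist.

```python
# Sketch of keeping the three questions separate. Fields and pass criteria are
# invented placeholders, not a regulatory standard.

from dataclasses import dataclass

@dataclass
class FacilityRecord:
    sops_approved: bool           # In Place
    equipment_qualified: bool     # In Place
    operators_trained: bool       # In Use
    conforming_batches: int       # In Use
    deviation_trend: str          # In Control: "improving", "stable", or "worsening"
    capa_recurrence: bool         # In Control: do similar deviations keep recurring?

def in_place(f: FacilityRecord) -> bool:
    return f.sops_approved and f.equipment_qualified

def in_use(f: FacilityRecord) -> bool:
    return in_place(f) and f.operators_trained and f.conforming_batches > 0

def in_control(f: FacilityRecord) -> bool:
    return in_use(f) and f.deviation_trend != "worsening" and not f.capa_recurrence

# A facility can be In Place and In Use yet still fail In Control:
site = FacilityRecord(True, True, True, 12, "worsening", True)
print(in_place(site), in_use(site), in_control(site))   # -> True True False
```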
Let me define each stage in detail.
Stage 1: In Place (Structural Readiness)
Definition: Systems, procedures, equipment, and documentation exist and meet design specifications.
This is the output of Design Qualification (DQ) and Installation Qualification (IQ). It answers the question: “Has the facility been designed and built according to GMP requirements?”
Key Elements:
Facility layout meets User Requirements Specification (URS) and regulatory expectations
Equipment installed per manufacturer specifications
SOPs written and approved
Quality systems documented (change control, deviation management, CAPA, training)
Utilities qualified (HVAC, water systems, compressed air, clean steam)
Alignment with PreCheck: This is what Phase 1 (Facility Readiness) evaluates. The Type V DMF submitted during Phase 1 contains evidence that systems are In Place.
Alignment with EMA: This corresponds to the initial GMP inspection conducted by the NCA before granting a manufacturing license.
Inspection Outcome: If a facility is “In Place,” it means the infrastructure exists. But it says nothing about whether the infrastructure is functional or effective.
Stage 2: In Use (Operational Readiness)
Definition: Systems and procedures are actively implemented in routine operations as designed.
This is the output of Operational Qualification (OQ), Performance Qualification (PQ), and process validation. It answers the question: “Can the facility execute its processes reliably?”
Key Elements:
Equipment operates within qualified parameters during production
Personnel trained and demonstrate competency
Process consistently produces batches meeting specifications
Environmental monitoring executed according to the contamination control strategy and generating data
Quality systems actively used (deviations documented, investigations completed, CAPA plans implemented)
Data integrity controls functioning (audit trails enabled, electronic records secure)
Work-as-Done matches Work-as-Imagined
Assessment Methods:
Observation of operations
Review of batch records and deviations
Interviews with operators and other staff
Trending of process data (yields, cycle times, in-process controls)
Audit of training records and competency assessments
Inspection of actual manufacturing runs (not simulations)
Alignment with PreCheck: This is what Phase 2 (Application Submission) evaluates, particularly during the PAI/PLI (if one is conducted). The FDA inspector observes operations, reviews batch records, and verifies that the process described in the CMC section is actually being executed.
Alignment with EMA: This corresponds to the pre-approval GMP inspection requested by the CHMP if the facility has not been recently inspected.
Inspection Outcome: If a facility is “In Use,” it means the systems are functional. But it does not guarantee that the systems will remain functional over time or that the organization can detect and correct drift.
Stage 3: In Control (Sustained Performance)
Definition: Systems maintain validated state through continuous verification, trend analysis, and proactive improvement.
Key Elements:
Statistical process control (SPC) implemented to detect trends and shifts
Routine monitoring identifies drift before it becomes deviation
Root cause analysis is rigorous and identifies systemic issues, not just proximate causes
CAPA effectiveness is verified—corrective actions prevent recurrence
Process capability is quantified and improving (Cp, Cpk trending upward)
Annual Product Reviews drive process improvements
Knowledge management systems capture learnings from deviations, investigations, and inspections
Quality culture is embedded—staff at all levels understand their role in maintaining control
The organization actively seeks to falsify its own assumptions (the core principle of this blog)
Assessment Methods:
Trending of process capability indices over time
Review of Annual Product Reviews and management review meetings
Audit of CAPA effectiveness (do similar deviations recur?)
Statistical analysis of deviation rates and types
Assessment of organizational culture (e.g., FDA’s QMM assessment)
Evaluation of how the facility responds to “near-misses” and “weak signals”
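To make the capability and trending elements above concrete, here is a minimal sketch of how Cpk and a simple run-rule drift check might be computed. The batch data, specification limits, and window length are hypothetical assumptions, not values from any facility discussed here.

```python
# Minimal, illustrative sketch: quantify process capability (Cpk) and flag drift.
# All numbers below (specs, batch data, thresholds) are hypothetical.
from statistics import mean, stdev

def cpk(values, lsl, usl):
    """Process capability index relative to the nearer specification limit."""
    mu, sigma = mean(values), stdev(values)
    return min(usl - mu, mu - lsl) / (3 * sigma)

def drifting(values, window=8):
    """Crude drift signal: the last `window` points all fall on one side
    of the long-run mean (a Western Electric-style run rule)."""
    center = mean(values)
    recent = values[-window:]
    return all(v > center for v in recent) or all(v < center for v in recent)

# Hypothetical assay results (%) for the last 20 batches, specification 95.0-105.0
batches = [99.8, 100.2, 99.5, 100.1, 99.9, 100.4, 99.7, 100.0, 99.6, 100.3,
           100.5, 100.6, 100.7, 100.9, 100.8, 101.0, 101.1, 100.9, 101.2, 101.3]

print(f"Cpk = {cpk(batches, 95.0, 105.0):.2f}")   # capability snapshot
print("Drift signal:", drifting(batches))          # sustained one-sided run
```

The deliberate point of this toy example: capability can still look comfortable while a sustained one-sided run is already signalling drift. That is the distinction between “In Use” and “In Control.”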
Alignment with PreCheck: This is not explicitly evaluated in PreCheck as currently designed. PreCheck Phase 1 and Phase 2 focus on facility design and process execution, but do not assess long-term performance or organizational maturity.
However, the inclusion of Quality Management Maturity (QMM) practices in the Type V DMF is an attempt to evaluate this dimension. A facility with mature QMM practices is, in theory, more likely to remain “In Control” over time.
This also corresponds to routine re-inspections conducted every 1-3 years. The purpose of these inspections is not to re-validate the facility (which is already licensed), but to verify that the facility has maintained its validated state and has not accumulated unresolved compliance drift.
Inspection Outcome: If a facility is “In Control,” it means the organization has demonstrated sustained capability to manufacture products reliably. This is the goal of all GMP systems, but it is the hardest state to verify because it requires longitudinal data and cultural assessment, not just a snapshot inspection.
Mapping the Framework to Regulatory Timelines
The three-stage framework provides a logic for when and how to conduct regulatory inspections.
The current PAI/PLI model collapses “In Place,” “In Use,” and “In Control” into a single inspection event conducted at the worst possible time (near the PDUFA date). This creates the illusion that a facility’s compliance status can be determined in 5-10 days. It also leaves the failure modes that matter most essentially invisible: process drift, CAPA ineffectiveness, organizational complacency, and systemic failures.
PreCheck separates “In Place” (Phase 1) from “In Use” (Phase 2), which is a significant improvement. But it still does not address the hardest question: how do we know a facility will remain “In Control” over time?
The answer is: you don’t. Not from a one-time inspection. You need continuous verification.
This is the insight embedded in the FDA’s 2011 process validation guidance: validation is not an event, it is a lifecycle. The validated state must be maintained through Stage 3 Continued Process Verification.
The same logic applies to facilities. A facility is not “validated” by passing a single PAI. It is validated by demonstrating control over time.
PreCheck needs to be part of a wider model at the FDA:
Allow facilities that complete Phase 1 to earn presumption of compliance for future product applications (reducing PAI frequency)
Implement more robust routine surveillance inspections on a 1-3 year cycle to verify “In Control” status. The agency’s own surveillance inspection backlog shows how far it currently falls short of this target.
Adjust inspection frequency dynamically based on the facility’s performance (low-risk facilities inspected less often, high-risk facilities more often); a toy scheduling rule is sketched below
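As a purely illustrative sketch of what “dynamic inspection frequency” could look like, here is a toy scoring rule. The factors, weights, and interval bands are my own assumptions and do not represent any actual FDA model.

```python
# Toy sketch of risk-based inspection scheduling. The factors, weights, and
# interval bands are hypothetical illustrations, not any agency's actual model.

def facility_risk_score(recurring_deviations: float,   # fraction of deviations that recur
                        capa_on_time: float,           # fraction of CAPAs closed effective and on time
                        years_since_major_finding: float) -> float:
    """Combine a few performance signals into a 0-1 risk score (higher = riskier)."""
    recurrence_risk = min(recurring_deviations, 1.0)
    capa_risk = 1.0 - min(capa_on_time, 1.0)
    history_risk = max(0.0, 1.0 - years_since_major_finding / 5.0)
    return round(0.4 * recurrence_risk + 0.3 * capa_risk + 0.3 * history_risk, 2)

def inspection_interval_months(score: float) -> int:
    """Map the risk score to a surveillance interval (12-36 months)."""
    if score >= 0.6:
        return 12
    if score >= 0.3:
        return 24
    return 36

score = facility_risk_score(recurring_deviations=0.15, capa_on_time=0.92,
                            years_since_major_finding=4.0)
print(score, inspection_interval_months(score))   # e.g. 0.14 -> 36 months
```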
This is the system the industry is asking for. It is the system the FDA could build on the foundation of PreCheck—if it commits to the long-term vision.
The Quality Experience Must Be Brought In at Design—And Most Companies Get This Wrong
PreCheck’s most important innovation is not its timeline or its documentation requirements. It is the implicit philosophical claim that facilities can be made better by involving quality experts at the design phase, not at the commissioning phase.
This is a radical departure from current practice. In most pharmaceutical manufacturing projects, the sequence is:
Engineering designs the facility (architecture, HVAC, water systems, equipment layout)
Procurement procures equipment based on engineering specs
Construction builds the facility
Commissioning and qualification begin (and quality suddenly becomes relevant)
Quality is brought in too late. By the time a quality professional reviews a facility design, the fundamental decisions—pipe routing, equipment locations, air handling unit sizing, cleanroom pressure differentials—have already been made. Suggestions to change the design are met with “we can’t change that now, we’ve already ordered the equipment” or “that’s going to add 3 months to the project and cost $500K.”
This is Quality-by-Testing (QbT): design first, test for compliance later, and hope the test passes.
PreCheck, by contrast, asks manufacturers to submit facility designs to the FDA during the design phase, while the designs are still malleable. The FDA can identify compliance gaps—inadequate environmental monitoring locations, cleanroom pressure challenges, segregation inadequacies, data integrity risks—before construction begins.
This is the beginning of Quality-by-Design (QbD) applied to facilities.
But for PreCheck to work—for Phase 1 to actually prevent facility disasters—manufacturers must embed quality expertise in the design process from the start. And most companies do not do this well.
The “Quality at the End” Trap
The root cause is organizational structure and financial incentives. In a typical pharmaceutical manufacturing project:
Engineering owns the timeline and the budget
Quality is invited to the party once the facility is built
Operations is waiting in the wings to take over once everything is “validated”
Each function optimizes locally:
Engineering optimizes for cost and schedule (build it fast, build it cheap)
Quality optimizes for compliance (every SOP written, every deviation documented)
Operations optimizes for throughput (run as many batches as possible per week)
Nobody optimizes for “Will this facility sustainably produce quality products?”—which is a different optimization problem entirely.
Bringing a quality professional into the design phase requires:
Allocating budget for quality consultation during design (not just during qualification)
Slowing the design phase to allow time for risk assessments and tradeoff discussions
Empowering quality to say “no” to designs that meet engineering requirements but fail quality risk management
Building quality leadership into the project from the kickoff, not adding it in Phase 3
Most companies treat this as optional. It is not optional if you want PreCheck to work.
Why Most Companies Fail to Do This Well
Despite the theoretical importance of bringing quality into design, most pharmaceutical companies still treat design-phase quality as a non-essential activity. Several reasons explain this:
1. Quality Does Not Own a Budget Line
In a manufacturing project, the Engineering team has a budget (equipment, construction, contingency). Operations has a budget (staffing, training). Quality typically has no budget allocation for the design phase. Quality professionals are asked to contribute their “expertise” without resources, timeline allocation, or accountability.
The result: quality advice is given in meetings but not acted upon, because there are no resources to implement it.
2. Quality Experience Is Scarce
The pharmaceutical industry has a shortage of quality professionals with deep experience in facility design, contamination control, data integrity architecture, and process validation. Many quality people come from a compliance background (inspections, audits, documentation) rather than a design background (risk management, engineering, systems thinking).
When a designer asks, “What should we do about data integrity?” the compliance-oriented quality person says, “We’ll need SOPs and training programs.” But the design-oriented quality person says, “We need to architect the IT infrastructure such that changes are logged and cannot be backdated. Here’s what that requires…”
The former approach adds cost and schedule. The latter approach prevents problems.
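To show what the design-oriented answer can mean in practice, here is a minimal sketch of an append-only, hash-chained change log in which a backdated or silently edited entry breaks verification. The class and field names are illustrative assumptions; a real facility would implement this in validated infrastructure (a qualified historian or audit-trail module), not a short script.

```python
# Minimal sketch of an append-only, hash-chained change log. Any attempt to
# alter or backdate an earlier entry changes its hash and breaks the chain.
# Illustrative only; field names and design are assumptions, not a product API.
import hashlib, json
from datetime import datetime, timezone

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, user: str, action: str, detail: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "GENESIS"
        body = {
            "seq": len(self.entries),
            "timestamp": datetime.now(timezone.utc).isoformat(),  # system clock, not user-supplied
            "user": user,
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was edited or reordered."""
        prev = "GENESIS"
        for e in self.entries:
            payload = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("jdoe", "setpoint_change", "HVAC AHU-3 pressure setpoint 12 Pa -> 15 Pa")
trail.record("asmith", "review", "Change reviewed and approved")
print(trail.verify())                                          # True
trail.entries[0]["timestamp"] = "2020-01-01T00:00:00+00:00"    # attempted backdating
print(trail.verify())                                          # False
```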
3. The Design Phase Is Urgent
Pharmaceutical companies operate under intense pressure to bring new facilities online as quickly as possible. The design phase is compressed—schedules are aggressive, meetings are packed, decisions are made rapidly.
Adding quality review to the design phase is perceived as slowing the project down. A quality person who carefully works through a contamination control strategy (“Wait, have we tested whether the airflow assumption holds at scale? Do we understand the failure modes?”) is seen as a bottleneck.
The company that brings in quality expertise early pays a perceived cost (delay, complexity) and receives a delayed benefit (better operations, fewer deviations, smoother inspections). In a pressure-cooker environment, the delayed benefit is not valued.
4. Quality Experience Is Not Integrated Across the Organization
In many pharmaceutical companies, quality expertise is fragmented:
Quality Assurance handles deviations and investigations
Quality Control runs the labs
Regulatory Affairs manages submissions
Process Validation leads qualification projects
None of these groups are responsible for facility design quality. So it falls to no one, and it ends up being everyone’s secondary responsibility—which means it is no one’s primary responsibility.
A company with an integrated quality culture would have a quality leader who is accountable for the design, and who has authority to delay the project if critical risks are not addressed. Most companies do not have this structure.
What PreCheck Requires: The Quality Experience in Design
For PreCheck to deliver its promised benefits, companies participating in Phase 1 must commit to embedding quality expertise throughout the design process.
Specifically:
1. Quality leadership is assigned early – Someone in quality (not engineering, not operations) is accountable for quality risk management in the facility design from Day 1.
2. Quality has authority to influence design – The quality leader can say “no” to designs that create unacceptable quality risks, even if the design meets engineering specifications.
3. Quality risk management is performed systematically – Not just “quality review of designs,” but structured risk management that identifies critical quality risks and mitigation strategies (a minimal risk-register sketch follows below).
4. Design Qualification includes quality experts – DQ is not just engineering verification that design meets specs; it includes quality verification that design enables quality control.
5. Contamination control is designed, not tested – Environmental monitoring strategies, microbial testing plans, and statistical approaches are designed into the facility, not bolted on during commissioning.
6. Data integrity is architected – IT systems are designed to prevent data manipulation, not as an afterthought.
7. The organization is aligned on what “quality” means – Not compliance (“checking boxes”), but the organizational discipline to sustain control and to detect and correct drift before it becomes a failure.
This is fundamentally a cultural commitment. It is about believing that quality is not something you add at the end; it is something you design in.
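As a concrete example of the systematic risk management called for in item 3, one common approach is an FMEA-style register maintained from the first design review onward. The scoring scales and example rows below are hypothetical illustrations; ICH Q9 leaves the choice of tool to the organization.

```python
# Hypothetical FMEA-style design risk register. Scales (1-5) and example rows
# are illustrative; the choice of tool and scale is the firm's to justify.
from dataclasses import dataclass

@dataclass
class DesignRisk:
    item: str            # design element under review
    failure_mode: str    # what could go wrong
    severity: int        # 1 (negligible) .. 5 (patient impact)
    occurrence: int      # 1 (rare) .. 5 (expected)
    detection: int       # 1 (detected at once) .. 5 (undetectable)
    mitigation: str

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

register = [
    DesignRisk("Grade B corridor", "Pressure cascade reverses on door interlock failure",
               5, 2, 3, "Interlock alarm plus continuous dP monitoring with alert limits"),
    DesignRisk("WFI distribution", "Dead leg at sample valve promotes biofilm",
               4, 3, 4, "Redesign to under 3 pipe diameters; weekly bioburden sampling"),
]

# Work the highest-RPN risks first, and require quality sign-off before design freeze.
for risk in sorted(register, key=lambda r: r.rpn, reverse=True):
    print(f"RPN {risk.rpn:3d}  {risk.item}: {risk.failure_mode} -> {risk.mitigation}")
```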
The FDA’s Unspoken Expectation in PreCheck Phase 1
When the FDA reviews a Type V DMF in PreCheck Phase 1, the agency is asking: “Did this manufacturer apply quality expertise to the design?”
How does the FDA assess this? By looking for:
Risk assessments that show systematic thinking, not checkbox compliance
Design decisions that are justified by quality risk management, not just engineering convenience
Contamination control strategies that are grounded in understanding the failure modes
Data integrity architectures that prevent (not just detect) problems
Quality systems that are designed to evolve and improve, not static and reactive
If the Type V DMF reads like it was prepared by an engineering firm that called quality for comments, the FDA will see it. If it reads like it was co-developed by quality and engineering with equal voice, the FDA will see that too.
PreCheck Phase 1 is not just a design review. It is a quality culture assessment.
And this is why most companies are not ready for PreCheck. Not because they lack the engineering capability to design a facility. But because they lack the quality experience, organizational structure, and cultural commitment to bring quality into the design process as a peer to engineering.
Companies that participate in PreCheck with a transactional mindset—”Let’s submit our designs to the FDA and get early feedback”—will get some benefit. They will catch some design issues early.
But companies that participate with a transformational mindset—”We are going to redesign how we approach facility development to embed quality from the start”—will get deeper benefits. They will build facilities that are easier to operate, that generate fewer deviations, that demonstrate sustained control over time, and that will likely pass future inspections without significant findings.
The choice is not forced on the company by PreCheck. PreCheck is voluntary; you can choose the transactional approach.
But if you want the regulatory trust that PreCheck is supposed to enable—if you want the FDA to accept your facility as “ready” with minimal re-inspection—you need to bring the quality experience in at design.
That is what Phase 1 actually measures.
The Epistemology of Trust
Regulatory inspections are not merely compliance checks. They are trust-building mechanisms.
When the FDA inspector walks into a facility, the question is not “Does this facility have an SOP for cleaning validation?” (It does. Almost every facility does.) The question is: “Can I trust that this facility will produce quality products consistently, even when I am not watching?”
Trust cannot be established in 5 days.
Trust is built through:
Repeated interactions over time
Demonstrated capability under varied conditions
Transparency when failures occur
Evidence of learning from those failures
The current PAI/PLI model attempts to establish trust through a single high-stakes audit. This is like trying to assess a person’s character by observing them for one hour during a job interview. It is better than nothing, but it is not sufficient.
PreCheck is a step toward a trust-building system. By engaging early (Phase 1) and providing continuity into the application review (Phase 2), the FDA can develop a relationship with the manufacturer rather than a one-off transaction.
But PreCheck as currently proposed is still transactional. It is a program for new facilities. It does not create a facility lifecycle framework. It does not provide a pathway for facilities to earn cumulative trust over multiple products.
The FDA could do this—if it commits to three principles:
1. Decouple facility inspections from product applications.
Facilities should be assessed independently and granted a facility certificate (or equivalent) that can be referenced by multiple products. This separates facility remediation from product approval timelines and prevents the cascade failures we see in the current system.
2. Recognize that “In Control” is not a state achieved once, but a discipline maintained continuously.
The FDA’s own process validation guidance says this explicitly: validation is a lifecycle, not an event. The same logic must apply to facilities. A facility is not “GMP compliant” because it passed one inspection. It is GMP compliant because it has demonstrated, over time, the organizational discipline to detect and correct failures before they reach patients.
3. Partner with manufacturers in building systems that are designed to reveal their own weaknesses, rather than relying on surprise inspections to catch failures.
This is the full implication of what the FDA has started: regulatory trust is earned through sustained performance. PreCheck could be the foundation for this system, but only if the agency is willing to embrace it.
This is the principle of falsifiable quality applied to regulatory oversight. A quality system that cannot be proven wrong is a quality system that cannot be trusted. A facility that fears inspection is a facility that has not internalized the discipline of continuous verification.
The facilities that succeed under PreCheck—and under any future evolution of this system—will be those that understand that “In Place, In Use, In Control” is not a checklist to complete, but a philosophy to embody.
U.S. Food and Drug Administration. FDA Public Meeting: Onshoring Manufacturing of Drugs and Biological Products – Agenda and materials. Silver Spring, MD: US Food and Drug Administration; 2025. Available at: https://www.fda.gov/media/189329/download. Accessed January 8, 2026.
U.S. Food and Drug Administration. CDER’s Quality Management Maturity (QMM) Program. Silver Spring, MD: US Food and Drug Administration; 2023. Available at: https://www.fda.gov/media/171705/download. Accessed January 8, 2026.
U.S. Food and Drug Administration. WHO Prequalification – FDA overview. Silver Spring, MD: US Food and Drug Administration; August 15, 2022. Available at: https://www.fda.gov/media/166136/download. Accessed January 8, 2026.
If I had a dollar for every time I sat in a risk assessment workshop and heard someone use “aseptic” and “sterile” interchangeably, I could probably fund my own private isolator line. It is one of those semantic slips that seems harmless on the surface—like confusing “precision” with “accuracy”—but in the pharmaceutical quality world, these linguistic shortcuts are often the canary in the coal mine for a systemic failure of understanding.
We are currently navigating the post-Annex 1 implementation landscape, a world where the Contamination Control Strategy (CCS) has transitioned from a “nice-to-have” philosophy to a mandatory, living document. Yet, I frequently see CCS documents that read like a disorganized shopping list of controls rather than a coherent strategy. Why? Because the authors haven’t fundamentally distinguished between microbial control, aseptic processing, and sterility.
If we cannot agree on what we are trying to achieve, we certainly cannot build a strategy to achieve it. Today, I want to unpack these terms—not for the sake of pedantry, but because the distinction dictates your facility design, your risk profile, and ultimately, patient safety. We will also look at how these definitions map onto the spectrum of open and closed systems, and critically, how they apply across drug substance and drug product manufacturing. This last point is where I see the most confusion—and where the stakes are highest.
The Definitions: More Than Just Semantics
Let’s strip this back. These aren’t just vocabulary words; they are distinct operational states that demand different control philosophies.
Microbial Control: The Art of Management
Microbial control is the baseline. It is the broad umbrella under which all our activities sit, but it is not synonymous with sterility. In the world of non-sterile manufacturing (tablets, oral liquids, topicals), microbial control is about bioburden management. We aren’t trying to eliminate life; we are trying to keep it within safe, predefined limits and, crucially, ensure the absence of “objectionable organisms.”
In a sterile manufacturing context, microbial control is what happens before the sterilization step. It is the upstream battle. It is the control of raw materials, the WFI loops, the bioburden of the bulk solution prior to filtration.
Impact on CCS: If your CCS treats microbial control as “sterility light,” you will fail. A strategy for microbial control focuses on trend analysis, cleaning validation, and objectionable organism assessments. It relies heavily on understanding the microbiome of your facility. It accepts that microorganisms are present but demands they be the right kind (skin flora vs. fecal) and in the right numbers.
Sterile: The Absolute Negative
Sterility is an absolute. There is no such thing as “a little bit sterile.” It is a theoretical concept defined by a probability—the Sterility Assurance Level (SAL), typically 10⁻⁶.
Here is the critical philosophical point: Sterility is a negative quality attribute. You cannot test for it. You cannot inspect for it. By the time you get a sterility test result, the batch is already made. Therefore, you cannot “control” sterility in the same way you control pH or dissolved oxygen. You can only assure it through the validation of the process that delivered it.
Impact on CCS: Your CCS cannot rely on monitoring to prove sterility. Any strategy that points to “passing sterility tests” as a primary control measure is fundamentally flawed. The CCS for sterility must focus entirely on the robustness of the sterilization cycle (autoclave validation, gamma irradiation dosimetry, VHP cycles) and the integrity of the container closure system.
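For readers who want the arithmetic behind “assurance through validation,” the standard log-linear inactivation model makes the point. Using the usual notation (D-value, initial bioburden N₀), the surviving population after an equivalent exposure time t, and the minimum lethality needed to reach a 10⁻⁶ SAL, are:

$$ N(t) = N_0 \cdot 10^{-t/D}, \qquad t_{\mathrm{SAL}} \ \ge\ D \left(\log_{10} N_0 + 6\right) $$

For example, a presterilization bioburden of 10² CFU with a D-value of 1 minute needs at least 8 minutes of equivalent lethality to claim a 10⁻⁶ probability of a surviving organism; overkill cycles simply substitute a worst-case biological indicator (10⁶ resistant spores) for the measured bioburden.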
Aseptic: The Maintenance of State
This is where the confusion peaks. Aseptic does not mean “sterilizing.” Aseptic processing is the methodology of maintaining the sterility of components that have already been sterilized individually. It is the handling, the assembly, and the filling of sterile parts in a sterile environment.
If sterilization is the act of killing, aseptic processing is the act of not re-contaminating.
Impact on CCS: This is the highest risk area. Why? Because it involves the single dirtiest variable in our industry: people. An aseptic CCS is almost entirely focused on intervention management, first air protection, and behavioral controls. It is about the “tacit knowledge” of the operator—knowing how to move slowly, knowing not to block the HEPA flow. If your CCS focuses on environmental monitoring (EM) data here, you are reacting, not controlling. The strategy must be prevention of ingress.
Drug Substance vs. Drug Product: The Fork in the Road
This is where the plot thickens. Many quality professionals treat the CCS as a monolithic framework, but drug substance manufacturing and drug product manufacturing are fundamentally different activities with different contamination risks, different control philosophies, and different success criteria.
Let me be direct: confusing these two stages is the source of many failed validation studies, inappropriate risk assessments, and ultimately, preventable contamination events.
Drug Substance: The Upstream Challenge
Drug substance (the active pharmaceutical ingredient, or API) is typically manufactured in a dedicated facility, often from biological fermentation (for biotech) or chemical synthesis. The critical distinction is this: drug substance manufacturing is almost always a closed process.
Why? Because the bulk is continuously held in vessels, tanks, or bioreactors. It is rarely exposed to the open room environment. Even where additions occur (buffers, precipitants), these are often made through closed connectors or valving systems.
The CCS for drug substance therefore prioritizes:
Bioburden control of the bulk product at defined process stages. This is not about sterility assurance; it is about understanding the microbial load before formulation and the downstream sterilizing filter. The European guidance (CPMP Note for Guidance on Manufacture) is explicit: the maximum acceptable bioburden prior to sterilizing filtration is typically ≤10 CFU/100 mL for aseptically filled products.
Process hold times. One of the most underappreciated risks in drug substance manufacturing is the hold time between stages—the time the bulk sits in a vessel before the next operation. If you haven’t validated that microorganisms won’t grow during a 72-hour hold at room temperature, you haven’t validated your process. The pharmaceutical literature is littered with cases where insufficient attention to hold time validation led to unexpected bioburden increases (50-100× increases have been observed).
Intermediate bioburden testing. The CCS must specify where in the process bioburden is assessed. I advocate for testing at critical junctures:
At the start of manufacturing (raw materials/fermentation)
Post-purification (to assess effectiveness of unit operations)
Prior to formulation/final filtration (this is the regulatory checkpoint)
Equipment design and cleanliness. Drug substance vessels and transfer lines are part of the microbial control landscape. They are not Grade A environments (because the product is in a closed vessel), but they must be designed and maintained to prevent bioburden increase. This includes cleaning and disinfection, material of construction (stainless steel vs. single-use), and microbial monitoring of water used for equipment cleaning.
Water systems. The water used in drug substance manufacturing (for rinsing, for buffer preparation) is a critical contamination source. Water for Injection (WFI) has a specification of ≤0.1 CFU/mL. However, many drug substance processes use purified water or even highly purified water (HPW), where microbial control is looser. The CCS must specify the water system design, the microbial limits, and the monitoring frequency.
The environmental monitoring program for drug substance is quite different from drug product. There are no settle plates of the drug substance itself (it’s not open). Instead, EM focuses on the compressor room (if using compressed gases), water systems, and post-manufacturing equipment surfaces. The EM is about detecting facility drift, not about detecting product contamination in real-time.
Drug Product: The Aseptic Battlefield
Drug product manufacturing—the formulation, filling, and capping of the drug substance into vials or containers—is where the real contamination risk lives.
For sterile drug products, this is the aseptic filling stage. And here, the CCS is almost entirely different from drug substance.
The CCS for drug product prioritizes:
Intervention management and aseptic technique validation. Every opening of a sterile vial, every manual connection, every operator interaction is a potential contamination event. The CCS must specify:
Gowning requirements (operations in Grade A with a Grade B background require full body coverage, including sterile hood, suit, and gloves)
Aseptic technique training and periodic requalification (gloved hand aseptic technique, GHAT)
First-air protection (the air directly above the vial or connection point must be Grade A)
Speed of operations (rapid movements increase turbulence and microbial dispersion)
Container closure integrity. Once filled, the vial is sealed. But the window of vulnerability is the time between filling and capping. The CCS must specify maximum exposure times prior to closure (often 5-15 minutes, depending on the filling line). Any vial left uncapped beyond this window is at risk.
Real-time environmental monitoring. Unlike drug substance manufacturing, drug product EM is your primary detective. Settle plates in the Grade A filling zone, active air samplers, surface monitoring, and gloved-hand contact plates are all part of the CCS. The logic is: if you see a trend in EM data during the filling run, you can stop the batch and investigate. You cannot do this with end-product sterility testing (you get the result weeks later). This is why parametric monitoring of differential pressures, airflow velocities, and particle counts is critical—it gives you live feedback.
Container closure integrity testing. This is critical for the drug product CCS. You can fill a vial perfectly under Grade A conditions, but if the container closure system is compromised, the sterility is lost. The CCS must include:
Validation of the closure system during development
Routine CCI testing (often helium leak detection) as part of QC
Shelf-life stability studies that include CCI assessments
The key distinction: Drug substance CCS is about upstream prevention (keeping microorganisms out of the bulk). Drug product CCS is about downstream detection and prevention of re-contamination (because the product is no longer in a controlled vessel, it is now exposed).
The Bridge: Sterilizing Filtration
Here is where the two meet. The drug substance, with its controlled bioburden, passes through a sterilizing-grade filter (0.2 µm) into a sterile holding vessel. This is the handoff point. The filter is validated through bacterial retention testing: typically a challenge of at least 10⁷ CFU of Brevundimonas diminuta per square centimetre of effective filter area, with complete retention of the challenge organism.
The CCS must address this transition:
The bioburden before filtration must be ≤10 CFU/100 mL (European limit; the FDA requires “appropriate limits” but does not specify a number).
The filtration process itself must be validated with the actual drug substance and challenge organisms.
Post-filtration, the bulk is considered sterile (by probability) and enters aseptic filling.
Many failures I have seen involve inadequate attention to the state of the product at this handoff. A bulk solution that has grown from 5 CFU/mL to 500 CFU/mL during a hold time can still technically be “filtered.” But it challenges the sterilizing filter, increases the risk of breakthrough, and is frankly an indication of poor upstream control. The CCS must make this connection explicit.
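To put numbers on that margin erosion (the batch volume here is a hypothetical example):

$$ 500\ \mathrm{CFU/mL} = 50{,}000\ \mathrm{CFU/100\ mL} \approx 5{,}000 \times \left(\le 10\ \mathrm{CFU/100\ mL\ limit}\right) $$

$$ 500\ \mathrm{CFU/mL} \times 200\ \mathrm{L} \times 1{,}000\ \mathrm{mL/L} = 10^{8}\ \mathrm{CFU\ presented\ to\ the\ filter} $$

The filter may well retain that load, since retention validation uses far larger challenges, but the process is now operating thousands of times outside the limit it was designed around.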
From Definitions to Strategy: The Open vs. Closed Spectrum
Now that we have the definitions, and we understand the distinction between drug substance and drug product, we have to talk about where these activities happen. The regulatory wind (specifically Annex 1) is blowing hard in one direction: separation of the operator from the process.
This brings us to the concept of Open vs. Closed systems. This isn’t a binary switch; it’s a spectrum of risk.
The “Open” System: The Legacy Nightmare
In a truly open system, the product or critical surfaces are exposed to the cleanroom environment, which is shared by operators.
The Setup: A Grade A filling line with curtain barriers, or worse, just laminar flow hoods where operators reach in with gowned arms.
The Risk: The operator is part of the environment. Every movement sheds particles. Every intervention is a roll of the dice.
CCS Implications: If you are running an open system, your CCS is working overtime. You are relying heavily on personnel qualification, gowning discipline, and aggressive Environmental Monitoring (EM). You are essentially fighting a war of attrition against entropy. The “Microbial Control” aspect here is desperate; you are relying on airflow to sweep away the contamination that you know is being generated by the people in the room.
This is almost never used for drug substance (which is in a closed vessel) but remains common in older drug product filling lines.
The Restricted Access Barrier System (RABS): The Middle Ground
RABS attempts to separate the operator from the critical zone via a rigid wall and glove ports, but it retains a connection to the room’s air supply.
Active RABS: Has its own onboard fan/HEPA units.
Passive RABS: Relies on the ceiling HEPA filters of the room.
Closed RABS: Doors are kept locked during the batch.
Open RABS: Doors can be opened (though they shouldn’t be).
CCS Implications: Here, the CCS shifts. The reliance on gowning decreases slightly (though Grade B background is still required), and the focus shifts to intervention management. The “Aseptic” strategy here is about door discipline. If a door is opened, you have effectively reverted to an open system. The CCS must explicitly define what constitutes a “closed” state and rigorously justify any breach.
The Closed System: The Holy Grail
A closed system is one where the product is never exposed to the immediate room environment. This is achieved via Isolators (for drug product filling) or Single-Use Systems (SUS) (for both drug substance transfers and drug product formulation).
Isolators: These are fully sealed units, often biodecontaminated with VHP, operating at a pressure differential. The operator is physically walled off. The critical zone (inside the isolator) is often ISO Class 5 or better, while the surrounding room can be ISO Class 7 or ISO Class 8.
Single-Use Systems (SUS): Gamma-irradiated bags, tubing, and connectors (like aseptic connectors or tube welders) that create a sterile fluid path from start to finish. For drug substance, SUS is increasingly the norm—a connected bioprocess using Flexel or similar technology. For drug product, SUS includes pre-filled syringe filling systems, which eliminate the open vial/filling needle risk.
CCS Implications:
This is where the definitions we discussed earlier truly diverge, and where the drug substance vs. drug product distinction becomes clear.
Microbial Control (Drug Substance in SUS): The environment outside the SUS matters almost not at all. The control focus moves to:
Integrity testing (leak testing the connections)
Bioburden of the incoming bulk (before it enters the SUS)
Duration of hold (how long can the sterile fluid path remain static without microbial growth?)
A drug substance process using SUS (e.g., a continuous perfusion bioreactor feeding into a SUS train for chromatography, buffer exchange, and concentration) can run in a Grade C or even Grade D facility. The process itself is closed.
Sterile (Isolator for Drug Product Filling): The focus is on the VHP cycle validation. The isolator is fumigated with vaporized hydrogen peroxide, and the cycle is validated to achieve a 6-log reduction of a challenge organism. Once biodecontaminated, the isolator is considered “sterile” (or more accurately, “free from viable organisms”), and the drug product filling occurs inside.
Aseptic (Within Closed Systems): The “aseptic” risk is reduced to the connection points. For example: In a SUS, the risk is the act of disconnecting the bag when the process is complete. This must be done aseptically (often with a tube welder).
In an isolator filling line, the risk is the transfer of vials into and out of the isolator (through a rapid transfer port, or RTP, or through a port that is first disinfected).
The CCS focuses on the make or break moment—the point where sterility can be compromised.
The “Functionally Closed” Trap
A word of caution: I often see processes described as “closed” that are merely “functionally closed.”
Example: A bioreactor is SIP’d (sterilized in place) and runs in a closed loop, but then an operator has to manually open a sampling port with a needle to withdraw samples for bioburden testing.
The Reality: That is an open operation in a closed vessel.
CCS Requirement: Your strategy must identify these “briefly open” moments. These are your Critical Control Points (CCPs) (if using HACCP terminology). The strategy must layer controls here:
Localized Grade A air (a laminar flow station or glovebox around the sampling port)
Strict behavioral training (the operator must don sterile gloves, swab the port with 70% isopropyl alcohol, and execute the sampling in <2 minutes)
Immediate closure and post-sampling disinfection
I have seen drug substance batches rejected because of a single bioburden sample taken during an open operation that exceeded action levels. The bioburden itself may not have been representative of the bulk; it may have been adventitious contamination during sampling. But the CCS failed to protect the process during that vulnerable moment.
The “So What?” for Your Contamination Control Strategy
So, how do we pull this together into a cohesive document that doesn’t just sit on a shelf gathering dust?
Map the Process, Not the Room
Stop writing your CCS based on room grades. Write it based on the process flow. Map the journey of the product.
For Drug Substance:
Where is it synthesized or fermented? (typically in closed bioreactors)
Where is it purified? (chromatography columns, which are generally closed)
Where is it concentrated or buffer-exchanged? (tangential flow filtration units, which are closed)
Where is it held before filtration? (hold vessels, which are closed)
Where does it become sterile? (filtration through a 0.2 µm filter)
For Drug Product:
Where is the sterile bulk formulated? (generally in closed tanks or bags)
Where is it filled? (either in an isolator, a RABS, or an open line)
Where is it sealed? (capping machine, which must maintain Grade A conditions)
Where is it tested? (QC lab, which is a separate cleanroom environment)
Within each of these stages, identify:
Where microbial control is critical (e.g., bioburden monitoring in drug substance holds)
Where sterility is assured (e.g., the sterilizing filter)
Where aseptic state is maintained (e.g., the filling room, the isolator)
Differentiate the Detectors
For Microbial Control: Use in-process bioburden and endotoxin testing to trend “bulk product quality.” If you see a shift from 5 CFU/mL (upstream) to 100 CFU/mL (mid-process), your CCS has a problem. These are alerts, not just data points.
For Aseptic Processing: Use physical monitoring (differential pressures, airflow velocities, particle counts) as your primary real-time indicators. If the pressure drops in the isolator, the aseptic state is compromised, regardless of what the settle plate says 5 days later. (A minimal check of this logic is sketched just below.)
For Sterility: Focus on parametric release concepts. The sterilizing filter validation data, the VHP cycle documentation—these are the product assurance. The end-product sterility test is a confirmation, not a control.
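Here is a minimal sketch of that “physical parameters first” logic, applied to isolator differential pressure. The limits and readings are assumed for illustration; in practice this lives in the validated monitoring system, not a script.

```python
# Illustrative check of isolator differential pressure against alert/action limits.
# Limits and readings are hypothetical; a real system would be the validated BMS/EMS.
ALERT_PA, ACTION_PA = 17.5, 15.0   # assumed limits for a nominal 20 Pa overpressure

def classify_dp(reading_pa: float) -> str:
    if reading_pa < ACTION_PA:
        return "ACTION: aseptic state assurance lost - stop, assess exposed units, investigate"
    if reading_pa < ALERT_PA:
        return "ALERT: trending toward limit - heighten monitoring, check door/glove integrity"
    return "OK"

for dp in [20.1, 19.4, 18.0, 16.9, 14.2]:   # a deteriorating trend during a fill
    print(f"{dp:5.1f} Pa -> {classify_dp(dp)}")
```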
Justify Your Choices: Open vs. Closed, Drug Substance vs. Drug Product
For Drug Substance:
If you are using a closed bioreactor or SUS, your CCS can focus on upstream bioburden control and process hold time validation. Environmental monitoring is secondary (you’re monitoring the facility, not the product).
If you are using an open process (e.g., open fermentation, open harvesting), your CCS must be much tighter, and you need extensive EM.
For Drug Product:
If you are using an isolator or SUS (pre-filled syringe), your CCS focuses on biodecontamination validation and connection point discipline. You can fill in a lower-grade environment.
If you are using an open line or RABS, your CCS must extensively cover gowning, aseptic technique, and real-time EM. This is the higher-risk approach, and Annex 1 is explicitly nudging you away from it.
Explicitly Connect the Two Stages
Your CCS should have a section titled something like “Drug Substance to Drug Product Handoff: The Sterilizing Filtration Stage.” This section should specify:
The target bioburden for the drug substance bulk prior to filtration (typically ≤10 CFU/100 mL)
The filter used (pore size, expected log-reduction value, vendor qualification)
The validation data supporting the filtration (challenge testing with the actual drug substance, with a representative microbial panel)
The post-filtration process (transfer to sterile holding tank, aseptic filling)
This handoff is where drug substance “becomes” sterile, and where aseptic processing “begins.” Do not gloss over it.
A Word on Data Integrity and Trending
One final point, because I see this trip up good quality teams: your CCS must specify how data is collected, stored, analyzed, and acted upon.
For drug substance bioburden and endotoxin data:
Is trending performed monthly? Quarterly?
Who reviews the data?
At what point does a trend prompt investigation?
Are alert and action levels set based on historical facility data, not just pharmacopeial guidance? (One data-driven approach is sketched below.)
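One data-driven way to answer that last question is a percentile approach on recent facility history. The dataset and percentile choices below are assumptions for the sketch; parametric or distribution-fit approaches are equally defensible.

```python
# Sketch: derive alert/action levels from historical bioburden counts (CFU per sample)
# using percentiles of recent facility data. Dataset and percentile choices are
# assumptions; other statistically justified approaches work too.
import math
import statistics

history = [0, 1, 0, 2, 1, 0, 0, 3, 1, 0, 2, 1, 0, 0, 1,
           4, 0, 1, 2, 0, 1, 0, 0, 2, 1, 1, 0, 3, 0, 6]   # last 30 routine results

cuts = statistics.quantiles(history, n=100)   # 99 percentile cut points
alert, action = cuts[94], cuts[98]            # ~95th and ~99th percentiles
print(f"alert level: {math.ceil(alert)} CFU, action level: {math.ceil(action)} CFU")
```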
For drug product environmental monitoring:
Are EM results reviewed during the filling run (with rapid methods) or after?
If growth is seen, what is the protocol? Do you stop the batch?
Are microorganisms identified to species? If not, how do you know if it’s a contamination event or just normal flora?
A CCS is only as good as its data management infrastructure. If you are still printing out EM results and filing them in binders, you are not executing Annex 1 in its intended spirit.
Conclusion
The difference between microbial control, aseptic, and sterile is not academic. It is the difference between managing a risk, maintaining a state, and assuring an absolute.
When we confuse these terms, we get “sterile” manufacturing lines that rely on “microbial control” tactics—like trying to test quality into a product via settle plates. We get risk assessments that underestimate the “aseptic” challenge of a manual connection because we assume the “sterile” tube will save us. We get drug substance processes that are validated like drug product processes, with unnecessary Grade A facilities and excessive EM, when a tight bioburden control strategy would be more effective.
Worse, we get a single CCS that tries to cover both drug substance and drug product with the same language and the same controls. These are fundamentally different manufacturing activities with different risks and different control philosophies.
A robust Contamination Control Strategy requires us to be linguistically and technically precise. It demands that we move away from the comfort of open systems and the reliance on retrospective monitoring. It forces us to acknowledge that while we can control microbes in drug substance and assure sterility through sterilization, the aseptic state in drug product filling is a fragile thing, maintained only by the rigor of our design, the separation of the operator from the process, and the discipline of our decisions.
Stop ticking boxes. Start analyzing the process. Understand where you are dealing with microbial control, aseptic processing, or sterility assurance—and make sure your CCS reflects that understanding. And for the love of quality, stop using a single template to describe both drug substance and drug product manufacturing.
The October 2025 Warning Letter to Apotex Inc. is fascinating not because it reveals anything novel about FDA expectations, but because it exposes the chasm between what we know we should do and what we actually allow to happen on our watch. Evaluated together with what we are seeing in Complete Response Letter (CRL) data, it shows that companies continue to struggle with the concept of equipment lifecycle management.
This isn’t about a few leaking gloves or deteriorated gaskets. This is about systemic failure in how we conceptualize, resource, and execute equipment management across the entire GMP ecosystem. Let me walk you through what the Apotex letter really tells us, where the FDA is heading next, and why your current equipment qualification program is probably insufficient.
The Apotex Warning Letter: A Case Study in Lifecycle Management Failure
The FDA’s Warning Letter to Apotex (WL: 320-26-12, October 31, 2025) reads like a checklist of every equipment lifecycle management failure I’ve witnessed in two decades of quality oversight. The agency cited 21 CFR 211.67(a) equipment maintenance failures, 21 CFR 211.192 inadequate investigations, and 21 CFR 211.113(b) aseptic processing deficiencies. But these citations barely scratch the surface of what actually went wrong.
The Core Failures: A Pattern of Deferral and Neglect
Between September 2023 and April 2025, a period of more than 18 months, Apotex experienced at least eight critical equipment failures during leak testing. Their personnel responded by retesting until they achieved passing results rather than investigating root causes. Think about that timeline. Eight failures over that period means a failure every two to three months, each one representing a signal that their equipment was degrading. When investigators finally examined the system, they found over 30 leaking areas. This wasn’t a single failure; this was systemic equipment deterioration that the organization chose to work around rather than address.
The letter documents white particle buildup on manufacturing equipment surfaces, particles along conveyor systems, deteriorated gasket seals, and discolored gloves. Investigators observed a six-millimeter glove breach that was temporarily closed with a cable tie before production continued. They found tape applied to “false covers” as a workaround. These aren’t just housekeeping issues—they’re evidence that Apotex had crossed from proactive maintenance into reactive firefighting, and then into dangerous normalization of deviation.
Most damning: Apotex had purchased upgraded equipment nearly a year before the FDA inspection but continued using the deteriorating equipment that was actively generating particles contaminating their nasal spray products. They had the solution in their possession. They chose not to implement it.
The Investigation Gap: Equipment Failures as Quality System Failures
The FDA hammered Apotex on their failure to investigate, but here’s what’s really happening: equipment failures are quality system failures until proven otherwise. When a leak happens, you don’t just replace whatever component leaked. You ask:
Why did this component fail when others didn’t?
Is this a batch-specific issue or a systemic supplier problem?
How many products did this breach potentially affect?
What does our environmental monitoring data tell us about the timeline of contamination?
Are our maintenance intervals appropriate?
Apotex’s investigators didn’t ask these questions. Their personnel retested until they got passing results—a classic example of “testing into compliance” that I’ve seen destroy quality cultures. The quality unit failed to exercise oversight, and management failed to resource proper root cause analysis. This is what happens when quality becomes a checkbox exercise rather than an operational philosophy.
BLA CRL Trends: The Facility Equipment Crisis Is Accelerating
The Apotex warning letter doesn’t exist in isolation. It’s part of a concerning trend in FDA enforcement that’s becoming impossible to ignore. Facility inspection concerns dominate CRL justifications. Manufacturing and CMC deficiencies account for approximately 44% of all CRLs. For biologics specifically, facility-related issues are even more pronounced.
The Biologics-Specific Challenge
Biologics license applications face unique equipment lifecycle scrutiny. The 2024-2025 CRL data shows multiple biosimilars rejected due to third-party manufacturing facility issues despite clean clinical data. Tab-cel (tabelecleucel) received a CRL citing problems at a contract manufacturing organization—the FDA rejected an otherwise viable therapy because the facility couldn’t demonstrate equipment control.
This should terrify every biotech quality leader. The FDA is telling us: your clinical data is worthless if your equipment lifecycle management is suspect. They’re not wrong. Biologics manufacturing depends on consistent equipment performance in ways small molecule chemistry doesn’t. A 0.2°C deviation in a bioreactor temperature profile, caused by a poorly maintained chiller, can alter glycosylation patterns and change the entire safety profile of your product. The agency knows this, and they’re acting accordingly.
The Top Facility Equipment Deficiencies Driving CRLs
Fire Protection and Hazardous Material Handling Deficiencies (equipment safety systems)
Critical Utility System Failures (WFI loops with dead legs, inadequate sanitization)
Environmental Monitoring System Gaps (manual data recording, lack of 21 CFR Part 11 compliance)
Container Closure and Packaging Validation Issues (missing extractables/leachables data, CCI testing gaps)
Inadequate Cleanroom Classification and Control (ISO 14644 and EU Annex 1 compliance failures)
Lack of Preventive Maintenance and Asset Management (missing calibration records, unclear maintenance responsibilities)
Inadequate Documentation and Change Control (HVAC setpoint changes without impact assessment)
Sustainability and Environmental Controls Overlooked (temperature/humidity excursions affecting product stability)
Notice what’s not on this list? Equipment selection errors. The FDA isn’t seeing companies buy the wrong equipment. They’re seeing companies buy the right equipment and then fail to manage it across its lifecycle. This is a crucial distinction. The problem isn’t capital allocation—it’s operational execution.
FDA’s Shift to “Equipment Lifecycle State of Control”
The FDA has introduced a significant conceptual shift in how it discusses equipment management. The Apotex Warning Letter is part of the agency’s new emphasis on “equipment lifecycle state of control.” This isn’t just semantic gamesmanship. It represents a fundamental understanding that discrete qualification events are not enough and that continuous lifecycle management is long overdue. In practice, a lifecycle state of control means:
Continuous monitoring of equipment performance parameters, not just periodic checks
Predictive maintenance based on performance data, not just manufacturer-recommended intervals
Real-time assessment of equipment degradation signals (particle generation, seal wear, vibration changes)
Integrated change management that treats equipment modifications as potential quality events
Traceable decision-making about when to repair, refurbish, or retire equipment
The FDA is essentially saying: qualification is a snapshot; state of control is a movie. And they want to see the entire film, not just the trailer.
This aligns perfectly with the agency’s broader push toward Quality Management Maturity. As I’ve previously written about QMM, the FDA is moving away from checking compliance boxes and toward evaluating whether organizations have the infrastructure, culture, and competence to manage quality dynamically. Equipment lifecycle management is the perfect test case for this shift because equipment degradation is inevitable, predictable, and measurable. If you can’t manage equipment lifecycle, you can’t manage quality.
Global Regulatory Convergence: WHO, EMA, and PIC/S Perspectives
The FDA isn’t operating in a vacuum. Global regulators are converging on equipment lifecycle management as a critical inspection focus, though their approaches differ in emphasis.
EMA: The Annex 15 Lifecycle Approach
EMA’s process validation guidance explicitly requires IQ, OQ, and PQ for equipment and facilities as part of the validation lifecycle. Unlike FDA’s three-stage process validation model, EMA frames qualification as ongoing throughout the product lifecycle. The current revision of Annex 15 emphasizes:
Validation Master Plans that include equipment lifecycle considerations
Ongoing Process Verification that incorporates equipment performance data
Risk-based requalification triggered by changes, deviations, or trends
Integration with Product Quality Reviews (PQRs) to assess equipment impact on product quality
The EMA expects you to prove that your equipment remains qualified through annual PQRs and continuous data review; the agency has been explicit about this lifecycle approach for years.
PIC/S: The Change Management Imperative
PIC/S PI 054-1 on change management provides crucial guidance on equipment lifecycle triggers. The document explicitly identifies equipment upgrades as changes that require formal assessment, planning, and implementation controls. Critically, PIC/S emphasizes:
Interim controls when equipment issues are identified but not yet remediated
Post-implementation monitoring to ensure changes achieve intended risk reduction
Documentation of rejected changes, especially those related to quality/safety hazard mitigation
The Apotex case is a PIC/S textbook violation: they identified equipment deterioration (hazard), purchased upgraded equipment (change proposal), but failed to implement it with appropriate interim controls or timeline management. The result was continued production with deteriorating equipment—exactly what PIC/S guidance is designed to prevent.
WHO: The Resource-Limited Perspective
WHO’s equipment lifecycle guidance, while focused on medical equipment in low-resource settings, offers surprisingly relevant insights for GMP facilities. Their framework emphasizes:
Planning based on lifecycle cost, not just purchase price
Skill development and training as core lifecycle components
Decommissioning protocols that ensure data integrity and product segregation
The WHO model is refreshingly honest about resource constraints, a reality that applies to many GMP facilities facing budget pressure. Their key insight: proper lifecycle management actually reduces total cost of ownership by 3-10x compared to run-to-failure approaches. This is the business case that quality leaders need to make to CFOs who view maintenance as a cost center.
The Six-System Inspection Model: Where Equipment Lifecycle Fits
FDA’s Six-System Inspection Model—particularly the Facilities and Equipment System—provides the structural framework for understanding equipment lifecycle requirements. As I’ve previously written, this system “ensures that facilities and equipment are suitable for their intended use and maintained properly” with focus on “design, maintenance, cleaning, and calibration.”
The Interconnectedness Problem
Here’s where many organizations fail: they treat the six systems as silos. Equipment lifecycle management bleeds across all of them:
Production System: Equipment performance directly impacts process capability
Laboratory Controls: Analytical equipment lifecycle affects data integrity
Materials System: Equipment changes can affect raw material compatibility
Packaging and Labeling: Equipment modifications require revalidation
Quality System: Equipment deviations trigger CAPA and change control
The Apotex warning letter demonstrates this interconnectedness perfectly. Their equipment failures (Facilities & Equipment) led to container-closure integrity issues (Packaging), which they failed to investigate properly (Quality), resulting in distributed product that was potentially adulterated (Production). The FDA’s response required independent assessments of investigations, CAPA, and change management—three separate systems all impacted by equipment lifecycle failures.
The “State of Control” Assessment Questions
If FDA inspectors show up tomorrow, here’s what they’ll ask about your equipment lifecycle management:
Design Qualification: Do your User Requirements Specifications include lifecycle maintenance requirements? Are you specifying equipment with modular upgrade paths, or are you buying disposable assets?
Change Management: When you purchase upgraded equipment, what triggers its implementation? Is there a formal risk assessment linking equipment deterioration to product quality? Or do you wait for failures?
Preventive Maintenance: Are your PM intervals based on manufacturer recommendations, or on actual performance data? Do you have predictive maintenance programs using vibration analysis, thermal imaging, or particle counting?
Decommissioning: When equipment reaches end-of-life, do you have formal retirement protocols that assess data integrity impact? Or does old equipment sit in corners of the cleanroom “just in case”?
Training: Do your operators understand equipment lifecycle concepts? Can they recognize early degradation signals? Or do they just call maintenance when something breaks?
These aren’t theoretical questions. They’re directly from recent 483 observations and CRL deficiencies.
The Business Case: Why Equipment Lifecycle Management Is Economic Imperative
Let’s be blunt: the pharmaceutical industry has treated equipment as a capital expense to be minimized, not an asset to be optimized. This is catastrophically wrong. The Apotex warning letter shows the true cost of this mindset:
Product recalls: Multiple ophthalmic and oral solutions recalled
Production suspension: Sterile manufacturing halted
Independent assessments: Required third-party evaluation of entire quality system
Reputational damage: Public warning letter, potential import alert
Opportunity cost: Products stuck in regulatory limbo while competitors gain market share
Contrast this with the investment required for proper lifecycle management:
Predictive maintenance systems: $50,000-200,000 for sensors and software
Enhanced training programs: $10,000-30,000 annually
Total: Less than the cost of a single batch recall
The ROI is undeniable. Equipment lifecycle management isn’t a cost center—it’s risk mitigation with quantifiable financial returns.
The CFO Conversation
I’ve had this conversation with CFOs more times than I can count. Here’s what works:
Don’t say: “We need more maintenance budget.”
Say: “Our current equipment lifecycle risk exposure is $X million based on recent CRL trends and warning letters. Investing $Y in lifecycle management reduces that risk by Z% and extends asset utilization by 2-3 years, deferring $W million in capital expenditures.”
Bring data. Show them the Apotex letter. Show them the Tab-cel CRL. Show them the 51 CRLs driven by facility concerns. CFOs understand risk-adjusted returns. Frame equipment lifecycle management as portfolio risk management, not engineering overhead.
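Here is the shape of that sentence worked through with deliberately invented numbers; the point is the structure of the expected-loss argument, not the figures.

```python
# Deliberately hypothetical numbers; the point is the structure of the argument.
# Expected loss = probability of a facility-driven event x all-in cost of that event.
p_event_now   = 0.10        # assumed annual probability of a facility-driven recall/CRL today
p_event_after = 0.03        # assumed probability after the lifecycle program is in place
cost_of_event = 25_000_000  # assumed all-in cost: recall, remediation, lost revenue
program_cost  = 250_000     # assumed annual cost of the lifecycle program (sensors, training)

risk_reduction = (p_event_now - p_event_after) * cost_of_event
net_benefit = risk_reduction - program_cost
print(f"Annual expected-loss reduction: ${risk_reduction:,.0f}")
print(f"Net annual benefit after program cost: ${net_benefit:,.0f}")
```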
Practical Framework: Building an Equipment Lifecycle Management Program
Enough theory. Here’s the practical framework I’ve implemented across multiple DS facilities, refined through inspections, and validated against regulatory expectations.
Phase 1: Asset Criticality Assessment
Not all equipment deserves equal lifecycle attention. Use a risk-based approach:
Criticality Class A (Direct Impact): Equipment whose failure directly impacts product quality, safety, or efficacy. Bioreactors, purification skids, sterile filling lines, environmental monitoring systems. These require full lifecycle management including continuous monitoring, predictive maintenance, and formal retirement protocols.
Criticality Class B (Indirect Impact): Equipment whose failure impacts GMP environment but not direct product attributes. HVAC units, WFI systems, clean steam generators. These require enhanced lifecycle management with robust PM programs and performance trending.
Criticality Class C (No Impact): Non-GMP equipment. Standard maintenance practices apply.
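If you track assets electronically, that classification logic can be made explicit and auditable. Below is a minimal sketch; the two-question decision rule and the asset names are simplifying assumptions, not a prescribed method.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    gmp: bool                     # used in a GMP environment?
    direct_product_impact: bool   # does failure directly affect quality, safety, or efficacy?

def criticality_class(asset: Asset) -> str:
    """Assign a criticality class using the simplified risk-based logic described above."""
    if not asset.gmp:
        return "C"   # non-GMP: standard maintenance practices
    if asset.direct_product_impact:
        return "A"   # direct impact: full lifecycle management
    return "B"       # indirect impact: enhanced PM and performance trending

# Hypothetical assets for illustration
assets = [
    Asset("2000 L bioreactor", gmp=True, direct_product_impact=True),
    Asset("WFI distribution loop", gmp=True, direct_product_impact=False),
    Asset("Warehouse forklift", gmp=False, direct_product_impact=False),
]

for a in assets:
    print(f"{a.name}: Class {criticality_class(a)}")
```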
Phase 2: Lifecycle Documentation Architecture
Create a master equipment lifecycle file for each Class A and B asset containing:
User Requirements Specification with lifecycle maintenance requirements
Design Qualification including maintainability and upgrade path assessment
Commissioning and Qualification Protocols (IQ/OQ/PQ) with acceptance criteria that remain valid throughout the lifecycle
Maintenance Master Plan defining PM intervals, spare parts strategy, and predictive monitoring
Performance Trending Protocol specifying parameters to monitor, alert limits, and review frequency
Change Management History documenting all modifications with impact assessment
Retirement Protocol defining end-of-life triggers and data migration requirements
As I’ve written about in my posts on GMP-critical systems, these must be living documents that evolve with the asset, not static files that gather dust after qualification.
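One way to keep the lifecycle file living rather than static is to treat it as structured data with periodic-review dates, so overdue documents surface automatically. A minimal sketch follows; the field names, dates, and review interval are my own assumptions.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LifecycleDocument:
    title: str
    last_reviewed: date
    review_interval_days: int = 365   # assumed annual periodic review

    def overdue(self, today: date) -> bool:
        return (today - self.last_reviewed).days > self.review_interval_days

@dataclass
class EquipmentLifecycleFile:
    asset_id: str
    criticality_class: str
    documents: list = field(default_factory=list)

    def overdue_documents(self, today: date) -> list:
        return [d.title for d in self.documents if d.overdue(today)]

# Hypothetical Class A asset with a partial document set
lifecycle_file = EquipmentLifecycleFile(
    asset_id="FIL-001",
    criticality_class="A",
    documents=[
        LifecycleDocument("User Requirements Specification", date(2023, 1, 15)),
        LifecycleDocument("Maintenance Master Plan", date(2025, 6, 1)),
        LifecycleDocument("Performance Trending Protocol", date(2024, 2, 10)),
    ],
)

print("Overdue for periodic review:", lifecycle_file.overdue_documents(date.today()))
```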
Phase 3: Predictive Maintenance Implementation
Move beyond manufacturer-recommended intervals to condition-based maintenance:
Vibration analysis for rotating equipment (pumps, agitators)
Thermal imaging for electrical systems and heat transfer equipment
Particle counting for cleanroom equipment and filtration systems
Pressure decay testing for sterile barrier systems
Oil analysis for hydraulic and lubrication systems
The goal is to detect degradation 6-12 months before failure, allowing planned intervention during scheduled shutdowns.
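The underlying analytic is straightforward trending: fit the degradation signal over time and project when it will cross the alert limit, so the intervention lands inside that 6-12 month window. Here is a minimal sketch using monthly vibration readings; the readings and the limit are invented for illustration.

```python
import numpy as np

# Hypothetical monthly vibration readings (mm/s RMS) for a pump bearing
months = np.arange(12)
vibration = np.array([2.1, 2.2, 2.2, 2.3, 2.5, 2.4, 2.6, 2.7, 2.9, 3.0, 3.1, 3.3])

alert_limit = 4.5   # assumed alert limit from the Performance Trending Protocol

# Fit a linear trend and project when the alert limit will be crossed
slope, intercept = np.polyfit(months, vibration, 1)

if slope <= 0:
    print("No upward trend detected; continue routine monitoring")
else:
    months_to_alert = (alert_limit - intercept) / slope
    months_remaining = months_to_alert - months[-1]
    print(f"Degradation trend: {slope:.2f} mm/s per month")
    print(f"Projected alert-limit crossing in ~{months_remaining:.0f} months")
    if months_remaining > 3:
        print("Plan intervention at the next scheduled shutdown")
    else:
        print("Escalate: schedule intervention now")
```

In practice the alert limit would come from the Performance Trending Protocol defined in Phase 2, not from a number chosen in the script.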
Phase 4: Integrated Change Control
Equipment changes must flow through formal change control with:
Technical assessment by engineering and quality
Risk evaluation using FMEA or similar tools
Regulatory assessment for potential prior approval requirements
Implementation planning with interim controls if needed
Post-implementation review to verify effectiveness
The Apotex case shows what happens when you skip the interim controls. They identified the need for upgraded equipment (change) but failed to implement the necessary bridge measures to ensure product quality while waiting for that equipment to come online. They allowed the “future state” (new equipment) to become an excuse for neglecting the “current state” (deteriorating equipment).
This is a failure of Change Management Logic. In a robust quality system, the moment you identify that equipment requires replacement due to performance degradation, you have acknowledged a risk. If you cannot replace it immediately—due to capital cycles, lead times, or qualification timelines—you must implement interim controls to mitigate that risk.
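That logic lends itself to a hard gate in change control: if the risk is acknowledged and the replacement is deferred, the change cannot be approved without documented interim controls. The sketch below is purely illustrative and does not represent any firm’s actual change-control system.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRecord:
    description: str
    risk_acknowledged: bool      # degradation identified as a product-quality risk
    replacement_immediate: bool  # can the equipment be replaced now?
    interim_controls: list = field(default_factory=list)

def gate_check(change: ChangeRecord) -> str:
    """Refuse approval when a deferred replacement has no documented interim controls."""
    deferred = change.risk_acknowledged and not change.replacement_immediate
    if deferred and not change.interim_controls:
        return "Rejected: a purchase order is not a CAPA; define interim controls or stop production"
    return "Approvable: risk is controlled during the bridge period"

# Hypothetical change mirroring the failure mode described above
change = ChangeRecord(
    description="Replace filling-line components showing seal degradation",
    risk_acknowledged=True,
    replacement_immediate=False,
    interim_controls=[],  # nothing defined while waiting for the new equipment
)

print(gate_check(change))
```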
For Apotex, those interim controls should have been:
Reduced run durations to minimize stress on failing seals.
Shortened maintenance intervals (replacing gaskets every batch instead of every campaign).
Enhanced environmental monitoring focused specifically on the degraded zones.
Instead, they did nothing. They continued business as usual, likely comforting themselves with the purchase order for the new machine. The FDA’s response was unambiguous: A purchase order is not a CAPA. Until the new equipment is qualified and operational, your legacy equipment must remain in a state of control, or production must stop. There is no regulatory “grace period” for deteriorating assets.
Phase 5: The Cultural Shift—From “Repair” to “Reliability”
The final and most difficult phase of this framework is cultural. You cannot write an SOP for this; you have to lead it.
Most organizations operate on a “Break-Fix” mentality:
Equipment runs until it alarms or fails.
Maintenance fixes it.
Quality investigates (or papers over) the failure.
Production resumes.
The FDA’s “Lifecycle State of Control” demands a “Predict-Prevent” mentality:
Equipment is monitored for degradation signals (vibration, heat, particle counts).
Maintenance intervenes before failure limits are reached.
Quality reviews trends to confirm the intervention was effective.
Production continues uninterrupted.
To achieve this, you need to change how you incentivize your teams. Stop rewarding “heroic” fixes at 2 AM. Start rewarding the boring, invisible work of preventing the failure in the first place. As I’ve written before regarding Quality Management Maturity (QMM), mature quality systems are quiet systems. Chaos is not a sign of hard work; it’s a sign of lost control.
Conclusion: The Choice Before Us
The warning letter to Apotex Inc. and the rising tide of facility-related CRLs are not random compliance noise. They are signal flares. The regulatory expectations for equipment management have fundamentally shifted from static qualification (Is it validated?) to dynamic lifecycle management (Is it in a state of control right now?).
The FDA, EMA, and PIC/S have converged on a single truth: You cannot assure product quality if you cannot guarantee equipment performance.
We are at an inflection point. The industry’s aging infrastructure, combined with the increasing complexity of biologic processes and the unforgiving nature of residue control, has created a perfect storm. We can no longer treat equipment maintenance as a lower-tier support function. It is a core GMP activity, equal in criticality to batch record review or sterility testing.
As Quality Leaders, we have two choices:
The Apotex Path: Treat equipment upgrades as capital headaches to be deferred. Ignore the “minor” leaks and “insignificant” residues. Let the maintenance team bandage the wounds while we focus on “strategic” initiatives. This path leads to 483s, warning letters, CRLs, and the excruciating public failure of seeing your facility’s name in an FDA press release.
The Lifecycle Path: Embrace the complexity. Resource the predictive maintenance programs. Validate the residue removal. Treat every equipment change as a potential risk to patient safety. Build a system where equipment reliability is the foundation of your quality strategy, not an afterthought.
The second path is expensive. It is technically demanding. It requires fighting for budget dollars that don’t have immediate ROI. But it allows you to sleep at night, knowing that when—not if—the FDA investigator asks to see your equipment maintenance history, you won’t have to explain why you used a cable tie to fix a glove port.
You’ll simply show them the data that proves you’re in control.
Job descriptions are foundational documents in pharmaceutical quality systems. Regulations like 21 CFR 211.25 require that personnel have appropriate education, training, and experience to perform assigned functions. The job description serves as the starting point for determining training requirements, establishing accountability, and demonstrating regulatory compliance. Yet for all their regulatory necessity, most job descriptions fail to capture what actually makes someone effective in their role.
The problem isn’t that job descriptions are poorly written or inadequately detailed. The problem is more fundamental: they describe static snapshots of isolated positions while ignoring the dynamic, interconnected, and discretionary nature of real organizational work.
The Static Job Description Trap
Traditional job descriptions treat roles as if they exist in isolation. A quality manager’s job description might list responsibilities like “lead inspection readiness activities,” “participate in vendor management,” or “write and review deviations and CAPAs”. These statements aren’t wrong, but they’re profoundly incomplete.
Elliott Jaques, a 20th-century organizational theorist, identified a critical distinction that most job descriptions ignore: the difference between prescribed elements and discretionary elements of work. Every role contains both, yet our documentation acknowledges only one.
Prescribed elements are the boundaries, constraints, and requirements that eliminate choice. They specify what must be done, what cannot be done, and the regulations, policies, and methods to which the role holder must conform. In pharmaceutical quality, prescribed elements are abundant and well-documented: follow GMPs, complete training before performing tasks, document decisions according to procedure, escalate deviations within defined timeframes.
Discretionary elements are everything else—the choices, judgments, and decisions that cannot be fully specified in advance. They represent the exercise of professional judgment within the prescribed limits. Discretion is where competence actually lives.
When we investigate a deviation, the prescribed elements are clear: follow the investigation procedure, document findings in the system, complete within regulatory timelines. But the discretionary elements determine whether the investigation succeeds: What questions should I ask? Which subject matter experts should I engage? How deeply should I probe this particular failure mode? What level of evidence is sufficient? When have I gathered enough data to draw conclusions?
As Jaques observed, “the core of industrial work is therefore not only to carry out the prescribed elements of the job, but also to exercise discretion in its execution”. Yet if job descriptions don’t recognize and define the limits of discretion, employees will either fail to exercise adequate discretion or wander beyond appropriate limits into territory that belongs to other roles.
The Interconnectedness Problem
Job descriptions also fail because they treat positions as independent entities rather than as nodes in an organizational network. In reality, all jobs in pharmaceutical organizations are interconnected. A mistake in manufacturing manifests as a quality investigation. A poorly written procedure creates training challenges. An inadequate risk assessment during tech transfer generates compliance findings during inspection.
This interconnectedness means that describing any role in isolation fundamentally misrepresents how work actually flows through the organization. When I write about process owners, I emphasize that they play a fundamental role in managing interfaces between key processes precisely to prevent horizontal silos. The process owner’s authority and accountability extend across functional boundaries because the work itself crosses those boundaries.
Yet traditional job descriptions remain trapped in functional silos. They specify reporting relationships vertically—who you report to, who reports to you—but rarely acknowledge the lateral dependencies that define how work actually gets done. They describe individual accountability without addressing mutual obligations.
The Missing Element: Mutual Role Expectations
Jaques argued that effective job descriptions must contain three elements:
The central purpose and rationale for the position
The prescribed and discretionary elements of the work
The mutual role expectations—what the focal role expects from other roles, and vice versa
That third element is almost entirely absent from job descriptions, yet it’s arguably the most critical for organizational effectiveness.
Consider a deviation investigation. The person leading the investigation needs certain things from other roles: timely access to manufacturing records from operations, technical expertise from subject matter experts, root cause methodology support from quality systems specialists, regulatory context from regulatory affairs. Conversely, those other roles have legitimate expectations of the quality professional: clear articulation of information needs, respect for operational constraints, transparency about investigation progress, appropriate use of their expertise.
These mutual expectations form the actual working contract that determines whether the organization functions effectively. When they remain implicit and undocumented, we get the dysfunction I see constantly: investigations that stall because operations claims they’re too busy to provide information, subject matter experts who feel blindsided by last-minute requests, quality professionals frustrated that other functions don’t understand the urgency of compliance timelines.
Decision-making frameworks like DACI and RAPID exist precisely to make these mutual expectations explicit. They clarify who drives decisions, who must be consulted, who has approval authority, and who needs to be informed. But these frameworks work at the decision level. We need the same clarity at the role level, embedded in how we define positions from the start.
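As a sketch of what making those expectations explicit can look like, here is a hypothetical RAPID-style assignment for a single decision, kept as plain data so it can be versioned and reviewed like any other quality record. The decision and the role names are invented for illustration.

```python
# Hypothetical RAPID-style assignment for a single decision, kept as plain data
# so it can be versioned and reviewed like any other quality record.
batch_disposition = {
    "decision": "Disposition of a batch affected by a major deviation",
    "Recommend": ["Deviation Investigator"],
    "Agree": ["Site Quality Risk Manager"],
    "Perform": ["QA Batch Release"],
    "Input": ["Manufacturing SME", "Regulatory Affairs"],
    "Decide": ["Head of Quality"],
}

print(batch_disposition["decision"])
for role in ("Recommend", "Agree", "Perform", "Input", "Decide"):
    print(f"  {role}: {', '.join(batch_disposition[role])}")
```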
Discretion and Hierarchy
The amount of discretion in a role—what Jaques called the “time span of discretion”—is actually a better measure of organizational level than traditional hierarchical markers like job titles or reporting relationships. A front-line operator works within tightly prescribed limits with short time horizons: follow this batch record, use these materials, execute these steps, escalate these deviations immediately. A site quality director operates with much broader discretion over longer time horizons: establish quality strategy, allocate resources across competing priorities, determine which regulatory risks to accept or mitigate, shape organizational culture over years.
This observation has profound implications for how we think about organizational design. As I’ve written before, the idea that “the higher the rank in the organization the more decision-making authority you have” is absurd. In every organization I’ve worked in, people hold positions of authority over areas where they lack the education, experience, and training to make competent decisions.
The solution isn’t to eliminate hierarchy—organizations need stratification by complexity and time horizon. The solution is to separate positional authority from decision authority and to explicitly define the discretionary scope of each role.
A manufacturing supervisor might have positional authority over operations staff but should not have decision authority over validation strategies—that’s outside their discretionary scope. A quality director might have positional authority over the quality function but should not unilaterally decide equipment qualification approaches that require deep engineering expertise. Clear boundaries around discretion prevent the territorial conflicts and competence gaps that plague organizations.
Implications for Training and Competency
The distinction between prescribed and discretionary elements has critical implications for how we develop competency. Most pharmaceutical training focuses almost exclusively on prescribed elements: here’s the procedure, here’s how to use the system, here’s what the regulation requires. We measure training effectiveness by knowledge checks that assess whether people remember the prescribed limits.
But competence isn’t about following procedures—it’s about exercising appropriate judgment within procedural constraints. It’s about knowing what to do when things depart from expectations, recognizing which risk assessment methodology fits a particular decision context, sensing when additional expertise needs to be consulted.
These discretionary capabilities develop differently than procedural knowledge. They require practice, feedback, coaching, and sustained engagement over time. A meta-analysis examining skill retention found that complex cognitive skills like risk assessment decay much faster than simple procedural skills. Without regular practice, the discretionary capabilities that define competence actively degrade.
This is why I emphasize frequency, duration, depth, and accuracy of practice as the real measures of competence. It’s why deep process ownership requires years of sustained engagement rather than weeks of onboarding. It’s why competency frameworks must integrate skills, knowledge, and behaviors in ways that acknowledge the discretionary nature of professional work.
Job descriptions that specify only prescribed elements provide no foundation for developing the discretionary capabilities that actually determine whether someone can perform the role effectively. They lead to training plans focused on knowledge transfer rather than judgment development, performance evaluations that measure compliance rather than contribution, and hiring decisions based on credentials rather than capacity.
Designing Better Job Descriptions
Quality leaders—especially those of us responsible for organizational design—need to fundamentally rethink how we define and document roles. Effective job descriptions should:
Articulate the central purpose. Why does this role exist? What job is the organization hiring this position to do? A deviation investigator exists to transform quality failures into organizational learning while demonstrating control to regulators. A validation engineer exists to establish documented evidence that systems consistently produce quality outcomes. Purpose provides the context for exercising discretion appropriately.
Specify prescribed boundaries explicitly. What are the non-negotiable constraints? Which policies, regulations, and procedures must be followed without exception? What decisions require escalation or approval? Clear prescribed limits create safety—they tell people where they can’t exercise judgment and where they must seek guidance.
Define discretionary scope clearly. Within the prescribed limits, what decisions is this role expected to make independently? What level of evidence is this role qualified to evaluate? What types of problems should this role resolve without escalation? How much resource commitment can this role authorize? Making discretion explicit transforms vague “good judgment” expectations into concrete accountability.
Document mutual role expectations. What does this role need from other roles to be successful? What do other roles have the right to expect from this position? How do the prescribed and discretionary elements of this role interface with adjacent roles in the process? Mapping these interdependencies makes the organizational system visible and manageable.
Connect to process roles explicitly. Rather than generic statements like “participate in CAPAs,” job descriptions should specify process roles: “Author and project manage CAPAs for quality system improvements” or “Provide technical review of manufacturing-related CAPAs”. Process roles define the specific prescribed and discretionary elements relevant to each procedure. They provide the foundation for role-based training curricula that address both procedural compliance and judgment development.
Beyond Job Descriptions: Organizational Design
The limitations of traditional job descriptions point to larger questions about organizational design. If we’re serious about building quality systems that work—that don’t just satisfy auditors but actually prevent failures and enable learning—we need to design organizations around how work flows rather than how authority is distributed.
This means establishing empowered process owners who have clear authority over end-to-end processes regardless of functional boundaries. It means implementing decision-making frameworks that explicitly assign decision roles based on competence rather than hierarchy. It means creating conditions for deep process ownership through sustained engagement rather than rotational assignments.
Most importantly, it means recognizing that competent performance requires both adherence to prescribed limits and skillful exercise of discretion. Training systems, performance management approaches, and career development pathways must address both dimensions. Job descriptions that acknowledge only one while ignoring the other set employees up for failure and organizations up for dysfunction.
The Path Forward
Jaques wrote that organizational structures should be “requisite”—required by the nature of work itself rather than imposed by arbitrary management preferences. There’s wisdom in that framing for pharmaceutical quality. Our organizational structures should emerge from the actual requirements of pharmaceutical work: the need for both compliance and innovation, the reality of interdependent processes, the requirement for expert judgment alongside procedural discipline.
Job descriptions are foundational documents in quality systems. They link to hiring decisions, training requirements, performance expectations, and regulatory demonstration of competence. Getting them right matters not just for audit preparedness but for organizational effectiveness.
The next time you review a job description, ask yourself: Does this document acknowledge both what must be done and what must be decided? Does it clarify where discretion is expected and where it’s prohibited? Does it make visible the interdependencies that determine whether this role can succeed? Does it provide a foundation for developing both procedural compliance and professional judgment?
If the answer is no, you’re not alone. Most job descriptions fail these tests. But recognizing the deficit is the first step toward designing organizational systems that actually match the complexity and interdependence of pharmaceutical work—systems where competence can develop, accountability is clear, and quality is built into how we organize rather than inspected into what we produce.
The work of pharmaceutical quality requires us to exercise discretion well within prescribed limits. Our organizational design documents should acknowledge that reality rather than pretend it away.
Example Job Description
Site Quality Risk Manager – Seattle and Redmond Sites
Reports To: Sr. Manager, Quality
Department: Quality
Location: Hybrid/Field-Based – Seattle and Redmond Sites
Purpose of the Role
The Site Quality Risk Manager ensures that quality and manufacturing operations at the sites maintain proactive, compliant, and science-based risk management practices. The role exists to translate uncertainty into structured understanding—identifying, prioritizing, and mitigating risks to product quality, patient safety, and business continuity. Through expert application of Quality Risk Management (QRM) principles, this role builds a culture of curiosity, professional judgment, and continuous improvement in decision-making.
Prescribed Work Elements
Boundaries and required activities defined by regulations, procedures, and PQS expectations.
Ensure full alignment of the site Risk Program with the Corporate Pharmaceutical Quality System (PQS), ICH Q9(R1) principles, and applicable GMP regulations.
Facilitate and document formal quality risk assessments for manufacturing, laboratory, and facility operations.
Manage and maintain the site Risk Registers for site facilities.
Communicate high-priority risks, mitigation actions, and risk acceptance decisions to site and functional senior management.
Support Health Authority inspections and audits as QRM Subject Matter Expert (SME).
Lead deployment and sustainment of QRM process tools, templates, and governance structures within the corporate risk management framework.
Maintain and periodically review site-level guidance documents and procedures on risk management.
Discretionary Work Elements
Judgment and decision-making required within professional and policy boundaries.
Determine the appropriate depth and scope of risk assessments based on the required level of formality and system impact.
Evaluate the adequacy and proportionality of mitigations, balancing regulatory conservatism with operational feasibility.
Prioritize site risk topics requiring cross-functional escalation or systemic remediation.
Shape site-specific applications of global QRM tools (e.g., HACCP, FMEA, HAZOP, RRF) to reflect manufacturing complexity and lifecycle phase—from Phase 1 through PPQ and commercial readiness.
Determine which emerging risks require systemic visibility in the Corporate Risk Register and document rationale for inclusion or deferral.
Facilitate reflection-based learning after deviations, applying risk communication as a learning mechanism across functions.
Offer informed judgment in gray areas where quality principles must guide rather than prescribe decisions.
Mutual Role Expectations
From the Site Quality Risk Manager:
Partner transparently with Process Owners and Functional SMEs to identify, evaluate, and mitigate risks.
Translate technical findings into business-relevant risk statements for senior leadership.
Mentor and train site teams to develop risk literacy and discretionary competence—the ability to think, not just comply.
Maintain a systems perspective that integrates manufacturing, analytical, and quality operations within a unified risk framework.
From Other Roles Toward the Site Quality Risk Manager:
Provide timely, complete data for risk assessments.
Engage in collaborative dialogue rather than escalation-only interactions.
Respect QRM governance boundaries while contributing specialized technical judgment.
Support implementation of sustainable mitigations beyond short-term containment.
Qualifications and Experience
Bachelor’s degree in life sciences, engineering, or a related technical discipline. Equivalent experience accepted.
Minimum of 4 years of relevant experience in Quality Risk Management within biopharmaceutical GMP manufacturing environments.
Demonstrated application of QRM methodologies (FMEA, HACCP, HAZOP, RRF) and facilitation of cross-functional risk assessments.
Strong understanding of ICH Q9(R1) and FDA/EMA risk management expectations.
Proven ability to make judgment-based decisions under regulatory and operational uncertainty.
Experience mentoring or building risk capabilities across technical teams.
Excellent communication, synthesis, and facilitation skills.
Purpose in Organizational Design Context
This role exemplifies a requisite position—where scope of discretion, not hierarchy, defines level of work. The Site Quality Risk Manager operates with a medium-span time horizon (6–18 months), balancing regulatory compliance with strategic foresight. Success is measured by the organization’s capacity to detect, understand, and manage risk at progressively earlier stages of product and process lifecycle—reducing reactivity and enabling resilience.
Competency Development and Training Focus
Prescribed competence: Deep mastery of PQS procedures, regulatory standards, and risk methodologies.
Discretionary competence: Situational judgment, cross-functional influence, systems thinking, and adaptive decision-making. Training plans should integrate practice, feedback, and reflection mechanisms rather than static knowledge transfer, aligning with the competency framework principles.
This enriched job description demonstrates how clarity of purpose, articulation of prescribed vs. discretionary elements, and defined mutual expectations transform a standard compliance document into a true instrument of organizational design and leadership alignment.