Mentorship as Missing Infrastructure in Quality Culture

The gap between quality-as-imagined and quality-as-done doesn’t emerge from inadequate procedures or insufficient training budgets. It emerges from a fundamental failure to transfer the reasoning, judgment, and adaptive capacity that expert quality professionals deploy every day but rarely articulate explicitly. This knowledge—how to navigate the tension between regulatory compliance and operational reality, how to distinguish signal from noise in deviation trends, how to conduct investigations that identify causal mechanisms rather than document procedural failures—doesn’t transmit effectively through classroom training or SOP review. It requires mentorship.

Yet pharmaceutical quality organizations treat mentorship as a peripheral benefit rather than critical infrastructure. When we discuss quality culture, we focus on leadership commitment, clear procedures, adequate resources, and accountability systems. These matter. But without deliberate mentorship structures that transfer tacit quality expertise from experienced professionals to developing ones, we’re building quality systems on the assumption that technical competence alone generates quality judgment. That assumption fails predictably and expensively.

A recent Harvard Business Review article on organizational mentorship culture provides a framework that translates powerfully to pharmaceutical quality contexts. The authors distinguish between running mentoring programs—tactical initiatives with clear participants and timelines—and fostering mentoring cultures where mentorship permeates the organization as an expected practice rather than a special benefit. That distinction matters enormously for quality functions.

Quality organizations running mentoring programs might pair high-potential analysts with senior managers for quarterly conversations about career development. Quality organizations with mentoring cultures embed the expectation and practice of knowledge transfer into daily operations—senior investigators routinely involve junior colleagues in root cause analysis, experienced auditors deliberately explain their risk-based thinking during facility walkthroughs, quality managers create space for emerging leaders to struggle productively with complex regulatory interpretations before providing their own conclusions.

The difference isn’t semantic. It’s the difference between quality systems that can adapt and improve versus systems that stagnate despite impressive procedure libraries and training completion metrics.

The Organizational Blind Spot: High Performers Left to Navigate Development Alone

The HBR article describes a scenario that resonates uncomfortably with pharmaceutical quality career paths: Maria, a high-performing marketing professional, was overlooked for promotion because strong technical results didn’t automatically translate to readiness for increased responsibility. She assumed performance alone would drive progression. Her manager recognized a gap between Maria’s current behaviors and those required for senior roles but also recognized she wasn’t the right person to develop those capabilities—her focus was Maria’s technical performance, not her strategic development.

This pattern repeats constantly in pharmaceutical quality organizations. A QC analyst demonstrates excellent technical capability—meticulous documentation, strong analytical troubleshooting, consistent detection of out-of-specification results. Based on this performance, they’re promoted to Senior Analyst or given investigation leadership responsibilities. Suddenly they’re expected to demonstrate capabilities that excellent technical work neither requires nor develops: distinguishing between adequate and excellent investigation depth, navigating political complexity when investigations implicate manufacturing process decisions, mentoring junior analysts while managing their own workload.

Nobody mentioned mentoring because everything seemed to be going well. The analyst was meeting expectations. Training records were current. Performance reviews were positive. But the knowledge required for the next level—how to think like a senior quality professional rather than execute like a proficient technician—was never deliberately transferred.

I’ve seen this failure mode throughout my career leading quality organizations. We promote based on technical excellence, then express frustration when newly promoted professionals struggle with judgment, strategic thinking, or leadership capabilities. We attribute these struggles to individual limitations rather than systematic organizational failure to develop those capabilities before they became job requirements.

The assumption underlying this failure is that professional development naturally emerges from experience plus training. Put capable people in challenging roles, provide required training, and development follows. This assumption ignores what research on expertise consistently demonstrates: expert performance emerges from deliberate practice with feedback, not accumulated experience. Without structured mentorship providing that feedback and guiding that deliberate practice, experience often just reinforces existing patterns rather than developing new capabilities.

Why Generic Mentorship Programs Fail in Quality Contexts

Pharmaceutical companies increasingly recognize mentorship value and implement formal mentoring programs. According to the HBR article, 98% of Fortune 500 companies offered visible mentoring programs in 2024. Yet uptake remains remarkably low—only 24% of employees use available programs. Employees cite time pressures, unclear expectations, limited training, and poor program visibility as barriers.

These barriers intensify in quality functions. Quality professionals already face impossible time allocation challenges—investigation backlogs, audit preparation, regulatory submission support, training delivery, change control review, deviation trending. Adding mentorship meetings to calendars already stretched beyond capacity feels like another corporate initiative disconnected from operational reality.

But the deeper problem with generic mentoring programs in quality contexts is misalignment between program structure and quality knowledge characteristics. Most corporate mentoring programs focus on career development, leadership skills, networking, and organizational navigation. These matter. But they don’t address the specific knowledge transfer challenges unique to pharmaceutical quality practice.

Quality expertise is deeply contextual and often tacit. An experienced investigator approaching a potential product contamination doesn’t follow a decision tree. They’re integrating environmental monitoring trends, recent facility modifications, similar historical events, understanding of manufacturing process vulnerabilities, assessment of analytical method limitations, and pattern recognition across hundreds of previous investigations. Much of this reasoning happens below conscious awareness—it’s System 1 thinking in Kahneman’s framework, rapid and automatic.

When mentoring focuses primarily on career development conversations, it misses the opportunity to make this tacit expertise explicit. The most valuable mentorship for a junior quality professional isn’t quarterly career planning discussions. It’s the experienced investigator talking through their reasoning during an active investigation: “I’m focusing on the environmental monitoring because the failure pattern suggests localized contamination rather than systemic breakdown, and these three recent EM excursions in the same suite caught my attention even though they were all within action levels…” That’s knowledge transfer that changes how the mentee will approach their next investigation.

Generic mentoring programs also struggle with the falsifiability challenge I’ve been exploring on this blog. When mentoring success metrics focus on program participation rates, satisfaction surveys, and retention statistics, they measure mentoring-as-imagined (career discussions happened, participants felt supported) rather than mentoring-as-done (quality judgment improved, investigation quality increased, regulatory inspection findings decreased). These programs can look successful while failing to transfer the quality expertise that actually matters for organizational performance.

Evidence for Mentorship Impact: Beyond Engagement to Quality Outcomes

Despite implementation challenges, research evidence for mentorship impact is substantial. The HBR article cites multiple studies demonstrating that mentees were promoted at more than twice the rate of non-participants, mentoring delivered an ROI of 1000% or better, and 70% of HR leaders reported that mentoring enhanced business performance. A 2021 meta-analysis in the Journal of Vocational Behavior found strong correlations between mentoring, job performance, and career satisfaction across industries.

These findings align with broader research on expertise development. Anders Ericsson’s work on deliberate practice demonstrates that expert performance requires not just experience but structured practice with immediate feedback from more expert practitioners. Mentorship provides exactly this structure—experienced quality professionals providing feedback that helps developing professionals identify gaps between their current performance and expert performance, then deliberately practicing specific capabilities to close those gaps.

In pharmaceutical quality contexts, mentorship impact manifests in several measurable dimensions that directly connect to organizational quality outcomes:

Investigation quality and cycle time—Organizations with strong mentorship cultures produce investigations that more reliably identify causal mechanisms rather than documenting procedural failures. Junior investigators mentored through multiple complex investigations develop pattern recognition and causal reasoning capabilities that would take years to develop through independent practice. This translates to shorter investigation cycles (less rework when initial investigation proves inadequate) and more effective CAPAs (addressing actual causes rather than superficial procedural gaps).

Regulatory inspection resilience—Quality professionals who’ve been mentored through inspection preparation and response demonstrate better real-time judgment during inspections. They’ve observed how experienced professionals navigate inspector questions, balance transparency with appropriate context, and distinguish between minor observations requiring acknowledgment versus potential citations requiring immediate escalation. This tacit knowledge doesn’t transfer through training on FDA inspection procedures—it requires observing and debriefing actual inspection experiences with expert mentors.

Adaptive capacity during operational challenges—Mentorship develops the capability to distinguish when procedures should be followed rigorously versus when procedures need adaptive interpretation based on specific circumstances. This is exactly the work-as-done versus work-as-imagined tension that Sidney Dekker emphasizes. Junior quality professionals without mentorship default to rigid procedural compliance (safest from a personal accountability perspective) or make inappropriate exceptions (lacking the judgment to distinguish justified from unjustified deviations). Experienced mentors help develop the judgment required to navigate this tension appropriately.

Knowledge retention during turnover—Perhaps most critically for pharmaceutical manufacturing, mentorship creates explicit transfer of institutional knowledge that otherwise walks out the door when experienced professionals leave. The experienced QA manager who remembers why specific change control categories exist, which regulatory commitments drove specific procedural requirements, and which historical issues inform current risk assessments—without deliberate mentorship, that knowledge disappears at retirement, leaving the organization vulnerable to repeating historical failures.

The ROI calculation for quality mentorship should account for these specific outcomes. What’s the cost of investigation rework cycles? What’s the cost of FDA Form 483 observations requiring CAPA responses? What’s the cost of lost production while investigating contamination events that experienced professionals would have prevented through better environmental monitoring interpretation? What’s the cost of losing manufacturing licenses because institutional knowledge critical for regulatory compliance wasn’t transferred before key personnel retired?

When framed against these costs, the investment in structured mentorship—time allocation for senior professionals to mentor, reduced direct productivity while developing professionals learn through observation and guided practice, programmatic infrastructure to match mentors with mentees—becomes obviously justified. The problem is that mentorship costs appear on operational budgets as reduced efficiency, while mentorship benefits appear as avoided costs that are invisible until failures occur.
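
To make the avoided-cost framing concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure is an invented placeholder, not an industry benchmark, and a real analysis would substitute the organization's own cost data:

```python
# Hypothetical avoided-cost sketch for a quality mentorship investment.
# All figures below are illustrative placeholders, not benchmarks.

# Annual mentorship cost: senior-professional time diverted from direct work
mentor_hours = 4 * 40          # four mentors, ~40 hours of mentoring each per year
mentor_loaded_rate = 150       # assumed loaded hourly cost, USD
program_overhead = 10_000      # matching, mentor training, administration
mentorship_cost = mentor_hours * mentor_loaded_rate + program_overhead

# Avoided costs, each discounted by an assumed probability that
# mentorship actually prevents that failure in a given year
avoided = {
    "investigation rework cycles": (60_000, 0.5),
    "FDA 483 CAPA response effort": (120_000, 0.2),
    "lost production from preventable contamination": (500_000, 0.1),
}
expected_benefit = sum(cost * p for cost, p in avoided.values())

roi = (expected_benefit - mentorship_cost) / mentorship_cost
print(f"Cost: ${mentorship_cost:,}  Expected benefit: ${expected_benefit:,.0f}  ROI: {roi:.0%}")
```

The point of the sketch is the structure, not the numbers: mentorship costs are certain and immediate, while benefits are probabilistic avoided costs, which is exactly why they stay invisible on operational budgets until failures occur.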

From Mentoring Programs to Mentoring Culture: The Infrastructure Challenge

The HBR framework distinguishes programs from culture by emphasizing permeation and normalization. Mentoring programs are tactical—specific participants, clear timelines, defined objectives. Mentoring cultures embed mentorship expectations throughout the organization such that receiving and providing mentorship becomes normal professional practice rather than a special developmental opportunity.

This distinction maps directly onto quality culture challenges. Organizations with quality programs have quality departments, quality procedures, quality training, quality metrics. Organizations with quality cultures have quality thinking embedded throughout operational decision-making—manufacturing doesn’t view quality as external oversight but as integrated partnership, investigations focus on understanding what happened rather than documenting compliance, regulatory commitments inform operational planning rather than appearing as constraints after plans are established.

Building quality culture requires exactly the same permeation and normalization that building mentoring culture requires. And these aren’t separate challenges—they’re deeply interconnected. Quality culture emerges when quality judgment becomes distributed throughout the organization rather than concentrated in the quality function. That distribution requires knowledge transfer. Knowledge transfer of complex professional judgment requires mentorship.

The pathway from mentoring programs to mentoring culture in quality organizations involves several specific shifts:

From Opt-In to Default Expectation

The HBR article recommends shifting from opt-in to opt-out mentoring so support becomes a default rather than a benefit requiring active enrollment. In quality contexts, this means embedding mentorship into role expectations rather than treating it as additional responsibility.

When I’ve implemented this approach, it looks like clear articulation in job descriptions and performance objectives: “Senior Investigators are expected to mentor at least two developing investigators through complex investigations annually, with documented knowledge transfer and mentee capability development.” Not optional. Not extra credit. Core job responsibility with the same performance accountability as investigation completion and regulatory response.

Similarly for mentees: “QA Associates are expected to engage actively with assigned mentors, seeking guidance on complex quality decisions and debriefing experiences to accelerate capability development.” This frames mentorship as professional responsibility rather than optional benefit.

The challenge is time allocation. If mentorship is a core expectation, workload planning must account for it. A senior investigator expected to mentor two people through complex investigations cannot also carry the same investigation load as someone without mentorship responsibilities. Organizations that add mentorship expectations without adjusting other performance expectations are creating mentorship theater—the appearance of commitment without genuine resource allocation.

This requires honest confrontation with capacity constraints. If investigation workload already exceeds capacity, adding mentorship expectations just creates another failure mode where people are accountable for obligations they cannot possibly fulfill. The alternative is reducing other expectations to create genuine space for mentorship—which forces difficult prioritization conversations about whether knowledge transfer and capability development matter more than marginal investigation throughput increases.

Embedding Mentorship into Performance and Development Processes

The HBR framework emphasizes integrating mentorship into performance conversations rather than treating it as a standalone initiative. Line managers should be trained to identify development needs best served through mentoring and to explore progress during check-ins and appraisals.

In quality organizations, this integration happens at multiple levels. Individual development plans should explicitly identify capabilities requiring mentorship rather than classroom training. Investigation management processes should include mentorship components—complex investigations assigned to mentor-mentee pairs rather than individual investigators, with the explicit expectation that mentors will transfer reasoning processes, not just task completion.

Quality system audits and management reviews should assess mentorship effectiveness as a quality system element. Are investigations led by recently mentored professionals showing improved causal reasoning? Are newly promoted quality managers demonstrating judgment capabilities suggesting effective mentorship? Are critical knowledge areas identified for transfer before experienced professionals leave?

The falsifiable systems approach I’ve advocated demands testable predictions. A mentoring culture makes specific predictions about performance: professionals who receive structured mentorship in investigation techniques will produce higher quality investigations than those who develop through independent practice alone. This prediction can be tested—and potentially falsified—through comparison of investigation quality metrics between mentored and non-mentored populations.

Organizations serious about quality culture should conduct exactly this analysis. If mentorship isn’t producing measurable improvement in quality performance, either the mentorship approach needs revision or the assumption that mentorship improves quality performance is wrong. Most organizations avoid this test because they’re not confident in the answer—which suggests they’re engaged in mentorship theater rather than genuine capability development.
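
One concrete version of that comparison, sketched under the hypothetical assumption that each closed investigation is scored pass/fail on "credible root cause identified," is a two-proportion z-test between mentored and non-mentored investigators. The counts below are invented for illustration:

```python
from math import erf, sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    normal_cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - normal_cdf(abs(z)))
    return z, p_value

# Invented counts: investigations judged to have identified a credible
# root cause, mentored investigators vs. non-mentored investigators
z, p = two_proportion_z(42, 50, 31, 50)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value would reject the null hypothesis that both populations produce credible root causes at the same rate. Equally important, a large p-value after adequate sample size is the falsifying result the organization must be willing to see.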

Cross-Functional Mentorship: Breaking Quality Silos

The HBR article emphasizes that senior leaders should mentor beyond their direct teams to ensure objectivity and transparency. Mentors outside the mentee’s reporting line can provide perspective and feedback that direct managers cannot.

This principle is especially powerful in quality contexts when applied cross-functionally. Quality professionals mentored exclusively within quality functions risk developing insular perspectives that reinforce quality-as-imagined disconnected from manufacturing-as-done. Manufacturing professionals mentored exclusively within manufacturing risk developing operational perspectives disconnected from regulatory requirements and patient safety considerations.

Cross-functional mentorship addresses these risks while building organizational capabilities that strengthen quality culture. Consider several specific applications:

Manufacturing leaders mentoring quality professionals—An experienced manufacturing director mentoring a QA manager helps the QA manager understand operational constraints, equipment limitations, and process variability from manufacturing perspective. This doesn’t compromise quality oversight—it makes oversight more effective by grounding regulatory interpretation in operational reality. The QA manager learns to distinguish between regulatory requirements demanding rigid compliance versus areas where risk-based interpretation aligned with manufacturing capabilities produces better patient outcomes than theoretical ideals disconnected from operational possibility.

Quality leaders mentoring manufacturing professionals—Conversely, an experienced quality director mentoring a manufacturing supervisor helps the supervisor understand how manufacturing decisions create quality implications and regulatory commitments. The supervisor learns to anticipate how process changes will trigger change control requirements, how equipment qualification status affects operational decisions, and how data integrity practices during routine manufacturing become critical evidence during investigations. This knowledge prevents problems rather than just catching them after occurrence.

Reverse mentoring on emerging technologies and approaches—The HBR framework mentions reverse and peer mentoring as equally important to traditional hierarchical mentoring. In quality contexts, reverse mentoring becomes especially valuable around emerging technologies, data analytics approaches, and new regulatory frameworks. A junior quality analyst with strong statistical and data visualization capabilities mentoring a senior quality director on advanced trending techniques creates mutual benefit—the director learns new analytical approaches while the analyst gains understanding of how to make analytical insights actionable in regulatory contexts.

Cross-site mentoring for platform knowledge transfer—For organizations with multiple manufacturing sites, cross-site mentoring creates powerful platform knowledge transfer mechanisms. An experienced quality manager from a mature site mentoring quality professionals at a newer site transfers not just procedural knowledge but judgment about what actually matters versus what looks impressive in procedures but doesn’t drive quality outcomes. This prevents newer sites from learning through expensive failures that mature sites have already experienced.

The organizational design challenge is creating infrastructure that enables and incentivizes cross-functional mentorship despite natural siloing tendencies. Mentorship expectations in performance objectives should explicitly include cross-functional components. Recognition programs should highlight cross-functional mentoring impact. Senior leadership communications should emphasize cross-functional mentoring as strategic capability development rather than distraction from functional responsibilities.

Measuring Mentorship: Individual Development and Organizational Capability

The HBR framework recommends measuring outcomes both individually and organizationally, encouraging mentors and mentees to set clear objectives while also connecting individual progress to organizational objectives. This dual measurement approach addresses the falsifiability challenge—ensuring mentorship programs can be tested against claims about impact rather than just demonstrated as existing.

Individual measurement focuses on capability development aligned with career progression and role requirements. For quality professionals, this might include:

Investigation capabilities—Mentees should demonstrate progressive improvement in investigation quality based on defined criteria: clarity of problem statements, thoroughness of data gathering, rigor of causal analysis, effectiveness of CAPA identification. Mentors and mentees should review investigation documentation together, comparing mentee reasoning processes to expert reasoning and identifying specific capability gaps requiring deliberate practice.

Regulatory interpretation judgment—Quality professionals must constantly interpret regulatory requirements in specific operational contexts. Mentorship should develop this judgment through guided practice—mentor and mentee reviewing the same regulatory scenario, mentee articulating their interpretation and rationale, mentor providing feedback on reasoning quality and identifying considerations the mentee missed. Over time, mentee interpretations should converge toward expert quality with less guidance required.

Risk assessment and prioritization—Developing quality professionals often struggle with risk-based thinking, defaulting to treating everything as equally critical. Mentorship should deliberately develop risk intuition through discussion of specific scenarios: “Here are five potential quality issues—how would you prioritize investigation resources?” Mentor feedback explains expert risk reasoning, helping mentee calibrate their own risk assessment against expert judgment.

Technical communication and influence—Quality professionals must communicate complex technical and regulatory concepts to diverse audiences—regulatory agencies, senior management, manufacturing personnel, external auditors. Mentorship develops this capability through observation (mentees attending regulatory meetings led by mentors), practice with feedback (mentees presenting draft communications for mentor review before external distribution), and guided reflection (debriefing presentations and identifying communication approaches that succeeded or failed).
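
The prioritization exercise in the third item above can be made concrete with a simple FMEA-style risk ranking of the kind ICH Q9 describes. The five issues and all scores below are invented for illustration, and a real program would calibrate its scales deliberately:

```python
# FMEA-style risk ranking sketch for a mentoring prioritization exercise.
# Issues and scores (1 = low, 5 = high) are invented for illustration;
# for detectability, a higher score means the issue is harder to detect.

issues = {
    "EM excursion in aseptic suite":    {"severity": 5, "occurrence": 2, "detectability": 3},
    "late logbook entry":               {"severity": 2, "occurrence": 4, "detectability": 1},
    "OOS trend in stability assay":     {"severity": 4, "occurrence": 3, "detectability": 2},
    "supplier CoA discrepancy":         {"severity": 3, "occurrence": 2, "detectability": 2},
    "HVAC pressure differential alarm": {"severity": 4, "occurrence": 2, "detectability": 2},
}

def rpn(scores):
    # Risk Priority Number: severity x occurrence x detectability
    return scores["severity"] * scores["occurrence"] * scores["detectability"]

ranked = sorted(issues, key=lambda name: rpn(issues[name]), reverse=True)
for name in ranked:
    print(f"{rpn(issues[name]):>3}  {name}")
```

In a mentoring conversation, the value isn't the arithmetic. It's the mentor explaining why they scored severity or detectability the way they did, which is precisely the tacit risk reasoning the mentee needs to absorb.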

These individual capabilities should be assessed through demonstrated performance, not self-report satisfaction surveys. The question isn’t whether mentees feel supported or believe they’re developing—it’s whether their actual performance demonstrates capability improvement measurable through work products and outcomes.

Organizational measurement focuses on whether mentorship programs translate to quality system performance improvements:

Investigation quality trending—Organizations should track investigation quality metrics across mentored versus non-mentored populations and over time for individuals receiving mentorship. Quality metrics might include: percentage of investigations identifying credible root causes versus concluding with “human error”; investigation cycle time; CAPA effectiveness (recurrence rates for similar events); and regulatory inspection findings related to investigation quality. If mentorship improves investigation capability, these metrics should show measurable differences.

Regulatory inspection outcomes—Organizations with strong quality mentorship should demonstrate better regulatory inspection outcomes—fewer observations, faster response cycles, more credible CAPA plans. While multiple factors influence inspection outcomes, tracking inspection performance alongside mentorship program maturity provides indication of organizational impact. Particularly valuable is comparing inspection findings between facilities or functions with strong mentorship cultures versus those with weaker mentorship infrastructure within the same organization.

Knowledge retention and transfer—Organizations should measure whether critical quality knowledge transfers successfully during personnel transitions. When experienced quality professionals leave, do their successors demonstrate comparable judgment and capability, or do quality metrics deteriorate until new professionals develop through independent experience? Strong mentorship programs should show smoother transitions with maintained or improved performance rather than capability gaps requiring years to rebuild.

Succession pipeline health—Quality organizations need robust internal pipelines preparing professionals for increasing responsibility. Mentorship programs should demonstrate measurable pipeline development—percentage of senior quality roles filled through internal promotion, time required for promoted professionals to demonstrate full capability in new roles, retention of high-potential quality professionals. Organizations with weak mentorship typically show heavy external hiring for senior roles (internal candidates lack required capabilities), extended learning curves when internal promotions occur, and turnover of high-potential professionals who don’t see clear development pathways.

The measurement framework should be designed for falsifiability—creating testable predictions that could prove mentorship programs ineffective. If an organization invests significantly in quality mentorship programs but sees no measurable improvement in investigation quality, regulatory outcomes, knowledge retention, or succession pipeline health, that’s important information demanding program revision or recognition that mentorship isn’t generating claimed benefits.

Most organizations avoid this level of measurement rigor because they’re not confident in results. Mentorship programs become articles of faith—assumed to be beneficial without empirical testing. This is exactly the kind of unfalsifiable quality system I’ve critiqued throughout this blog. Genuine commitment to quality culture requires honest measurement of whether quality initiatives actually improve quality outcomes.

Work-As-Done in Mentorship: The Implementation Gap

Mentorship-as-imagined involves structured meetings where experienced mentors transfer knowledge to developing mentees through thoughtful discussions aligned with individual development plans. Mentors are skilled at articulating tacit knowledge, mentees are engaged and actively seeking growth, organizations provide adequate time and support, and measurable capability development results.

Mentorship-as-done often looks quite different. Mentors are senior professionals already overwhelmed with operational responsibilities, struggling to find time for scheduled mentorship meetings and unprepared to structure developmental conversations effectively when meetings do occur. They have deep expertise but limited conscious access to their own reasoning processes and even less experience articulating those processes pedagogically. Mentees are equally overwhelmed, viewing mentorship meetings as another calendar obligation rather than developmental opportunity, and uncertain what questions to ask or how to extract valuable knowledge from limited meeting time.

Organizations schedule mentorship programs, create matching processes, provide brief mentor training, then declare victory when participation metrics look acceptable—while actual knowledge transfer remains minimal and capability development indistinguishable from what would have occurred through independent experience.

I’ve observed this implementation gap repeatedly when introducing formal mentorship into quality organizations. The gap emerges from several systematic failures:

Insufficient time allocation—Organizations add mentorship expectations without reducing other responsibilities. A senior investigator told to mentor two junior colleagues while maintaining their previous investigation load simply cannot fulfill both expectations adequately. Mentorship becomes the discretionary activity sacrificed when workload pressures mount—which is always. Genuine mentorship requires genuine time allocation, meaning reduced expectations for other deliverables or additional staffing to maintain throughput.

Lack of mentor development—Expertise as a quality practitioner doesn’t automatically make someone an effective mentor. Mentoring requires different capabilities: articulating tacit reasoning processes, identifying mentee knowledge gaps, structuring developmental experiences, providing constructive feedback, maintaining mentoring relationships through operational pressures. Organizations assume these capabilities exist or develop naturally rather than deliberately developing them through mentor training and mentoring-the-mentors programs.

Mismatch between mentorship structure and knowledge characteristics—Many mentorship programs are structured around scheduled meetings for career discussions. This works for developing professional skills like networking, organizational navigation, and career planning. It doesn’t work well for developing technical judgment that emerges in context. The most valuable mentorship for investigation capability doesn’t happen in scheduled meetings—it happens during actual investigations when mentor and mentee are jointly analyzing data, debating hypotheses, identifying evidence gaps, and reasoning about causation. Organizations need mentorship structures that embed mentoring into operational work rather than treating it as a separate activity.

Inadequate mentor-mentee matching—Generic matching based on availability and organizational hierarchy often creates mismatched pairs where mentor expertise doesn’t align with mentee development needs or where interpersonal dynamics prevent effective knowledge transfer. The HBR article emphasizes that good mentors require objectivity and the ability to make mentees comfortable sharing transparently—qualities undermined when mentors are in direct reporting lines or have conflicts of interest. Quality organizations need thoughtful matching considering expertise alignment, developmental needs, interpersonal compatibility, and organizational positioning.

Absence of accountability and measurement—Without clear accountability for mentorship outcomes and measurement of mentorship effectiveness, programs devolve into activity theater. Mentors and mentees go through motions of scheduled meetings while actual capability development remains minimal. Organizations need specific, measurable expectations for both mentors and mentees, regular assessment of whether those expectations are being met, and consequences when they’re not—just as with any other critical organizational responsibility.

Addressing these implementation gaps requires moving beyond mentorship programs to genuine mentorship culture. Culture means expectations, norms, accountability, and resource allocation aligned with stated priorities. Organizations claiming quality mentorship is a priority while providing no time allocation, no mentor development, no measurement, and no accountability for outcomes aren’t building mentorship culture—they’re building mentorship theater.

Practical Implementation: Building Quality Mentorship Infrastructure

Building authentic quality mentorship culture requires deliberate infrastructure addressing the implementation gaps between mentorship-as-imagined and mentorship-as-done. Based on both the HBR framework and my experience implementing quality mentorship in pharmaceutical manufacturing, several practical elements prove critical:

1. Embed Mentorship in Onboarding and Role Transitions

New-hire onboarding provides a natural mentorship opportunity that most organizations underutilize. Instead of generic orientation training followed by independent learning, structured onboarding should pair new quality professionals with experienced mentors for their first 6-12 months. The mentor guides the new hire through their first investigations, change control reviews, audit preparations, and regulatory interactions—not just explaining procedures but articulating the reasoning and judgment underlying quality decisions.

This onboarding mentorship should include explicit knowledge transfer milestones: understanding of regulatory framework and organizational commitments, capability to conduct routine quality activities independently, judgment to identify when escalation or consultation is appropriate, integration into the quality team and cross-functional relationships. Successful onboarding means the new hire has internalized not just what to do but why, developing a foundation for continued capability growth rather than just procedural compliance.

Role transitions create similar mentorship opportunities. When quality professionals are promoted or move to new responsibilities, assigning mentors experienced in those roles accelerates capability development and reduces failure risk. A newly promoted QA manager benefits enormously from mentorship by an experienced QA director who can guide them through their first regulatory inspection, first serious investigation, first contentious cross-functional negotiation—helping them develop judgment through guided practice rather than expensive independent trial-and-error.

2. Create Operational Mentorship Structures

The most valuable quality mentorship happens during operational work rather than separate from it. Organizations should structure operational processes to enable embedded mentorship:

Investigation mentor-mentee pairing—Complex investigations should be staffed as mentor-mentee pairs rather than individual assignments. The mentee leads the investigation with mentor guidance, developing investigation capabilities through active practice with immediate expert feedback. This provides better developmental experience than either independent investigation (no expert feedback) or observation alone (no active practice).

Audit mentorship—Quality audits provide excellent mentorship opportunities. Experienced auditors should deliberately involve developing auditors in audit planning, conduct, and reporting—explaining risk-based audit strategy, demonstrating interview techniques, articulating how they distinguish significant findings from minor observations, and guiding report writing that balances accuracy with appropriate tone.

Regulatory submission mentorship—Regulatory submissions require judgment about what level of detail satisfies regulatory expectations, how to present data persuasively, and how to address potential deficiencies proactively. Experienced regulatory affairs professionals should mentor developing professionals through their first submissions, providing feedback on draft content and explaining reasoning behind revision recommendations.

Cross-functional meeting mentorship—Quality professionals must regularly engage with cross-functional partners in change control meetings, investigation reviews, management reviews, and strategic planning. Experienced quality leaders should bring developing professionals to these meetings as observers initially, then active participants with debriefing afterward. The debrief addresses what happened, why particular approaches succeeded or failed, what the mentee noticed or missed, and how expert quality professionals navigate cross-functional dynamics effectively.

These operational mentorship structures require deliberate process design. Investigation procedures should explicitly describe mentor-mentee investigation approaches. Audit planning should consider developmental opportunities alongside audit objectives. Meeting attendance should account for mentorship value even when the developing professional’s direct contribution is limited.

3. Develop Mentors Systematically

Effective mentoring requires capabilities beyond subject matter expertise. Organizations should develop mentors through structured programs addressing:

Articulating tacit knowledge—Expert quality professionals often operate on intuition developed through extensive experience—they “just know” when an investigation needs deeper analysis or a regulatory interpretation seems risky. Mentor development should help experts make this tacit knowledge explicit by practicing articulation of their reasoning processes, identifying the cues and patterns driving their intuitions, and developing vocabulary for concepts they previously couldn’t name.

Providing developmental feedback—Mentors need the capability to provide feedback that improves mentee performance without being discouraging or creating defensiveness. This requires distinguishing between feedback on work products (investigation reports, audit findings, regulatory responses) and feedback on the reasoning processes underlying those products. Product feedback alone doesn’t develop capability—mentees need to understand why their reasoning was inadequate and how expert reasoning differs.

Structuring developmental conversations—Effective mentorship conversations follow patterns: asking mentees to articulate their reasoning before providing expert perspective, identifying specific capability gaps rather than global assessments, creating action plans for deliberate practice addressing identified gaps, following up on previous developmental commitments. Mentor development should provide frameworks and practice for conducting these conversations effectively.

Managing mentorship relationships—Mentoring relationships have natural lifecycle challenges—establishing initial rapport, navigating difficult feedback conversations, maintaining connection through operational pressures, transitioning appropriately when mentees outgrow the relationship. Mentor development should address these relationship dynamics, providing guidance on building trust, managing conflict, maintaining boundaries, and recognizing when mentorship should evolve or conclude.

Organizations serious about quality mentorship should invest in systematic mentor development programs, potentially including formal mentor training, mentoring-the-mentors structures where experienced mentors guide newer mentors, and regular mentor communities of practice sharing effective approaches and addressing challenges.

4. Implement Robust Matching Processes

The quality of mentor-mentee matches substantially determines mentorship effectiveness. Poor matches—misaligned expertise, incompatible working styles, problematic organizational dynamics—generate minimal value while consuming significant time. Thoughtful matching requires considering multiple dimensions:

Expertise alignment—Mentee developmental needs should align with mentor expertise and experience. A quality professional needing to develop investigation capabilities benefits most from mentorship by an expert investigator, not a quality systems manager whose expertise centers on procedural compliance and audit management.

Organizational positioning—The HBR framework emphasizes that mentors should be outside mentees’ direct reporting lines to enable objectivity and transparency. In quality contexts, this means avoiding mentor-mentee relationships where the mentor evaluates the mentee’s performance or makes decisions affecting the mentee’s career progression. Cross-functional mentoring, cross-site mentoring, or mentoring across organizational levels (but not direct reporting relationships) provide better positioning.

Working style compatibility—Mentoring requires substantial interpersonal interaction. Mismatches in communication styles, work preferences, or interpersonal approaches create friction that undermines mentorship effectiveness. Matching processes should consider personality assessments, communication preferences, and past relationship patterns alongside technical expertise.

Developmental stage appropriateness—Mentee needs evolve as capability develops. Early-career quality professionals need mentors who excel at foundational skill development and can provide patient, detailed guidance. Mid-career professionals need mentors who can challenge their thinking and push them beyond comfortable patterns. Senior professionals approaching leadership transitions need mentors who can guide strategic thinking and organizational influence.

Mutual commitment—Effective mentoring requires genuine commitment from both mentor and mentee. Forced pairings where participants lack authentic investment generate minimal value. Matching processes should incorporate participant preferences and voluntary commitment alongside organizational needs.

Organizations can improve matching through structured processes: detailed profiles of mentor expertise and mentee developmental needs, algorithms or facilitated matching sessions pairing based on multiple criteria, trial periods allowing either party to request rematch if initial pairing proves ineffective, and regular check-ins assessing relationship health.
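The multi-criteria matching described above can be sketched as a simple weighted-scoring exercise. This is an illustrative sketch, not a prescribed tool: the `Profile` structure, the dimension weights, and the crude working-style comparison are all assumptions for demonstration, and a real program would calibrate them to local priorities and assessments.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Profile:
    name: str
    expertise: Set[str]                      # e.g. {"investigations", "audits"}
    reports_to: str = ""                     # direct manager, used to screen reporting-line pairs
    style: str = "structured"                # placeholder for a working-style assessment
    preferences: Set[str] = field(default_factory=set)  # people this person volunteered to pair with

# Illustrative weights -- a real program would tune these to organizational priorities.
WEIGHTS = {"expertise": 3.0, "style": 1.0, "preference": 2.0}

def match_score(mentor: Profile, mentee: Profile, mentee_needs: Set[str]) -> Optional[float]:
    """Score a candidate pairing; None means the pairing is disqualified outright."""
    # Hard constraint: the mentor must sit outside the mentee's direct reporting line.
    if mentee.reports_to == mentor.name or mentor.reports_to == mentee.name:
        return None
    score = 0.0
    # Expertise alignment: what fraction of the developmental needs does the mentor cover?
    if mentee_needs:
        score += WEIGHTS["expertise"] * len(mentor.expertise & mentee_needs) / len(mentee_needs)
    # Working-style compatibility (a crude binary stand-in for a real assessment).
    if mentor.style == mentee.style:
        score += WEIGHTS["style"]
    # Mutual, voluntary interest counts; one-sided interest does not.
    if mentor.name in mentee.preferences and mentee.name in mentor.preferences:
        score += WEIGHTS["preference"]
    return score
```

Candidate pairs could then be ranked by score within a facilitated matching session, with the trial period and rematch option preserving the voluntary-commitment principle.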

5. Create Accountability Through Measurement and Recognition

What gets measured and recognized signals organizational priorities. Quality mentorship cultures require measurement systems and recognition programs that make mentorship impact visible and valued:

Individual accountability—Mentors and mentees should have explicit mentorship expectations in performance objectives with assessment during performance reviews. For mentors: capability development demonstrated by mentees, quality of mentorship relationship, time invested in developmental activities. For mentees: active engagement in mentorship relationship, evidence of capability improvement, application of mentored knowledge in operational performance.

Organizational metrics—Quality leadership should track mentorship program health and impact: participation rates (while noting that universal participation is the goal, not a special achievement), mentee capability development measured through work quality metrics, succession pipeline strength, knowledge retention during transitions, and ultimately quality system performance improvements associated with enhanced organizational capability.

Recognition programs—Organizations should visibly recognize effective mentoring through awards, leadership communications, and career progression. Mentoring excellence should be weighted comparably to technical excellence and operational performance in promotion decisions. When senior quality professionals are recognized primarily for investigation output or audit completion but not for developing the next generation of quality professionals, the implicit message is that knowledge transfer doesn’t matter despite explicit statements about mentorship importance.

Integration into quality metrics—Quality system performance metrics should include indicators of mentorship effectiveness: investigation quality trends for recently mentored professionals, successful internal promotions, retention of high-potential talent, knowledge transfer completeness during personnel transitions. These metrics should appear in quality management reviews alongside traditional quality metrics, demonstrating that organizational capability development is a quality system element comparable to deviation management or CAPA effectiveness.
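One way to make such measurement falsifiable: pick a concrete work-quality indicator and require it to move. The sketch below uses a hypothetical indicator, the fraction of a mentee's investigation reports returned for major rework, compared before and after a mentorship period; the indicator and the improvement threshold are illustrative assumptions, not a prescribed metric.

```python
def rework_rate(outcomes):
    """Fraction of reports returned for major rework.

    outcomes is a time-ordered list of booleans (True = report needed major rework).
    """
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def capability_shift(before, after, min_improvement=0.10):
    """Compare rework rates before and after a mentorship period.

    Returns (delta, improved) where delta is the drop in rework rate.
    A program whose mentees rarely show a delta like this is measuring
    activity, not capability development.
    """
    delta = rework_rate(before) - rework_rate(after)
    return delta, delta >= min_improvement
```

The point is not this particular statistic but the structure: a metric that could come back negative and demonstrate program ineffectiveness.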

This measurement and recognition infrastructure prevents mentorship from becoming another compliance checkbox—organizations can demonstrate through data whether mentorship programs generate genuine capability development and quality improvement or represent mentorship theater disconnected from outcomes.

The Strategic Argument: Mentorship as Quality Risk Mitigation

Quality leaders facing resource constraints and competing priorities require clear strategic rationale for investing in mentorship infrastructure. The argument shouldn’t rest on abstract benefits like “employee development” or “organizational culture”—though these matter. The compelling argument positions mentorship as critical quality risk mitigation addressing specific vulnerabilities in pharmaceutical quality systems.

Knowledge Retention Risk

Pharmaceutical quality organizations face acute knowledge retention risk as experienced professionals retire or leave. Consider the quality director who remembers why specific procedural requirements exist, which regulatory commitments drive particular practices, and how historical failures inform current risk assessments. When that person leaves without deliberate knowledge transfer, the organization loses institutional memory critical for regulatory compliance and quality decision-making.

This knowledge loss creates specific, measurable risks: repeating historical failures because current professionals don’t understand why particular controls exist, inadvertently violating regulatory commitments because knowledge of those commitments wasn’t transferred, implementing changes that create quality issues experienced professionals would have anticipated. These aren’t hypothetical risks—I’ve investigated multiple serious quality events that occurred specifically because institutional knowledge wasn’t transferred during personnel transitions.

Mentorship directly mitigates this risk by creating systematic knowledge transfer mechanisms. When experienced professionals mentor their likely successors, critical knowledge transfers explicitly before transition rather than disappearing at departure. The cost of mentorship infrastructure should be evaluated against the cost of knowledge loss—investigation costs, regulatory response costs, potential product quality impact, and organizational capability degradation.

Investigation Capability Risk

Investigation quality directly impacts regulatory compliance, patient safety, and operational efficiency. Poor investigations fail to identify true root causes, leading to ineffective CAPAs and event recurrence. Poor investigations generate regulatory findings requiring expensive remediation. Poor investigations consume excessive time without generating valuable knowledge to prevent recurrence.

Organizations relying on independent experience to develop investigation capabilities accept years of suboptimal investigation quality while professionals learn through trial and error. During this learning period, investigations are more likely to miss critical causal factors, identify superficial rather than genuine root causes, and propose CAPAs addressing symptoms rather than causes.

Mentorship accelerates investigation capability development by providing expert feedback during active investigations rather than after completion. Instead of learning that an investigation was inadequate when it receives critical feedback during regulatory inspection or management review, mentored investigators receive that feedback during investigation conduct when it can improve the current investigation rather than just inform future attempts.

Regulatory Relationship Risk

Regulatory relationships—with FDA, EMA, and other authorities—represent critical organizational assets requiring years to build and moments to damage. These relationships depend partly on demonstrated technical competence but substantially on regulatory agencies’ confidence in organizational quality judgment and integrity.

Junior quality professionals without mentorship often struggle during regulatory interactions, providing responses that are technically accurate but strategically unwise, failing to understand inspector concerns underlying specific questions, or presenting information in ways that create rather than resolve regulatory concerns. These missteps damage regulatory relationships and can trigger expanded inspection scope or regulatory actions.

Mentorship develops regulatory interaction capabilities before professionals face high-stakes regulatory situations independently. Mentored professionals observe how experienced quality leaders navigate inspector questions, understand regulatory concerns, and present information persuasively. They receive feedback on draft regulatory responses before submission. They learn to distinguish situations requiring immediate escalation versus independent handling.

Organizations should evaluate mentorship investment against regulatory risk—potential costs of warning letters, consent decrees, import alerts, or manufacturing restrictions that can result from poor regulatory relationships exacerbated by inadequate quality professional development.

Succession Planning Risk

Quality organizations need robust internal succession pipelines to ensure continuity during planned and unplanned leadership transitions. External hiring for senior quality roles creates risks: extended learning curves while new leaders develop organizational and operational knowledge, potential cultural misalignment, and expensive recruiting and retention costs.

Yet many pharmaceutical quality organizations struggle to develop internal candidates ready for senior leadership roles. They promote based on technical excellence without developing strategic thinking, organizational influence, and leadership capabilities required for senior positions. The promoted professionals then struggle, creating performance gaps and succession planning failures.

Mentorship directly addresses succession pipeline risk by deliberately developing capabilities required for advancement before promotion rather than hoping they emerge after promotion. Quality professionals mentored in strategic thinking, cross-functional influence, and organizational leadership become viable internal succession candidates—reducing dependence on external hiring, accelerating leadership transition effectiveness, and retaining high-potential talent who see clear development pathways.

These strategic arguments position mentorship not as employee development benefit but as essential quality infrastructure comparable to laboratory equipment, quality systems software, or regulatory intelligence capabilities. Organizations invest in these capabilities because their absence creates unacceptable quality and business risk. Mentorship deserves comparable investment justification.

From Compliance Theater to Genuine Capability Development

Pharmaceutical quality culture doesn’t emerge from impressive procedure libraries, extensive training catalogs, or sophisticated quality metrics systems. These matter, but they’re insufficient. Quality culture emerges when quality judgment becomes distributed throughout the organization—when professionals at all levels understand not just what procedures require but why, not just how to detect quality failures but how to prevent them, not just how to document compliance but how to create genuine quality outcomes for patients.

That distributed judgment requires knowledge transfer that classroom training and procedure review cannot provide. It requires mentorship—deliberate, structured, measured transfer of expert quality reasoning from experienced professionals to developing ones.

Most pharmaceutical organizations claim mentorship commitment while providing no genuine infrastructure supporting effective mentorship. They announce mentoring programs without adjusting workload expectations to create time for mentoring. They match mentors and mentees based on availability rather than thoughtful consideration of expertise alignment and developmental needs. They measure participation and satisfaction rather than capability development and quality outcomes. They recognize technical achievement while ignoring knowledge transfer contribution to organizational capability.

This is mentorship theater—the appearance of commitment without genuine resource allocation or accountability. Like other forms of compliance theater that Sidney Dekker critiques, mentorship theater satisfies surface expectations while failing to deliver claimed benefits. Organizations can demonstrate mentoring program existence to leadership and regulators while actual knowledge transfer remains minimal and quality capability development indistinguishable from what would occur without any mentorship program.

Building genuine mentorship culture requires confronting this gap between mentorship-as-imagined and mentorship-as-done. It requires honest acknowledgment that effective mentorship demands time, capability, infrastructure, and accountability that most organizations haven’t provided. It requires shifting mentorship from peripheral benefit to core quality infrastructure with resource allocation and measurement commensurate to strategic importance.

The HBR framework provides actionable structure for this shift: broaden mentorship access from select high-potentials to an organizational default, embed mentorship into performance management and operational processes rather than treating it as a separate initiative, implement cross-functional mentorship breaking down organizational silos, measure mentorship outcomes both individually and organizationally with falsifiable metrics that could demonstrate program ineffectiveness.

For pharmaceutical quality organizations specifically, mentorship culture addresses critical vulnerabilities: knowledge retention during personnel transitions, investigation capability development affecting regulatory compliance and patient safety, regulatory relationship quality depending on quality professional judgment, and succession pipeline strength determining organizational resilience.

The organizations that build genuine mentorship cultures—with infrastructure, accountability, and measurement demonstrating authentic commitment—will develop quality capabilities that organizations relying on procedure compliance and classroom training cannot match. They’ll conduct better investigations, build stronger regulatory relationships, retain critical knowledge through transitions, and develop quality leaders internally rather than depending on expensive external hiring.

Most importantly, they’ll create quality systems characterized by genuine capability rather than compliance theater—systems that can honestly claim to protect patients because they’ve developed the distributed quality judgment required to identify and address quality risks before they become quality failures.

That’s the quality culture we need. Mentorship is how we build it.

The Hidden Contamination Hazards: What the Catalent Warning Letter Reveals About Systemic Aseptic Processing Failures

The November 2025 FDA Warning Letter to Catalent Indiana, LLC reads like an autopsy report—a detailed dissection of how contamination hazards aren’t discovered but rather engineered into aseptic operations through a constellation of decisions that individually appear defensible yet collectively create what I’ve previously termed the “zemblanity field” in pharmaceutical quality. Section 2, addressing failures under 21 CFR 211.113(b), exposes contamination hazards that didn’t emerge from random misfortune but from deliberate choices about decontamination strategies, sampling methodologies, intervention protocols, and investigation rigor.

What makes this warning letter particularly instructive isn’t the presence of contamination events—every aseptic facility battles microbial ingress—but rather the systematic architectural failures that allowed contamination hazards to persist unrecognized, uninvestigated, and unmitigated despite multiple warning signals spanning more than 20 deviations and customer complaints. The FDA’s critique centers on three interconnected contamination hazard categories: VHP decontamination failures involving occluded surfaces, inadequate environmental monitoring methods that substituted convenience for detection capability, and intervention risk assessments that ignored documented contamination routes.

For those of us responsible for contamination control in aseptic manufacturing, this warning letter demands we ask uncomfortable questions: How many of our VHP cycles are validated against surfaces that remain functionally occluded? How often have we chosen contact plates over swabs because they’re faster, not because they’re more effective? When was the last time we terminated a media fill and treated it with the investigative rigor of a batch contamination event?

The Occluded Surface Problem: When Decontamination Becomes Theater

The FDA’s identification of occluded surfaces as contamination sources during VHP decontamination represents a failure mode I’ve observed with troubling frequency across aseptic facilities. The fundamental physics are unambiguous: vaporized hydrogen peroxide achieves sporicidal efficacy through direct surface contact at validated concentration-time profiles. Any surface the vapor doesn’t contact—or contacts at insufficient concentration—remains a potential contamination reservoir regardless of cycle completion indicators showing “successful” decontamination.

The Catalent situation involved two distinct occluded surface scenarios, each revealing a different architectural failure in contamination hazard assessment. In the first, equipment surfaces occluded during VHP decontamination subsequently became contamination sources during atypical interventions involving equipment changes. The FDA noted that “the most probable root cause” of an environmental monitoring failure was equipment surfaces occluded during VHP decontamination, with contamination occurring during execution of an atypical intervention involving changes to components integral to stopper seating.

This finding exposes a conceptual error I frequently encounter: treating VHP decontamination as a universal solution that overcomes design deficiencies rather than as a validated process with specific performance boundaries. The Catalent facility’s own risk assessments advised against interventions that could disturb potentially occluded surfaces, yet these interventions continued—creating the precise contamination pathway their risk assessments identified as unacceptable.

The second occluded surface scenario involved wrapped components within the filling line where insufficient VHP exposure allowed potential contamination. The FDA cited “occluded surfaces on wrapped [components] within the [equipment] as the potential cause of contamination”. This represents a validation failure: if wrapping materials prevent adequate VHP penetration, either the wrapping must be eliminated, the decontamination method must change, or these surfaces must be treated through alternative validated processes.

The literature on VHP decontamination is explicit about occluded surface risks. As Sandle notes, surfaces must be “designed and installed so that operations, maintenance, and repairs can be performed outside the cleanroom” and where unavoidable, “all surfaces needing decontaminated” must be explicitly identified. The PIC/S guidance is similarly unambiguous: “Continuously occluded surfaces do not qualify for such trials as they cannot be exposed to the process and should have been eliminated”. Yet facilities continue to validate VHP cycles that demonstrate biological indicator kill on readily accessible flat coupons while ignoring the complex geometries, wrapped items, and recessed surfaces actually present in their filling environments.

What does a robust approach to occluded surface assessment look like? Based on the regulatory expectations and technical literature, facilities should:

Conduct comprehensive occluded surface mapping during design qualification. Every component introduced into VHP-decontaminated spaces must undergo geometric analysis to identify surfaces that may not receive adequate vapor exposure. This includes crevices, threaded connections, wrapped items, hollow spaces, and any surface shadowed by another object. The mapping should document not just that surfaces exist but their accessibility to vapor flow based on the specific VHP distribution characteristics of the equipment.

Validate VHP distribution using chemical and biological indicators placed on identified occluded surfaces. Flat coupon placement on readily accessible horizontal surfaces tells you nothing about vapor penetration into wrapped components or recessed geometries. Biological indicators should be positioned specifically where vapor exposure is questionable—inside wrapped items, within threaded connections, under equipment flanges, in dead-legs of transfer lines. If biological indicators in these locations don’t achieve the validated log reduction, the surfaces are occluded and require design modification or alternative decontamination methods.
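The log-reduction criterion itself is simple arithmetic: a biological indicator seeded with N0 spores that yields Nf survivors demonstrates log10(N0/Nf) reduction. A minimal sketch, assuming a 6-log acceptance criterion (a common sporicidal target, but confirm the criterion in your own validation protocol) and reporting total kill against the assay's limit of detection:

```python
import math

def log_reduction(n0: float, nf: float) -> float:
    """log10 reduction demonstrated by a biological indicator.

    n0: starting spore population (e.g. 1e6 CFU)
    nf: surviving population; for total kill, use the assay's limit of
        detection rather than zero.
    """
    if nf <= 0:
        raise ValueError("report total kill against the limit of detection, not zero")
    return math.log10(n0 / nf)

def bi_passes(n0: float, nf: float, required_log_reduction: float = 6.0) -> bool:
    """True if the BI at this placement met the required log reduction."""
    return log_reduction(n0, nf) >= required_log_reduction
```

A BI placed inside a wrapped component that starts at 10^6 spores but recovers 10^2 survivors shows only a 4-log reduction; under a 6-log criterion, that surface is occluded and needs design modification or an alternative decontamination method.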

Establish clear intervention protocols that distinguish between “sterile-to-sterile” and “potentially contaminated” surface contact. The Catalent finding reveals that atypical interventions involving equipment changes exposed the Grade A environment to surfaces not reliably exposed to VHP. Intervention risk assessments must explicitly categorize whether the intervention involves only VHP-validated surfaces or introduces components from potentially occluded areas. The latter category demands heightened controls: localized Grade A air protection, pre-intervention surface swabbing and disinfection, real-time environmental monitoring during the intervention, and post-intervention investigation if environmental monitoring shows any deviation.

Implement post-decontamination surface monitoring that targets historically occluded locations. If your facility has identified occluded surfaces that cannot be designed out, these become critical sampling locations for post-VHP environmental monitoring. Trending of these specific locations provides early detection of decontamination effectiveness degradation before contamination reaches product-contact surfaces.

The FDA’s remediation demand is appropriately comprehensive: “a review of VHP exposure to decontamination methods as well as permitted interventions, including a retrospective historical review of routine interventions and atypical interventions to determine their risks, a comprehensive identification of locations that are not reliably exposed to VHP decontamination (i.e., occluded surfaces), your plan to reduce occluded surfaces where feasible, review of currently permitted interventions and elimination of high-risk interventions entailing equipment manipulations during production campaigns that expose the ISO 5 environment to surfaces not exposed to a validated decontamination process, and redesign of any intervention that poses an unacceptable contamination risk”.​

This remediation framework represents best practice for any aseptic facility using VHP decontamination. The occluded surface problem isn’t limited to Catalent—it’s an industry-wide vulnerability wherever VHP validation focuses on demonstrating sporicidal activity under ideal conditions rather than confirming adequate vapor contact across all surfaces within the validated space.

Contact Plates Versus Swabs: The Detection Capability Trade-Off

The FDA’s critique of Catalent’s environmental monitoring methodology exposes a decision I’ve challenged repeatedly throughout my career: the use of contact plates for sampling irregular, product-contact surfaces in Grade A environments. The technical limitations are well-established, yet contact plates persist because they’re faster and operationally simpler—prioritizing workflow convenience over contamination detection capability.

The specific Catalent deficiency involved sampling filling line components using “contact plate, sampling [surfaces] with one sweeping sampling motion.” The FDA identified two fundamental inadequacies: “With this method, you are unable to attribute contamination events to specific [locations]” and “your firm’s use of contact plates is not as effective as using swab methods”. These limitations aren’t novel discoveries—they’re inherent to contact plate methodology and have been documented in the microbiological literature for decades.​

Contact plates—rigid agar surfaces pressed against the area to be sampled—were designed for flat, smooth surfaces where complete agar-to-surface contact can be achieved with uniform pressure. They perform adequately on stainless steel benchtops, isolator walls, and other horizontal surfaces. But filling line components—particularly those identified in the warning letter—present complex geometries: curved surfaces, corners, recesses, and irregular topographies where rigid agar cannot conform to achieve complete surface contact.

The microbial recovery implications are significant. When a contact plate fails to achieve complete surface contact, microorganisms in uncontacted areas remain unsampled. The result is a false-negative environmental monitoring reading that suggests contamination control while actual contamination persists undetected. Worse, the “sweeping sampling motion” described in the warning letter—moving a single contact plate across multiple locations—creates the additional problem the FDA identified: inability to attribute any recovered contamination to a specific surface. Was the contamination on the first component contacted? The third? Somewhere in between? This sampling approach provides data too imprecise for meaningful contamination source investigation.

The alternative—swab sampling—addresses both deficiencies. Swabs conform to irregular surfaces, accessing corners, recesses, and curved topographies that contact plates cannot reach. Swabs can be applied to specific, discrete locations, enabling precise attribution of any contamination recovered to a particular surface. The trade-off is operational: swab sampling requires more time, involves additional manipulative steps within Grade A environments, and demands different operator technique validation.​

Yet the Catalent warning letter makes clear that this operational inconvenience doesn’t justify compromised detection capability for critical product-contact surfaces. The FDA’s expectation—acknowledged in Catalent’s response—is swab sampling “to replace use of contact plates to sample irregular surfaces”. This represents a fundamental shift from convenience-optimized to detection-optimized environmental monitoring.​

What should a risk-based surface sampling strategy look like? The differentiation should be based on surface geometry and criticality:

Contact plates remain appropriate for flat, smooth, readily accessible surfaces where complete agar contact can be verified and where contamination risk is lower (Grade B floors, isolator walls, equipment external surfaces). The speed and simplicity advantages of contact plates justify their continued use in these applications.

Swab sampling should be mandatory for product-contact surfaces, irregular geometries, recessed areas, and any location where contact plate conformity is questionable. This includes filling needles, stopper bowls, vial transport mechanisms, crimping heads, and the specific equipment components cited in the Catalent letter. The additional time required for swab sampling is trivial compared to the contamination risk from inadequate monitoring.

Surface sampling protocols must specify the exact location sampled, not general equipment categories. Rather than “sample stopper bowl,” protocols should identify “internal rim of stopper bowl,” “external base of stopper bowl,” “stopper agitation mechanism interior surfaces.” This specificity enables contamination source attribution during investigations and ensures sampling actually reaches the highest-risk surfaces.

Swab technique must be validated to ensure consistent recovery from target surfaces. Simply switching from contact plates to swabs doesn’t guarantee improved detection unless swab technique—pressure applied, surface area contacted, swab saturation, transfer to growth media—is standardized and demonstrated to achieve adequate microbial recovery from the specific materials and geometries being sampled.​
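Recovery validation reduces to a small calculation over inoculated-coupon runs. The sketch below uses invented run data and an assumed 50% minimum mean recovery as the acceptance gate; neither figure is a compendial requirement.

```python
def recovery_efficiency(cfu_recovered: int, cfu_inoculated: int) -> float:
    """Fraction of a known surface inoculum recovered by the swab technique."""
    return cfu_recovered / cfu_inoculated

# Hypothetical technique-validation runs: coupons of the actual surface
# material inoculated at a known level, then swabbed per the draft procedure.
runs = [(52, 100), (61, 100), (47, 100)]
mean_recovery = sum(recovery_efficiency(r, i) for r, i in runs) / len(runs)

# An assumed minimum mean recovery of 50% gates method acceptance; the
# demonstrated efficiency can then serve as a correction factor for counts.
acceptable = mean_recovery >= 0.50
```

The point of formalizing this is the one the paragraph makes: switching to swabs without a demonstrated recovery efficiency simply trades one unvalidated method for another.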

The EU GMP Annex 1 and FDA guidance documents emphasize detection capability over convenience in environmental monitoring. The expectation isn’t perfect contamination prevention—that’s impossible in aseptic processing—but rather monitoring systems sensitive enough to detect contamination events when they occur, enabling investigation and corrective action before product impact. Contact plates on irregular surfaces fail this standard by design, not because of operator error or inadequate validation but because the fundamental methodology cannot access the surfaces requiring monitoring.​

The Intervention Paradox: When Risk Assessments Identify Hazards But Operations Ignore Them

Perhaps the most troubling element of the Catalent contamination hazards section isn’t the presence of occluded surfaces or inadequate sampling methods but rather the intervention management failure that reveals a disconnect between risk assessment and operational decision-making. Catalent’s risk assessments explicitly “advised against interventions that can disturb potentially occluded surfaces,” yet these high-risk interventions continued during production campaigns.​

This represents what I’ve termed “investigation theatre” in previous posts—creating the superficial appearance of risk-based decision-making while actual operations proceed according to production convenience rather than contamination risk mitigation. The risk assessment identified the hazard. The environmental monitoring data confirmed the hazard when contamination occurred during the intervention. Yet the intervention continued as an accepted operational practice.​

The specific intervention involved equipment changes to components “integral to stopper seating in the [filling line]”. These components operate at the critical interface between the sterile stopper and the vial—precisely the location where any contamination poses direct product impact risk. The intervention occurred during production campaigns rather than between campaigns when comprehensive decontamination and validation could occur. The intervention involved surfaces potentially occluded during VHP decontamination, meaning their microbiological state was unknown when introduced into the Grade A filling environment.​

Every element of this scenario screams “unacceptable contamination risk,” yet it persisted as accepted practice until FDA inspection. How does this happen? Based on my experience across multiple aseptic facilities, the failure mode follows a predictable pattern:

Production scheduling drives intervention timing rather than contamination risk assessment. Stopping a campaign for equipment maintenance creates schedule disruption, yield loss, and capacity constraints. The pressure to maintain campaign continuity overwhelms contamination risk considerations that appear theoretical compared to the immediate, quantifiable production impact.

Risk assessments become compliance artifacts disconnected from operational decision-making. The quality unit conducts a risk assessment, documents that certain interventions pose unacceptable contamination risk, and files the assessment. But when production encounters the situation requiring that intervention, the actual decision-making process references production need, equipment availability, and batch schedules—not the risk assessment that identified the intervention as high-risk.

Interventions become “normalized deviance”—accepted operational practices despite documented risks. After performing a high-risk intervention successfully (meaning without detected contamination) multiple times, it transitions from “high-risk intervention requiring exceptional controls” to “routine intervention” in operational thinking. The fact that adequate controls prevented contamination detection gets inverted into evidence that the intervention isn’t actually high-risk.

Environmental monitoring provides false assurance when contamination goes undetected. If a high-risk intervention occurs and subsequent environmental monitoring shows no contamination, operators interpret this as validation that the intervention is acceptable. But as discussed in the contact plate section, inadequate sampling methodology may fail to detect contamination that actually occurred. The absence of detected contamination becomes "proof" that contamination didn't occur, reinforcing the normalization of high-risk interventions.

The EU GMP Annex 1 requirements for intervention management represent regulatory recognition of these failure modes. Annex 1 Section 8.16 requires “the list of interventions evaluated via risk analysis” and Section 9.36 requires that aseptic process simulations include “interventions and associated risks”. The framework is explicit: identify interventions, assess their contamination risk, validate that operators can perform them aseptically through media fills, and eliminate interventions that cannot be performed without unacceptable contamination risk.​

What does robust intervention risk management look like in practice?

Categorize interventions by contamination risk based on specific, documented criteria. The categorization should consider: surfaces contacted (sterile-to-sterile vs. potentially contaminated), duration of exposure, proximity to open product, operator actions required, first air protection feasibility, and frequency. This creates a risk hierarchy that enables differentiated control strategies rather than treating all interventions equivalently.​

Establish clear decision authorities for different intervention risk levels. Routine interventions (low contamination risk, validated through media fills, performed regularly) can proceed under operator judgment following standard procedures. High-risk interventions (those involving occluded surfaces, extended exposure, or proximity to open product) should require quality unit pre-approval including documented risk assessment and enhanced controls specification. Interventions identified as posing unacceptable risk should be prohibited until equipment redesign or process modification eliminates the contamination hazard.​
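The three-tier authority model can be made explicit in code so that routing is deterministic rather than negotiable at the line. A sketch with invented criteria and intervention names; a real classification would also weigh duration, frequency, and first-air protection:

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    contacts_occluded_surface: bool
    near_open_product: bool
    media_fill_validated: bool

def decision_authority(iv: Intervention) -> str:
    """Map intervention attributes to an approval route (illustrative only)."""
    if iv.contacts_occluded_surface:
        return "prohibited pending redesign"
    if iv.near_open_product or not iv.media_fill_validated:
        return "quality unit pre-approval"
    return "operator judgment per SOP"

routine = Intervention("adjust vial guide rail (validated)", False, False, True)
high_risk = Intervention("mid-campaign change-part swap", True, True, False)
```

Encoding the routing this way closes the gap the Catalent letter exposes: an intervention a risk assessment classifies as unacceptable cannot silently proceed under operator judgment.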

Validate intervention execution through media fills that specifically simulate the intervention’s contamination challenges. Generic media fills demonstrating overall aseptic processing capability don’t validate specific high-risk interventions. If your risk assessment identifies a particular intervention as posing contamination risk, your media fill program must include that intervention, performed by the operators who will execute it, under the conditions (campaign timing, equipment state, environmental conditions) where it will actually occur.​

Implement intervention-specific environmental monitoring that targets the contamination pathways identified in risk assessments. If the risk assessment identifies that an intervention may expose product to surfaces not reliably decontaminated, environmental monitoring immediately following that intervention should specifically sample those surfaces and adjacent areas. Trending this intervention-specific monitoring data separately from routine environmental monitoring enables detection of intervention-associated contamination patterns.​

Conduct post-intervention investigations when environmental monitoring shows any deviation. The Catalent warning letter describes an environmental monitoring failure whose “most probable root cause” was an atypical intervention involving equipment changes. This temporal association between intervention and contamination should trigger automatic investigation even if environmental monitoring results remain within action levels. The investigation should assess whether intervention protocols require modification or whether the intervention should be eliminated.​

The FDA’s remediation demand addresses this gap directly: “review of currently permitted interventions and elimination of high-risk interventions entailing equipment manipulations during production campaigns that expose the ISO 5 environment to surfaces not exposed to a validated decontamination process”. This requirement forces facilities to confront the intervention paradox: if your risk assessment identifies an intervention as high-risk, you cannot simultaneously permit it as routine operational practice. Either modify the intervention to reduce risk, validate enhanced controls that mitigate the risk, or eliminate the intervention entirely.​

Media Fill Terminations: When Failures Become Invisible

The Catalent warning letter’s discussion of media fill terminations exposes an investigation failure mode that reveals deeper quality system inadequacies. Since November 2023, Catalent terminated more than five media fill batches representing the filling line. Following two terminations for stoppering issues and extrinsic particle contamination, the facility “failed to open a deviation or an investigation at the time of each failure, as required by your SOPs”.​

Read that again. Media fills—the fundamental aseptic processing validation tool, the simulation specifically designed to challenge contamination control—were terminated due to failures, and no deviation was opened, no investigation initiated. The failures simply disappeared from the quality system, becoming invisible until FDA inspection revealed their existence.

The rationalization is predictable: “there was no impact to the SISPQ (Safety, Identity, Strength, Purity, Quality) of the terminated media batches or to any customer batches” because “these media fills were re-executed successfully with passing results”. This reasoning exposes a fundamental misunderstanding of media fill purpose that I’ve encountered with troubling frequency across the industry.​

A media fill is not a “test” that you pass or fail with product consequences. It is a simulation—a deliberate challenge to your aseptic processing capability using growth medium instead of product specifically to identify contamination risks without product impact. When a media fill is terminated due to a processing failure, that termination is itself the critical finding. The termination reveals that your process is vulnerable to exactly the failure mode that caused termination: stoppering problems that could occur during commercial filling, extrinsic particles that could contaminate product.

The FDA’s response is appropriately uncompromising: “You do not provide the investigations with a root cause that justifies aborting and re-executing the media fills, nor do you provide the corrective actions taken for each terminated media fill to ensure effective CAPAs were promptly initiated”. The regulatory expectation is clear: media fill terminations require investigation identical in rigor to commercial batch failures. Why did the stoppering issue occur? What equipment, material, or operator factors contributed? How do we prevent recurrence? What commercial batches may have experienced similar failures that went undetected?​

The re-execution logic is particularly insidious. By immediately re-running the media fill and achieving passing results, Catalent created the appearance of successful validation while ignoring the process vulnerability revealed by the termination. The successful re-execution proved only that under ideal conditions—now with heightened operator awareness following the initial failure—the process could be executed successfully. It provided no assurance that commercial operations, without that heightened awareness and under the same conditions that caused the initial termination, wouldn’t experience identical failures.

What should media fill termination management look like?

Treat every media fill termination as a critical deviation requiring immediate investigation initiation. The investigation should identify the root cause of the termination, assess whether the failure mode could occur during commercial manufacturing, evaluate whether previous commercial batches may have experienced similar failures, and establish corrective actions that prevent recurrence. This investigation must occur before re-execution, not in place of it.

Require quality unit approval before media fill re-execution. The approval should be based on documented investigation findings demonstrating that the termination cause is understood, corrective actions are implemented, and re-execution will validate process capability under conditions that include the corrective actions. Re-execution without investigation approval perpetuates the "keep running until we get a pass" mentality that defeats the purpose of media fills.

Implement media fill termination trending as a critical quality indicator. A facility terminating "more than five media fill batches" in under two years should recognize this as a signal of fundamental process capability problems, not as a series of unrelated events requiring re-execution. Trending should identify common factors: specific operators, equipment states, intervention types, campaign timing.
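Termination trending is a small aggregation exercise. The log entries and the escalation threshold below are invented for illustration; the point is that recurring attributed factors surface automatically instead of each termination being treated as a one-off:

```python
from datetime import date

# Hypothetical termination log: (date, attributed factor).
terminations = [
    (date(2023, 11, 14), "stoppering"),
    (date(2024, 2, 2),   "extrinsic particle"),
    (date(2024, 5, 20),  "stoppering"),
    (date(2024, 9, 9),   "stoppering"),
    (date(2025, 1, 30),  "extrinsic particle"),
    (date(2025, 4, 12),  "stoppering"),
]

def common_factors(log, min_count=3):
    """Factors recurring across terminations: a trend, not isolated events."""
    counts = {}
    for _, factor in log:
        counts[factor] = counts.get(factor, 0) + 1
    return {f: n for f, n in counts.items() if n >= min_count}

escalate = len(terminations) > 5       # assumed escalation threshold
trends = common_factors(terminations)  # {"stoppering": 4} for this log
```

A recurring "stoppering" signature across four terminations is precisely the kind of process capability signal that re-execution-without-investigation erases.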

Ensure deviation tracking systems cannot exclude media fill terminations. The Catalent situation arose partly because “you failed to initiate a deviation record to capture the lack of an investigation for each of the terminated media fills, resulting in an undercounting of the deviations”. Quality metrics that exclude media fill terminations from deviation totals create perverse incentives to avoid formal deviation documentation, rendering media fill findings invisible to quality system oversight.​

The broader issue extends beyond media fill terminations to how aseptic processing validation integrates with quality systems. Media fills should function as early warning indicators—detecting aseptic processing vulnerabilities before product impact occurs. But this detection value requires that findings from media fills drive investigations, corrective actions, and process improvements with the same rigor as commercial batch deviations. When media fill failures can be erased through re-execution without investigation, the entire validation framework becomes performative rather than protective.

The Stopper Supplier Qualification Failure: Accepting Contamination at the Source

The stopper contamination issues discussed throughout the warning letter—mammalian hair found in or around stopper regions of vials from nearly 20 batches across multiple products—reveal a supplier qualification and incoming inspection failure that compounds the contamination hazards already discussed. The FDA’s critique focuses on Catalent’s “inappropriate reliance on pre-shipment samples (tailgate samples)” and failure to implement “enhanced or comparative sampling of stoppers from your other suppliers”.​

The pre-shipment or “tailgate” sample approach represents a fundamental violation of GMP sampling principles. Under this approach, the stopper supplier—not Catalent—collected samples from lots prior to shipment and sent these samples directly to Catalent for quality testing. Catalent then made accept/reject decisions for incoming stopper lots based on testing of supplier-selected samples that never passed through Catalent’s receiving or storage processes.​

Why does this matter? Because representative sampling requires that samples be selected from the material population actually received by the facility, stored under facility conditions, and handled through facility processes. Supplier-selected pre-shipment samples bypass every opportunity to detect contamination introduced during shipping, storage transitions, or handling. They enable a supplier to selectively sample from cleaner portions of production lots while shipping potentially contaminated material in the same lot to the customer.

The FDA guidance on this issue is explicit and has been for decades: samples for quality attribute testing “are to be taken at your facility from containers after receipt to ensure they are representative of the components in question”. This isn’t a new expectation emerging from enhanced regulatory scrutiny—it’s a baseline GMP requirement that Catalent systematically violated through reliance on tailgate samples.​

But the tailgate sample issue represents only one element of broader supplier qualification failures. The warning letter notes that “while stoppers from [one supplier] were the primary source of extrinsic particles, they were not the only source of foreign matter.” Yet Catalent implemented “limited, enhanced sampling strategy for one of your suppliers” while failing to “increase sampling oversight” for other suppliers. This selective enhancement—focusing remediation only on the most problematic supplier while ignoring systemic contamination risks across the stopper supply base—predictably failed to resolve ongoing contamination issues.​

What should stopper supplier qualification and incoming inspection look like for aseptic filling operations?

Eliminate pre-shipment or tailgate sampling entirely. All quality testing must be conducted on samples taken from received lots, stored in facility conditions, and selected using documented random sampling procedures. If suppliers require pre-shipment testing for their internal quality release, that’s their process requirement—it doesn’t substitute for the purchaser’s independent incoming inspection using facility-sampled material.​

Implement risk-based incoming inspection that intensifies sampling when contamination history indicates elevated risk. The warning letter notes that Catalent recognized stoppers as “a possible contributing factor for contamination with mammalian hairs” in July 2024 but didn’t implement enhanced sampling until May 2025—a ten-month delay. The inspection enhancement should be automatic and immediate when contamination events implicate incoming materials. The sampling intensity should remain elevated until trending data demonstrates sustained contamination reduction across multiple lots.​
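The "automatic and immediate" escalation rule is straightforward state logic, which is exactly what removes the discretionary ten-month delay the warning letter documents. A sketch, with an assumed ten-clean-lot relaxation threshold:

```python
def next_lot_inspection(contamination_event: bool,
                        currently_tightened: bool,
                        consecutive_clean_lots: int,
                        clean_lots_to_relax: int = 10) -> str:
    """Inspection level for the next incoming lot.

    Tightening is immediate on any implicating contamination event;
    relaxation waits for sustained clean trending. The ten-lot threshold
    is an assumption for illustration.
    """
    if contamination_event:
        return "tightened"
    if currently_tightened and consecutive_clean_lots < clean_lots_to_relax:
        return "tightened"
    return "normal"

# A single implicating event tightens at once; relaxation requires
# sustained clean lots under the tightened plan.
immediate = next_lot_inspection(True, False, 0)
still_tight = next_lot_inspection(False, True, 4)
relaxed = next_lot_inspection(False, True, 12)
```

Hard-coding the escalation trigger means the decision to enhance sampling never waits on a meeting, while the relaxation path still demands evidence of sustained improvement.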

Apply visual inspection with reject criteria specific to the defect types that create product contamination risk. Generic visual inspection looking for general “defects” fails to detect the specific contamination types—embedded hair, extrinsic particles, material fragments—that create sterile product risks. Inspection protocols must specify mammalian hair, fiber contamination, and particulate matter as reject criteria with sensitivity adequate to detect single-particle contamination in sampled stoppers.​

Require supplier process changes—not just enhanced sampling—when contamination trends indicate process capability problems. The warning letter acknowledges Catalent “worked with your suppliers to reduce the likelihood of mammalian hair contamination events” but notes that despite these efforts, “you continued to receive complaints from customers who observed mammalian hair contamination in drug products they received from you”. Enhanced sampling detects contamination; it doesn’t prevent it. Suppliers demonstrating persistent contamination require process audits, environmental control improvements, and validated contamination reduction demonstrated through process capability studies—not just promises to improve quality.​

Implement finished product visual inspection with heightened sensitivity for products using stoppers from suppliers with contamination history. The FDA notes that Catalent indicated “future batches found during visual inspection of finished drug products would undergo a re-inspection followed by tightened acceptable quality limit to ensure defective units would be removed” but didn’t provide the re-inspection procedure. This two-stage inspection approach—initial inspection followed by re-inspection with enhanced criteria for lots from high-risk suppliers—provides additional contamination detection but must be validated to demonstrate adequate defect removal.​

The broader lesson extends beyond stoppers to supplier qualification for any component used in sterile manufacturing. Components introduce contamination risks—microbial bioburden, particulate matter, chemical residues—that cannot be fully mitigated through end-product testing. Supplier qualification must function as a contamination prevention tool, ensuring that materials entering aseptic operations meet microbiological and particulate quality standards appropriate for their role in maintaining sterility. Reliance on tailgate samples, delayed sampling enhancement, and acceptance of persistent supplier contamination all represent failures to recognize suppliers as critical contamination control points requiring rigorous qualification and oversight.

The Systemic Pattern: From Contamination Hazards to Quality System Architecture

Stepping back from individual contamination hazards—occluded surfaces, inadequate sampling, high-risk interventions, media fill terminations, supplier qualification failures—a systemic pattern emerges that connects this warning letter to the broader zemblanity framework I’ve explored in previous posts. These aren’t independent, unrelated deficiencies that coincidentally occurred at the same facility. They represent interconnected architectural failures in how the quality system approaches contamination control.​

The pattern reveals itself through three consistent characteristics:

Detection systems optimized for convenience rather than capability. Contact plates instead of swabs for irregular surfaces. Pre-shipment samples instead of facility-based incoming inspection. Generic visual inspection instead of defect-specific contamination screening. Each choice prioritizes operational ease and workflow efficiency over contamination detection sensitivity. The result is a quality system that generates reassuring data—passing environmental monitoring, acceptable incoming inspection results, successful visual inspection—while actual contamination persists undetected.

Risk assessments that identify hazards without preventing their occurrence. Catalent’s risk assessments advised against interventions disturbing potentially occluded surfaces, yet these interventions continued. The facility recognized stoppers as contamination sources in July 2024 but delayed enhanced sampling until May 2025. Media fill terminations revealed aseptic processing vulnerabilities but triggered re-execution rather than investigation. Risk identification became separated from risk mitigation—the assessment process functioned as compliance theatre rather than decision-making input.​

Investigation systems that erase failures rather than learn from them. Media fill terminations occurred without deviation initiation. Mammalian hair contamination events were investigated individually without recognizing the trend across 20+ deviations. Root cause investigations concluded “no product impact” based on passing sterility tests rather than addressing the contamination source enabling future events. The investigation framework optimized for batch release justification rather than contamination prevention.​

These patterns don’t emerge from incompetent quality professionals or inadequate resource allocation. They emerge from quality system design choices that prioritize production efficiency, workflow continuity, and batch release over contamination detection, investigation rigor, and source elimination. The system delivers what it was designed to deliver: maximum throughput with minimum disruption. It fails to deliver what patients require: contamination control capable of detecting and eliminating sterility risks before product impact.

Recommendations: Building Contamination Hazard Detection Into System Architecture

What does effective contamination hazard management look like at the quality system architecture level? Based on the Catalent failures and broader industry patterns, several principles should guide aseptic operations:

Design decontamination validation around worst-case geometries, not ideal conditions. VHP validation using flat coupons on horizontal surfaces tells you nothing about vapor penetration into the complex geometries, wrapped components, and recessed surfaces actually present in your filling line. Biological indicator placement should target occluded surfaces specifically—if you can’t achieve validated kill on these locations, they’re contamination hazards requiring design modification or alternative decontamination methods.

Select environmental monitoring methods based on detection capability for the surfaces and conditions actually requiring monitoring. Contact plates are adequate for flat, smooth surfaces. They’re inadequate for irregular product-contact surfaces, recessed areas, and complex geometries. Swab sampling takes more time but provides contamination detection capability that contact plates cannot match. The operational convenience sacrifice is trivial compared to the contamination risk from monitoring methods incapable of detecting contamination when it occurs.​

Establish intervention risk classification with decision authorities proportional to contamination risk. Routine low-risk interventions validated through media fills can proceed under operator judgment. High-risk interventions—those involving occluded surfaces, extended exposure, or proximity to open product—require quality unit pre-approval with documented enhanced controls. Interventions identified as posing unacceptable risk should be prohibited pending equipment redesign.​

Treat media fill terminations as critical deviations requiring investigation before re-execution. The termination reveals process vulnerability—the investigation must identify root cause, assess commercial batch risk, and establish corrective actions before validation continues. Re-execution without investigation perpetuates the failures that caused termination.

Implement supplier qualification with facility-based sampling, contamination-specific inspection criteria, and automatic sampling enhancement when contamination trends emerge. Tailgate samples cannot provide representative material assessment. Visual inspection must target the specific contamination types—mammalian hair, particulate matter, material fragments—that create product risks. Enhanced sampling should be automatic and sustained when contamination history indicates elevated risk.
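The word "automatic" is doing real work in that recommendation: the escalation must be a rule, not a judgment call made under throughput pressure. A hypothetical sketch of such a trigger rule—the window and threshold values here are placeholders, not from any guidance document:

```python
def sampling_level(findings_in_recent_lots: int, window: int = 10,
                   trigger: int = 2) -> str:
    """
    Illustrative escalation rule: if contamination findings appear in
    `trigger` or more of the last `window` incoming lots, move from
    routine to enhanced (facility-based, tightened) sampling.
    Thresholds are hypothetical and would be set by risk assessment.
    """
    return "enhanced" if findings_in_recent_lots >= trigger else "routine"
```

Because the rule fires on trend data rather than individual lot dispositions, it cannot be defeated by a series of "no product impact" conclusions on each event taken in isolation.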

Build investigation systems that learn from contamination events rather than erasing them through re-execution or “no product impact” conclusions. Contamination events represent failures in contamination control regardless of whether subsequent testing shows product remains within specification. The investigation purpose is preventing recurrence, not justifying release.

The FDA’s comprehensive remediation demands represent what quality system architecture should look like: independent assessment of investigation capability, CAPA effectiveness evaluation, contamination hazard risk assessment covering material flows and equipment placement, detailed remediation with specific improvements, and ongoing management oversight throughout the manufacturing lifecycle.

The Contamination Control Strategy as Living System

The Catalent warning letter’s contamination hazards section serves as a case study in how quality systems can simultaneously maintain surface-level compliance while allowing fundamental contamination control failures to persist. The facility conducted VHP decontamination cycles, performed environmental monitoring, executed media fills, and inspected incoming materials—checking every compliance box. Yet contamination hazards proliferated because these activities optimized for operational convenience and batch release justification rather than contamination detection and source elimination.

The EU GMP Annex 1 Contamination Control Strategy requirement represents regulatory recognition that contamination control cannot be achieved through isolated compliance activities. It requires integrated systems where facility design, decontamination processes, environmental monitoring, intervention protocols, material qualification, and investigation practices function cohesively to detect, investigate, and eliminate contamination sources. The Catalent failures reveal what happens when these elements remain disconnected: decontamination cycles that don’t reach occluded surfaces, monitoring that can’t detect contamination on irregular geometries, interventions that proceed despite identified risks, investigations that erase failures through re-execution.

For those of us responsible for contamination control in aseptic manufacturing, the question isn’t whether our facilities face similar vulnerabilities—they do. The question is whether our quality systems are architected to detect these vulnerabilities before regulators discover them. Are your VHP validations addressing actual occluded surfaces or ideal flat coupons? Are you using contact plates because they detect contamination effectively or because they’re operationally convenient? Do your intervention protocols prevent the high-risk activities your risk assessments identify? When media fills terminate, do investigations occur before re-execution?

The Catalent warning letter provides a diagnostic framework for assessing contamination hazard management. Use it. Map your own decontamination validation against the occluded surface criteria. Evaluate your environmental monitoring method selection against detection capability requirements. Review intervention protocols for alignment with risk assessments. Examine media fill termination handling for investigation rigor. Assess supplier qualification for facility-based sampling and contamination-specific inspection.

The contamination hazards are already present in your aseptic operations. The question is whether your quality system architecture can detect them.

FDA PreCheck and the Geography of Regulatory Trust

On August 7, 2025, FDA Commissioner Marty Makary announced a program that, on its surface, appears to be a straightforward effort to strengthen domestic pharmaceutical manufacturing. The FDA PreCheck initiative promises “regulatory predictability” and “streamlined review” for companies building new U.S. drug manufacturing facilities. It arrives wrapped in the language of national security—reducing dependence on foreign manufacturing, securing critical supply chains, ensuring Americans have access to domestically-produced medicines.

This is the story the press release tells.

But if you read PreCheck through the lens of falsifiable quality systems, a different narrative emerges. PreCheck is not merely an economic incentive program or a supply chain security measure. It is, more fundamentally, a confession.

It is the FDA admitting that the current Pre-Approval Inspection (PAI) and Pre-License Inspection (PLI) model—the high-stakes, eleventh-hour facility audit conducted weeks before the PDUFA date—is a profoundly inefficient mechanism for establishing trust. It is an acknowledgment that evaluating a facility’s “GMP compliance” only in the context of a specific product application, only after the facility is built, only when the approval clock is ticking, creates a system where failures are discovered at the moment when corrections are most expensive and most disruptive.

PreCheck proposes, instead, that the FDA should evaluate facilities earlier, more frequently, and independent of the product approval timeline. It proposes that manufacturers should be able to earn regulatory confidence in their facility design (Phase 1: Facility Readiness) before they ever file a product application, and that this confidence should carry forward into the application review (Phase 2: CMC streamlining).

This is not revolutionary. This is mostly how the European Medicines Agency (EMA) already works. This is the logic behind WHO Prequalification’s phased inspection model. This is the philosophy embedded in PIC/S risk-based inspection planning.

What is revolutionary—at least for the FDA—is the implicit admission that a manufacturing facility is not a binary state (compliant/non-compliant) evaluated at a single moment in time, but rather a developmental system that passes through stages of maturity, and that regulatory oversight should be calibrated to those stages.

This is not a cheerleading piece for PreCheck. It is an analysis of what PreCheck reveals about the epistemology of regulatory inspection, and a call for a more explicit, more testable framework for what it means for a facility to be “ready.” I also have concerns about the FDA’s capacity to carry this out, and about the dangers of ongoing regulatory capture, which I won’t cover in depth here.

Anatomy of PreCheck—What the Program Actually Proposes

The Two-Phase Structure

PreCheck is built on two complementary phases:

Phase 1: Facility Readiness
This phase focuses on early engagement between the manufacturer and the FDA during the facility’s design, construction, and pre-production stages. The manufacturer is encouraged—though not required, as the program is voluntary—to submit a Type V Drug Master File (DMF) containing:

  • Site operations layout and description
  • Pharmaceutical Quality System (PQS) elements
  • Quality Management Maturity (QMM) practices
  • Equipment specifications and process flow diagrams

This Type V DMF serves as a “living document” that can be incorporated by reference into future drug applications. The FDA will review this DMF and provide feedback on facility design, helping to identify potential compliance issues before construction is complete.

Michael Kopcha, Director of the FDA’s Office of Pharmaceutical Quality (OPQ), clarified at the September 30 public meeting that if a facility successfully completes the Facility Readiness Phase, an inspection may not be necessary when a product application is later filed.

This is the core innovation: decoupling facility assessment from product application.

Phase 2: Application Submission
Once a product application (NDA, ANDA, or BLA) is filed, the second phase focuses on streamlining the Chemistry, Manufacturing, and Controls (CMC) section of the application. The FDA offers:

  • Pre-application meetings
  • Early feedback on CMC data needs
  • Facility readiness and inspection planning discussions

Because the facility has already been reviewed in Phase 1, the CMC review can proceed with greater confidence that the manufacturing site is capable of producing the product as described in the application.

Importantly, Kopcha also clarified that only the CMC portion of the review is expedited—clinical and non-clinical sections follow the usual timeline. This is a critical limitation that industry stakeholders noted with some frustration, as it means PreCheck does not shorten the overall approval timeline as much as initially hoped.

What PreCheck Is Not

To understand what PreCheck offers, it is equally important to understand what it does not offer:

It is not a fast-track program. PreCheck does not provide priority review or accelerated approval pathways. It is a facility-focused engagement model, not a product-focused expedited review.

It is not a GMP certificate. Unlike the European system, where facilities can obtain a GMP certificate independent of any product application, PreCheck still requires a product application to trigger Phase 2. The Facility Readiness Phase (Phase 1) provides early engagement, but does not result in a standalone “facility approval” that can be referenced by multiple products or multiple sponsors.

It is not mandatory. PreCheck is voluntary. Manufacturers can continue to follow the traditional PAI/PLI pathway if they prefer.

It does not apply to existing facilities (yet). PreCheck is designed for new domestic manufacturing facilities. Industry stakeholders have requested expansion to include existing facility expansions and retrofits, but the FDA has not committed to this.

It does not decouple facility inspections from product approvals. Despite industry’s strong push for this—Big Pharma executives from Eli Lilly, Merck, and others explicitly requested at the public meeting that the FDA adopt the EMA model of decoupling GMP inspections from product applications—the FDA has not agreed to this. Phase 1 provides early feedback, but Phase 2 still ties the facility assessment to a specific product application.

The Type V DMF as the Backbone of PreCheck

The Type V Drug Master File is the operational mechanism through which PreCheck functions.

Historically, Type V DMFs have been a catch-all category for “FDA-accepted reference information” that doesn’t fit into the other DMF types (Type II for drug substances, Type III for packaging, Type IV for excipients). They have been used primarily for device constituent parts in combination products.

PreCheck repurposes the Type V DMF as a facility-centric repository. Instead of focusing on a material or a component, the Type V DMF in the PreCheck context contains:

  • Facility design: Layouts, flow diagrams, segregation strategies
  • Quality systems: Change control, deviation management, CAPA processes
  • Quality Management Maturity: Evidence of advanced quality practices beyond CGMP minimum requirements
  • Equipment and utilities: Specifications, qualification status, maintenance programs

The idea is that this DMF becomes a reusable asset. If a manufacturer builds a facility and completes the PreCheck Facility Readiness Phase, that facility’s Type V DMF can be referenced by multiple product applications from the same sponsor. This reduces redundant submissions and allows the FDA to build institutional knowledge about a facility over time.

However—and this is where the limitations become apparent—the Type V DMF is sponsor-specific. If the facility is a Contract Manufacturing Organization (CMO), the FDA has not clarified how the DMF ownership works or whether multiple API sponsors using the same CMO can leverage the same facility DMF. Industry stakeholders raised this as a significant concern at the public meeting, noting that CMOs account for approximately 50% of all facility-related CRLs.

The Type V DMF vs. Site Master File: Convergent Evolutions in Facility Documentation

The Type V DMF requirement in PreCheck bears a striking resemblance—and some critical differences—to the Site Master File (SMF) required under EU GMP and PIC/S guidelines. Understanding this comparison reveals both the potential of PreCheck and its limitations.

What is a Site Master File?

The Site Master File is a GMP documentation requirement in the EU, mandated under Chapter 4 of the EU GMP Guideline. PIC/S provides detailed guidance on SMF preparation in document PE 008-4. The SMF is:

  • A facility-centric document prepared by the pharmaceutical manufacturer
  • Typically 25-30 pages plus appendices, designed to be “readable when printed on A4 paper”
  • A living document that is part of the quality management system, updated regularly (recommended every 2 years)
  • Submitted to regulatory authorities to demonstrate GMP compliance and facilitate inspection planning

The purpose of the SMF is explicit: to provide regulators with a comprehensive overview of the manufacturing operations at a named site, independent of any specific product. It answers the question: “What GMP activities occur at this location?”

Required SMF Contents (per PIC/S PE 008-4 and EU guidance):

  1. General Information: Company name, site address, contact information, authorized manufacturing activities, manufacturing license copy
  2. Quality Management System: QA/QC organizational structure, key personnel qualifications, training programs, release procedures for Qualified Persons
  3. Personnel: Number of employees in production, QC, QA, warehousing; reporting structure
  4. Premises and Equipment: Site layouts, room classifications, pressure differentials, HVAC systems, major equipment lists
  5. Documentation: Description of documentation systems (batch records, SOPs, specifications)
  6. Production: Brief description of manufacturing operations, in-process controls, process validation policy
  7. Quality Control: QC laboratories, test methods, stability programs, reference standards
  8. Distribution, Complaints, and Product Recalls: Systems for handling complaints, recalls, and distribution controls
  9. Self-Inspection: Internal audit programs and CAPA systems

Critically, the SMF is product-agnostic. It describes the facility’s capabilities and systems, not specific product formulations or manufacturing procedures. An appendix may list the types of products manufactured (e.g., “solid oral dosage forms,” “sterile injectables”), but detailed product-specific CMC information is not included.

How the Type V DMF Differs from the Site Master File

The FDA’s Type V DMF in PreCheck serves a similar purpose but with important distinctions:

Similarities:

  • Both are facility-centric documents describing site operations, quality systems, and GMP capabilities
  • Both include site layouts, equipment specifications, and quality management elements
  • Both are intended to facilitate regulatory review and inspection planning
  • Both are living documents that can be updated as the facility changes

Critical Differences:

  • Regulatory Status: The SMF is mandatory for an EU manufacturing license; the Type V DMF is voluntary (PreCheck is a voluntary program).
  • Independence from Products: The SMF is fully independent—the facility can be certified without any product application. The Type V DMF is only partially independent—Phase 1 allows early review, but Phase 2 still ties to a product application.
  • Ownership: The SMF belongs to the facility owner (manufacturer or CMO); the Type V DMF is sponsor-specific, and ownership is unclear for CMO facilities with multiple clients.
  • Regulatory Outcome: The SMF can support a GMP certificate or manufacturing license independent of product approvals; the Type V DMF does not result in standalone facility approval and only facilitates product application review.
  • Scope: The SMF describes all manufacturing operations at the site; the Type V DMF is focused on the specific facility being built, intended to support future product applications from that sponsor.
  • International Recognition: The SMF is harmonized internationally—PIC/S member authorities recognize each other’s SMF-based inspections. The Type V DMF is FDA-specific, with no provision for accepting EU GMP certificates or SMFs in lieu of PreCheck participation.
  • Length and Detail: The SMF runs 25-30 pages plus appendices, designed for conciseness; the Type V DMF has no specified page limit, and the QMM practices component could be extensive.

The Critical Gap: Product-Specificity vs. Facility Independence

The most significant difference lies in how the documents relate to product approvals.

In the EU system, a manufacturer submits the SMF to the National Competent Authority (NCA) as part of obtaining or maintaining a manufacturing license. The NCA inspects the facility and, if compliant, grants a GMP certificate that is valid across all products manufactured at that site.

When a Marketing Authorization Application (MAA) is later filed for a specific product, the CHMP can reference the existing GMP certificate and decide whether a pre-approval inspection is needed. If the facility has been recently inspected and found compliant, no additional inspection may be required. The facility’s GMP status is decoupled from the product approval.

The FDA’s Type V DMF in PreCheck does not create this decoupling. While Phase 1 allows early FDA review of the facility design, the Type V DMF is still tied to the sponsor’s product applications. It is not a standalone “facility certificate.” Multiple products from the same sponsor can reference the same Type V DMF, but the FDA has not clarified whether:

  • The DMF reduces the need for PAIs/PLIs on second, third, and subsequent products from the same facility
  • The DMF serves any function outside of the PreCheck program (e.g., for routine surveillance inspections)

At the September 30 public meeting, industry stakeholders explicitly requested that the FDA adopt the EU GMP certificate model, where facilities can be certified independent of product applications. The FDA acknowledged the request but did not commit to this approach.

Confidentiality: DMFs Are Proprietary

The Type V DMF operates under FDA’s DMF confidentiality rules (21 CFR 314.420). The DMF holder (the manufacturer) authorizes the FDA to reference the DMF when reviewing a specific sponsor’s application, but the detailed contents are not disclosed to the sponsor or to other parties. This protects proprietary manufacturing information, especially important for CMOs who serve competing sponsors.

However, PreCheck asks manufacturers to include Quality Management Maturity (QMM) practices in the Type V DMF—information that goes beyond what is typically in a DMF and beyond what is required in an SMF. As discussed earlier, industry is concerned that disclosing advanced quality practices could create new regulatory expectations or vulnerabilities. This tension does not exist with SMFs, which describe only what is required by GMP, not what is aspirational.

Could the FDA Adopt a Site Master File Model?

The comparison raises an obvious question: Why doesn’t the FDA simply adopt the EU Site Master File requirement?

Several barriers exist:

1. U.S. Legal Framework

The FDA does not issue facility manufacturing licenses the way EU NCAs do. In the U.S., a facility is “approved” only in the context of a specific product application (NDA, ANDA, BLA). The FDA has establishment registration (Form FDA 2656), but registration does not constitute approval—it is merely notification that a facility exists and intends to manufacture drugs.

To adopt the EU GMP certificate model, the FDA would need either:

  • Statutory authority to issue facility licenses independent of product applications, or
  • A regulatory framework that allows facilities to earn presumption of compliance that carries across multiple products

Neither currently exists in U.S. law.

2. FDA Resource Model

The FDA’s inspection system is application-driven. PAIs and PLIs are triggered by product applications, and the cost is implicitly borne by the applicant through user fees. A facility-centric certification system would require the FDA to conduct routine facility inspections on a 1-3 year cycle (as the EMA/PIC/S model does), independent of product filings.

This would require:

  • Significant increases in FDA inspector workforce
  • A new fee structure (facility fees vs. application fees)
  • Coordination across CDER, CBER, and Office of Inspections and Investigations (OII)

PreCheck sidesteps this by keeping the system voluntary and sponsor-initiated. The FDA does not commit to routine re-inspections; it merely offers early engagement for new facilities.

3. CDMO Business Model Complexity

Approximately 50% of facility-related CRLs involve Contract Development and Manufacturing Organizations. CDMOs manufacture products for dozens or hundreds of sponsors. In the EU, the CDMO has one GMP certificate that covers all its operations, and each sponsor references that certificate in their MAAs.

In the U.S., each sponsor’s product application is reviewed independently. If the FDA were to adopt a facility certificate model, it would need to resolve:

  • Who pays for the facility inspection—the CMO or the sponsors?
  • How are facility compliance issues (OAIs, warning letters) communicated across sponsors?
  • Can a facility certificate be revoked without blocking all pending product applications?

These are solvable problems—the EU has solved them—but they require systemic changes to the FDA’s regulatory framework.

The Path Forward: Incremental Convergence

The Type V DMF in PreCheck is a step toward the Site Master File model, but it is not yet there. For PreCheck to evolve into a true facility-centric system, the FDA would need to:

  1. Decouple Phase 1 (Facility Readiness) from Phase 2 (Product Application), allowing facilities to complete Phase 1 and earn a facility certificate or presumption of compliance that applies to all future products from any sponsor using that facility.
  2. Standardize the Type V DMF content to align with PIC/S SMF guidance, ensuring international harmonization and reducing duplicative submissions for facilities operating in multiple markets.
  3. Implement routine surveillance inspections (every 1-3 years) for facilities that have completed PreCheck, with inspection frequency adjusted based on compliance history (the PIC/S risk-based model). The main departure from the PIC/S model would be handling facilities not yet engaged in commercial manufacturing.
  4. Enhance participation in PIC/S inspection reliance, accepting EU GMP certificates and SMFs for facilities that have been recently inspected by PIC/S member authorities, and allowing U.S. Type V DMFs to be recognized internationally.

The industry’s message at the PreCheck public meeting was clear: adopt the EU model. Whether the FDA is willing—or able—to make that leap remains to be seen.

Quality Management Maturity (QMM): The Aspirational Component

Buried within the Type V DMF requirement is a more ambitious—and more controversial—element: Quality Management Maturity (QMM) practices.

QMM is an FDA initiative (led by CDER) that aims to promote quality management practices that go beyond CGMP minimum requirements. The FDA’s QMM program evaluates manufacturers on a maturity scale across five practice areas:

  1. Quality Culture and Management Commitment
  2. Risk Management and Knowledge Management
  3. Data Integrity and Information Systems
  4. Change Management and Process Control
  5. Continuous Improvement and Innovation

The QMM assessment uses a pre-interview questionnaire and interactive discussion to evaluate how effectively a manufacturer monitors and manages quality. The maturity levels range from Undefined (reactive, ad hoc) to Optimized (proactive, embedded quality culture).
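To make the shape of such an assessment concrete, here is a hypothetical scoring sketch that rolls per-practice-area scores up into an overall maturity label. The FDA’s actual protocol is still in development; the intermediate level names below (everything between Undefined and Optimized) are placeholders, not an official rubric:

```python
# Levels run from reactive/ad hoc (1) to proactive, embedded quality
# culture (5). Only "Undefined" and "Optimized" come from the FDA's
# public description; the middle labels are illustrative placeholders.
LEVELS = ["Undefined", "Defined", "Managed", "Measured", "Optimized"]

def overall_maturity(area_scores: dict) -> str:
    """Average per-area scores (1-5) and map to a maturity label."""
    if not area_scores:
        raise ValueError("no practice areas scored")
    avg = sum(area_scores.values()) / len(area_scores)
    index = min(max(int(round(avg)) - 1, 0), len(LEVELS) - 1)
    return LEVELS[index]
```

Even this toy version surfaces the industry’s objection: a number on a maturity scale is a claim about organizational behavior, and behavior is demonstrated on inspection, not documented in a binder.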

The FDA ran two QMM pilot programs between October 2020 and March 2022 to test this approach. The goal is to create a system where the FDA—and potentially the market—can recognize and reward manufacturers with mature quality systems that focus on continuous improvement rather than reactive compliance.

PreCheck asks manufacturers to include QMM practices in their Type V DMF. This is where the program becomes aspirational.

At the September 30 public meeting, industry stakeholders described submitting QMM information as “risky”. Why? Because QMM is not fully defined. The assessment protocol is still in development. The maturity criteria are not standardized. And most critically, manufacturers fear that disclosing information about their quality systems beyond what is required by CGMP could create new expectations or new vulnerabilities during inspections.

One attendee noted that “QMS information is difficult to package, usually viewed on inspection”. In other words, quality maturity is something you demonstrate through behavior, not something you document in a binder.

The FDA’s inclusion of QMM in PreCheck reveals a tension: the agency wants to move beyond compliance theater—beyond the checkbox mentality of “we have an SOP for that”—and toward evaluating whether manufacturers have the organizational discipline to maintain control over time. But the FDA has not yet figured out how to do this in a way that feels safe or fair to industry.

This is the same tension I discussed in my August 2025 post on “The Effectiveness Paradox”: how do you evaluate a quality system’s capability to detect its own failures, not just its ability to pass an inspection when everything is running smoothly?

The Current PAI/PLI Model and Why It Fails

To understand why PreCheck is necessary, we must first understand why the current Pre-Approval Inspection (PAI) and Pre-License Inspection (PLI) model is structurally flawed.

The High-Stakes Inspection at the Worst Possible Time

Under the current system, the FDA conducts a PAI (for drugs under CDER) or PLI (for biologics under CBER) to verify that a manufacturing facility is capable of producing the drug product as described in the application. This inspection is risk-based—the FDA does not inspect every application. But when an inspection is deemed necessary, the timing is brutal.

As one industry executive described at the PreCheck public meeting: “We brought on a new U.S. manufacturing facility two years ago and the PAI for that facility was weeks prior to our PDUFA date. At that point, we’re under a lot of pressure. Any questions or comments or observations that come up during the PAI are very difficult to resolve in that time frame”.

This is the structural flaw: the FDA evaluates the facility after the facility is built, after the application is filed, and as close as possible to the approval decision. If the inspection reveals deficiencies—data integrity failures, inadequate cleaning validation, contamination control gaps, equipment qualification issues—the manufacturer has very little time to correct them before the PDUFA clock expires.

The result? Complete Response Letters (CRLs).

The CRL Epidemic: Facility Failures Blocking Approvals

The data on inspection-related CRLs is stark.

In a 2024 analysis of BLA outcomes, researchers found that BLAs were issued CRLs nearly half the time in 2023—the highest rate ever recorded. Of these CRLs, approximately 20% were due to facility inspection failures.

Breaking this down further:

  • Foreign manufacturing sites are associated with more CRs relative to the number of PLIs conducted.
  • Approximately 50% of facility deficiencies are for Contract Development and Manufacturing Organizations (CDMOs).
  • Approximately 75% of Applicant-Site CRs are for biosimilars.
  • The five most-cited facilities (each with ≥5 CRs) account for ~35% of all CR deficiencies.

In a separate analysis of CRL drivers from 2020–2024, Manufacturing/CMC deficiencies and Facility Inspection Failures together account for over 60% of all CRLs. This includes:

  • Inadequate control of production processes
  • Unstable manufacturing
  • Data gaps in CMC
  • GMP site inspections revealing uncontrolled processes, document gaps, hygiene issues

The pattern is clear: facility issues discovered late in the approval process are causing massive delays.

Why the Late-Stage Inspection Model Creates Failure

The PAI/PLI model creates failure for three reasons:

1. The Inspection Evaluates “Work-as-Done” When It’s Too Late to Change It

When the FDA arrives for a PAI/PLI, the facility is already built. The equipment is already installed. The processes are already validated (or supposed to be). The SOPs are already written.

If the inspector identifies a fundamental design flaw—say, inadequate segregation between manufacturing suites, or an HVAC system that cannot maintain differential pressure during interventions—the manufacturer cannot easily fix it. Redesigning cleanroom airflow or adding airlocks requires months of construction and re-qualification. The PDUFA clock does not stop.

This is analogous to the Rechon Life Science warning letter I analyzed in September 2025, where the smoke studies revealed turbulent airflow over open vials, contradicting the firm’s Contamination Control Strategy. The CCS claimed unidirectional flow protected the product. The smoke video showed eddies. But by the time this was discovered, the facility was operational, the batches were made, and the “fix” required redesigning the isolator.

2. The Inspection Creates Adversarial Pressure Instead of Collaborative Learning

Because the PAI occurs weeks before the PDUFA date, the inspection becomes a pass/fail exam rather than a learning opportunity. The manufacturer is under intense pressure to defend their systems rather than interrogate them. Questions from inspectors are perceived as threats, not invitations to improve.

This is the opposite of the falsifiable quality mindset. A falsifiable system would welcome the inspection as a chance to test whether the control strategy holds up under scrutiny. But the current timing makes this psychologically impossible. The stakes are too high.

3. The Inspection Conflates “Facility Capability” with “Product-Specific Compliance”

The PAI/PLI is nominally about verifying that the facility can manufacture the specific product in the application. But in practice, inspectors evaluate general GMP compliance—data integrity, quality unit independence, deviation investigation rigor, cleaning validation adequacy—not just product-specific manufacturing steps.

The FDA does not give “facility certificates” like the EMA does. Every product application triggers a new inspection (or waiver decision) based on the facility’s recent inspection history. This means a facility with a poor inspection outcome on one product will face heightened scrutiny on all subsequent products—creating a self-reinforcing cycle of scrutiny and delay.

Comparative Regulatory Philosophy—EMA, WHO, and PIC/S

To understand whether PreCheck is sufficient, we must compare it to how other regulatory agencies conceptualize facility oversight.

The EMA Model: Decoupling and Delegation

The European Medicines Agency (EMA) operates a decentralized inspection system. The EMA itself does not conduct inspections; instead, National Competent Authorities (NCAs) in EU member states perform GMP inspections on behalf of the EMA.

The key structural differences from the FDA:

1. Facility Inspections Are Decoupled from Product Applications

In the EU, a manufacturing facility can be inspected and receive a GMP certificate from the NCA independent of any specific product application. This certificate attests that the facility complies with EU GMP and is capable of manufacturing medicinal products according to its authorized scope.

When a Marketing Authorization Application (MAA) is filed, the CHMP (Committee for Medicinal Products for Human Use) can request a GMP inspection if needed, but if the facility has a recent GMP certificate in good standing, a new inspection may not be necessary.

This means the facility’s “GMP status” is assessed separately from the product’s clinical and CMC review. Facility issues do not automatically block product approval—they are addressed through a separate remediation pathway.

2. Risk-Based and Reliance-Based Inspection Planning

The EMA employs a risk-based approach to determine inspection frequency. Facilities are inspected on a routine re-inspection program (typically every 1-3 years depending on risk), with the frequency adjusted based on:

  • Previous inspection findings (critical, major, or minor deficiencies)
  • Product type and patient risk
  • Manufacturing complexity
  • Company compliance history

Additionally, the EMA participates in PIC/S inspection reliance (discussed below), meaning it may accept inspection reports from other competent authorities without conducting its own inspection.

3. Mutual Recognition Agreement (MRA) with the FDA

The U.S. and EU have a Mutual Recognition Agreement for GMP inspections. Under this agreement, the FDA and EMA recognize each other’s inspection outcomes for human medicines, reducing duplicate inspections.

Importantly, the EMA has begun accepting FDA inspection reports proactively during the pre-submission phase. Applicants can provide FDA inspection reports to support their MAA, allowing the EMA to make risk-based decisions about whether an additional inspection is needed.

This is the inverse of what the FDA is attempting with PreCheck. The EMA is saying: “We trust the FDA’s inspection, so we don’t need to repeat it.” The FDA, with PreCheck, is saying: “We will inspect early, so we don’t need to repeat it later.” Both approaches aim to reduce redundancy, but the EMA’s reliance model is more mature.

WHO Prequalification: Phased Inspections and Leveraging SRAs

The WHO Prequalification (PQ) program provides an alternative model for facility assessment, particularly relevant for manufacturers in low- and middle-income countries (LMICs).

Key features:

1. Inspection Occurs During the Dossier Assessment, Not After

Unlike the FDA’s PAI (which occurs near the end of the review), WHO PQ conducts inspections within 6 months of dossier acceptance for assessment. This means the facility inspection happens in parallel with the technical review, not at the end.

If the inspection reveals deficiencies, the manufacturer submits a Corrective and Preventive Action (CAPA) plan, and WHO conducts a follow-up inspection within 6-9 months. The prequalification decision is not made until the inspection is closed.

This phased approach reduces the “all-or-nothing” pressure of the FDA’s late-stage PAI.

2. Routine Inspections Every 1-3 Years

Once a product is prequalified, WHO conducts routine inspections every 1-3 years to verify continued compliance. This aligns with the Continued Process Verification concept in FDA’s Stage 3 validation—the idea that a facility is not “validated forever” after one inspection, but must demonstrate ongoing control.

3. Reliance on Stringent Regulatory Authorities (SRAs)

WHO PQ may leverage inspection reports from Stringent Regulatory Authorities (SRAs) or WHO-Listed Authorities (WLAs). If the facility has been recently inspected by an SRA (e.g., FDA, EMA, Health Canada) and the scope is appropriate, WHO may waive the onsite inspection and rely on the SRA’s findings.

This is a trust-based model: WHO recognizes that conducting duplicate inspections wastes resources, and that a well-documented inspection by a competent authority provides sufficient assurance.

The FDA’s PreCheck program does not include this reliance mechanism. PreCheck is entirely FDA-centric—there is no provision for accepting EMA or WHO inspection reports to satisfy Phase 1 or Phase 2 requirements.

PIC/S: Risk-Based Inspection Planning and Classification

The Pharmaceutical Inspection Co-operation Scheme (PIC/S) is an international framework for harmonizing GMP inspections across member authorities.

Two key PIC/S documents are relevant to this discussion:

1. PI 037-1: Risk-Based Inspection Planning

PIC/S provides a qualitative risk management tool to help inspectorates prioritize inspections. The model assigns each facility a risk rating (A, B, or C) based on:

  • Intrinsic Risk: Product type, complexity, patient population
  • Compliance Risk: Previous inspection outcomes, deficiency history

The risk rating determines inspection frequency:

  • A (Low Risk): Reduced frequency (2-3 years)
  • B (Moderate Risk): Moderate frequency (1-2 years)
  • C (High Risk): Increased frequency (<1 year, potentially multiple times per year)

Critically, PIC/S assumes that every manufacturer will be inspected at least once within the defined period. There is no such thing as “perpetual approval” based on one inspection.
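
The PI 037-1 logic can be sketched as a simple scoring model. This is an illustration only: the actual PIC/S tool is a qualitative matrix, and the numeric scores, cutoffs, and intervals below are assumptions chosen to mirror the A/B/C frequencies listed above.

```python
# Illustrative sketch of PIC/S PI 037-1-style risk-based inspection planning.
# Scores, cutoffs, and intervals are assumptions, not the official tool.

def risk_rating(intrinsic: int, compliance: int) -> str:
    """Combine intrinsic risk (product type, patient population) and
    compliance risk (inspection history), each scored 1 (low) to 3 (high)."""
    total = intrinsic + compliance
    if total <= 2:
        return "A"  # low risk
    if total <= 4:
        return "B"  # moderate risk
    return "C"      # high risk

# The rating drives the re-inspection interval (in months),
# matching the reduced/moderate/increased frequencies above.
INSPECTION_INTERVAL = {"A": 24, "B": 12, "C": 6}

def next_inspection_due(rating: str) -> int:
    return INSPECTION_INTERVAL[rating]

# A sterile-injectables site (high intrinsic risk) with recent
# major deficiencies (moderate compliance risk):
rating = risk_rating(intrinsic=3, compliance=2)
print(rating, next_inspection_due(rating))  # C 6
```

The point of the model is proportionality: the inspection burden scales with risk, and no facility ever drops out of the schedule entirely.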

2. PI 048-1: GMP Inspection Reliance

PIC/S introduced a guidance on inspection reliance in 2018. This guidance provides a framework for desktop assessment of GMP compliance based on the inspection activities of other competent authorities.

The key principle: if another PIC/S member authority has recently inspected a facility and found it compliant, a second authority may accept that finding without conducting its own inspection.

This reliance is conditional—the accepting authority must verify that:

  • The scope of the original inspection covers the relevant products and activities
  • The original inspection was recent (typically within 2-3 years)
  • The original authority is a trusted PIC/S member
  • There have been no significant changes or adverse events since the inspection

This is the most mature version of the trust-based inspection model. It recognizes that GMP compliance is not a static state that can be certified once, but also that redundant inspections by multiple authorities waste resources and delay market access.
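
The four reliance conditions amount to a conjunctive check: all must hold before the accepting authority can rely on the prior inspection. A minimal sketch, with hypothetical field names and an example trusted-member set that is not an official PIC/S list:

```python
# Hypothetical sketch of the PI 048-1 reliance decision: all four
# conditions listed above must hold. Field names and the trusted-member
# set are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

TRUSTED_PICS_MEMBERS = {"EMA", "MHRA", "Health Canada", "TGA"}  # example set

@dataclass
class PriorInspection:
    authority: str
    inspection_date: date
    scope: set                      # products/activities covered
    adverse_events_since: bool = False

def can_rely(prior: PriorInspection, needed_scope: set,
             today: date, max_age_years: int = 3) -> bool:
    """True only if all four reliance conditions hold."""
    scope_ok = needed_scope <= prior.scope          # scope covers our needs
    recent = (today - prior.inspection_date).days <= max_age_years * 365
    trusted = prior.authority in TRUSTED_PICS_MEMBERS
    stable = not prior.adverse_events_since         # no changes/adverse events
    return scope_ok and recent and trusted and stable

prior = PriorInspection("EMA", date(2024, 3, 1), {"sterile fill-finish"})
print(can_rely(prior, {"sterile fill-finish"}, today=date(2025, 10, 1)))  # True
```

Because the check is conjunctive, a single failed condition (say, an adverse event since the inspection) forces the accepting authority back to its own onsite inspection.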

Comparative Summary

| Dimension | FDA (Current PAI/PLI) | FDA PreCheck (Proposed) | EMA/EU | WHO PQ | PIC/S Framework |
|---|---|---|---|---|---|
| Timing of Inspection | Late (near PDUFA) | Early (design phase) + Late (application) | Variable, risk-based | Early (during assessment) | Risk-based (1-3 years) |
| Facility vs. Product Focus | Product-specific | Facility (Phase 1) → Product (Phase 2) | Facility-centric (GMP certificate) | Product-specific with facility focus | Facility-centric |
| Decoupling | No | Partial (Phase 1 early feedback) | Yes (GMP certificate independent) | No, but phased | Yes (risk-based frequency) |
| Reliance on Other Authorities | No | No | Yes (MRA, PIC/S) | Yes (SRA reliance) | Yes (core principle) |
| Frequency | Per-application | Phase 1 (once) → Phase 2 (per-application) | Routine re-inspection (1-3 years) | Routine (1-3 years) | Risk-based (A/B/C) |
| Consequence of Failure | CRL, approval blocked | Phase 1: design guidance; Phase 2: potential CRL | CAPA, may not block approval | CAPA, follow-up inspection | Remediation, increased frequency |

The striking pattern: the FDA is the outlier. Every other major regulatory system has moved toward:

  • Decoupling facility inspections from product applications
  • Risk-based, routine inspection frequencies
  • Reliance mechanisms to avoid duplicate inspections
  • Facility-centric GMP certificates or equivalent

PreCheck is the FDA’s first step toward this model, but it is not yet there. Phase 1 provides early engagement, but Phase 2 still ties facility assessment to a specific product. PreCheck does not create a standalone “facility approval” that can be referenced across products or shared among CMO clients.

Potential Benefits of PreCheck (When It Works)

Despite its limitations, PreCheck could offer real benefits over the status quo—if it is implemented effectively.

Benefit 1: Early Detection of Facility Design Flaws

The most obvious benefit of PreCheck is that it allows the FDA to review facility design during construction, rather than after the facility is operational.

As one industry expert noted at the public meeting: “You’re going to be able to solve facility issues months, even years before they occur”.

Consider the alternative. Under the current PAI/PLI model, if the FDA inspector discovers during a pre-approval inspection that the cleanroom differential pressure cannot be maintained during material transfer, the manufacturer faces a choice:

  • Redesign the HVAC system (months of construction, re-commissioning, re-qualification)
  • Withdraw the application
  • Argue that the deficiency is not critical and hope the FDA agrees

All of these options are expensive and delay the product launch.

PreCheck, by contrast, allows the FDA to flag this issue during the design review (Phase 1), when the HVAC system is still on the engineering drawings. The manufacturer can adjust the design before pouring concrete.

This is the principle of Design Qualification (DQ) applied to the regulatory inspection timeline. Just as equipment must pass DQ before moving to Installation Qualification (IQ), the facility should pass regulatory design review before moving to construction and operation.

Benefit 2: Reduced Uncertainty and More Predictable Timelines

The current PAI/PLI system creates uncertainty about whether an inspection will be scheduled, when it will occur, and what the outcome will be.

Manufacturers described this uncertainty as one of the biggest pain points at the PreCheck public meeting. One executive noted that PAIs are often scheduled with short notice, and manufacturers struggle to align their production schedules (especially for seasonal products like vaccines) with the FDA’s inspection availability.

PreCheck introduces structure to this chaos. If a manufacturer completes Phase 1 successfully, the FDA has already reviewed the facility and provided feedback. The manufacturer knows what the FDA expects. When Phase 2 begins (the product application), the CMC review can proceed with greater confidence that facility issues will not derail the approval.

This does not eliminate uncertainty entirely—Phase 2 still involves an inspection (or inspection waiver decision), and deficiencies can still result in CRLs. But it shifts the uncertainty earlier in the process, when corrections are cheaper.

Benefit 3: Building Institutional Knowledge at the FDA

One underappreciated benefit of PreCheck is that it allows the FDA to build institutional knowledge about a manufacturer’s quality systems over time.

Under the current model, a PAI inspector arrives at a facility for 5-10 days, reviews documents, observes operations, and leaves. The inspection report is filed. If the same facility files a second product application two years later, a different inspector may conduct the PAI, and the process starts from scratch.

The PreCheck Type V DMF, by contrast, is a living document that accumulates information about the facility over its lifecycle. The FDA reviewers who participate in Phase 1 (design review) can provide continuity into Phase 2 (application review) and potentially into post-approval surveillance.

This is the principle behind the EMA’s GMP certificate model: once the facility is certified, subsequent inspections build on the previous findings rather than starting from zero.

Industry stakeholders explicitly requested this continuity at the PreCheck meeting, asking the FDA to “keep the same reviewers in place as the process progresses”. The implication: trust is built through relationships and institutional memory, not one-off inspections.

Benefit 4: Incentivizing Quality Management Maturity

By including Quality Management Maturity (QMM) practices in the Type V DMF, PreCheck encourages manufacturers to invest in advanced quality systems beyond CGMP minimums.

This is aspirational, not transactional. The FDA is not offering faster approvals or reduced inspection frequency in exchange for QMM participation—at least not yet. But the long-term vision is that manufacturers with mature quality systems will be recognized as lower-risk, and this recognition could translate into regulatory flexibility (e.g., fewer post-approval inspections, faster review of post-approval changes).

This aligns with the philosophy I have argued for throughout 2025: a quality system should not be judged by its compliance on the day of the inspection, but by its ability to detect and correct failures over time. A mature quality system is one that is designed to falsify its own assumptions—to seek out the cracks before they become catastrophic failures.

The QMM framework is the FDA’s attempt to operationalize this philosophy. Whether it succeeds depends on whether the FDA can develop a fair, transparent, and non-punitive assessment protocol—something industry is deeply skeptical about.

Challenges and Industry Concerns

The September 30, 2025 public meeting revealed that while industry welcomes PreCheck, the program as proposed has significant gaps.

Challenge 1: PreCheck Does Not Decouple Facility Inspections from Product Approvals

The single most consistent request from industry was: decouple GMP facility inspections from product applications.

Executives from Eli Lilly, Merck, Johnson & Johnson, and others explicitly called for the FDA to adopt the EMA model, where a facility can be inspected and certified independent of a product application, and that certification can be referenced by multiple products.

Why does this matter? Because under the current system (and under PreCheck as proposed), if a facility has a compliance issue, all product applications relying on that facility are at risk.

Consider a CMO that manufactures API for 10 different sponsors. If the CMO fails a PAI for one sponsor’s product, the FDA may place the entire facility under heightened scrutiny, delaying approvals for all 10 sponsors. This creates a cascade failure where one product’s facility issue blocks the market access of unrelated products.

The EMA’s GMP certificate model avoids this by treating the facility as a separate regulatory entity. If the facility has compliance issues, the NCA works with the facility to remediate them independent of pending product applications. The product approvals may be delayed, but the remediation pathway is separate.

The FDA’s Michael Kopcha acknowledged the request but did not commit: “Decoupling, streamlining, and more up-front communication is helpful… We will have to think about how to go about managing and broadening the scope”.

Challenge 2: PreCheck Only Applies to New Facilities, Not Existing Ones

PreCheck is designed for new domestic manufacturing facilities. But the majority of facility-related CRLs involve existing facilities—either because they are making post-approval changes, transferring manufacturing sites, or adding new products.

Industry stakeholders requested that PreCheck be expanded to include:

  • Existing facility expansions and retrofits
  • Post-approval changes (e.g., adding a new production line, changing a manufacturing process)
  • Site transfers (moving production from one facility to another)

The FDA did not commit to this expansion, but Kopcha noted that the agency is “thinking about how to broaden the scope”.

The challenge here is that the FDA lacks a facility lifecycle management framework. The current system treats each product application as a discrete event, with no mechanism for a facility to earn cumulative credit for good performance across multiple products over time.

This is what the PIC/S risk-based inspection model provides: a facility with a strong compliance history moves to reduced inspection frequency (e.g., every 3 years instead of annually). A facility with a poor history moves to increased frequency (e.g., multiple inspections per year). The inspection burden is proportional to risk.

PreCheck Phase 1 could serve this function—if it were expanded to existing facilities. A CMO that completes Phase 1 and demonstrates mature quality systems could earn presumption of compliance for future product applications, reducing the need for repeated PAIs/PLIs.

But as currently designed, PreCheck is a one-time benefit for new facilities only.

Challenge 3: Confidentiality and Intellectual Property Concerns

Manufacturers expressed significant concern about what information the FDA will require in the Type V DMF and whether that information will be protected from Freedom of Information Act (FOIA) requests.

The concern is twofold:

1. Proprietary Manufacturing Details

The Type V DMF is supposed to include facility layouts, equipment specifications, and process flow diagrams. For some manufacturers—especially those with novel technologies or proprietary processes—this information is competitively sensitive.

If the DMF is subject to FOIA disclosure (even with redactions), competitors could potentially reverse-engineer the manufacturing strategy.

2. CDMO Relationships

For Contract Development and Manufacturing Organizations (CDMOs), the Type V DMF creates a dilemma. The CDMO owns the facility, but the sponsor owns the product. Who submits the DMF? Who controls access to it? If multiple sponsors use the same CDMO facility, can they all reference the same DMF, or must each sponsor submit a separate one?

Industry requested clarity on these ownership and confidentiality issues, but the FDA has not yet provided detailed guidance.

This is not a trivial concern. Approximately 50% of facility-related CRLs involve CDMOs. If PreCheck cannot accommodate the CDMO business model, its utility is limited.

The Confidentiality Paradox: Good for Companies, Uncertain for Consumers

The confidentiality protections embedded in the DMF system—and by extension, in PreCheck’s Type V DMF—serve a legitimate commercial purpose. They allow manufacturers to protect proprietary manufacturing processes, equipment specifications, and quality system innovations from competitors. This protection is particularly critical for Contract Manufacturing Organizations (CMOs) who serve multiple competing sponsors and cannot afford to have one client’s proprietary methods disclosed to another.

But there is a tension here that deserves explicit acknowledgment: confidentiality rules that benefit companies are not necessarily optimal for consumers. This is not an argument for eliminating trade secret protections—innovation requires some degree of secrecy. Rather, it is a call to examine where the balance is struck and whether current confidentiality practices are serving the public interest as robustly as they serve commercial interests.

What Confidentiality Hides from Public View

Under current FDA confidentiality rules (21 CFR 314.420 for DMFs, and broader FOIA exemptions for commercial information), the following categories of information are routinely shielded from public disclosure.

1. Manufacturing Process Details

The detailed manufacturing procedures, equipment specifications, and process parameters submitted in Type II DMFs (drug substances) and Type V DMFs (facilities) are never disclosed to the public. They may not even be disclosed to the sponsor referencing the DMF—only the FDA reviews them.

This means that if a manufacturer is using a novel but potentially risky manufacturing technique—say, a continuous manufacturing process that has not been validated at scale, or a cleaning procedure that is marginally effective—the public has no way to know. The FDA reviews this information, but the public cannot verify the FDA’s judgment.

2. Drug Pricing Data and Financial Arrangements

Pharmaceutical companies have successfully invoked trade secret protections to keep drug prices, manufacturing costs, and financial arrangements (rebates, discounts) confidential. In the United States, transparency laws requiring companies to disclose drug pricing information have faced constitutional challenges on the grounds that such disclosure constitutes an uncompensated “taking” of trade secrets.

This opacity prevents consumers, researchers, and policymakers from understanding why drugs cost what they cost and whether those prices are justified by manufacturing expenses or are primarily driven by monopoly pricing.

3. Manufacturing Deficiencies and Inspection Findings

When the FDA conducts an inspection and issues a Form FDA 483 (Inspectional Observations), those observations are eventually made public. But the detailed underlying evidence—the batch records showing failures, the deviations that were investigated, the CAPA plans that were proposed—remain confidential as part of the company’s internal quality records.

This means the public can see that a deficiency occurred, but cannot assess how serious it was or whether the corrective action was adequate. We are asked to trust that the FDA’s judgment was sound, without access to the data that informed that judgment.

The Public Interest Argument for Greater Transparency

The case for reducing confidentiality protections—or at least creating exceptions for public health—rests on several arguments:

Argument 1: The Public Funds Drug Development

As health law scholars have noted, the public makes extraordinary investments in private companies’ drug research and development through NIH grants, tax incentives, and government contracts. Yet details of clinical trial data, manufacturing processes, and government contracts often remain secret, even though the public paid for the research.

During the COVID-19 pandemic, for example, the Johnson & Johnson vaccine contract explicitly allowed the company to keep secret “production/manufacturing know-how, trade secrets, [and] clinical data,” despite massive public funding of the vaccine’s development. European Commission vaccine contracts similarly included generous redactions of price per dose, amounts paid up front, and rollout schedules.

If the public is paying for innovation, the argument goes, the public should have access to the results.

Argument 2: Regulators Are Understaffed and Sometimes Wrong

The FDA is chronically understaffed and under pressure to approve medicines quickly. Regulators sometimes make mistakes. Without access to the underlying data—manufacturing details, clinical trial results, safety signals—independent researchers cannot verify the FDA’s conclusions or identify errors that might not be apparent to a time-pressured reviewer.

Clinical trial transparency advocates argue that summary-level data, study protocols, and even individual participant data can be shared in ways that protect patient privacy (through anonymization and redaction) while allowing independent verification of safety and efficacy claims.

The same logic applies to manufacturing data. If a facility has chronic contamination control issues, or a process validation that barely meets specifications, should that information remain confidential? Or should researchers, patient advocates, and public health officials have access to assess whether the FDA’s acceptance of the facility was reasonable?

Argument 3: Trade Secret Claims Are Often Overbroad

Legal scholars studying pharmaceutical trade secrecy have documented that companies often claim trade secret protection for information that does not meet the legal definition of a trade secret.

For example, “naked price” information—the actual price a company charges for a drug—has been claimed as a trade secret to prevent regulatory disclosure, even though such information provides minimal competitive advantage and is of significant public interest. Courts have begun to push back on these claims, recognizing that the public interest in transparency can outweigh the commercial interest in secrecy, especially in highly regulated industries like pharmaceuticals.

The concern is that companies use trade secret law strategically to suppress unwanted regulation, transparency, and competition—not to protect genuine innovations.

Argument 4: Secrecy Delays Generic Competition

Even after patent and data exclusivity periods expire, trade secret protections allow pharmaceutical companies to keep the precise composition or manufacturing process for medications confidential. This slows the release of generic competitors by preventing them from relying on existing engineering and manufacturing data.

For complex biologics, this problem is particularly acute. Biosimilar developers must reverse-engineer the manufacturing process without access to the originator’s process data, leading to delays of many years and higher costs.

If manufacturing data were disclosed after a defined exclusivity period—say, 10 years—generic and biosimilar developers could bring competition to market faster, reducing drug prices for consumers.

The Counter-Argument: Why Companies Need Confidentiality

It is important to acknowledge the legitimate reasons why confidentiality protections exist:

1. Protecting Innovation Incentives

If manufacturing processes were disclosed, competitors could immediately copy them, undermining the innovator’s investment in developing the process. This would reduce incentives for process innovation and potentially slow the development of more efficient, higher-quality manufacturing methods.

2. Preventing Misuse of Information

Detailed manufacturing data could, in theory, be used by bad actors to produce counterfeit drugs or to identify vulnerabilities in the supply chain. Confidentiality reduces these risks.

3. Maintaining Competitive Differentiation

For CMOs in particular, their manufacturing expertise is their product. If their processes were disclosed, they would lose competitive advantage and potentially business. This could consolidate the industry and reduce competition among manufacturers.

4. Protecting Collaborations

The DMF system enables collaborations between API suppliers, excipient manufacturers, and drug sponsors precisely because each party can protect its proprietary information. If all information had to be disclosed, vertical integration would increase (companies would manufacture everything in-house to avoid disclosure), reducing specialization and efficiency.

Where Should the Balance Be?

The tension is real, and there is no simple resolution. But several principles might guide a more consumer-protective approach to confidentiality:

Principle 1: Time-Limited Secrecy

Trade secrets currently have no expiration date—they can remain secret indefinitely, as long as they remain non-public. But public health interests might be better served by time-limited confidentiality. After a defined period (e.g., 10-15 years post-approval), manufacturing data could be disclosed to facilitate generic/biosimilar competition.

Principle 2: Public Interest Exceptions

Confidentiality rules should include explicit public health exceptions that allow disclosure when there is a compelling public interest—for example, during pandemics, public health emergencies, or when safety signals emerge. Oregon’s drug pricing transparency law includes such an exception: trade secrets are protected unless the public interest requires disclosure.

Principle 3: Independent Verification Rights

Researchers, patient advocates, and public health officials should have structured access to clinical trial data, manufacturing data, and inspection findings under conditions that protect commercial confidentiality (e.g., through data use agreements, anonymization, secure research environments). The goal is not to publish trade secrets on the internet, but to enable independent verification of regulatory decisions.

The FDA already does this in limited ways—for example, by allowing outside experts to review confidential data during advisory committee meetings under non-disclosure agreements. This model could be expanded.

Principle 4: Narrow Trade Secret Claims

Courts and regulators should scrutinize trade secret claims more carefully, rejecting overbroad claims that seek to suppress transparency without protecting genuine innovation. “Naked price” information, aggregate safety data, and high-level manufacturing principles should not qualify for trade secret protection, even if detailed process parameters do.

Implications for PreCheck

In the context of PreCheck, the confidentiality tension manifests in several ways:

For Type V DMFs: The facility information submitted in Phase 1—site layouts, quality systems, QMM practices—will be reviewed by the FDA but not disclosed to the public or even to other sponsors using the same CMO. If a facility has marginal quality practices but passes PreCheck Phase 1, the public will never know. We are asked to trust the FDA’s judgment without transparency into what was reviewed or what deficiencies (if any) were identified.

For QMM Disclosure: Industry is concerned that submitting Quality Management Maturity information is “risky” because it discloses advanced practices beyond CGMP requirements. But the flip side is: if manufacturers are not willing to disclose their quality practices, how can regulators—or the public—assess whether those practices are adequate?

QMM is supposed to reward transparency and maturity. But if the information remains confidential and is never subjected to independent scrutiny, it becomes another form of compliance theater—a document that the FDA reviews in secret, with no external verification.

For Inspection Reliance: If the FDA begins accepting EMA GMP certificates or PIC/S inspection reports (as industry has requested), will those international inspection findings be more transparent than U.S. inspections? In some jurisdictions, yes—the EU publishes more detailed inspection outcomes than the FDA does. But in other jurisdictions, confidentiality practices may be even more restrictive.

A Tension Worth Monitoring

I do not claim to have resolved this tension. Reasonable people can disagree on where the line should be drawn between protecting innovation and ensuring public accountability.

But what I will argue is this: the tension deserves ongoing attention. As PreCheck evolves, as QMM assessments become more detailed, as Type V DMFs accumulate facility data over years—we should ask, repeatedly:

  • Who benefits from confidentiality, and who bears the risk?
  • Are there ways to enable independent verification without destroying commercial incentives?
  • Is the FDA using its discretion to share data proactively, or defaulting to secrecy when transparency might serve the public interest?

The history of pharmaceutical regulation is, in part, a history of secrets revealed too late. Vioxx’s cardiovascular risks. Thalidomide’s teratogenicity. OxyContin’s addictiveness. In each case, information that was known or knowable earlier remained hidden—sometimes due to fraud, sometimes due to regulatory caution, sometimes due to confidentiality rules that prioritized commercial interests over public health.

PreCheck, if it succeeds, will create a new repository of confidential facility data held by the FDA. That data could be a public asset—enabling faster approvals, better-informed regulatory decisions, earlier detection of quality problems. Or it could become another black box, where the public is asked to trust that the system works without access to the evidence.

The choice is not inevitable. It is a design decision—one that regulators, legislators, and industry will make, explicitly or implicitly, in the years ahead.

We should make it explicitly, with full awareness of whose interests are being prioritized and what risks are being accepted on behalf of patients who have no seat at the table.

Challenge 4: QMM is Not Fully Defined, and Submission Feels “Risky”

As discussed earlier, manufacturers are wary of submitting Quality Management Maturity (QMM) information because the assessment framework is not fully developed.

One attendee at the public meeting described QMM submission as “risky” because:

  • The FDA has not published the final QMM assessment protocol
  • The maturity criteria are subjective and open to interpretation
  • Disclosing quality practices beyond CGMP requirements could create new expectations that the manufacturer must meet

The analogy is this: if you tell the FDA, “We use statistical process control to detect process drift in real-time,” the FDA might respond, “Great! Show us your SPC data for the last two years.” If that data reveals a trend that the manufacturer considered acceptable but the FDA considers concerning, the manufacturer has created a problem by disclosing the information.

This is the opposite of the trust-building that QMM is supposed to enable. Instead of rewarding manufacturers for advanced quality practices, the program risks punishing them for transparency.

Until the FDA clarifies that QMM participation is non-punitive and that disclosure of advanced practices will not trigger heightened scrutiny, industry will remain reluctant to engage fully with this component of PreCheck.

Challenge 5: Resource Constraints—Will PreCheck Starve Other FDA Programs?

Industry stakeholders raised a practical concern: if the FDA dedicates inspectors and reviewers to PreCheck, will that reduce resources for routine surveillance inspections, post-approval change reviews, and other critical programs?

The FDA has not provided a detailed resource plan for PreCheck. The program is described as voluntary, which implies it is additive to existing workload, not a replacement for existing activities.

But inspectors and reviewers are finite resources. If PreCheck becomes popular (which the FDA hopes it will), the agency will need to either:

  • Hire additional staff to support PreCheck (requiring Congressional appropriations)
  • Deprioritize other inspection activities (e.g., routine surveillance)
  • Limit the number of PreCheck engagements per year (creating a bottleneck)

One industry representative noted that the economic incentives for domestic manufacturing are weak—it takes 5-7 years to build a new plant, and generic drug margins are thin. Unless the FDA can demonstrate that PreCheck provides substantial time and cost savings, manufacturers may not participate at the scale needed to meet the program’s supply chain security goals.

The CRL Crisis—How Facility Deficiencies Are Blocking Approvals

To understand the urgency of PreCheck, we must examine the data on inspection-related Complete Response Letters (CRLs).

The Numbers: CRLs Are Rising, Facility Issues Are a Leading Cause

In 2023, BLAs were issued CRLs nearly half the time—an unprecedented rate. This represents a sharp increase from previous years, driven by multiple factors:

  • More BLA submissions overall (especially biosimilars under the 351(k) pathway)
  • Increased scrutiny of manufacturing and CMC sections
  • More for-cause inspections (up 250% in 2025 compared to historical baseline)

Of the CRLs issued in 2023-2024, approximately 20% were due to facility inspection failures. This makes facility issues the third most common CRL driver, behind Manufacturing/CMC deficiencies (44%) and Clinical Evidence Gaps (44%).

Breaking down the facility-related CRLs:

  • Foreign manufacturing sites are associated with disproportionately many CRLs relative to the number of PLIs conducted
  • 50% of facility deficiencies involve Contract Manufacturing Organizations (CMOs)
  • 75% of Applicant-Site CRs are for biosimilar applications
  • The five most-cited facilities account for ~35% of CR deficiencies

This last statistic is revealing: the CRL problem is concentrated among a small number of repeat offenders. These facilities receive CRLs on multiple products, suggesting systemic quality issues that are not being resolved between applications.

What Deficiencies Are Causing CRLs?

Analysis of FDA 483 observations and warning letters from FY2024 reveals the top inspection findings driving CRLs:

  1. Data Integrity Failures (most common)
    • ALCOA+ principles not followed
    • Inadequate audit trails
    • 21 CFR Part 11 non-compliance
  2. Quality Unit Failures
    • Inadequate oversight
    • Poor release decisions
    • Ineffective CAPA systems
    • Superficial root cause analysis
  3. Inadequate Process/Equipment Qualification
    • Equipment not qualified before use
    • Process validation protocols deficient
    • Continued Process Verification not implemented
  4. Contamination Control and Environmental Monitoring Issues
    • Inadequate monitoring locations (the “representative” trap discussed in my Rechon and LeMaitre analyses)
    • Failure to investigate excursions
    • Contamination Control Strategy not followed
  5. Stability Program Deficiencies
    • Incomplete stability testing
    • Data does not support claimed shelf-life

These findings are not product-specific. They are systemic quality system failures that affect the facility’s ability to manufacture any product reliably.

This is the fundamental problem with the current PAI/PLI model: the FDA discovers general GMP deficiencies during a product-specific inspection, and those deficiencies block approval even though they are not unique to that product.

The Cascade Effect: One Facility Failure Blocks Multiple Approvals

The data on repeat offenders is particularly troubling. Facilities with ≥3 CRs are primarily biosimilar manufacturers or CMOs.

This creates a cascade: a CMO fails a PLI for Product A. The FDA places the CMO on heightened surveillance. Products B, C, and D—all unrelated to Product A—face delayed PAIs because the FDA prioritizes re-inspecting the CMO to verify corrective actions. By the time Products B, C, and D reach their PDUFA dates, the CMO still has not cleared the OAI classification, and all three products receive CRLs.

This is the opposite of a risk-based system. Products B, C, and D are being held hostage by Product A’s facility issues, even though the manufacturing processes are different and the sponsors are different.

The EMA’s decoupled model avoids this by treating the facility as a separate remediation pathway. If the CMO has GMP issues, the NCA works with the CMO to fix them. Product applications proceed on their own timeline. If the facility is not compliant, products cannot be approved, but the remediation does not block the application review.

For-Cause Inspections: The FDA Is Catching More Failures

One contributing factor to the rise in CRLs is the sharp increase in for-cause inspections.

In 2025, the FDA conducted for-cause inspections at nearly 25% of all inspection events, up from the historical baseline of ~10%. For-cause inspections are triggered by:

  • Consumer complaints
  • Post-market safety signals (Field Alert Reports, adverse event reports)
  • Product recalls or field alerts
  • Prior OAI inspections or warning letters

For-cause inspections have a 33.5% OAI rate—5.6 times higher than routine inspections. And approximately 50% of OAI classifications lead to a warning letter or import alert.
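The quoted figures imply some useful derived numbers. Here is a quick sanity check using only the statistics cited above; reading "5.6 times higher" as a 5.6x ratio is my assumption:

```python
# Back-of-envelope check on the inspection statistics quoted above. The
# inputs are the figures cited in the text; treating "5.6 times higher"
# as a 5.6x ratio is an assumption.
for_cause_oai_rate = 0.335   # OAI rate for for-cause inspections
ratio_vs_routine = 5.6       # for-cause OAI rate relative to routine inspections
oai_to_action_rate = 0.50    # share of OAI outcomes leading to a warning letter or import alert

implied_routine_oai = for_cause_oai_rate / ratio_vs_routine
p_action_given_for_cause = for_cause_oai_rate * oai_to_action_rate

print(f"Implied routine OAI rate: {implied_routine_oai:.1%}")  # ~6.0%
print(f"P(warning letter or import alert | for-cause): {p_action_given_for_cause:.1%}")  # 16.8%
```

In other words, roughly one in six for-cause inspections ends in an enforcement action, against an implied routine OAI rate of about 6%.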

This suggests that the FDA is increasingly detecting facilities with serious compliance issues that were not evident during prior routine inspections. These facilities are then subjected to heightened scrutiny, and their pending product applications face CRLs.

The problem: for-cause inspections are reactive. They occur after a failure has already reached the market (a recall, a complaint, a safety signal). By that point, patient harm may have already occurred.

PreCheck is, in theory, a proactive alternative. By evaluating facilities early (Phase 1), the FDA can identify systemic quality issues before the facility begins commercial manufacturing. But PreCheck only applies to new facilities. It does not solve the problem of existing facilities with poor compliance histories.


A Framework for Site Readiness—In Place, In Use, In Control

The current PAI/PLI model treats site readiness as a binary: the facility is either “compliant” or “not compliant” at a single moment in time.

PreCheck introduces a two-phase model, separating facility design review (Phase 1) from product-specific review (Phase 2).

But I propose that a more useful—and more falsifiable—framework for site readiness is three-stage:

  1. In Place: Systems, procedures, equipment, and documentation exist and meet design specifications.
  2. In Use: Systems and procedures are actively implemented in routine operations as designed.
  3. In Control: Systems maintain validated state through continuous verification, trend analysis, and proactive improvement.

This framework maps directly onto:

  • The FDA’s process validation lifecycle (Stage 1: Process Design = In Place; Stage 2: Process Qualification = In Use; Stage 3: Continued Process Verification = In Control)
  • The ISPE/EU Annex 15 qualification stages (DQ/IQ = In Place; OQ/PQ = In Use; Ongoing monitoring = In Control)
  • The ICH Q10 “state of control” concept (In Control)

The advantage of this framework is that it explicitly separates three distinct questions that are often conflated:

  • Does the system exist? (In Place)
  • Is the system being used? (In Use)
  • Is the system working? (In Control)

A facility can be “In Place” without being “In Use” (e.g., SOPs are written but operators are not trained). A facility can be “In Use” without being “In Control” (e.g., operators follow procedures, but the process produces high variability and frequent deviations).

Let me define each stage in detail.

Stage 1: In Place (Structural Readiness)

Definition: Systems, procedures, equipment, and documentation exist and meet design specifications.

This is the output of Design Qualification (DQ) and Installation Qualification (IQ). It answers the question: “Has the facility been designed and built according to GMP requirements?”

Key Elements:

  • Facility layout meets User Requirements Specification (URS) and regulatory expectations
  • Equipment installed per manufacturer specifications
  • SOPs written and approved
  • Quality systems documented (change control, deviation management, CAPA, training)
  • Utilities qualified (HVAC, water systems, compressed air, clean steam)
  • Cleaning and sanitation programs established
  • Environmental monitoring plan defined
  • Personnel hired and organizational chart defined

Assessment Methods:

  • Document review (URS, design specifications, as-built drawings)
  • Equipment calibration certificates
  • SOP index review
  • Site Master File review
  • Validation Master Plan review

Alignment with PreCheck: This is what Phase 1 (Facility Readiness) evaluates. The Type V DMF submitted during Phase 1 contains evidence that systems are In Place.

Alignment with EMA: This corresponds to the initial GMP inspection conducted by the NCA before granting a manufacturing license.

Inspection Outcome: If a facility is “In Place,” it means the infrastructure exists. But it says nothing about whether the infrastructure is functional or effective.

Stage 2: In Use (Operational Readiness)

Definition: Systems and procedures are actively implemented in routine operations as designed.

This is the output of Operational and Performance Qualification (OQ/PQ) and Stage 2 Process Qualification. It answers the question: “Can the facility execute its processes reliably?”

Key Elements:

  • Equipment operates within qualified parameters during production
  • Personnel are trained and demonstrate competency
  • Process consistently produces batches meeting specifications
  • Environmental monitoring executes according to the contamination control strategy and generates data
  • Quality systems actively used (deviations documented, investigations completed, CAPA plans implemented)
  • Data integrity controls functioning (audit trails enabled, electronic records secure)
  • Work-as-Done matches Work-as-Imagined 

Assessment Methods:

  • Observation of operations
  • Review of batch records and deviations
  • Interviews with operators and other staff
  • Trending of process data (yields, cycle times, in-process controls)
  • Audit of training records and competency assessments
  • Inspection of actual manufacturing runs (not simulations)

Alignment with PreCheck: This is what Phase 2 (Application Submission) evaluates, particularly during the PAI/PLI (if one is conducted). The FDA inspector observes operations, reviews batch records, and verifies that the process described in the CMC section is actually being executed.

Alignment with EMA: This corresponds to the pre-approval GMP inspection requested by the CHMP if the facility has not been recently inspected.

Inspection Outcome: If a facility is “In Use,” it means the systems are functional. But it does not guarantee that the systems will remain functional over time or that the organization can detect and correct drift.

Stage 3: In Control (Sustained Performance)

Definition: Systems maintain validated state through continuous verification, trend analysis, and proactive improvement.

This is the output of Stage 3 Process Validation (Continued Process Verification). It answers the question: “Does the facility have the organizational discipline to sustain compliance?”

Key Elements:

  • Statistical process control (SPC) implemented to detect trends and shifts
  • Routine monitoring identifies drift before it becomes deviation
  • Root cause analysis is rigorous and identifies systemic issues, not just proximate causes
  • CAPA effectiveness is verified—corrective actions prevent recurrence
  • Process capability is quantified and improving (Cp, Cpk trending upward)
  • Annual Product Reviews drive process improvements
  • Knowledge management systems capture learnings from deviations, investigations, and inspections
  • Quality culture is embedded—staff at all levels understand their role in maintaining control
  • The organization actively seeks to falsify its own assumptions (the core principle of this blog)
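As a concrete illustration of two of these elements, here is a minimal sketch of how process capability (Cp, Cpk) and a simple run-rule drift check might be computed. The specification limits, run-rule length, and batch data are invented for the example; real SPC systems use control limits and multiple rules (e.g., the Western Electric rules):

```python
# Illustrative sketch only (invented data and limits, not any firm's SPC
# system): computing process capability (Cp, Cpk) and flagging drift with a
# single run rule (8 consecutive points on one side of the overall mean).
import statistics

def capability(data, lsl, usl):
    """Return (Cp, Cpk) for a sample against lower/upper specification limits."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sd)
    cpk = min(usl - mean, mean - lsl) / (3 * sd)
    return cp, cpk

def drifting(data, run_length=8):
    """True if the last `run_length` points all sit on one side of the mean."""
    mean = statistics.mean(data)
    tail = data[-run_length:]
    return all(x > mean for x in tail) or all(x < mean for x in tail)

# Hypothetical assay results (% of label claim) showing an upward shift
batches = [99.0, 99.2, 98.9, 99.1, 100.8, 100.9, 101.0,
           100.9, 101.1, 101.0, 100.8, 101.2]
cp, cpk = capability(batches, lsl=95.0, usl=105.0)
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}  drifting={drifting(batches)}")  # drifting=True
```

Note that the run rule fires even though every batch is comfortably within specification, and Cpk lags Cp because the process has shifted off-center. That is exactly the “identifies drift before it becomes deviation” behavior described above.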

Assessment Methods:

  • Trending of process capability indices over time
  • Review of Annual Product Reviews and management review meetings
  • Audit of CAPA effectiveness (do similar deviations recur?)
  • Statistical analysis of deviation rates and types
  • Assessment of organizational culture (e.g., FDA’s QMM assessment)
  • Evaluation of how the facility responds to “near-misses” and “weak signals”

Alignment with PreCheck: This is not explicitly evaluated in PreCheck as currently designed. PreCheck Phase 1 and Phase 2 focus on facility design and process execution, but do not assess long-term performance or organizational maturity.

However, the inclusion of Quality Management Maturity (QMM) practices in the Type V DMF is an attempt to evaluate this dimension. A facility with mature QMM practices is, in theory, more likely to remain “In Control” over time.

Alignment with EMA: This corresponds to the routine re-inspections conducted every 1-3 years. The purpose of these inspections is not to re-validate the facility (which is already licensed), but to verify that the facility has maintained its validated state and has not accumulated unresolved compliance drift.

Inspection Outcome: If a facility is “In Control,” it means the organization has demonstrated sustained capability to manufacture products reliably. This is the goal of all GMP systems, but it is the hardest state to verify because it requires longitudinal data and cultural assessment, not just a snapshot inspection.

Mapping the Framework to Regulatory Timelines

The three-stage framework provides a logic for when and how to conduct regulatory inspections.

In Place
  • Timing: Before operations begin
  • Evaluation method: Design review, document audit, installation verification
  • FDA equivalent: PreCheck Phase 1 (Facility Readiness)
  • EMA equivalent: Initial GMP inspection for license
  • Failure mode: Facility design flaws, inadequate documentation, unqualified equipment

In Use
  • Timing: During early operations
  • Evaluation method: Process performance, batch record review, observation of operations
  • FDA equivalent: PreCheck Phase 2 / PAI/PLI
  • EMA equivalent: Pre-approval inspection (if needed)
  • Failure mode: Process failures, operator errors, inadequate training, poor execution

In Control
  • Timing: Ongoing (post-approval)
  • Evaluation method: Trend analysis, statistical monitoring, culture assessment
  • FDA equivalent: Routine surveillance inspections, QMM assessment
  • EMA equivalent: Routine re-inspections (1-3 years)
  • Failure mode: Process drift, CAPA ineffectiveness, organizational complacency, systemic failures

The current PAI/PLI model collapses “In Place,” “In Use,” and “In Control” into a single inspection event conducted at the worst possible time (near PDUFA). This creates the illusion that a facility’s compliance status can be determined in 5-10 days.

PreCheck separates “In Place” (Phase 1) from “In Use” (Phase 2), which is a significant improvement. But it still does not address the hardest question: how do we know a facility will remain “In Control” over time?

The answer is: you don’t. Not from a one-time inspection. You need continuous verification.

This is the insight embedded in the FDA’s 2011 process validation guidance: validation is not an event, it is a lifecycle. The validated state must be maintained through Stage 3 Continued Process Verification.

The same logic applies to facilities. A facility is not “validated” by passing a single PAI. It is validated by demonstrating control over time.

PreCheck needs to be part of a wider model at the FDA:

  1. Allow facilities that complete Phase 1 to earn presumption of compliance for future product applications (reducing PAI frequency)
  2. Implement more robust routine surveillance inspections on a 1-3 year cycle to verify “In Control” status. The rise in for-cause inspections shows how far the FDA currently is from this target.
  3. Adjust inspection frequency dynamically based on the facility’s performance (low-risk facilities inspected less often, high-risk facilities more often)
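Point 3 can be sketched as a simple scheduling function. The risk factors, weights, and interval bands below are entirely hypothetical, invented only to illustrate the idea of dynamic, performance-based inspection frequency:

```python
# Hypothetical sketch of dynamic, performance-based inspection frequency.
# The risk factors, weights, and interval bands are invented for
# illustration; the FDA's actual site-selection model is more complex.
def inspection_interval_months(oai_history: int, open_capas: int,
                               years_since_last: float, qmm_mature: bool) -> int:
    """Map a crude facility risk score to a re-inspection interval in months."""
    score = 3 * oai_history + open_capas
    if years_since_last > 3:
        score += 2   # stale information is itself a risk
    if qmm_mature:
        score -= 2   # credit for demonstrated quality management maturity
    if score >= 6:
        return 6     # high risk: inspect within 6 months
    if score >= 3:
        return 12
    return 36        # low risk: 3-year surveillance cycle

print(inspection_interval_months(0, 1, 1.0, qmm_mature=True))   # 36
print(inspection_interval_months(2, 3, 4.0, qmm_mature=False))  # 6
```

The point of the sketch is the shape, not the numbers: good performance earns a longer leash, poor performance shortens it, and the schedule updates continuously rather than being fixed at licensing.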

This is the system the industry is asking for. It is the system the FDA could build on the foundation of PreCheck—if it commits to the long-term vision.

The Quality Experience Must Be Brought In at Design—And Most Companies Get This Wrong

PreCheck’s most important innovation is not its timeline or its documentation requirements. It is the implicit philosophical claim that facilities can be made better by involving quality experts at the design phase, not at the commissioning phase.

This is a radical departure from current practice. In most pharmaceutical manufacturing projects, the sequence is:

  1. Engineering designs the facility (architecture, HVAC, water systems, equipment layout)
  2. Procurement procures equipment based on engineering specs
  3. Construction builds the facility
  4. Commissioning and qualification begin (and quality suddenly becomes relevant)

Quality is brought in too late. By the time a quality professional reviews a facility design, the fundamental decisions—pipe routing, equipment locations, air handling unit sizing, cleanroom pressure differentials—have already been made. Suggestions to change the design are met with “we can’t change that now, we’ve already ordered the equipment” or “that’s going to add 3 months to the project and cost $500K.”

This is Quality-by-Testing (QbT): design first, test for compliance later, and hope the test passes.

PreCheck, by contrast, asks manufacturers to submit facility designs to the FDA during the design phase, while the designs are still malleable. The FDA can identify compliance gaps—inadequate environmental monitoring locations, cleanroom pressure challenges, segregation inadequacies, data integrity risks—before construction begins.

This is the beginning of Quality-by-Design (QbD) applied to facilities.

But for PreCheck to work—for Phase 1 to actually prevent facility disasters—manufacturers must embed quality expertise in the design process from the start. And most companies do not do this well.

The “Quality at the End” Trap

The root cause is organizational structure and financial incentives. In a typical pharmaceutical manufacturing project:

  • Engineering owns the timeline and the budget
  • Quality is invited to the party once the facility is built
  • Operations is waiting in the wings to take over once everything is “validated”

Each function optimizes locally:

  • Engineering optimizes for cost and schedule (build it fast, build it cheap)
  • Quality optimizes for compliance (every SOP written, every deviation documented)
  • Operations optimizes for throughput (run as many batches as possible per week)

Nobody optimizes for “Will this facility sustainably produce quality products?”—which is a different optimization problem entirely.

Bringing a quality professional into the design phase requires:

  • Allocating budget for quality consultation during design (not just during qualification)
  • Slowing the design phase to allow time for risk assessments and tradeoff discussions
  • Empowering quality to say “no” to designs that meet engineering requirements but fail quality risk management
  • Building quality leadership into the project from the kickoff, not adding it in Phase 3

Most companies treat this as optional. It is not optional if you want PreCheck to work.

Why Most Companies Fail to Do This Well

Despite the theoretical importance of bringing quality into design, most pharmaceutical companies still treat design-phase quality as a non-essential activity. Several reasons explain this:

1. Quality Does Not Own a Budget Line

In a manufacturing project, the Engineering team has a budget (equipment, construction, contingency). Operations has a budget (staffing, training). Quality typically has no budget allocation for the design phase. Quality professionals are asked to contribute their “expertise” without resources, timeline allocation, or accountability.

The result: quality advice is given in meetings but not acted upon, because there are no resources to implement it.

2. Quality Experience Is Scarce

The pharmaceutical industry has a shortage of quality professionals with deep experience in facility design, contamination control, data integrity architecture, and process validation. Many quality people come from a compliance background (inspections, audits, documentation) rather than a design background (risk management, engineering, systems thinking).

When a designer asks, “What should we do about data integrity?” the compliance-oriented quality person says, “We’ll need SOPs and training programs.” But the design-oriented quality person says, “We need to architect the IT infrastructure such that changes are logged and cannot be backdated. Here’s what that requires…”

The former approach adds cost and schedule. The latter approach prevents problems.
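To make the contrast concrete, here is a minimal sketch of what “architected” data integrity can mean: an append-only audit trail in which each entry is chained to the previous one by a hash, so an altered or backdated record is caught by verification rather than by procedural discipline. This is an invented illustration, not a 21 CFR Part 11 implementation:

```python
# Invented illustration (not a 21 CFR Part 11 implementation): a hash-chained
# audit trail. Each entry stores the hash of the previous entry, so altering
# or backdating any record breaks verification of everything after it.
import hashlib
import json
import time

def append_entry(trail, user, action):
    """Append an audit-trail entry chained to the previous entry's hash."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    body = {"ts": time.time(), "user": user, "action": action, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({k: body[k] for k in ("ts", "user", "action", "prev")},
                   sort_keys=True).encode()).hexdigest()
    trail.append(body)
    return body

def verify(trail):
    """Recompute every hash; any edit to an earlier record breaks the chain."""
    prev = "0" * 64
    for entry in trail:
        if entry["prev"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps({k: entry[k] for k in ("ts", "user", "action", "prev")},
                       sort_keys=True).encode()).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

trail = []
append_entry(trail, "analyst1", "sample weighed: 250.3 mg")
append_entry(trail, "analyst1", "assay result recorded: 99.8%")
print(verify(trail))                               # True
trail[0]["action"] = "sample weighed: 250.0 mg"    # silent "correction"
print(verify(trail))                               # False
```

The SOP-and-training approach asks people not to edit records; the architected approach makes an edit self-evident. That is the difference between adding cost and preventing problems.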

3. The Design Phase Is Urgent

Pharmaceutical companies operate under intense pressure to bring new facilities online as quickly as possible. The design phase is compressed—schedules are aggressive, meetings are packed, decisions are made rapidly.

Adding quality review to the design phase is perceived as slowing the project down. A quality person who carefully works through a contamination control strategy (“Wait, have we tested whether the airflow assumption holds at scale? Do we understand the failure modes?”) is seen as a bottleneck.

The company that brings in quality expertise early pays a perceived cost (delay, complexity) and receives a delayed benefit (better operations, fewer deviations, smoother inspections). In a pressure-cooker environment, the delayed benefit is not valued.

4. Quality Experience Is Not Integrated Across the Organization

In a typical pharmaceutical company, quality expertise is fragmented:

  • Quality Assurance handles deviations and investigations
  • Quality Control runs the labs
  • Regulatory Affairs manages submissions
  • Process Validation leads qualification projects

None of these groups are responsible for facility design quality. So it falls to no one, and it ends up being everyone’s secondary responsibility—which means it is no one’s primary responsibility.

A company with an integrated quality culture would have a quality leader who is accountable for the design, and who has authority to delay the project if critical risks are not addressed. Most companies do not have this structure.

What PreCheck Requires: The Quality Experience in Design

For PreCheck to deliver its promised benefits, companies participating in Phase 1 must make a commitment that quality expertise is embedded throughout design.

Specifically:

1. Quality leadership is assigned early – Someone in quality (not engineering, not operations) is accountable for quality risk management in the facility design from Day 1.

2. Quality has authority to influence design – The quality leader can say “no” to designs that create unacceptable quality risks, even if the design meets engineering specifications.

3. Quality risk management is performed systematically – Not just “quality review of designs,” but structured risk management identifying critical quality risks and mitigation strategies.

4. Design Qualification includes quality experts – DQ is not just engineering verification that design meets specs; it includes quality verification that design enables quality control.

5. Contamination control is designed, not tested – Environmental monitoring strategies, microbial testing plans, and statistical approaches are designed into the facility, not bolted on during commissioning.

6. Data integrity is architected – IT systems are designed to prevent data manipulation, not as an afterthought.

7. The organization is aligned on what “quality” means – Not compliance (“checking boxes”), but the organizational discipline to sustain control and to detect and correct drift before it becomes a failure.

This is fundamentally a cultural commitment. It is about believing that quality is not something you add at the end; it is something you design in.

The FDA’s Unspoken Expectation in PreCheck Phase 1

When the FDA reviews a Type V DMF in PreCheck Phase 1, the agency is asking: “Did this manufacturer apply quality expertise to the design?”

How does the FDA assess this? By looking for:

  • Risk assessments that show systematic thinking, not checkbox compliance
  • Design decisions that are justified by quality risk management, not just engineering convenience
  • Contamination control strategies that are grounded in understanding the failure modes
  • Data integrity architectures that prevent (not just detect) problems
  • Quality systems that are designed to evolve and improve, not static and reactive

If the Type V DMF reads like it was prepared by an engineering firm that called quality for comments, the FDA will see it. If it reads like it was co-developed by quality and engineering with equal voice, the FDA will see that too.

PreCheck Phase 1 is not just a design review. It is a quality culture assessment.

And this is why most companies are not ready for PreCheck. Not because they lack the engineering capability to design a facility. But because they lack the quality experience, organizational structure, and cultural commitment to bring quality into the design process as a peer equal to engineering.

Companies that participate in PreCheck with a transactional mindset—”Let’s submit our designs to the FDA and get early feedback”—will get some benefit. They will catch some design issues early.

But companies that participate with a transformational mindset—”We are going to redesign how we approach facility development to embed quality from the start”—will get deeper benefits. They will build facilities that are easier to operate, that generate fewer deviations, that demonstrate sustained control over time, and that will likely pass future inspections without significant findings.

The choice is not forced on the company by PreCheck. PreCheck is voluntary; you can choose the transactional approach.

But if you want the regulatory trust that PreCheck is supposed to enable—if you want the FDA to accept your facility as “ready” with minimal re-inspection—you need to bring the quality experience in at design.

That is what Phase 1 actually measures.

The Epistemology of Trust

Regulatory inspections are not merely compliance checks. They are trust-building mechanisms.

When the FDA inspector walks into a facility, the question is not “Does this facility have an SOP for cleaning validation?” (It does. Almost every facility does.) The question is: “Can I trust that this facility will produce quality products consistently, even when I am not watching?”

Trust cannot be established in 5 days.

Trust is built through:

  • Repeated interactions over time
  • Demonstrated capability under varied conditions
  • Transparency when failures occur
  • Evidence of learning from those failures

The current PAI/PLI model attempts to establish trust through a single high-stakes audit. This is like trying to assess a person’s character by observing them for one hour during a job interview. It is better than nothing, but it is not sufficient.

PreCheck is a step toward a trust-building system. By engaging early (Phase 1) and providing continuity into the application review (Phase 2), the FDA can develop a relationship with the manufacturer rather than a one-off transaction.

But PreCheck as currently proposed is still transactional. It is a program for new facilities. It does not create a facility lifecycle framework. It does not provide a pathway for facilities to earn cumulative trust over multiple products.

The FDA could do this—if it commits to two principles:

1. Decouple facility inspections from product applications.

Facilities should be assessed independently and granted a facility certificate (or equivalent) that can be referenced by multiple products. This separates facility remediation from product approval timelines and prevents the cascade failures we see in the current system.

2. Recognize that “In Control” is not a state achieved once, but a discipline maintained continuously.

The FDA’s own process validation guidance says this explicitly: validation is a lifecycle, not an event. The same logic must apply to facilities. A facility is not “GMP compliant” because it passed one inspection. It is GMP compliant because it has demonstrated, over time, the organizational discipline to detect and correct failures before they reach patients.

PreCheck could be the foundation for this system. But only if the FDA is willing to embrace the full implication of what it has started: that regulatory trust is earned through sustained performance, and that the agency’s job is not to catch failures through surprise inspections, but to partner with manufacturers in building systems that are designed to reveal their own weaknesses.

This is the principle of falsifiable quality applied to regulatory oversight. A quality system that cannot be proven wrong is a quality system that cannot be trusted. A facility that fears inspection is a facility that has not internalized the discipline of continuous verification.

The facilities that succeed under PreCheck—and under any future evolution of this system—will be those that understand that “In Place, In Use, In Control” is not a checklist to complete, but a philosophy to embody.


A 2025 Retrospective for Investigations of a Dog

If the history of pharmaceutical quality management were written as a geological timeline, 2025 would hopefully mark the end of the Holocene of Compliance—a long, stable epoch where “following the procedure” was sufficient to ensure survival—and the beginning of the Anthropocene of Complexity.

For decades, our industry has operated under a tacit social contract. We agreed to pretend that “compliance” was synonymous with “quality.” We agreed to pretend that a validated method would work forever because we proved it worked once in a controlled protocol three years ago. We agreed to pretend that “zero deviations” meant “perfect performance,” rather than “blind surveillance.” We agreed to pretend that if we wrote enough documents, reality would conform to them.

If I had my wish, 2025 would be the year that contract finally dissolved.

Throughout the year—across dozens of posts, technical analyses, and industry critiques on this blog—I have tried to dismantle the comfortable illusions of “Compliance Theater” and show how this theater collides violently with the unforgiving reality of complex systems.

The connecting thread running through every one of these developments is the concept I have returned to obsessively this year: Falsifiable Quality.

This Year in Review is not merely a summary of blog posts. It is an attempt to synthesize the fragmented lessons of 2025 into a coherent argument. The argument is this: A quality system that cannot be proven wrong is a quality system that cannot be trusted.

If our systems—our validation protocols, our risk assessments, our environmental monitoring programs—are designed only to confirm what we hope is true (the “Happy Path”), they are not quality systems at all. They are comfort blankets. And 2025 was the year we finally started pulling the blanket off.

The Philosophy of Doubt

(Reflecting on: The Effectiveness Paradox, Sidney Dekker, and Gerd Gigerenzer)

Before we dissect the technical failures of 2025, let me first establish the philosophical framework that defined this year’s analysis.

In August, I published “The Effectiveness Paradox: Why ‘Nothing Bad Happened’ Doesn’t Prove Your Quality System Works.” It became one of the most discussed posts of the year because it attacked the most sacred metric in our industry: the trend line that stays flat.

We are conditioned to view stability as success. If Environmental Monitoring (EM) data shows zero excursions for six months, we throw a pizza party. If a method validation passes all acceptance criteria on the first try, we commend the development team. If a year goes by with no Critical deviations, we pay out bonuses.

But through the lens of Falsifiable Quality—a concept heavily influenced by the philosophy of Karl Popper, the challenging insights of Deming, and the safety science of Sidney Dekker, whom we discussed in November—these “successes” look suspiciously like failures of inquiry.

The Problem with Unfalsifiable Systems

Karl Popper famously argued that a scientific theory is only valid if it makes predictions that can be tested and proven false. “All swans are white” is a scientific statement because finding one black swan falsifies it. “God is love” is not, because no empirical observation can disprove it.

In 2025, I argued that most Pharmaceutical Quality Systems (PQS) are designed to be unfalsifiable.

  • The Unfalsifiable Alert Limit: We set alert limits based on historical averages + 3 standard deviations. This ensures that we only react to statistical outliers, effectively blinding us to gradual drift or systemic degradation that remains “within the noise.”
  • The Unfalsifiable Robustness Study: We design validation protocols that test parameters we already know are safe (e.g., pH +/- 0.1), avoiding the “cliff edges” where the method actually fails. We prove the method works where it works, rather than finding where it breaks.
  • The Unfalsifiable Risk Assessment: We write FMEAs where the conclusion (“The risk is acceptable”) is decided in advance, and the RPN scores are reverse-engineered to justify it.
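
To make the first of these concrete: here is a minimal sketch (with invented monitoring counts) of how a “mean + 3 SD” alert limit stays silent on a slow drift that an EWMA chart catches.

```python
# Sketch with invented EM counts: a "mean + 3 SD" alert limit stays silent
# on a slow drift that an EWMA (exponentially weighted moving average) catches.
baseline = [5, 6, 4, 5, 7, 5, 6, 4, 5, 6, 5, 6]  # historical monthly CFU counts
mean = sum(baseline) / len(baseline)
sd = (sum((x - mean) ** 2 for x in baseline) / (len(baseline) - 1)) ** 0.5
alert_limit = mean + 3 * sd  # ~8.0 for this data

# Gradual degradation: counts creep up 0.25 CFU per month, never spiking.
drift = [5 + 0.25 * i for i in range(12)]  # 5.00 ... 7.75

shewhart_flags = [x > alert_limit for x in drift]  # never fires on this drift

# EWMA accumulates the small shifts that individual points hide.
lam = 0.2
ewma = mean
ewma_limit = mean + 3 * sd * (lam / (2 - lam)) ** 0.5
ewma_flags = []
for x in drift:
    ewma = lam * x + (1 - lam) * ewma
    ewma_flags.append(ewma > ewma_limit)
```

Every point stays “within the noise” of the three-sigma limit, yet the EWMA breaches its limit well before year end: the drift is real, the alert limit just cannot see it.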

This is “Safety Theater,” a term Dekker uses to describe the rituals organizations perform to look safe rather than be safe.

Safety-I vs. Safety-II

In November’s post “Sidney Dekker: The Safety Scientist Who Influences How I Think About Quality,” I explored Dekker’s distinction between Safety-I (minimizing things that go wrong) and Safety-II (understanding how things usually go right).

Traditional Quality Assurance is obsessed with Safety-I. We count deviations. We count OOS results. We count complaints. When those counts are low, we assume the system is healthy.

But as the LeMaitre Vascular warning letter showed us this year (discussed in Part III), a system can have “zero deviations” simply because it has stopped looking for them. LeMaitre had excellent water data—because they were cleaning the valves before they sampled them. They were measuring their ritual, not their water.

Falsifiable Quality is the bridge to Safety-II. It demands that we treat every batch record not as a compliance artifact, but as a hypothesis test.

  • Hypothesis: “The contamination control strategy is effective.”
  • Test: Aggressive monitoring in worst-case locations, not just the “representative” center of the room.
  • Result: If we find nothing, the hypothesis survives another day. If we find something, we have successfully falsified the hypothesis—which is a good thing because it reveals reality.

The shift from “fearing the deviation” to “seeking the falsification” is a cultural pivot point of 2025.

The Epistemological Crisis in the Lab (Method Validation)

(Reflecting on: USP <1225>, Method Qualification vs. Validation, and Lifecycle Management)

Nowhere was the battle for Falsifiable Quality fought more fiercely in 2025 than in the analytical laboratory.

The proposed revision to USP <1225> Validation of Compendial Procedures (published in Pharmacopeial Forum 51(6)) arrived late in the year, but it serves as the perfect capstone to the arguments I’ve been making since January.

For forty years, analytical validation has been the ultimate exercise in “Validation as an Event.” You develop a method. You write a protocol. You execute the protocol over three days with your best analyst and fresh reagents. You print the report. You bind it. You never look at it again.

This model is unfalsifiable. It assumes that because the method worked in the “Work-as-Imagined” conditions of the validation study, it will work in the “Work-as-Done” reality of routine QC for the next decade.

The Reportable Result: Validating Decisions, Not Signals

The revised USP <1225>—aligned with ICH Q14 (Analytical Procedure Development) and USP <1220> (The Lifecycle Approach)—destroys this assumption. It introduces concepts that force falsifiability into the lab.

The most critical of these is the Reportable Result.

Historically, we validated “the instrument” or “the measurement.” We proved that the HPLC could inject the same sample ten times with < 1.0% RSD.

But the Reportable Result is the final value used for decision-making—the value that appears on the Certificate of Analysis. It is the product of a complex chain: Sampling -> Transport -> Storage -> Preparation -> Dilution -> Injection -> Integration -> Calculation -> Averaging.

Validating the injection precision (the end of the chain) tells us nothing about the sampling variability (the beginning of the chain).

By shifting focus to the Reportable Result, USP <1225> forces us to ask: “Does this method generate decisions we can trust?”

The Replication Strategy: Validating “Work-as-Done”

The new guidance insists that validation must mimic the replication strategy of routine testing.
If your SOP says “We report the average of 3 independent preparations,” then your validation must evaluate the precision and accuracy of that average, not of the individual preparations.

This seems subtle, but it is revolutionary. It prevents the common trick of “averaging away” variability during validation to pass the criteria, only to face OOS results in routine production because the routine procedure doesn’t use the same averaging scheme.

It forces the validation study to mirror the messy reality of the “Work-as-Done,” making the validation data a falsifiable predictor of routine performance, rather than a theoretical maximum capability.
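
The arithmetic behind the “averaging away” trick is easy to sketch. Assuming a hypothetical SOP that reports the mean of three preparations, invented potency data shows how much variability the averaging hides:

```python
import statistics as stats

# Invented intermediate-precision data: nine independent preparations (% label claim).
preps = [98.1, 101.9, 99.4, 100.8, 97.6, 102.3, 99.0, 100.5, 98.9]
sd_individual = stats.stdev(preps)  # variability of single preparations

# If the (assumed) SOP reports the mean of three preparations, the validation
# must characterize the variability of that mean, not of single preps.
reportables = [stats.mean(preps[i:i + 3]) for i in range(0, 9, 3)]
sd_reportable = stats.stdev(reportables)  # far smaller than sd_individual
```

Validate against the averaged numbers while routinely reporting individual results, and you have certified a precision your lab will never achieve; validate the reportable result as it is actually generated, and the validation data becomes a falsifiable predictor.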

Method Qualification vs. Validation: The June Distinction

In June, I wrote “Method Qualification and Validation,” clarifying a distinction that often confuses the industry.

  • Qualification is the “discovery phase” where we explore the method’s limits. It is inherently falsifiable—we want to find where the method breaks.
  • Validation has traditionally been the “confirmation phase” where we prove it works.

The danger, as I noted in that post, is when we skip the falsifiable Qualification step and go straight to Validation. We write the protocol based on hope, not data.

USP <1225> essentially argues that Validation must retain the falsifiable spirit of Qualification. It is not a coronation; it is a stress test.

The Death of “Method Transfer” as We Know It

In a Falsifiable Quality system, a method is never “done.” The Analytical Target Profile (ATP)—a concept from ICH Q14 that permeates the new thinking—is a standing hypothesis: “This method measures Potency within +/- 2%.”

Every time we run a system suitability check, every time we run a control standard, we are testing that hypothesis.

If the method starts drifting—even if it still passes broad system suitability limits—a falsifiable system flags the drift. An unfalsifiable system waits for the OOS.

The draft revision of USP <1225> is a call to arms. It asks us to stop treating validation as a “ticket to ride”—a one-time toll we pay to enter GMP compliance—and start treating it as a “ticket to doubt.” Validation gives us permission to use the method, but only as long as the data continues to support the hypothesis of fitness.

The Reality Check (The “Unholy Trinity” of Warning Letters)

Philosophy and guidelines are fine, but in 2025, reality kicked in the door. The regulatory year was defined by three critical warning letters—Sanofi, LeMaitre, and Rechon—that collectively dismantled the industry’s illusions of control.

It began, as these things often do, with a ghost from the past.

Sanofi Framingham: The Pendulum Swings Back

(Reflecting on: Failure to Investigate Critical Deviations and The Sanofi Warning Letter)

The year opened with a shock. On January 15, 2025, the FDA issued a warning letter to Sanofi’s Framingham facility—the sister site to the legacy Genzyme Allston Landing plant, whose consent decree defined an entire generation of biotech compliance, and much of my career.

In my January analysis (Failure to Investigate Critical Deviations: A Cautionary Tale), I noted that the FDA’s primary citation was a failure to “thoroughly investigate any unexplained discrepancy.”

This is the cardinal sin of Falsifiable Quality.

An “unexplained discrepancy” is a signal from reality. It is the system telling you, “Your hypothesis about this process is wrong.”

  • The Falsifiable Response: You dive into the discrepancy. You assume your control strategy missed something. You use Causal Reasoning (the topic of my May post) to find the mechanism of failure.
  • The Sanofi Response: As the warning letter detailed, they frequently attributed failures to “isolated incidents” or superficial causes without genuine evidence.

This is the “Refusal to Falsify.” By failing to investigate thoroughly, the firm protects the comfortable status quo. They choose to believe the “Happy Path” (the process is robust) over the evidence (the discrepancy).

The Pendulum of Compliance

In my companion post (“Sanofi Warning Letter”), I discussed the “pendulum of compliance.” The Framingham site was supposed to be the fortress of quality, built on the lessons of the Genzyme crisis.

The failure at Sanofi wasn’t a lack of SOPs; it was a lack of curiosity.

The investigators likely had checklists, templates, and timelines (Compliance Theater), but they lacked the mandate—or perhaps the expertise—to actually solve the problem.

This set the thematic stage for the rest of 2025. Sanofi showed us that “closing the deviation” is not the same as fixing the problem. This insight led directly into my August argument in The Effectiveness Paradox: You can close 100% of your deviations on time and still have a manufacturing process that is spinning out of control.

If Sanofi was the failure of investigation (looking back), Rechon and LeMaitre were failures of surveillance (looking forward). Together, they form a complete picture of why unfalsifiable systems fail.

(Reflecting on: Rechon Life Science and LeMaitre Vascular)

Two warning letters in 2025—Rechon Life Science (September) and LeMaitre Vascular (August)—provided brutal case studies in what happens when “representative sampling” is treated as a buzzword rather than a statistical requirement.

Rechon Life Science: The Map vs. The Territory

The Rechon Life Science warning letter was one of the most significant regulatory signals of 2025 for sterile manufacturing. It wasn’t just a list of observations; it was an indictment of unfalsifiable Contamination Control Strategies (CCS).

We spent 2023 and 2024 writing massive CCS documents to satisfy Annex 1. Hundreds of pages detailing airflows, gowning procedures, and material flows. We felt good about them. We felt “compliant.”

Then the FDA walked into Rechon and essentially asked: “If your CCS is so good, why does your smoke study show turbulence over the open vials?”

The warning letter highlighted a disconnect I’ve called “The Map vs. The Territory.”

  • The Map: The CCS document says the airflow is unidirectional and protects the product.
  • The Territory: The smoke study video shows air eddying backward from the operator to the sterile core.

In an unfalsifiable system, we ignore the smoke study (or film it from a flattering angle) because it contradicts the CCS. We prioritize the documentation (the claim) over the observation (the evidence).

In a falsifiable system, the smoke study is the test. If the smoke shows turbulence, the CCS is falsified. We don’t defend the CCS; we rewrite it. We redesign the line.

The FDA’s critique of Rechon’s “dynamic airflow visualization” was devastating because it showed that Rechon was using the smoke study as a marketing video, not a diagnostic tool. They filmed “representative” operations that were carefully choreographed to look clean, rather than the messy reality of interventions.

LeMaitre Vascular: The Sin of “Aspirational Data”

If Rechon was about air, LeMaitre Vascular (analyzed in my August post When Water Systems Fail) was about water. And it contained an even more egregious sin against falsifiability.

The FDA observed that LeMaitre’s water sampling procedures required cleaning and purging the sample valves before taking the sample.

Let’s pause and consider the epistemology of this.

  • The Goal: To measure the quality of the water used in manufacturing.
  • The Reality: Manufacturing operators do not purge and sanitize the valve for 10 minutes before filling the tank. They open the valve and use the water.
  • The Sample: By sanitizing the valve before sampling, LeMaitre was measuring the quality of the sampling process, not the quality of the water system.

I call this “Aspirational Data.” It is data that reflects the system as we wish it existed, not as it actually exists. It is the ultimate unfalsifiable metric. You can never find biofilm in a valve if you scrub the valve with alcohol before you open it.

The FDA’s warning letter was clear: “Sampling… must include any pathway that the water travels to reach the process.”

LeMaitre also performed an unauthorized “Sterilant Switcheroo,” changing their sanitization agent without change control or biocompatibility assessment. This is the hallmark of an unfalsifiable culture: making changes based on convenience, assuming they are safe, and never designing the study to check if that assumption is wrong.

The “Representative” Trap

Both warning letters pivot on the misuse of the word “representative.”

Firms love to claim their EM sampling locations are “representative.” But representative of what? Usually, they are representative of the average condition of the room—the clean, empty spaces where nothing happens.

But contamination is not an “average” event. It is a specific, localized failure. A falsifiable EM program places probes in the “worst-case” locations—near the door, near the operator’s hands, near the crimping station. It tries to find contamination. It tries to falsify the claim that the zone is sterile, aseptic, or bioburden-reducing.

When Rechon and LeMaitre failed to justify their sampling locations, they were guilty of designing an unfalsifiable experiment. They placed the “microscope” where they knew they wouldn’t find germs.

2025 taught us that regulators are no longer impressed by the thickness of the CCS binder. They are looking for the logic of control. They are testing your hypothesis. And if you haven’t tested it yourself, you will fail.

The Investigation as Evidence

(Reflecting on: The Golden Start to a Deviation Investigation, Causal Reasoning, Take-the-Best Heuristics, and The Catalent Case)

If Rechon, LeMaitre, and Sanofi teach us anything, it is that the quality system’s ability to discover failure is more important than its ability to prevent failure.

A perfect manufacturing process that no one is looking at is indistinguishable from a collapsing process disguised by poor surveillance. But a mediocre process that is rigorously investigated, understood, and continuously improved is a path toward genuine control.

The investigation itself—how we respond to a deviation, how we reason about causation, how we design corrective actions—is where falsifiable quality either succeeds or fails.

The Golden Day: When Theory Meets Work-as-Done

In April, I published “The Golden Start to a Deviation Investigation,” which made a deceptively simple argument: The first 24 hours after a deviation is discovered are where your quality system either commits to discovering truth or retreats into theater.

This argument sits at the heart of falsifiable quality.

When a deviation occurs, you have a narrow window—what I call the “Golden Day”—where evidence is fresh, memories are intact, and the actual conditions that produced the failure still exist. If you waste this window with vague problem statements and abstract discussions, you permanently lose the ability to test causal hypotheses later.

The post outlined a structured protocol:

First, crystallize the problem. Not “potency was low”—but “Lot X234, potency measured at 87% on January 15th at 14:32, three hours after completion of blending in Vessel C-2.” Precision matters because only specific, bounded statements can be falsified. A vague problem statement can always be “explained away.”

Second, go to the Gemba. This is the antidote to “work-as-imagined” investigation. The SOP says the temperature controller should maintain 37°C +/- 2°C. But the Gemba walk reveals that the probe is positioned six inches from the heating element, the data logger is in a recessed pocket where humidity accumulates, and the operator checks it every four hours despite a requirement to check hourly. These are the facts that predict whether the deviation will recur.

Third, interview with cognitive discipline. Most investigations fail not because investigators lack information, but because they extract information poorly. Cognitive interviewing—used by the FBI and the National Transportation Safety Board—employs mental reinstatement, multiple perspectives, and sequential reordering to access accurate recall rather than confabulated narrative. The investigator asks the operator to walk through the event in different orders, from different viewpoints, each time triggering different memory pathways. This is not a “soft” technique; it is a mechanism for generating falsifiable evidence.

The Golden Day post makes it clear: You do not investigate deviations to document compliance. You investigate deviations to gather evidence about whether your understanding of the process is correct.

Causal Reasoning: Moving Beyond “What Was Missing”

Most investigation tools fail not because they are flawed, but because they are applied with the wrong mindset. In my May post “Causal Reasoning: A Transformative Approach to Root Cause Analysis,” I argued that pharmaceutical investigations are often trapped in “negative reasoning.”

Negative reasoning asks: “What barrier was missing? What should have been done but wasn’t?” This mindset leads to unfalsifiable conclusions like “Procedure not followed” or “Training was inadequate.” These are dead ends because they describe the absence of an ideal, not the presence of a cause.

Causal reasoning flips the script. It asks: “What was present in the system that made the observed outcome inevitable?”

Instead of settling for “human error,” causal reasoning demands we ask: What environmental cues made the action sensible to the operator at that moment? Were the instructions ambiguous? Did competing priorities make compliance impossible? Was the process design fragile?

This shift transforms the investigation from a compliance exercise into a scientific inquiry.

Consider the LeMaitre example:

  • Negative Reasoning: “Why didn’t they sample the true condition?” Answer: “Because they didn’t follow the intent of the sampling plan.”
  • Causal Reasoning: “What made the pre-cleaning practice sensible to them?” Answer: “They believed it ensured sample validity by removing valve residue.”

By understanding the why, we identify a knowledge gap that can be tested and corrected, rather than a negligence gap that can only be punished.

In September, “Take-the-Best Heuristic for Causal Investigation” provided a practical framework for this. Instead of listing every conceivable cause—a process that often leads to paralysis—the “Take-the-Best” heuristic directs investigators to focus on the most information-rich discriminators. These are the factors that, if different, would have prevented the deviation. This approach focuses resources where they matter most, turning the investigation into a targeted search for truth.
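
One way to see the idea in code: the sketch below is a loose adaptation of the take-the-best logic for narrowing candidate causes, with invented cues, cue ordering, and candidates.

```python
# A loose adaptation of the take-the-best idea for investigations
# (cues, their ordering, and the candidate causes are all invented).
candidates = [
    {"cause": "probe placement",   "recently_changed": True,  "worst_case_location": True},
    {"cause": "operator training", "recently_changed": False, "worst_case_location": False},
    {"cause": "reagent lot",       "recently_changed": True,  "worst_case_location": False},
]

# Cues ordered by assumed validity: most information-rich discriminator first.
cues = ["worst_case_location", "recently_changed"]

def take_the_best(candidates, cues):
    """Narrow candidates cue by cue, stopping at the first cue that
    discriminates down to a single cause; weaker cues are never consulted."""
    remaining = list(candidates)
    for cue in cues:
        hits = [c for c in remaining if c[cue]]
        if 0 < len(hits) < len(remaining):  # this cue discriminates
            remaining = hits
        if len(remaining) == 1:
            break
    return remaining

best = take_the_best(candidates, cues)
```

The point is not the code but the discipline: the search stops as soon as the strongest discriminator does its work, instead of exhaustively scoring every conceivable cause.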

CAPA: Predictions, Not Promises

The Sanofi warning letter—analyzed in January—showed the destination of unfalsifiable investigation: CAPAs that exist mainly as paperwork.

Sanofi had investigation reports. They had “corrective actions.” But the FDA noted that deviations recurred in similar patterns, suggesting that the investigation had identified symptoms, not mechanisms, and that the “corrective” action had not actually addressed causation.

This is the sin of treating CAPA as a promise rather than a hypothesis.

A falsifiable CAPA is structured as an explicit prediction: “If we implement X change, then Y undesirable outcome will not recur under conditions Z.”

This can be tested. If it fails the test, the CAPA itself becomes evidence—not of failure, but of incomplete causal understanding. Which is valuable.
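
The prediction structure can even be sketched as a data record. The field names below are illustrative only, not drawn from any real eQMS:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: a CAPA recorded as a falsifiable prediction.
@dataclass
class FalsifiableCAPA:
    change: str             # X: the change we implemented
    predicted_outcome: str  # Y: what should no longer occur
    conditions: str         # Z: the conditions under which the prediction holds
    verify_by: date         # when the prediction is checked against reality
    recurrences_observed: int = 0

    def effectiveness_check(self) -> bool:
        """The prediction survives only while Y has not recurred under Z."""
        return self.recurrences_observed == 0

capa = FalsifiableCAPA(
    change="Reposition temperature probe away from heating element",
    predicted_outcome="No temperature excursions in Vessel C-2 blending",
    conditions="Routine blending operations over the next six months",
    verify_by=date(2026, 6, 30),
)
```

Note what the structure forces: an effectiveness check is not a sign-off date, it is a test the prediction can fail.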

In the Rechon analysis, this showed up concretely: The FDA’s real criticism was not just that contamination was found; it was that Rechon’s Contamination Control Strategy had no mechanism to falsify itself. If the CCS said “unidirectional airflow protects the product,” and smoke studies showed bidirectional eddies, the CCS had been falsified. But Rechon treated the falsification as an anomaly to be explained away, rather than evidence that the CCS hypothesis was wrong.

A falsifiable organization would say: “Our CCS predicted that Grade A in an isolator with this airflow pattern would remain sterile. The smoke study proves that prediction wrong. Therefore, the CCS is false. We redesign.”

Instead, they filmed from a different angle and said the aerodynamics were “acceptable.”

Knowledge Integration: When Deviations Become the Curriculum

The final piece of falsifiable investigation is what I call “knowledge integration.” A single deviation is a data point. But across the organization, deviations should form a curriculum about how systems actually fail.

Sanofi’s failure was not that they investigated each deviation badly (though they did). It was that they investigated them in isolation. Each deviation closed on its own. Each CAPA addressed its own batch. There was no organizational learning—no mechanism for a pattern of similar deviations to trigger a hypothesis that the control strategy itself was fundamentally flawed.

This is where the Catalent case study, analyzed in September’s “When 483s Reveal Zemblanity,” becomes instructive. Zemblanity is the opposite of serendipity: the seemingly random recurrence of the same failure through different paths. Catalent’s 483 observations were not isolated mistakes; they formed a pattern that revealed a systemic assumption (about equipment capability, about environmental control, about material consistency) that was false across multiple products and locations.

A falsifiable quality system catches zemblanity early by:

  1. Treating each deviation as a test of organizational hypotheses, not as an isolated incident.
  2. Trending deviation patterns to detect when the same causal mechanism is producing failures across different products, equipment, or operators.
  3. Revising control strategies when patterns falsify the original assumptions, rather than tightening parameters at the margins.
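
The second point is mechanically simple. A minimal sketch of such a trending check, with an invented deviation log:

```python
from collections import defaultdict

# Invented deviation log: (product, causal mechanism). Closing each deviation
# in isolation hides the pattern; trending across products reveals it.
deviations = [
    ("Product A", "condensate in recessed logger pocket"),
    ("Product B", "condensate in recessed logger pocket"),
    ("Product A", "label misprint"),
    ("Product C", "condensate in recessed logger pocket"),
]

products_by_mechanism = defaultdict(set)
for product, mechanism in deviations:
    products_by_mechanism[mechanism].add(product)

# Flag any mechanism recurring across three or more distinct products.
systemic = {m for m, ps in products_by_mechanism.items() if len(ps) >= 3}
```

Nothing here is sophisticated; what matters is that the query exists at all, so that the same mechanism appearing under three different product codes triggers a hypothesis about the control strategy rather than three unrelated closures.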

The Digital Hallucination (CSA, AI, and the Expertise Crisis)

(Reflecting on: CSA: The Emperor’s New Clothes, Annex 11, and The Expertise Crisis)

While we battled microbes in the cleanroom, a different battle was raging in the server room. 2025 was the year the industry tried to “modernize” validation through Computer Software Assurance (CSA) and AI, and in many ways, it was the year we tried to automate our way out of thinking.

CSA: The Emperor’s New Validation Clothes

In September, I published “Computer System Assurance: The Emperor’s New Validation Clothes,” a critique of the contortions being made around the FDA’s guidance. The narrative sold by consultants for years was that traditional Computer System Validation (CSV) was “broken”—too much documentation, too much testing—and that CSA was a revolutionary new paradigm of “critical thinking.”

My analysis showed that this narrative is historically illiterate.

The principles of CSA—risk-based testing, leveraging vendor audits, focusing on intended use—are not new. They are the core principles of GAMP5 and have been applied for decades now.

The industry didn’t need a new guidance to tell us to use critical thinking; we had simply chosen not to use the critical thinking tools we already had. We had chosen to apply “one-size-fits-all” templates because they were safe (unfalsifiable).

The CSA guidance is effectively the FDA saying: “Please read the GAMP5 guide you claimed to be following for the last 15 years.”

The danger of the “CSA Revolution” narrative is that it encourages a swing to the opposite extreme: “Unscripted Testing” that becomes “No Testing.”

In a falsifiable system, “unscripted testing” is highly rigorous—it is an expert trying to break the software (“Ad Hoc testing”). But in an unfalsifiable system, “unscripted testing” becomes “I clicked around for 10 minutes and it looked fine.”

The Expertise Crisis: AI and the Death of the Apprentice

This leads directly to the Expertise Crisis. In September, I wrote “The Expertise Crisis: Why AI’s War on Entry-Level Jobs Threatens Quality’s Future.” This was perhaps the most personal topic I covered this year, because it touches on the very survival of our profession.

We are rushing to integrate Artificial Intelligence (AI) into quality systems. We have AI writing deviations, AI drafting SOPs, AI summarizing regulatory changes. The efficiency gains are undeniable. But the cost is hidden, and it is epistemological.

Falsifiability requires expertise.
To falsify a claim—to look at a draft investigation report and say, “No, that conclusion doesn’t follow from the data”—you need deep, intuitive knowledge of the process. You need to know what a “normal” pH curve looks like so you can spot the “abnormal” one that the AI smoothed over.

Where does that intuition come from? It comes from the “grunt work.” It comes from years of reviewing batch records, years of interviewing operators, years of struggling to write a root cause analysis statement.

The Expertise Crisis is this: If we give all the entry-level work to AI, where will the next generation of Quality Leaders come from?

  • The Junior Associate doesn’t review the raw data; the AI summarizes it.
  • The Junior Associate doesn’t write the deviation; the AI generates the text.
  • Therefore, the Junior Associate never builds the mental models necessary to critique the AI.

The Loop of Unfalsifiable Hallucination

We are creating a closed loop of unfalsifiability.

  1. The AI generates a plausible-sounding investigation report.
  2. The human reviewer (who has been “de-skilled” by years of AI reliance) lacks the deep expertise to spot the subtle logical flaw or the missing data point.
  3. The report is approved.
  4. The “hallucination” becomes the official record.

In a falsifiable quality system, the human must remain the adversary of the algorithm. The human’s job is to try to break the AI’s logic, to check the citations, to verify the raw data.
But in 2025, we saw the beginnings of a “Compliance Autopilot”—a desire to let the machine handle the “boring stuff.”

My warning in September remains urgent: Efficiency without expertise is just accelerated incompetence. If we lose the ability to falsify our own tools, we are no longer quality professionals; we are just passengers in a car driven by a statistical model that doesn’t know what “truth” is.

My post “The Missing Middle in GMP Decision Making: How Annex 22 Redefines Human-Machine Collaboration in Pharmaceutical Quality Assurance” goes a lot deeper here.

Annex 11 and Data Governance

In August, I analyzed the draft Annex 11 (Computerised Systems) in the post “Data Governance Systems: A Fundamental Shift.”

The Europeans are ahead of the FDA here. While the FDA talks about “Assurance” (testing less), the EU is talking about “Governance” (controlling more). The new Annex 11 makes it clear: You cannot validate a system if you do not control the data lifecycle. Validation is not a test script; it is a state of control.

This aligns perfectly with USP <1225> and <1220>. Whether it’s a chromatograph or an ERP system, the requirement is the same: Prove that the data is trustworthy, not just that the software is installed.

The Process as a Hypothesis (CPV & Cleaning)

(Reflecting on: Continuous Process Verification and Hypothesis Formation)

The final frontier of validation we explored in 2025 was the manufacturing process itself.

CPV: Continuous Falsification

In March, I published “Continuous Process Verification (CPV) Methodology and Tool Selection.”
CPV is the ultimate expression of Falsifiable Quality in manufacturing.

  • Traditional Validation (3 Batches): “We made 3 good batches, therefore the process is perfect forever.” (Unfalsifiable extrapolation).
  • CPV: “We made 3 good batches, so we have a license to manufacture, but we will statistically monitor every subsequent batch to detect drift.” (Continuous hypothesis testing).

The challenge with CPV, as discussed in the post, is that it requires statistical literacy. You cannot implement CPV if your quality unit doesn’t understand the difference between Cpk and Ppk, or between control limits and specification limits.
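
The Cpk/Ppk distinction is easy to demonstrate. Here is a sketch with invented batch data, where drifting batch means leave short-term capability looking superb while long-term performance collapses:

```python
import statistics as stats

# Invented CPV data: six batches of five units, batch means drifting upward.
LSL, USL = 98.0, 102.0
offsets = [-0.2, -0.1, 0.0, 0.1, 0.2]  # within-batch variation
subgroups = [[100 + 0.3 * i + o for o in offsets] for i in range(6)]
all_values = [v for sg in subgroups for v in sg]
mean = stats.mean(all_values)

# Cpk uses short-term (within-batch) spread; Ppk uses overall spread,
# which includes the batch-to-batch drift.
sd_within = stats.mean([stats.variance(sg) for sg in subgroups]) ** 0.5
sd_overall = stats.stdev(all_values)

cpk = min(USL - mean, mean - LSL) / (3 * sd_within)   # looks superb
ppk = min(USL - mean, mean - LSL) / (3 * sd_overall)  # the drift exposed
```

A quality unit that reads only the Cpk on the dashboard concludes the process is excellent; the Ppk, which includes the drift, is the falsifiable number.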

This circles back to the Expertise Crisis. We are implementing complex statistical tools (CPV software) at the exact moment we are de-skilling the workforce. We risk creating a “CPV Dashboard” that turns red, but no one knows why or what to do about it.

Cleaning Validation: The Science of Residue

In August, I tried to apply falsifiability to one of the most stubborn areas of dogma: Cleaning Validation.

In “Building Decision-Making with Structured Hypothesis Formation,” I argued that cleaning validation should not be about “proving it’s clean.” It should be about “understanding why it gets dirty.”

  • Traditional Approach: Swab 10 spots. If they pass, we are good.
  • Hypothesis Approach: “We hypothesize that the gasket on the bottom valve is the hardest to clean. We predict that if we reduce rinse time by 1 minute, that gasket will fail.”
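
A toy sketch of that hypothesis-driven probing of the design space, in which the rinse model, acceptance limit, and rinse times are all invented:

```python
# Toy sketch of a falsification-oriented cleaning study: deliberately shorten
# the rinse to find where the worst-case gasket fails. The first-order rinse
# model, acceptance limit, and rinse times are all invented.
ACCEPTANCE_UG = 10.0  # swab acceptance limit, ug/swab (assumed)

def residue_at_gasket(rinse_minutes: int) -> float:
    """Residue halves with each additional minute of rinsing (toy model)."""
    return 100.0 * 0.5 ** rinse_minutes

# Validated rinse is 5 minutes; challenge shorter times to locate the edge.
results = {t: residue_at_gasket(t) for t in (3, 4, 5, 6)}
shortest_passing_rinse = min(t for t, r in results.items() if r <= ACCEPTANCE_UG)
```

The deliverable is not “ten swabs passed” but a located failure boundary: we now know how much margin the validated rinse actually carries.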

By testing the boundaries—by trying to make the cleaning fail—we understand the Design Space of the cleaning process.

We discussed the “Visual Inspection” paradox in cleaning: If you can see the residue, it failed. But if you can’t see it, does it pass?

Only if you have scientifically determined the Visible Residue Limit (VRL). Using “visually clean” without a validated VRL is—you guessed it—unfalsifiable.

The Plastic Paradox (Single-Use Systems and the E&L Mirage)

If the Rechon and LeMaitre warning letters were about the failure to control biological contaminants we can find, the industry’s struggle with Single-Use Systems (SUS) in 2025 was about the chemical contaminants we choose not to find.

We have spent the last decade aggressively swapping stainless steel for plastic. The value proposition was irresistible: Eliminate cleaning validation, eliminate cross-contamination, increase flexibility. We traded the “devil we know” (cleaning residue) for the “devil we don’t” (Extractables and Leachables).

But in 2025, with the enforcement reality of USP <665> (Plastic Components and Systems) settling in, we had to confront the uncomfortable truth: Most E&L risk assessments are unfalsifiable.

The Vendor Data Trap

The standard industry approach to E&L is the ultimate form of “Compliance Theater.”

  1. We buy a single-use bag.
  2. We request the vendor’s regulatory support package (the “Map”).
  3. We see that the vendor extracted the film with aggressive solvents (ethanol, hexane) for 7 days.
  4. We conclude: “Our process uses water for 24 hours; therefore, we are safe.”

This logic is epistemologically bankrupt. It assumes that the Vendor’s Model (aggressive solvents/short time) maps perfectly to the User’s Reality (complex buffers/long duration/specific surfactants).

It ignores the fact that plastics are dynamic systems. Polymers age. Gamma irradiation initiates free radical cascades that evolve over months. A bag manufactured in January might have a different leachable profile than a bag manufactured in June, especially if the resin supplier made a “minor” change that didn’t trigger a notification.

By relying solely on the vendor’s static validation package, we are choosing not to falsify our safety hypothesis. We are effectively saying, “If the vendor says it’s clean, we will not look for dirt.”

USP <665>: A Baseline, Not a Ceiling

The full adoption of USP <665> was supposed to bring standardization. And it has—it provides a standard set of extraction conditions. But standards can become ceilings.

In 2025, I observed a troubling trend of “Compliance by Citation.” Firms are citing USP <665> compliance as proof of absence of risk, stopping the inquiry there.

A Falsifiable E&L Strategy goes further. It asks:

  • “What if the vendor data is irrelevant to my specific surfactant?”
  • “What if the gamma irradiation dose varied?”
  • “What if the interaction between the tubing and the connector creates a new species?”

The Invisible Process Aid

We must stop viewing Single-Use Systems as inert piping. They are active process components. They are chemically reactive vessels that participate in our reaction kinetics.

When we treat them as inert, we are engaging in the same “Aspirational Thinking” that LeMaitre used on their water valves. We are modeling the system we want (pure, inert plastic), not the system we have (a complex soup of antioxidants, slip agents, and degradants).

The lesson of 2025 is that Material Qualification cannot be a paper exercise. If you haven’t done targeted simulation studies that mimic your actual “Work-as-Done” conditions, you haven’t validated the system. You’ve just filed the receipt.

The Mandate for 2026

As we look toward 2026, the path is clear. We cannot go back to the comfortable fiction of the pre-2025 era.

The regulatory environment (Annex 1, ICH Q14, USP <1225>, Annex 11) is explicitly demanding evidence of control, not just evidence of compliance. The technological environment (AI) is demanding that we sharpen our human expertise to avoid becoming obsolete. The physical environment (contamination, supply chain complexity) is demanding systems that are robust, not just rigid.

The mandate for the coming year is to build Falsifiable Quality Systems.

What does that look like practically?

  1. In the Lab: Implement USP <1225> logic now. Don’t wait for the official date. Validate your reportable results. Add “challenge tests” to your routine monitoring.
  2. In the Plant: Redesign your Environmental Monitoring to hunt for contamination, not to avoid it. If you have a “perfect” record in a Grade C area, move the plates until you find the dirt.
  3. In the Office: Treat every investigation as a chance to falsify the control strategy. If a deviation occurs that the control strategy said was impossible, update the control strategy.
  4. In the Culture: Reward the messenger. The person who finds the crack in the system is not a troublemaker; they are the most valuable asset you have. They just falsified a false sense of security.
  5. In Design: Embrace the Elegant Quality System (discussed in May). Complexity is the enemy of falsifiability. Complex systems hide failures; simple, elegant systems reveal them.

2025 was the year we stopped pretending. 2026 must be the year we start building. We must build systems that are honest enough to fail, so that we can build processes that are robust enough to endure.

Thank you for reading, challenging, and thinking with me this year. The investigation continues.

The Taxonomy of Clean: Why Confusing Microbial Control, Aseptic, and Sterile is Wrecking Your Contamination Control Strategy

If I had a dollar for every time I sat in a risk assessment workshop and heard someone use “aseptic” and “sterile” interchangeably, I could probably fund my own private isolator line. It is one of those semantic slips that seems harmless on the surface—like confusing “precision” with “accuracy”—but in the pharmaceutical quality world, these linguistic shortcuts are often the canary in the coal mine for a systemic failure of understanding.

We are currently navigating the post-Annex 1 implementation landscape, a world where the Contamination Control Strategy (CCS) has transitioned from a “nice-to-have” philosophy to a mandatory, living document. Yet, I frequently see CCS documents that read like a disorganized shopping list of controls rather than a coherent strategy. Why? Because the authors haven’t fundamentally distinguished between microbial control, aseptic processing, and sterility.

If we cannot agree on what we are trying to achieve, we certainly cannot build a strategy to achieve it. Today, I want to unpack these terms—not for the sake of pedantry, but because the distinction dictates your facility design, your risk profile, and ultimately, patient safety. We will also look at how these definitions map onto the spectrum of open and closed systems, and critically, how they apply across drug substance and drug product manufacturing. This last point is where I see the most confusion—and where the stakes are highest.

The Definitions: More Than Just Semantics

Let’s strip this back. These aren’t just vocabulary words; they are distinct operational states that demand different control philosophies.

Microbial Control: The Art of Management

Microbial control is the baseline. It is the broad umbrella under which all our activities sit, but it is not synonymous with sterility. In the world of non-sterile manufacturing (tablets, oral liquids, topicals), microbial control is about bioburden management. We aren’t trying to eliminate life; we are trying to keep it within safe, predefined limits and, crucially, ensure the absence of “objectionable organisms.”

In a sterile manufacturing context, microbial control is what happens before the sterilization step. It is the upstream battle. It is the control of raw materials, the WFI loops, the bioburden of the bulk solution prior to filtration.

Impact on CCS: If your CCS treats microbial control as “sterility light,” you will fail. A strategy for microbial control focuses on trend analysis, cleaning validation, and objectionable organism assessments. It relies heavily on understanding the microbiome of your facility. It accepts that microorganisms are present but demands they be the right kind (skin flora vs. fecal) and in the right numbers.

Sterile: The Absolute Negative

Sterility is an absolute. There is no such thing as “a little bit sterile.” It is a theoretical concept defined by a probability—the Sterility Assurance Level (SAL), typically 10⁻⁶.

Here is the critical philosophical point: Sterility is a negative quality attribute. You cannot test for it. You cannot inspect for it. By the time you get a sterility test result, the batch is already made. Therefore, you cannot “control” sterility in the same way you control pH or dissolved oxygen. You can only assure it through the validation of the process that delivered it.
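The SAL arithmetic makes the point tangible. A minimal sketch, assuming simple log-linear kill kinetics (the standard D-value model); the numbers are illustrative, not a recommendation:

```python
import math

def exposure_time_for_sal(bioburden_cfu, d_value_min, sal=1e-6):
    """Minutes of lethal exposure needed to drive the expected survivors
    per unit down to the target SAL, assuming log-linear kill kinetics:
    log10(N) = log10(N0) - t / D."""
    logs_needed = math.log10(bioburden_cfu) - math.log10(sal)
    return logs_needed * d_value_min

# 100 CFU/unit pre-sterilization bioburden, D121 of 1.5 min:
# 2 - (-6) = 8 logs of kill needed -> 12 minutes at 121 C
print(f"{exposure_time_for_sal(100, 1.5):.1f} min")  # 12.0 min
```

Note that the answer depends on the pre-sterilization bioburden, which is exactly why upstream microbial control and sterility assurance are coupled.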

Impact on CCS: Your CCS cannot rely on monitoring to prove sterility. Any strategy that points to “passing sterility tests” as a primary control measure is fundamentally flawed. The CCS for sterility must focus entirely on the robustness of the sterilization cycle (autoclave validation, gamma irradiation dosimetry, VHP cycles) and the integrity of the container closure system.

Aseptic: The Maintenance of State

This is where the confusion peaks. Aseptic does not mean “sterilizing.” Aseptic processing is the methodology of maintaining the sterility of components that have already been sterilized individually. It is the handling, the assembly, and the filling of sterile parts in a sterile environment.

If sterilization is the act of killing, aseptic processing is the act of not re-contaminating.

Impact on CCS: This is the highest risk area. Why? Because it involves the single dirtiest variable in our industry: people. An aseptic CCS is almost entirely focused on intervention management, first air protection, and behavioral controls. It is about the “tacit knowledge” of the operator—knowing how to move slowly, knowing not to block the HEPA flow. If your CCS focuses on environmental monitoring (EM) data here, you are reacting, not controlling. The strategy must be prevention of ingress.

Drug Substance vs. Drug Product: The Fork in the Road

This is where the plot thickens. Many quality professionals treat the CCS as a monolithic framework, but drug substance manufacturing and drug product manufacturing are fundamentally different activities with different contamination risks, different control philosophies, and different success criteria.

Let me be direct: confusing these two stages is the source of many failed validation studies, inappropriate risk assessments, and ultimately, preventable contamination events.

Drug Substance: The Upstream Challenge

Drug substance (the active pharmaceutical ingredient, or API) is typically manufactured in a dedicated facility, often from biological fermentation (for biotech) or chemical synthesis. The critical distinction is this: drug substance manufacturing is almost always a closed process.

Why? Because the bulk is continuously held in vessels, tanks, or bioreactors. It is rarely exposed to the open room environment. Even where additions occur (buffers, precipitants), these are often made through closed connectors or valving systems.

The CCS for drug substance therefore prioritizes:

  • Bioburden control of the bulk product at defined process stages. This is not about sterility assurance; it is about understanding the microbial load before formulation and the downstream sterilizing filter. The European guidance (CPMP Note for Guidance on Manufacture) is explicit: the maximum acceptable bioburden prior to sterilizing filtration is typically ≤10 CFU/100 mL for aseptically filled products.
  • Process hold times. One of the most underappreciated risks in drug substance manufacturing is the hold time between stages—the time the bulk sits in a vessel before the next operation. If you haven’t validated that microorganisms won’t grow during a 72-hour hold at room temperature, you haven’t validated your process. The pharmaceutical literature is littered with cases where insufficient attention to hold time validation led to unexpected bioburden growth (50–100× increases have been observed).
  • Intermediate bioburden testing. The CCS must specify where in the process bioburden is assessed. I advocate for testing at critical junctures:
    • At the start of manufacturing (raw materials/fermentation)
    • Post-purification (to assess effectiveness of unit operations)
    • Prior to formulation/final filtration (this is the regulatory checkpoint)
  • Equipment design and cleanliness. Drug substance vessels and transfer lines are part of the microbial control landscape. They are not Grade A environments (because the product is in a closed vessel), but they must be designed and maintained to prevent bioburden increase. This includes cleaning and disinfection, material of construction (stainless steel vs. single-use), and microbial monitoring of water used for equipment cleaning.
  • Water systems. The water used in drug substance manufacturing (for rinsing, for buffer preparation) is a critical contamination source. Water for Injection (WFI) has a specification of ≤0.1 CFU/mL. However, many drug substance processes use purified water or even highly purified water (HPW), where microbial control is looser. The CCS must specify the water system design, the microbial limits, and the monitoring frequency.

The environmental monitoring program for drug substance is quite different from drug product. There are no settle plates exposed to the drug substance itself (it’s not open). Instead, EM focuses on the compressor room (if using compressed gases), water systems, and post-manufacturing equipment surfaces. The EM is about detecting facility drift, not about detecting product contamination in real time.

Drug Product: The Aseptic Battlefield

Drug product manufacturing—the formulation, filling, and capping of the drug substance into vials or containers—is where the real contamination risk lives.

For sterile drug products, this is the aseptic filling stage. And here, the CCS is almost entirely different from drug substance.

The CCS for drug product prioritizes:

  • Intervention management and aseptic technique validation. Every opening of a sterile vial, every manual connection, every operator interaction is a potential contamination event. The CCS must specify:
    • Gowning requirements (Grade A background requires full body coverage, including hood, suit, and sterile gloves)
    • Aseptic technique training and periodic requalification (gloved hand aseptic technique, GHAT)
    • First-air protection (the air directly above the vial or connection point must be Grade A)
    • Speed of operations (rapid movements increase turbulence and microbial dispersion)
  • Container closure integrity. Once filled, the vial is sealed. But the window of vulnerability is the time between filling and capping. The CCS must specify maximum exposure times prior to closure (often 5-15 minutes, depending on the filling line). Any vial left uncapped beyond this window is at risk.
  • Real-time environmental monitoring. Unlike drug substance manufacturing, drug product EM is your primary detective. Settle plates in the Grade A filling zone, active air samplers, surface monitoring, and gloved-hand contact plates are all part of the CCS. The logic is: if you see a trend in EM data during the filling run, you can stop the batch and investigate. You cannot do this with end-product sterility testing (you get the result weeks later). This is why parametric monitoring of differential pressures, airflow velocities, and particle counts is critical—it gives you live feedback.
  • Container closure integrity testing. This is critical for the drug product CCS. You can fill a vial perfectly under Grade A conditions, but if the container closure system is compromised, the sterility is lost. The CCS must include:
    • Validation of the closure system during development
    • Routine CCI testing (often helium leak detection) as part of QC
    • Shelf-life stability studies that include CCI assessments

The key distinction: Drug substance CCS is about upstream prevention (keeping microorganisms out of the bulk). Drug product CCS is about downstream detection and prevention of re-contamination (because the product is no longer in a controlled vessel, it is now exposed).

The Bridge: Sterilizing Filtration

Here is where the two meet. The drug substance, with its controlled bioburden, passes through a sterilizing-grade filter (0.2 µm) into a sterile holding vessel. This is the handoff point. The filter is validated to fully retain a challenge of at least 10⁷ CFU of Brevundimonas diminuta per cm² of effective filtration area (the ASTM F838 standard), a log reduction value of 7 or greater.

The CCS must address this transition:

  • The bioburden before filtration must be ≤10 CFU/100 mL (European limit; the FDA requires “appropriate limits” but does not specify a number).
  • The filtration process itself must be validated with the actual drug substance and challenge organisms.
  • Post-filtration, the bulk is considered sterile (by probability) and enters aseptic filling.

Many failures I have seen involve inadequate attention to the state of the product at this handoff. A bulk solution that has grown from 5 CFU/mL to 500 CFU/mL during a hold time can still technically be “filtered.” But it challenges the sterilizing filter, increases the risk of breakthrough, and is frankly an indication of poor upstream control. The CCS must make this connection explicit.
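Putting rough numbers on that 5 to 500 CFU/mL scenario shows why hold time validation matters. A minimal sketch assuming simple exponential growth (all figures illustrative):

```python
def bioburden_after_hold(n0_cfu_per_ml, hold_hours, doubling_hours):
    """Bioburden after a static hold, assuming exponential growth."""
    return n0_cfu_per_ml * 2 ** (hold_hours / doubling_hours)

# 5 CFU/mL entering a 72 h room-temperature hold; a doubling time of
# ~10.8 h is all it takes to produce roughly a 100x increase.
n = bioburden_after_hold(5, 72, 10.8)
print(f"{n:.0f} CFU/mL after hold")

# Total challenge presented to the sterilizing filter for a 200 L batch:
total_cfu = n * 200 * 1000  # CFU/mL x mL
print(f"{total_cfu:.2e} CFU total challenge")
```

Compare that total against the challenge your filter was actually validated with; a hold-time excursion can quietly consume the safety margin.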

From Definitions to Strategy: The Open vs. Closed Spectrum

Now that we have the definitions, and we understand the distinction between drug substance and drug product, we have to talk about where these activities happen. The regulatory wind (specifically Annex 1) is blowing hard in one direction: separation of the operator from the process.

This brings us to the concept of Open vs. Closed systems. This isn’t a binary switch; it’s a spectrum of risk.

The “Open” System: The Legacy Nightmare

In a truly open system, the product or critical surfaces are exposed to the cleanroom environment, which is shared by operators.

  • The Setup: A Grade A filling line with curtain barriers, or worse, just laminar flow hoods where operators reach in with gowned arms.
  • The Risk: The operator is part of the environment. Every movement sheds particles. Every intervention is a roll of the dice.
  • CCS Implications: If you are running an open system, your CCS is working overtime. You are relying heavily on personnel qualification, gowning discipline, and aggressive Environmental Monitoring (EM). You are essentially fighting a war of attrition against entropy. The “Microbial Control” aspect here is desperate; you are relying on airflow to sweep away the contamination that you know is being generated by the people in the room.

This is almost never used for drug substance (which is in a closed vessel) but remains common in older drug product filling lines.

The Restricted Access Barrier System (RABS): The Middle Ground

RABS attempts to separate the operator from the critical zone via a rigid wall and glove ports, but it retains a connection to the room’s air supply.

  • Active RABS: Has its own onboard fan/HEPA units.
  • Passive RABS: Relies on the ceiling HEPA filters of the room.
  • Closed RABS: Doors are kept locked during the batch.
  • Open RABS: Doors can be opened (though they shouldn’t be).

CCS Implications: Here, the CCS shifts. The reliance on gowning decreases slightly (though Grade B background is still required), and the focus shifts to intervention management. The “Aseptic” strategy here is about door discipline. If a door is opened, you have effectively reverted to an open system. The CCS must explicitly define what constitutes a “closed” state and rigorously justify any breach.

The Closed System: The Holy Grail

A closed system is one where the product is never exposed to the immediate room environment. This is achieved via Isolators (for drug product filling) or Single-Use Systems (SUS) (for both drug substance transfers and drug product formulation).

  • Isolators: These are fully sealed units, often biodecontaminated with VHP, operating at a pressure differential. The operator is physically walled off. The critical zone (inside the isolator) is Grade A (ISO 5), while the surrounding room can often be Grade C or even Grade D.
  • Single-Use Systems (SUS): Gamma-irradiated bags, tubing, and connectors (like aseptic connectors or tube welders) that create a sterile fluid path from start to finish. For drug substance, SUS is increasingly the norm—a connected bioprocess using Flexel or similar technology. For drug product, SUS includes pre-filled syringe filling systems, which eliminate the open vial/filling needle risk.

CCS Implications:

This is where the definitions we discussed earlier truly diverge, and where the drug substance vs. drug product distinction becomes clear.

Microbial Control (Drug Substance in SUS): The environment outside the SUS hardly matters at all. The control focus moves to:

  • Integrity testing (leak testing the connections)
  • Bioburden of the incoming bulk (before it enters the SUS)
  • Duration of hold (how long can the sterile fluid path remain static without microbial growth?)

A drug substance process using SUS (e.g., a continuous perfusion bioreactor feeding into a SUS train for chromatography, buffer exchange, and concentration) can run in a Grade C or even Grade D facility. The process itself is closed.

Sterile (Isolator for Drug Product Filling): The focus is on the VHP cycle validation. The isolator is fumigated with vaporized hydrogen peroxide, and the cycle is validated to achieve a 6-log reduction of a challenge organism. Once biodecontaminated, the isolator is considered “sterile” (or more accurately, “free from viable organisms”), and the drug product filling occurs inside.

Aseptic (Within Closed Systems): The “aseptic” risk is reduced to the connection points. For example: In a SUS, the risk is the act of disconnecting the bag when the process is complete. This must be done aseptically (often with a tube welder).

In an isolator filling line, the risk is the transfer of vials into and out of the isolator (through a rapid transfer port, or RTP, or through a port that is first disinfected).

The CCS focuses on the make or break moment—the point where sterility can be compromised.

The “Functionally Closed” Trap

A word of caution: I often see processes described as “closed” that are merely “functionally closed.”

  • Example: A bioreactor is SIP’d (sterilized in place) and runs in a closed loop, but then an operator has to manually open a sampling port with a needle to withdraw samples for bioburden testing.
  • The Reality: That is an open operation in a closed vessel.
  • CCS Requirement: Your strategy must identify these “briefly open” moments. These are your Critical Control Points (CCPs) (if using HACCP terminology). The strategy must layer controls here:
    • Localized Grade A air (a laminar flow station or glovebox around the sampling port)
    • Strict behavioral training (the operator must don sterile gloves, swab the port with 70% isopropyl alcohol, and execute the sampling in <2 minutes)
    • Immediate closure and post-sampling disinfection

I have seen drug substance batches rejected because of a single bioburden sample taken during an open operation that exceeded action levels. The bioburden itself may not have been representative of the bulk; it may have been adventitious contamination during sampling. But the CCS failed to protect the process during that vulnerable moment.

The “So What?” for Your Contamination Control Strategy

So, how do we pull this together into a cohesive document that doesn’t just sit on a shelf gathering dust?

Map the Process, Not the Room

Stop writing your CCS based on room grades. Write it based on the process flow. Map the journey of the product.

For Drug Substance:

  • Where is it synthesized or fermented? (typically in closed bioreactors)
  • Where is it purified? (chromatography columns, which are generally closed)
  • Where is it concentrated or buffer-exchanged? (tangential flow filtration units, which are closed)
  • Where is it held before filtration? (hold vessels, which are closed)
  • Where does it become sterile? (filtration through a 0.2 µm filter)

For Drug Product:

  • Where is the sterile bulk formulated? (generally in closed tanks or bags)
  • Where is it filled? (either in an isolator, a RABS, or an open line)
  • Where is it sealed? (capping machine, which must maintain Grade A conditions)
  • Where is it tested? (QC lab, which is a separate cleanroom environment)

Within each of these stages, identify:

  • Where microbial control is critical (e.g., bioburden monitoring in drug substance holds)
  • Where sterility is assured (e.g., the sterilizing filter)
  • Where aseptic state is maintained (e.g., the filling room, the isolator)

Differentiate the Detectors

  • For Microbial Control: Use in-process bioburden and endotoxin testing to trend “bulk product quality.” If you see a shift from 5 CFU/mL (upstream) to 100 CFU/mL (mid-process), your CCS has a problem. These are alerts, not just data points.
  • For Aseptic Processing: Use physical monitoring (differential pressures, airflow velocities, particle counts) as your primary real-time indicators. If the pressure drops in the isolator, the aseptic state is compromised, regardless of what the settle plate says 5 days later.
  • For Sterility: Focus on parametric release concepts. The sterilizing filter validation data, the VHP cycle documentation—these are the product assurance. The end-product sterility test is a confirmation, not a control.

Justify Your Choices: Open vs. Closed, Drug Substance vs. Drug Product

For Drug Substance:

  • If you are using a closed bioreactor or SUS, your CCS can focus on upstream bioburden control and process hold time validation. Environmental monitoring is secondary (you’re monitoring the facility, not the product).
  • If you are using an open process (e.g., open fermentation, open harvesting), your CCS must be much tighter, and you need extensive EM.

For Drug Product:

  • If you are using an isolator or SUS (pre-filled syringe), your CCS focuses on biodecontamination validation and connection point discipline. You can fill in a lower-grade environment.
  • If you are using an open line or RABS, your CCS must extensively cover gowning, aseptic technique, and real-time EM. This is the higher-risk approach, and Annex 1 is explicitly nudging you away from it.

Explicitly Connect the Two Stages

Your CCS should have a section titled something like “Drug Substance to Drug Product Handoff: The Sterilizing Filtration Stage.” This section should specify:

  • The target bioburden for the drug substance bulk prior to filtration (typically ≤10 CFU/100 mL)
  • The filter used (pore size, expected log-reduction value, vendor qualification)
  • The validation data supporting the filtration (challenge testing with the actual drug substance, with a representative microbial panel)
  • The post-filtration process (transfer to sterile holding tank, aseptic filling)

This handoff is where drug substance “becomes” sterile, and where aseptic processing “begins.” Do not gloss over it.

One final point, because I see this trip up good quality teams: your CCS must specify how data is collected, stored, analyzed, and acted upon.

For drug substance bioburden and endotoxin data:

  • Is trending performed monthly? Quarterly?
  • Who reviews the data?
  • At what point does a trend prompt investigation?
  • Are alert and action levels set based on historical facility data, not just pharmacopeial guidance?
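Deriving alert and action levels from historical facility data, as that last question demands, is easy to prototype. A minimal sketch using a nearest-rank percentile convention (one common approach; your SOP may prescribe a different statistical model, such as a negative binomial fit):

```python
def em_levels(historical_counts, alert_pct=95, action_pct=99):
    """Alert/action levels from historical EM counts (CFU per plate),
    using a simple nearest-rank percentile."""
    data = sorted(historical_counts)

    def nearest_rank(p):
        k = round(p / 100 * len(data)) - 1
        return data[max(0, min(len(data) - 1, k))]

    return nearest_rank(alert_pct), nearest_rank(action_pct)

# Hypothetical 12 months of Grade C surface counts (CFU/plate)
history = [0, 0, 1, 0, 2, 0, 0, 1, 3, 0, 1, 0, 0, 2, 0, 1, 0, 0, 5, 0]
alert, action = em_levels(history)
print(f"alert at >= {alert} CFU, action at >= {action} CFU")
```

Levels set this way reflect what your facility actually does, which is the whole point: a pharmacopeial default tells you nothing about drift in your own rooms.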

For drug product environmental monitoring:

  • Are EM results reviewed during the filling run (with rapid methods) or after?
  • If growth is seen, what is the protocol? Do you stop the batch?
  • Are microorganisms identified to species? If not, how do you know if it’s a contamination event or just normal flora?

A CCS is only as good as its data management infrastructure. If you are still printing out EM results and filing them in binders, you are not executing Annex 1 in its intended spirit.

Conclusion

The difference between microbial control, aseptic, and sterile is not academic. It is the difference between managing a risk, maintaining a state, and assuring an absolute.

When we confuse these terms, we get “sterile” manufacturing lines that rely on “microbial control” tactics—like trying to test quality into a product via settle plates. We get risk assessments that underestimate the “aseptic” challenge of a manual connection because we assume the “sterile” tube will save us. We get drug substance processes that are validated like drug product processes, with unnecessary Grade A facilities and excessive EM, when a tight bioburden control strategy would be more effective.

Worse, we get a single CCS that tries to cover both drug substance and drug product with the same language and the same controls. These are fundamentally different manufacturing activities with different risks and different control philosophies.

A robust Contamination Control Strategy requires us to be linguistically and technically precise. It demands that we move away from the comfort of open systems and the reliance on retrospective monitoring. It forces us to acknowledge that while we can control microbes in drug substance and assure sterility through sterilization, the aseptic state in drug product filling is a fragile thing, maintained only by the rigor of our design, the separation of the operator from the process, and the discipline of our decisions.

Stop ticking boxes. Start analyzing the process. Understand where you are dealing with microbial control, aseptic processing, or sterility assurance—and make sure your CCS reflects that understanding. And for the love of quality, stop using a single template to describe both drug substance and drug product manufacturing.