Risk Blindness: The Invisible Threat

Risk blindness is an insidious loss of organizational perception—the gradual erosion of a company’s ability to recognize, interpret, and respond to threats that undermine product safety, regulatory compliance, and ultimately, patient trust. It is not merely ignorance or oversight; rather, risk blindness manifests as the cumulative inability to see threats, often resulting from process shortcuts, technology overreliance, and the undervaluing of hands-on learning.

Unlike risk aversion or neglect, which involve conscious choices, risk blindness is an unconscious deficiency. It often stems from structural changes like the automation of foundational jobs, fragmented risk ownership, unchallenged assumptions, and excessive faith in documentation or AI-generated reports. At its core, risk blindness breeds a false sense of security and efficiency while creating unseen vulnerabilities.

Pattern Recognition and Risk Blindness: The Cognitive Foundation of Quality Excellence

The Neural Architecture of Risk Detection

Pattern recognition lies at the heart of effective risk management in quality systems. It represents the sophisticated cognitive process by which experienced professionals unconsciously scan operational environments, data trends, and behavioral cues to detect emerging threats before they manifest as full-scale quality events. This capability distinguishes expert practitioners from novices and forms the foundation of what we might call “risk literacy” within quality organizations.

The development of pattern recognition in pharmaceutical quality follows predictable stages. At the most basic level (Level 1 Situational Awareness), professionals learn to perceive individual elements—deviation rates, environmental monitoring trends, supplier performance metrics. However, true expertise emerges at Level 2 (Comprehension), where practitioners begin to understand the relationships between these elements, and Level 3 (Projection), where they can anticipate future system states based on current patterns.

Research in clinical environments demonstrates that expert pattern recognition relies on matching current situational elements with previously stored patterns and knowledge, creating rapid, often unconscious assessments of risk significance. In pharmaceutical quality, this translates to the seasoned professional who notices that “something feels off” about a batch record, even when all individual data points appear within specification, or the environmental monitoring specialist who recognizes subtle trends that precede contamination events.

The Apprenticeship Dividend: Building Pattern Recognition Through Experience

The development of sophisticated pattern recognition capabilities requires what we’ve previously termed the “apprenticeship dividend”—the cumulative learning that occurs through repeated exposure to routine operations, deviations, and corrective actions. This learning cannot be accelerated through technology or condensed into senior-level training programs; it must be built through sustained practice and mentored reflection.

The Stages of Pattern Recognition Development:

Foundation Stage (Years 1-2): New professionals learn to identify individual risk elements—understanding what constitutes a deviation, recognizing out-of-specification results, and following investigation procedures. Their pattern recognition is limited to explicit, documented criteria.

Integration Stage (Years 3-5): Practitioners begin to see relationships between different quality elements. They notice when environmental monitoring trends correlate with equipment issues, or when supplier performance changes precede raw material problems. This represents the emergence of tacit knowledge—insights that are difficult to articulate but guide decision-making.

Mastery Stage (Years 5+): Expert practitioners develop what researchers call “intuitive expertise”—the ability to rapidly assess complex situations and identify subtle risk patterns that others miss. They can sense when an investigation is heading in the wrong direction, recognize when supplier responses are evasive, or detect process drift before it appears in formal metrics.

Tacit Knowledge: The Uncodifiable Foundation of Risk Assessment

Perhaps the most critical aspect of pattern recognition in pharmaceutical quality is the role of tacit knowledge—the experiential wisdom that cannot be fully documented or transmitted through formal training systems. Tacit knowledge encompasses the subtle cues, contextual understanding, and intuitive insights that experienced professionals develop through years of hands-on practice.

In pharmaceutical quality systems, tacit knowledge manifests in numerous ways:

  • Knowing which equipment is likely to fail after cleaning cycles, based on subtle operational cues rather than formal maintenance schedules
  • Recognizing when supplier audit responses are technically correct but practically inadequate
  • Sensing when investigation teams are reaching premature closure without adequate root cause analysis
  • Detecting process drift through operator reports and informal observations before it appears in formal monitoring data

This tacit knowledge cannot be captured in standard operating procedures or electronic systems. It exists in the experienced professional’s ability to read “between the lines” of formal data, to notice what’s missing from reports, and to sense when organizational pressures are affecting the quality of risk assessments.

The GI Joe Fallacy: The Dangers of “Knowing is Half the Battle”

A persistent—and dangerous—belief in quality organizations is the idea that simply knowing about risks, standards, or biases will prevent us from falling prey to them. This is known as the GI Joe fallacy—the misguided notion that awareness is sufficient to overcome cognitive biases or drive behavioral change.

What is the GI Joe Fallacy?

Inspired by the classic 1980s G.I. Joe cartoons, which ended each episode with “Now you know. And knowing is half the battle,” the GI Joe fallacy describes the disconnect between knowledge and action. Cognitive science consistently shows that knowing about biases or desired actions does not ensure that individuals or organizations will behave accordingly.

Even Daniel Kahneman, a founder of bias research, noted that reading about biases doesn’t fundamentally change our tendency to commit them. Organizations often believe that training, SOPs, or system prompts are enough to inoculate staff against error. In reality, knowledge is only a small part of the battle; much larger are the forces of habit, culture, distraction, and deeply rooted heuristics.

GI Joe Fallacy in Quality Risk Management

In pharmaceutical quality risk management, the GI Joe fallacy can have severe consequences. Teams may know the details of risk matrices, deviation procedures, and regulatory requirements, yet repeatedly fail to act with vigilance or critical scrutiny in real situations. Loss aversion, confirmation bias, and overconfidence persist even for those trained in their dangers.

For example, base rate neglect—a bias where salient event data distracts from underlying probabilities—can influence decisions even when staff know better intellectually. This manifests in investigators overreacting to recent dramatic events while ignoring stable process indicators. Knowing about risk frameworks isn’t enough; structures and culture must be designed specifically to challenge these biases in practice, not simply in theory.
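
To see how stark the gap between intuition and base rates can be, consider a worked example (a minimal Python sketch; the numbers are illustrative assumptions, not industry data). Suppose an atypical-result signal catches 90% of real contamination events but also fires on 5% of clean batches, and real events occur in only 1% of batches:

```python
# Base rate neglect, illustrated with Bayes' theorem.
# All numbers below are illustrative assumptions, not industry data.

base_rate = 0.01        # P(real contamination event) for any given batch
sensitivity = 0.90      # P(signal fires | real event)
false_positive = 0.05   # P(signal fires | no event)

# P(signal fires at all), via the law of total probability
p_flag = sensitivity * base_rate + false_positive * (1 - base_rate)

# P(real event | signal fired), via Bayes' theorem
posterior = sensitivity * base_rate / p_flag

print(f"P(signal fires) = {p_flag:.3f}")            # ~0.059
print(f"P(real event | signal) = {posterior:.3f}")  # ~0.154, not 0.90
```

Intuition anchors on the 90% sensitivity; the base rate drags the true probability below 16%. Knowing this arithmetic is exactly the kind of knowledge the GI Joe fallacy warns will not, by itself, change behavior in the moment.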

Structural Roots of Risk Blindness

The False Economy of Automation and Overconfidence

Risk blindness often arises from a perceived efficiency gained through process automation or the curtailment of on-the-ground learning. When organizations substitute passive oversight for active engagement, staff lose critical exposure to routine deviations and process variables.

Senior staff who only approve system-generated risk assessments lack daily operational familiarity, making them susceptible to unseen vulnerabilities. Real risk assessment requires repeated, active interaction with process data—not just a review of output.

Fragmented Ownership and Deficient Learning Culture

Risk ownership must be robust and proximal. When roles are fragmented—where the “system” manages risk and people become mere approvers—vital warnings can be overlooked. A compliance-oriented learning culture that believes training or SOPs are enough to guard against operational threats falls deeper into the GI Joe fallacy: knowledge is mistaken for vigilance.

Instead, organizations need feedback loops, reflection, and opportunities to surface doubts and uncertainties. Training must be practical and interactive, not limited to information transfer.

Zemblanity: The Shadow of Risk Blindness

Zemblanity is the antithesis of serendipity in the context of pharmaceutical quality—it describes the persistent tendency for organizations to encounter negative, foreseeable outcomes when risk signals are repeatedly ignored, misunderstood, or left unacted upon.

When examining risk blindness, zemblanity stands as the practical outcome: a quality system that, rather than stumbling upon unexpected improvements or positive turns, instead seems trapped in cycles of self-created adversity. Unlike random bad luck, zemblanity results from avoidable and often visible warning signs—deviations that are rationalized, oversight meetings that miss the point, and cognitive biases like the GI Joe fallacy that lull teams into a false sense of mastery.

Real-World Manifestations

Case: The Disappearing Deviation

Digital batch records reduced documentation errors and deviation reports, creating an illusion of process control. But when technology transfer led to out-of-spec events, the lack of manually trained eyes meant no one was poised to detect subtle process anomalies. Staff “knew” the process in theory—yet risk blindness set in because the signals were no longer being actively, expertly interpreted. Knowledge alone was not enough.

Case: Supplier Audit Blindness

Virtual audits relying solely on documentation missed chronic training issues that onsite teams would likely have noticed. The belief that checklist knowledge and documentation sufficed prevented the team from recognizing deeper underlying risks. Here, the GI Joe fallacy made the team believe their expertise was shield enough, when in reality, behavioral engagement and observation were necessary.

Counteracting Risk Blindness: Beyond Knowing to Acting

Effective pharmaceutical quality systems must intentionally cultivate and maintain pattern recognition capabilities across their workforce. This requires structured approaches that go beyond traditional training and incorporate the principles of expertise development:

Structured Exposure Programs: New professionals need systematic exposure to diverse risk scenarios—not just successful cases, but also investigations that went wrong, supplier audits that missed problems, and process changes that had unexpected consequences. This exposure must be guided by experienced mentors who can help identify and interpret relevant patterns.

Cross-Functional Pattern Sharing: Different functional areas—manufacturing, quality control, regulatory affairs, supplier management—develop specialized pattern recognition capabilities. Organizations need systematic mechanisms for sharing these patterns across functions, ensuring that insights from one area can inform risk assessment in others.

Cognitive Diversity in Assessment Teams: Research demonstrates that diverse teams are better at pattern recognition than homogeneous groups, as different perspectives help identify patterns that might be missed by individuals with similar backgrounds and experience. Quality organizations should intentionally structure assessment teams to maximize cognitive diversity.

Systematic Challenge Processes: Pattern recognition can become biased or incomplete over time. Organizations need systematic processes for challenging established patterns—regular “red team” exercises, external perspectives, and structured devil’s advocate processes that test whether recognized patterns remain valid.

Reflective Practice Integration: Pattern recognition improves through reflection on both successes and failures. Organizations should create systematic opportunities for professionals to analyze their pattern recognition decisions, understand when their assessments were accurate or inaccurate, and refine their capabilities accordingly.

Using AI as a Learning Accelerator

AI and automation should support, not replace, human risk assessment. Tools can help new professionals identify patterns in data, but must be employed as aids to learning—not as substitutes for judgment or action.

Diagnosing and Treating Risk Blindness

Assess organizational risk literacy not by the presence of knowledge, but by the frequency of active, critical engagement with real risks. Use self-assessment questions such as:

  • Do deviation investigations include frontline voices, not just system reviewers?
  • Are new staff exposed to real processes and deviations, not just theoretical scenarios?
  • Are risk reviews structured to challenge assumptions, not merely confirm them?
  • Is there evidence that knowledge is regularly translated into action?

Why Preventing Risk Blindness Matters

Regulators evaluate quality maturity not simply by compliance, but by demonstrable capability to anticipate and mitigate risks. AI and digital transformation are intensifying the risk of the GI Joe fallacy by tempting organizations to substitute data and technology for judgment and action.

As experienced professionals retire, the gap between knowing and doing risks widening. Only organizations invested in hands-on learning, mentorship, and behavioral feedback will sustain true resilience.

Choosing Sight

Risk blindness is perpetuated by the dangerous notion that knowing is enough. The GI Joe fallacy teaches that organizational memory, vigilance, and capability require much more than knowledge—they demand deliberate structures, engaged cultures, and repeated practice that link theory to action.

Quality leaders must invest in real development, relentless engagement, and humility about the limits of their own knowledge. Only then will risk blindness be cured, and resilience secured.

Beyond “Knowing Is Half the Battle”

Dr. Valerie Mulholland’s recent exploration of the GI Joe Bias gets to the heart of a fundamental challenge in pharmaceutical quality management: the persistent belief that awareness of cognitive biases is sufficient to overcome them. I find Valerie’s analysis particularly compelling because it connects directly to the practical realities we face when implementing ICH Q9(R1)’s mandate to actively manage subjectivity in risk assessment.

Valerie’s observation that “awareness of a bias does little to prevent it from influencing our decisions” shows us that the GI Joe Bias underlies a critical gap between intellectual understanding and practical application—a gap that pharmaceutical organizations must bridge if they hope to achieve the risk-based decision-making excellence that ICH Q9(R1) demands.

The Expertise Paradox: Why Quality Professionals Are Particularly Vulnerable

Valerie correctly identifies that quality risk management facilitators are often better at spotting biases in others than in themselves. This observation connects to a deeper challenge I’ve previously explored: the fallacy of expert immunity. Our expertise in pharmaceutical quality systems creates cognitive patterns that simultaneously enable rapid, accurate technical judgments while increasing our vulnerability to specific biases.

The very mechanisms that make us effective quality professionals—pattern recognition, schema-based processing, heuristic shortcuts derived from base rate experiences—are the same cognitive tools that generate bias. When I conduct investigations or facilitate risk assessments, my extensive experience with similar events creates expectations and assumptions that can blind me to novel failure modes or unexpected causal relationships. This isn’t a character flaw; it’s an inherent part of how expertise develops and operates.

Valerie’s emphasis on the need for trained facilitators in high-formality QRM activities reflects this reality. External facilitation isn’t just about process management—it’s about introducing cognitive diversity and bias detection capabilities that internal teams, no matter how experienced, cannot provide for themselves. The facilitator serves as a structured intervention against the GI Joe fallacy, embodying the systematic approaches that awareness alone cannot deliver.

From Awareness to Architecture: Building Bias-Resistant Quality Systems

The critical insight from both Valerie’s work and my writing about structured hypothesis formation is that effective bias management requires architectural solutions, not individual willpower. ICH Q9(R1)’s introduction of the “Managing and Minimizing Subjectivity” section represents recognition that regulatory compliance requires systematic approaches to cognitive bias management.

In my post on reducing subjectivity in quality risk management, I identified four strategies that directly address the limitations Valerie highlights about the GI Joe Bias:

  1. Leveraging Knowledge Management: Rather than relying on individual awareness, effective bias management requires systematic capture and application of objective information. When risk assessors can access structured historical data, supplier performance metrics, and process capability studies, they’re less dependent on potentially biased recollections or impressions.
  2. Good Risk Questions: The formulation of risk questions represents a critical intervention point. Well-crafted questions can anchor assessments in specific, measurable terms rather than vague generalizations that invite subjective interpretation. Instead of asking “What are the risks to product quality?”, effective risk questions might ask “What are the potential causes of out-of-specification dissolution results for Product X in the next 6 months based on the last three years of data?”
  3. Cross-Functional Teams: Valerie’s observation that we’re better at spotting biases in others translates directly into team composition strategies. Diverse, cross-functional teams naturally create the external perspective that individual bias recognition cannot provide. The manufacturing engineer, quality analyst, and regulatory specialist bring different cognitive frameworks that can identify blind spots in each other’s reasoning.
  4. Structured Decision-Making Processes: The tools Valerie mentions—PHA, FMEA, Ishikawa, bow-tie analysis—serve as external cognitive scaffolding that guides thinking through systematic pathways rather than relying on intuitive shortcuts that may be biased.

The Formality Framework: When and How to Escalate Bias Management

One of the most valuable aspects of ICH Q9(R1) is its introduction of the formality concept—the idea that different situations require different levels of systematic intervention. Valerie’s article implicitly addresses this by noting that “high formality QRM activities” require trained facilitators. This suggests a graduated approach to bias management that scales intervention intensity with decision importance.

This formality framework should incorporate bias management explicitly, giving organizations criteria for determining when and how intensively to apply bias mitigation strategies (a minimal sketch in code follows the list below):

  • Low Formality Situations: Routine decisions with well-understood parameters, limited stakeholders, and reversible outcomes. Basic bias awareness training and standardized checklists may be sufficient.
  • Medium Formality Situations: Decisions involving moderate complexity, uncertainty, or impact. These require cross-functional input, structured decision tools, and documentation of rationales.
  • High Formality Situations: Complex, high-stakes decisions with significant uncertainty, multiple conflicting objectives, or diverse stakeholders. These demand external facilitation, systematic bias checks, and formal documentation of how potential biases were addressed.
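
One way to make this escalation logic consistent and auditable is to encode it. The sketch below is a hypothetical illustration, not an ICH-endorsed algorithm; the factors, 1-to-3 scores, and thresholds are assumptions an organization would calibrate against its own decision history:

```python
from dataclasses import dataclass
from enum import Enum

class Formality(Enum):
    LOW = "low"        # bias awareness training + standardized checklists
    MEDIUM = "medium"  # cross-functional input + structured tools + documented rationale
    HIGH = "high"      # external facilitation + systematic bias checks + formal records

@dataclass
class DecisionContext:
    # Hypothetical inputs, each rated 1 (low) to 3 (high)
    complexity: int       # how poorly understood are the parameters?
    uncertainty: int      # how much is unknown or contested?
    impact: int           # patient/regulatory consequence if the decision is wrong
    irreversibility: int  # 1 = easily reversed, 3 = effectively irreversible

def required_formality(ctx: DecisionContext) -> Formality:
    """Map a decision context to a formality tier (illustrative thresholds)."""
    score = ctx.complexity + ctx.uncertainty + ctx.impact + ctx.irreversibility
    if score >= 10:
        return Formality.HIGH
    if score >= 7:
        return Formality.MEDIUM
    return Formality.LOW

# Example: a complex tech-transfer decision with high uncertainty
print(required_formality(DecisionContext(3, 3, 2, 2)))  # Formality.HIGH
```

Writing the tiering down in this form forces the threshold debate into the open, rather than leaving escalation to the very intuitions the framework is meant to discipline.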

This framework acknowledges that the GI Joe fallacy is most dangerous in high-formality situations where the stakes are highest and the cognitive demands greatest. It’s precisely in these contexts that our confidence in our ability to overcome bias through awareness becomes most problematic.

The Cultural Dimension: Creating Environments That Support Bias Recognition

Valerie’s emphasis on fostering humility by encouraging teams to acknowledge that “no one is immune to bias, even the most experienced professionals” connects to my observations about building expertise in quality organizations. Creating cultures that can effectively manage subjectivity requires more than tools and processes; it requires psychological safety that allows bias recognition without professional threat.

I’ve noted in past posts that organizations advancing beyond basic awareness levels demonstrate “systematic recognition of cognitive bias risks” with growing understanding that “human judgment limitations can affect risk assessment quality.” However, the transition from awareness to systematic application requires cultural changes that make bias discussion routine rather than threatening.

This cultural dimension becomes particularly important when we consider the ironic processing effects that Valerie references. When organizations create environments where acknowledging bias is seen as admitting incompetence, they inadvertently increase bias through suppression attempts. Teams that must appear confident and decisive may unconsciously avoid bias recognition because it threatens their professional identity.

The solution is creating cultures that frame bias recognition as professional competence rather than limitation. Just as we expect quality professionals to understand statistical process control or regulatory requirements, we should expect them to understand and systematically address their cognitive limitations.

Practical Implementation: Moving Beyond the GI Joe Fallacy

Building on Valerie’s recommendations for structured tools and systematic approaches, here are some specific implementation strategies that organizations can adopt to move beyond bias awareness toward bias management:

  • Bias Pre-mortems: Before conducting risk assessments, teams explicitly discuss what biases might affect their analysis and establish specific countermeasures. This makes bias consideration routine rather than reactive.
  • Devil’s Advocate Protocols: Systematic assignment of team members to challenge prevailing assumptions and identify information that contradicts emerging conclusions.
  • Perspective-Taking Requirements: Formal requirements to consider how different stakeholders (patients, regulators, operators) might view risks differently from the assessment team.
  • Bias Audit Trails: Documentation requirements that capture not just what decisions were made, but how potential biases were recognized and addressed during the decision-making process (a minimal record sketch follows this list).
  • External Review Requirements: For high-formality decisions, mandatory review by individuals who weren’t involved in the initial assessment and can provide fresh perspectives.
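
To make the bias audit trail idea concrete, here is a minimal sketch of what one record might capture; the field names are hypothetical, not drawn from any regulatory template:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BiasAuditEntry:
    """One record in a hypothetical bias audit trail for a QRM decision."""
    decision_id: str
    decision_date: date
    biases_considered: list[str]  # e.g. ["confirmation bias", "base rate neglect"]
    countermeasures: list[str]    # e.g. ["devil's advocate assigned", "external review"]
    dissenting_views: list[str] = field(default_factory=list)  # recorded, not smoothed over
    residual_concerns: str = ""   # what the team still could not rule out

entry = BiasAuditEntry(
    decision_id="RA-2025-014",
    decision_date=date(2025, 7, 1),
    biases_considered=["confirmation bias", "overconfidence"],
    countermeasures=["bias pre-mortem held", "fresh-eyes reviewer signed off"],
    dissenting_views=["QC analyst questioned the sampling plan's adequacy"],
)
```

The specific fields matter less than the principle: bias consideration leaves evidence, dissent is recorded rather than smoothed over, and residual concerns survive the meeting.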

These interventions acknowledge that bias management is not about eliminating human judgment—it’s about scaffolding human judgment with systematic processes that compensate for known cognitive limitations.

The Broader Implications: Subjectivity as Systemic Challenge

Valerie’s analysis of the GI Joe Bias connects to broader themes in my work about the effectiveness paradox and the challenges of building rigorous quality systems in an age of pop psychology. The pharmaceutical industry’s tendency to adopt appealing frameworks without rigorous evaluation extends to bias management strategies. Organizations may implement “bias training” or “awareness programs” that create the illusion of progress while failing to address the systematic changes needed for genuine improvement.

The GI Joe Bias serves as a perfect example of this challenge. It’s tempting to believe that naming the bias—recognizing that awareness isn’t enough—somehow protects us from falling into the awareness trap. But the bias is self-referential: knowing about the GI Joe Bias doesn’t automatically prevent us from succumbing to it when implementing bias management strategies.

This is why Valerie’s emphasis on systematic interventions rather than individual awareness is so crucial. Effective bias management requires changing the decision-making environment, not just the decision-makers’ knowledge. It requires building systems, not slogans.

A Call for Systematic Excellence in Bias Management

Valerie’s exploration of the GI Joe Bias provides a crucial call for advancing pharmaceutical quality management beyond the illusion that awareness equals capability. Her work, combined with ICH Q9(R1)’s explicit recognition of subjectivity challenges, creates an opportunity for the industry to develop more sophisticated approaches to cognitive bias management.

The path forward requires acknowledging that bias management is a core competency for quality professionals, equivalent to understanding analytical method validation or process characterization. It requires systematic approaches that scaffold human judgment rather than attempting to eliminate it. Most importantly, it requires cultures that view bias recognition as professional strength rather than weakness.

As I continue to build frameworks for reducing subjectivity in quality risk management and developing structured approaches to decision-making, Valerie’s insights about the limitations of awareness provide essential grounding. The GI Joe Bias reminds us that knowing is not half the battle—it’s barely the beginning.

The real battle lies in creating pharmaceutical quality systems that systematically compensate for human cognitive limitations while leveraging human expertise and judgment. That battle is won not through individual awareness or good intentions, but through systematic excellence in bias management architecture.

What structured approaches has your organization implemented to move beyond bias awareness toward systematic bias management? Share your experiences and challenges as we work together to advance the maturity of risk management practices in our industry.


Meet Valerie Mulholland

Dr. Valerie Mulholland is transforming how our industry thinks about quality risk management. As CEO and Principal Consultant at GMP Services in Ireland, Valerie brings over 25 years of hands-on experience auditing and consulting across biopharmaceutical, pharmaceutical, medical device, and blood transfusion industries throughout the EU, US, and Mexico.

But what truly sets Valerie apart is her unique combination of practical expertise and cutting-edge research. She recently earned her PhD from TU Dublin’s Pharmaceutical Regulatory Science Team, focusing on “Effective Risk-Based Decision Making in Quality Risk Management”. Her groundbreaking research has produced 13 academic papers, with four publications specifically developed to support ICH’s work—research that’s now incorporated into the official ICH Q9(R1) training materials. This isn’t theoretical work gathering dust on academic shelves; it’s research that’s actively shaping global regulatory guidance.

Why Risk Revolution Deserves Your Attention

The Risk Revolution podcast, co-hosted by Valerie alongside Nuala Calnan (25-year pharmaceutical veteran and Arnold F. Graves Scholar) and Dr. Lori Richter (Director of Risk Management at Ultragenyx with 21+ years industry experience), represents something unique in pharmaceutical podcasting. This isn’t your typical regulatory update show—it’s a monthly masterclass in advancing risk management maturity.

In an industry where staying current isn’t optional—it’s essential for patient safety—Risk Revolution offers the kind of continuing education that actually advances your professional capabilities. These aren’t recycled conference presentations; they’re conversations with the people shaping our industry’s future.

The Expertise Crisis: Why AI’s War on Entry-Level Jobs Threatens Quality Excellence

As pharmaceutical and biotech organizations rush to harness artificial intelligence to eliminate “inefficient” entry-level positions, we are at risk of creating a crisis that threatens the very foundation of quality expertise. The Harvard Business Review’s recent analysis of AI’s impact on entry-level jobs reads like a prophecy of organizational doom—one that quality leaders should heed before it’s too late.

Research from Stanford indicates that there has been a 13% decline in entry-level job opportunities for workers aged 22 to 25 since the widespread adoption of generative AI. The study shows that 50-60% of typical junior tasks—such as report drafting, research synthesis, data cleaning, and scheduling—can now be performed by AI. For quality organizations already facing expertise gaps, this trend signals a potential self-destructive path rather than increased efficiency.

Equally concerning, automation is leading to the phasing out of some traditional entry-level professional tasks. When I started in the field, newcomers would gain experience through tasks like batch record reviews and good documentation practices for protocols. However, with the introduction of electronic batch records and electronic validation management, these tasks have largely disappeared. AI is expected to accelerate this trend even further.

Everyone should go and read “The Perils of Using AI to Replace Entry-Level Jobs” by Amy C. Edmondson and Tomas Chamorro-Premuzic and then come back and read this post.

The Apprenticeship Dividend: What We Lose When We Skip the Journey

Every expert in pharmaceutical quality began somewhere. They learned to read batch records, investigated their first deviations, struggled through their first CAPA investigations, and gradually developed the pattern recognition that distinguishes competent from exceptional quality professionals. This journey, what Edmondson and Chamorro-Premuzic call the “apprenticeship dividend”, cannot be replicated by AI or compressed into senior-level training programs.

Consider commissioning, qualification, and validation (CQV) work in biotech manufacturing. Junior engineers traditionally started by documenting Installation Qualification protocols, learning to recognize when equipment specifications align with user requirements. They progressed to Operational Qualification, developing understanding of how systems behave under various conditions. Only after this foundation could they effectively design Performance Qualification strategies that demonstrate process capability.

When organizations eliminate these entry-level CQV roles in favor of AI-generated documentation and senior engineers managing multiple systems simultaneously, they create what appears to be efficiency. In reality, they’ve severed the pipeline that transforms technical contributors into systems thinkers capable of managing complex manufacturing operations.

The Expertise Pipeline: Building Quality Gardeners

As I’ve written previously about building competency frameworks for quality professionals, true expertise requires integration of technical knowledge, methodological skills, social capabilities, and self-management abilities. This integration occurs through sustained practice, mentorship, and gradual assumption of responsibility—precisely what entry-level positions provide.

The traditional path from Quality specialist to Quality Manager to Quality Director illustrates this progression:

Foundation Level: Learning to execute quality methods, understand requirements, and recognize when results fall outside acceptance criteria. Basic deviation investigation and CAPA support.

Intermediate Level: Taking ownership of requirement gathering, leading routine investigations, participating in supplier audits, and beginning to see connections between different quality systems.

Advanced Level: Designing audit activities, facilitating cross-functional investigations, mentoring junior staff, and contributing to strategic quality initiatives.

Leadership Level: Building quality cultures, designing organizational capabilities, and creating systems that enable others to excel.

Each level builds upon the previous, creating what we might call “quality gardeners”—professionals who nurture quality systems as living ecosystems rather than enforcing compliance through rigid oversight. Skip the foundation levels, and you cannot develop the sophisticated understanding required for advanced practice.

The False Economy of AI Substitution

Organizations defending entry-level job elimination often point to cost savings and “efficiency gains.” This thinking reflects a fundamental misunderstanding of how expertise develops and quality systems function. Consider risk management in biotech manufacturing—a domain where pattern recognition and contextual judgment are essential.

A senior risk management professional reviewing a contamination event can quickly identify potential failure modes, assess likelihood and severity, and design effective mitigation strategies. This capability developed through years of investigating routine deviations, participating in CAPA teams, and learning to distinguish significant risks from minor variations.

When AI handles initial risk assessments and senior professionals review only the outputs, we create a dangerous gap. The senior professional lacks the deep familiarity with routine variations that enables recognition of truly significant deviations. Meanwhile, no one is developing the foundational expertise needed to replace retiring experts.

The result is what is called expertise hollowing: organizations that appear capable on the surface but lack the deep competency required to handle complex challenges or adapt to changing conditions.

Building Expertise in a Quality Organization

Creating robust expertise development requires intentional design that recognizes both the value of human development and the capabilities of AI tools. Rather than eliminating entry-level positions, quality organizations should redesign them to maximize learning value while leveraging AI appropriately.

Structured Apprenticeship Programs

Quality organizations should implement formal apprenticeship programs that combine academic learning with progressive practical responsibility. These programs should span 2-3 years and include:

Year 1: Foundation Building

  • Basic GMP principles and quality systems overview
  • Hands-on experience with routine quality operations
  • Mentorship from experienced quality professionals
  • Participation in investigations under supervision

Year 2: Skill Development

  • Specialized training in areas like CQV, risk management, or supplier quality
  • Leading routine activities with oversight
  • Cross-functional project participation
  • Beginning to train newer apprentices

Year 3: Integration and Leadership

  • Independent project leadership
  • Mentoring responsibilities
  • Contributing to strategic quality initiatives
  • Preparation for advanced roles

As I evaluate the organization I am building, this is a critical part of the vision.

Mentorship as Core Competency

Every senior quality professional should be expected to mentor junior colleagues as a core job responsibility, not an additional burden. This requires:

  • Formal Mentorship Training: Teaching experienced professionals how to transfer tacit knowledge, provide effective feedback, and create learning opportunities.
  • Protected Time: Ensuring mentors have dedicated time for development activities, not just “additional duties as assigned.”
  • Measurement Systems: Tracking mentorship effectiveness through apprentice progression, retention rates, and long-term career development.
  • Recognition Programs: Rewarding excellent mentorship as a valued contribution to organizational capability.

Progressive Responsibility Models

Entry-level roles should be designed with clear progression pathways that gradually increase responsibility and complexity:

CQV Progression Example:

  • CQV Technician: Executing test protocols, documenting results, supporting commissioning activities
  • CQV Specialist: Writing protocols, leading qualification activities, interfacing with vendors
  • CQV Engineer: Designing qualification strategies, managing complex projects, training others
  • CQV Manager: Building organizational CQV capabilities, strategic planning, external representation

Risk Management Progression:

  • Risk Analyst: Data collection, basic risk identification, supporting formal assessments
  • Risk Specialist: Facilitating risk assessments, developing mitigation strategies, training stakeholders
  • Risk Manager: Designing risk management systems, building organizational capabilities, strategic oversight

AI as Learning Accelerator, Not Replacement

Rather than replacing entry-level workers, AI should be positioned as a learning accelerator that enables junior professionals to handle more complex work earlier in their careers:

  • Enhanced Analysis Capabilities: AI can help junior professionals identify patterns in large datasets, enabling them to focus on interpretation and decision-making rather than data compilation (a minimal sketch of such an aid follows this list).
  • Simulation and Modeling: AI-powered simulations can provide safe environments for junior professionals to practice complex scenarios without real-world consequences.
  • Knowledge Management: AI can help junior professionals access relevant historical examples, best practices, and regulatory guidance more efficiently.
  • Quality Control: AI can help ensure that junior professionals’ work meets standards while they’re developing expertise, providing a safety net during the learning process.
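
As a sketch of the aid-not-substitute principle, the helper below flags where a monitoring series drifts from its baseline using an exponentially weighted moving average (EWMA), so a junior reviewer spends time interpreting rather than scanning. The baseline window, weights, and data are illustrative assumptions:

```python
def ewma_drift_flags(values, baseline_n=8, lam=0.2, k=2.5):
    """Flag indices where an EWMA drifts beyond k-sigma limits derived
    from an initial baseline window. A learning aid: it highlights
    where to look, not what to conclude."""
    base = values[:baseline_n]
    mean = sum(base) / len(base)
    var = sum((v - mean) ** 2 for v in base) / (len(base) - 1)
    # steady-state EWMA control limit (textbook approximation)
    limit = k * (var ** 0.5) * (lam / (2 - lam)) ** 0.5

    flags, ewma = [], mean
    for i, v in enumerate(values):
        ewma = lam * v + (1 - lam) * ewma
        if abs(ewma - mean) > limit:
            flags.append(i)
    return flags

# Illustrative environmental-monitoring counts with a late upward drift
counts = [2, 3, 2, 1, 3, 2, 2, 3, 4, 5, 6, 7, 8]
print(ewma_drift_flags(counts))  # -> [9, 10, 11, 12]: where to start asking questions
```

Note what the tool does not do: it offers no conclusion. The junior professional still has to investigate the flagged region, and that investigation is where the learning happens.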

The Cost of Expertise Shortcuts

Organizations that eliminate entry-level positions in pursuit of short-term efficiency gains will face predictable long-term consequences:

  • Expertise Gaps: As senior professionals retire or move to other organizations, there will be no one prepared to replace them.
  • Reduced Innovation: Innovation often comes from fresh perspectives questioning established practices—precisely what entry-level employees provide.
  • Cultural Degradation: Quality cultures are maintained through socialization and shared learning experiences that occur naturally in diverse, multi-level teams.
  • Risk Blindness: Without the deep familiarity that comes from hands-on experience, organizations become vulnerable to risks they cannot recognize or understand.
  • Competitive Disadvantage: Organizations with strong expertise development programs will attract and retain top talent while building superior capabilities.

Choosing Investment Over Extraction

The decision to eliminate entry-level positions represents a choice between short-term cost extraction and long-term capability investment. For quality organizations, this choice is particularly stark because our work depends fundamentally on human judgment, pattern recognition, and the ability to adapt to novel situations.

AI should augment human capability, not replace the human development process. The organizations that thrive in the next decade will be those that recognize expertise development as a core competency and invest accordingly. They will build “quality gardeners” who can nurture adaptive, resilient quality systems rather than simply enforce compliance.

The expertise crisis is not inevitable—it’s a choice. Quality leaders must choose wisely, before the cost of that choice becomes irreversible.

When 483s Reveal Zemblanity: The Catalent Investigation – A Case Study in Systemic Quality Failure

The Catalent Indiana Form 483 from July 2025 reads like a textbook example of my newest word, zemblanity, in risk management—the patterned, preventable misfortune that accrues not from blind chance, but from human agency and organizational design choices that quietly hardwire failure into our operations.

Twenty hair contamination deviations. Seven months to notify suppliers. Critical equipment failures dismissed as “not impacting SISPQ.” Media fill programs missing the very interventions they should validate. This isn’t random bad luck—it’s a quality system that has systematically normalized exactly the kinds of deviations that create inspection findings.

The Architecture of Inevitable Failure

Reading through the six major observations, three systemic patterns emerge that align perfectly with the hidden architecture of failure I discussed in my recent post on zemblanity.

Pattern 1: Investigation Theatre Over Causal Understanding

Observation 1 reveals what happens when investigations become compliance exercises rather than learning tools. The hair contamination trend—20 deviations spanning multiple product codes—received investigation resources proportional to internal requirements, not actual risk. As I’ve written about causal reasoning versus negative reasoning, these investigations focused on what didn’t happen rather than understanding the causal mechanisms that allowed hair to systematically enter sterile products.

The tribal knowledge around plunger seating issues exemplifies this perfectly. Operators developed informal workarounds because the formal system failed them, yet when this surfaced during an investigation, it wasn’t captured as a separate deviation worthy of systematic analysis. The investigation closed the immediate problem without addressing the systemic failure that created the conditions for operator innovation in the first place.

Pattern 2: Trend Blindness and Pattern Fragmentation

The most striking aspect of this 483 is how pattern recognition failed across multiple observations. Twenty-three work orders on critical air handling systems. Ten work orders on a single critical water system. Recurring membrane failures. Each treated as isolated maintenance issues rather than signals of systematic degradation.

This mirrors what I’ve discussed about normalization of deviance—where repeated occurrences of problems that don’t immediately cause catastrophe gradually shift our risk threshold. The work orders document a clear pattern of equipment degradation, yet each was risk-assessed as “not impacting SISPQ” without apparent consideration of cumulative or interactive effects.

Pattern 3: Control System Fragmentation

Perhaps most revealing is how different control systems operated in silos. Visual inspection systems that couldn’t detect the very defects found during manual inspection. Environmental monitoring that didn’t include the most critical surfaces. Media fills that omitted interventions documented as root causes of previous failures.

This isn’t about individual system inadequacy—it’s about what happens when quality systems evolve as collections of independent controls rather than integrated barriers designed to work together.

Solutions: From Zemblanity to Serendipity

Drawing from the approaches I’ve developed on this blog, here’s how Catalent could transform their quality system from one that breeds inevitable failure to one that creates conditions for quality serendipity:

Implement Causally Reasoned Investigations

The Energy Safety Canada white paper I discussed earlier this year offers a powerful framework for moving beyond counterfactual analysis. Instead of concluding that operators “failed to follow procedure” regarding stopper installation, investigate why the procedure was inadequate for the equipment configuration. Instead of noting that supplier notification was delayed seven months, understand the systemic factors that made immediate notification unlikely.

Practical Implementation:

  • Retrain investigators in causal reasoning techniques
  • Require investigation sponsors (area managers) to set clear expectations for causal analysis
  • Implement structured causal analysis tools like Cause-Consequence Analysis
  • Focus on what actually happened and why it made sense to people at the time
  • Implement rubrics to guide consistency

Build Integrated Barrier Systems

The take-the-best heuristic I recently explored offers a powerful lens for barrier analysis. Rather than implementing multiple independent controls, identify the single most causally powerful barrier that would prevent each failure type, then design supporting barriers that enhance rather than compete with the primary control.

For hair contamination specifically:

  • Implement direct stopper surface monitoring as the primary barrier
  • Design visual inspection systems specifically to detect proteinaceous particles
  • Create supplier qualification that includes contamination risk assessment
  • Establish real-time trend analysis linking supplier lots to contamination events

Establish Dynamic Trend Integration

Traditional trending treats each system in isolation—environmental monitoring trends, deviation trends, CAPA trends, maintenance trends. The Catalent 483 shows what happens when these parallel trend systems fail to converge into integrated risk assessment.

Integrated Trending Framework:

  • Create cross-functional trend review combining all quality data streams
  • Implement predictive analytics linking maintenance patterns to quality risks
  • Establish trigger points where equipment degradation patterns automatically initiate quality investigations (a minimal sketch follows this list)
  • Design Product Quality Reviews that explicitly correlate equipment performance with product quality data
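
As a minimal sketch of what such a trigger point could look like in code (the 180-day window and five-work-order threshold are assumptions to be calibrated against site history, not recommendations):

```python
from collections import defaultdict
from datetime import date, timedelta

def degradation_triggers(work_orders, window_days=180, threshold=5):
    """Flag systems whose corrective work orders within a rolling window
    reach a threshold, so the pattern escalates to quality review instead
    of being closed as isolated maintenance events."""
    by_system = defaultdict(list)
    for system, wo_date in work_orders:
        by_system[system].append(wo_date)

    flagged = {}
    for system, dates in by_system.items():
        dates.sort()
        for d in dates:
            in_window = [x for x in dates if d - timedelta(days=window_days) <= x <= d]
            if len(in_window) >= threshold:
                flagged[system] = (d, len(in_window))  # date the trigger fired, count
                break
    return flagged

# Illustrative data: repeated work orders on one air handler
orders = [("AHU-3", date(2025, m, 1)) for m in range(1, 7)]
orders += [("WFI-1", date(2025, 2, 10)), ("WFI-1", date(2025, 6, 2))]
print(degradation_triggers(orders))  # {'AHU-3': (datetime.date(2025, 5, 1), 5)}
```

The point is structural: the escalation no longer depends on any individual noticing the pattern across twenty-three separate work orders.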

Transform CAPA from Compliance to Learning

The recurring failures documented in this 483—repeated hair findings after CAPA implementation, continued equipment failures after “repair”—reflect what I’ve called the effectiveness paradox. Traditional CAPA focuses on thoroughness over causal accuracy.

CAPA Transformation Strategy:

  • Implement a proper CAPA hierarchy, prioritizing elimination and replacement over detection and mitigation
  • Establish effectiveness criteria before implementation, not after
  • Create learning-oriented CAPA reviews that ask “What did this teach us about our system?”
  • Link CAPA effectiveness directly to recurrence prevention rather than procedural compliance

Build Anticipatory Quality Architecture

The most sophisticated element would be creating what I call “quality serendipity”—systems that create conditions for positive surprises rather than inevitable failures. This requires moving from reactive compliance to anticipatory risk architecture.

Anticipatory Elements:

  • Implement supplier performance modeling that predicts contamination risk before it manifests (see the sketch after this list)
  • Create equipment degradation models that trigger quality assessment before failure
  • Establish operator feedback systems that capture emerging risks in real-time
  • Design quality reviews that explicitly seek weak signals of system stress
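
A minimal sketch of the supplier modeling idea follows; the fields and weights are invented for illustration, and any real model would need validation against site data:

```python
def supplier_risk_score(lots):
    """Score a supplier's recent lots for contamination risk.
    `lots` is a chronological list of dicts with hypothetical fields;
    the weights are illustrative assumptions, not validated coefficients."""
    recent = lots[-10:]  # focus on the most recent history
    if not recent:
        return 0.0
    defect_rate = sum(lot["defects"] for lot in recent) / len(recent)
    audit_gap = recent[-1]["years_since_audit"]
    trend = recent[-1]["defects"] - recent[0]["defects"]  # rising or falling?
    return 0.5 * defect_rate + 0.3 * audit_gap + 0.2 * max(trend, 0)

lots = [{"defects": d, "years_since_audit": 2.0} for d in (0, 0, 1, 1, 2, 3)]
print(round(supplier_risk_score(lots), 2))  # 1.78: higher -> audit/inspect earlier
```

Even a crude score like this changes the conversation: the question becomes “why is this supplier trending up?” before the contamination event rather than after.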

The Cultural Foundation

None of these technical solutions will work without addressing the cultural foundation that allowed this level of systematic failure to persist. The 483’s most telling detail isn’t any single observation—it’s the cumulative picture of an organization where quality indicators were consistently rationalized rather than interrogated.

As I’ve written about quality culture, without psychological safety and learning orientation, people won’t commit to building and supporting robust quality systems. The tribal knowledge around plunger seating, the normalization of recurring equipment failures, the seven-month delay in supplier notification—these suggest a culture where adaptation to system inadequacy became preferable to system improvement.

The path forward requires leadership that creates conditions for quality serendipity: reward pattern recognition over problem solving, celebrate early identification of weak signals, and create systems that make the right choice the easy choice.

Beyond Compliance: Building Anti-Fragile Quality

The Catalent 483 offers more than a cautionary tale—it provides a roadmap for quality transformation. Every observation represents an invitation to build quality systems that become stronger under stress rather than more brittle.

Organizations that master this transformation—moving from zemblanity-generating systems to serendipity-creating ones—will find that quality becomes not just a regulatory requirement but a competitive advantage. They’ll detect risks earlier, respond more effectively, and create the kind of operational resilience that turns disruption into opportunity.

The choice is clear: continue managing quality as a collection of independent compliance activities, or build integrated systems designed to create the conditions for sustained quality success. The Catalent case shows us what happens when we choose poorly. The frameworks exist to choose better.


What patterns of “inevitable failure” do you see in your own quality systems? How might shifting from negative reasoning to causal understanding transform your approach to investigations? Share your thoughts—this conversation about quality transformation is one we need to have across the industry.

Zemblanity

William Boyd is a favorite author for me, so I was pleased to read The hidden architecture of failure – understanding “zemblanity”. While I’ve read Armadillo, I missed the applicability of the word.

Zemblanity is actually a pretty good word for our field. I’m going to test it out, see if it has legs.

Zemblanity in Risk Management: Turning the Mirror on Hidden System Fragility

If you’re reading this blog, you already know that risk management isn’t about tallying up hypothetical hazards and ticking regulatory boxes. But have you ever stopped to ask whether your systems are quietly hardwiring failure—almost by design? Christian Busch’s recent LSE Business Review article lands on a word for this: zemblanity—the “opposite of serendipity,” or, more pointedly, bad luck that’s neither blind nor random, but structured right into the bones of our operations.

This idea resonates powerfully with the transformations occurring in pharmaceutical quality systems—the same evolution guiding the draft revision of Eudralex Volume 4 Chapter 1. In both Busch’s analysis and regulatory trends, we’re urged to confront root causes, trace risk back to its hidden architecture, and actively dismantle the quiet routines and incentives that breed failure. This isn’t mere thought leadership; it’s a call to reexamine how our own practices may be cultivating fields of inevitable misfortune—the very zemblanity that keeps reputational harm and catastrophic events just a few triggers away.

The Zemblanity Field: Where Routine Becomes Risk

Let’s be honest: the ghosts in our machines are rarely accidents. They don’t erupt out of blue-sky randomness. They were grown in cultures that prized efficiency over resilience, chased short-term gains, and normalized critical knowledge gaps. In my blog post on normalization of deviance (see: “Why Normalization of Deviance Threatens your CAPA Logic”), I map out how subtle cues and “business as usual” thinking produce exactly these sorts of landmines.

Busch’s zemblanity—the patterned and preventable misfortune that accrues from human agency—makes for a brutal mirror. Risk managers must ask: Which of our controls are truly protective, and which merely deliver the warm glow of compliance while quietly amplifying vulnerability? If serendipity is a lucky break, zemblanity is the misstep built into the schedule, the fragility we invite by squeezing the system too hard.

From Hypotheticals to Archaeology: How to Evaluate Zemblanity

So, how does one bring zemblanity into practical risk management? It starts by shifting the focus from cataloguing theoretical events to archaeology: uncovering the layered decisions, assumptions, and interdependencies that have silently locked in failure modes.

1. Map Near Misses and Routine Workarounds

Stop treating near misses as flukes. Every recurrence is a signpost pointing to underlying zemblanity. Investigate not just what happened, but why the system allowed it in the first place. High-performing teams capture these “almost events” the way a root cause analyst mines deviations for actionable knowledge.

2. Scrutinize Margins and Slack

Where are your processes running on fumes? Organizations that cut every buffer in service of “efficiency” are constructing perfect conditions for zemblanity. Whether it’s staffing, redundancy in critical utilities, or quality reserves, scrutinize these margins. If slim tolerances have become your operating norm, you’re nurturing the zemblanity field.

3. Map Hidden Interdependencies

Borrowing from system dynamics and failure mode mapping, draw out the connections you typically overlook and the informal routes by which information or pressure travels. Build reverse timelines—starting at failure—to trace seemingly disparate weak points back to core drivers.
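
A reverse timeline is, in effect, a backward walk over a cause-and-effect graph. Here is a minimal sketch with a hypothetical failure graph; the node names are invented for illustration:

```python
def trace_back(graph, failure):
    """Walk a cause->effect graph backwards from a failure node,
    returning contributors grouped by distance (layer 1 = proximate causes)."""
    parents = {}  # invert the edges: for each effect, who contributed to it?
    for cause, effects in graph.items():
        for e in effects:
            parents.setdefault(e, []).append(cause)

    layers, seen, frontier = [], {failure}, [failure]
    while frontier:
        nxt = [p for node in frontier for p in parents.get(node, []) if p not in seen]
        nxt = list(dict.fromkeys(nxt))  # dedupe while keeping order
        if not nxt:
            break
        seen.update(nxt)
        layers.append(nxt)
        frontier = nxt
    return layers

# Hypothetical zemblanity field: edges point from cause to effect
graph = {
    "headcount cuts": ["deferred maintenance", "skipped line clearance"],
    "deferred maintenance": ["seal degradation"],
    "seal degradation": ["contamination event"],
    "skipped line clearance": ["contamination event"],
}
print(trace_back(graph, "contamination event"))
# [['seal degradation', 'skipped line clearance'], ['deferred maintenance', 'headcount cuts']]
```

Each layer steps one remove further from the proximate cause; the deepest layers are usually where the zemblanity was designed in.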

4. Interrogate Culture and Incentives

A robust risk culture isn’t measured by the thoroughness of your SOPs, but by whether staff feel safe raising “bad news” and questioning assumptions.

5. Audit Cost-Cutting and “Optimizations”

Lean initiatives and cost-cutting programs can easily morph from margin enhancement to zemblanity engines. Run post-implementation reviews of such changes: was resilience sacrificed for pennywise savings? If so, add these to your risk register, and reframe “efficiency” in light of the total cost of a fragile response to disruption.

6. Challenge “Never Happen Here” Assumptions

Every mature risk program needs a cadence of challenging assumptions. Run pre-mortem workshops with line staff and cross-functional teams to simulate how multi-factor failures could cascade. Spotlight scenarios previously dismissed as “impossible,” ask why, and carry the lessons into quality system design.

Operationalizing Zemblanity in PQS

The Eudralex Chapter 1 draft’s movement from static compliance to dynamic, knowledge-centric risk management lines up perfectly here. Embedding zemblanity analysis is less about new tools and more about repurposing familiar practices: after-action reviews, bowtie diagrams, CAPA trend analysis, incident logs—all sharpened with explicit attention to how our actions and routines cultivate not just risk, but structural misfortune.

Your Product Quality Review (PQR) process, for instance, should now interrogate near misses, not just reject rates or OOS incidents. It is time to pivot from dull data reviews to causal inference—asking how past knowledge blind spots or hasty “efficiencies” became hazards.

And as pharmaceutical supply chains grow ever more interdependent and brittle, proactive risk detection needs routine revisiting. Integrate zemblanity logic into your risk and resilience dashboards—flag not just frequency, but pattern, agency, and the cultural drivers of preventable failures.

Toward Serendipity: Dismantle Zemblanity, Build Quality Luck

Risk professionals can no longer limit themselves to identifying hazards and correcting defects post hoc. Proactive knowledge management and an appetite for self-interrogation will mark the difference between organizations set up for breakthroughs and those unwittingly primed for avoidable disaster.

The challenge—echoed in both Busch’s argument and the emergent GMP landscape—is clear: shrink the zemblanity field. Turn pattern-seeking into your default. Reward curiosity within your team. Build analytic vigilance into every level of the organization. Only then can resilience move from rhetoric to reality, and only then can your PQS become not just a bulwark against failure, but a platform for continuous, serendipitous improvement.