Beyond Malfunction Mindset: Normal Work, Adaptive Quality, and the Future of Pharmaceutical Problem-Solving

Beyond the Shadow of Failure

Problem-solving is too often shaped by the assumption that the system is perfectly understood and fully specified. If something goes wrong—a deviation, a batch out-of-spec, or a contamination event—our approach is to dissect what “failed” and fix that flaw, believing this will restore order. This way of thinking, which I call the malfunction mindset, is as ingrained as it is incomplete. It assumes that successful outcomes are the default, that work always happens as written in SOPs, and that only failure deserves our scrutiny.

But here’s the paradox: most of the time, our highly complex manufacturing environments actually succeed—often under imperfect, shifting, and not fully understood conditions. If we only study what failed, and never question how our systems achieve their many daily successes, we miss the real nature of pharmaceutical quality: it is not the absence of failure, but the presence of robust, adaptive work. Taking this broader, more nuanced perspective is not just an academic exercise—it’s essential for building resilient operations that truly protect patients, products, and our organizations.

Drawing on my earlier thinking about zemblanity (the predictable but often overlooked negative outcomes of well-intentioned quality fixes), the effectiveness paradox (why “nothing bad happened” isn’t proof your quality system works), and the persistent gap between work-as-imagined and work-as-done, this post explores why the malfunction mindset persists, how it distorts investigations, and what future-ready quality management should look like.

The Allure—and Limits—of the Failure Model

Why do we reflexively look for broken parts and single points of failure? It is, as Sidney Dekker has argued, both comforting and defensible. When something goes wrong, you can always point to a failed sensor, a missed checklist, or an operator error. This approach—introducing another level of documentation, another check, another layer of review—offers a sense of closure and regulatory safety. After all, as long as you can demonstrate that you “fixed” something tangible, you’ve fulfilled investigational due diligence.

Yet this fails to account for how quality is actually produced—or lost—in the real world. The malfunction model treats systems like complicated machines: fix the broken gear, oil the creaky hinge, and the machine runs smoothly again. But, as Dekker reminds us in Drift Into Failure, such linear thinking ignores the drift, adaptation, and emergent complexity that characterize real manufacturing environments. The truth is, in complex adaptive systems like pharmaceutical manufacturing, it often takes more than one “error” for failure to manifest. The system absorbs small deviations continuously, adapting and flexing until, sometimes, a boundary is crossed and a problem surfaces.

W. Edwards Deming’s insight rings truer than ever: most problems result from the system itself, not from individual faults. A sustainable approach to quality is one that designs for success, and that means understanding the system-wide properties enabling robust performance, not just eliminating isolated malfunctions.

Procedural Fundamentalism: The Work-as-Imagined Trap

One of the least examined, yet most impactful, contributors to the malfunction mindset is procedural fundamentalism—the belief that the written procedure is both a complete specification and an accurate description of work. This feels rigorous and provides compliance comfort, but it is a profound misreading of how work actually happens in pharmaceutical manufacturing.

Work-as-imagined, as elucidated by Erik Hollnagel and others, represents an abstraction: it is how distant architects of SOPs visualize the “correct” execution of a process. Yet, real-world conditions—resource shortages, unexpected interruptions, mismatched raw materials, shifting priorities—force adaptation. Operators, supervisors, and Quality professionals do not simply “follow the recipe”: they interpret, improvise, and—crucially—adjust on the fly.

When we treat procedures as authoritative descriptions of reality, we create the proxy problem: our investigations compare real operations against an imagined baseline that never fully existed. Deviations become automatically framed as problem points, and success is redefined as rigid adherence, regardless of context or outcome.

Complexity, Performance Variability, and Real Success

So, how do pharmaceutical operations succeed so reliably despite the ever-present complexity and variability of daily work?

The answer lies in embracing performance variability as a feature of robust systems, not a flaw. In high-reliability environments—from aviation to medicine to pharmaceutical manufacturing—success is routinely achieved not by demanding strict compliance, but by cultivating adaptive capacity.

Consider environmental monitoring in a sterile suite: The procedure may specify precise times and locations, but a seasoned operator, noticing shifts in people flow or equipment usage, might proactively sample a high-risk area more frequently. This adaptation—not captured in work-as-imagined—actually strengthens data integrity. Yet, traditional metrics would treat this as a procedural deviation.

This is the paradox of the malfunction mindset: in seeking to eliminate all performance variability, we risk undermining precisely those adaptive behaviors that produce reliable quality under uncertainty.

Why the Malfunction Mindset Persists: Cognitive Comfort and Regulatory Reinforcement

Why do organizations continue to privilege the malfunction mindset, even as evidence accumulates of its limits? The answer is both psychological and cultural.

Component breakdown thinking is psychologically satisfying—it offers a clear problem, a specific cause, and a direct fix. For regulatory agencies, it is easy to measure and audit: did the deviation investigation determine the root cause, did the CAPA address it, does the documentation support this narrative? Anything that doesn’t fit this model is hard to defend in audits or inspections.

Yet this approach offers, at best, a partial diagnosis and, at worst, the illusion of control. It encourages organizations to catalog deviations while remaining blind to the much broader universe of unexamined daily adaptations that actually determine system robustness.

Complexity Science and the Art of Organizational Success

To move toward a more accurate—and ultimately more effective—model of quality, pharmaceutical leaders must integrate the insights of complexity science. Drawing from the work of Stuart Kauffman and others at the Santa Fe Institute, we understand that the highest-performing systems operate not at the edge of rigid order, but at the “edge of chaos,” where structure is balanced with adaptability.

In these systems, success and failure both arise from emergent properties—the patterns of interaction between people, procedures, equipment, and environment. The most meaningful interventions, therefore, address how the parts interact, not just how each part functions in isolation.

This explains why traditional root cause analysis, focused on the parts, often fails to produce lasting improvements; it cannot account for outcomes that emerge only from the collective dynamics of the system as a whole.

Investigating for Learning: The Take-the-Best Heuristic

A key innovation needed in pharmaceutical investigations is a shift to what Hollnagel calls Safety-II thinking: focusing on how things go right as well as why they occasionally go wrong.

Here, the take-the-best heuristic becomes crucial. Instead of compiling lists of all deviations, ask: Among all contributing factors, which one, if addressed, would have the most powerful positive impact on future outcomes, while preserving adaptive capacity? This approach ensures investigations generate actionable, meaningful learning, rather than feeding the endless paper chase of “compliance theater.”
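The take-the-best selection described above can be sketched in a few lines. This is an illustrative model, not a prescribed method: the factor names, the scoring scale, and the idea of netting expected improvement against adaptive-capacity cost are all assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class ContributingFactor:
    name: str
    expected_improvement: float  # 0-1: estimated effect on future outcomes
    capacity_cost: float         # 0-1: how much the fix would constrain adaptation

def take_the_best(factors):
    # Pick the single factor whose remediation offers the largest net
    # benefit: expected improvement minus adaptive capacity consumed.
    return max(factors, key=lambda f: f.expected_improvement - f.capacity_cost)

candidates = [
    ContributingFactor("add another review signature", 0.2, 0.5),
    ContributingFactor("clarify the sampling objective in the SOP", 0.6, 0.1),
    ContributingFactor("retrain a single operator", 0.3, 0.2),
]
best = take_the_best(candidates)
```

The point of the sketch is the single-selection discipline: one intervention chosen for net positive impact, rather than a CAPA for every item on the deviation list.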

Building Systems That Support Adaptive Capability

Taking complexity and adaptive performance seriously requires practical changes to how we design procedures, train, oversee, and measure quality.

  • Procedure Design: Make explicit the distinction between objectives and methods. Procedures should articulate clear quality goals, specify necessary constraints, but deliberately enable workers to choose methods within those boundaries when faced with new conditions.
  • Training: Move beyond procedural compliance. Develop adaptive expertise in your staff, so they can interpret and adjust sensibly—understanding not just “what” to do, but “why” it matters in the bigger system.
  • Oversight and Monitoring: Audit for adaptive capacity. Don’t just track “compliance” but also whether workers have the resources and knowledge to adapt safely and intelligently. Positive performance variability (smart adaptations) should be recognized and studied.
  • Quality System Design: Build systematic learning from both success and failure. Examine ordinary operations to discern how adaptive mechanisms work, and protect these capabilities rather than squashing them in the name of “control.”
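The procedure-design principle above, separating objectives and constraints from methods, can be made concrete as a data structure. The field names and the environmental-monitoring example values are hypothetical, chosen to echo the sterile-suite scenario earlier in the post.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProcedureStep:
    objective: str             # the quality goal this step must achieve
    constraints: List[str]     # non-negotiable boundaries
    default_method: str        # the recommended path under normal conditions
    adaptation_guidance: str   # when and how performers may choose another method

step = ProcedureStep(
    objective="Demonstrate the suite remains within microbial limits",
    constraints=["Use qualified media", "Sample during active operations"],
    default_method="Sample locations A-D at the scheduled times",
    adaptation_guidance="Increase sampling of any location with elevated traffic",
)
```

Writing procedures this way makes the operator’s latitude explicit and auditable, instead of leaving adaptation to happen silently outside the document.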

Leadership and Systems Thinking

Realizing this vision depends on a transformation in leadership mindset—from one seeking control to one enabling adaptive capacity. Deming’s profound knowledge and the principles of complexity leadership remind us that what matters is not enforcing ever-stricter compliance, but cultivating an organizational context where smart adaptation and genuine learning become standard.

Leadership must:

  • Distinguish between complicated and complex: Apply detailed procedures to the former (e.g., calibration), but support flexible, principles-based management for the latter.
  • Tolerate appropriate uncertainty: Not every problem has a clear, single answer. Creating psychological safety is essential for learning and adaptation during ambiguity.
  • Develop learning organizations: Invest in deep understanding of operations, foster regular study of work-as-done, and celebrate insights from both expected and unexpected sources.

Practical Strategies for Implementation

Turning these insights into institutional practice involves a systematic, research-inspired approach:

  • Start procedure development with observation of real work before specifying methods. Small-scale pilots and mock exercises are critical.
  • Employ cognitive apprenticeship models in training, so that experience, reasoning under uncertainty, and systems thinking become core competencies.
  • Begin investigations with appreciative inquiry—map out how the system usually works, not just how it trips up.
  • Measure leading indicators (capacity, information flow, adaptability) not just lagging ones (failures, deviations).
  • Create closed feedback loops for corrective actions—insisting every intervention be evaluated for impact on both compliance and adaptive capacity.
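The closed-feedback-loop idea in the last bullet can be sketched as a simple review check. Everything here is illustrative: the two delta metrics and the flagging rule are assumptions, but they capture the zemblanity pattern the post describes, a fix that improves compliance while eroding adaptive capacity.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InterventionReview:
    description: str
    compliance_delta: float  # change in a compliance metric after the CAPA (+ is better)
    adaptive_delta: float    # change in an adaptability measure after the CAPA (+ is better)

def flag_zemblanity_risks(reviews: List[InterventionReview]) -> List[str]:
    # A CAPA that holds or improves compliance while reducing adaptive
    # capacity is exactly the trade-off this post warns against.
    return [r.description for r in reviews
            if r.compliance_delta >= 0 and r.adaptive_delta < 0]

reviews = [
    InterventionReview("added second verification signature", 0.1, -0.3),
    InterventionReview("clarified acceptance criteria", 0.2, 0.1),
]
flagged = flag_zemblanity_risks(reviews)
```

Even a crude check like this forces every intervention to be evaluated on both axes, rather than declared effective because a deviation count went down.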

Scientific Quality Management and Adaptive Systems: No Contradiction

The tension between rigorous scientific quality management (QbD, process validation, risk management frameworks) and support for adaptation is a false dilemma. Indeed, genuine scientific quality management starts with humility: the recognition that our understanding of complex systems is always partial, our controls imperfect, and our frameworks provisional.

A falsifiable quality framework embeds learning and adaptation at its core—treating deviations as opportunities to test and refine models, rather than simply checkboxes to complete.

The best organizations are not those that experience the fewest deviations, but those that learn fastest from both expected and unexpected events, and apply this knowledge to strengthen both system structure and adaptive capacity.

Embracing Normal Work: Closing the Gap

Normal pharmaceutical manufacturing is not the story of perfect procedural compliance; it’s the story of people, working together to achieve quality goals under diverse, unpredictable, and evolving conditions. This is both more challenging—and more rewarding—than any plan prescribed solely by SOPs.

To truly move the needle on pharmaceutical quality, organizations must:

  • Embrace performance variability as evidence of adaptive capacity, not just risk.
  • Investigate for learning, not blame; study success, not just failure.
  • Design systems to support both structure and flexible adaptation—never sacrificing one entirely for the other.
  • Cultivate leadership that values humility, systems thinking, and experimental learning, creating a culture comfortable with complexity.

This approach will not be easy. It means questioning decades of compliance custom, organizational habit, and intellectual ease. But the payoff is immense: more resilient operations, fewer catastrophic surprises, and, above all, improved safety and efficacy for the patients who depend on our products.

The challenge—and the opportunity—facing pharmaceutical quality management is to evolve beyond compliance theater and malfunction thinking into a new era of resilience and organizational learning. Success lies not in the illusory comfort of perfectly executed procedures, but in the everyday adaptations, intelligent improvisation, and system-level capabilities that make those successes possible.

The call to action is clear: Investigate not just to explain what failed, but to understand how, and why, things so often go right. Protect, nurture, and enhance the adaptive capacities of your organization. In doing so, pharmaceutical quality can finally become more than an after-the-fact audit; it will become the creative, resilient capability that patients, regulators, and organizations genuinely want to hire.

Applying Jobs-to-Be-Done to Risk Management

In my recent exploration of the Jobs-to-Be-Done (JTBD) tool for process improvement, I examined how this customer-centric approach could revolutionize our understanding of deviation management. I want to extend that analysis to another fundamental challenge in pharmaceutical quality: risk management.

As we grapple with increasing regulatory complexity, accelerating technological change, and the persistent threat of risk blindness, most organizations remain trapped in what I call “compliance theater”—performing risk management activities that satisfy auditors but fail to build genuine organizational resilience. JTBD is a useful tool for moving beyond this theater toward risk management that actually creates value.

The Risk Management Jobs Users Actually Hire

When quality professionals, executives, and regulatory teams engage with risk management processes, what job are they really trying to accomplish? The answer reveals a profound disconnect between organizational intent and actual capability.

The Core Functional Job

“When facing uncertainty that could impact product quality, patient safety, or business continuity, I want to systematically understand and address potential threats, so I can make confident decisions and prevent surprise failures.”

This job statement immediately exposes the inadequacy of most risk management systems. They focus on documentation rather than understanding, assessment rather than decision enablement, and compliance rather than prevention.

The Consumption Jobs: The Hidden Workload

Risk management involves numerous consumption jobs that organizations often ignore:

  • Evaluation and Selection: “I need to choose risk assessment methodologies that match our operational complexity and regulatory environment.”
  • Implementation and Training: “I need to build organizational risk capability without creating bureaucratic overhead.”
  • Maintenance and Evolution: “I need to keep our risk approach current as our business and threat landscape evolves.”
  • Integration and Communication: “I need to ensure risk insights actually influence business decisions rather than gathering dust in risk registers.”

These consumption jobs represent the difference between risk management systems that organizations grudgingly tolerate and those they genuinely want to “hire.”

The Eight-Step Risk Management Job Map

Applying JTBD’s universal job map to risk management reveals where current approaches systematically fail:

1. Define: Establishing Risk Context

What users need: Clear understanding of what they’re assessing, why it matters, and what decisions the risk analysis will inform.

Current reality: Risk assessments often begin with template completion rather than context establishment, leading to generic analyses that don’t support actual decision-making.

2. Locate: Gathering Risk Intelligence

What users need: Access to historical data, subject matter expertise, external intelligence, and tacit knowledge about how things actually work.

Current reality: Risk teams typically work from documentation rather than engaging with operational reality, missing the pattern recognition and apprenticeship dividend that experienced practitioners possess.

3. Prepare: Creating Assessment Conditions

What users need: Diverse teams, psychological safety for honest risk discussions, and structured approaches that challenge rather than confirm existing assumptions.

Current reality: Risk assessments often involve homogeneous teams working through predetermined templates, perpetuating the GI Joe fallacy—believing that knowledge of risk frameworks prevents risky thinking.

4. Confirm: Validating Assessment Readiness

What users need: Confidence that they have sufficient information, appropriate expertise, and clear success criteria before proceeding.

Current reality: Risk assessments proceed regardless of information quality or team readiness, driven by schedule rather than preparation.

5. Execute: Conducting Risk Analysis

What users need: Systematic identification of risks, analysis of interconnections, scenario testing, and development of robust mitigation strategies.

Current reality: Risk analysis often becomes risk scoring—reducing complex phenomena to numerical ratings that provide false precision rather than genuine insight.

6. Monitor: Tracking Risk Reality

What users need: Early warning systems that detect emerging risks and validate the effectiveness of mitigation strategies.

Current reality: Risk monitoring typically involves periodic register updates rather than active intelligence gathering, missing the dynamic nature of risk evolution.

7. Modify: Adapting to New Information

What users need: Responsive adjustment of risk strategies based on monitoring feedback and changing conditions.

Current reality: Risk assessments often become static documents, updated only during scheduled reviews rather than when new information emerges.

8. Conclude: Capturing Risk Learning

What users need: Systematic capture of risk insights, pattern recognition, and knowledge transfer that builds organizational risk intelligence.

Current reality: Risk analysis conclusions focus on compliance closure rather than learning capture, missing opportunities to build the organizational memory that prevents risk blindness.

The Emotional and Social Dimensions

Risk management involves profound emotional and social jobs that traditional approaches ignore:

  • Confidence: Risk practitioners want to feel genuinely confident that significant threats have been identified and addressed, not just that procedures have been followed.
  • Intellectual Satisfaction: Quality professionals are attracted to rigorous analysis and robust reasoning—risk management should engage their analytical capabilities, not reduce them to form completion.
  • Professional Credibility: Risk managers want to be perceived as strategic enablers rather than bureaucratic obstacles—as trusted advisors who help organizations navigate uncertainty rather than create administrative burden.
  • Organizational Trust: Executive teams want assurance that their risk management capabilities are genuinely protective, not merely compliant.

What’s Underserved: The Innovation Opportunities

JTBD analysis reveals four critical areas where current risk management approaches systematically underserve user needs:

Risk Intelligence

Current systems document known risks but fail to develop early warning capabilities, pattern recognition across multiple contexts, or predictive insights about emerging threats. Organizations need risk management that builds institutional awareness, not just institutional documentation.

Decision Enablement

Risk assessments should create confidence for strategic decisions, enable rapid assessment of time-sensitive opportunities, and provide scenario planning that prepares organizations for multiple futures. Instead, most risk management creates decision paralysis through endless analysis.

Organizational Capability

Effective risk management should build risk literacy across all levels, create cultural resilience that enables honest risk conversations, and develop adaptive capacity to respond when risks materialize. Current approaches often centralize risk thinking rather than distributing risk capability.

Stakeholder Trust

Risk management should enable transparent communication about threats and mitigation strategies, demonstrate competence in risk anticipation, and provide regulatory confidence in organizational capabilities. Too often, risk management creates opacity rather than transparency.

Canvas representation of the JTBD

Moving Beyond Compliance Theater

The JTBD framework helps us address a key challenge in risk management: many organizations place excessive emphasis on “table stakes” such as regulatory compliance and documentation requirements, while neglecting vital aspects like intelligence, enablement, capability, and trust that contribute to genuine resilience.

This represents a classic case of process myopia—becoming so focused on risk management activities that we lose sight of the fundamental job those activities should accomplish. Organizations perfect their risk registers while remaining vulnerable to surprise failures, not because they lack risk management processes, but because those processes fail to serve the jobs users actually need accomplished.

Design Principles for User-Centered Risk Management

  • Context Over Templates: Begin risk analysis with clear understanding of decisions to be informed rather than forms to be completed.
  • Intelligence Over Documentation: Prioritize systems that build organizational awareness and pattern recognition rather than risk libraries.
  • Engagement Over Compliance: Create risk processes that attract rather than burden users, recognizing that effective risk management requires active intellectual participation.
  • Learning Over Closure: Structure risk activities to build institutional memory and capability rather than simply completing assessment cycles.
  • Integration Over Isolation: Ensure risk insights flow naturally into operational decisions rather than remaining in separate risk management systems.

Hiring Risk Management for Real Jobs

The most dangerous risk facing pharmaceutical organizations may be risk management systems that create false confidence while building no real capability. JTBD analysis reveals why: these systems optimize for regulatory approval rather than user needs, creating elaborate processes that nobody genuinely wants to “hire.”

True risk management begins with understanding what jobs users actually need accomplished: building confidence for difficult decisions, developing organizational intelligence about threats, creating resilience against surprise failures, and enabling rather than impeding business progress. Organizations that design risk management around these jobs will develop competitive advantages in an increasingly uncertain world.

The choice is clear: continue performing compliance theater, or build risk management systems that organizations genuinely want to hire. In a world where zemblanity—the tendency to encounter negative, foreseeable outcomes—threatens every quality system, only the latter approach offers genuine protection.

Risk management should not be something organizations endure. It should be something they actively seek because it makes them demonstrably better at navigating uncertainty and protecting what matters most.

Beyond “Knowing Is Half the Battle”

Dr. Valerie Mulholland’s recent exploration of the GI Joe Bias gets to the heart of a fundamental challenge in pharmaceutical quality management: the persistent belief that awareness of cognitive biases is sufficient to overcome them. I find Valerie’s analysis particularly compelling because it connects directly to the practical realities we face when implementing ICH Q9(R1)’s mandate to actively manage subjectivity in risk assessment.

Valerie’s observation that “awareness of a bias does little to prevent it from influencing our decisions” shows us that the GI Joe Bias underlies a critical gap between intellectual understanding and practical application, a gap that pharmaceutical organizations must bridge if they hope to achieve the risk-based decision-making excellence that ICH Q9(R1) demands.

The Expertise Paradox: Why Quality Professionals Are Particularly Vulnerable

Valerie correctly identifies that quality risk management facilitators are often better at spotting biases in others than in themselves. This observation connects to a deeper challenge I’ve previously explored: the fallacy of expert immunity. Our expertise in pharmaceutical quality systems creates cognitive patterns that simultaneously enable rapid, accurate technical judgments while increasing our vulnerability to specific biases.

The very mechanisms that make us effective quality professionals—pattern recognition, schema-based processing, heuristic shortcuts derived from base rate experiences—are the same cognitive tools that generate bias. When I conduct investigations or facilitate risk assessments, my extensive experience with similar events creates expectations and assumptions that can blind me to novel failure modes or unexpected causal relationships. This isn’t a character flaw; it’s an inherent part of how expertise develops and operates.

Valerie’s emphasis on the need for trained facilitators in high-formality QRM activities reflects this reality. External facilitation isn’t just about process management—it’s about introducing cognitive diversity and bias detection capabilities that internal teams, no matter how experienced, cannot provide for themselves. The facilitator serves as a structured intervention against the GI Joe fallacy, embodying the systematic approaches that awareness alone cannot deliver.

From Awareness to Architecture: Building Bias-Resistant Quality Systems

The critical insight from both Valerie’s work and my writing about structured hypothesis formation is that effective bias management requires architectural solutions, not individual willpower. ICH Q9(R1)’s introduction of the “Managing and Minimizing Subjectivity” section represents recognition that regulatory compliance requires systematic approaches to cognitive bias management.

In my post on reducing subjectivity in quality risk management, I identified four strategies that directly address the limitations Valerie highlights about the GI Joe Bias:

  1. Leveraging Knowledge Management: Rather than relying on individual awareness, effective bias management requires systematic capture and application of objective information. When risk assessors can access structured historical data, supplier performance metrics, and process capability studies, they’re less dependent on potentially biased recollections or impressions.
  2. Good Risk Questions: The formulation of risk questions represents a critical intervention point. Well-crafted questions can anchor assessments in specific, measurable terms rather than vague generalizations that invite subjective interpretation. Instead of asking “What are the risks to product quality?”, effective risk questions might ask “What are the potential causes of out-of-specification dissolution results for Product X in the next 6 months based on the last three years of data?”
  3. Cross-Functional Teams: Valerie’s observation that we’re better at spotting biases in others translates directly into team composition strategies. Diverse, cross-functional teams naturally create the external perspective that individual bias recognition cannot provide. The manufacturing engineer, quality analyst, and regulatory specialist bring different cognitive frameworks that can identify blind spots in each other’s reasoning.
  4. Structured Decision-Making Processes: The tools Valerie mentions—PHA, FMEA, Ishikawa, bow-tie analysis—serve as external cognitive scaffolding that guides thinking through systematic pathways rather than relying on intuitive shortcuts that may be biased.
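The risk-question strategy in point 2 lends itself to a template. This tiny helper is a sketch, not a standard format: the parameter names are mine, and the output simply mirrors the worked dissolution example given above.

```python
def risk_question(failure_mode: str, subject: str, horizon: str, evidence: str) -> str:
    # Compose a specific, answerable risk question anchored in a defined
    # failure mode, scope, time horizon, and evidence base.
    return (f"What are the potential causes of {failure_mode} for {subject} "
            f"in the next {horizon}, based on {evidence}?")

q = risk_question(
    "out-of-specification dissolution results",
    "Product X",
    "6 months",
    "the last three years of data",
)
```

Forcing every assessment to fill in all four slots is itself a bias countermeasure: a question that cannot name its failure mode, scope, horizon, and evidence base is not yet ready to be assessed.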

The Formality Framework: When and How to Escalate Bias Management

One of the most valuable aspects of ICH Q9(R1) is its introduction of the formality concept—the idea that different situations require different levels of systematic intervention. Valerie’s article implicitly addresses this by noting that “high formality QRM activities” require trained facilitators. This suggests a graduated approach to bias management that scales intervention intensity with decision importance.

This formality framework should extend to bias management, giving organizations criteria for deciding when and how intensively to apply bias mitigation strategies:

  • Low Formality Situations: Routine decisions with well-understood parameters, limited stakeholders, and reversible outcomes. Basic bias awareness training and standardized checklists may be sufficient.
  • Medium Formality Situations: Decisions involving moderate complexity, uncertainty, or impact. These require cross-functional input, structured decision tools, and documentation of rationales.
  • High Formality Situations: Complex, high-stakes decisions with significant uncertainty, multiple conflicting objectives, or diverse stakeholders. These demand external facilitation, systematic bias checks, and formal documentation of how potential biases were addressed.
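The three tiers above imply a simple escalation rule. The scoring scale and thresholds below are illustrative assumptions, not taken from ICH Q9(R1); a real implementation would calibrate them to the organization’s own decision taxonomy.

```python
def formality_level(complexity: int, uncertainty: int, impact: int) -> str:
    # Each attribute is scored 1 (low) to 3 (high); thresholds are illustrative.
    score = complexity + uncertainty + impact
    if score >= 7:
        return "high"    # external facilitation, systematic bias checks
    if score >= 5:
        return "medium"  # cross-functional input, structured decision tools
    return "low"         # checklists, basic bias awareness
```

Even a rough rule like this makes the escalation decision explicit and reviewable, instead of leaving formality to the assessor’s (potentially biased) intuition.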

This framework acknowledges that the GI Joe fallacy is most dangerous in high-formality situations where the stakes are highest and the cognitive demands greatest. It’s precisely in these contexts that our confidence in our ability to overcome bias through awareness becomes most problematic.

The Cultural Dimension: Creating Environments That Support Bias Recognition

Valerie’s emphasis on fostering humility, and on encouraging teams to acknowledge that “no one is immune to bias, even the most experienced professionals,” connects to my observations about building expertise in quality organizations. Creating cultures that can effectively manage subjectivity requires more than tools and processes; it requires psychological safety that allows bias recognition without professional threat.

I’ve noted in past posts that organizations advancing beyond basic awareness levels demonstrate “systematic recognition of cognitive bias risks” with growing understanding that “human judgment limitations can affect risk assessment quality.” However, the transition from awareness to systematic application requires cultural changes that make bias discussion routine rather than threatening.

This cultural dimension becomes particularly important when we consider the ironic processing effects that Valerie references. When organizations create environments where acknowledging bias is seen as admitting incompetence, they inadvertently increase bias through suppression attempts. Teams that must appear confident and decisive may unconsciously avoid bias recognition because it threatens their professional identity.

The solution is creating cultures that frame bias recognition as professional competence rather than limitation. Just as we expect quality professionals to understand statistical process control or regulatory requirements, we should expect them to understand and systematically address their cognitive limitations.

Practical Implementation: Moving Beyond the GI Joe Fallacy

Building on Valerie’s recommendations for structured tools and systematic approaches, here are some specific implementation strategies that organizations can adopt to move beyond bias awareness toward bias management:

  • Bias Pre-mortems: Before conducting risk assessments, teams explicitly discuss what biases might affect their analysis and establish specific countermeasures. This makes bias consideration routine rather than reactive.
  • Devil’s Advocate Protocols: Systematic assignment of team members to challenge prevailing assumptions and identify information that contradicts emerging conclusions.
  • Perspective-Taking Requirements: Formal requirements to consider how different stakeholders (patients, regulators, operators) might view risks differently from the assessment team.
  • Bias Audit Trails: Documentation requirements that capture not just what decisions were made, but how potential biases were recognized and addressed during the decision-making process.
  • External Review Requirements: For high-formality decisions, mandatory review by individuals who weren’t involved in the initial assessment and can provide fresh perspectives.

These interventions acknowledge that bias management is not about eliminating human judgment—it’s about scaffolding human judgment with systematic processes that compensate for known cognitive limitations.
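To make the "bias audit trail" idea concrete, it can start as nothing more than a structured record attached to each decision. The Python sketch below is purely illustrative: the field names, the bias taxonomy, and the formality levels are my own assumptions for this example, not an established standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BiasAuditEntry:
    """One recognized bias and the countermeasure applied (illustrative schema)."""
    bias: str                 # e.g. "anchoring", "confirmation", "availability"
    where_it_could_act: str   # the point in the assessment it could distort
    countermeasure: str       # what the team actually did about it

@dataclass
class DecisionRecord:
    """Captures not just what was decided, but how bias was addressed."""
    decision_id: str
    decision_date: date
    formality: str            # "low", "medium", or "high"
    summary: str
    bias_audit: list[BiasAuditEntry] = field(default_factory=list)

    def requires_external_review(self) -> bool:
        # High-formality decisions get a mandatory fresh-eyes review
        return self.formality == "high"

record = DecisionRecord(
    decision_id="QRA-2025-014",
    decision_date=date(2025, 7, 1),
    formality="high",
    summary="Contamination risk assessment for filling line 3",
)
record.bias_audit.append(BiasAuditEntry(
    bias="anchoring",
    where_it_could_act="Initial severity score set by first speaker",
    countermeasure="Silent individual scoring before group discussion",
))
print(record.requires_external_review())  # True
```

The point of the sketch is that the audit trail and the external-review rule live in the same record, so the high-formality safeguards cannot be quietly skipped.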

The Broader Implications: Subjectivity as Systemic Challenge

Valerie’s analysis of the GI Joe Bias connects to broader themes in my work about the effectiveness paradox and the challenges of building rigorous quality systems in an age of pop psychology. The pharmaceutical industry’s tendency to adopt appealing frameworks without rigorous evaluation extends to bias management strategies. Organizations may implement “bias training” or “awareness programs” that create the illusion of progress while failing to address the systematic changes needed for genuine improvement.

The GI Joe Bias serves as a perfect example of this challenge. It’s tempting to believe that naming the bias—recognizing that awareness isn’t enough—somehow protects us from falling into the awareness trap. But the bias is self-referential: knowing about the GI Joe Bias doesn’t automatically prevent us from succumbing to it when implementing bias management strategies.

This is why Valerie’s emphasis on systematic interventions rather than individual awareness is so crucial. Effective bias management requires changing the decision-making environment, not just the decision-makers’ knowledge. It requires building systems, not slogans.

A Call for Systematic Excellence in Bias Management

Valerie’s exploration of the GI Joe Bias provides a crucial call for advancing pharmaceutical quality management beyond the illusion that awareness equals capability. Her work, combined with ICH Q9(R1)’s explicit recognition of subjectivity challenges, creates an opportunity for the industry to develop more sophisticated approaches to cognitive bias management.

The path forward requires acknowledging that bias management is a core competency for quality professionals, equivalent to understanding analytical method validation or process characterization. It requires systematic approaches that scaffold human judgment rather than attempting to eliminate it. Most importantly, it requires cultures that view bias recognition as professional strength rather than weakness.

As I continue to build frameworks for reducing subjectivity in quality risk management and developing structured approaches to decision-making, Valerie’s insights about the limitations of awareness provide essential grounding. The GI Joe Bias reminds us that knowing is not half the battle—it’s barely the beginning.

The real battle lies in creating pharmaceutical quality systems that systematically compensate for human cognitive limitations while leveraging human expertise and judgment. That battle is won not through individual awareness or good intentions, but through systematic excellence in bias management architecture.

What structured approaches has your organization implemented to move beyond bias awareness toward systematic bias management? Share your experiences and challenges as we work together to advance the maturity of risk management practices in our industry.


Meet Valerie Mulholland

Dr. Valerie Mulholland is transforming how our industry thinks about quality risk management. As CEO and Principal Consultant at GMP Services in Ireland, Valerie brings over 25 years of hands-on experience auditing and consulting across biopharmaceutical, pharmaceutical, medical device, and blood transfusion industries throughout the EU, US, and Mexico.

But what truly sets Valerie apart is her unique combination of practical expertise and cutting-edge research. She recently earned her PhD from TU Dublin’s Pharmaceutical Regulatory Science Team, focusing on “Effective Risk-Based Decision Making in Quality Risk Management”. Her groundbreaking research has produced 13 academic papers, with four publications specifically developed to support ICH’s work—research that’s now incorporated into the official ICH Q9(R1) training materials. This isn’t theoretical work gathering dust on academic shelves; it’s research that’s actively shaping global regulatory guidance.

Why Risk Revolution Deserves Your Attention

The Risk Revolution podcast, co-hosted by Valerie alongside Nuala Calnan (25-year pharmaceutical veteran and Arnold F. Graves Scholar) and Dr. Lori Richter (Director of Risk Management at Ultragenyx with 21+ years of industry experience), represents something unique in pharmaceutical podcasting. This isn’t your typical regulatory update show—it’s a monthly masterclass in advancing risk management maturity.

In an industry where staying current isn’t optional—it’s essential for patient safety—Risk Revolution offers the kind of continuing education that actually advances your professional capabilities. These aren’t recycled conference presentations; they’re conversations with the people shaping our industry’s future.

The Expertise Crisis: Why AI’s War on Entry-Level Jobs Threatens Quality Excellence

As pharmaceutical and biotech organizations rush to harness artificial intelligence to eliminate “inefficient” entry-level positions, we are at risk of creating a crisis that threatens the very foundation of quality expertise. The Harvard Business Review’s recent analysis of AI’s impact on entry-level jobs reads like a prophecy of organizational doom—one that quality leaders should heed before it’s too late.

Research from Stanford indicates a 13% decline in entry-level job opportunities for workers aged 22 to 25 since the widespread adoption of generative AI. The study shows that 50-60% of typical junior tasks—such as report drafting, research synthesis, data cleaning, and scheduling—can now be performed by AI. For quality organizations already facing expertise gaps, this trend signals a potentially self-destructive path rather than increased efficiency.

Equally concerning, automation is phasing out some traditional entry-level professional tasks. When I started in the field, newcomers gained experience through tasks like batch record review and applying good documentation practices to protocols. With the introduction of electronic batch records and electronic validation management, those tasks have largely disappeared, and AI is expected to accelerate the trend even further.

Everyone should go and read “The Perils of Using AI to Replace Entry-Level Jobs” by Amy C. Edmondson and Tomas Chamorro-Premuzic and then come back and read this post.

The Apprenticeship Dividend: What We Lose When We Skip the Journey

Every expert in pharmaceutical quality began somewhere. They learned to read batch records, investigated their first deviations, struggled through their first CAPA investigations, and gradually developed the pattern recognition that distinguishes competent from exceptional quality professionals. This journey, what Edmondson and Chamorro-Premuzic call the “apprenticeship dividend”, cannot be replicated by AI or compressed into senior-level training programs.

Consider commissioning, qualification, and validation (CQV) work in biotech manufacturing. Junior engineers traditionally started by documenting Installation Qualification protocols, learning to recognize when equipment specifications align with user requirements. They progressed to Operational Qualification, developing understanding of how systems behave under various conditions. Only after this foundation could they effectively design Performance Qualification strategies that demonstrate process capability.

When organizations eliminate these entry-level CQV roles in favor of AI-generated documentation and senior engineers managing multiple systems simultaneously, they create what appears to be efficiency. In reality, they’ve severed the pipeline that transforms technical contributors into systems thinkers capable of managing complex manufacturing operations.

The Expertise Pipeline: Building Quality Gardeners

As I’ve written previously about building competency frameworks for quality professionals, true expertise requires integration of technical knowledge, methodological skills, social capabilities, and self-management abilities. This integration occurs through sustained practice, mentorship, and gradual assumption of responsibility—precisely what entry-level positions provide.

The traditional path from Quality specialist to Quality Manager to Quality Director illustrates this progression:

Foundation Level: Learning to execute quality methods, understand requirements, and recognize when results fall outside acceptance criteria. Basic deviation investigation and CAPA support.

Intermediate Level: Taking ownership of requirement gathering, leading routine investigations, participating in supplier audits, and beginning to see connections between different quality systems.

Advanced Level: Designing audit activities, facilitating cross-functional investigations, mentoring junior staff, and contributing to strategic quality initiatives.

Leadership Level: Building quality cultures, designing organizational capabilities, and creating systems that enable others to excel.

Each level builds upon the previous, creating what we might call “quality gardeners”—professionals who nurture quality systems as living ecosystems rather than enforcing compliance through rigid oversight. Skip the foundation levels, and you cannot develop the sophisticated understanding required for advanced practice.

The False Economy of AI Substitution

Organizations defending entry-level job elimination often point to cost savings and “efficiency gains.” This thinking reflects a fundamental misunderstanding of how expertise develops and quality systems function. Consider risk management in biotech manufacturing—a domain where pattern recognition and contextual judgment are essential.

A senior risk management professional reviewing a contamination event can quickly identify potential failure modes, assess likelihood and severity, and design effective mitigation strategies. This capability developed through years of investigating routine deviations, participating in CAPA teams, and learning to distinguish significant risks from minor variations.

When AI handles initial risk assessments and senior professionals review only the outputs, we create a dangerous gap. The senior professional lacks the deep familiarity with routine variations that enables recognition of truly significant deviations. Meanwhile, no one is developing the foundational expertise needed to replace retiring experts.

The result is what is called expertise hollowing: organizations that appear capable on the surface but lack the deep competency required to handle complex challenges or adapt to changing conditions.

Building Expertise in a Quality Organization

Creating robust expertise development requires intentional design that recognizes both the value of human development and the capabilities of AI tools. Rather than eliminating entry-level positions, quality organizations should redesign them to maximize learning value while leveraging AI appropriately.

Structured Apprenticeship Programs

Quality organizations should implement formal apprenticeship programs that combine academic learning with progressive practical responsibility. These programs should span 2-3 years and include:

Year 1: Foundation Building

  • Basic GMP principles and quality systems overview
  • Hands-on experience with routine quality operations
  • Mentorship from experienced quality professionals
  • Participation in investigations under supervision

Year 2: Skill Development

  • Specialized training in areas like CQV, risk management, or supplier quality
  • Leading routine activities with oversight
  • Cross-functional project participation
  • Beginning to train newer apprentices

Year 3: Integration and Leadership

  • Independent project leadership
  • Mentoring responsibilities
  • Contributing to strategic quality initiatives
  • Preparation for advanced roles

As I evaluate the organization I am building, this is a critical part of the vision.

Mentorship as Core Competency

Every senior quality professional should be expected to mentor junior colleagues as a core job responsibility, not an additional burden. This requires:

  • Formal Mentorship Training: Teaching experienced professionals how to transfer tacit knowledge, provide effective feedback, and create learning opportunities.
  • Protected Time: Ensuring mentors have dedicated time for development activities, not just “additional duties as assigned.”
  • Measurement Systems: Tracking mentorship effectiveness through apprentice progression, retention rates, and long-term career development.
  • Recognition Programs: Rewarding excellent mentorship as a valued contribution to organizational capability.

Progressive Responsibility Models

Entry-level roles should be designed with clear progression pathways that gradually increase responsibility and complexity:

CQV Progression Example:

  • CQV Technician: Executing test protocols, documenting results, supporting commissioning activities
  • CQV Specialist: Writing protocols, leading qualification activities, interfacing with vendors
  • CQV Engineer: Designing qualification strategies, managing complex projects, training others
  • CQV Manager: Building organizational CQV capabilities, strategic planning, external representation

Risk Management Progression:

  • Risk Analyst: Data collection, basic risk identification, supporting formal assessments
  • Risk Specialist: Facilitating risk assessments, developing mitigation strategies, training stakeholders
  • Risk Manager: Designing risk management systems, building organizational capabilities, strategic oversight

AI as Learning Accelerator, Not Replacement

Rather than replacing entry-level workers, AI should be positioned as a learning accelerator that enables junior professionals to handle more complex work earlier in their careers:

  • Enhanced Analysis Capabilities: AI can help junior professionals identify patterns in large datasets, enabling them to focus on interpretation and decision-making rather than data compilation.
  • Simulation and Modeling: AI-powered simulations can provide safe environments for junior professionals to practice complex scenarios without real-world consequences.
  • Knowledge Management: AI can help junior professionals access relevant historical examples, best practices, and regulatory guidance more efficiently.
  • Quality Control: AI can help ensure that junior professionals’ work meets standards while they’re developing expertise, providing a safety net during the learning process.

The Cost of Expertise Shortcuts

Organizations that eliminate entry-level positions in pursuit of short-term efficiency gains will face predictable long-term consequences:

  • Expertise Gaps: As senior professionals retire or move to other organizations, there will be no one prepared to replace them.
  • Reduced Innovation: Innovation often comes from fresh perspectives questioning established practices—precisely what entry-level employees provide.
  • Cultural Degradation: Quality cultures are maintained through socialization and shared learning experiences that occur naturally in diverse, multi-level teams.
  • Risk Blindness: Without the deep familiarity that comes from hands-on experience, organizations become vulnerable to risks they cannot recognize or understand.
  • Competitive Disadvantage: Organizations with strong expertise development programs will attract and retain top talent while building superior capabilities.

Choosing Investment Over Extraction

The decision to eliminate entry-level positions represents a choice between short-term cost extraction and long-term capability investment. For quality organizations, this choice is particularly stark because our work depends fundamentally on human judgment, pattern recognition, and the ability to adapt to novel situations.

AI should augment human capability, not replace the human development process. The organizations that thrive in the next decade will be those that recognize expertise development as a core competency and invest accordingly. They will build “quality gardeners” who can nurture adaptive, resilient quality systems rather than simply enforce compliance.

The expertise crisis is not inevitable—it’s a choice. Quality leaders must choose wisely, before the cost of that choice becomes irreversible.

When 483s Reveal Zemblanity: The Catalent Investigation – A Case Study in Systemic Quality Failure

The Catalent Indiana 483 form from July 2025 reads like a textbook example of my newest word, zemblanity: the patterned, preventable misfortune in risk management that accrues not from blind chance, but from human agency and organizational design choices that quietly hardwire failure into our operations.

Twenty hair contamination deviations. Seven months to notify suppliers. Critical equipment failures dismissed as “not impacting SISPQ.” Media fill programs missing the very interventions they should validate. This isn’t random bad luck—it’s a quality system that has systematically normalized exactly the kinds of deviations that create inspection findings.

The Architecture of Inevitable Failure

Reading through the six major observations, three systemic patterns emerge that align perfectly with the hidden architecture of failure I discussed in my recent post on zemblanity.

Pattern 1: Investigation Theatre Over Causal Understanding

Observation 1 reveals what happens when investigations become compliance exercises rather than learning tools. The hair contamination trend—20 deviations spanning multiple product codes—received investigation resources proportional to internal requirements, not actual risk. As I’ve written about causal reasoning versus negative reasoning, these investigations focused on what didn’t happen rather than understanding the causal mechanisms that allowed hair to systematically enter sterile products.

The tribal knowledge around plunger seating issues exemplifies this perfectly. Operators developed informal workarounds because the formal system failed them, yet when this surfaced during an investigation, it wasn’t captured as a separate deviation worthy of systematic analysis. The investigation closed the immediate problem without addressing the systemic failure that created the conditions for operator innovation in the first place.

Pattern 2: Trend Blindness and Pattern Fragmentation

The most striking aspect of this 483 is how pattern recognition failed across multiple observations. Twenty-three work orders on critical air handling systems. Ten work orders on a single critical water system. Recurring membrane failures. Each treated as an isolated maintenance issue rather than a signal of systematic degradation.

This mirrors what I’ve discussed about normalization of deviance—where repeated occurrences of problems that don’t immediately cause catastrophe gradually shift our risk threshold. The work orders document a clear pattern of equipment degradation, yet each was risk-assessed as “not impacting SISPQ” without apparent consideration of cumulative or interactive effects.

Pattern 3: Control System Fragmentation

Perhaps most revealing is how different control systems operated in silos. Visual inspection systems that couldn’t detect the very defects found during manual inspection. Environmental monitoring that didn’t include the most critical surfaces. Media fills that omitted interventions documented as root causes of previous failures.

This isn’t about individual system inadequacy—it’s about what happens when quality systems evolve as collections of independent controls rather than integrated barriers designed to work together.

Solutions: From Zemblanity to Serendipity

Drawing from the approaches I’ve developed on this blog, here’s how Catalent could transform their quality system from one that breeds inevitable failure to one that creates conditions for quality serendipity:

Implement Causally Reasoned Investigations

The Energy Safety Canada white paper I discussed earlier this year offers a powerful framework for moving beyond counterfactual analysis. Instead of concluding that operators “failed to follow procedure” regarding stopper installation, investigate why the procedure was inadequate for the equipment configuration. Instead of noting that supplier notification was delayed seven months, understand the systemic factors that made immediate notification unlikely.

Practical Implementation:

  • Retrain investigators in causal reasoning techniques
  • Require investigation sponsors (area managers) to set clear expectations for causal analysis
  • Implement structured causal analysis tools like Cause-Consequence Analysis
  • Focus on what actually happened and why it made sense to people at the time
  • Implement rubrics to guide consistency

Build Integrated Barrier Systems

The take-the-best heuristic I recently explored offers a powerful lens for barrier analysis. Rather than implementing multiple independent controls, identify the single most causally powerful barrier that would prevent each failure type, then design supporting barriers that enhance rather than compete with the primary control.

For hair contamination specifically:

  • Implement direct stopper surface monitoring as the primary barrier
  • Design visual inspection systems specifically to detect proteinaceous particles
  • Create supplier qualification that includes contamination risk assessment
  • Establish real-time trend analysis linking supplier lots to contamination events
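The last barrier, linking supplier lots to contamination events, can begin as a simple tabulation long before any sophisticated analytics exist. The sketch below assumes hypothetical deviation records carrying a supplier lot field; the threshold and field names are illustrative, not drawn from the 483 itself.

```python
from collections import Counter

# Hypothetical deviation log: (deviation_id, supplier_lot, contaminant_class)
deviations = [
    ("DEV-101", "LOT-A", "hair"),
    ("DEV-102", "LOT-A", "hair"),
    ("DEV-103", "LOT-B", "fiber"),
    ("DEV-104", "LOT-A", "hair"),
    ("DEV-105", "LOT-C", "hair"),
]

ALERT_THRESHOLD = 3  # illustrative: 3+ events from one lot triggers supplier review

def lots_needing_review(events, contaminant="hair"):
    """Count contamination events per supplier lot and flag clusters."""
    counts = Counter(lot for _, lot, cls in events if cls == contaminant)
    return {lot: n for lot, n in counts.items() if n >= ALERT_THRESHOLD}

print(lots_needing_review(deviations))  # {'LOT-A': 3}
```

Even this trivial roll-up would have surfaced the 20-deviation hair trend as a single pattern rather than twenty separate investigations.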

Establish Dynamic Trend Integration

Traditional trending treats each system in isolation—environmental monitoring trends, deviation trends, CAPA trends, maintenance trends. The Catalent 483 shows what happens when these parallel trend systems fail to converge into integrated risk assessment.

Integrated Trending Framework:

  • Create cross-functional trend review combining all quality data streams
  • Implement predictive analytics linking maintenance patterns to quality risks
  • Establish trigger points where equipment degradation patterns automatically initiate quality investigations
  • Design Product Quality Reviews that explicitly correlate equipment performance with product quality data
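The trigger-point idea in the framework above can be prototyped in a few lines. This sketch counts maintenance work orders per system over a rolling window and flags systems for quality investigation; the 90-day window and the trigger count are invented for illustration, as is the data.

```python
from datetime import date, timedelta

# Hypothetical maintenance history: (system_id, work_order_date)
work_orders = [
    ("AHU-02", date(2025, 1, 10)),
    ("AHU-02", date(2025, 2, 3)),
    ("AHU-02", date(2025, 2, 20)),
    ("WFI-01", date(2025, 2, 25)),
    ("AHU-02", date(2025, 3, 1)),
]

WINDOW = timedelta(days=90)
TRIGGER = 3  # illustrative: 3+ work orders in 90 days opens a quality investigation

def systems_to_investigate(orders, as_of):
    """Flag systems whose recent work-order count exceeds the trigger."""
    cutoff = as_of - WINDOW
    counts = {}
    for system, wo_date in orders:
        if wo_date >= cutoff:
            counts[system] = counts.get(system, 0) + 1
    return sorted(s for s, n in counts.items() if n >= TRIGGER)

print(systems_to_investigate(work_orders, as_of=date(2025, 3, 15)))  # ['AHU-02']
```

The design choice matters more than the code: the maintenance system initiates the quality conversation automatically, so convergence no longer depends on someone happening to read both trend reports.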

Transform CAPA from Compliance to Learning

The recurring failures documented in this 483—repeated hair findings after CAPA implementation, continued equipment failures after “repair”—reflect what I’ve called the effectiveness paradox. Traditional CAPA focuses on thoroughness over causal accuracy.

CAPA Transformation Strategy:

  • Implement a proper CAPA hierarchy, prioritizing elimination and replacement over detection and mitigation
  • Establish effectiveness criteria before implementation, not after
  • Create learning-oriented CAPA reviews that ask “What did this teach us about our system?”
  • Link CAPA effectiveness directly to recurrence prevention rather than procedural compliance
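The hierarchy in the first bullet can be made explicit, so proposed actions are ranked by control strength rather than by how many of them there are. The ordering below loosely follows the classic hierarchy of controls; its mapping onto CAPA types is my own illustrative choice, not a regulatory definition.

```python
# Illustrative control-strength ranking (higher = stronger). The category
# names and scores are assumptions for this sketch.
CAPA_HIERARCHY = {
    "elimination": 5,    # remove the failure mode entirely
    "replacement": 4,    # substitute equipment, material, or process
    "facilitation": 3,   # make the right action the easy action
    "detection": 2,      # catch the failure when it occurs
    "mitigation": 1,     # reduce harm after the failure
}

def strongest_action(proposed_actions):
    """Return the proposed action with the highest control strength."""
    return max(proposed_actions, key=lambda a: CAPA_HIERARCHY[a["type"]])

actions = [
    {"type": "detection", "desc": "Add second visual inspection"},
    {"type": "replacement", "desc": "Switch to pre-washed stoppers"},
    {"type": "mitigation", "desc": "Quarantine affected lots pending inspection"},
]
print(strongest_action(actions)["type"])  # replacement
```

A CAPA review that asks "what is the strongest action on the table, and why aren't we doing it?" is a very different conversation from one that counts completed action items.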

Build Anticipatory Quality Architecture

The most sophisticated element would be creating what I call “quality serendipity”—systems that create conditions for positive surprises rather than inevitable failures. This requires moving from reactive compliance to anticipatory risk architecture.

Anticipatory Elements:

  • Implement supplier performance modeling that predicts contamination risk before it manifests
  • Create equipment degradation models that trigger quality assessment before failure
  • Establish operator feedback systems that capture emerging risks in real-time
  • Design quality reviews that explicitly seek weak signals of system stress
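Seeking weak signals of system stress, per the last element above, is one place where simple statistics beat intuition. An exponentially weighted moving average (EWMA) responds to small sustained shifts that no single data point would flag. The weekly counts, smoothing weight, and alert limit below are invented for illustration.

```python
# EWMA of a weekly deviation count: a small sustained drift moves the
# smoothed value even when no individual week looks alarming.
LAMBDA = 0.3          # smoothing weight (illustrative choice)
BASELINE = 2.0        # historical mean weekly deviation count (invented)
ALERT_LIMIT = 3.0     # illustrative alert threshold on the EWMA

def ewma_alerts(weekly_counts, start=BASELINE):
    """Return (week_index, ewma) pairs where the EWMA crosses the limit."""
    z = start
    alerts = []
    for i, x in enumerate(weekly_counts):
        z = LAMBDA * x + (1 - LAMBDA) * z
        if z > ALERT_LIMIT:
            alerts.append((i, round(z, 2)))
    return alerts

# Weeks 0-3 look unremarkable individually; the drift still trips the EWMA.
counts = [2, 3, 3, 4, 4, 5]
print(ewma_alerts(counts))
```

This is the quantitative version of "interrogate rather than rationalize": a trend rule like this would have been hard-pressed to classify twenty-three work orders on one air handler as noise.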

The Cultural Foundation

None of these technical solutions will work without addressing the cultural foundation that allowed this level of systematic failure to persist. The 483’s most telling detail isn’t any single observation—it’s the cumulative picture of an organization where quality indicators were consistently rationalized rather than interrogated.

As I’ve written about quality culture, without psychological safety and learning orientation, people won’t commit to building and supporting robust quality systems. The tribal knowledge around plunger seating, the normalization of recurring equipment failures, the seven-month delay in supplier notification—these suggest a culture where adaptation to system inadequacy became preferable to system improvement.

The path forward requires leadership that creates conditions for quality serendipity: reward pattern recognition over problem solving, celebrate early identification of weak signals, and create systems that make the right choice the easy choice.

Beyond Compliance: Building Anti-Fragile Quality

The Catalent 483 offers more than a cautionary tale—it provides a roadmap for quality transformation. Every observation represents an invitation to build quality systems that become stronger under stress rather than more brittle.

Organizations that master this transformation—moving from zemblanity-generating systems to serendipity-creating ones—will find that quality becomes not just a regulatory requirement but a competitive advantage. They’ll detect risks earlier, respond more effectively, and create the kind of operational resilience that turns disruption into opportunity.

The choice is clear: continue managing quality as a collection of independent compliance activities, or build integrated systems designed to create the conditions for sustained quality success. The Catalent case shows us what happens when we choose poorly. The frameworks exist to choose better.


What patterns of “inevitable failure” do you see in your own quality systems? How might shifting from negative reasoning to causal understanding transform your approach to investigations? Share your thoughts—this conversation about quality transformation is one we need to have across the industry.