The Kafkaesque Quality System: Escaping the Bureaucratic Trap

On the morning of his thirtieth birthday, Josef K. is arrested. He doesn’t know what crime he’s accused of committing. The arresting officers can’t tell him. His neighbors assure him the authorities must have good reasons, though they don’t know what those reasons are. When he seeks answers, he’s directed to a court that meets in tenement attics, staffed by officials whose actions are never explained but always assumed to be justified. In The Castle, Kafka’s companion portrait of bureaucracy, officials insist the system is “flawless,” that the possibility of error is simply not taken into account, yet K. witnesses a servant destroying paperwork because he can’t determine who the recipient should be.

Franz Kafka wrote The Trial in 1914, but he could have been describing a pharmaceutical deviation investigation in 2026.

Consider: A batch is placed on hold. The deviation report cites “failure to follow approved procedure.” Investigators interview operators, review batch records, and examine environmental monitoring data. The investigation concludes that training was inadequate, procedures were unclear, and the change control process should have flagged this risk. Corrective actions are assigned: retraining all operators, revising the SOP, and implementing a new review checkpoint in change control. The CAPA effectiveness check, conducted six months later, confirms that all actions have been completed. The quality system has functioned flawlessly.

Yet if you ask the operator what actually happened—what really happened, in the moment when the deviation occurred—you get a different story. The procedure said to verify equipment settings before starting, but the equipment interface doesn’t display the parameters the SOP references. It hasn’t for the past three software updates. So operators developed a workaround: check the parameters through a different screen, document in the batch record that verification occurred, and continue. Everyone knows this. Supervisors know it. The quality oversight person stationed on the manufacturing floor knows it. It’s been working fine for months.

Until this batch, when the workaround didn’t work, and suddenly everyone had to pretend they didn’t know about the workaround that everyone knew about.

This is what I call the Kafkaesque quality system. Not because it’s absurd—though it often is. But because it exhibits the same structural features Kafka identified in bureaucratic systems: officials whose actions are never explained, contradictory rationalizations praised as features rather than bugs, the claim of flawlessness maintained even as paperwork literally gets destroyed because nobody knows what to do with it, and above all, the systemic production of gaps between how things are supposed to work and how they actually work—gaps that everyone must pretend don’t exist.

Pharmaceutical quality systems are not designed to be Kafkaesque. They’re designed to ensure that medicines are safe, effective, and consistently manufactured to specification. They emerge from legitimate regulatory requirements grounded in decades of experience about what can go wrong when quality oversight is inadequate. ICH Q10, the FDA’s Quality Systems Guidance, EU GMP—these frameworks represent hard-won knowledge about the critical control points that prevent contamination, mix-ups, degradation, and the thousand other ways pharmaceutical manufacturing can fail.

But somewhere between the legitimate need for control and the actual functioning of quality systems, something goes wrong. The system designed to ensure quality becomes a system designed to ensure compliance. The compliance designed to demonstrate quality becomes compliance designed to satisfy inspections. The investigations designed to understand problems become investigations designed to document that all required investigation steps were completed. And gradually, imperceptibly, we build the Castle—an elaborate bureaucracy that everyone assumes is functioning properly, that generates enormous amounts of documentation proving it functions properly, and that may or may not actually be ensuring the quality it was built to ensure.

Legibility and Control

Regulatory authorities, corporate management, and any entity trying to govern complex systems need what James C. Scott calls legibility. They need to be able to “read” what’s happening in the systems they regulate. For pharmaceutical regulators, this means being able to understand, from batch records and validation documentation and investigation reports, whether a manufacturer is consistently producing medicines of acceptable quality.

Legibility requires simplification. The actual complexity of pharmaceutical manufacturing—with its tacit knowledge, operator expertise, equipment quirks, material variability, and environmental influences—cannot be fully captured in documents. So we create simplified representations. Batch records that reduce manufacturing to a series of checkboxes. Validation protocols that demonstrate method performance under controlled conditions. Investigation reports that fit problems into categories like “inadequate training” or “equipment malfunction”.

This simplification serves a legitimate purpose. Without it, regulatory oversight would be impossible. How could an inspector evaluate whether a manufacturer maintains adequate control if they had to understand every nuance of every process, every piece of tacit knowledge held by every operator, every local adaptation that makes the documented procedures actually work?

But we often mistake the simplified, legible representation for the reality it represents. We fall prey to the fallacy that if we can fully document a system, we can fully control it. If we specify every step in SOPs, operators will perform those steps. If we validate analytical methods, those methods will continue performing as validated. If we investigate deviations and implement CAPAs, similar deviations won’t recur.

The assumption is seductive because it’s partly true. Documentation does facilitate control. Validation does improve analytical reliability. CAPA does prevent recurrence—sometimes. But the simplified, legible version of pharmaceutical manufacturing is always a reduction of the actual complexity. And our quality systems can forget that the map is not the territory.

What happens when the gap between the legible representation and the actual reality grows too large? Pharmaceutical quality systems fail quietly, in the gap between work-as-imagined and work-as-done. In procedures that nobody can actually follow. In validated methods that don’t work under routine conditions. In investigations that document everything except what actually happened. In quality metrics that measure compliance with quality processes rather than actual product quality.

Metis: The Knowledge Bureaucracies Cannot See

Scott contrasts this formal, systematic, documented knowledge with metis: practical wisdom gained through experience, local knowledge that adapts to specific contexts, the know-how that cannot be fully codified.

Greek mythology personified metis as cunning intelligence, adaptive resourcefulness, the ability to navigate complex situations where formal rules don’t apply. Scott uses the term to describe the local, practical knowledge that makes complex systems actually work despite their formal structures.

In pharmaceutical manufacturing, metis is the operator who knows that the tablet press runs better when you start it up slowly, even though the SOP doesn’t mention this. It’s the analytical chemist who can tell from the peak shape that something’s wrong with the HPLC column before it fails system suitability. It’s the quality reviewer who recognizes patterns in deviations that indicate an underlying equipment issue nobody has formally identified yet.

This knowledge is typically tacit—difficult to articulate, learned through experience rather than training, tied to specific contexts. Studies suggest tacit knowledge comprises 90% of organizational knowledge, yet it’s rarely documented because it can’t easily be reduced to procedural steps. When operators leave or transfer, their metis goes with them.

High-modernist quality systems (to borrow Scott’s term for schemes that place total confidence in formal, scientific ordering) struggle with metis because they can’t see it. It doesn’t appear in batch records. It can’t be validated. It doesn’t fit into investigation templates. From the regulator’s-eye view, or quality management’s, it’s invisible.

So we try to eliminate it. We write more detailed SOPs that specify exactly how to operate equipment, leaving no room for operator discretion. We implement lockout systems that prevent deviation from prescribed parameters. We design quality oversight that verifies operators follow procedures exactly as written.

This creates a dilemma that Sidney Dekker identifies as central to bureaucratic safety systems: the gap between work-as-imagined and work-as-done.

Work-as-imagined is how quality management, procedure writers, and regulators believe manufacturing happens. It’s documented in SOPs, taught in training, and represented in batch records. Work-as-done is what actually happens on the manufacturing floor when real operators encounter real equipment under real conditions.

In environments that demand constant adaptation—and pharmaceutical manufacturing surely does, with its material variability, equipment drift, environmental factors, and human elements—work cannot be fully prescribed in advance. Operators must adapt, improvise, apply judgment. They must use metis.

But adaptation and improvisation look like “deviation from approved procedures” in a high-modernist quality system. So operators learn to document work-as-imagined in batch records while performing work-as-done on the floor. The batch record says they “verified equipment settings per SOP section 7.3.2” when what they actually did was apply the metis they’ve learned through experience to determine whether the equipment is really ready to run.

This isn’t dishonesty—or rather, it’s the kind of necessary dishonesty that bureaucratic systems force on the people operating within them. Kafka understood this. The villagers in The Castle provide contradictory explanations for the officials’ actions, and everyone praises this ambiguity as a feature of the system rather than recognizing it as a dysfunction. Everyone knows the official story and the actual story don’t match, but admitting that would undermine the entire bureaucratic structure.

Metis, Expertise, and the Architecture of Knowledge

Understanding why pharmaceutical quality systems struggle to preserve and utilize operator knowledge requires examining how knowledge actually exists and develops in organizations. Three frameworks illuminate different facets of this challenge: James C. Scott’s concept of metis, W. Edwards Deming’s System of Profound Knowledge, and the research on knowledge management and expertise development pioneered, respectively, by Ikujiro Nonaka and Anders Ericsson.

These frameworks aren’t merely academic concepts. They reveal why quality systems that look comprehensive on paper fail in practice, why experienced operators leave and take critical capability with them, and why organizations keep making the same mistakes despite extensive documentation of lessons learned.

The Architecture of Knowledge: Tacit and Explicit

Management scholar Ikujiro Nonaka distinguishes between two fundamental types of knowledge that coexist in all organizations. Explicit knowledge is codifiable—it can be expressed in words, numbers, formulas, documented procedures. It’s the content of SOPs, validation protocols, batch records, training materials. It’s what we can write down and transfer through formal documentation.

Tacit knowledge is subjective, experience-based, and context-specific. It includes cognitive skills like beliefs, mental models, and intuition, as well as technical skills like craft and know-how. Tacit knowledge is notoriously difficult to articulate. When an experienced analytical chemist looks at a chromatogram and says “something’s not right with that peak shape,” they’re drawing on tacit knowledge built through years of observing normal and abnormal results.

Nonaka’s insight is that these two types of knowledge exist in continuous interaction through what he calls the SECI model—four modes of knowledge conversion that form a spiral of organizational learning:

  • Socialization (tacit to tacit): Tacit knowledge transfers between individuals through shared experience and direct interaction. An operator training a new hire doesn’t just explain the procedure; they demonstrate the subtle adjustments, the feel of properly functioning equipment, the signs that something’s going wrong. This is experiential learning, the acquisition of skills and mental models through observation and practice.
  • Externalization (tacit to explicit): The difficult process of making tacit knowledge explicit through articulation. This happens through dialogue, metaphor, and reflection-on-action—stepping back from practice to describe what you’re doing and why. When investigation teams interview operators about what actually happened during a deviation, they’re attempting externalization. But externalization requires psychological safety; operators won’t articulate their tacit knowledge if doing so will reveal deviations from approved procedures.
  • Combination (explicit to explicit): Documented knowledge combined into new forms. This is what happens when validation teams synthesize development data, platform knowledge, and method-specific studies into validation strategies. It’s the easiest mode because it works entirely with already-codified knowledge.
  • Internalization (explicit to tacit): The process of embodying explicit knowledge through practice until it becomes “sticky” individual knowledge—operational capability. When operators internalize procedures through repeated execution, they’re converting the explicit knowledge in SOPs into tacit capability. Over time, with reflection and deliberate practice, they develop expertise that goes beyond what the SOP specifies.

Metis is the tacit knowledge that resists externalization. It’s context-specific, adaptive, often non-verbal. It’s what operators know about equipment quirks, material variability, and process subtleties—knowledge gained through direct engagement with complex, variable systems.

High-modernist quality systems, in their drive for legibility and control, attempt to externalize all tacit knowledge into explicit procedures. But some knowledge fundamentally resists codification. The operator’s ability to hear when equipment isn’t running properly, the analyst’s judgment about whether a result is credible despite passing specification, the quality reviewer’s pattern recognition that connects apparently unrelated deviations—this metis cannot be fully proceduralized.

Worse, the attempt to externalize all knowledge into procedures creates what Nonaka would recognize as a broken learning spiral. Organizations that demand perfect procedural compliance prevent socialization—operators can’t openly share their tacit knowledge because it would reveal that work-as-done doesn’t match work-as-imagined. Externalization becomes impossible because articulating tacit knowledge is seen as confession of deviation. The knowledge spiral collapses, and organizations lose their capacity for learning.

Deming’s Theory of Knowledge: Prediction and Learning

W. Edwards Deming’s System of Profound Knowledge provides a complementary lens on why quality systems struggle with knowledge. One of its four interrelated elements—Theory of Knowledge—addresses how we actually learn and improve systems.

Deming’s central insight: there is no knowledge without theory. Knowledge doesn’t come from merely accumulating experience or documenting procedures. It comes from making predictions based on theory and testing whether those predictions hold. This is what makes knowledge falsifiable—it can be proven wrong through empirical observation.

Consider analytical method validation through this lens. Traditional validation documents that a method performed acceptably under specified conditions; this is a description of past events, not theory. Lifecycle validation, properly understood, makes a theoretical prediction: “This method will continue generating results of acceptable quality when operated within the defined control strategy”. That prediction can be tested through Stage 3 ongoing verification. When the prediction fails—when the method doesn’t perform as validation claimed—we gain knowledge about the gap between our theory (the validation claim) and reality.
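
What might testing that prediction look like in practice? Here is a minimal sketch, in Python, of a Stage 3 check against a hypothetical accuracy claim; the function name, the ±5% band, the 5% allowance, and the data are all invented for illustration, not drawn from any guidance:

```python
# Minimal sketch: a validation claim treated as a testable prediction.
# All numbers and names here are hypothetical illustrations.

def prediction_holds(recoveries_pct, low=95.0, high=105.0, max_fraction_outside=0.05):
    """Test the validation-stage prediction that routine control-sample
    recoveries stay within the claimed accuracy range. Returns False
    (prediction falsified) when more than max_fraction_outside of the
    ongoing results fall outside the predicted range."""
    outside = [r for r in recoveries_pct if not low <= r <= high]
    return len(outside) / len(recoveries_pct) <= max_fraction_outside

# Stage 3 data: monthly control-sample recoveries, % of nominal (hypothetical)
routine_recoveries = [99.8, 101.2, 98.4, 100.6, 97.1, 103.9, 106.2, 94.3]

if prediction_holds(routine_recoveries):
    print("Prediction holds so far; keep monitoring.")
else:
    print("Prediction falsified: the method is not performing as validation claimed.")
```

The particular threshold matters less than the structure: the validation claim becomes a statement that routine data can contradict.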

This connects directly to metis. Operators with metis have internalized theories about how systems behave. When an experienced operator says “We need to start the tablet press slowly today because it’s cold in here and the tooling needs to warm up gradually,” they’re articulating a theory based on their tacit understanding of equipment behavior. The theory makes a prediction: starting slowly will prevent the coating defects we see when we rush on cold days.

But hierarchical, procedure-driven quality systems don’t recognize operator theories as legitimate knowledge. They demand compliance with documented procedures regardless of operator predictions about outcomes. So the operator follows the SOP, the coating defects occur, a deviation is written, and the investigation concludes that “procedure was followed correctly” without capturing the operator’s theoretical knowledge that could have prevented the problem.

Another of Deming’s elements—Knowledge of Variation—is equally crucial. He distinguished between common cause variation (inherent to the system, management’s responsibility to address through system redesign) and special cause variation (abnormalities requiring investigation). He famously estimated that 94% of problems are common cause—they reflect system design issues, not individual failures.
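
The distinction is operational, not philosophical; control charts exist precisely to separate the two. Below is a minimal sketch using the standard individuals-chart limits (mean ± 2.66 times the average moving range); the hardness data are hypothetical, not a recommended monitoring plan:

```python
import statistics

def control_limits(values):
    """Shewhart individuals-chart limits: mean +/- 2.66 x average moving range.
    Points inside the limits are common cause variation (the voice of the
    system itself); points outside signal a special cause worth investigating."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    center = statistics.mean(values)
    width = 2.66 * statistics.mean(moving_ranges)
    return center - width, center + width

# Hypothetical daily tablet hardness results (kp)
hardness = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 14.9, 12.0, 11.7]
lcl, ucl = control_limits(hardness)

for day, value in enumerate(hardness, start=1):
    if not lcl <= value <= ucl:
        print(f"Day {day}: {value} kp is a special-cause signal -- investigate.")
# Everything inside the limits is common cause: demanding better compliance
# from operators will not reduce it; only redesigning the system will.
```

In chart terms, misattribution means treating points inside the limits as if they were special-cause signals.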

Bureaucratic quality systems systematically misattribute variation. When operators struggle to follow procedures, the system treats this as special cause (operator error, inadequate training) rather than common cause (the procedures don’t match operational reality, the system design is flawed). This misattribution prevents system improvement and destroys operator metis by treating adaptive responses as deviations.

From Deming’s perspective, metis is how operators manage system variation when procedures don’t account for the full range of conditions they encounter. Eliminating metis through rigid procedural compliance doesn’t eliminate variation—it eliminates the adaptive capacity that was compensating for system design flaws.

Ericsson and the Development of Expertise

Psychologist Anders Ericsson’s research on expertise development reveals another dimension of how knowledge works in organizations. His studies across fields from chess to music to medicine dismantled the myth that expert performers have unusual innate talents. Instead, expertise is the result of what he calls deliberate practice—individualized training activities specifically designed to improve particular aspects of performance through repetition, feedback, and successive refinement.

Deliberate practice has specific characteristics:

  • It involves tasks initially outside the current realm of reliable performance but masterable within hours through focused concentration
  • It requires immediate feedback on performance
  • It includes reflection between practice sessions to guide subsequent improvement
  • It continues for extended periods—Ericsson found it takes a minimum of ten years of full-time deliberate practice to reach high levels of expertise even in well-structured domains

Critically, experience alone does not create expertise. Studies show only a weak correlation between years of professional experience and actual performance quality. Merely repeating activities leads to automaticity and arrested development—practice makes permanent, but only deliberate practice improves performance.

This has profound implications for pharmaceutical quality systems. When we document procedures and require operators to follow them exactly, we’re eliminating the deliberate practice conditions that develop expertise. Operators execute the same steps repeatedly without feedback on the quality of performance (only on compliance with procedure), without reflection on how to improve, and without tackling progressively more challenging aspects of the work.

Worse, the compliance focus actively prevents expertise development. Ericsson emphasizes that experts continually try to improve beyond their current level of performance. But quality systems that demand perfect procedural compliance punish the very experimentation and adaptation that characterizes deliberate practice. Operators who develop metis through deliberate engagement with operational challenges must conceal that knowledge because it reveals they adapted procedures rather than following them exactly.

The expertise literature also reveals how knowledge transfers—or fails to transfer—in organizations. Research identifies multiple knowledge transfer mechanisms: social networks, organizational routines, personnel mobility, organizational design, and active search. But effective transfer depends critically on the type of knowledge involved.

Tacit knowledge transfers primarily through mentoring, coaching, and peer-to-peer interaction—what Nonaka calls socialization. When experienced operators leave, this tacit knowledge vanishes if it hasn’t been transferred through direct working relationships. No amount of documentation captures it because tacit knowledge is experience-based and context-specific.

Explicit knowledge transfers through documentation, formal training, and digital platforms. This is what quality systems are designed for: capturing knowledge in SOPs, specifications, validation protocols. But organizations often mistake documentation for knowledge transfer. Creating comprehensive procedures doesn’t ensure that people learn from them. Without internalization—the conversion of explicit knowledge back into tacit operational capability through practice and reflection—documented knowledge remains inert.

Knowledge Management Failures in Pharmaceutical Quality

These three frameworks—Nonaka’s knowledge conversion spiral, Deming’s theory of knowledge and variation, Ericsson’s deliberate practice—reveal systematic failures in how pharmaceutical quality systems handle knowledge:

  • Broken socialization: Quality systems that punish deviation prevent operators from openly sharing tacit knowledge about work-as-done. New operators learn the documented procedures but not the metis that makes those procedures actually work.
  • Failed externalization: Investigation processes that focus on compliance rather than understanding don’t capture operator theories about causation. The tacit knowledge that could prevent recurrence remains tacit—and often punishable if revealed.
  • Meaningless combination: Organizations generate elaborate CAPA documentation by combining explicit knowledge about what should happen without incorporating tacit knowledge about what actually happens. The resulting “knowledge” doesn’t reflect operational reality.
  • Superficial internalization: Training programs that emphasize procedure memorization rather than capability development don’t convert explicit knowledge into genuine operational expertise. Operators learn to document compliance without developing the metis needed for quality work.
  • Misattribution of variation: Systems treat operator adaptation as special cause (individual failure) rather than recognizing it as response to common cause system design issues. This prevents learning because the organization never addresses the system flaws that necessitate adaptation.
  • Prevention of deliberate practice: Rigid procedural compliance eliminates the conditions for expertise development—challenging tasks, immediate feedback on quality (not just compliance), reflection, and progressive improvement. Organizations lose expertise development capacity.
  • Knowledge transfer theater: Extensive documentation of lessons learned and best practices without the mentoring relationships and communities of practice that enable actual tacit knowledge transfer. Knowledge “management” that manages documents rather than enabling organizational learning.

The consequence is what Nonaka would call organizational knowledge destruction rather than creation. Each layer of bureaucracy, each procedure demanding rigid compliance, each investigation that treats adaptation as deviation, breaks another link in the knowledge spiral. The organization becomes progressively more ignorant about its own operations even as it generates more and more documentation claiming to capture knowledge.

Building Systems That Preserve and Develop Metis

If metis is essential for quality, if expertise develops through deliberate practice, if knowledge exists in continuous interaction between tacit and explicit forms, how do we design quality systems that work with these realities rather than against them?

Enable genuine socialization: Create legitimate spaces for experienced operators to work directly with less experienced ones in conditions where tacit knowledge can be openly shared. This means job shadowing, mentoring relationships, and communities of practice where work-as-done can be discussed without fear of punishment for revealing that it differs from work-as-imagined.

Design for externalization: Investigation processes should aim to capture operator theories about causation, not just document procedural compliance. Use dialogue, ask operators for metaphors and analogies that help articulate tacit understanding, create reflection opportunities where people can step back from action to describe what they know. But this requires just culture—operators won’t externalize knowledge if doing so triggers blame.

Support deliberate practice: Instead of demanding perfect procedural compliance, create conditions for expertise development. This means progressively challenging work assignments, immediate feedback on quality of outcomes (not just compliance), reflection time between executions, and explicit permission to adapt within understood boundaries. Document decision rules rather than rigid procedures, so operators develop judgment rather than just following steps.

Apply Deming’s knowledge theory: Make quality system elements falsifiable by articulating explicit predictions that can be tested. Validated methods should predict ongoing performance, CAPAs should predict reduction in deviation frequency, training should predict capability improvement. Then test those predictions systematically and learn when they fail.

Correctly attribute variation: When operators struggle with procedures or adapt them, ask whether this is special cause (unusual circumstances) or common cause (system design doesn’t match operational reality). If it’s common cause—which Deming suggests is 94% of the time—management must redesign the system rather than demanding better compliance.

Build knowledge transfer mechanisms: Recognize that different knowledge types require different transfer approaches. Tacit knowledge needs mentoring and communities of practice, not just documentation. Explicit knowledge needs accessible documentation and effective training, not just comprehensive procedure libraries. Knowledge transfer is a property of organizational systems and culture, not just techniques.

Measure knowledge outcomes, not documentation volume: Success isn’t demonstrated by comprehensive procedures or extensive training records. It’s demonstrated by whether people can actually perform quality work, whether they have the tacit knowledge and expertise that come from deliberate practice and genuine organizational learning. Measure investigation quality by whether investigations capture knowledge that prevents recurrence, measure CAPA effectiveness by whether problems actually decrease, measure training effectiveness by whether capability improves.

The fundamental insight across all three frameworks is that knowledge is not documentation. Knowledge exists in the dynamic interaction between explicit and tacit forms, between theory and practice, between individual expertise and organizational capability. Quality systems designed around documentation—assuming that if we write comprehensive procedures and require people to follow them, quality will result—are systems designed in ignorance of how knowledge actually works.

Metis is not an obstacle to be eliminated through standardization. It is an essential organizational capability that develops through deliberate practice and transfers through socialization. Deming’s profound knowledge isn’t just theory—it’s the lens that reveals why bureaucratic systems systematically destroy the very knowledge they need to function effectively.

Building quality systems that preserve and develop metis means building systems for organizational learning, not organizational documentation. It means recognizing operator expertise as legitimate knowledge rather than deviation from procedures. It means creating conditions for deliberate practice rather than demanding perfect compliance. It means enabling knowledge conversion spirals rather than breaking them through blame and rigid control.

This is the escape from the Kafkaesque quality system. Not through more procedures, more documentation, more oversight—but through quality systems designed around how humans actually learn, how expertise actually develops, how knowledge actually exists in organizations.

The Pathologies of Bureaucracy

Sociologist Robert K. Merton studied how bureaucracies develop characteristic dysfunctions even when staffed by competent, well-intentioned people. He identified systematic pathologies that emerge from the structure of bureaucratic organizations rather than from individual failures.

The primary pathology is what Merton called “displacement of goals”. Bureaucracies establish rules and procedures as means to achieve organizational objectives. But over time, following the rules becomes an end in itself. Officials focus on “doing things by the book” rather than on whether the book is achieving its intended purpose.

Does this sound familiar to pharmaceutical quality professionals?

How many deviation investigations focus primarily on demonstrating that investigation procedures were followed—impact assessment completed, timeline met, all required signatures obtained—with less attention to whether the investigation actually understood what happened and why? How many CAPA effectiveness checks verify that corrective actions were implemented but don’t rigorously test whether they solved the underlying problem? How many validation studies are designed to satisfy validation protocol requirements rather than to genuinely establish method fitness for purpose?

Merton identified another pathology: bureaucratic officials are discouraged from showing initiative because they lack the authority to deviate from procedures. When problems arise that don’t fit prescribed categories, officials “pass the buck” to the next level of hierarchy. Meanwhile, the rigid adherence to rules and the impersonal attitude this generates are interpreted by those subject to the bureaucracy as arrogance or indifference.

Quality professionals will recognize this pattern. The quality oversight person on the manufacturing floor sees a problem but can’t address it without a deviation report. The deviation report triggers an investigation that can’t conclude without identifying root cause according to approved categories. The investigation assigns CAPA that requires multiple levels of approval before implementation. By the time the CAPA is implemented, the original problem may have been forgotten, or operators may have already developed their own workaround that will remain invisible to the formal system.

Dekker, borrowing Diane Vaughan’s term, argues that bureaucratization creates “structural secrecy”—not active concealment, but systematic conditions under which information cannot flow. Bureaucratic accountability determines who owns data “up to where and from where on”. Once the quality staff member presents a deviation report to management, their bureaucratic accountability is complete. What happens to that information afterward is someone else’s problem.

Meanwhile, operators know things that quality staff don’t know, quality staff know things that management doesn’t know, and management knows things that regulators don’t know. Not because anyone is deliberately hiding information, but because the bureaucratic structure creates boundaries across which information doesn’t naturally flow.

This is structural secrecy, and it’s lethal to quality systems because quality depends on information about what’s actually happening. When the formal system cannot see work-as-done, cannot access operator metis, cannot flow information across bureaucratic boundaries, it’s managing an imaginary factory rather than the real one.

Compliance Theater: The Performance of Quality

If bureaucratic quality systems manage imaginary factories, they require imaginary proof that quality is maintained. Enter compliance theater—the systematic creation of documentation and monitoring that prioritizes visible adherence to requirements over substantive achievement of quality objectives.

Compliance theater has several characteristic features:

  • Surface-level implementation: Organizations develop extensive documentation, training programs, and monitoring systems that create the appearance of comprehensive quality control while lacking the depth necessary to actually ensure quality.
  • Metrics gaming: Success is measured through easily manipulable indicators—training completion rates, deviation closure timeliness, CAPA on-time implementation—rather than outcomes reflecting actual quality performance.
  • Resource misallocation: Significant resources devoted to compliance performance rather than substantive quality improvement, creating opportunity costs that impede genuine progress.
  • Temporal patterns: Activity spikes before inspections or audits rather than continuous vigilance.

Consider CAPA effectiveness checks. In principle, these verify that corrective actions actually solved the underlying problem. But how many CAPA effectiveness checks truly test this? The typical approach: verify that the planned actions were implemented (revised SOP distributed, training completed, new equipment qualified), wait for some period during which no similar deviation occurs, declare the CAPA effective.

This is ritualistic compliance, not genuine verification. If the deviation was caused by operator metis being inadequate for the actual demands of the task, and the corrective action was “revise SOP to clarify requirements and retrain operators,” the effectiveness check should test whether operators now have the knowledge and capability to handle the task. But we don’t typically test capability. We verify that training attendance was documented and that no deviations of the exact same type have been reported in the past six months.

No deviations reported is not the same as no deviations occurring. It might mean operators developed better workarounds that don’t trigger quality system alerts. It might mean supervisors are managing issues informally rather than generating deviation reports. It might mean we got lucky.

But the paperwork says “CAPA verified effective,” and the compliance theater continues.
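
What would a non-theatrical effectiveness check look like? One honest option is to state the expected rate reduction up front and test the observed counts against it. The sketch below uses a standard conditional binomial test for comparing two Poisson rates; the counts, periods, and the 50% claim are all hypothetical:

```python
from math import comb

def capa_prediction_pvalue(pre_count, post_count, months_pre=6, months_post=6,
                           claimed_reduction=0.5):
    """Conditional binomial test of the CAPA prediction 'the deviation rate
    drops by at least claimed_reduction'.

    Under the boundary hypothesis (exactly the claimed reduction), each of
    the pre+post deviations falls in the post period with probability p.
    A small tail probability for the observed post-CAPA count means the
    prediction is falsified: the rate did not drop as claimed."""
    ratio = (1 - claimed_reduction) * months_post / months_pre
    p = ratio / (1 + ratio)
    n = pre_count + post_count
    # P(post-CAPA deviations >= observed | boundary hypothesis)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(post_count, n + 1))

# Hypothetical counts: 14 deviations in the six months before CAPA, 11 after.
p_value = capa_prediction_pvalue(pre_count=14, post_count=11)
print(f"P(>= 11 post-CAPA deviations if the 50% reduction claim held) = {p_value:.3f}")
```

With these numbers the tail probability comes out large, so the data neither confirm the 50% claim nor clearly falsify it. That is itself useful information: six months of counts may simply be too little evidence to declare anything “verified effective.”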

Analytical method validation presents another arena for compliance theater. Traditional practice treats validation as an event: conduct studies demonstrating acceptable performance, generate a validation report, file with regulatory authorities, and consider the method “validated”. The implicit assumption is that a method that passed validation will continue performing acceptably forever, as long as we check system suitability.

But methods validated under controlled conditions with expert analysts and fresh materials often perform differently under routine conditions with typical analysts and aged reagents. The validation represented work-as-imagined. What happens during routine testing is work-as-done.

If we took lifecycle validation seriously, we would treat validation as predicting future performance and continuously test those predictions through Stage 3 ongoing verification. We would monitor not just system suitability pass/fail but trends suggesting performance drift. We would investigate anomalous results as potential signals of method inadequacy.
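
Trend monitoring of this kind need not be elaborate. A one-sided CUSUM, sketched below on hypothetical tailing-factor data, accumulates the small shifts that pass/fail checking never sees; the target, slack, and threshold values are illustrative, not recommended settings:

```python
def cusum_drift(values, target, slack, threshold):
    """One-sided CUSUM for upward drift: accumulate deviations above
    target + slack and alarm when the running sum crosses the threshold,
    long before any single result fails its acceptance limit."""
    s = 0.0
    for run, v in enumerate(values, start=1):
        s = max(0.0, s + (v - target - slack))
        if s > threshold:
            return run  # run at which drift became detectable
    return None

# Hypothetical tailing factors from system suitability, run after run.
# Every value passes a typical <= 2.0 acceptance limit, so pass/fail
# monitoring sees nothing at all.
tailing = [1.21, 1.24, 1.22, 1.28, 1.31, 1.35, 1.38, 1.44, 1.49, 1.55]

run = cusum_drift(tailing, target=1.20, slack=0.03, threshold=0.50)
if run is not None:
    print(f"Drift signal at run {run}: the column is degrading while still 'passing'.")
```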

But Stage 3 verification is underdeveloped in regulatory guidance and practice. So validated methods continue being used until they fail spectacularly, at which point we investigate the failure, implement CAPA, revalidate, and resume the cycle.

The validation documentation proves the method is validated. Whether the method actually works is a separate question.

The Bureaucratic Trap: How Good Systems Go Bad

I need to emphasize: pharmaceutical quality systems did not become bureaucratic because quality professionals are incompetent or indifferent. The bureaucratization happens through the interaction of legitimate pressures that push systems toward forms that are legible, auditable, and defensible but increasingly disconnected from the complex reality they’re meant to govern.

  • Regulatory pressure: Inspectors need evidence that quality is controlled. The most auditable evidence is documentation showing compliance with established procedures. Over time, quality systems optimize for auditability rather than effectiveness.
  • Liability pressure: When quality failures occur, organizations face regulatory action, litigation, and reputational damage. The best defense is demonstrating that all required procedures were followed. This incentivizes comprehensive documentation even when that documentation doesn’t enhance actual quality.
  • Complexity: Pharmaceutical manufacturing is genuinely complex, with thousands of variables affecting product quality. Reducing this complexity to manageable procedures requires simplification. The simplification is necessary, but organizations forget that it’s a reduction rather than the full reality.
  • Scale: As organizations grow, quality systems must work across multiple sites, products, and regulatory jurisdictions. Standardization is necessary for consistency, but standardization requires abstracting away local context—precisely the domain where metis operates.
  • Knowledge loss: When experienced operators leave, their tacit knowledge goes with them. Organizations try to capture this knowledge in ever-more-detailed procedures, but metis cannot be fully proceduralized. The detailed procedures give the illusion of captured knowledge while the actual knowledge has vanished.
  • Management distance: Quality executives are increasingly distant from manufacturing operations. They manage through metrics, dashboards, and reports rather than direct observation. These tools require legibility—quantitative measures, standardized reports, formatted data. The gap between management’s understanding and operational reality grows.
  • Inspection trauma: After regulatory inspections that identify deficiencies, organizations often respond by adding more procedures, more documentation, more oversight. The response to bureaucratic dysfunction is more bureaucracy.

Each of these pressures is individually rational. Taken together, they recreate the conditions Scott identified for high-modernist failure: administrative ordering of complex systems, confidence in formal procedures and documentation, authority willing to enforce compliance, and, increasingly, a weakened operational environment that can’t effectively resist.

What we get is the Kafkaesque quality system: elaborate, well-documented, apparently flawless, generating enormous amounts of evidence that it’s functioning properly, and potentially failing to ensure the quality it was designed to ensure.

The Consequences: When Bureaucracy Defeats Quality

The most insidious aspect of bureaucratic quality systems is that they can fail quietly. Unlike catastrophic contamination events or major product recalls, bureaucratic dysfunction produces gradual degradation that may go unnoticed because all the quality metrics say everything is fine.

Investigation without learning: Investigations that focus on completing investigation procedures rather than understanding causal mechanisms don’t generate knowledge that prevents recurrence. Organizations keep investigating the same types of problems, implementing CAPAs that check compliance boxes without addressing underlying issues, and declaring investigations “closed” when the paperwork is complete.

Research on incident investigation culture reveals what investigators call “new blame”—a dysfunction where investigators avoid examining human factors for fear of seeming accusatory, instead quickly attributing problems to “unclear procedures” or “inadequate training” without probing what actually happened. This appears to be blame-free but actually prevents learning by refusing to engage with the complexity of how humans interact with systems.

Analytical unreliability: Methods that “passed validation” may be silently failing under routine conditions, generating subtly inaccurate results that don’t trigger obvious failures but gradually degrade understanding of product quality. Nobody knows because Stage 3 verification isn’t rigorous enough to detect drift.

Operator disengagement: When operators know that the formal procedures don’t match operational reality, when they’re required to document work-as-imagined while performing work-as-done, when they see problems but reporting them triggers bureaucratic responses that don’t fix anything, they disengage. They stop reporting. They develop workarounds. They focus on satisfying the visible compliance requirements rather than ensuring genuine quality.

This is exactly what Merton predicted: bureaucratic structures that punish initiative and reward procedural compliance create officials who follow rules rather than thinking about purpose.

Resource misallocation: Organizations spend enormous resources on compliance activities that satisfy audit requirements without enhancing quality. Documentation of training that doesn’t transfer knowledge. CAPA systems that process hundreds of actions of marginal effectiveness. Validation studies that prove compliance with validation requirements without establishing genuine fitness for purpose.

Structural secrecy: Critical information that front-line operators possess about equipment quirks, material variability, and process issues doesn’t flow to quality management because bureaucratic boundaries prevent information transfer. Management makes decisions based on formal reports that reflect work-as-imagined while work-as-done remains invisible.

Loss of resilience: Organizations that depend on rigid procedures and standardized responses become brittle. When unexpected situations arise—novel contamination sources, unusual material properties, equipment failures that don’t fit prescribed categories—the organization can’t adapt because it has systematically eliminated the metis that enables adaptive response.

This last point deserves emphasis. Quality systems should make organizations more resilient—better able to maintain quality despite disturbances and variability. But bureaucratic quality systems can do the opposite. By requiring that everything be prescribed in advance, they eliminate the adaptive capacity that enables resilience.

The Alternative: High Reliability Organizations

So how do we escape the bureaucratic trap? The answer emerges from studying what researchers Karl Weick and Kathleen Sutcliffe call “High Reliability Organizations” (HROs)—organizations that operate in complex, hazardous environments yet maintain exceptional safety records.

Nuclear aircraft carriers. Air traffic control systems. Wildland firefighting teams. These organizations can’t afford the luxury of bureaucratic dysfunction because failure means catastrophic consequences. Yet they operate in environments at least as complex as pharmaceutical manufacturing.

Weick and Sutcliffe identified five principles that characterize HROs:

Preoccupation with failure: HROs treat any anomaly as a potential symptom of deeper problems. They don’t wait for catastrophic failures. They investigate near-misses rigorously. They encourage reporting of even minor issues.

This is the opposite of compliance-focused quality systems that measure success by absence of major deviations and treat minor issues as acceptable noise.

Reluctance to simplify: HROs resist the temptation to reduce complex situations to simple categories. They maintain multiple interpretations of what’s happening rather than prematurely converging on a single explanation.

This challenges the bureaucratic need for legibility. It’s harder to manage systems that resist simple categorization. But it’s more effective than managing simplified representations that don’t reflect reality.

Sensitivity to operations: HROs maintain ongoing awareness of what’s happening at the sharp end where work is actually done. Leaders stay connected to operational reality rather than managing through dashboards and metrics.

This requires bridging the gap between work-as-imagined and work-as-done. It requires seeing metis rather than trying to eliminate it.

Commitment to resilience: HROs invest in adaptive capacity—the ability to respond effectively when unexpected situations arise. They practice scenario-based training. They maintain reserves of expertise. They design systems that can accommodate surprises.

This is different from bureaucratic systems that try to prevent all surprises through comprehensive procedures.

Deference to expertise: In HROs, authority migrates to whoever has relevant expertise regardless of hierarchical rank. During anomalous situations, the person with the best understanding of what’s happening makes decisions, even if that’s a junior operator rather than a senior manager.

Weick describes this as valuing “greasy hands knowledge”—the practical, experiential understanding of people directly involved in operations. This is metis by another name.

These principles directly challenge bureaucratic pathologies. Where bureaucracies focus on following established procedures, HROs focus on constant vigilance for signs that procedures aren’t working. Where bureaucracies demand hierarchical approval, HROs defer to frontline expertise. Where bureaucracies simplify for legibility, HROs maintain complexity.

Can pharmaceutical quality systems adopt HRO principles? Not easily, because the regulatory environment demands legibility and auditability. But neither can pharmaceutical quality systems afford continued bureaucratic dysfunction as complexity increases and the gap between work-as-imagined and work-as-done widens.

Building Falsifiable Quality Systems

Throughout this blog I’ve advocated for what I call falsifiable quality systems—systems designed to make testable predictions that could be proven wrong through empirical observation.

Traditional quality systems make unfalsifiable claims: “This method was validated according to ICH Q2 requirements.” “Procedures are followed.” “CAPA prevents recurrence.” These are statements about activities that occurred in the past, not predictions about future performance.

Falsifiable quality systems make explicit predictions: “This analytical method will generate reportable results within ±5% of true value under normal operating conditions.” “When operated within the defined control strategy, this process will consistently produce product meeting specifications.” “The corrective action implemented will reduce this deviation type by at least 50% over the next six months”.

These predictions can be tested. If ongoing data shows the method isn’t achieving ±5% accuracy, the prediction is falsified—the method isn’t performing as validation claimed. If deviations haven’t decreased after CAPA implementation, the prediction is falsified—the corrective action didn’t work.

Falsifiable systems create accountability for effectiveness rather than compliance. They force honest engagement with whether quality systems are actually ensuring quality.

This connects directly to HRO principles. Preoccupation with failure means treating falsification seriously—when predictions fail, investigating why. Reluctance to simplify means acknowledging the complexity that makes some predictions uncertain. Sensitivity to operations means using operational data to test predictions continuously. Commitment to resilience means building systems that can recognize and respond when predictions fail.

It also requires what researchers call “just culture”—systems that distinguish between honest errors, at-risk behaviors, and reckless violations. Bureaucratic blame cultures punish all failures, driving problems underground. “No-blame” cultures avoid examining human factors, preventing learning. Just cultures examine what happened honestly, including human decisions and actions, while focusing on system improvement rather than individual punishment.

In just culture, when a prediction is falsified—when a validated method fails, when CAPA doesn’t prevent recurrence, when operators can’t follow procedures—the response isn’t to blame individuals or to paper over the gap with more documentation. The response is to examine why the prediction was wrong and redesign the system to make it correct.

This requires the intellectual honesty to acknowledge when quality systems aren’t working. It requires willingness to look at work-as-done rather than only work-as-imagined. It requires recognizing operator metis as legitimate knowledge rather than deviation from procedures. It requires valuing learning over legibility.

Practical Steps: Escaping the Castle

How do pharmaceutical quality organizations actually implement these principles? How do we escape Kafka’s Castle once we’ve built it?

I won’t pretend this is easy. The pressures toward bureaucratization are real and powerful. Regulatory requirements demand legibility. Corporate management requires standardization. Inspection findings trigger defensive responses. The path of least resistance is always more procedures, more documentation, more oversight.

But some concrete steps can bend the trajectory away from bureaucratic dysfunction toward genuine effectiveness:

Make quality systems falsifiable: For every major quality commitment—validated analytical methods, qualified processes, implemented CAPAs—articulate explicit, testable predictions about future performance. Then systematically test those predictions through ongoing monitoring. When predictions fail, investigate why and redesign systems rather than rationalizing the failure away.
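
As one concrete illustration of what “articulate and test” might mean, here is a minimal sketch of a prediction register. The structure is the point; the dataclass, claims, dates, and thresholds are all hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QualityPrediction:
    claim: str                 # what we commit to, stated so data can contradict it
    test: Callable[[], bool]   # returns True while the prediction holds
    review_by: str             # date by which the prediction must next be tested

# Hypothetical Stage 3 recoveries (% of nominal) and a post-CAPA deviation count
recoveries: List[float] = [99.8, 101.2, 98.4, 106.4]
post_capa_deviations = 11

registry = [
    QualityPrediction(
        claim="Assay method stays within +/-5% of nominal in routine use",
        test=lambda: all(95.0 <= r <= 105.0 for r in recoveries),
        review_by="2026-09-30",
    ),
    QualityPrediction(
        claim="CAPA-1234 holds this deviation type to <= 7 per six months",
        test=lambda: post_capa_deviations <= 7,
        review_by="2026-12-31",
    ),
]

for p in registry:
    status = "holds" if p.test() else "FALSIFIED -- investigate and redesign"
    print(f"[{p.review_by}] {p.claim}: {status}")
```

Nothing in the sketch is sophisticated, and that is the point: the hard part is the organizational willingness to write commitments down in a form that data can contradict, not the tooling.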

Close the gap between work-as-imagined and work-as-done: Create safe mechanisms for understanding work-as-done. Don’t punish operators for revealing that procedures don’t match reality. Instead, use this information to improve procedures or acknowledge that some adaptation is necessary and train operators in effective adaptation rather than pretending perfect procedural compliance is possible.

Value metis: Recognize that operator expertise, analytical judgment, and troubleshooting capability are not obstacles to standardization but essential elements of quality systems. Document not just procedures but decision rules for when to adapt. Create mechanisms for transferring tacit knowledge. Include experienced operators in investigation and CAPA design.

Practice just culture: Distinguish between system-induced errors, at-risk behaviors under production pressure, and genuinely reckless violations. Focus investigations on understanding causal factors rather than assigning blame or avoiding blame. Hold people accountable for reporting problems and learning from them, not for making the inevitable errors that complex systems generate.

Implement genuine Stage 3 verification: Treat validation as predicting ongoing performance rather than certifying past performance. Monitor analytical methods, processes, and quality system elements for signs that their performance is drifting from predictions. Detect and address degradation early rather than waiting for catastrophic failure.

Bridge bureaucratic boundaries: Create information flows that cross organizational boundaries so that what operators know reaches quality management, what quality management knows reaches site leadership, and what site leadership knows shapes corporate quality strategy. This requires fighting against structural secrecy, perhaps through regular gemba walks, operator inclusion in quality councils, and bottom-up reporting mechanisms that protect operators who surface uncomfortable truths.

Test CAPA effectiveness honestly: Don’t just verify that corrective actions were implemented. Test whether they solved the problem. If a deviation was caused by inadequate operator capability, test whether capability improved. If it was caused by equipment limitation, test whether the limitation was eliminated. If the problem hasn’t recurred but you haven’t tested whether your corrective action was responsible, you don’t know if the CAPA worked—you know you got lucky.

Question metrics that measure activity rather than outcomes: Training completion rates don’t tell you whether people learned anything. Deviation closure timeliness doesn’t tell you whether investigations found root causes. CAPA implementation rates don’t tell you whether CAPAs were effective. Replace these with metrics that test quality system predictions: analytical result accuracy, process capability indices, deviation recurrence rates after CAPA, investigation quality assessed by independent review.
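
Of the outcome metrics just listed, process capability is the most compact to illustrate. A minimal sketch on invented assay data; the 95.0–105.0 limits are hypothetical, and the rule of thumb that a Cpk of roughly 1.33 or more indicates a capable process is a convention, not a regulation:

```python
import statistics

def cpk(values, lsl, usl):
    """Process capability index: distance from the process mean to the
    nearer specification limit, in units of three standard deviations.
    An outcome metric: it moves only when the process actually improves."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical assay results (% label claim) against 95.0-105.0 specifications
batches = [99.2, 100.1, 98.7, 99.8, 100.4, 99.5, 100.0, 99.1, 100.6, 99.7]
print(f"Cpk = {cpk(batches, lsl=95.0, usl=105.0):.2f}")
```

No amount of on-time CAPA closure moves this number; only the process itself can.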

Embrace productive failure: When quality system elements fail—when validated methods prove unreliable, when procedures can’t be followed, when CAPAs don’t prevent recurrence—treat these as opportunities to improve systems rather than problems to be concealed or rationalized. HRO preoccupation with failure means seeing small failures as gifts that reveal system weaknesses before they cause catastrophic problems.

Continuous improvement, genuinely practiced: Implement PDCA (Plan-Do-Check-Act) or PDSA (Plan-Do-Study-Act) cycles not as compliance requirements but as systematic methods for testing changes before full implementation. Use small-scale experiments to determine whether proposed improvements actually improve rather than deploying changes enterprise-wide based on assumption.
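
For the “Study” step, even a simple resampling test beats assumption. The sketch below compares an invented ten-batch pilot of a proposed procedure revision against ten batches under the current procedure; a permutation test is one straightforward choice, and every number here is hypothetical:

```python
import random
import statistics

def permutation_test(control, trial, n_resamples=10_000, seed=42):
    """Two-sample permutation test: how often would a mean difference this
    large arise by chance if the proposed change made no difference at all?"""
    rng = random.Random(seed)
    observed = statistics.mean(trial) - statistics.mean(control)
    pooled = list(control) + list(trial)
    extreme = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[len(control):]) - statistics.mean(pooled[:len(control)])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_resamples

# Hypothetical pilot: batch yield (%) under the current procedure vs. the
# proposed revision, run small-scale before any enterprise-wide rollout.
current  = [96.1, 95.8, 96.4, 95.2, 96.0, 95.7, 96.3, 95.5, 95.9, 96.2]
proposed = [96.8, 96.5, 97.1, 96.2, 96.9, 96.4, 97.0, 96.6, 96.3, 96.7]

diff, p = permutation_test(current, proposed)
print(f"Observed improvement: {diff:+.2f} points, p = {p:.4f}")
# Act on the evidence -- adopt, adapt, or abandon -- not on assumption.
```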

Reduce the burden of irrelevant documentation: Much compliance documentation serves no quality purpose—it exists to satisfy audit requirements or regulatory expectations that may themselves be bureaucratic artifacts. Distinguish between documentation that genuinely supports quality (specifications, test results, deviation investigations that find root causes) and documentation that exists to demonstrate compliance (training attendance rosters for content people already know, CAPA effectiveness checks that verify nothing). Fight to eliminate the latter, or at least prevent it from crowding out the former.

The Politics of De-Bureaucratization

Here’s the uncomfortable truth: escaping the Kafkaesque quality system requires political will at the highest levels of organizations.

Quality professionals can implement some improvements within their spheres of influence—better investigation practices, more rigorous CAPA effectiveness checks, enhanced Stage 3 verification. But truly escaping the bureaucratic trap requires challenging structures that powerful constituencies benefit from.

Regulatory authorities benefit from legibility—it makes inspection and oversight possible. Corporate management benefits from standardization and quantitative metrics—they enable governance at scale. Quality bureaucracies themselves benefit from complexity and documentation—they justify resources and headcount.

Operators and production management often bear the costs of bureaucratization—additional documentation burden, inability to adapt to reality, blame when gaps between procedures and practice are revealed. But they’re typically the least powerful constituencies in pharmaceutical organizations.

Changing this dynamic requires quality leaders who understand that their role is ensuring genuine quality rather than managing compliance theater. It requires site leaders who recognize that bureaucratic dysfunction threatens product quality even when all audit checkboxes are green. It requires regulatory relationships mature enough to discuss work-as-done openly rather than pretending work-as-imagined is reality.

Scott argues that successful resistance to high-modernist schemes depends on civil society’s capacity to push back. In pharmaceutical organizations, this means empowering operational voices—the people with metis, with greasy-hands knowledge, with direct experience of the gap between procedures and reality. It means creating forums where they can speak without fear of retaliation. It means quality leaders who listen to operational expertise even when it reveals uncomfortable truths about quality system dysfunction.

This is threatening to bureaucratic structures precisely because it challenges their premise—that quality can be ensured through comprehensive documented procedures enforced by hierarchical oversight. If we acknowledge that operator metis is essential, that adaptation is necessary, that work-as-done will never perfectly match work-as-imagined, we’re admitting that the Castle isn’t really flawless.

But the Castle never was flawless. Kafka knew that. The servant destroying paperwork because he couldn’t figure out the recipient wasn’t an aberration—it was a glimpse of reality. The question is whether we continue pretending the bureaucracy works perfectly while it fails quietly, or whether we build quality systems honest enough to acknowledge their limitations and resilient enough to function despite them.

The Quality System We Need

Pharmaceutical quality systems exist in genuine tension. They must be rigorous enough to prevent failures that harm patients. They must be documented well enough to satisfy regulatory scrutiny. They must be standardized enough to work across global operations. These are not trivial requirements, and they cannot be dismissed as mere bureaucratic impositions.

But they must also be realistic enough to accommodate the complexity of manufacturing, flexible enough to incorporate operator metis, honest enough to acknowledge the gap between procedures and practice, and resilient enough to detect and correct performance drift before catastrophic failures occur.

We will not achieve this by adding more procedures, more documentation, more oversight. We’ve been trying that approach for decades, and the result is the bureaucratic trap we’re in. Every new procedure adds another layer to the Castle, another barrier between quality management and operational reality, another opportunity for the gap between work-as-imagined and work-as-done to widen.

Instead, we need quality systems designed around falsifiable predictions tested through ongoing verification. Systems that value learning over legibility. Systems that bridge bureaucratic boundaries to incorporate greasy-hands knowledge. Systems that distinguish between productive compliance and compliance theater. Systems that acknowledge complexity rather than reducing it to manageable simplifications that don’t reflect reality.

We need, in short, to stop building the Castle and start building systems for humans doing real work under real conditions.

Kafka never finished The Castle. The manuscript breaks off mid-sentence. Whether K. ever reaches the Castle, whether the officials ever explain themselves, whether the flawless bureaucracy ever acknowledges its contradictions—we’ll never know.

But pharmaceutical quality professionals don’t have the luxury of leaving the story unfinished. We’re living in it. Every day we choose whether to add another procedure to the Castle or to build something different. Every deviation investigation either perpetuates compliance theater or pursues genuine learning. Every CAPA either checks boxes or solves problems. Every validation either creates falsifiable predictions or generates documentation that satisfies audits without ensuring quality.

The bureaucratic trap is powerful precisely because each individual choice seems reasonable. Each procedure addresses a real gap. Each documentation requirement responds to an audit finding. Each oversight layer prevents a potential problem. And gradually, imperceptibly, we build a system that looks comprehensive and rigorous and “flawless” but may or may not be ensuring the quality it exists to ensure.

Escaping the trap requires intellectual honesty about whether our quality systems are working. It requires organizational courage to acknowledge gaps between procedures and practice. It requires regulatory maturity to discuss work-as-done rather than pretending work-as-imagined is reality. It requires quality leadership that values effectiveness over auditability.

Most of all, it requires remembering why we built quality systems in the first place: not to satisfy inspections, not to generate documentation, not to create employment for quality professionals, but to ensure that medicines reaching patients are safe, effective, and consistently manufactured to specification.

That goal is not served by Kafkaesque bureaucracy. It’s not served by the Castle, with its mysterious officials and contradictory explanations and flawless procedures that somehow involve destroying paperwork when nobody knows what to do with it.

It’s served by systems designed for humans, systems that acknowledge complexity, systems that incorporate the metis of people who actually do the work, systems that make falsifiable predictions and honestly evaluate whether those predictions hold.

It’s served by escaping the bureaucratic trap.

The question is whether pharmaceutical quality leadership has the courage to leave the Castle.

Beyond Malfunction Mindset: Normal Work, Adaptive Quality, and the Future of Pharmaceutical Problem-Solving

Beyond the Shadow of Failure

Problem-solving is too often shaped by the assumption that the system is perfectly understood and fully specified. If something goes wrong—a deviation, an out-of-specification batch, or a contamination event—our approach is to dissect what “failed” and fix that flaw, believing this will restore order. This way of thinking, which I call the malfunction mindset, is as ingrained as it is incomplete. It assumes that successful outcomes are the default, that work always happens as written in SOPs, and that only failure deserves our scrutiny.

But here’s the paradox: most of the time, our highly complex manufacturing environments actually succeed—often under imperfect, shifting, and not fully understood conditions. If we only study what failed, and never question how our systems achieve their many daily successes, we miss the real nature of pharmaceutical quality: it is not the absence of failure, but the presence of robust, adaptive work. Taking this broader, more nuanced perspective is not just an academic exercise—it’s essential for building resilient operations that truly protect patients, products, and our organizations.

Drawing on my earlier thinking about zemblanity (the predictable but often overlooked negative outcomes of well-intentioned quality fixes), the effectiveness paradox (why “nothing bad happened” isn’t proof your quality system works), and the persistent gap between work-as-imagined and work-as-done, this post explores why the malfunction mindset persists, how it distorts investigations, and what future-ready quality management should look like.

The Allure—and Limits—of the Failure Model

Why do we reflexively look for broken parts and single points of failure? It is, as Sidney Dekker has argued, both comforting and defensible. When something goes wrong, you can always point to a failed sensor, a missed checklist, or an operator error. This approach—introducing another level of documentation, another check, another layer of review—offers a sense of closure and regulatory safety. After all, as long as you can demonstrate that you “fixed” something tangible, you’ve fulfilled investigational due diligence.

Yet this fails to account for how quality is actually produced—or lost—in the real world. The malfunction model treats systems like complicated machines: fix the broken gear, oil the creaky hinge, and the machine runs smoothly again. But, as Dekker reminds us in Drift Into Failure, such linear thinking ignores the drift, adaptation, and emergent complexity that characterize real manufacturing environments. The truth is, in complex adaptive systems like pharmaceutical manufacturing, it often takes more than one “error” for failure to manifest. The system absorbs small deviations continuously, adapting and flexing until, sometimes, a boundary is crossed and a problem surfaces.

W. Edwards Deming’s wisdom rings truer than ever: “Most problems result from the system itself, not from individual faults.” A sustainable approach to quality is one that designs for success—and that means understanding the system-wide properties enabling robust performance, not just eliminating isolated malfunctions.

Procedural Fundamentalism: The Work-as-Imagined Trap

One of the least examined, yet most impactful, contributors to the malfunction mindset is procedural fundamentalism—the belief that the written procedure is both a complete specification and an accurate description of work. This feels rigorous and provides compliance comfort, but it is a profound misreading of how work actually happens in pharmaceutical manufacturing.

Work-as-imagined, as elucidated by Erik Hollnagel and others, represents an abstraction: it is how distant architects of SOPs visualize the “correct” execution of a process. Yet, real-world conditions—resource shortages, unexpected interruptions, mismatched raw materials, shifting priorities—force adaptation. Operators, supervisors, and Quality professionals do not simply “follow the recipe”: they interpret, improvise, and—crucially—adjust on the fly.

When we treat procedures as authoritative descriptions of reality, we create the proxy problem: our investigations compare real operations against an imagined baseline that never fully existed. Deviations become automatically framed as problem points, and success is redefined as rigid adherence, regardless of context or outcome.

Complexity, Performance Variability, and Real Success

So, how do pharmaceutical operations succeed so reliably despite the ever-present complexity and variability of daily work?

The answer lies in embracing performance variability as a feature of robust systems, not a flaw. In high-reliability environments—from aviation to medicine to pharmaceutical manufacturing—success is routinely achieved not by demanding strict compliance, but by cultivating adaptive capacity.

Consider environmental monitoring in a sterile suite: The procedure may specify precise times and locations, but a seasoned operator, noticing shifts in people flow or equipment usage, might proactively sample a high-risk area more frequently. This adaptation—not captured in work-as-imagined—actually strengthens data integrity. Yet, traditional metrics would treat this as a procedural deviation.
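
If an organization wanted to make that kind of adaptation legible rather than treat it as a deviation, the logic might be codified along these lines. This is purely an illustrative sketch, and every name and threshold in it is a hypothetical placeholder:

```python
def sampling_interval(base_interval_min, personnel_events, interventions,
                      event_threshold=10, intervention_threshold=2):
    """Shorten the sampling interval when observed activity suggests higher
    contamination risk; never sample less often than the procedural baseline."""
    factor = 1.0
    if personnel_events > event_threshold:
        factor *= 0.5   # heavy personnel traffic: sample twice as often
    if interventions > intervention_threshold:
        factor *= 0.5   # open-process interventions: halve the interval again
    return max(base_interval_min * factor, 15)  # floor at 15 minutes

# Hypothetical shift: 14 personnel movements, 3 line interventions
print(sampling_interval(base_interval_min=120, personnel_events=14, interventions=3))
```

The point is not that the operator’s judgment reduces to a formula; it is that the system could acknowledge and support risk-weighted adaptation instead of forcing it underground.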

This is the paradox of the malfunction mindset: in seeking to eliminate all performance variability, we risk undermining precisely those adaptive behaviors that produce reliable quality under uncertainty.

Why the Malfunction Mindset Persists: Cognitive Comfort and Regulatory Reinforcement

Why do organizations continue to privilege the malfunction mindset, even as evidence accumulates of its limits? The answer is both psychological and cultural.

Component breakdown thinking is psychologically satisfying—it offers a clear problem, a specific cause, and a direct fix. For regulatory agencies, it is easy to measure and audit: did the deviation investigation determine the root cause, did the CAPA address it, does the documentation support this narrative? Anything that doesn’t fit this model is hard to defend in audits or inspections.

Yet this approach offers, at best, a partial diagnosis and, at worst, the illusion of control. It encourages organizations to catalog deviations while leaving unexamined the much broader universe of daily adaptations that actually determine system robustness.

Complexity Science and the Art of Organizational Success

To move toward a more accurate—and ultimately more effective—model of quality, pharmaceutical leaders must integrate the insights of complexity science. Drawing from the work of Stuart Kauffman and others at the Santa Fe Institute, we understand that the highest-performing systems operate not at the edge of rigid order, but at the “edge of chaos,” where structure is balanced with adaptability.

In these systems, success and failure both arise from emergent properties—the patterns of interaction between people, procedures, equipment, and environment. The most meaningful interventions, therefore, address how the parts interact, not just how each part functions in isolation.

This explains why traditional root cause analysis, focused on the parts, often fails to produce lasting improvements; it cannot account for outcomes that emerge only from the collective dynamics of the system as a whole.

Investigating for Learning: The Take-the-Best Heuristic

A key innovation needed in pharmaceutical investigations is a shift to what Hollnagel calls Safety-II thinking: focusing on how things go right as well as why they occasionally go wrong.

Here, the take-the-best heuristic becomes crucial. Instead of compiling lists of all deviations, ask: Among all contributing factors, which one, if addressed, would have the most powerful positive impact on future outcomes, while preserving adaptive capacity? This approach ensures investigations generate actionable, meaningful learning, rather than feeding the endless paper chase of “compliance theater.”
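
A minimal sketch of that selection logic, with hypothetical factor names and scores: each contributing factor is rated for expected improvement and for how much adaptive capacity its correction would consume, and the single best eligible factor is addressed first:

```python
def take_the_best(factors, max_capacity_cost=0.3):
    """Pick the single contributing factor whose correction promises the most
    improvement while staying below a ceiling on lost adaptive capacity."""
    eligible = [f for f in factors if f["capacity_cost"] <= max_capacity_cost]
    if not eligible:
        return None
    return max(eligible, key=lambda f: f["expected_improvement"])

# Hypothetical factors from a deviation investigation
factors = [
    {"name": "equipment interface mismatch", "expected_improvement": 0.8, "capacity_cost": 0.1},
    {"name": "add second verification step",  "expected_improvement": 0.4, "capacity_cost": 0.5},
    {"name": "clarify SOP decision criteria", "expected_improvement": 0.6, "capacity_cost": 0.2},
]
print(take_the_best(factors)["name"])  # -> equipment interface mismatch
```

Note what the capacity-cost ceiling does: the second verification step, a classic reflex fix, is excluded not because it wouldn’t help but because it would consume too much of the adaptive slack that makes daily success possible.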

Building Systems That Support Adaptive Capability

Taking complexity and adaptive performance seriously requires practical changes to how we design procedures, train, oversee, and measure quality.

  • Procedure Design: Make explicit the distinction between objectives and methods. Procedures should articulate clear quality goals, specify necessary constraints, but deliberately enable workers to choose methods within those boundaries when faced with new conditions.
  • Training: Move beyond procedural compliance. Develop adaptive expertise in your staff, so they can interpret and adjust sensibly—understanding not just “what” to do, but “why” it matters in the bigger system.
  • Oversight and Monitoring: Audit for adaptive capacity. Don’t just track “compliance” but also whether workers have the resources and knowledge to adapt safely and intelligently. Positive performance variability (smart adaptations) should be recognized and studied.
  • Quality System Design: Build systematic learning from both success and failure. Examine ordinary operations to discern how adaptive mechanisms work, and protect these capabilities rather than squashing them in the name of “control.”

Leadership and Systems Thinking

Realizing this vision depends on a transformation in leadership mindset—from one seeking control to one enabling adaptive capacity. Deming’s profound knowledge and the principles of complexity leadership remind us that what matters is not enforcing ever-stricter compliance, but cultivating an organizational context where smart adaptation and genuine learning become standard.

Leadership must:

  • Distinguish between complicated and complex: Apply detailed procedures to the former (e.g., calibration), but support flexible, principles-based management for the latter.
  • Tolerate appropriate uncertainty: Not every problem has a clear, single answer. Creating psychological safety is essential for learning and adaptation during ambiguity.
  • Develop learning organizations: Invest in deep understanding of operations, foster regular study of work-as-done, and celebrate insights from both expected and unexpected sources.

Practical Strategies for Implementation

Turning these insights into institutional practice involves a systematic, research-inspired approach:

  • Start procedure development with observation of real work before specifying methods. Small-scale pilots and mock exercises are critical.
  • Employ cognitive apprenticeship models in training, so that experience, reasoning under uncertainty, and systems thinking become core competencies.
  • Begin investigations with appreciative inquiry—map out how the system usually works, not just how it trips up.
  • Measure leading indicators (capacity, information flow, adaptability) not just lagging ones (failures, deviations).
  • Create closed feedback loops for corrective actions—insisting every intervention be evaluated for impact on both compliance and adaptive capacity (a minimal sketch follows this list).
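
Here is the closed-feedback-loop idea from the last bullet as a sketch; the class, fields, and thresholds are hypothetical, and the point is only that closure requires a measured verdict on both dimensions:

```python
from dataclasses import dataclass

@dataclass
class InterventionReview:
    """Follow-up record for a corrective action: closure requires measured
    evidence on recurrence, compliance, and adaptive capacity."""
    name: str
    recurrence_prevented: bool       # did the original problem stay away?
    compliance_delta: float          # measured change, e.g., audit-finding rate
    adaptive_capacity_delta: float   # negative = workers lost room to adapt

    def verdict(self):
        if not self.recurrence_prevented:
            return "rework: intervention did not address the problem"
        if self.compliance_delta < 0:
            return "revise: compliance outcomes worsened after the change"
        if self.adaptive_capacity_delta < -0.2:
            return "revise: fix works but constrains adaptation too much"
        return "close: effective on both compliance and adaptive capacity"

# Hypothetical follow-up on a new changeover checklist
review = InterventionReview("new changeover checklist", True, 0.1, -0.05)
print(review.verdict())
```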

Scientific Quality Management and Adaptive Systems: No Contradiction

The tension between rigorous scientific quality management (QbD, process validation, risk management frameworks) and support for adaptation is a false dilemma. Indeed, genuine scientific quality management starts with humility: the recognition that our understanding of complex systems is always partial, our controls imperfect, and our frameworks provisional.

A falsifiable quality framework embeds learning and adaptation at its core—treating deviations as opportunities to test and refine models, rather than simply checkboxes to complete.

The best organizations are not those that experience the fewest deviations, but those that learn fastest from both expected and unexpected events, and apply this knowledge to strengthen both system structure and adaptive capacity.

Embracing Normal Work: Closing the Gap

Normal pharmaceutical manufacturing is not the story of perfect procedural compliance; it’s the story of people working together to achieve quality goals under diverse, unpredictable, and evolving conditions. This is both more challenging—and more rewarding—than any plan prescribed solely by SOPs.

To truly move the needle on pharmaceutical quality, organizations must:

  • Embrace performance variability as evidence of adaptive capacity, not just risk.
  • Investigate for learning, not blame; study success, not just failure.
  • Design systems to support both structure and flexible adaptation—never sacrificing one entirely for the other.
  • Cultivate leadership that values humility, systems thinking, and experimental learning, creating a culture comfortable with complexity.

This approach will not be easy. It means questioning decades of compliance custom, organizational habit, and intellectual ease. But the payoff is immense: more resilient operations, fewer catastrophic surprises, and, above all, improved safety and efficacy for the patients who depend on our products.

The challenge—and the opportunity—facing pharmaceutical quality management is to evolve beyond compliance theater and malfunction thinking into a new era of resilience and organizational learning. Success lies not in the illusory comfort of perfectly executed procedures, but in the everyday adaptations, intelligent improvisation, and system-level capabilities that make those successes possible.

The call to action is clear: Investigate not just to explain what failed, but to understand how, and why, things so often go right. Protect, nurture, and enhance the adaptive capacities of your organization. In doing so, pharmaceutical quality can finally become more than an after-the-fact audit; it will become the creative, resilient capability that patients, regulators, and organizations genuinely want to hire.

Applying Jobs-to-Be-Done to Risk Management

In my recent exploration of the Jobs-to-Be-Done (JTBD) tool for process improvement, I examined how this customer-centric approach could revolutionize our understanding of deviation management. I want to extend that analysis to another fundamental challenge in pharmaceutical quality: risk management.

As we grapple with increasing regulatory complexity, accelerating technological change, and the persistent threat of risk blindness, most organizations remain trapped in what I call “compliance theater”—performing risk management activities that satisfy auditors but fail to build genuine organizational resilience. JTBD is a useful tool as we move beyond this theater toward risk management that actually creates value.

The Risk Management Jobs Users Actually Hire

When quality professionals, executives, and regulatory teams engage with risk management processes, what job are they really trying to accomplish? The answer reveals a profound disconnect between organizational intent and actual capability.

The Core Functional Job

“When facing uncertainty that could impact product quality, patient safety, or business continuity, I want to systematically understand and address potential threats, so I can make confident decisions and prevent surprise failures.”

This job statement immediately exposes the inadequacy of most risk management systems. They focus on documentation rather than understanding, assessment rather than decision enablement, and compliance rather than prevention.

The Consumption Jobs: The Hidden Workload

Risk management involves numerous consumption jobs that organizations often ignore:

  • Evaluation and Selection: “I need to choose risk assessment methodologies that match our operational complexity and regulatory environment.”
  • Implementation and Training: “I need to build organizational risk capability without creating bureaucratic overhead.”
  • Maintenance and Evolution: “I need to keep our risk approach current as our business and threat landscape evolves.”
  • Integration and Communication: “I need to ensure risk insights actually influence business decisions rather than gathering dust in risk registers.”

These consumption jobs represent the difference between risk management systems that organizations grudgingly tolerate and those they genuinely want to “hire.”

The Eight-Step Risk Management Job Map

Applying JTBD’s universal job map to risk management reveals where current approaches systematically fail:

1. Define: Establishing Risk Context

What users need: Clear understanding of what they’re assessing, why it matters, and what decisions the risk analysis will inform.

Current reality: Risk assessments often begin with template completion rather than context establishment, leading to generic analyses that don’t support actual decision-making.

2. Locate: Gathering Risk Intelligence

What users need: Access to historical data, subject matter expertise, external intelligence, and tacit knowledge about how things actually work.

Current reality: Risk teams typically work from documentation rather than engaging with operational reality, missing the pattern recognition and apprenticeship dividend that experienced practitioners possess.

3. Prepare: Creating Assessment Conditions

What users need: Diverse teams, psychological safety for honest risk discussions, and structured approaches that challenge rather than confirm existing assumptions.

Current reality: Risk assessments often involve homogeneous teams working through predetermined templates, perpetuating the GI Joe fallacy—believing that knowledge of risk frameworks prevents risky thinking.

4. Confirm: Validating Assessment Readiness

What users need: Confidence that they have sufficient information, appropriate expertise, and clear success criteria before proceeding.

Current reality: Risk assessments proceed regardless of information quality or team readiness, driven by schedule rather than preparation.

5. Execute: Conducting Risk Analysis

What users need: Systematic identification of risks, analysis of interconnections, scenario testing, and development of robust mitigation strategies.

Current reality: Risk analysis often becomes risk scoring—reducing complex phenomena to numerical ratings that provide false precision rather than genuine insight.

6. Monitor: Tracking Risk Reality

What users need: Early warning systems that detect emerging risks and validate the effectiveness of mitigation strategies.

Current reality: Risk monitoring typically involves periodic register updates rather than active intelligence gathering, missing the dynamic nature of risk evolution.

7. Modify: Adapting to New Information

What users need: Responsive adjustment of risk strategies based on monitoring feedback and changing conditions.

Current reality: Risk assessments often become static documents, updated only during scheduled reviews rather than when new information emerges.

8. Conclude: Capturing Risk Learning

What users need: Systematic capture of risk insights, pattern recognition, and knowledge transfer that builds organizational risk intelligence.

Current reality: Risk analysis conclusions focus on compliance closure rather than learning capture, missing opportunities to build the organizational memory that prevents risk blindness.
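
One way to put the job map to work is as a simple readiness audit of an existing risk process. In this sketch the step names come from the map above, while the check questions and the audit results are hypothetical:

```python
JOB_MAP_CHECKS = {
    "define":   "Is it clear which decision this assessment will inform?",
    "locate":   "Did we gather operational knowledge, not just documentation?",
    "prepare":  "Is the team diverse, with safety to voice dissent?",
    "confirm":  "Do we have enough information to proceed, or just a deadline?",
    "execute":  "Are we analyzing interconnections, or just scoring?",
    "monitor":  "Do early-warning signals exist for the risks we identified?",
    "modify":   "Is there a trigger to update the assessment on new information?",
    "conclude": "Were insights captured somewhere people will find them?",
}

def audit_risk_process(answers):
    """answers: dict mapping job-map step -> bool. Returns the failing steps."""
    return [step for step, ok in answers.items() if not ok]

# Hypothetical audit of an existing risk assessment process
answers = {step: True for step in JOB_MAP_CHECKS}
answers["monitor"] = answers["conclude"] = False
print(audit_risk_process(answers))  # -> ['monitor', 'conclude']
```

In my experience the later steps are where most processes fail the audit: assessments get defined, prepared, and executed on schedule, then never monitored, modified, or mined for learning.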

The Emotional and Social Dimensions

Risk management involves profound emotional and social jobs that traditional approaches ignore:

  • Confidence: Risk practitioners want to feel genuinely confident that significant threats have been identified and addressed, not just that procedures have been followed.
  • Intellectual Satisfaction: Quality professionals are attracted to rigorous analysis and robust reasoning—risk management should engage their analytical capabilities, not reduce them to form completion.
  • Professional Credibility: Risk managers want to be perceived as strategic enablers rather than bureaucratic obstacles—as trusted advisors who help organizations navigate uncertainty rather than create administrative burden.
  • Organizational Trust: Executive teams want assurance that their risk management capabilities are genuinely protective, not merely compliant.

What’s Underserved: The Innovation Opportunities

JTBD analysis reveals four critical areas where current risk management approaches systematically underserve user needs:

Risk Intelligence

Current systems document known risks but fail to develop early warning capabilities, pattern recognition across multiple contexts, or predictive insights about emerging threats. Organizations need risk management that builds institutional awareness, not just institutional documentation.

Decision Enablement

Risk assessments should create confidence for strategic decisions, enable rapid assessment of time-sensitive opportunities, and provide scenario planning that prepares organizations for multiple futures. Instead, most risk management creates decision paralysis through endless analysis.

Organizational Capability

Effective risk management should build risk literacy across all levels, create cultural resilience that enables honest risk conversations, and develop adaptive capacity to respond when risks materialize. Current approaches often centralize risk thinking rather than distributing risk capability.

Stakeholder Trust

Risk management should enable transparent communication about threats and mitigation strategies, demonstrate competence in risk anticipation, and provide regulatory confidence in organizational capabilities. Too often, risk management creates opacity rather than transparency.

Canvas representation of the JTBD

Moving Beyond Compliance Theater

The JTBD framework helps us address a key challenge in risk management: many organizations place excessive emphasis on “table stakes” such as regulatory compliance and documentation requirements, while neglecting vital aspects like intelligence, enablement, capability, and trust that contribute to genuine resilience.

This represents a classic case of process myopia—becoming so focused on risk management activities that we lose sight of the fundamental job those activities should accomplish. Organizations perfect their risk registers while remaining vulnerable to surprise failures, not because they lack risk management processes, but because those processes fail to serve the jobs users actually need accomplished.

Design Principles for User-Centered Risk Management

  • Context Over Templates: Begin risk analysis with clear understanding of decisions to be informed rather than forms to be completed.
  • Intelligence Over Documentation: Prioritize systems that build organizational awareness and pattern recognition rather than risk libraries.
  • Engagement Over Compliance: Create risk processes that attract rather than burden users, recognizing that effective risk management requires active intellectual participation.
  • Learning Over Closure: Structure risk activities to build institutional memory and capability rather than simply completing assessment cycles.
  • Integration Over Isolation: Ensure risk insights flow naturally into operational decisions rather than remaining in separate risk management systems.

Hiring Risk Management for Real Jobs

The most dangerous risk facing pharmaceutical organizations may be risk management systems that create false confidence while building no real capability. JTBD analysis reveals why: these systems optimize for regulatory approval rather than user needs, creating elaborate processes that nobody genuinely wants to “hire.”

True risk management begins with understanding what jobs users actually need accomplished: building confidence for difficult decisions, developing organizational intelligence about threats, creating resilience against surprise failures, and enabling rather than impeding business progress. Organizations that design risk management around these jobs will develop competitive advantages in an increasingly uncertain world.

The choice is clear: continue performing compliance theater, or build risk management systems that organizations genuinely want to hire. In a world where zemblanity—the tendency to encounter negative, foreseeable outcomes—threatens every quality system, only the latter approach offers genuine protection.

Risk management should not be something organizations endure. It should be something they actively seek because it makes them demonstrably better at navigating uncertainty and protecting what matters most.

Beyond “Knowing Is Half the Battle”

Dr. Valerie Mulholland’s recent exploration of the GI Joe Bias gets to the heart of a fundamental challenge in pharmaceutical quality management: the persistent belief that awareness of cognitive biases is sufficient to overcome them. I find Valerie’s analysis particularly compelling because it connects directly to the practical realities we face when implementing ICH Q9(R1)’s mandate to actively manage subjectivity in risk assessment.

Valerie’s observation that “awareness of a bias does little to prevent it from influencing our decisions” shows us that the GI Joe Bias underlies a critical gap between intellectual understanding and practical application—a gap that pharmaceutical organizations must bridge if they hope to achieve the risk-based decision-making excellence that ICH Q9(R1) demands.

The Expertise Paradox: Why Quality Professionals Are Particularly Vulnerable

Valerie correctly identifies that quality risk management facilitators are often better at spotting biases in others than in themselves. This observation connects to a deeper challenge I’ve previously explored: the fallacy of expert immunity. Our expertise in pharmaceutical quality systems creates cognitive patterns that simultaneously enable rapid, accurate technical judgments while increasing our vulnerability to specific biases.

The very mechanisms that make us effective quality professionals—pattern recognition, schema-based processing, heuristic shortcuts derived from base rate experiences—are the same cognitive tools that generate bias. When I conduct investigations or facilitate risk assessments, my extensive experience with similar events creates expectations and assumptions that can blind me to novel failure modes or unexpected causal relationships. This isn’t a character flaw; it’s an inherent part of how expertise develops and operates.

Valerie’s emphasis on the need for trained facilitators in high-formality QRM activities reflects this reality. External facilitation isn’t just about process management—it’s about introducing cognitive diversity and bias detection capabilities that internal teams, no matter how experienced, cannot provide for themselves. The facilitator serves as a structured intervention against the GI Joe fallacy, embodying the systematic approaches that awareness alone cannot deliver.

From Awareness to Architecture: Building Bias-Resistant Quality Systems

The critical insight from both Valerie’s work and my writing about structured hypothesis formation is that effective bias management requires architectural solutions, not individual willpower. ICH Q9(R1)’s introduction of the “Managing and Minimizing Subjectivity” section represents recognition that regulatory compliance requires systematic approaches to cognitive bias management.

In my post on reducing subjectivity in quality risk management, I identified four strategies that directly address the limitations Valerie highlights about the GI Joe Bias:

  1. Leveraging Knowledge Management: Rather than relying on individual awareness, effective bias management requires systematic capture and application of objective information. When risk assessors can access structured historical data, supplier performance metrics, and process capability studies, they’re less dependent on potentially biased recollections or impressions.
  2. Good Risk Questions: The formulation of risk questions represents a critical intervention point. Well-crafted questions can anchor assessments in specific, measurable terms rather than vague generalizations that invite subjective interpretation. Instead of asking “What are the risks to product quality?”, effective risk questions might ask “What are the potential causes of out-of-specification dissolution results for Product X in the next 6 months based on the last three years of data?”
  3. Cross-Functional Teams: Valerie’s observation that we’re better at spotting biases in others translates directly into team composition strategies. Diverse, cross-functional teams naturally create the external perspective that individual bias recognition cannot provide. The manufacturing engineer, quality analyst, and regulatory specialist bring different cognitive frameworks that can identify blind spots in each other’s reasoning.
  4. Structured Decision-Making Processes: The tools Valerie mentions—PHA, FMEA, Ishikawa, bow-tie analysis—serve as external cognitive scaffolding that guides thinking through systematic pathways rather than relying on intuitive shortcuts that may be biased.

The Formality Framework: When and How to Escalate Bias Management

One of the most valuable aspects of ICH Q9(R1) is its introduction of the formality concept—the idea that different situations require different levels of systematic intervention. Valerie’s article implicitly addresses this by noting that “high formality QRM activities” require trained facilitators. This suggests a graduated approach to bias management that scales intervention intensity with decision importance.

This formality framework should build bias management in explicitly, giving organizations a way to determine when, and how intensively, to apply bias mitigation strategies (a minimal decision-rule sketch follows the list below):

  • Low Formality Situations: Routine decisions with well-understood parameters, limited stakeholders, and reversible outcomes. Basic bias awareness training and standardized checklists may be sufficient.
  • Medium Formality Situations: Decisions involving moderate complexity, uncertainty, or impact. These require cross-functional input, structured decision tools, and documentation of rationales.
  • High Formality Situations: Complex, high-stakes decisions with significant uncertainty, multiple conflicting objectives, or diverse stakeholders. These demand external facilitation, systematic bias checks, and formal documentation of how potential biases were addressed.
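
Here is the graduated approach as a decision-rule sketch; the attribute names, scoring, and cut-offs are all hypothetical placeholders that an organization would calibrate for itself:

```python
def formality_level(impact, uncertainty, reversible, stakeholder_count):
    """Map decision attributes (impact and uncertainty rated 1-5) to a
    formality tier with matching bias-management interventions."""
    score = (impact + uncertainty
             + (0 if reversible else 3)
             + min(stakeholder_count, 5))
    if score <= 6:
        return "low", ["bias awareness checklist"]
    if score <= 11:
        return "medium", ["cross-functional team", "structured tool (e.g., FMEA)",
                          "documented rationale"]
    return "high", ["external facilitator", "systematic bias checks",
                    "formal documentation of bias handling"]

# Hypothetical: high-impact, uncertain, irreversible decision, many stakeholders
print(formality_level(impact=5, uncertainty=4, reversible=False, stakeholder_count=6))
```

The value of writing the rule down, even crudely, is that escalation stops depending on whoever happens to be in the room feeling confident that awareness alone will carry them through.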

This framework acknowledges that the GI Joe fallacy is most dangerous in high-formality situations where the stakes are highest and the cognitive demands greatest. It’s precisely in these contexts that our confidence in our ability to overcome bias through awareness becomes most problematic.

The Cultural Dimension: Creating Environments That Support Bias Recognition

Valerie’s emphasis on fostering humility by encouraging teams to acknowledge that “no one is immune to bias, even the most experienced professionals” connects to my observations about building expertise in quality organizations. Creating cultures that can effectively manage subjectivity requires more than tools and processes; it requires psychological safety that allows bias recognition without professional threat.

I’ve noted in past posts that organizations advancing beyond basic awareness levels demonstrate “systematic recognition of cognitive bias risks” with growing understanding that “human judgment limitations can affect risk assessment quality.” However, the transition from awareness to systematic application requires cultural changes that make bias discussion routine rather than threatening.

This cultural dimension becomes particularly important when we consider the ironic processing effects that Valerie references. When organizations create environments where acknowledging bias is seen as admitting incompetence, they inadvertently increase bias through suppression attempts. Teams that must appear confident and decisive may unconsciously avoid bias recognition because it threatens their professional identity.

The solution is creating cultures that frame bias recognition as professional competence rather than limitation. Just as we expect quality professionals to understand statistical process control or regulatory requirements, we should expect them to understand and systematically address their cognitive limitations.

Practical Implementation: Moving Beyond the GI Joe Fallacy

Building on Valerie’s recommendations for structured tools and systematic approaches, here are some specific implementation strategies that organizations can adopt to move beyond bias awareness toward bias management:

  • Bias Pre-mortems: Before conducting risk assessments, teams explicitly discuss what biases might affect their analysis and establish specific countermeasures. This makes bias consideration routine rather than reactive.
  • Devil’s Advocate Protocols: Systematic assignment of team members to challenge prevailing assumptions and identify information that contradicts emerging conclusions.
  • Perspective-Taking Requirements: Formal requirements to consider how different stakeholders (patients, regulators, operators) might view risks differently from the assessment team.
  • Bias Audit Trails: Documentation requirements that capture not just what decisions were made, but how potential biases were recognized and addressed during the decision-making process.
  • External Review Requirements: For high-formality decisions, mandatory review by individuals who weren’t involved in the initial assessment and can provide fresh perspectives.

These interventions acknowledge that bias management is not about eliminating human judgment—it’s about scaffolding human judgment with systematic processes that compensate for known cognitive limitations.

The Broader Implications: Subjectivity as Systemic Challenge

Valerie’s analysis of the GI Joe Bias connects to broader themes in my work about the effectiveness paradox and the challenges of building rigorous quality systems in an age of pop psychology. The pharmaceutical industry’s tendency to adopt appealing frameworks without rigorous evaluation extends to bias management strategies. Organizations may implement “bias training” or “awareness programs” that create the illusion of progress while failing to address the systematic changes needed for genuine improvement.

The GI Joe Bias serves as a perfect example of this challenge. It’s tempting to believe that naming the bias—recognizing that awareness isn’t enough—somehow protects us from falling into the awareness trap. But the bias is self-referential: knowing about the GI Joe Bias doesn’t automatically prevent us from succumbing to it when implementing bias management strategies.

This is why Valerie’s emphasis on systematic interventions rather than individual awareness is so crucial. Effective bias management requires changing the decision-making environment, not just the decision-makers’ knowledge. It requires building systems, not slogans.

A Call for Systematic Excellence in Bias Management

Valerie’s exploration of the GI Joe Bias provides a crucial call for advancing pharmaceutical quality management beyond the illusion that awareness equals capability. Her work, combined with ICH Q9(R1)’s explicit recognition of subjectivity challenges, creates an opportunity for the industry to develop more sophisticated approaches to cognitive bias management.

The path forward requires acknowledging that bias management is a core competency for quality professionals, equivalent to understanding analytical method validation or process characterization. It requires systematic approaches that scaffold human judgment rather than attempting to eliminate it. Most importantly, it requires cultures that view bias recognition as professional strength rather than weakness.

As I continue to build frameworks for reducing subjectivity in quality risk management and developing structured approaches to decision-making, Valerie’s insights about the limitations of awareness provide essential grounding. The GI Joe Bias reminds us that knowing is not half the battle—it’s barely the beginning.

The real battle lies in creating pharmaceutical quality systems that systematically compensate for human cognitive limitations while leveraging human expertise and judgment. That battle is won not through individual awareness or good intentions, but through systematic excellence in bias management architecture.

What structured approaches has your organization implemented to move beyond bias awareness toward systematic bias management? Share your experiences and challenges as we work together to advance the maturity of risk management practices in our industry.


Meet Valerie Mulholland

Dr. Valerie Mulholland is transforming how our industry thinks about quality risk management. As CEO and Principal Consultant at GMP Services in Ireland, Valerie brings over 25 years of hands-on experience auditing and consulting across biopharmaceutical, pharmaceutical, medical device, and blood transfusion industries throughout the EU, US, and Mexico.

But what truly sets Valerie apart is her unique combination of practical expertise and cutting-edge research. She recently earned her PhD from TU Dublin’s Pharmaceutical Regulatory Science Team, focusing on “Effective Risk-Based Decision Making in Quality Risk Management”. Her groundbreaking research has produced 13 academic papers, with four publications specifically developed to support ICH’s work—research that’s now incorporated into the official ICH Q9(R1) training materials. This isn’t theoretical work gathering dust on academic shelves; it’s research that’s actively shaping global regulatory guidance.

Why Risk Revolution Deserves Your Attention

The Risk Revolution podcast, co-hosted by Valerie alongside Nuala Calnan (25-year pharmaceutical veteran and Arnold F. Graves Scholar) and Dr. Lori Richter (Director of Risk Management at Ultragenyx with 21+ years industry experience), represents something unique in pharmaceutical podcasting. This isn’t your typical regulatory update show—it’s a monthly masterclass in advancing risk management maturity.

In an industry where staying current isn’t optional—it’s essential for patient safety—Risk Revolution offers the kind of continuing education that actually advances your professional capabilities. These aren’t recycled conference presentations; they’re conversations with the people shaping our industry’s future.

The Expertise Crisis: Why AI’s War on Entry-Level Jobs Threatens Quality Excellence

As pharmaceutical and biotech organizations rush to harness artificial intelligence to eliminate “inefficient” entry-level positions, we are at risk of creating a crisis that threatens the very foundation of quality expertise. The Harvard Business Review’s recent analysis of AI’s impact on entry-level jobs reads like a prophecy of organizational doom—one that quality leaders should heed before it’s too late.

Research from Stanford indicates that there has been a 13% decline in entry-level job opportunities for workers aged 22 to 25 since the widespread adoption of generative AI. The study shows that 50-60% of typical junior tasks—such as report drafting, research synthesis, data cleaning, and scheduling—can now be performed by AI. For high-quality organizations already facing expertise gaps, this trend signals a potential self-destructive path rather than increased efficiency.

Equally concerning, automation is leading to the phasing out of some traditional entry-level professional tasks. When I started in the field, newcomers gained experience through tasks like batch record review and applying good documentation practices to protocols. However, with the introduction of electronic batch records and electronic validation management, these tasks have largely disappeared. AI is expected to accelerate this trend even further.

Everyone should go and read “The Perils of Using AI to Replace Entry-Level Jobs” by Amy C. Edmondson and Tomas Chamorro-Premuzic and then come back and read this post.

The Apprenticeship Dividend: What We Lose When We Skip the Journey

Every expert in pharmaceutical quality began somewhere. They learned to read batch records, investigated their first deviations, struggled through their first CAPA investigations, and gradually developed the pattern recognition that distinguishes competent from exceptional quality professionals. This journey, what Edmondson and Chamorro-Premuzic call the “apprenticeship dividend,” cannot be replicated by AI or compressed into senior-level training programs.

Consider commissioning, qualification, and validation (CQV) work in biotech manufacturing. Junior engineers traditionally started by documenting Installation Qualification protocols, learning to recognize when equipment specifications align with user requirements. They progressed to Operational Qualification, developing understanding of how systems behave under various conditions. Only after this foundation could they effectively design Performance Qualification strategies that demonstrate process capability.

When organizations eliminate these entry-level CQV roles in favor of AI-generated documentation and senior engineers managing multiple systems simultaneously, they create what appears to be efficiency. In reality, they’ve severed the pipeline that transforms technical contributors into systems thinkers capable of managing complex manufacturing operations.

The Expertise Pipeline: Building Quality Gardeners

As I’ve written previously about building competency frameworks for quality professionals, true expertise requires integration of technical knowledge, methodological skills, social capabilities, and self-management abilities. This integration occurs through sustained practice, mentorship, and gradual assumption of responsibility—precisely what entry-level positions provide.

The traditional path from Quality specialist to Quality Manager to Quality Director illustrates this progression:

Foundation Level: Learning to execute quality methods, understand requirements, and recognize when results fall outside acceptance criteria. Basic deviation investigation and CAPA support.

Intermediate Level: Taking ownership of requirement gathering, leading routine investigations, participating in supplier audits, and beginning to see connections between different quality systems.

Advanced Level: Designing audit activities, facilitating cross-functional investigations, mentoring junior staff, and contributing to strategic quality initiatives.

Leadership Level: Building quality cultures, designing organizational capabilities, and creating systems that enable others to excel.

Each level builds upon the previous, creating what we might call “quality gardeners”—professionals who nurture quality systems as living ecosystems rather than enforcing compliance through rigid oversight. Skip the foundation levels, and you cannot develop the sophisticated understanding required for advanced practice.

The False Economy of AI Substitution

Organizations defending entry-level job elimination often point to cost savings and “efficiency gains.” This thinking reflects a fundamental misunderstanding of how expertise develops and quality systems function. Consider risk management in biotech manufacturing—a domain where pattern recognition and contextual judgment are essential.

A senior risk management professional reviewing a contamination event can quickly identify potential failure modes, assess likelihood and severity, and design effective mitigation strategies. This capability developed through years of investigating routine deviations, participating in CAPA teams, and learning to distinguish significant risks from minor variations.

When AI handles initial risk assessments and senior professionals review only the outputs, we create a dangerous gap. The senior professional lacks the deep familiarity with routine variations that enables recognition of truly significant deviations. Meanwhile, no one is developing the foundational expertise needed to replace retiring experts.

The result is what is called expertise hollowing: organizations that appear capable on the surface but lack the deep competency required to handle complex challenges or adapt to changing conditions.

Building Expertise in a Quality Organization

Creating robust expertise development requires intentional design that recognizes both the value of human development and the capabilities of AI tools. Rather than eliminating entry-level positions, quality organizations should redesign them to maximize learning value while leveraging AI appropriately.

Structured Apprenticeship Programs

Quality organizations should implement formal apprenticeship programs that combine academic learning with progressive practical responsibility. These programs should span 2-3 years and include:

Year 1: Foundation Building

  • Basic GMP principles and quality systems overview
  • Hands-on experience with routine quality operations
  • Mentorship from experienced quality professionals
  • Participation in investigations under supervision

Year 2: Skill Development

  • Specialized training in areas like CQV, risk management, or supplier quality
  • Leading routine activities with oversight
  • Cross-functional project participation
  • Beginning to train newer apprentices

Year 3: Integration and Leadership

  • Independent project leadership
  • Mentoring responsibilities
  • Contributing to strategic quality initiatives
  • Preparation for advanced roles

As I evaluate the organization I am building, this is a critical part of the vision.

Mentorship as Core Competency

Every senior quality professional should be expected to mentor junior colleagues as a core job responsibility, not an additional burden. This requires:

  • Formal Mentorship Training: Teaching experienced professionals how to transfer tacit knowledge, provide effective feedback, and create learning opportunities.
  • Protected Time: Ensuring mentors have dedicated time for development activities, not just “additional duties as assigned.”
  • Measurement Systems: Tracking mentorship effectiveness through apprentice progression, retention rates, and long-term career development.
  • Recognition Programs: Rewarding excellent mentorship as a valued contribution to organizational capability.

Progressive Responsibility Models

Entry-level roles should be designed with clear progression pathways that gradually increase responsibility and complexity:

CQV Progression Example:

  • CQV Technician: Executing test protocols, documenting results, supporting commissioning activities
  • CQV Specialist: Writing protocols, leading qualification activities, interfacing with vendors
  • CQV Engineer: Designing qualification strategies, managing complex projects, training others
  • CQV Manager: Building organizational CQV capabilities, strategic planning, external representation

Risk Management Progression:

  • Risk Analyst: Data collection, basic risk identification, supporting formal assessments
  • Risk Specialist: Facilitating risk assessments, developing mitigation strategies, training stakeholders
  • Risk Manager: Designing risk management systems, building organizational capabilities, strategic oversight

AI as Learning Accelerator, Not Replacement

Rather than replacing entry-level workers, AI should be positioned as a learning accelerator that enables junior professionals to handle more complex work earlier in their careers:

  • Enhanced Analysis Capabilities: AI can help junior professionals identify patterns in large datasets, enabling them to focus on interpretation and decision-making rather than data compilation.
  • Simulation and Modeling: AI-powered simulations can provide safe environments for junior professionals to practice complex scenarios without real-world consequences.
  • Knowledge Management: AI can help junior professionals access relevant historical examples, best practices, and regulatory guidance more efficiently.
  • Quality Control: AI can help ensure that junior professionals’ work meets standards while they’re developing expertise, providing a safety net during the learning process.

The Cost of Expertise Shortcuts

Organizations that eliminate entry-level positions in pursuit of short-term efficiency gains will face predictable long-term consequences:

  • Expertise Gaps: As senior professionals retire or move to other organizations, there will be no one prepared to replace them.
  • Reduced Innovation: Innovation often comes from fresh perspectives questioning established practices—precisely what entry-level employees provide.
  • Cultural Degradation: Quality cultures are maintained through socialization and shared learning experiences that occur naturally in diverse, multi-level teams.
  • Risk Blindness: Without the deep familiarity that comes from hands-on experience, organizations become vulnerable to risks they cannot recognize or understand.
  • Competitive Disadvantage: Organizations with strong expertise development programs will attract and retain top talent while building superior capabilities.

Choosing Investment Over Extraction

The decision to eliminate entry-level positions represents a choice between short-term cost extraction and long-term capability investment. For quality organizations, this choice is particularly stark because our work depends fundamentally on human judgment, pattern recognition, and the ability to adapt to novel situations.

AI should augment human capability, not replace the human development process. The organizations that thrive in the next decade will be those that recognize expertise development as a core competency and invest accordingly. They will build “quality gardeners” who can nurture adaptive, resilient quality systems rather than simply enforce compliance.

The expertise crisis is not inevitable—it’s a choice. Quality leaders must choose wisely, before the cost of that choice becomes irreversible.