Beyond Malfunction Mindset: Normal Work, Adaptive Quality, and the Future of Pharmaceutical Problem-Solving

Beyond the Shadow of Failure

Problem-solving is too often shaped by the assumption that the system is perfectly understood and fully specified. If something goes wrong—a deviation, a batch out-of-spec, or a contamination event—our approach is to dissect what “failed” and fix that flaw, believing this will restore order. This way of thinking, which I call the malfunction mindset, is as ingrained as it is incomplete. It assumes that successful outcomes are the default, that work always happens as written in SOPs, and that only failure deserves our scrutiny.

But here’s the paradox: most of the time, our highly complex manufacturing environments actually succeed—often under imperfect, shifting, and not fully understood conditions. If we only study what failed, and never question how our systems achieve their many daily successes, we miss the real nature of pharmaceutical quality: it is not the absence of failure, but the presence of robust, adaptive work. Taking this broader, more nuanced perspective is not just an academic exercise—it’s essential for building resilient operations that truly protect patients, products, and our organizations.

Drawing from my thinking through zemblanity (the predictable but often overlooked negative outcomes of well-intentioned quality fixes), the effectiveness paradox (why “nothing bad happened” isn’t proof your quality system works), and the persistent gap between work-as-imagined and work-as-done, this post explores why the malfunction mindset persists, how it distorts investigations, and what future-ready quality management should look like.

The Allure—and Limits—of the Failure Model

Why do we reflexively look for broken parts and single points of failure? It is, as Sidney Dekker has argued, both comforting and defensible. When something goes wrong, you can always point to a failed sensor, a missed checklist, or an operator error. This approach—introducing another level of documentation, another check, another layer of review—offers a sense of closure and regulatory safety. After all, as long as you can demonstrate that you “fixed” something tangible, you’ve fulfilled investigational due diligence.

Yet this fails to account for how quality is actually produced—or lost—in the real world. The malfunction model treats systems like complicated machines: fix the broken gear, oil the creaky hinge, and the machine runs smoothly again. But, as Dekker reminds us in Drift Into Failure, such linear thinking ignores the drift, adaptation, and emergent complexity that characterize real manufacturing environments. The truth is, in complex adaptive systems like pharmaceutical manufacturing, it often takes more than one “error” for failure to manifest. The system absorbs small deviations continuously, adapting and flexing until, sometimes, a boundary is crossed and a problem surfaces.

W. Edwards Deming’s wisdom rings truer than ever: “Most problems result from the system itself, not from individual faults.” A sustainable approach to quality is one that designs for success—and that means understanding the system-wide properties enabling robust performance, not just eliminating isolated malfunctions.

Procedural Fundamentalism: The Work-as-Imagined Trap

One of the least examined, yet most impactful, contributors to the malfunction mindset is procedural fundamentalism—the belief that the written procedure is both a complete specification and an accurate description of work. This feels rigorous and provides compliance comfort, but it is a profound misreading of how work actually happens in pharmaceutical manufacturing.

Work-as-imagined, as elucidated by Erik Hollnagel and others, represents an abstraction: it is how distant architects of SOPs visualize the “correct” execution of a process. Yet, real-world conditions—resource shortages, unexpected interruptions, mismatched raw materials, shifting priorities—force adaptation. Operators, supervisors, and Quality professionals do not simply “follow the recipe”: they interpret, improvise, and—crucially—adjust on the fly.

When we treat procedures as authoritative descriptions of reality, we create the proxy problem: our investigations compare real operations against an imagined baseline that never fully existed. Deviations become automatically framed as problem points, and success is redefined as rigid adherence, regardless of context or outcome.

Complexity, Performance Variability, and Real Success

So, how do pharmaceutical operations succeed so reliably despite the ever-present complexity and variability of daily work?

The answer lies in embracing performance variability as a feature of robust systems, not a flaw. In high-reliability environments—from aviation to medicine to pharmaceutical manufacturing—success is routinely achieved not by demanding strict compliance, but by cultivating adaptive capacity.

Consider environmental monitoring in a sterile suite: The procedure may specify precise times and locations, but a seasoned operator, noticing shifts in people flow or equipment usage, might proactively sample a high-risk area more frequently. This adaptation—not captured in work-as-imagined—actually strengthens data integrity. Yet, traditional metrics would treat this as a procedural deviation.

This is the paradox of the malfunction mindset: in seeking to eliminate all performance variability, we risk undermining precisely those adaptive behaviors that produce reliable quality under uncertainty.

Why the Malfunction Mindset Persists: Cognitive Comfort and Regulatory Reinforcement

Why do organizations continue to privilege the malfunction mindset, even as evidence accumulates of its limits? The answer is both psychological and cultural.

Component breakdown thinking is psychologically satisfying—it offers a clear problem, a specific cause, and a direct fix. For regulatory agencies, it is easy to measure and audit: did the deviation investigation determine the root cause, did the CAPA address it, does the documentation support this narrative? Anything that doesn’t fit this model is hard to defend in audits or inspections.

Yet this approach offers, at best, a partial diagnosis and, at worst, the illusion of control. It encourages organizations to catalog deviations while blindly accepting a much broader universe of unexamined daily adaptations that actually determine system robustness.

Complexity Science and the Art of Organizational Success

To move toward a more accurate—and ultimately more effective—model of quality, pharmaceutical leaders must integrate the insights of complexity science. Drawing from the work of Stuart Kauffman and others at the Santa Fe Institute, we understand that the highest-performing systems operate not at the edge of rigid order, but at the “edge of chaos,” where structure is balanced with adaptability.

In these systems, success and failure both arise from emergent properties—the patterns of interaction between people, procedures, equipment, and environment. The most meaningful interventions, therefore, address how the parts interact, not just how each part functions in isolation.

This explains why traditional root cause analysis, focused on the parts, often fails to produce lasting improvements; it cannot account for outcomes that emerge only from the collective dynamics of the system as a whole.

Investigating for Learning: The Take-the-Best Heuristic

A key innovation needed in pharmaceutical investigations is a shift to what Hollnagel calls Safety-II thinking: focusing on how things go right as well as why they occasionally go wrong.

Here, the take-the-best heuristic becomes crucial. Instead of compiling lists of all deviations, ask: Among all contributing factors, which one, if addressed, would have the most powerful positive impact on future outcomes, while preserving adaptive capacity? This approach ensures investigations generate actionable, meaningful learning, rather than feeding the endless paper chase of “compliance theater.”

Building Systems That Support Adaptive Capability

Taking complexity and adaptive performance seriously requires practical changes to how we design procedures, train, oversee, and measure quality.

  • Procedure Design: Make explicit the distinction between objectives and methods. Procedures should articulate clear quality goals, specify necessary constraints, but deliberately enable workers to choose methods within those boundaries when faced with new conditions.
  • Training: Move beyond procedural compliance. Develop adaptive expertise in your staff, so they can interpret and adjust sensibly—understanding not just “what” to do, but “why” it matters in the bigger system.
  • Oversight and Monitoring: Audit for adaptive capacity. Don’t just track “compliance” but also whether workers have the resources and knowledge to adapt safely and intelligently. Positive performance variability (smart adaptations) should be recognized and studied.
  • Quality System Design: Build systematic learning from both success and failure. Examine ordinary operations to discern how adaptive mechanisms work, and protect these capabilities rather than squashing them in the name of “control.”

Leadership and Systems Thinking

Realizing this vision depends on a transformation in leadership mindset—from one seeking control to one enabling adaptive capacity. Deming’s profound knowledge and the principles of complexity leadership remind us that what matters is not enforcing ever-stricter compliance, but cultivating an organizational context where smart adaptation and genuine learning become standard.

Leadership must:

  • Distinguish between complicated and complex: Apply detailed procedures to the former (e.g., calibration), but support flexible, principles-based management for the latter.
  • Tolerate appropriate uncertainty: Not every problem has a clear, single answer. Creating psychological safety is essential for learning and adaptation during ambiguity.
  • Develop learning organizations: Invest in deep understanding of operations, foster regular study of work-as-done, and celebrate insights from both expected and unexpected sources.

Practical Strategies for Implementation

Turning these insights into institutional practice involves a systematic, research-inspired approach:

  • Start procedure development with observation of real work before specifying methods. Small scale and mock exercises are critical.
  • Employ cognitive apprenticeship models in training, so that experience, reasoning under uncertainty, and systems thinking become core competencies.
  • Begin investigations with appreciative inquiry—map out how the system usually works, not just how it trips up.
  • Measure leading indicators (capacity, information flow, adaptability) not just lagging ones (failures, deviations).
  • Create closed feedback loops for corrective actions—insisting every intervention be evaluated for impact on both compliance and adaptive capacity.

Scientific Quality Management and Adaptive Systems: No Contradiction

The tension between rigorous scientific quality management (QbD, process validation, risk management frameworks) and support for adaptation is a false dilemma. Indeed, genuine scientific quality management starts with humility: the recognition that our understanding of complex systems is always partial, our controls imperfect, and our frameworks provisional.

A falsifiable quality framework embeds learning and adaptation at its core—treating deviations as opportunities to test and refine models, rather than simply checkboxes to complete.

The best organizations are not those that experience the fewest deviations, but those that learn fastest from both expected and unexpected events, and apply this knowledge to strengthen both system structure and adaptive capacity.

Embracing Normal Work: Closing the Gap

Normal pharmaceutical manufacturing is not the story of perfect procedural compliance; it’s the story of people, working together to achieve quality goals under diverse, unpredictable, and evolving conditions. This is both more challenging—and more rewarding—than any plan prescribed solely by SOPs.

To truly move the needle on pharmaceutical quality, organizations must:

  • Embrace performance variability as evidence of adaptive capacity, not just risk.
  • Investigate for learning, not blame; study success, not just failure.
  • Design systems to support both structure and flexible adaptation—never sacrificing one entirely for the other.
  • Cultivate leadership that values humility, systems thinking, and experimental learning, creating a culture comfortable with complexity.

This approach will not be easy. It means questioning decades of compliance custom, organizational habit, and intellectual ease. But the payoff is immense: more resilient operations, fewer catastrophic surprises, and, above all, improved safety and efficacy for the patients who depend on our products.

The challenge—and the opportunity—facing pharmaceutical quality management is to evolve beyond compliance theater and malfunction thinking into a new era of resilience and organizational learning. Success lies not in the illusory comfort of perfectly executed procedures, but in the everyday adaptations, intelligent improvisation, and system-level capabilities that make those successes possible.

The call to action is clear: Investigate not just to explain what failed, but to understand how, and why, things so often go right. Protect, nurture, and enhance the adaptive capacities of your organization. In doing so, pharmaceutical quality can finally become more than an after-the-fact audit; it will become the creative, resilient capability that patients, regulators, and organizations genuinely want to hire.

You Gotta Have Heart: Combating Human Error

The persistent attribution of human error as a root cause of deviations reveals far more about systemic weaknesses than individual failings. The label often masks deeper organizational, procedural, and cultural flaws. Like cracks in a foundation, recurring human errors signal where quality management systems (QMS) fail to account for the complexities of human cognition, communication, and operational realities.

The Myth of Human Error as a Root Cause

Regulatory agencies increasingly reject “human error” as an acceptable conclusion in deviation investigations. This shift recognizes that human actions occur within a web of systemic influences. A technician’s missed documentation step or a formulation error rarely stems from carelessness alone; it emerges from the organizational, procedural, and cultural conditions surrounding the work.

The aviation industry’s “Tower of Babel” problem—where siloed teams develop isolated communication loops—parallels pharmaceutical manufacturing. The Quality Unit may prioritize regulatory compliance, while production focuses on throughput, creating disjointed interpretations of “quality.” These disconnects manifest as errors when cross-functional risks go unaddressed.

Cognitive Architecture and Error Propagation

Human cognition operates under predictable constraints. Attentional biases, memory limitations, and heuristic decision-making—while evolutionarily advantageous—create vulnerabilities in GMP environments. For example:

  • Attentional tunneling: An operator hyper-focused on solving an equipment jam may overlook a temperature excursion alert.
  • Procedural drift: Subtle deviations from written protocols accumulate over time as workers optimize for perceived efficiency.
  • Complacency cycles: Over-familiarity with routine tasks reduces vigilance, particularly during night shifts or prolonged operations.

These cognitive patterns aren’t failures but features of human neurobiology. Effective QMS design anticipates them through:

  1. Error-proofing: Automated checkpoints that detect deviations before critical process stages
  2. Cognitive load management: Procedures (including batch records) tailored to cognitive load principles with decision-support prompts
  3. Resilience engineering: Simulations that train teams to recognize and recover from near-misses

Strategies for Reframing Human Error Analysis

Conduct Cognitive Autopsies

Move beyond 5-Whys to adopt human factors analysis frameworks:

  • Human Error Assessment and Reduction Technique (HEART): Quantifies the likelihood of specific error types based on task characteristics
  • Critical Action and Decision (CAD) timelines: Maps decision points where system defenses failed

For example, a labeling mix-up might reveal:

  • Task factors: Nearly identical packaging for two products (29% contribution to error likelihood)
  • Environmental factors: Poor lighting in labeling area (18%)
  • Organizational factors: Inadequate change control when adding new SKUs (53%)

Redesign for Intuitive Use

Redesigning work for intuitive use requires multilayered approaches grounded in how human brains actually work. At the foundation lies procedural chunking, an evidence-based method that restructures complex standard operating procedures (SOPs) into digestible cognitive units aligned with working memory limitations. This approach segments multiphase processes like aseptic filling into discrete verification checkpoints, reducing cognitive overload while maintaining procedural integrity through sequenced validation gates. By mirroring the brain’s natural pattern recognition capabilities, chunked protocols demonstrate significantly higher compliance rates compared to traditional monolithic SOP formats.
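As a rough illustration of what procedural chunking can look like when a multiphase process is broken into cognitive units with verification gates, here is a minimal sketch; the chunk names, steps, and gates below are hypothetical, not taken from a real SOP.

```python
# Illustrative only: one way to represent a "chunked" SOP, where each
# cognitive unit ends in a verification checkpoint (gate) before the
# next phase can start.
aseptic_filling_sop = [
    {
        "chunk": "Line clearance",
        "steps": ["Remove previous-lot materials", "Verify room status"],
        "checkpoint": "Second person confirms line clearance",
    },
    {
        "chunk": "Component staging",
        "steps": ["Stage vials and stoppers", "Reconcile quantities"],
        "checkpoint": "Quantities match batch record before proceeding",
    },
    {
        "chunk": "Filling setup",
        "steps": ["Install filling needles", "Set target fill volume"],
        "checkpoint": "Setup parameters verified against master record",
    },
]

for unit in aseptic_filling_sop:
    # Keep each chunk small enough to hold in working memory
    assert len(unit["steps"]) <= 4, "keep each chunk within working-memory limits"
    print(f"{unit['chunk']} -> gate: {unit['checkpoint']}")
```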

Complementing this cognitive scaffolding, mistake-proof redesigns create inherent error detection mechanisms.

To sustain these engineered safeguards, progressive facilities implement peer-to-peer audit protocols during critical operations and transition periods.

Leverage Error Data Analytics

The integration of data analytics into organizational processes has emerged as a critical strategy for minimizing human error, enhancing accuracy, and driving informed decision-making. By leveraging advanced computational techniques, automation, and machine learning, data analytics addresses systemic vulnerabilities.
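A modest example of what this looks like in practice is a Pareto-style ranking of the error-producing conditions tagged on deviation records, so improvement effort goes where it pays off most. The records and tags below are hypothetical, meant only to show the shape of the analysis.

```python
from collections import Counter

# Hypothetical deviation records, each tagged with the error-producing
# conditions identified during investigation.
deviations = [
    {"id": "DEV-101", "epcs": ["time shortage", "ambiguous SOP"]},
    {"id": "DEV-102", "epcs": ["time shortage", "poor feedback"]},
    {"id": "DEV-103", "epcs": ["ambiguous SOP"]},
    {"id": "DEV-104", "epcs": ["time shortage"]},
]

# Pareto-style ranking: which conditions show up most often across deviations?
counts = Counter(epc for record in deviations for epc in record["epcs"])
for condition, n in counts.most_common():
    print(f"{condition}: {n} deviations")
```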

Human Error Assessment and Reduction Technique (HEART): A Systematic Framework for Error Mitigation

Benefits of the Human Error Assessment and Reduction Technique (HEART)

1. Simplicity and Speed: HEART is designed to be straightforward and does not require complex tools, software, or large datasets. This makes it accessible to organizations without extensive human factors expertise and allows for rapid assessments. The method is easy to understand and apply, even in time-constrained or resource-limited environments.

2. Flexibility and Broad Applicability: HEART can be used across a wide range of industries—including nuclear, healthcare, aviation, rail, process industries, and engineering—due to its generic task classification and adaptability to different operational contexts. It is suitable for both routine and complex tasks.

3. Systematic Identification of Error Influences: The technique systematically identifies and quantifies Error Producing Conditions (EPCs) that increase the likelihood of human error. This structured approach helps organizations recognize the specific factors—such as time pressure, distractions, or poor procedures—that most affect reliability.

4. Quantitative Error Prediction: HEART provides a numerical estimate of human error probability for specific tasks, which can be incorporated into broader risk assessments, safety cases, or design reviews. This quantification supports evidence-based decision-making and prioritization of interventions.

5. Actionable Risk Reduction: By highlighting which EPCs most contribute to error, HEART offers direct guidance on where to focus improvement efforts—whether through engineering redesign, training, procedural changes, or automation. This can lead to reduced error rates, improved safety, fewer incidents, and increased productivity.

6. Supports Accident Investigation and Design: HEART is not only a predictive tool but also valuable in investigating incidents and guiding the design of safer systems and procedures. It helps clarify how and why errors occurred, supporting root cause analysis and preventive action planning.

7. Encourages Safety and Quality Culture and Awareness: Regular use of HEART increases awareness of human error risks and the importance of control measures among staff and management, fostering a proactive culture.

When Is HEART Best Used?

  • Risk Assessment for Critical Tasks: When evaluating tasks where human error could have severe consequences (e.g., operating nuclear control systems, administering medication, critical maintenance), HEART helps quantify and reduce those risks.
  • Design and Review of Procedures: During the design or revision of operational procedures, HEART can identify steps most vulnerable to error and suggest targeted improvements.
  • Incident Investigation: After a failure or near-miss, HEART helps reconstruct the event, identify contributing EPCs, and recommend changes to prevent recurrence.
  • Training and Competence Assessment: HEART can inform training programs by highlighting the conditions and tasks where errors are most likely, allowing for focused skill development and awareness.
  • Resource-Limited or Fast-Paced Environments: Its simplicity and speed make HEART ideal for organizations needing quick, reliable human error assessments without extensive resources or data.

Generic Task Types (GTTs): Establishing Baselines

HEART classifies human activities into nine Generic Task Types (GTTs), each with a predefined nominal human error probability (NHEP) derived from decades of industrial incident data:

| GTT Code | Task Description | Nominal HEP (Range) |
| --- | --- | --- |
| A | Complex, novel tasks requiring problem-solving | 0.55 (0.35–0.97) |
| B | Shifting attention between multiple systems | 0.26 (0.14–0.42) |
| C | High-skill tasks under time constraints | 0.16 (0.12–0.28) |
| D | Rule-based diagnostics under stress | 0.09 (0.06–0.13) |
| E | Routine procedural tasks | 0.02 (0.007–0.045) |
| F | Restoring system states | 0.003 (0.0008–0.007) |
| G | Highly practiced routine operations | 0.0004 (0.00008–0.009) |
| H | Supervised automated actions | 0.00002 (0.000006–0.0009) |
| M | Miscellaneous/undefined tasks | 0.03 (0.008–0.11) |

Comprehensive Taxonomy of Error-Producing Conditions (EPCs)

HEART’s 38 Error-Producing Conditions represent contextual amplifiers of error probability, categorized under the 4M Framework (Man, Machine, Media, Management):

| EPC Code | Description | Max Effect | 4M Category |
| --- | --- | --- | --- |
| 1 | Unfamiliarity with task | 17× | Man |
| 2 | Time shortage | 11× | Management |
| 3 | Low signal-to-noise ratio | 10× | Machine |
| 4 | Override capability of safety features | 9× | Machine |
| 5 | Spatial/functional incompatibility | 8× | Machine |
| 6 | Model mismatch between mental and system states | 8× | Man |
| 7 | Irreversible actions | 8× | Machine |
| 8 | Channel overload (information density) | 6× | Media |
| 9 | Technique unlearning | 6× | Man |
| 10 | Inadequate knowledge transfer | 5.5× | Management |
| 11 | Performance ambiguity | 5× | Media |
| 12 | Misperception of risk | 4× | Man |
| 13 | Poor feedback systems | 4× | Machine |
| 14 | Delayed/incomplete feedback | 4× | Media |
| 15 | Operator inexperience | 3× | Man |
| 16 | Impoverished information quality | 3× | Media |
| 17 | Inadequate checking procedures | 3× | Management |
| 18 | Conflicting objectives | 2.5× | Management |
| 19 | Lack of information diversity | 2.5× | Media |
| 20 | Educational/training mismatch | 2× | Management |
| 21 | Dangerous incentives | 2× | Management |
| 22 | Lack of skill practice | 1.8× | Man |
| 23 | Unreliable instrumentation | 1.6× | Machine |
| 24 | Need for absolute judgments | 1.6× | Man |
| 25 | Unclear functional allocation | 1.6× | Management |
| 26 | No progress tracking | 1.4× | Media |
| 27 | Physical capability mismatches | 1.4× | Man |
| 28 | Low semantic meaning of information | 1.4× | Media |
| 29 | Emotional stress | 1.3× | Man |
| 30 | Ill-health | 1.2× | Man |
| 31 | Low workforce morale | 1.2× | Management |
| 32 | Inconsistent interface design | 1.15× | Machine |
| 33 | Poor environmental conditions | 1.1× | Media |
| 34 | Low mental workload | 1.1× | Man |
| 35 | Circadian rhythm disruption | 1.06× | Man |
| 36 | External task pacing | 1.03× | Management |
| 37 | Supernumerary staffing issues | 1.03× | Management |
| 38 | Age-related capability decline | 1.02× | Man |

HEP Calculation Methodology

The HEART equation incorporates both multiplicative and additive effects of EPCs:

HEP = NHEP × ∏ [(EPC_i − 1) × APOE_i + 1]

Where:

  • NHEP: Nominal Human Error Probability from GTT
  • EPC_i: Maximum effect of i-th EPC
  • APOE_i: Assessed Proportion of Effect (0–1)
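To make the arithmetic concrete, here is a minimal sketch in Python. It is illustrative only: `compute_hep` and the example inputs are assumptions for this post, not part of any standard HEART tool.

```python
from typing import Iterable, Tuple

def compute_hep(nhep: float, epcs: Iterable[Tuple[float, float]]) -> float:
    """Combine a nominal HEP with Error-Producing Conditions.

    nhep: nominal human error probability for the Generic Task Type.
    epcs: iterable of (max_effect, apoe) pairs, where max_effect is the
          EPC's maximum multiplier and apoe is the assessed proportion
          of effect (0-1) judged by the analyst.
    """
    hep = nhep
    for max_effect, apoe in epcs:
        # Each EPC scales the baseline by (max_effect - 1) * APOE + 1
        hep *= (max_effect - 1.0) * apoe + 1.0
    # Computed values above 1.0 are read as "error virtually certain"
    return min(hep, 1.0)

# Illustrative example: GTT E baseline (0.02) with two EPCs partially in effect
print(compute_hep(0.02, [(11, 0.6), (3, 0.8)]))  # ~0.36
```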

HEART Case Study: Operator Error During Biologics Drug Substance Manufacturing

A biotech facility was producing a monoclonal antibody (mAb) drug substance using mammalian cell culture in large-scale bioreactors. The process involved upstream cell culture (expansion and production), followed by downstream purification (protein A chromatography, filtration), and final bulk drug substance filling. The manufacturing process required strict adherence to parameters such as temperature, pH, and feed rates to ensure product quality, safety, and potency.

During a late-night shift, an operator was responsible for initiating a nutrient feed into a 2,000L production bioreactor. The standard operating procedure (SOP) required the feed to be started at 48 hours post-inoculation, with a precise flow rate of 1.5 L/hr for 12 hours. The operator, under time pressure and after a recent shift change, incorrectly programmed the feed rate as 15 L/hr rather than 1.5 L/hr.

Outcome:

  • The rapid addition of nutrients caused a metabolic imbalance, leading to excessive cell growth, increased waste metabolite (lactate/ammonia) accumulation, and a sharp drop in product titer and purity.
  • The batch failed to meet quality specifications for potency and purity, resulting in the loss of an entire production lot.
  • Investigation revealed no system alarms for the high feed rate, and the error was only detected during routine in-process testing several hours later.

HEART Analysis

Task Definition

  • Task: Programming and initiating nutrient feed in a GMP biologics manufacturing bioreactor.
  • Criticality: Direct impact on cell culture health, product yield, and batch quality.

Generic Task Type (GTT)

| GTT Code | Description | Nominal HEP |
| --- | --- | --- |
| E | Routine procedural task with checking | 0.02 |

Error-Producing Conditions (EPCs) Using the 5M Model

| 5M Category | EPC (HEART) | Max Effect | APOE | Example in Incident |
| --- | --- | --- | --- | --- |
| Man | Inexperience with new feed system (EPC 15) | 3× | 0.8 | Operator recently trained on upgraded control interface |
| Machine | Poor feedback (no alarm for high feed rate, EPC 13) | 4× | 0.7 | System did not alert on out-of-range input |
| Media | Ambiguous SOP wording (EPC 11) | 5× | 0.5 | SOP listed feed rate as “1.5 L/hr” in a table, not text |
| Management | Time pressure to meet batch deadlines (EPC 2) | 11× | 0.6 | Shift was behind schedule due to earlier equipment delay |
| Milieu | Distraction during shift change (EPC 36) | 1.03× | 0.9 | Handover occurred mid-setup, leading to divided attention |

Human Error Probability (HEP) Calculation

Applying the HEART equation to this task:

HEP ≈ 0.02 × [(3 − 1)(0.8) + 1] × [(4 − 1)(0.7) + 1] × [(5 − 1)(0.5) + 1] × [(11 − 1)(0.6) + 1] × [(1.03 − 1)(0.9) + 1]
HEP ≈ 0.02 × 2.6 × 3.1 × 3.0 × 7.0 × 1.027 ≈ 3.5 (350%)

A computed value above 1 is read as an error being virtually certain under these conditions (in practice the probability is capped at 1.0). This extremely high error probability highlights a systemic vulnerability, not just an individual lapse.
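For readers who prefer to see it executed, the same arithmetic can be run standalone, with the inputs taken directly from the 5M table above. The script is an illustrative sketch, not validated software.

```python
# Case-study inputs: GTT E baseline (0.02) with the five EPCs from the 5M table
case_epcs = [
    (3.0, 0.8),    # Man: inexperience with new feed system (EPC 15)
    (4.0, 0.7),    # Machine: no alarm for high feed rate (EPC 13)
    (5.0, 0.5),    # Media: ambiguous SOP wording (EPC 11)
    (11.0, 0.6),   # Management: time pressure (EPC 2)
    (1.03, 0.9),   # Milieu: distraction during shift change (EPC 36)
]

uncapped = 0.02
for max_effect, apoe in case_epcs:
    uncapped *= (max_effect - 1.0) * apoe + 1.0

print(round(uncapped, 2))   # approx 3.48, reported in the text as HEP of 3.5
print(min(uncapped, 1.0))   # capped value: an error is effectively certain
```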

Root Cause and Contributing Factors

  • Operator: Recently trained, unfamiliar with new interface (Man)
  • System: No feedback or alarm for out-of-spec feed rate (Machine)
  • SOP: Ambiguous presentation of critical parameter (Media)
  • Management: High pressure to recover lost time (Management)
  • Environment: Shift handover mid-task, causing distraction (Milieu)

Corrective Actions

Technical Controls

  • Automated Range Checks: Bioreactor control software now prevents entry of feed rates outside validated ranges and requires supervisor override for exceptions (a minimal sketch of this kind of guard follows this list).
  • Visual SOP Enhancements: Critical parameters are now highlighted in both text and tables, and reviewed during operator training.
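For illustration, a guard of this kind can be as simple as the sketch below; the validated range, function name, and override flag are hypothetical placeholders rather than any real control-system API.

```python
# Minimal sketch of an automated range check with supervisor override.
VALIDATED_FEED_RATE_L_PER_HR = (1.0, 2.0)  # assumed validated range for this feed

def accept_feed_rate(entered: float, override_approved: bool = False) -> float:
    low, high = VALIDATED_FEED_RATE_L_PER_HR
    if low <= entered <= high:
        return entered
    if override_approved:
        # Out-of-range entry allowed only with documented supervisor approval
        return entered
    raise ValueError(
        f"Feed rate {entered} L/hr is outside the validated range "
        f"{low}-{high} L/hr; supervisor override required."
    )

# The 15 vs. 1.5 L/hr slip from the case study would be blocked here:
# accept_feed_rate(15.0)  -> raises ValueError instead of starting the feed
```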

Human Factors & Training

  • Simulation-Based Training: Operators practice feed setup in a virtual environment simulating distractions and time pressure.
  • Shift Handover Protocol: Critical steps cannot be performed during handover periods; tasks must be paused or completed before/after shift changes.

Management & Environmental Controls

  • Production Scheduling: Buffer time added to schedules to reduce time pressure during critical steps.
  • Alarm System Upgrade: Real-time alerts for any parameter entry outside validated ranges.

Outcomes (6-Month Review)

| Metric | Pre-Intervention | Post-Intervention |
| --- | --- | --- |
| Feed rate programming errors | 4/year | 0/year |
| Batch failures (due to feed) | 2/year | 0/year |
| Operator confidence (survey) | 62/100 | 91/100 |

Lessons Learned

  • Systemic Safeguards: Reliance on operator vigilance alone is insufficient in complex biologics manufacturing; layered controls are essential.
  • Human Factors: Addressing EPCs across the 5M model—Man, Machine, Media, Management, Milieu—dramatically reduces error probability.
  • Continuous Improvement: Regular review of near-misses and operator feedback is crucial for maintaining process robustness in biologics manufacturing.

This case underscores how a HEART-based approach, tailored to biologics drug substance manufacturing, can identify and mitigate multi-factorial risks before they result in costly failures.

Quality Review

Maintaining high-quality products is paramount, and a critical component of ensuring quality is implementing a robust review of work by a second or third person (peer review and/or quality review), also known as a work product review process. Like many tools, it can be underutilized. It also gets to the heart of the question of Quality Unit oversight.

Introduction to Work Product Review

Work product review systematically evaluates the output from various processes or tasks to ensure they meet predefined quality standards. This review is crucial in environments where the quality of the final product directly impacts safety and efficacy, such as in pharmaceutical manufacturing. Work product review aims to identify any deviations or defects early in the process, allowing for timely corrections and minimizing the risk of non-compliance with regulatory requirements.

Criteria for Work Product Review

To ensure that work product reviews are effective, several key criteria should be established:

  1. Integration with Quality Management Systems: Integrate risk-based thinking into the quality management system to ensure that work product reviews are aligned with overall quality objectives. This involves regularly reviewing and updating risk assessments to reflect changes in processes or new information.
  2. Clear Objectives: The review should have well-defined objectives that align with the process they exist within and regulatory requirements. For instance, in pharmaceutical manufacturing, these objectives might include ensuring that all documentation is accurate and complete and that manufacturing processes adhere to GMP standards.
  3. Risk-Based: Apply work product reviews to areas identified as high-risk during the risk assessment. This ensures that resources are allocated efficiently, focusing on processes that have the greatest potential impact on quality.
  4. Standardized Procedures: Standardized procedures should be established for conducting the review. These procedures should outline the steps involved, the reviewers’ roles and responsibilities, and the criteria for accepting or rejecting the work product.
  5. Trained Reviewers: Reviewers should be adequately trained and competent in the subject matter. This means understanding not just the deliverable being reviewed but the regulatory framework it sits within and how it applies to the specific work products being reviewed in a GMP environment.
  6. Documentation: All reviews should be thoroughly documented. This documentation should include the review’s results, any findings or issues identified, and actions taken to address these issues.
  7. Feedback Loop: There should be a mechanism for feedback from the review process to improve future work products. This could involve revising procedures or providing additional training to personnel.

Bridging the Gap Between Work-as-Imagined, Work-as-Prescribed, and Work-as-Done

Work product review connects directly to work-as-imagined, work-as-prescribed, and work-as-done: by systematically evaluating the output of work processes against predefined quality standards, it serves as a bridge between these concepts. Here’s how it connects:

  • Alignment with Work-as-Prescribed: Work product review ensures that outputs comply with established standards and procedures (work-as-prescribed), helping to maintain regulatory compliance and quality standards.
  • Insight into Work-as-Done: Through the review process, organizations gain insight into how work is actually being performed (work-as-done). This helps identify any deviations from prescribed procedures and allows for adjustments to improve alignment between work-as-prescribed and work-as-done.
  • Closing the Gap with Work-as-Imagined: By documenting and addressing discrepancies between work-as-imagined and work-as-done, work product review facilitates communication and feedback that can refine policies and procedures. This helps to bring work-as-imagined closer to the realities of work-as-done, improving the effectiveness of quality oversight.

Work product review is essential for ensuring that the quality of work outputs aligns with both prescribed standards and the realities of how work is actually performed. By bridging the gaps between work-as-imagined, work-as-prescribed, and work-as-done, organizations can enhance their quality management systems and maintain high standards of quality, safety and efficacy.

Aligning to the Role of Quality Unit Oversight

While work product review does not by itself guarantee Quality Unit oversight, it is one control through which that oversight can be exercised.

In the pharmaceutical industry, the Quality Unit plays a pivotal role in ensuring drug products’ safety, efficacy, and quality. It oversees all quality-related aspects, from raw material selection to final product release. However, the Quality Unit must be enabled appropriately and structured within the organization to effectively exercise its authority and fulfill its responsibilities. This blog post explores what it means for a Quality Unit to have the necessary authority and how insufficient implementation of its responsibilities can impact pharmaceutical manufacturing.

Responsibilities of the Quality Unit

Establishing and Maintaining the Quality System: The Quality Unit must set up and continuously update the quality management system to ensure compliance with GxPs and industry best practices.

Auditing and Compliance: Conduct internal audits to ensure adherence to policies and procedures, and report quality system performance metrics.

Approving and Rejecting Components and Products: The Quality Unit has the authority to approve or reject components, drug products, and packaging materials based on quality standards.

Investigating Nonconformities: Ensuring thorough investigations into production errors, discrepancies, and complaints related to product quality.

Keeping Management Informed: Reporting on product, process, and system risks, as well as outcomes of regulatory inspections.

What It Means for a Quality Unit to Be Enabled

For a Quality Unit to be effectively enabled, it must have:

  • Independence: The Quality Unit should operate independently of production units to avoid conflicts of interest and ensure unbiased decision-making.
  • Authority: It must have the authority to approve or reject the work product without undue influence from other departments.
  • Resources: Adequate personnel are essential for conducting the quality unit functions.
  • Documentation and Procedures: Clear, documented procedures outlining responsibilities and processes are crucial for maintaining consistency and compliance.

Insufficient Implementation of Responsibilities

When a Quality Unit insufficiently implements its responsibilities, it can lead to significant issues, including:

  • Regulatory Noncompliance: Failure to adhere to GxPs and regulatory standards can result in regulatory action.
  • Product Quality Issues: Inadequate oversight can lead to the release of substandard products, posing risks to patient safety and public health.
  • Lack of Continuous Improvement: Without effective quality systems in place, opportunities for process improvements and innovation may be missed.

The Quality Unit is the backbone of pharmaceutical manufacturing, ensuring that products meet the highest standards of quality and safety. By understanding the Quality Unit’s responsibilities and ensuring it has the necessary authority and resources, pharmaceutical companies can maintain compliance, protect public health, and foster a culture of continuous improvement. Inadequate implementation of these responsibilities can have severe consequences, emphasizing the importance of a well-structured and empowered Quality Unit.

By understanding these responsibilities, we can take a risk-based approach to applying quality review.

When to Apply Quality Review as Work Product Review

Work product review by Quality should be applied at critical stages to assure critical-to-quality attributes, including adherence to the regulations. This should be a risk-based approach: reviews should be identified as controls in a living risk assessment and adjusted (adding reviews where needed, removing them where unnecessary) as appropriate.

Closely scrutinize the responsibilities of the Quality Unit in the regulations to ensure all are met.

Best Practices in Quality Review

Rubrics are a great way to standardize quality reviews. If it is important enough to require a work review, it is important enough to standardize. The process owner should develop and maintain these rubrics with an appropriate group of stakeholder custodians. This is a key part of knowledge management. Having this cross-functional perspective on the output and what quality looks like is critical. This rubric should include:

  • Definition of prescribed work and the intended output that is being reviewed
  • Potential outcomes related to critical attributes, including definitions of technical accuracy
  • Methods and techniques used to generate the outcome
  • Operating experience and lessons learned
  • Risks, hazards, and user-centered design considerations
  • Requirements, standards, and code compliance
  • Planning, oversight, and acceptance testing
  • Input data and sources
  • Assumptions
  • Documentation required
  • Reviews and approvals required
  • Program or procedural obstacles to desired performance
  • Surprise situations, for example, unanticipated risk factors, schedule or scope changes, and organizational issues
  • Engineering human performance tool(s) applicable to activities being reviewed.

The rubric should have an assessment component, and that assessment should feed back into the originator’s qualified state.
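As a simple illustration of what a standardized rubric can look like when captured as structured data rather than free text, here is a minimal sketch; the field names and example entries are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class RubricCriterion:
    # One line of the rubric: what is being checked and how it was rated
    name: str
    description: str
    rating: str = "not assessed"   # e.g., "acceptable", "minor finding", "major finding"
    comment: str = ""

@dataclass
class WorkProductReview:
    work_product: str
    reviewer: str
    criteria: list[RubricCriterion] = field(default_factory=list)

    def findings(self) -> list[RubricCriterion]:
        """Anything not rated acceptable feeds back to the originator."""
        return [c for c in self.criteria if c.rating not in ("acceptable", "not assessed")]

review = WorkProductReview(
    work_product="Batch record BR-2024-001",  # hypothetical identifier
    reviewer="QA second person",
    criteria=[
        RubricCriterion("Input data and sources", "All inputs traceable", "acceptable"),
        RubricCriterion("Technical accuracy", "Calculations verified", "minor finding",
                        "Yield calculation used wrong tare weight"),
    ],
)
print([c.name for c in review.findings()])
```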

Work product reviews must occur early enough to allow feedback into normal work for repetitive tasks. This should lead to gates in processes, quality on the floor, and better-trained supervisors performing more effective reviews. Feedback should always go to the responsible person (the originator) and should, wherever possible, be delivered face-to-face to resolve the particular issues identified. This dialogue is critical.

Conclusion

Work product review is a powerful tool for enhancing quality oversight. By aligning this process with the responsibilities of the Quality Unit and implementing best practices such as standardized rubrics and a risk-based approach, companies can ensure that their products meet the highest standards of quality and safety. Effective work product review not only supports regulatory compliance but also fosters a culture of continuous improvement, which is essential for maintaining excellence in the pharmaceutical industry.

Types of Work, an Explainer

The concepts of work-as-imagined, work-as-prescribed, work-as-done, work-as-disclosed, and work-as-reported have been discussed and developed primarily within the field of human factors and ergonomics. These concepts have been elaborated by various experts, including Steven Shorrock, who has written extensively on the topic and whose work I cannot recommend enough.

  • Work-as-Imagined: This concept refers to how people think work should be done or imagine it is done. It is often used by policymakers, regulators, and managers who design work processes without direct involvement in the actual work.
  • Work-as-Prescribed: This involves the formalization of work through rules, procedures, and guidelines. It is how work is officially supposed to be done, often documented in organizational standards.
  • Work-as-Done: This represents the reality of how work is actually performed in practice, including the adaptations and adjustments made by workers to meet real-world demands.
  • Work-as-Disclosed: Also known as work-as-reported or work-as-explained, this is how people describe or report their work, which may differ from both work-as-prescribed and work-as-done due to various factors, including safety and organizational culture[3][4].
  • Work-as-Reported: This term is often used interchangeably with work-as-disclosed and refers to the accounts of work provided by workers, which may be influenced by what they believe should be communicated to others.
  • Work-as-Measured: The quantifiable aspects of work that are tracked and assessed, often focusing on performance metrics and outcomes.
| Aspect | Work-as-Done | Work-as-Imagined | Work-as-Instructed | Work-as-Prescribed | Work-as-Reported | Work-as-Measured |
| --- | --- | --- | --- | --- | --- | --- |
| Definition | Actual activities performed in the workplace. | How work is thought to be done, based on assumptions and expectations. | Direct instructions given to workers on task performance. | Formalized work according to rules, policies, and procedures. | Description of work as shared verbally or in writing. | Quantitative assessment of work performance. |
| Purpose | Achieve objectives in real-world conditions, adapting as necessary. | Conceptual understanding and planning of work. | Ensure tasks are performed correctly and efficiently. | Standardize and control work for compliance and safety. | Communicate work processes and outcomes. | Evaluate work efficiency and effectiveness. |
| Characteristics | Adaptive, context-dependent, often involves improvisation. | Based on assumptions, may not align with reality. | Clear, direct, and often specific to tasks. | Detailed, formal, assumed to be the correct way to work. | May not fully reflect reality, influenced by audience and context. | Objective, based on metrics and data. |

| Aspect | Work-as-Measured | Work-as-Judged |
| --- | --- | --- |
| Definition | Quantification or classification of aspects of work. | Evaluation or assessment of work based on criteria or standards. |
| Purpose | To assess, understand, and evaluate work performance using metrics and data. | To form opinions or make decisions about work quality or effectiveness. |
| Characteristics | Objective and subjective measures, often numerical; can lack stability and validity. | Subjective, influenced by personal biases, experiences, and expectations. |
| Agency | Conducted by supervisors, managers, or specialists in various fields. | Performed by individuals or groups with authority to evaluate work performance. |
| Granularity | Can range from coarse (e.g., overall productivity) to fine (e.g., specific actions). | Typically broader, considering overall performance rather than specific details. |
| Influence | Affected by technological, social, and regulatory contexts. | Affected by preconceived notions and potential biases. |


Global versus Local Process and Procedure and the eQMS

Companies both large and small grapple with how and when to create standard work at the global level, while still having the scalability to capture different GXP activity families and product modalities.

I’ve written before about document hierarchy and about the leveling of process and procedure. It is really important to level your processes, and this architecture should be deliberate and shepherded.

This really gets to the heart of work-as-imagined and prescribed, and the concept of standard work.

Benefits of Standard Work

  • Ensures all work is done according to the current best practice
  • Consistency is the essential ingredient of quality
  • Allows organizations to scale rapidly
  • Puts the focus on the process and not an individual or team
  • Makes improvements easier and faster

Global versus Local Process and Procedure in the Document Hierarchy

Most Quality Hierarchies look fairly similar.

A Document Hierarchy

Excluding the Program level (which becomes even more important) we can expand the model in the process band to account for global versus local.

Global and local process within the document hierarchy

The Quality Manual and Policy remain global, with local input, and determine the overall structure of the quality management system.

A Global Process is created when a process is predominantly task- and role-driven at the global level. It is pan-GXP, pan-modality, and pan-geography. It is the standard way of working, driving consistency across and through the organization.

A Local Process is created when a process is specific to a particular GXP, product modality, or geography.

Procedures, which describe the tasks, can be created from either a local or a global process. When the global process has localizations (a CAPA is a CAPA, but how I build action items may differ across sites), I can build local versions off the global process.

For example, consider Document and Record Management.

This approach takes real vision among leaders to drive for consistency and simplicity. This activity is a core component in good system design, no matter the size of the organization.

| Principle | Description | Application for Global and Local Process |
| --- | --- | --- |
| Balance | The system creates value for the multiple stakeholders. While the ideal is to develop a design that maximizes the value for all the key stakeholders, the designer often has to compromise and balance the needs of the various stakeholders. | The value of standard work really shines here. |
| Congruence | The degree to which the system components are aligned and consistent with each other and the other organizational systems, culture, plans, processes, information, resource decisions, and actions. | We gain congruence through ensuring key processes are at the global level. |
| Convenience | The system is designed to be as convenient as possible for the participants to implement (a.k.a. user friendly). The system includes specific processes, procedures, and controls only when necessary. | The discussion around global versus local will often depend on how you define convenience. |
| Coordination | System components are interconnected and harmonized with the other (internal and external) components, systems, plans, processes, information, and resource decisions toward common action or effort. This is beyond congruence and is achieved when the individual components of a system operate as a fully interconnected unit. | How we ensure coordination across and through an organization. |
| Elegance | Complexity vs. benefit — the system includes only enough complexity as is necessary to meet the stakeholders’ needs. In other words, keep the design as simple as possible while delivering the desired benefits. It often requires looking at the system in new ways. | Keep this in mind, as global for the sake of global is not always the right decision. |
| Human | Participants in the system are able to find joy, purpose, and meaning in their work. | Never forget. |
| Learning | Knowledge management, with opportunities for reflection and learning (learning loops), is designed into the system. Reflection and learning are built into the system at key points to encourage single- and double-loop learning from experience, to improve future implementation, and to systematically evaluate the design of the system itself. | Building the right knowledge management into the organization is critical to leverage this model. |
| Sustainability | The system effectively meets the near- and long-term needs of the current stakeholders without compromising the ability of future generations of stakeholders to meet their own needs. | Ensure the appropriate tools exist to sustain, including regulatory intelligence. Long-term scalability. |
Pillars of Good System Design for Global and Local Process

Utilizing the eQMS to Drive Standard Work

The ideal state when implementing (or improving) an eQMS is to establish global processes and allow system functionality to localize as appropriate.

Leveraging the eQMS

So, for example, every CAPA is the same (identify the problem and root cause, create a plan, implement the plan, prove the implementation is effective). This is a global process. However, one wants specific task detail at a lower level: GMP sites may care about certain fields more than GCP, medical device has specific needs, and so on. These local task-level needs can be maintained within one workflow.
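One way to picture this split in an eQMS configuration is sketched below; the structure and field names are illustrative assumptions, not any particular vendor’s data model.

```python
# Illustrative sketch of one global CAPA workflow with local field sets.
capa_workflow = {
    "global_steps": [
        "identify_problem_and_root_cause",
        "create_plan",
        "implement_plan",
        "verify_effectiveness",
    ],
    "local_fields": {
        "gmp_site": ["batch_numbers_impacted", "room_classification"],
        "gcp": ["protocol_id", "site_monitoring_visit"],
        "medical_device": ["dhf_reference", "design_change_required"],
    },
}

def fields_for(context: str) -> list[str]:
    """Task-level detail varies by context; the global process steps do not."""
    return capa_workflow["local_fields"].get(context, [])

print(capa_workflow["global_steps"])
print(fields_for("gmp_site"))
```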

The Key is Fit-for-Purpose and Fit-for-Use

A fit for purpose process meets the requirements of the organization.

A fit for use process is usable throughout the lifecycle.

Globalizing and localizing processes is a key part of making both happen.