Sidney Dekker: The Safety Scientist Who Influences How I Think About Quality

Over the past decades, as I’ve grown into and now lead quality organizations in biotechnology, I’ve encountered many thinkers who’ve shaped my approach to investigation and risk management. But few have fundamentally altered my perspective like Sidney Dekker. His work didn’t just add to my toolkit—it forced me to question some of my most basic assumptions about human error, system failure, and what it means to create genuinely effective quality systems.

Dekker’s challenge to move beyond “safety theater” toward authentic learning resonates deeply with my own frustrations about quality systems that look impressive on paper but fail when tested by real-world complexity.

Why Dekker Matters for Quality Leaders

Professor Sidney Dekker brings a unique combination of academic rigor and operational experience to safety science. As both a commercial airline pilot and the Director of the Safety Science Innovation Lab at Griffith University, he understands the gap between how work is supposed to happen and how it actually gets done. This dual perspective—practitioner and scholar—gives his critiques of traditional safety approaches unusual credibility.

But what initially drew me to Dekker’s work wasn’t his credentials. It was his ability to articulate something I’d been experiencing but couldn’t quite name: the growing disconnect between our increasingly sophisticated compliance systems and our actual ability to prevent quality problems. His concept of “drift into failure” provided a framework for understanding why organizations with excellent procedures and well-trained personnel still experience systemic breakdowns.

The “New View” Revolution

Dekker’s most fundamental contribution is what he calls the “new view” of human error—a complete reframing of how we understand system failures. Having spent years investigating deviations and CAPAs, I can attest to how transformative this shift in perspective can be.

The Traditional Approach I Used to Take:

  • Human error causes problems
  • People are unreliable; systems need protection from human variability
  • Solutions focus on better training, clearer procedures, more controls

Dekker’s New View That Changed My Practice:

  • Human error is a symptom of deeper systemic issues
  • People are the primary source of system reliability, not the threat to it
  • Variability and adaptation are what make complex systems work

This isn’t just academic theory—it has practical implications for every investigation I lead. When I encounter “operator error” in a deviation investigation, Dekker’s framework pushes me to ask different questions: What made this action reasonable to the operator at the time? What system conditions shaped their decision-making? How did our procedures and training actually perform under real-world conditions?

This shift aligns perfectly with the causal reasoning approaches I’ve been developing on this blog. Instead of stopping at “failure to follow procedure,” we dig into the specific mechanisms that drove the event—exactly what Dekker’s view demands.

Drift Into Failure: Why Good Organizations Go Bad

Perhaps Dekker’s most powerful concept for quality leaders is “drift into failure”—the idea that organizations gradually migrate toward disaster through seemingly rational local decisions. This isn’t sudden catastrophic failure; it’s incremental erosion of safety margins through competitive pressure, resource constraints, and normalized deviance.

I’ve seen this pattern repeatedly. For example, a cleaning validation program starts with robust protocols, but over time, small shortcuts accumulate: sampling points that are “difficult to access” get moved, hold times get shortened when production pressure increases, acceptance criteria get “clarified” in ways that gradually expand limits.

Each individual decision seems reasonable in isolation. But collectively, they represent drift—a gradual migration away from the original safety margins toward conditions that enable failure. The contamination events and data integrity issues that plague our industry often represent the endpoint of these drift processes, not sudden breakdowns in otherwise reliable systems.

Beyond Root Cause: Understanding Contributing Conditions

Traditional root cause analysis seeks the single factor that “caused” an event, but complex system failures emerge from multiple interacting conditions. The take-the-best heuristic I’ve been exploring on this blog—focusing on the most causally powerful factor—builds directly on Dekker’s insight that we need to understand mechanisms, not hunt for someone to blame.

When I investigate a failure now, I’m not looking for THE root cause. I’m trying to understand how various factors combined to create conditions for failure. What pressures were operators experiencing? How did procedures perform under actual conditions? What information was available to decision-makers? What made their actions reasonable given their understanding of the situation?

This approach generates investigations that actually help prevent recurrence rather than just satisfying regulatory expectations for “complete” investigations.

Just Culture: Moving Beyond Blame

Dekker’s evolution of just culture thinking has been particularly influential in my leadership approach. His latest work moves beyond simple “blame-free” environments toward restorative justice principles—asking not “who broke the rule” but “who was hurt and how can we address underlying needs.”

This shift has practical implications for how I handle deviations and quality events. Instead of focusing on disciplinary action, I’m asking: What systemic conditions contributed to this outcome? What support do people need to succeed? How can we address the underlying vulnerabilities this event revealed?

This doesn’t mean eliminating accountability—it means creating accountability systems that actually improve performance rather than just satisfying our need to assign blame.

Safety Theater: The Problem with Compliance Performance

Dekker’s most recent work on “safety theater” hits particularly close to home in our regulated environment. He defines safety theater as the performance of compliance while under surveillance, followed by a retreat to actual work practices once supervision disappears.

I’ve watched organizations prepare for inspections by creating impressive documentation packages that bear little resemblance to how work actually gets done. Procedures get rewritten to sound more rigorous, training records get updated, and everyone rehearses the “right” answers for auditors. But once the inspection ends, work reverts to the adaptive practices that actually make operations function.

This theater emerges from our desire for perfect, controllable systems, but it paradoxically undermines genuine safety by creating inauthenticity. People learn to perform compliance rather than create genuine safety and quality outcomes.

The falsifiable quality systems I’ve been advocating on this blog represent one response to this problem—creating systems that can be tested and potentially proven wrong rather than just demonstrated as compliant.

Six Practical Takeaways for Quality Leaders

After years of applying Dekker’s insights in biotechnology manufacturing, here are the six most practical lessons for quality professionals:

1. Treat “Human Error” as the Beginning of Investigation, Not the End

When investigations conclude with “human error,” they’ve barely started. This should prompt deeper questions: Why did this action make sense? What system conditions shaped this decision? What can we learn about how our procedures and training actually perform under pressure?

2. Understand Work-as-Done, Not Just Work-as-Imagined

There’s always a gap between procedures (work-as-imagined) and actual practice (work-as-done). Understanding this gap and why it exists is more valuable than trying to force compliance with unrealistic procedures. Some of the most important quality improvements I’ve implemented came from understanding how operators actually solve problems under real conditions.

3. Measure Positive Capacities, Not Just Negative Events

Traditional quality metrics focus on what didn’t happen—no deviations, no complaints, no failures. I’ve started developing metrics around investigation quality, learning effectiveness, and adaptive capacity rather than just counting problems. How quickly do we identify and respond to emerging issues? How effectively do we share learning across sites? How well do our people handle unexpected situations?
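
As an illustration only, here is a minimal sketch (in Python; the metric names and values are hypothetical, not an established industry set) of what tracking a handful of positive-capacity indicators alongside traditional event counts might look like:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class LearningMetrics:
    """Hypothetical site-level indicators of learning and adaptive capacity."""
    period_start: date
    period_end: date
    median_days_to_detect_emerging_issue: float        # how quickly issues surface
    investigations_with_causal_conclusions_pct: float  # investigation quality proxy
    lessons_shared_across_sites: int                   # learning effectiveness proxy
    unexpected_events_absorbed_without_harm: int       # adaptive capacity proxy


# Illustrative values only -- the point is that these sit next to deviation counts, not instead of them.
q2 = LearningMetrics(date(2025, 4, 1), date(2025, 6, 30), 3.5, 72.0, 11, 8)
print(q2)
```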

4. Create Psychological Safety for Learning

Fear and punishment shut down the flow of safety-critical information. Organizations that want to learn from failures must create conditions where people can report problems, admit mistakes, and share concerns without fear of retribution. This is particularly challenging in our regulated environment, but it’s essential for moving beyond compliance theater toward genuine learning.

5. Focus on Contributing Conditions, Not Root Causes

Complex failures emerge from multiple interacting factors, not single root causes. The take-the-best approach I’ve been developing helps identify the most causally powerful factor while avoiding the trap of seeking THE cause. Understanding mechanisms is more valuable than finding someone to blame.

6. Embrace Adaptive Capacity Instead of Fighting Variability

People’s ability to adapt and respond to unexpected conditions is what makes complex systems work, not a threat to be controlled. Rather than trying to eliminate human variability through ever-more-prescriptive procedures, we should understand how that variability creates resilience and design systems that support rather than constrain adaptive problem-solving.

Connection to Investigation Excellence

Dekker’s work provides the theoretical foundation for many approaches I’ve been exploring on this blog. His emphasis on testable hypotheses rather than compliance theater directly supports falsifiable quality systems. His new view framework underlies the causal reasoning methods I’ve been developing. His focus on understanding normal work, not just failures, informs my approach to risk management.

Most importantly, his insistence on moving beyond negative reasoning (“what didn’t happen”) to positive causal statements (“what actually happened and why”) has transformed how I approach investigations. Instead of documenting failures to follow procedures, we’re understanding the specific mechanisms that drove events—and that makes all the difference in preventing recurrence.

Essential Reading for Quality Leaders

If you’re leading quality organizations in today’s complex regulatory environment, these Dekker works are essential:

Start Here:

For Investigation Excellence:

  • Behind Human Error (with Woods, Cook, et al.) – Comprehensive framework for moving beyond blame
  • Drift into Failure – Understanding how good organizations gradually deteriorate

For Current Challenges:

The Leadership Challenge

Dekker’s work challenges us as quality leaders to move beyond the comfortable certainty of compliance-focused approaches toward the more demanding work of creating genuine learning systems. This requires admitting that our procedures and training might not work as intended. It means supporting people when they make mistakes rather than just punishing them. It demands that we measure our success by how well we learn and adapt, not just how well we document compliance.

This isn’t easy work. It requires the kind of organizational humility that Amy Edmondson and other leadership researchers emphasize—the willingness to be proven wrong in service of getting better. But in my experience, organizations that embrace this challenge develop more robust quality systems and, ultimately, better outcomes for patients.

The question isn’t whether Sidney Dekker is right about everything—it’s whether we’re willing to test his ideas and learn from the results. That’s exactly the kind of falsifiable approach that both his work and effective quality systems demand.

Causal Reasoning: A Transformative Approach to Root Cause Analysis

Energy Safety Canada recently published a white paper on causal reasoning that offers valuable insights for quality professionals across industries. As someone who has spent decades examining how we investigate deviations and perform root cause analysis, I found their framework refreshing and remarkably aligned with the challenges we face in pharmaceutical quality. The paper proposes a fundamental shift in how we approach investigations, moving from what they call “negative reasoning” to “causal reasoning” that could significantly improve our ability to prevent recurring issues and drive meaningful improvement.

The Problem with Traditional Root Cause Analysis

Many of us in quality have experienced the frustration of seeing the same types of deviations recur despite thorough investigations and seemingly robust CAPAs. The Energy Safety Canada white paper offers a compelling explanation for this phenomenon: our investigations often focus on what did not happen rather than what actually occurred.

This approach, which the authors term “negative reasoning,” leads investigators to identify counterfactuals: things that did not occur, such as “operators not following procedures” or “personnel not stopping work when they should have”. The problem is fundamental: what was not happening cannot create the outcomes we experienced. As the authors aptly state, these counterfactuals “exist only in retrospection and never actually influenced events,” yet they dominate many of our investigation conclusions.

This insight resonates strongly with what I’ve observed in pharmaceutical quality. The MHRA’s 2019 citation of 210 companies for inadequate root cause analysis and CAPA development – including 6 critical findings – takes on renewed significance in light of Sanofi’s 2025 FDA warning letter. While most cited organizations likely believed their investigation processes were robust (as Sanofi presumably did before its contamination failures surfaced), these parallel cases across regulatory bodies and years expose a persistent industry-wide disconnect between perceived and actual investigation effectiveness. These continued failures exemplify how superficial root cause analysis creates dangerous illusions of control – precisely the systemic flaw the MHRA data highlighted six years earlier.

Negative Reasoning vs. Causal Reasoning: A Critical Distinction

The white paper makes a distinction that I find particularly valuable: negative reasoning seeks to explain outcomes based on what was missing from the system, while causal reasoning looks for what was actually present or what happened. This difference may seem subtle, but it fundamentally changes the nature and outcomes of our investigations.

When we use negative reasoning, we create what the white paper calls “an illusion of cause without being causal”. We identify things like “failure to follow procedures” or “inadequate risk assessment,” which may feel satisfying but don’t explain why those conditions existed in the first place. These conclusions often lead to generic corrective actions that fail to address underlying issues.

In contrast, causal reasoning requires statements that have time, place, and magnitude. It focuses on what was necessary and sufficient to create the effect, building a logically tight cause-and-effect diagram. This approach helps reveal how work is actually done rather than comparing reality to an imagined ideal.
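
To make the distinction concrete, here is a minimal sketch (Python; the field names and example content are mine, not taken from the white paper) of a causal statement carrying time, place, and magnitude, next to the counterfactual it would replace:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CausalStatement:
    """A factual statement of what happened, anchored in time, place, and magnitude."""
    when: datetime       # time
    where: str           # place
    what_happened: str   # the action or condition that was actually present
    magnitude: str       # how much, how long, or how many


# Counterfactual ("negative reasoning") conclusion -- describes what did NOT happen:
counterfactual = "Operator did not follow the line-clearance procedure."

# Causal statement -- describes what was actually present (content is invented for illustration):
causal = CausalStatement(
    when=datetime(2025, 3, 12, 22, 40),
    where="Filling suite 2, line 4",
    what_happened="Operator cleared the line from memory because the printed checklist "
                  "was still attached to the previous batch record",
    magnitude="Three of seven checklist steps performed out of sequence",
)
print(causal)
```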

This distinction parallels the gap between “work-as-imagined” (the black line) and “work-as-done” (the blue line). Too often, our investigations focus only on deviations from work-as-imagined without trying to understand why work-as-done developed differently.

A Tale of Two Analyses: The Power of Causal Reasoning

The white paper presents a compelling case study involving a propane release and operator injury that illustrates the difference between these two approaches. When initially analyzed through negative reasoning, investigators concluded the operator:

  • Used an improper tool
  • Deviated from good practice
  • Failed to recognize hazards
  • Failed to learn from past experiences

These conclusions placed blame squarely on the individual and led leadership to consider terminating the operator.

However, when the same incident was examined through causal reasoning, a different picture emerged:

  • The operator used the pipe wrench because it was available at the pump specifically for this purpose
  • The pipe wrench had been deliberately left at that location because operators knew the valve was hard to close
  • The operator acted quickly because he perceived a risk to the plant and colleagues
  • Leadership had actually endorsed this workaround four years earlier during a turnaround

This causally reasoned analysis revealed that what appeared to be an individual failure was actually a system-level issue that had been normalized over time. Rather than punishing the operator, leadership recognized their own role in creating the conditions for the incident and implemented systemic improvements.

This example reminded me of our discussions on barrier analysis, where we examine barriers that failed, weren’t used, or didn’t exist. But causal reasoning takes this further by exploring why those conditions existed in the first place, creating a much richer understanding of how work actually happens.

First 24 Hours: Where Causal Reasoning Meets The Golden Day

In my recent post on “The Golden Start to a Deviation Investigation,” I emphasized how critical the first 24 hours are after discovering a deviation. This initial window represents our best opportunity to capture accurate information and set the stage for a successful investigation. The Energy Safety Canada white paper complements this concept perfectly by providing guidance on how to use those critical hours effectively.

When we apply causal reasoning during these early stages, we focus on collecting specific, factual information about what actually occurred rather than immediately jumping to what should have happened. This means documenting events with specificity (time, place, magnitude) and avoiding premature judgments about deviations from procedures or expectations.

As I’ve previously noted, clear and precise problem definition forms the foundation of any effective investigation. Causal reasoning enhances this process by ensuring we document using specific, factual language that describes what occurred rather than what didn’t happen. This creates a much stronger foundation for the entire investigation.

Beyond Human Error: System Thinking and Leadership’s Role

One of the most persistent challenges in our field is the tendency to attribute events to “human error.” As I’ve discussed before, when human error is suspected or identified as the cause, this should be justified only after ensuring that process, procedural, or system-based errors have not been overlooked. The white paper reinforces this point, noting that human actions and decisions are influenced by the system in which people work.

In fact, the paper presents a hierarchy of causes that resonates strongly with systems thinking principles I’ve advocated for previously. Outcomes arise from physical mechanisms influenced by human actions and decisions, which are in turn governed by systemic factors. If we only address physical mechanisms or human behaviors without changing the system, performance will eventually migrate back to where it has always been.

This connects directly to what I’ve written about quality culture being fundamental to providing quality. The white paper emphasizes that leadership involvement is directly correlated with performance improvement. When leaders engage to set conditions and provide resources, they create an environment where investigations can reveal systemic issues rather than just identify procedural deviations or human errors.

Implementing Causal Reasoning in Pharmaceutical Quality

For pharmaceutical quality professionals looking to implement causal reasoning in their investigation processes, I recommend starting with these practical steps:

1. Develop Investigator Competencies

As I’ve discussed in my analysis of Sanofi’s FDA warning letter, having competent investigators is crucial. Organizations should:

  • Define required competencies for investigators
  • Provide comprehensive training on causal reasoning techniques
  • Implement mentoring programs for new investigators
  • Regularly assess and refresh investigator skills

2. Shift from Counterfactuals to Causal Statements

Review your recent investigations and look for counterfactual statements like “operators did not follow the procedure.” Replace these with causal statements that describe what actually happened and why it made sense to the people involved at the time.
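
One low-tech way to support this shift is to screen draft conclusions for counterfactual phrasing before approval. The sketch below (Python; the phrase list is an illustrative assumption, not a validated rule set) flags sentences built around what did not happen so the writer can reframe them:

```python
import re

# Phrases that typically signal counterfactual ("negative") reasoning; illustrative list only.
COUNTERFACTUAL_PATTERNS = [
    r"\bdid not\b", r"\bdidn't\b", r"\bfailed to\b", r"\bfailure to\b",
    r"\bshould have\b", r"\bwas not\b", r"\bwere not\b", r"\black of\b", r"\binadequate\b",
]


def flag_counterfactuals(report_text: str) -> list[str]:
    """Return sentences that appear to describe what did not happen rather than what did."""
    sentences = re.split(r"(?<=[.!?])\s+", report_text)
    pattern = re.compile("|".join(COUNTERFACTUAL_PATTERNS), flags=re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]


draft = ("The operator failed to follow the sampling procedure. "
         "The pipe wrench was staged at the pump because the valve was known to be hard to close.")
for sentence in flag_counterfactuals(draft):
    print("Reframe as a causal statement:", sentence)
```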

3. Implement a Sponsor-Driven Approach

The white paper emphasizes the importance of investigation sponsors (otherwise known as Area Managers) who set clear conditions and expectations. This aligns perfectly with my belief that quality culture requires alignment between top management behavior and quality system philosophy. Sponsors should:

  • Clearly define the purpose and intent of investigations
  • Specify that a causal reasoning orientation should be used
  • Provide resources and access needed to find data and translate it into causes
  • Remain engaged throughout the investigation process

4. Use Structured Causal Analysis Tools

While the M-based frameworks I’ve discussed previously (4M, 5M, 6M) remain valuable for organizing contributing factors, they should be complemented with tools that support causal reasoning. The Cause-Consequence Analysis (CCA) I described in a recent post offers one such approach, combining elements of fault tree analysis and event tree analysis to provide a holistic view of risk scenarios.
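
For readers who have not seen CCA applied, the sketch below (Python; the initiating event, barriers, and probabilities are invented for illustration) shows the event-tree half of the idea: each barrier either functions or fails, and multiplying branch probabilities gives the frequency of each outcome scenario:

```python
from itertools import product

# Illustrative only: initiating event frequency and each barrier's probability of working on demand.
INITIATING_EVENT_PER_YEAR = 0.1  # e.g., a small transfer-line leak (invented number)
BARRIERS = {"leak detection alarm": 0.95, "operator isolation response": 0.90}


def outcome_frequencies() -> dict[str, float]:
    """Enumerate barrier success/failure combinations and the annual frequency of each outcome."""
    results = {}
    for states in product([True, False], repeat=len(BARRIERS)):
        freq = INITIATING_EVENT_PER_YEAR
        labels = []
        for (name, p_success), worked in zip(BARRIERS.items(), states):
            freq *= p_success if worked else (1 - p_success)
            labels.append(f"{name} {'works' if worked else 'fails'}")
        results[", ".join(labels)] = freq
    return results


for outcome, freq in outcome_frequencies().items():
    print(f"{outcome}: {freq:.4f} per year")
```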

From Understanding to Improvement

The Energy Safety Canada white paper’s emphasis on causal reasoning represents a valuable contribution to how we think about investigations across industries. For pharmaceutical quality professionals, this approach offers a way to move beyond compliance-focused investigations to truly understand how our systems operate and how to improve them.

As the authors note, “The capacity for an investigation to improve performance is dependent on the type of reasoning used by investigators”. By adopting causal reasoning, we can build investigations that reveal how work actually happens rather than simply identifying deviations from how we imagine it should happen.

This approach aligns perfectly with my long-standing belief that without a strong quality culture, people will not be ready to commit and involve themselves fully in building and supporting a robust quality management system. Causal reasoning creates the transparency and learning that form the foundation of such a culture.

I encourage quality professionals to download and read the full white paper, reflect on their current investigation practices, and consider how causal reasoning might enhance their approach to understanding and preventing deviations. The most important questions to consider are:

  1. Do your investigation conclusions focus on what didn’t happen rather than what did?
  2. How often do you identify “human error” without exploring the system conditions that made that error likely?
  3. Are your leaders engaged as sponsors who set conditions for successful investigations?
  4. What barriers exist in your organization that prevent learning from events?

As we continue to evolve our understanding of quality and safety, approaches like causal reasoning offer valuable tools for creating the transparency needed to navigate complexity and drive meaningful improvement.

Quality Review

Maintaining high-quality products is paramount, and a critical component of ensuring quality is a robust review of work by a second or third person: a peer review and/or quality review, also known as a work product review. Like many tools, it is often underutilized. It also gets to the heart of the question of Quality Unit oversight.

Introduction to Work Product Review

Work product review systematically evaluates the outputs of various processes or tasks to ensure they meet predefined quality standards. This review is crucial in environments where the quality of the final product directly impacts safety and efficacy, such as pharmaceutical manufacturing. Work product review aims to identify any deviations or defects early in the process, allowing for timely corrections and minimizing the risk of non-compliance with regulatory requirements.

Criteria for Work Product Review

To ensure that work product reviews are effective, several key criteria should be established:

  1. Integration with Quality Management Systems: Integrate risk-based thinking into the quality management system to ensure that work product reviews are aligned with overall quality objectives. This involves regularly reviewing and updating risk assessments to reflect changes in processes or new information.
  2. Clear Objectives: The review should have well-defined objectives that align with the process they exist within and regulatory requirements. For instance, in pharmaceutical manufacturing, these objectives might include ensuring that all documentation is accurate and complete and that manufacturing processes adhere to GMP standards.
  3. Risk-Based: Apply work product reviews to areas identified as high-risk during the risk assessment. This ensures that resources are allocated efficiently, focusing on processes that have the greatest potential impact on quality.
  4. Standardized Procedures: Standardized procedures should be established for conducting the review. These procedures should outline the steps involved, the reviewers’ roles and responsibilities, and the criteria for accepting or rejecting the work product.
  5. Trained Reviewers: Reviewers should be adequately trained and competent in the subject matter. This means understanding not just the deliverable being reviewed but the regulatory framework it sits within and how it applies to the specific work products being reviewed in a GMP environment.
  6. Documentation: All reviews should be thoroughly documented. This documentation should include the review’s results, any findings or issues identified, and actions taken to address these issues.
  7. Feedback Loop: There should be a mechanism for feedback from the review process to improve future work products. This could involve revising procedures or providing additional training to personnel.

Bridging the Gap Between Work-as-Imagined, Work-as-Prescribed, and Work-as-Done

Work product review connects directly to the concepts of work-as-imagined, work-as-prescribed, and work-as-done, serving as a bridge between them by systematically evaluating the output of work processes. Here’s how it connects:

  • Alignment with Work-as-Prescribed: Work product review ensures that outputs comply with established standards and procedures (work-as-prescribed), helping to maintain regulatory compliance and quality standards.
  • Insight into Work-as-Done: Through the review process, organizations gain insight into how work is actually being performed (work-as-done). This helps identify any deviations from prescribed procedures and allows for adjustments to improve alignment between work-as-prescribed and work-as-done.
  • Closing the Gap with Work-as-Imagined: By documenting and addressing discrepancies between work-as-imagined and work-as-done, work product review facilitates communication and feedback that can refine policies and procedures. This helps to bring work-as-imagined closer to the realities of work-as-done, improving the effectiveness of quality oversight.

Work product review is essential for ensuring that the quality of work outputs aligns with both prescribed standards and the realities of how work is actually performed. By bridging the gaps between work-as-imagined, work-as-prescribed, and work-as-done, organizations can enhance their quality management systems and maintain high standards of quality, safety and efficacy.

Aligning to the Role of Quality Unit Oversight

While work product review does not guarantee Quality Unit oversight, it is one control through which that oversight can be exercised.

In the pharmaceutical industry, the Quality Unit plays a pivotal role in ensuring drug products’ safety, efficacy, and quality. It oversees all quality-related aspects, from raw material selection to final product release. However, the Quality Unit must be enabled appropriately and structured within the organization to effectively exercise its authority and fulfill its responsibilities. This blog post explores what it means for a Quality Unit to have the necessary authority and how insufficient implementation of its responsibilities can impact pharmaceutical manufacturing.

Responsibilities of the Quality Unit

Establishing and Maintaining the Quality System: The Quality Unit sets up and continuously updates the quality management system to ensure compliance with GxPs and industry best practices.

Auditing and Compliance: It conducts internal audits to ensure adherence to policies and procedures, and reports quality system performance metrics.

Approving and Rejecting Components and Products: It has the authority to approve or reject components, drug products, and packaging materials based on quality standards.

Investigating Nonconformities: It ensures thorough investigations into production errors, discrepancies, and complaints related to product quality.

Keeping Management Informed: It reports on product, process, and system risks, as well as outcomes of regulatory inspections.

What It Means for a Quality Unit to Be Enabled

For a Quality Unit to be effectively enabled, it must have:

  • Independence: The Quality Unit should operate independently of production units to avoid conflicts of interest and ensure unbiased decision-making.
  • Authority: It must have the authority to approve or reject the work product without undue influence from other departments.
  • Resources: Adequate personnel are essential for carrying out the Quality Unit’s functions.
  • Documentation and Procedures: Clear, documented procedures outlining responsibilities and processes are crucial for maintaining consistency and compliance.

Insufficient Implementation of Responsibilities

When a Quality Unit insufficiently implements its responsibilities, it can lead to significant issues, including:

  • Regulatory Noncompliance: Failure to adhere to GxPs and regulatory standards can result in regulatory action.
  • Product Quality Issues: Inadequate oversight can lead to the release of substandard products, posing risks to patient safety and public health.
  • Lack of Continuous Improvement: Without effective quality systems in place, opportunities for process improvements and innovation may be missed.

The Quality Unit is the backbone of pharmaceutical manufacturing, ensuring that products meet the highest standards of quality and safety. By understanding the Quality Unit’s responsibilities and ensuring it has the necessary authority and resources, pharmaceutical companies can maintain compliance, protect public health, and foster a culture of continuous improvement. Inadequate implementation of these responsibilities can have severe consequences, emphasizing the importance of a well-structured and empowered Quality Unit.

By understanding these responsibilities, we can take a risk-based approach to applying quality review.

When to Apply Quality Review as Work Product Review

Work product review by Quality should be applied at critical stages to guarantee critical-to-quality attributes, including adherence to the regulations. This should be a risk-based approach: reviews should be identified as controls in a living risk assessment and adjusted (adding reviews where needed, removing them where unnecessary) as appropriate.

Closely scrutinize the responsibilities of the Quality Unit in the regulations to ensure all are met.
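
As a thought experiment, the sketch below (Python; the risk factors, weights, and thresholds are hypothetical and would need to come from your own living risk assessment) shows how such a risk-based decision about review depth might be made explicit:

```python
from dataclasses import dataclass


@dataclass
class WorkProduct:
    name: str
    patient_impact: int        # 1 (low) to 5 (high) -- hypothetical scale
    regulatory_impact: int     # 1 (low) to 5 (high) -- hypothetical scale
    recent_error_history: int  # similar errors seen recently, per the living risk assessment


def required_review(wp: WorkProduct) -> str:
    """Map a hypothetical weighted risk score to a review tier; weights and thresholds are illustrative."""
    score = wp.patient_impact * 2 + wp.regulatory_impact + wp.recent_error_history
    if score >= 12:
        return "independent Quality Unit review"
    if score >= 7:
        return "peer (second-person) review"
    return "self-check with periodic sampling"


print(required_review(WorkProduct("batch record reconciliation", patient_impact=4,
                                  regulatory_impact=4, recent_error_history=1)))
```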

Best Practices in Quality Review

Rubrics are a great way to standardize quality reviews. If it is important enough to require a work review, it is important enough to standardize. The process owner should develop and maintain these rubrics with an appropriate group of stakeholder custodians. This is a key part of knowledge management. Having this cross-functional perspective on the output and what quality looks like is critical. This rubric should include:

  • Definition of prescribed work and the intended output that is being reviewed
  • Potential outcomes related to critical attributes, including definitions of technical accuracy
  • Methods and techniques used to generate the outcome
  • Operating experience and lessons learned
  • Risks, hazards, and user-centered design considerations
  • Requirements, standards, and code compliance
  • Planning, oversight, and acceptance testing
  • Input data and sources
  • Assumptions
  • Documentation required
  • Reviews and approvals required
  • Program or procedural obstacles to desired performance
  • Surprise situations, for example, unanticipated risk factors, schedule or scope changes, and organizational issues
  • Engineering human performance tool(s) applicable to activities being reviewed.

The rubric should have an assessment component, and that assessment should feed back into the originator’s qualified state.
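
One way to make such a rubric concrete and version-controlled is to treat it as structured data. The sketch below (Python; only the element names come from the list above, everything else is an assumption) includes the assessment component that feeds back to the originator:

```python
from dataclasses import dataclass, field


@dataclass
class RubricCriterion:
    element: str              # e.g., "Input data and sources"
    what_good_looks_like: str
    met: bool | None = None   # filled in by the reviewer
    comment: str = ""


@dataclass
class WorkProductRubric:
    work_product: str
    process_owner: str
    criteria: list[RubricCriterion] = field(default_factory=list)

    def assessment(self) -> float:
        """Fraction of assessed criteria met -- feeds back into the originator's qualified state."""
        assessed = [c for c in self.criteria if c.met is not None]
        return sum(c.met for c in assessed) / len(assessed) if assessed else 0.0
```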

Work product reviews must occur early enough to allow feedback into normal work for repetitive tasks. This should lead to gates in processes, quality-on-the-floor, or better-trained supervisors performing better and more effective reviews. This feedback should always go to the responsible person, the originator, and wherever possible should be delivered face-to-face to resolve the particular issues identified. This dialogue is critical.

Conclusion

Work product review is a powerful tool for enhancing quality oversight. By aligning this process with the responsibilities of the Quality Unit and implementing best practices such as standardized rubrics and a risk-based approach, companies can ensure that their products meet the highest standards of quality and safety. Effective work product review not only supports regulatory compliance but also fosters a culture of continuous improvement, which is essential for maintaining excellence in the pharmaceutical industry.

Types of Work, an Explainer

The concepts of work-as-imagined, work-as-prescribed, work-as-done, work-as-disclosed, and work-as-reported have been discussed and developed primarily within the field of human factors and ergonomics. These concepts have been elaborated by various experts, including Steven Shorrock, whose extensive writing on the topic I cannot recommend enough.

  • Work-as-Imagined: This concept refers to how people think work should be done or imagine it is done. It is often used by policymakers, regulators, and managers who design work processes without direct involvement in the actual work.
  • Work-as-Prescribed: This involves the formalization of work through rules, procedures, and guidelines. It is how work is officially supposed to be done, often documented in organizational standards.
  • Work-as-Done: This represents the reality of how work is actually performed in practice, including the adaptations and adjustments made by workers to meet real-world demands.
  • Work-as-Disclosed: Also known as work-as-reported or work-as-explained, this is how people describe or report their work, which may differ from both work-as-prescribed and work-as-done due to various factors, including safety and organizational culture.
  • Work-as-Reported: This term is often used interchangeably with work-as-disclosed and refers to the accounts of work provided by workers, which may be influenced by what they believe should be communicated to others.
  • Work-as-Measured: The quantifiable aspects of work that are tracked and assessed, often focusing on performance metrics and outcomes.
| Aspect | Definition | Purpose | Characteristics |
|---|---|---|---|
| Work-as-Done | Actual activities performed in the workplace. | Achieve objectives in real-world conditions, adapting as necessary. | Adaptive, context-dependent, often involves improvisation. |
| Work-as-Imagined | How work is thought to be done, based on assumptions and expectation. | Conceptual understanding and planning of work. | Based on assumptions, may not align with reality. |
| Work-as-Instructed | Direct instructions given to workers on task performance. | Ensure tasks are performed correctly and efficiently. | Clear, direct, and often specific to tasks. |
| Work-as-Prescribed | Formalized work according to rules, policies, and procedures. | Standardize and control work for compliance and safety. | Detailed, formal, assumed to be the correct way to work. |
| Work-as-Reported | Description of work as shared verbally or in writing. | Communicate work processes and outcomes. | May not fully reflect reality, influenced by audience and context. |
| Work-as-Measured | Quantitative assessment of work performance. | Evaluate work efficiency and effectiveness. | Objective, based on metrics and data. |

| Aspect | Work-as-Measured | Work-as-Judged |
|---|---|---|
| Definition | Quantification or classification of aspects of work. | Evaluation or assessment of work based on criteria or standards. |
| Purpose | To assess, understand, and evaluate work performance using metrics and data. | To form opinions or make decisions about work quality or effectiveness. |
| Characteristics | Objective and subjective measures, often numerical; can lack stability and validity. | Subjective, influenced by personal biases, experiences, and expectations. |
| Agency | Conducted by supervisors, managers, or specialists in various fields. | Performed by individuals or groups with authority to evaluate work performance. |
| Granularity | Can range from coarse (e.g., overall productivity) to fine (e.g., specific actions). | Typically broader, considering overall performance rather than specific details. |
| Influence | Affected by technological, social, and regulatory contexts. | Affected by preconceived notions and potential biases. |

Further Reading

Self-Checking in Work-As-Done

Self-checking is one of the most effective tools we can teach and use. Rooted in the four aspects of risk-based thinking (anticipate, monitor, respond, and learn), it refers to the procedures and checks that employees perform as part of their routine tasks to ensure the quality and accuracy of their work. This practice is often implemented in industries where precision is critical, and errors can lead to significant consequences. For instance, in manufacturing or engineering, workers might perform self-checks to verify that their work meets the required specifications before moving on to the next production stage.

This proactive approach enhances the reliability, safety, and quality of systems and practices by allowing immediate detection and correction of errors, preventing potential failures or flaws from escalating into more significant issues.

The memory aid STAR (stop, think, act, review) helps the user recall the thoughts and actions associated with self-checking.

  1. Stop – Just before conducting a task, pause to:
    • Eliminate distractions.
    • Focus attention on the task.
  2. Think – Understand what will happen when the action is performed.
    • Verify the action is appropriate.
    • Recall the critical parameters and the action’s expected result(s).
    • Consider contingencies to mitigate harm if an unexpected result occurs.
    • If there is any doubt, STOP and get help.
  3. Act – Perform the task per work-as-prescribed.
  4. Review – Verify that the expected result is obtained.
    • Verify the desired change in critical parameters.
    • Stop work if criteria are not met.
    • Perform the contingency if an unexpected result occurs.
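
Purely as an illustration of how the STAR steps above could be embedded in a digital checklist or electronic work instruction (the structure, prompts, and example values are my own assumptions, not an industry standard), a minimal sketch:

```python
from dataclasses import dataclass, field


@dataclass
class StarCheck:
    """Self-check record following the STAR memory aid: Stop, Think, Act, Review."""
    task: str
    expected_result: str
    critical_parameters: list[str] = field(default_factory=list)
    contingency: str = ""
    observed_result: str = ""

    def review(self) -> str:
        """Review step: compare observed to expected; stop or invoke the contingency on a mismatch."""
        if not self.observed_result:
            return "STOP: no observed result recorded -- do not proceed."
        if self.observed_result == self.expected_result:
            return "Expected result obtained; proceed."
        return f"Unexpected result -- stop work and perform contingency: {self.contingency}"


# Illustrative use (equipment tags and values are invented):
check = StarCheck(
    task="Open valve V-102 to 50%",
    expected_result="FI-102 reads 40-45 L/min",
    critical_parameters=["valve position", "flow rate"],
    contingency="Close V-102 and notify the shift supervisor",
)
check.observed_result = "FI-102 reads 40-45 L/min"
print(check.review())
```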