Applying Jobs-to-Be-Done to Risk Management

In my recent exploration of the Jobs-to-Be-Done (JTBD) tool for process improvement, I examined how this customer-centric approach could revolutionize our understanding of deviation management. I want to extend that analysis to another fundamental challenge in pharmaceutical quality: risk management.

As we grapple with increasing regulatory complexity, accelerating technological change, and the persistent threat of risk blindness, most organizations remain trapped in what I call “compliance theater”—performing risk management activities that satisfy auditors but fail to build genuine organizational resilience. JTBD is a useful tool as we move beyond this theater toward risk management that actually creates value.

The Risk Management Jobs Users Actually Hire

When quality professionals, executives, and regulatory teams engage with risk management processes, what job are they really trying to accomplish? The answer reveals a profound disconnect between organizational intent and actual capability.

The Core Functional Job

“When facing uncertainty that could impact product quality, patient safety, or business continuity, I want to systematically understand and address potential threats, so I can make confident decisions and prevent surprise failures.”

This job statement immediately exposes the inadequacy of most risk management systems. They focus on documentation rather than understanding, assessment rather than decision enablement, and compliance rather than prevention.

The Consumption Jobs: The Hidden Workload

Risk management involves numerous consumption jobs that organizations often ignore:

  • Evaluation and Selection: “I need to choose risk assessment methodologies that match our operational complexity and regulatory environment.”
  • Implementation and Training: “I need to build organizational risk capability without creating bureaucratic overhead.”
  • Maintenance and Evolution: “I need to keep our risk approach current as our business and threat landscape evolves.”
  • Integration and Communication: “I need to ensure risk insights actually influence business decisions rather than gathering dust in risk registers.”

These consumption jobs represent the difference between risk management systems that organizations grudgingly tolerate and those they genuinely want to “hire.”

The Eight-Step Risk Management Job Map

Applying JTBD’s universal job map to risk management reveals where current approaches systematically fail:

1. Define: Establishing Risk Context

What users need: Clear understanding of what they’re assessing, why it matters, and what decisions the risk analysis will inform.

Current reality: Risk assessments often begin with template completion rather than context establishment, leading to generic analyses that don’t support actual decision-making.

2. Locate: Gathering Risk Intelligence

What users need: Access to historical data, subject matter expertise, external intelligence, and tacit knowledge about how things actually work.

Current reality: Risk teams typically work from documentation rather than engaging with operational reality, missing the pattern recognition and apprenticeship dividend that experienced practitioners possess.

3. Prepare: Creating Assessment Conditions

What users need: Diverse teams, psychological safety for honest risk discussions, and structured approaches that challenge rather than confirm existing assumptions.

Current reality: Risk assessments often involve homogeneous teams working through predetermined templates, perpetuating the GI Joe fallacy—believing that knowledge of risk frameworks prevents risky thinking.

4. Confirm: Validating Assessment Readiness

What users need: Confidence that they have sufficient information, appropriate expertise, and clear success criteria before proceeding.

Current reality: Risk assessments proceed regardless of information quality or team readiness, driven by schedule rather than preparation.

5. Execute: Conducting Risk Analysis

What users need: Systematic identification of risks, analysis of interconnections, scenario testing, and development of robust mitigation strategies.

Current reality: Risk analysis often becomes risk scoring—reducing complex phenomena to numerical ratings that provide false precision rather than genuine insight.
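
The false precision of risk scoring is easy to demonstrate with a quick sketch. The FMEA-style Risk Priority Number below (severity × occurrence × detection, each on a 1-10 scale) is an illustrative assumption, not a method prescribed by this article; the point is that two risks with very different profiles collapse into the identical score:

```python
# Illustrative sketch (hypothetical scales, not from this article): a
# common FMEA-style Risk Priority Number multiplies three 1-10 ratings.
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """RPN = severity x occurrence x detection; higher means 'worse'."""
    return severity * occurrence * detection

# A rare, catastrophic, hard-to-detect failure mode...
catastrophic = rpn(severity=10, occurrence=2, detection=5)
# ...and a frequent, minor nuisance score identically.
nuisance = rpn(severity=2, occurrence=10, detection=5)

print(catastrophic, nuisance)  # 100 100: the single number hides the difference
```

The identical scores mask exactly the distinction a decision-maker needs, which is why scoring alone cannot substitute for genuine risk analysis.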

6. Monitor: Tracking Risk Reality

What users need: Early warning systems that detect emerging risks and validate the effectiveness of mitigation strategies.

Current reality: Risk monitoring typically involves periodic register updates rather than active intelligence gathering, missing the dynamic nature of risk evolution.

7. Modify: Adapting to New Information

What users need: Responsive adjustment of risk strategies based on monitoring feedback and changing conditions.

Current reality: Risk assessments often become static documents, updated only during scheduled reviews rather than when new information emerges.

8. Conclude: Capturing Risk Learning

What users need: Systematic capture of risk insights, pattern recognition, and knowledge transfer that builds organizational risk intelligence.

Current reality: Risk analysis conclusions focus on compliance closure rather than learning capture, missing opportunities to build the organizational memory that prevents risk blindness.

The Emotional and Social Dimensions

Risk management involves profound emotional and social jobs that traditional approaches ignore:

  • Confidence: Risk practitioners want to feel genuinely confident that significant threats have been identified and addressed, not just that procedures have been followed.
  • Intellectual Satisfaction: Quality professionals are attracted to rigorous analysis and robust reasoning—risk management should engage their analytical capabilities, not reduce them to form completion.
  • Professional Credibility: Risk managers want to be perceived as strategic enablers rather than bureaucratic obstacles—as trusted advisors who help organizations navigate uncertainty rather than create administrative burden.
  • Organizational Trust: Executive teams want assurance that their risk management capabilities are genuinely protective, not merely compliant.

What’s Underserved: The Innovation Opportunities

JTBD analysis reveals four critical areas where current risk management approaches systematically underserve user needs:

Risk Intelligence

Current systems document known risks but fail to develop early warning capabilities, pattern recognition across multiple contexts, or predictive insights about emerging threats. Organizations need risk management that builds institutional awareness, not just institutional documentation.

Decision Enablement

Risk assessments should create confidence for strategic decisions, enable rapid assessment of time-sensitive opportunities, and provide scenario planning that prepares organizations for multiple futures. Instead, most risk management creates decision paralysis through endless analysis.

Organizational Capability

Effective risk management should build risk literacy across all levels, create cultural resilience that enables honest risk conversations, and develop adaptive capacity to respond when risks materialize. Current approaches often centralize risk thinking rather than distributing risk capability.

Stakeholder Trust

Risk management should enable transparent communication about threats and mitigation strategies, demonstrate competence in risk anticipation, and provide regulatory confidence in organizational capabilities. Too often, risk management creates opacity rather than transparency.

Canvas representation of the JTBD

Moving Beyond Compliance Theater

The JTBD framework helps us address a key challenge in risk management: many organizations place excessive emphasis on “table stakes” such as regulatory compliance and documentation requirements, while neglecting vital aspects like intelligence, enablement, capability, and trust that contribute to genuine resilience.

This represents a classic case of process myopia—becoming so focused on risk management activities that we lose sight of the fundamental job those activities should accomplish. Organizations perfect their risk registers while remaining vulnerable to surprise failures, not because they lack risk management processes, but because those processes fail to serve the jobs users actually need accomplished.

Design Principles for User-Centered Risk Management

  • Context Over Templates: Begin risk analysis with clear understanding of decisions to be informed rather than forms to be completed.
  • Intelligence Over Documentation: Prioritize systems that build organizational awareness and pattern recognition rather than risk libraries.
  • Engagement Over Compliance: Create risk processes that attract rather than burden users, recognizing that effective risk management requires active intellectual participation.
  • Learning Over Closure: Structure risk activities to build institutional memory and capability rather than simply completing assessment cycles.
  • Integration Over Isolation: Ensure risk insights flow naturally into operational decisions rather than remaining in separate risk management systems.

Hiring Risk Management for Real Jobs

The most dangerous risk facing pharmaceutical organizations may be risk management systems that create false confidence while building no real capability. JTBD analysis reveals why: these systems optimize for regulatory approval rather than user needs, creating elaborate processes that nobody genuinely wants to “hire.”

True risk management begins with understanding what jobs users actually need accomplished: building confidence for difficult decisions, developing organizational intelligence about threats, creating resilience against surprise failures, and enabling rather than impeding business progress. Organizations that design risk management around these jobs will develop competitive advantages in an increasingly uncertain world.

The choice is clear: continue performing compliance theater, or build risk management systems that organizations genuinely want to hire. In a world where zemblanity—the tendency to encounter negative, foreseeable outcomes—threatens every quality system, only the latter approach offers genuine protection.

Risk management should not be something organizations endure. It should be something they actively seek because it makes them demonstrably better at navigating uncertainty and protecting what matters most.

Risk Blindness: The Invisible Threat

Risk blindness is an insidious loss of organizational perception—the gradual erosion of a company’s ability to recognize, interpret, and respond to threats that undermine product safety, regulatory compliance, and ultimately, patient trust. It is not merely ignorance or oversight; rather, risk blindness manifests as the cumulative inability to see threats, often resulting from process shortcuts, technology overreliance, and the undervaluing of hands-on learning.

Unlike risk aversion or neglect, which involves conscious choices, risk blindness is an unconscious deficiency. It often stems from structural changes like the automation of foundational jobs, fragmented risk ownership, unchallenged assumptions, and excessive faith in documentation or AI-generated reports. At its core, risk blindness breeds a false sense of security and efficiency while creating unseen vulnerabilities.

Pattern Recognition and Risk Blindness: The Cognitive Foundation of Quality Excellence

The Neural Architecture of Risk Detection

Pattern recognition lies at the heart of effective risk management in quality systems. It represents the sophisticated cognitive process by which experienced professionals unconsciously scan operational environments, data trends, and behavioral cues to detect emerging threats before they manifest as full-scale quality events. This capability distinguishes expert practitioners from novices and forms the foundation of what we might call “risk literacy” within quality organizations.

The development of pattern recognition in pharmaceutical quality follows predictable stages. At the most basic level (Level 1 Situational Awareness), professionals learn to perceive individual elements—deviation rates, environmental monitoring trends, supplier performance metrics. However, true expertise emerges at Level 2 (Comprehension), where practitioners begin to understand the relationships between these elements, and Level 3 (Projection), where they can anticipate future system states based on current patterns.

Research in clinical environments demonstrates that expert pattern recognition relies on matching current situational elements with previously stored patterns and knowledge, creating rapid, often unconscious assessments of risk significance. In pharmaceutical quality, this translates to the seasoned professional who notices that “something feels off” about a batch record, even when all individual data points appear within specification, or the environmental monitoring specialist who recognizes subtle trends that precede contamination events.

The Apprenticeship Dividend: Building Pattern Recognition Through Experience

The development of sophisticated pattern recognition capabilities requires what we’ve previously termed the “apprenticeship dividend”—the cumulative learning that occurs through repeated exposure to routine operations, deviations, and corrective actions. This learning cannot be accelerated through technology or condensed into senior-level training programs; it must be built through sustained practice and mentored reflection.

The Stages of Pattern Recognition Development:

Foundation Stage (Years 1-2): New professionals learn to identify individual risk elements—understanding what constitutes a deviation, recognizing out-of-specification results, and following investigation procedures. Their pattern recognition is limited to explicit, documented criteria.

Integration Stage (Years 3-5): Practitioners begin to see relationships between different quality elements. They notice when environmental monitoring trends correlate with equipment issues, or when supplier performance changes precede raw material problems. This represents the emergence of tacit knowledge—insights that are difficult to articulate but guide decision-making.

Mastery Stage (Years 5+): Expert practitioners develop what researchers call “intuitive expertise”—the ability to rapidly assess complex situations and identify subtle risk patterns that others miss. They can sense when an investigation is heading in the wrong direction, recognize when supplier responses are evasive, or detect process drift before it appears in formal metrics.

Tacit Knowledge: The Uncodifiable Foundation of Risk Assessment

Perhaps the most critical aspect of pattern recognition in pharmaceutical quality is the role of tacit knowledge—the experiential wisdom that cannot be fully documented or transmitted through formal training systems. Tacit knowledge encompasses the subtle cues, contextual understanding, and intuitive insights that experienced professionals develop through years of hands-on practice.

In pharmaceutical quality systems, tacit knowledge manifests in numerous ways:

  • Knowing which equipment is likely to fail after cleaning cycles, based on subtle operational cues rather than formal maintenance schedules
  • Recognizing when supplier audit responses are technically correct but practically inadequate
  • Sensing when investigation teams are reaching premature closure without adequate root cause analysis
  • Detecting process drift through operator reports and informal observations before it appears in formal monitoring data

This tacit knowledge cannot be captured in standard operating procedures or electronic systems. It exists in the experienced professional’s ability to read “between the lines” of formal data, to notice what’s missing from reports, and to sense when organizational pressures are affecting the quality of risk assessments.

The GI Joe Fallacy: The Dangers of “Knowing is Half the Battle”

A persistent—and dangerous—belief in quality organizations is the idea that simply knowing about risks, standards, or biases will prevent us from falling prey to them. This is known as the GI Joe fallacy—the misguided notion that awareness is sufficient to overcome cognitive biases or drive behavioral change.

What is the GI Joe Fallacy?

Inspired by the classic 1980s G.I. Joe cartoons, which ended each episode with “Now you know. And knowing is half the battle,” the GI Joe fallacy describes the disconnect between knowledge and action. Cognitive science consistently shows that knowing about biases or desired actions does not ensure that individuals or organizations will behave accordingly.

Even Daniel Kahneman, a pioneer of research on cognitive biases, noted that reading about biases doesn’t fundamentally change our tendency to commit them. Organizations often believe that training, SOPs, or system prompts are enough to inoculate staff against error. In reality, knowledge is only a small part of the battle; much larger are the forces of habit, culture, distraction, and deeply rooted heuristics.

GI Joe Fallacy in Quality Risk Management

In pharmaceutical quality risk management, the GI Joe fallacy can have severe consequences. Teams may know the details of risk matrices, deviation procedures, and regulatory requirements, yet repeatedly fail to act with vigilance or critical scrutiny in real situations. Loss aversion, confirmation bias, and overconfidence persist even for those trained in their dangers.

For example, base rate neglect—a bias where salient event data distracts from underlying probabilities—can influence decisions even when staff know better intellectually. This manifests in investigators overreacting to recent dramatic events while ignoring stable process indicators. Knowing about risk frameworks isn’t enough; structures and culture must be designed specifically to challenge these biases in practice, not simply in theory.
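
Base rate neglect has a precise structure that Bayes’ theorem makes visible. The numbers in this sketch are hypothetical, chosen only to show how a seemingly reliable signal can still be mostly noise when the underlying event is rare:

```python
# Hypothetical numbers, for illustration only: how often does an alarm
# indicate a real problem, once the base rate is accounted for (Bayes' rule)?
def posterior(prior: float, sensitivity: float, false_alarm_rate: float) -> float:
    """P(problem | alarm) = P(alarm | problem) * P(problem) / P(alarm)."""
    true_alarms = sensitivity * prior
    false_alarms = false_alarm_rate * (1 - prior)
    return true_alarms / (true_alarms + false_alarms)

# An alarm that catches 95% of real problems and false-alarms only 10%
# of the time sounds trustworthy, but if just 1% of batches truly have
# the problem, most alarms are still false.
p = posterior(prior=0.01, sensitivity=0.95, false_alarm_rate=0.10)
print(round(p, 3))  # 0.088: fewer than 1 in 10 alarms signals a real problem
```

An investigator who reacts to the alarm as if it were 95% reliable is committing exactly the base rate neglect described above.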

Structural Roots of Risk Blindness

The False Economy of Automation and Overconfidence

Risk blindness often arises from the perceived efficiency gained through process automation or the curtailment of on-the-ground learning. When organizations substitute passive oversight for active engagement, staff lose critical exposure to routine deviations and process variables.

Senior staff who only approve system-generated risk assessments lack daily operational familiarity, making them susceptible to unseen vulnerabilities. Real risk assessment requires repeated, active interaction with process data—not just a review of output.

Fragmented Ownership and Deficient Learning Culture

Risk ownership must be robust and proximal. When roles are fragmented—where the “system” manages risk and people become mere approvers—vital warnings can be overlooked. A compliance-oriented learning culture that believes training or SOPs are enough to guard against operational threats falls deeper into the GI Joe fallacy: knowledge is mistaken for vigilance.

Instead, organizations need feedback loops, reflection, and opportunities to surface doubts and uncertainties. Training must be practical and interactive, not limited to information transfer.

Zemblanity: The Shadow of Risk Blindness

Zemblanity is the antithesis of serendipity in the context of pharmaceutical quality—it describes the persistent tendency for organizations to encounter negative, foreseeable outcomes when risk signals are repeatedly ignored, misunderstood, or left unacted upon.

When examining risk blindness, zemblanity stands as the practical outcome: a quality system that, rather than stumbling upon unexpected improvements or positive turns, instead seems trapped in cycles of self-created adversity. Unlike random bad luck, zemblanity results from avoidable and often visible warning signs—deviations that are rationalized, oversight meetings that miss the point, and cognitive biases like the GI Joe fallacy that lull teams into a false sense of mastery.

Real-World Manifestations

Case: The Disappearing Deviation

Digital batch records reduced documentation errors and deviation reports, creating an illusion of process control. But when technology transfer led to out-of-spec events, the lack of manually trained eyes meant no one was poised to detect subtle process anomalies. Staff “knew” the process in theory—yet risk blindness set in because the signals were no longer being actively, expertly interpreted. Knowledge alone was not enough.

Case: Supplier Audit Blindness

Virtual audits relying solely on documentation missed chronic training issues that onsite teams would likely have noticed. The belief that checklist knowledge and documentation sufficed prevented the team from recognizing deeper underlying risks. Here, the GI Joe fallacy made the team believe their expertise was shield enough, when in reality, behavioral engagement and observation were necessary.

Counteracting Risk Blindness: Beyond Knowing to Acting

Effective pharmaceutical quality systems must intentionally cultivate and maintain pattern recognition capabilities across their workforce. This requires structured approaches that go beyond traditional training and incorporate the principles of expertise development:

Structured Exposure Programs: New professionals need systematic exposure to diverse risk scenarios—not just successful cases, but also investigations that went wrong, supplier audits that missed problems, and process changes that had unexpected consequences. This exposure must be guided by experienced mentors who can help identify and interpret relevant patterns.

Cross-Functional Pattern Sharing: Different functional areas—manufacturing, quality control, regulatory affairs, supplier management—develop specialized pattern recognition capabilities. Organizations need systematic mechanisms for sharing these patterns across functions, ensuring that insights from one area can inform risk assessment in others.

Cognitive Diversity in Assessment Teams: Research demonstrates that diverse teams are better at pattern recognition than homogeneous groups, as different perspectives help identify patterns that might be missed by individuals with similar backgrounds and experience. Quality organizations should intentionally structure assessment teams to maximize cognitive diversity.

Systematic Challenge Processes: Pattern recognition can become biased or incomplete over time. Organizations need systematic processes for challenging established patterns—regular “red team” exercises, external perspectives, and structured devil’s advocate processes that test whether recognized patterns remain valid.

Reflective Practice Integration: Pattern recognition improves through reflection on both successes and failures. Organizations should create systematic opportunities for professionals to analyze their pattern recognition decisions, understand when their assessments were accurate or inaccurate, and refine their capabilities accordingly.

Using AI as a Learning Accelerator

AI and automation should support, not replace, human risk assessment. Tools can help new professionals identify patterns in data, but must be employed as aids to learning—not as substitutes for judgment or action.

Diagnosing and Treating Risk Blindness

Assess organizational risk literacy not by the presence of knowledge, but by the frequency of active, critical engagement with real risks. Use self-assessment questions such as:

  • Do deviation investigations include frontline voices, not just system reviewers?
  • Are new staff exposed to real processes and deviations, not just theoretical scenarios?
  • Are risk reviews structured to challenge assumptions, not merely confirm them?
  • Is there evidence that knowledge is regularly translated into action?

Why Preventing Risk Blindness Matters

Regulators evaluate quality maturity not simply by compliance, but by demonstrable capability to anticipate and mitigate risks. AI and digital transformation are intensifying the risk of the GI Joe fallacy by tempting organizations to substitute data and technology for judgment and action.

As experienced professionals retire, the gap between knowing and doing risks widening. Only organizations invested in hands-on learning, mentorship, and behavioral feedback will sustain true resilience.

Choosing Sight

Risk blindness is perpetuated by the dangerous notion that knowing is enough. The GI Joe fallacy teaches that organizational memory, vigilance, and capability require much more than knowledge—they demand deliberate structures, engaged cultures, and repeated practice that link theory to action.

Quality leaders must invest in real development, relentless engagement, and humility about the limits of their own knowledge. Only then will risk blindness be cured, and resilience secured.

Risk Management is about reducing uncertainty

Risk management is all about reducing surprise. So to truly understand our risks, we need to understand uncertainty; we need to understand the unknowns. Borrowing from Andreas Schamanek’s Taxonomies of the unknown, let’s explore a few of the various taxonomies of what is not known.

Ignorance Map

Ann Kerwin (1993) first gave us the “known unknowns” and “unknown knowns” that people still find a source of amusement in connection with former defense secretary Donald Rumsfeld.

  • Known knowns: what we know we know.
  • Known unknowns (conscious ignorance): what we know we do not know.
  • Unknown knowns (tacit knowledge): what we do not know we know.
  • Unknown unknowns (meta-ignorance): what we do not know we do not know.

Understanding uncertainty is an exercise in knowledge management, which is why a rigorous knowledge management program is a prerequisite for an effective quality management system.

Risk management is then a way of teasing out the unknowns and allowing us to take action:

  1. Risk assessments most easily focus on the ignorance we are aware of: the ‘known unknowns’.
  2. Risk assessments can also serve as a tool for teasing out the ‘unknown knowns’. This is why the participation of subject matter experts is so critical. Through the formal methodology of the risk assessment we expose and explore tacit knowledge.
  3. The third kind of ignorance is what we do not know we do not know: the ‘unknown unknowns’. We generally become aware of unknown unknowns in two ways: hindsight (deviations) and purposefully expanding our horizons. That expansion includes diversity as well as good experimentation. It is the hardest, but perhaps the most valuable, part of risk management.
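
The three moves above can be summarized as a mapping from Kerwin’s quadrants to risk-management actions. This sketch is purely illustrative; the quadrant names come from the ignorance map, while the action phrasing is a gloss on the numbered list, not a formal methodology:

```python
# Illustrative mapping (the actions paraphrase the numbered list above;
# nothing here is a formal methodology from the article).
QUADRANT_ACTIONS = {
    "known known": "manage directly through the existing knowledge base",
    "known unknown": "target with formal risk assessment (conscious ignorance)",
    "unknown known": "surface via SME participation to expose tacit knowledge",
    "unknown unknown": "probe via diversity, experimentation, and hindsight review",
}

def suggested_action(quadrant: str) -> str:
    """Return the risk-management move for a Kerwin quadrant."""
    return QUADRANT_ACTIONS[quadrant.strip().lower()]

print(suggested_action("Unknown Known"))  # surface via SME participation ...
```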

Taxonomy of Ignorance

Different Kinds of Unknowns, Source: Smithson (1989, p. 9); also in Bammer et al. (2008, p. 294).

Smithson distinguishes between passive and active ignorance. Passive ignorance involves areas that we are ignorant of, whereas active ignorance refers to areas we ignore. He uses the term ‘error’ for the unknowns encompassed by passive ignorance and ‘irrelevance’ for active ignorance.

Taboo is fascinating because it gets to the heart of our cultural blindness, those parts of our organization that are closed to scrutiny.

Smithson can help us understand why risk assessments are both a qualitative and a quantitative endeavor. While dealing with the unknown is the bread and butter of statistics, only a small part of the terrain of uncertainty is covered. Under Smithson’s typology, statistics primarily operates in the area of incompleteness, across probability and some kinds of vagueness. In terms of its considerations of sampling bias, statistics also has some overlap with inaccuracy. But, as the typology shows, there is much more to unknowns than the areas statistics deals with. This is another reason why subject matter experts and different ways of thinking are a must.

Ensuring wide and appropriate expert participation gives additional perspectives on unknowns. There are also synergies in finding unrecognized similarities between the unknowns that different disciplines and stakeholders deal with, and there may be great benefit in combining forces. It is important to use these concerns to enrich thinking about unknowns, rather than ruling them out as irrelevant.

Sources of Surprise

Risk management is all about managing surprise. It helps to break surprise down into three types: risk, uncertainty, and ignorance.

  • Risk: The condition in which the event, process, or outcomes and the probability that each will occur is known.
    • Issue: In reality, complete knowledge of the probabilities and the range of potential outcomes or consequences is not usually available and is sometimes unattainable.
  • Uncertainty: The condition in which the event, process, or outcome is known (factually or hypothetically) but the probabilities that it will occur are not known.
    • Issue: The probabilities assigned, if any, are subjective, and ways to establish reliability for different subjective probability estimates are debatable.
  • Ignorance: The condition in which the event, process, or outcome is not known or expected.
    • Issue: How can we anticipate the unknown, improve the chances of anticipating, and, therefore, improve the chances of reducing vulnerability?

Effective use of the methodology ideally moves us from ignorance, through uncertainty, to risk.


Ignorance

  • Closed ignorance: Information is available, but SMEs are unwilling or unable to consider that some outcomes are unknown to them.
    Mitigation: self-audit processes, regular third-party audits, and an open, transparent system with global participation.
  • Open ignorance: Information is available, and SMEs are willing to recognize and consider that some outcomes are unknown. It takes four forms:
    • Personal: Surprise occurs because an individual SME lacks knowledge or awareness of the available information.
      Mitigation: effective teams explore multiple perspectives by including a diverse set of individuals and data sources for data gathering and analysis; transparency in process.
    • Communal: Surprise occurs because a group of SMEs has only similar viewpoints represented, or may be less willing to consider views outside the community.
      Mitigation: diversity of viewpoints and use of tools to overcome group-think and “tribal” knowledge.
    • Novelty: Surprise occurs because SMEs are unable to anticipate and prepare for external shocks or internal changes in preferences, technologies, and institutions.
      Mitigation: simulating impacts and gaming alternative outcomes of various potentials under different conditions (Blue Team/Red Team exercises).
    • Complexity: Surprise occurs when inadequate forecasting tools are used to analyze the available data, so that interrelationships, hidden dependencies, and feedback loops lead to an inadequate or incomplete understanding of the data.
      Mitigation: systems thinking; track changes and interrelationships across systems to discover potential macro-effects and forces of change (e.g., the 12 levers).


Risk management is all about understanding surprise and working to reduce uncertainty and ignorance so that we can reduce, eliminate, or sometimes knowingly accept risk. As a methodology it is effective at avoiding surrender and denial, and with innovation we can even contemplate exploiting risk. As organizations mature, it is important to understand these concepts and put them to use.

References

  • Gigerenzer, G. and Garcia-Retamero, R. (2017). Cassandra’s Regret: The Psychology of Not Wanting to Know. Psychological Review, 124(2), 179–196.
  • House, R. J., Hanges, P. J., Javidan, M., Dorfman, P. W., and Gupta, V. (eds.) (2004). Culture, Leadership, and Organizations: The GLOBE Study of 62 Societies. Thousand Oaks, CA: Sage Publications.
  • Kerwin, A. (1993). None Too Solid: Medical Ignorance. Knowledge, 15(2), 166–185.
  • Smithson, M. (1989). Ignorance and Uncertainty: Emerging Paradigms. New York: Springer-Verlag.
  • Smithson, M. (1993). Ignorance and Science. Knowledge: Creation, Diffusion, Utilization, 15(2), 133–156.