The Jobs-to-Be-Done (JTBD): Origins, Function, and Value for Quality Systems

In the relentless march of quality and operational improvement, frameworks, methodologies, and tools abound, but true breakthroughs are rare. There is a persistent challenge: organizations often become locked into their own best practices, relying on habitual process reforms that seldom address the deeper why of operational behavior. This “process myopia”—where the visible sequence of tasks occludes the real purpose—runs in parallel to risk blindness, leaving many organizations vulnerable to the slow creep of inefficiency, bias, and ultimately, quality failures.

The Jobs-to-Be-Done (JTBD) tool offers an effective method for reorientation. Rather than treating processes or systems as static routines, JTBD asks a deceptively simple question: What job are people actually hiring this process or tool to do? In deviation management, audit response, and even risk assessment itself, the answer to this question is the gravitational center around which effective redesign can be organized.

What Does It Mean to Hire a Process?

To “hire” a process—even when it is a regulatory obligation—means viewing the process not merely as a compliance requirement, but as a tool or mechanism that stakeholders use to achieve specific, desirable outcomes beyond simple adherence. In Jobs-to-Be-Done (JTBD), the idea of “hiring” a process reframes organizational behavior: stakeholders (such as quality professionals, operators, managers, or auditors) are seen as engaging with the process to get particular jobs done—such as ensuring product safety, demonstrating control to regulators, reducing future risk, or creating operational transparency.

When a process is regulatory-mandated—such as deviation management, change control, or batch release—the “hiring” metaphor recognizes two coexisting realities:

Dual Functions: Compliance and Value Creation

  • Compliance Function: The organization must follow the process to satisfy legal, regulatory, or contractual obligations. Not following is not an option; it’s legally or organizationally enforced.
  • Functional “Hiring”: Even for required processes, users “hire” the process to accomplish additional jobs—like protecting patients, facilitating learning from mistakes, or building organizational credibility. A well-designed process serves both external (regulatory) and internal (value-creating) goals.

Implications for Process Design

  • Stakeholders still have choices in how they interact with the process—they can engage deeply (to learn and improve) or superficially (for box-checking), depending on how well the process helps them do their “real” job.
  • If a process is viewed only as a regulatory tax, users will find ways to shortcut, minimally comply, or bypass the spirit of the requirement, undermining learning and risk mitigation.
  • Effective design ensures the process delivers genuine value, making “compliance” a natural by-product of a process stakeholders genuinely want to “hire”—because it helps them achieve something meaningful and important.

Practical Example: Deviation Management

  • Regulatory “Must”: Deviations must be documented and investigated under GMP.
  • Users “Hire” the Process to: Identify real risks early, protect quality, learn from mistakes, and demonstrate control in audits.
  • If the process enables those jobs well, it will be embraced and used effectively. If not, it becomes paperwork compliance—and loses its potential as a learning or risk-reduction tool.

To “hire” a process under regulatory obligation is to approach its use intentionally, ensuring it not only satisfies external requirements but also delivers real value for those required to use it. The ultimate goal is to design a process that people would choose to “hire” even if it were not mandatory—because it supports their intrinsic goals, such as maintaining quality, learning, and risk control.

Unpacking Jobs-to-Be-Done: The Roots of Customer-Centricity

Historical Genesis: From Marketing Myopia to Outcome-Driven Innovation

JTBD’s intellectual lineage traces back to the famous adage popularized by Theodore Levitt: “People don’t want to buy a quarter-inch drill. They want a quarter-inch hole.” This insight echoes the thesis of Levitt’s seminal 1960 Harvard Business Review article “Marketing Myopia,” and it underscores the fatal flaw of most process redesigns: overinvestment in features, tools, and procedures, while neglecting the underlying human need or outcome.

This thinking resonates strongly with Peter Drucker’s core dictum that “the purpose of a business is to create and keep a customer”—and that marketing and innovation, not internal optimization, are the only valid means to this end. The insights of Drucker and Levitt form the philosophical substrate for JTBD, framing the product, system, or process not as an end in itself, but as a means to enable desired change in someone’s “real world.”

Modern JTBD: Ulwick, Christensen, and Theory Development

Tony Ulwick, after experiencing firsthand the failure of IBM’s PCjr product, launched a search to discover how organizations could systematically identify the outcomes customers (or process users) use to judge new offerings. Ulwick formalized jobs-as-process thinking, and by marrying Six Sigma concepts with innovation research, developed the “Outcome-Driven Innovation” (ODI) method, later shared with Clayton Christensen at Harvard.

Clayton Christensen, in his disruption theory research, sharpened the framing: customers don’t simply buy products—they “hire” them to get a job done, to make progress in their lives or work. He and Bob Moesta extended this to include the emotional and social dimensions of these jobs, and added nuance on how jobs can signal category-breaking opportunities for disruptive innovation. In essence, JTBD isn’t just about features; it’s about the outcome and the experience of progress.

The JTBD tool is now well-established in business, product development, health care, and increasingly, internal process improvement.

What Is a “Job” and How Does JTBD Actually Work?

Core Premise: The “Job” as the Real Center of Process Design

A “Job” in JTBD is not a task or activity—it is the progress someone seeks in a specific context. In regulated quality systems, this reframing prompts a pivotal question: For every step in the process, what is the user actually trying to achieve?

JTBD Statement Structure:

When [situation], I want to [job], so I can [desired outcome].

  • “When a process deviation occurs, I want to quickly and accurately assess impact, so I can protect product quality without delaying production.”
  • “When reviewing supplier audit responses, I want to identify meaningful risk signals, so I can challenge assumptions before they become failures.”
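The statement template above can be captured as a simple data structure, which is handy when cataloguing many job statements across stakeholders. This is a minimal sketch; the class and field names are my own illustration, not a standard JTBD artifact.

```python
from dataclasses import dataclass

@dataclass
class JobStatement:
    """One JTBD statement: When [situation], I want to [job], so I can [outcome]."""
    situation: str
    job: str
    outcome: str

    def render(self) -> str:
        # Assemble the canonical three-part statement.
        return f"When {self.situation}, I want to {self.job}, so I can {self.outcome}."

deviation_job = JobStatement(
    situation="a process deviation occurs",
    job="quickly and accurately assess impact",
    outcome="protect product quality without delaying production",
)
print(deviation_job.render())
```

Keeping statements structured this way makes it trivial to group them by stakeholder or by lifecycle stage later in the analysis.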

The Mechanics: Job Maps, Outcome Statements, and Dimensional Analysis

Job Map:

JTBD practitioners break the “job” down into a series of steps—the job map—outlining the user’s journey to achieve the desired progress. Ulwick’s “Universal Job Map” includes steps like: Define and plan, Locate inputs, Prepare, Confirm and validate, Execute, Monitor, Modify, and Conclude.

Dimension Analysis:
A full JTBD approach considers not only the functional needs (what must be accomplished), but also emotional (how users want to feel), social (how users want to appear), and cost (what users have to give up).

Outcome Statements:
JTBD expresses desired process outcomes in solution-agnostic language: To [achieve a specific goal], [user] must [perform action] to [produce a result].

The Relationship Between Job Maps and Process Maps

Job maps and process maps represent fundamentally different approaches to understanding and documenting work, despite both being visual tools that break down activities into sequential steps. Understanding their relationship reveals why each serves distinct purposes in organizational improvement efforts.

Core Distinction: Purpose vs. Execution

Job Maps focus on what customers or users are trying to accomplish—their desired outcomes and progress independent of any specific solution or current method. A job map asks: “What is the person fundamentally trying to achieve at each step?”

Process Maps focus on how work currently gets done—the specific activities, decisions, handoffs, and systems involved in executing a workflow. A process map asks: “What are the actual steps, roles, and systems involved in completing this work?”

Job Map Structure

Job maps follow a universal eight-step method regardless of industry or solution:

  1. Define – Determine goals and plan resources
  2. Locate – Gather required inputs and information
  3. Prepare – Set up the environment for execution
  4. Confirm – Verify readiness to proceed
  5. Execute – Carry out the core activity
  6. Monitor – Assess progress and performance
  7. Modify – Make adjustments as needed
  8. Conclude – Finish or prepare for repetition
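The eight-step map above is solution-agnostic, so it can serve as a reusable scaffold for any job-mapping exercise. Below is a sketch pairing each step with a guiding question a practitioner might ask; the questions are my illustrative additions, not part of the published framework.

```python
# Ulwick's Universal Job Map as an ordered structure.
# The guiding questions are illustrative assumptions.
UNIVERSAL_JOB_MAP = [
    ("Define",   "What must the user decide and plan before starting?"),
    ("Locate",   "What inputs and information must be gathered?"),
    ("Prepare",  "How is the environment set up for execution?"),
    ("Confirm",  "How does the user verify readiness to proceed?"),
    ("Execute",  "What core activity creates the desired progress?"),
    ("Monitor",  "How are progress and performance assessed?"),
    ("Modify",   "What adjustments are made when monitoring flags a problem?"),
    ("Conclude", "How is the job finished or reset for repetition?"),
]

for i, (step, question) in enumerate(UNIVERSAL_JOB_MAP, start=1):
    print(f"{i}. {step}: {question}")
```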

Process Map Structure

Process maps vary significantly based on the specific workflow being documented and typically include:

  • Tasks and activities performed by different roles
  • Decision points where choices affect the flow
  • Handoffs between departments or systems
  • Inputs and outputs at each step
  • Time and resource requirements
  • Exception handling and alternate paths

Perspective and Scope

Job Maps maintain a solution-agnostic perspective. We can get close to universal, industry-wide job maps: whatever approach an individual organization takes, the job map remains the same, because it captures the underlying functional need, not the method of fulfillment. A job map starts an improvement effort, helping us understand what needs to exist.

Process Maps are solution-specific. They document exactly how a particular organization, system, or workflow operates, including specific tools, roles, and procedures currently in use. The process map defines what is, and is an outcome of process improvement.

JTBD vs. Design Thinking, and Other Process Redesign Models

Most process improvement methodologies—including classic “design thinking”—center on incremental improvement, risk minimization, and stakeholder consensus. As previously critiqued, design thinking’s participatory workshops and empathy prototypes can often reinforce conservative bias, indirectly perpetuating the status quo. The tendency to interview, ideate, and choose the “least disruptive” option can perpetuate the “G.I. Joe fallacy”: knowing is not enough; action emerges only through challenged structures and direct engagement.

JTBD’s strength?

It demands that organizations reframe the purpose and metrics of every step and tool: not “How do we optimize this investigation template?” but rather, “Does this investigation process help users make actual progress toward safer, more effective risk detection?” JTBD uncovers latent needs, both explicit and tacit, that design thinking’s post-it-note workshops often fail to surface.

Why JTBD Is Invaluable for Process Design in Quality Systems

JTBD Enables Auditable Process Redesign

In pharmaceutical manufacturing, deviation management is a linchpin process—defining how organizations identify, document, investigate, and respond to events that depart from expected norms. Classic improvement initiatives target cycle time, documentation accuracy, or audit readiness. But JTBD pushes deeper.

Example JTBD Analysis for Deviations:

  • Trigger: A deviation is detected.
  • Job: “I want to report and contextualize the event accurately, so I can ensure an effective response without causing unnecessary disruption.”
  • Desired Outcome: Minimized product quality risk, transparency of root causes, actionable learning, regulatory confidence.

By mapping out the jobs of different deviation process stakeholders—production staff, investigation leaders, quality approvers, regulatory auditors—organizations can surface unmet needs: e.g., “Accelerating cross-functional root cause analysis while maintaining unbiased investigation integrity”; “Helping frontline operators feel empowered rather than blamed for honest reporting”; “Ensuring remediation is prioritized and tracked.”

Revealing Hidden Friction and Underserved Needs

JTBD methodology surfaces both overt and tacit pain points, often ignored in traditional process audits:

  • Operators “hire” process workarounds when formal documentation is slow or punitive.
  • Investigators seek intuitive data access, not just fields for “root cause.”
  • Approvers want clarity, not bureaucracy.
  • Regulatory reviewers “hire” the deviation process to provide organizational intelligence—not just box-checking.

A JTBD-based diagnostic invariably shows where job performance is low but process compliance is high—a warning sign of process myopia and risk blindness.

Practical JTBD for Deviation Management: Step-by-Step Example

Job Statement and Context Definition

Define user archetypes:

  • Frontline Production Staff: “When a deviation occurs, I want a frictionless way to report it, so I can get support and feedback without being blamed.”
  • Quality Investigator: “When reviewing deviations, I want accessible, chronological data so I can detect patterns and act swiftly before escalation.”
  • Quality Leader: “When analyzing deviation trends, I want systemic insights that allow for proactive action—not just retrospection.”

Job Mapping: Stages of Deviation Lifecycle

  • Trigger/Detection: Event recognition (pattern recognition)—often leveraging both explicit SOPs and staff tacit knowledge.
  • Reporting: Document the event in a way that preserves context and allows for nuanced understanding.
  • Assessment: Rapid triage—“Is this risk emergent or routine? Is there unseen connection to a larger trend?” “Does this impact the product?”
  • Investigation: “Does the process allow multidisciplinary problem-solving, or does it force siloed closure? Are patterns shared across functions?”
  • Remediation: Job statement: “I want assurance that action will prevent recurrence and create meaningful learning.”
  • Closure and Learning Loop: “Does the process enable reflective practice and cognitive diversity—can feedback loops improve risk literacy?”

JTBD mapping reveals specific breakpoints: documentation systems that prioritize completeness over interpretability, investigation timelines that erode engagement, premature closure.

Outcome Statements for Metrics

Instead of “deviations closed on time,” measure:

  • Number of deviations generating actionable cross-functional insights.
  • Staff perception of process fairness and learning.
  • Time to credible remediation vs. time to closure.
  • Audit reviewer alignment with risk signals detected pre-close, not only post-mortem.
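To make these outcome metrics concrete, here is a hedged sketch of how two of them might be computed from deviation records. The record fields (`insight_count`, `remediation_days`, `closure_days`) are hypothetical names; real quality systems will structure this data differently.

```python
from statistics import mean

# Hypothetical deviation records; field names are illustrative assumptions.
deviations = [
    {"id": "DEV-001", "insight_count": 2, "remediation_days": 14, "closure_days": 10},
    {"id": "DEV-002", "insight_count": 0, "remediation_days": 45, "closure_days": 28},
    {"id": "DEV-003", "insight_count": 1, "remediation_days": 21, "closure_days": 30},
]

# Share of deviations that produced at least one cross-functional insight.
insight_rate = sum(1 for d in deviations if d["insight_count"] > 0) / len(deviations)

# Average gap between credible remediation and formal closure: a large
# positive gap hints at "closed on paper" deviations, where compliance
# is high but job performance is low.
remediation_lag = mean(d["remediation_days"] - d["closure_days"] for d in deviations)

print(f"Insight rate: {insight_rate:.0%}")
print(f"Avg remediation minus closure (days): {remediation_lag:.1f}")
```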

JTBD and the Apprenticeship Dividend: Pattern Recognition and Tacit Knowledge

JTBD, when deployed authentically, actively supports the development of deeper pattern recognition and tacit knowledge—qualities essential for risk resilience.

  • Structured exposure programs ensure users “hire” the process to learn common and uncommon risks.
  • Cognitive diversity teams ensure the job of “challenging assumptions” is not just theoretical.
  • True process improvement emerges when the system supports practice, reflection, and mentoring—outcomes unmeasurable by conventional improvement metrics.

JTBD Limitations: Caveats and Critical Perspective

No methodology is infallible. JTBD is only as powerful as the organization’s willingness to confront uncomfortable truths and challenge compliance-driven inertia:

  • Rigorous but Demanding: JTBD synthesis is non-“snackable” and lacks the pop-management immediacy of other tools.
  • Action Over Awareness: Knowing the job to be done is not sufficient; structures must enable action.
  • Regulatory Realities: Quality processes must satisfy regulatory standards, which are not always aligned with lived user experience. JTBD should inform, not override, compliance strategies.
  • Skill and Culture: Successful use demands qualitative interviewing skill, genuine cross-functional buy-in, and a culture of psychological safety—conditions not easily created.

Despite these challenges, JTBD remains unmatched for surfacing hidden process failures, uncovering underserved needs, and catalyzing redesign where it matters most.

Breaking Through the Status Quo

Many organizations pride themselves on their calibration routines, investigation checklists, and digital documentation platforms. But the reality is that these systems are often “hired” not to create learning, but to check boxes, push responsibility, and sustain the illusion of control. This breeds risk blindness: organizations systematically make themselves vulnerable when process myopia replaces real learning. This is zemblanity, the self-made opposite of serendipity.

JTBD’s foundational question—“What job are we hiring this process to do?”—is more than a strategic exercise. It is a countermeasure against stagnation and blindness. It insists on radical honesty, relentless engagement, and humility before the complexity of operational reality. For deviation management, JTBD is a tool not just for compliance, but for organizational resilience and quality excellence.

Quality leaders should invest in JTBD not as a “one more tool,” but as a philosophical commitment: a way to continually link theory to action, root cause to remediation, and process improvement to real progress. Only then will organizations break free of procedural conservatism, cure risk blindness, and build systems worthy of trust and regulatory confidence.

Level of Effort for Planning

[Figure: Risk-based approach for planning]

In the post “Design Lifecycle within PDCA – Planning” I laid out a design thinking approach to planning a change.

Like most activities, the level of effort should be commensurate with the level of risk. The figure above outlines different activities that can happen based on the risk inherent in the process and the problem being evaluated.

This is a great reason why Living Risk Assessments are so critical to an organization.

[Figure: Living vs. ad hoc risk assessments]

Design Lifecycle within PDCA – Planning

In the post “Review of Process/Procedure” I mentioned how the document draft and review cycle can be seen as an iterative design cycle. In this post I want to expand on the design lifecycle as a fundamental expression of PDCA that sits at the heart of all we do.

PDCA, a refresher

PDCA (and its variants) is a tried-and-true model for process improvement. In the PDCA model, work is structured in four steps: P (plan), D (do), C (check), and A (act). The intention is to create a structured cycle: plan the process in accordance with the objectives to be achieved (P), execute what was planned (D), check whether the objectives were achieved, with emphasis on verifying what went right and what went wrong (C), and identify the factors of success or failure to feed a new round of planning (A).

Conceptually, the organization becomes a fast-turning wheel: endlessly learning from mistakes and refining processes in pursuit of strategic objectives and of maximum system efficiency and effectiveness.

[Figure: PDCA cycle driving continuous improvement]

Design Lifecycle

This design lifecycle takes the PDCA spiral and spreads it across time. At the same time, it breaks down a standard set of activities and recognizes the stage gates for moving between startup (or experimentation) and continuous improvement.

[Figure: Design Lifecycle]

Identifying the Problem (Plan)

At its heart, problem-solving requires understanding a set of requirements and building for success.

I always go back to the IEEE definition: “A requirement is (1) a condition or capability needed by a user to solve a problem or achieve an objective; (2) a condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document; (3) a documented representation of a condition or capability as in (1) or (2).”

A requirement can be explicitly stated, implicit, inherited or derived from other requirements.

The first place to look for requirements is the organization itself.

Understanding the needs of the organization

The cultural needs of the organization drive the whole problem-solving and requirement-gathering activity. It starts by being clear on strategy: understanding the goals and objectives, and how those goals percolate down to the different business processes we are improving. This gives a good starting point for deciding which opportunities to explore and which problems to solve.

It is not uncommon in the problem-solving phase that the objectives and needs are not yet known, so we must work our way through figuring out what the initial need is. Go back to the fundamentals of understanding the business processes “as-is,” and review the existing regulations, standards, guidelines, and other internal sources of requirements currently followed. This is the time to interview stakeholders and go to the Gemba.

We state the problem and reframe it. Now we can move on to Requirement Elicitation.

[Figure: Identifying the Problem]

Requirement Elicitation

Requirement Elicitation is the process of probing and facilitating the stakeholders to provide more clarity and granular detail on the (usually) high-level requirements gathered so far. This is a discovery process, exploratory in nature, focused on finding enough detail that a solution can be envisioned and developed. Elicitation is not an isolated activity; it has been happening throughout the process in all the discussion, interaction, analysis, verification, and validation up to now.

You should be engaging with knowledge management throughout the cycle, but ensure there is specific engagement here.

It is a progressive process: requirement clarity arrives in increments and may need multiple rounds of probing and discussion. As new details are uncovered, the requirements are further elaborated and detailed. There is a whole toolbox of elicitation techniques, and like any engagement, it is important to prepare properly.

[Figure: Requirement Elicitation]

Requirement Analysis

Requirement Analysis is the work of extracting the requirements from the heaps of information acquired from various stakeholders and turning them into documentation in a form that is easily understood by the stakeholders, including the project team. Here we engage in requirement refinement, modification, clarification, validation, and finalization, with extensive communication throughout.

A requirement can be classified in several ways: for example, as explicit, implicit, inherited, or derived, as noted above.

We build for traceability here, so as we build and test solutions we can always trace back to the requirements.

Design the Solution

Building the solution includes change management. Any solution addresses the technical, the organizational, and the people dimensions.

Ensure you leverage risk management.

[Figure: Change Management Approach]

The Place of Empathy

In this design process, we use empathy to acquire insight into users’ (stakeholders’) needs, to inform the design process, and to create a relevant solution. Using an approach informed by cognitive empathy, we apply different methods to build up that competence and insight, enabling us to prioritize the needs of users and make the results of the process more desirable.

Psychological safety, reflexivity and sense-making inform our work.

Prepare for Startup

By engaging in Design Thinking, and moving through its three steps, we are ready for Startup: we have created a plan to execute against. Startup, which can often be Experimentation, is its own, future post.

Forget the technology, Quality 4.0 is all about thinking

Quality 4.0 is Industry 4.0, which is really just:

  • A ton of sensors (cheap, reliable sensors for everyone)
  • Data everywhere! (So much data. Honest data is good. Trust us.)
  • Collaboration (Because that never happened before technology)
  • Machine learning (this never ends well in the movies)

However, Quality 4.0 is a lot more than the technology; it is about using that technology to improve our quality management systems. Quality 4.0 is really about understanding that the world around us, and thus the organizations we work in, is full of complex, interconnected challenges and increasingly open systems of communication, and that we can no longer afford to address complex issues as we have in the past. The very simple idea behind Quality 4.0 is that current and future challenges require thinking that is consistent with a living world of complexity and change.

As such there is nothing really new about Quality 4.0; it is just a consolidation of a lot of themes of change management, knowledge management and above all system thinking.

System Thinking requires quality professionals to develop the skills to operate in a paradigm where we see our people, organizations, processes, and technology as part of the world: a set of dynamic entities displaying continually emerging patterns that arise from the interactions among many interdependent components.

There are lots of tools and methodologies for managing systems. Frankly, a whole lot of them are the same that have been in use in quality for decades; others are new tools. The crucial thing to remember about Quality 4.0 is that it is an additive and transformative way to look at quality, and quite frankly one can go back and read Deming and see the majority of this there.

When I work on systems (which, according to my job description, is my core function), I keep some principles always in mind.

  • Balance: The system creates value for the multiple stakeholders. While the ideal is to develop a design that maximizes the value for all the key stakeholders, the designer often has to compromise and balance the needs of the various stakeholders.
  • Congruence: The degree to which the system components are aligned and consistent with each other and with the other organizational systems, culture, plans, processes, information, resource decisions, and actions.
  • Convenience: The system is designed to be as convenient as possible for the participants to implement (a.k.a. user friendly). The system includes specific processes, procedures, and controls only when necessary.
  • Coordination: System components are interconnected and harmonized with the other (internal and external) components, systems, plans, processes, information, and resource decisions toward common action or effort. This goes beyond congruence and is achieved when the individual components of a system operate as a fully interconnected unit.
  • Elegance: Complexity vs. benefit: the system includes only as much complexity as is necessary to meet the stakeholders’ needs. In other words, keep the design as simple as possible, and no simpler, while delivering the desired benefits. This often requires looking at the system in new ways.
  • Human: Participants in the system are able to find joy, purpose, and meaning in their work.
  • Learning: Knowledge management, with opportunities for reflection and learning (learning loops), is designed into the system. Reflection and learning are built in at key points to encourage single- and double-loop learning from experience, to improve future implementation, and to systematically evaluate the design of the system itself.
  • Sustainability: The system effectively meets the near- and long-term needs of the current stakeholders without compromising the ability of future generations of stakeholders to meet their own needs.
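One practical way to use these principles is as a design-review checklist. The sketch below scores a candidate system design against each principle; the 0-5 scale, the threshold, and the example scores are all illustrative assumptions of mine, not an established scoring method.

```python
# Design principles as a review checklist. Scale and scores are illustrative.
PRINCIPLES = ["Balance", "Congruence", "Convenience", "Coordination",
              "Elegance", "Human", "Learning", "Sustainability"]

def weakest_principles(scores: dict, threshold: int = 3) -> list:
    """Return the principles scoring below the threshold on a 0-5 scale."""
    return [p for p in PRINCIPLES if scores.get(p, 0) < threshold]

# Hypothetical review of one system design.
review = {"Balance": 4, "Congruence": 3, "Convenience": 2, "Coordination": 4,
          "Elegance": 3, "Human": 2, "Learning": 5, "Sustainability": 3}
print(weakest_principles(review))
```

A review like this does not replace judgment; it simply makes it harder for a weak principle (here, Convenience and Human) to go unexamined.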

In order to successfully apply these principles when designing systems and processes, we need to keep the user at the forefront — striving to be sensitive to the user, to understand them, their situation, and their feelings: to be more empathetic.

[Figure: Components of empathy]

We leverage both the affective and the cognitive components of empathetic reasoning; in short, we need to both share and understand.

We are, in short, asking five major questions:

  • What is the purpose of the system? What happens in the system?
  • What is the system? What’s inside? What’s outside? Set the boundaries, the internal elements and elements of the system’s environment.
  • What are the internal structure and dependencies?
  • How does the system behave? What are the system’s emergent behaviors and do we understand their causes and dynamics?
  • What is the context? Usually in the terms of bigger systems and interacting systems.

Think holistically, think empathetically with the user, and ask questions about system behavior. Everything else falls into place from there.