Applying Jobs-to-Be-Done to Risk Management

In my recent exploration of the Jobs-to-Be-Done (JTBD) tool for process improvement, I examined how this customer-centric approach could revolutionize our understanding of deviation management. I want to extend that analysis to another fundamental challenge in pharmaceutical quality: risk management.

As we grapple with increasing regulatory complexity, accelerating technological change, and the persistent threat of risk blindness, most organizations remain trapped in what I call “compliance theater”—performing risk management activities that satisfy auditors but fail to build genuine organizational resilience. JTBD is a useful tool as we move beyond this theater toward risk management that actually creates value.

The Risk Management Jobs Users Actually Hire

When quality professionals, executives, and regulatory teams engage with risk management processes, what job are they really trying to accomplish? The answer reveals a profound disconnect between organizational intent and actual capability.

The Core Functional Job

“When facing uncertainty that could impact product quality, patient safety, or business continuity, I want to systematically understand and address potential threats, so I can make confident decisions and prevent surprise failures.”

This job statement immediately exposes the inadequacy of most risk management systems. They focus on documentation rather than understanding, assessment rather than decision enablement, and compliance rather than prevention.

The Consumption Jobs: The Hidden Workload

Risk management involves numerous consumption jobs that organizations often ignore:

  • Evaluation and Selection: “I need to choose risk assessment methodologies that match our operational complexity and regulatory environment.”
  • Implementation and Training: “I need to build organizational risk capability without creating bureaucratic overhead.”
  • Maintenance and Evolution: “I need to keep our risk approach current as our business and threat landscape evolves.”
  • Integration and Communication: “I need to ensure risk insights actually influence business decisions rather than gathering dust in risk registers.”

These consumption jobs represent the difference between risk management systems that organizations grudgingly tolerate and those they genuinely want to “hire.”

The Eight-Step Risk Management Job Map

Applying JTBD’s universal job map to risk management reveals where current approaches systematically fail:

1. Define: Establishing Risk Context

What users need: Clear understanding of what they’re assessing, why it matters, and what decisions the risk analysis will inform.

Current reality: Risk assessments often begin with template completion rather than context establishment, leading to generic analyses that don’t support actual decision-making.

2. Locate: Gathering Risk Intelligence

What users need: Access to historical data, subject matter expertise, external intelligence, and tacit knowledge about how things actually work.

Current reality: Risk teams typically work from documentation rather than engaging with operational reality, missing the pattern recognition and apprenticeship dividend that experienced practitioners possess.

3. Prepare: Creating Assessment Conditions

What users need: Diverse teams, psychological safety for honest risk discussions, and structured approaches that challenge rather than confirm existing assumptions.

Current reality: Risk assessments often involve homogeneous teams working through predetermined templates, perpetuating the GI Joe fallacy—believing that knowledge of risk frameworks prevents risky thinking.

4. Confirm: Validating Assessment Readiness

What users need: Confidence that they have sufficient information, appropriate expertise, and clear success criteria before proceeding.

Current reality: Risk assessments proceed regardless of information quality or team readiness, driven by schedule rather than preparation.

5. Execute: Conducting Risk Analysis

What users need: Systematic identification of risks, analysis of interconnections, scenario testing, and development of robust mitigation strategies.

Current reality: Risk analysis often becomes risk scoring—reducing complex phenomena to numerical ratings that provide false precision rather than genuine insight.

6. Monitor: Tracking Risk Reality

What users need: Early warning systems that detect emerging risks and validate the effectiveness of mitigation strategies.

Current reality: Risk monitoring typically involves periodic register updates rather than active intelligence gathering, missing the dynamic nature of risk evolution.

7. Modify: Adapting to New Information

What users need: Responsive adjustment of risk strategies based on monitoring feedback and changing conditions.

Current reality: Risk assessments often become static documents, updated only during scheduled reviews rather than when new information emerges.

8. Conclude: Capturing Risk Learning

What users need: Systematic capture of risk insights, pattern recognition, and knowledge transfer that builds organizational risk intelligence.

Current reality: Risk analysis conclusions focus on compliance closure rather than learning capture, missing opportunities to build the organizational memory that prevents risk blindness.

The Emotional and Social Dimensions

Risk management involves profound emotional and social jobs that traditional approaches ignore:

  • Confidence: Risk practitioners want to feel genuinely confident that significant threats have been identified and addressed, not just that procedures have been followed.
  • Intellectual Satisfaction: Quality professionals are attracted to rigorous analysis and robust reasoning—risk management should engage their analytical capabilities, not reduce them to form completion.
  • Professional Credibility: Risk managers want to be perceived as strategic enablers rather than bureaucratic obstacles—as trusted advisors who help organizations navigate uncertainty rather than create administrative burden.
  • Organizational Trust: Executive teams want assurance that their risk management capabilities are genuinely protective, not merely compliant.

What’s Underserved: The Innovation Opportunities

JTBD analysis reveals four critical areas where current risk management approaches systematically underserve user needs:

Risk Intelligence

Current systems document known risks but fail to develop early warning capabilities, pattern recognition across multiple contexts, or predictive insights about emerging threats. Organizations need risk management that builds institutional awareness, not just institutional documentation.

Decision Enablement

Risk assessments should create confidence for strategic decisions, enable rapid assessment of time-sensitive opportunities, and provide scenario planning that prepares organizations for multiple futures. Instead, most risk management creates decision paralysis through endless analysis.

Organizational Capability

Effective risk management should build risk literacy across all levels, create cultural resilience that enables honest risk conversations, and develop adaptive capacity to respond when risks materialize. Current approaches often centralize risk thinking rather than distributing risk capability.

Stakeholder Trust

Risk management should enable transparent communication about threats and mitigation strategies, demonstrate competence in risk anticipation, and provide regulatory confidence in organizational capabilities. Too often, risk management creates opacity rather than transparency.

Canvas representation of the JTBD

Moving Beyond Compliance Theater

The JTBD framework helps us address a key challenge in risk management: many organizations place excessive emphasis on “table stakes” such as regulatory compliance and documentation requirements, while neglecting vital aspects like intelligence, enablement, capability, and trust that contribute to genuine resilience.

This represents a classic case of process myopia—becoming so focused on risk management activities that we lose sight of the fundamental job those activities should accomplish. Organizations perfect their risk registers while remaining vulnerable to surprise failures, not because they lack risk management processes, but because those processes fail to serve the jobs users actually need accomplished.

Design Principles for User-Centered Risk Management

  • Context Over Templates: Begin risk analysis with clear understanding of decisions to be informed rather than forms to be completed.
  • Intelligence Over Documentation: Prioritize systems that build organizational awareness and pattern recognition rather than risk libraries.
  • Engagement Over Compliance: Create risk processes that attract rather than burden users, recognizing that effective risk management requires active intellectual participation.
  • Learning Over Closure: Structure risk activities to build institutional memory and capability rather than simply completing assessment cycles.
  • Integration Over Isolation: Ensure risk insights flow naturally into operational decisions rather than remaining in separate risk management systems.

Hiring Risk Management for Real Jobs

The most dangerous risk facing pharmaceutical organizations may be risk management systems that create false confidence while building no real capability. JTBD analysis reveals why: these systems optimize for regulatory approval rather than user needs, creating elaborate processes that nobody genuinely wants to “hire.”

True risk management begins with understanding what jobs users actually need accomplished: building confidence for difficult decisions, developing organizational intelligence about threats, creating resilience against surprise failures, and enabling rather than impeding business progress. Organizations that design risk management around these jobs will develop competitive advantages in an increasingly uncertain world.

The choice is clear: continue performing compliance theater, or build risk management systems that organizations genuinely want to hire. In a world where zemblanity—the tendency to encounter negative, foreseeable outcomes—threatens every quality system, only the latter approach offers genuine protection.

Risk management should not be something organizations endure. It should be something they actively seek because it makes them demonstrably better at navigating uncertainty and protecting what matters most.

The Jobs-to-Be-Done (JTBD) Tool: Origins, Function, and Value for Quality Systems

In the relentless march of quality and operational improvement, frameworks, methodologies, and tools abound, but true breakthroughs are rare. There is a persistent challenge: organizations often become locked into their own best practices, relying on habitual process reforms that seldom address the deeper why of operational behavior. This “process myopia”—where the visible sequence of tasks occludes the real purpose—runs in parallel to risk blindness, leaving many organizations vulnerable to the slow creep of inefficiency, bias, and ultimately, quality failures.

The Jobs-to-Be-Done (JTBD) tool offers an effective method for reorientation. Rather than focusing on processes or systems as static routines, JTBD asks a deceptively simple question: What job are people actually hiring this process or tool to do? In deviation management, audit response, and even risk assessment itself, the answer to this question is the gravitational center around which effective redesign can be built.

What Does It Mean to Hire a Process?

To “hire” a process—even when it is a regulatory obligation—means viewing the process not merely as a compliance requirement, but as a tool or mechanism that stakeholders use to achieve specific, desirable outcomes beyond simple adherence. In Jobs-to-Be-Done (JTBD), the idea of “hiring” a process reframes organizational behavior: stakeholders (such as quality professionals, operators, managers, or auditors) are seen as engaging with the process to get particular jobs done—such as ensuring product safety, demonstrating control to regulators, reducing future risk, or creating operational transparency.

When a process is regulatory-mandated—such as deviation management, change control, or batch release—the “hiring” metaphor recognizes two coexisting realities:

Dual Functions: Compliance and Value Creation

  • Compliance Function: The organization must follow the process to satisfy legal, regulatory, or contractual obligations. Not following is not an option; it’s legally or organizationally enforced.
  • Functional “Hiring”: Even for required processes, users “hire” the process to accomplish additional jobs—like protecting patients, facilitating learning from mistakes, or building organizational credibility. A well-designed process serves both external (regulatory) and internal (value-creating) goals.

Implications for Process Design

  • Stakeholders still have choices in how they interact with the process—they can engage deeply (to learn and improve) or superficially (for box-checking), depending on how well the process helps them do their “real” job.
  • If a process is viewed only as a regulatory tax, users will find ways to shortcut, minimally comply, or bypass the spirit of the requirement, undermining learning and risk mitigation.
  • Effective design ensures the process delivers genuine value, making “compliance” a natural by-product of a process stakeholders genuinely want to “hire”—because it helps them achieve something meaningful and important.

Practical Example: Deviation Management

  • Regulatory “Must”: Deviations must be documented and investigated under GMP.
  • Users “Hire” the Process to: Identify real risks early, protect quality, learn from mistakes, and demonstrate control in audits.
  • If the process enables those jobs well, it will be embraced and used effectively. If not, it becomes paperwork compliance—and loses its potential as a learning or risk-reduction tool.

To “hire” a process under regulatory obligation is to approach its use intentionally, ensuring it not only satisfies external requirements but also delivers real value for those required to use it. The ultimate goal is to design a process that people would choose to “hire” even if it were not mandatory—because it supports their intrinsic goals, such as maintaining quality, learning, and risk control.

Unpacking Jobs-to-Be-Done: The Roots of Customer-Centricity

Historical Genesis: From Marketing Myopia to Outcome-Driven Innovation

JTBD’s intellectual lineage traces back to Theodore Levitt’s famous adage: “People don’t want to buy a quarter-inch drill. They want a quarter-inch hole.” This insight, presented in his seminal 1960 Harvard Business Review article “Marketing Myopia,” underscores the fatal flaw of most process redesigns: overinvestment in features, tools, and procedures, while neglecting the underlying human need or outcome.

This thinking resonates strongly with Peter Drucker’s core dictum that “the purpose of a business is to create and keep a customer”—and that marketing and innovation, not internal optimization, are the only valid means to this end. Both Drucker’s and Levitt’s insights form the philosophical substrate for JTBD, framing the product, system, or process not as an end in itself, but as a means to enable desired change in someone’s “real world”.

Modern JTBD: Ulwick, Christensen, and Theory Development

Tony Ulwick, after experiencing firsthand the failure of IBM’s PCjr product, launched a search to discover how organizations could systematically identify the outcomes customers (or process users) use to judge new offerings. Ulwick formalized jobs-as-process thinking, and by marrying Six Sigma concepts with innovation research, developed the “Outcome-Driven Innovation” (ODI) method, later shared with Clayton Christensen at Harvard.

Clayton Christensen, in his disruption theory research, sharpened the framing: customers don’t simply buy products—they “hire” them to get a job done, to make progress in their lives or work. He and Bob Moesta extended this to include the emotional and social dimensions of these jobs, and added nuance on how jobs can signal category-breaking opportunities for disruptive innovation. In essence, JTBD isn’t just about features; it’s about the outcome and the experience of progress.

The JTBD tool is now well-established in business, product development, health care, and increasingly, internal process improvement.

What Is a “Job” and How Does JTBD Actually Work?

Core Premise: The “Job” as the Real Center of Process Design

A “Job” in JTBD is not a task or activity—it is the progress someone seeks in a specific context. In regulated quality systems, this reframing prompts a pivotal question: For every step in the process, what is the user actually trying to achieve?

JTBD Statement Structure:

When [situation], I want to [job], so I can [desired outcome].

  • “When a process deviation occurs, I want to quickly and accurately assess impact, so I can protect product quality without delaying production.”
  • “When reviewing supplier audit responses, I want to identify meaningful risk signals, so I can challenge assumptions before they become failures.”

The Mechanics: Job Maps, Outcome Statements, and Dimensional Analysis

Job Map:

JTBD practitioners break the “job” down into a series of steps—the job map—outlining the user’s journey to achieve the desired progress. Ulwick’s “Universal Job Map” includes steps like: Define and plan, Locate inputs, Prepare, Confirm and validate, Execute, Monitor, Modify, and Conclude.

Dimension Analysis:
A full JTBD approach considers not only the functional needs (what must be accomplished), but also emotional (how users want to feel), social (how users want to appear), and cost (what users have to give up).

Outcome Statements:
JTBD expresses desired process outcomes in solution-agnostic language: To [achieve a specific goal], [user] must [perform action] to [produce a result].
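To make the two statement templates concrete, here is a minimal, purely illustrative sketch (Python; the class and field names are my own, not part of any JTBD toolkit) of how job statements and outcome statements could be captured as structured data and rendered back into the standard phrasing:

```python
from dataclasses import dataclass


@dataclass
class JobStatement:
    situation: str        # "When [situation] ..."
    job: str              # "... I want to [job] ..."
    desired_outcome: str  # "... so I can [desired outcome]."

    def render(self) -> str:
        return f"When {self.situation}, I want to {self.job}, so I can {self.desired_outcome}."


@dataclass
class OutcomeStatement:
    goal: str    # "To [achieve a specific goal] ..."
    user: str    # "[user] ..."
    action: str  # "... must [perform action] ..."
    result: str  # "... to [produce a result]."

    def render(self) -> str:
        return f"To {self.goal}, {self.user} must {self.action} to {self.result}."


# Example drawn from the deviation-management job statement above.
deviation_job = JobStatement(
    situation="a process deviation occurs",
    job="quickly and accurately assess impact",
    desired_outcome="protect product quality without delaying production",
)
print(deviation_job.render())
```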

The Relationship Between Job Maps and Process Maps

Job maps and process maps represent fundamentally different approaches to understanding and documenting work, despite both being visual tools that break down activities into sequential steps. Understanding their relationship reveals why each serves distinct purposes in organizational improvement efforts.

Core Distinction: Purpose vs. Execution

Job Maps focus on what customers or users are trying to accomplish—their desired outcomes and progress independent of any specific solution or current method. A job map asks: “What is the person fundamentally trying to achieve at each step?”

Process Maps focus on how work currently gets done—the specific activities, decisions, handoffs, and systems involved in executing a workflow. A process map asks: “What are the actual steps, roles, and systems involved in completing this work?”

Job Map Structure

Job maps follow a universal eight-step method regardless of industry or solution:

  1. Define – Determine goals and plan resources
  2. Locate – Gather required inputs and information
  3. Prepare – Set up the environment for execution
  4. Confirm – Verify readiness to proceed
  5. Execute – Carry out the core activity
  6. Monitor – Assess progress and performance
  7. Modify – Make adjustments as needed
  8. Conclude – Finish or prepare for repetition
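To illustrate how a solution-agnostic job map can be used diagnostically, the sketch below (Python; the activity names are hypothetical) tags each activity in a current process map with the universal job-map step it serves and then lists the steps left with no supporting activity—one way to spot systematically underserved steps:

```python
from enum import Enum


class JobStep(Enum):
    DEFINE = 1
    LOCATE = 2
    PREPARE = 3
    CONFIRM = 4
    EXECUTE = 5
    MONITOR = 6
    MODIFY = 7
    CONCLUDE = 8


# Hypothetical mapping of current risk-assessment activities (the process map)
# to the universal job-map step each one serves.
process_activities = {
    "Complete risk assessment template": JobStep.EXECUTE,
    "Score risks and update register": JobStep.EXECUTE,
    "Annual register review": JobStep.MONITOR,
}

# Job-map steps with no supporting activity are candidates for underserved needs.
unsupported = [step.name for step in JobStep if step not in process_activities.values()]
print("Job steps with no supporting activity:", unsupported)
```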

Process Map Structure

Process maps vary significantly based on the specific workflow being documented and typically include:

  • Tasks and activities performed by different roles
  • Decision points where choices affect the flow
  • Handoffs between departments or systems
  • Inputs and outputs at each step
  • Time and resource requirements
  • Exception handling and alternate paths

Perspective and Scope

Job Maps maintain a solution-agnostic perspective. We can actually get pretty close to universal industry job maps: whatever approach an individual organization takes, the job map remains the same because it captures the underlying functional need, not the method of fulfillment. A job map starts an improvement effort, helping us understand what needs to exist.

Process Maps are solution-specific. They document exactly how a particular organization, system, or workflow operates, including specific tools, roles, and procedures currently in use. The process map defines what is, and is an outcome of process improvement.

JTBD vs. Design Thinking, and Other Process Redesign Models

Most process improvement methodologies—including classic “design thinking”—center around incremental improvement, risk minimization, and stakeholder consensus. As previously critiqued, design thinking’s participatory workshops and empathy prototypes can often reinforce conservative bias, indirectly perpetuating the status quo. The tendency to interview, ideate, and choose the “least disruptive” option can perpetuate the GI Joe fallacy: knowing is not enough; action emerges only through challenged structures and direct engagement.

JTBD’s strength?

It demands that organizations reframe the purpose and metrics of every step and tool: not “How do we optimize this investigation template?” but rather “Does this investigation process help users make actual progress toward safer, more effective risk detection?” JTBD uncovers latent needs, both explicit and tacit, that design thinking’s post-it note workshops often fail to surface.

Why JTBD Is Invaluable for Process Design in Quality Systems

JTBD Enables Auditable Process Redesign

In pharmaceutical manufacturing, deviation management is a linchpin process—defining how organizations identify, document, investigate, and respond to events that depart from expected norms. Classic improvement initiatives target cycle time, documentation accuracy, or audit readiness. But JTBD pushes deeper.

Example JTBD Analysis for Deviations:

  • Trigger: A deviation is detected.
  • Job: “I want to report and contextualize the event accurately, so I can ensure an effective response without causing unnecessary disruption.”
  • Desired Outcome: Minimized product quality risk, transparency of root causes, actionable learning, regulatory confidence.

By mapping out the jobs of different deviation process stakeholders—production staff, investigation leaders, quality approvers, regulatory auditors—organizations can surface unmet needs: e.g., “Accelerating cross-functional root cause analysis while maintaining unbiased investigation integrity”; “Helping frontline operators feel empowered rather than blamed for honest reporting”; “Ensuring remediation is prioritized and tracked.”

Revealing Hidden Friction and Underserved Needs

JTBD methodology surfaces both overt and tacit pain points, often ignored in traditional process audits:

  • Operators “hire” process workarounds when formal documentation is slow or punitive.
  • Investigators seek intuitive data access, not just fields for “root cause.”
  • Approvers want clarity, not bureaucracy.
  • Regulatory reviewers “hire” the deviation process to provide organizational intelligence—not just box-checking.

A JTBD-based diagnostic invariably shows where job performance is low, but process compliance is high—a warning sign of process myopia and risk blindness.

Practical JTBD for Deviation Management: Step-by-Step Example

Job Statement and Context Definition

Define user archetypes:

  • Frontline Production Staff: “When a deviation occurs, I want a frictionless way to report it, so I can get support and feedback without being blamed.”
  • Quality Investigator: “When reviewing deviations, I want accessible, chronological data so I can detect patterns and act swiftly before escalation.”
  • Quality Leader: “When analyzing deviation trends, I want systemic insights that allow for proactive action—not just retrospection.”

Job Mapping: Stages of Deviation Lifecycle

  • Trigger/Detection: Event recognition (pattern recognition)—often leveraging both explicit SOPs and staff tacit knowledge.
  • Reporting: Document the event in a way that preserves context and allows for nuanced understanding.
  • Assessment: Rapid triage—“Is this risk emergent or routine? Is there an unseen connection to a larger trend? Does this impact the product?”
  • Investigation: “Does the process allow multidisciplinary problem-solving, or does it force siloed closure? Are patterns shared across functions?”
  • Remediation: Job statement: “I want assurance that action will prevent recurrence and create meaningful learning.”
  • Closure and Learning Loop: “Does the process enable reflective practice and cognitive diversity—can feedback loops improve risk literacy?”

JTBD mapping reveals specific breakpoints: documentation systems that prioritize completeness over interpretability, investigation timelines that erode engagement, premature closure.

Outcome Statements for Metrics

Instead of “deviations closed on time,” measure:

  • Number of deviations generating actionable cross-functional insights.
  • Staff perception of process fairness and learning.
  • Time to credible remediation vs. time to closure.
  • Audit reviewer alignment with risk signals detected pre-close, not only post-mortem.
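As a minimal sketch of how such outcome-oriented metrics might be computed, the following Python example assumes hypothetical deviation-record fields (an insight flag, a fairness survey score, and a “credible remediation” date); your quality system will capture these differently, if at all:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class DeviationRecord:
    opened: date
    closed: date
    remediation_effective: Optional[date]  # when remediation was judged credible, if at all
    cross_functional_insight: bool         # produced an actionable cross-functional insight
    fairness_score: int                    # staff perception of fairness/learning, 1-5 survey


def outcome_metrics(records: list[DeviationRecord]) -> dict:
    """Summarize deviations by outcomes rather than by on-time closure."""
    with_insight = sum(r.cross_functional_insight for r in records)
    mean_fairness = sum(r.fairness_score for r in records) / len(records)
    remediation_lag_days = [
        (r.remediation_effective - r.closed).days
        for r in records
        if r.remediation_effective is not None
    ]
    return {
        "deviations_with_cross_functional_insight": with_insight,
        "mean_fairness_score": round(mean_fairness, 2),
        "mean_days_from_closure_to_credible_remediation": (
            sum(remediation_lag_days) / len(remediation_lag_days) if remediation_lag_days else None
        ),
    }
```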

JTBD and the Apprenticeship Dividend: Pattern Recognition and Tacit Knowledge

JTBD, when deployed authentically, actively supports the development of deeper pattern recognition and tacit knowledge—qualities essential for risk resilience.

  • Structured exposure programs ensure users “hire” the process to learn common and uncommon risks.
  • Cognitively diverse teams ensure that the job of “challenging assumptions” is not just theoretical.
  • True process improvement emerges when the system supports practice, reflection, and mentoring—outcomes unmeasurable by conventional improvement metrics.

JTBD Limitations: Caveats and Critical Perspective

No methodology is infallible. JTBD is only as powerful as the organization’s willingness to confront uncomfortable truths and challenge compliance-driven inertia:

  • Rigorous but Demanding: JTBD synthesis is non-“snackable” and lacks the pop-management immediacy of other tools.
  • Action Over Awareness: Knowing the job to be done is not sufficient; structures must enable action.
  • Regulatory Realities: Quality processes must satisfy regulatory standards, which are not always aligned with lived user experience. JTBD should inform, not override, compliance strategies.
  • Skill and Culture: Successful use demands qualitative interviewing skill, genuine cross-functional buy-in, and a culture of psychological safety—conditions not easily created.

Despite these challenges, JTBD remains unmatched for surfacing hidden process failures, uncovering underserved needs, and catalyzing redesign where it matters most.

Breaking Through the Status Quo

Many organizations pride themselves on their calibration routines, investigation checklists, and digital documentation platforms. But the reality is that these systems are often “hired” not to create learning—but to check boxes, push responsibility, and sustain the illusion of control. This leads to risk blindness; when process myopia replaces real learning, organizations systematically make themselves vulnerable to zemblanity.

JTBD’s foundational question—“What job are we hiring this process to do?”—is more than a strategic exercise. It is a countermeasure against stagnation and blindness. It insists on radical honesty, relentless engagement, and humility before the complexity of operational reality. For deviation management, JTBD is a tool not just for compliance, but for organizational resilience and quality excellence.

Quality leaders should invest in JTBD not as a “one more tool,” but as a philosophical commitment: a way to continually link theory to action, root cause to remediation, and process improvement to real progress. Only then will organizations break free of procedural conservatism, cure risk blindness, and build systems worthy of trust and regulatory confidence.

Maturity Models, Utilizing the Validation Program as an Example

Maturity models offer significant benefits to organizations by providing a structured framework for benchmarking and assessment. Organizations can clearly understand their strengths and weaknesses by evaluating their current performance and maturity level in specific areas or processes. This assessment helps identify areas for improvement and sets a baseline for measuring progress over time. Benchmarking against industry standards or best practices also allows organizations to see how they compare to their peers, fostering a competitive edge.

One of the primary advantages of maturity models is their role in fostering a culture of continuous improvement. They provide a roadmap for growth and development, encouraging organizations to strive for higher maturity levels. This continuous improvement mindset helps organizations stay agile and adaptable in a rapidly changing business environment. By setting clear goals and milestones, maturity models guide organizations in systematically addressing deficiencies and enhancing their capabilities.

Standardization and consistency are also key benefits of maturity models. They help establish standardized practices across teams and departments, ensuring that processes are executed with the same level of quality and precision. This standardization reduces variability and errors, leading to more reliable and predictable outcomes. Maturity models create a common language and framework for communication, fostering collaboration and alignment toward shared organizational goals.

The use of maturity models significantly enhances efficiency and effectiveness. Organizations can increase productivity and make better use of their resources by identifying opportunities to streamline operations and optimize workflows. This leads to reduced errors, minimized rework, and improved process efficiency. The focus on continuous improvement also means that organizations are constantly seeking ways to refine and enhance their operations, leading to sustained gains in efficiency.

Maturity models play a crucial role in risk reduction and compliance. They assist organizations in identifying potential risks and implementing measures to mitigate them, ensuring compliance with relevant regulations and standards. This proactive approach to risk management helps organizations avoid costly penalties and reputational damage. Moreover, maturity models improve strategic planning and decision-making by providing a data-backed foundation for setting priorities and making informed choices.

Finally, maturity models improve communication and transparency within organizations. Providing a common communication framework increases transparency and builds trust among employees. This improved communication fosters a sense of shared purpose and collaboration, essential for achieving organizational goals. Overall, maturity models serve as valuable tools for driving continuous improvement, enhancing efficiency, and fostering a culture of excellence within organizations.

Business Process Maturity Model (BPMM)

The BPMM is a structured framework used to assess and improve the maturity of an organization’s business processes. It provides a systematic methodology to evaluate the effectiveness, efficiency, and adaptability of processes within an organization, guiding continuous improvement efforts.

Key Characteristics of BPMM

Assessment and Classification: BPMM helps organizations understand their current process maturity level and identify areas for improvement. It classifies processes into different maturity levels, each representing a progressive improvement in process management.

Guiding Principles: The model emphasizes a process-centric approach focusing on continuous improvement. Key principles include aligning improvements with business goals, standardization, measurement, stakeholder involvement, documentation, training, technology enablement, and governance.

Incremental Levels

BPMM typically consists of five levels, each building on the previous one:

  1. Initial: Processes are ad hoc and chaotic, with little control or consistency.
  2. Managed: Basic processes are established and documented, but results may vary.
  3. Standardized: Processes are well-documented, standardized, and consistently executed across the organization.
  4. Predictable: Processes are quantitatively measured and controlled, with data-driven decision-making.
  5. Optimizing: Continuous process improvement is ingrained in the organization’s culture, focusing on innovation and optimization.

Benefits of BPMM

  • Improved Process Efficiency: By standardizing and optimizing processes, organizations can achieve higher efficiency and consistency, leading to better resource utilization and reduced errors.
  • Enhanced Customer Satisfaction: Mature processes lead to higher product and service quality, which improves customer satisfaction.
  • Better Change Management: Higher process maturity increases an organization’s ability to navigate change and realize project benefits.
  • Readiness for Technology Deployment: BPMM helps ensure organizational readiness for new technology implementations, reducing the risk of failure.

Usage and Implementation

  1. Assessment: Organizations can conduct BPMM assessments internally or with the help of external appraisers. These assessments involve reviewing process documentation, interviewing employees, and analyzing process outputs to determine maturity levels.
  2. Roadmap for Improvement: Organizations can develop a roadmap for progressing to higher maturity levels based on the assessment results. This roadmap includes specific actions to address identified deficiencies and improve process capabilities.
  3. Continuous monitoring and regular evaluations are crucial to ensure that processes remain effective and improvements are sustained over time.
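As a simple illustration of steps 1 and 2 (assessment and roadmap), here is a minimal sketch with hypothetical process areas and ratings that scores each area against the five BPMM levels and reports its gap to a target level:

```python
# Hypothetical self-assessment: current BPMM level per process area and the gap to a target.
BPMM_LEVELS = {1: "Initial", 2: "Managed", 3: "Standardized", 4: "Predictable", 5: "Optimizing"}

current_scores = {           # assumed ratings from an internal assessment
    "Deviation management": 2,
    "Change control": 3,
    "Validation program": 3,
}
target_level = 4             # e.g., aiming for "Predictable"

for area, level in sorted(current_scores.items(), key=lambda kv: kv[1]):
    gap = target_level - level
    print(f"{area}: currently {BPMM_LEVELS[level]} (level {level}); "
          f"{gap} level(s) below the '{BPMM_LEVELS[target_level]}' target")
```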

A BPMM Example: Validation Program based on ASTM E2500

To apply the Business Process Maturity Model (BPMM) to a validation program aligned with ASTM E2500, we need to evaluate the program’s maturity across the five levels of BPMM while incorporating the key principles of ASTM E2500. Here’s how this application might look:

Level 1: Initial

At this level, the validation program is ad hoc and lacks standardization:

  • Validation activities are performed inconsistently across different projects or departments.
  • There’s limited understanding of ASTM E2500 principles.
  • Risk assessment and scientific rationale for validation activities are not systematically applied.
  • Documentation is inconsistent and often incomplete.

Level 2: Managed

The validation program shows some structure but lacks organization-wide consistency:

  • Basic validation processes are established but may not fully align with ASTM E2500 guidelines.
  • Some risk assessment tools are used, but not consistently across all projects.
  • Subject Matter Experts (SMEs) are involved, but their roles are unclear.
  • There’s increased awareness of the need for scientific justification in validation activities.

Level 3: Standardized

The validation program is well-defined and consistently implemented:

  • Validation processes are standardized across the organization and align with ASTM E2500 principles.
  • Risk-based approaches are consistently used to determine the scope and extent of validation activities.
  • SMEs are systematically involved in the design review and verification processes.
  • The concept of “verification” replaces traditional IQ/OQ/PQ, focusing on critical aspects that impact product quality and patient safety.
  • Quality risk management tools (e.g., impact assessments, risk management) are routinely used to identify critical quality attributes and process parameters.

Level 4: Predictable

The validation program is quantitatively managed and controlled:

  • Key Performance Indicators (KPIs) for validation activities are established and regularly monitored.
  • Data-driven decision-making is used to continually improve the efficiency and effectiveness of validation processes.
  • Advanced risk management techniques are employed to predict and mitigate potential issues before they occur.
  • There’s a strong focus on leveraging supplier documentation and expertise to streamline validation efforts.
  • Engineering procedures for quality activities (e.g., vendor technical assessments and installation verification) are formalized and consistently applied.

Level 5: Optimizing

The validation program is characterized by continuous improvement and innovation:

  • There’s a culture of continuous improvement in validation processes, aligned with the latest industry best practices and regulatory expectations.
  • Innovation in validation approaches is encouraged, always maintaining alignment with ASTM E2500 principles.
  • The organization actively contributes to developing industry standards and best practices in validation.
  • Validation activities are seamlessly integrated with other quality management systems, supporting a holistic approach to product quality and patient safety.
  • Advanced technologies (e.g., artificial intelligence, machine learning) may be leveraged to enhance risk assessment and validation strategies.

Key Considerations for Implementation

  1. Risk-Based Approach: At higher maturity levels, the validation program should fully embrace the risk-based approach advocated by ASTM E2500, focusing efforts on aspects critical to product quality and patient safety.
  2. Scientific Rationale: As maturity increases, there should be a stronger emphasis on scientific understanding and justification for validation activities, moving away from a checklist-based approach.
  3. SME Involvement: Higher maturity levels should see increased and earlier involvement of SMEs in the validation process, from equipment selection to verification.
  4. Supplier Integration: More mature programs will leverage supplier expertise and documentation effectively, reducing redundant testing and improving efficiency.
  5. Continuous Improvement: At the highest maturity level, the validation program should have mechanisms in place for continuous evaluation and improvement of processes, always aligned with ASTM E2500 principles and the latest regulatory expectations.

Process and Enterprise Maturity Model (PEMM)

The Process and Enterprise Maturity Model (PEMM), developed by Dr. Michael Hammer, is a comprehensive framework designed to help organizations assess and improve their process maturity. It is a corporate roadmap and benchmarking tool for companies aiming to become process-centric enterprises.

Key Components of PEMM

PEMM is structured around two main dimensions: Process Enablers and Organizational Capabilities. Each dimension is evaluated on a scale to determine the maturity level.

Process Enablers

These elements directly impact the performance and effectiveness of individual processes. They include:

  • Design: The structure and documentation of the process.
  • Performers: The individuals or teams executing the process.
  • Owner: The person responsible for the process.
  • Infrastructure: The tools, systems, and resources supporting the process.
  • Metrics: The measurements used to evaluate process performance.

Organizational Capabilities

These capabilities create an environment that supports and sustains high-performance processes. They include:

  • Leadership: The commitment and support from top management.
  • Culture: The organizational values and behaviors that promote process excellence.
  • Expertise: The skills and knowledge required to manage and improve processes.
  • Governance: The mechanisms to oversee and guide process management activities.

Maturity Levels

Both Process Enablers and Organizational Capabilities are assessed on a scale from P0 to P4 (for processes) and E0 to E4 (for enterprise capabilities):

  • P0/E0: Non-existent or ad hoc processes and capabilities.
  • P1/E1: Basic, but inconsistent and poorly documented.
  • P2/E2: Defined and documented, but not fully integrated.
  • P3/E3: Managed and measured, with consistent performance.
  • P4/E4: Optimized and continuously improved.

Benefits of PEMM

  • Self-Assessment: PEMM is designed to be simple enough for organizations to conduct their own assessments without needing external consultants.
  • Empirical Evidence: It encourages the collection of data to support process improvements rather than relying on intuition.
  • Engagement: Involves all levels of the organization in the process journey, turning employees into advocates for change.
  • Roadmap for Improvement: Provides a clear path for organizations to follow in their process improvement efforts.

Application of PEMM

PEMM can be applied to any type of process within an organization, whether customer-facing or internal, core or support, transactional or knowledge-intensive. It helps organizations:

  • Assess Current Maturity: Identify the current state of process and enterprise capabilities.
  • Benchmark: Compare against industry standards and best practices.
  • Identify Improvements: Pinpoint areas that need enhancement.
  • Track Progress: Monitor the implementation and effectiveness of process improvements.

A PEMM Example: Validation Program based on ASTM E2500

To apply the Process and Enterprise Maturity Model (PEMM) to an ASTM E2500 validation program, we can evaluate the program’s maturity across the five process enablers and four enterprise capabilities defined in PEMM. Here’s how this application might look:

Process Enablers

Design:

  • P-1: Basic ASTM E2500 approach implemented, but not consistently across all projects
  • P-2: ASTM E2500 principles applied consistently, with clear definition of requirements, specifications, and verification activities
  • P-3: Risk-based approach fully integrated into design process, with SME involvement from the start
  • P-4: Continuous improvement of ASTM E2500 implementation based on lessons learned and industry best practices

Performers:

  • P-1: Some staff trained on ASTM E2500 principles
  • P-2: All relevant staff trained and understand their roles in the ASTM E2500 process
  • P-3: Staff proactively apply risk-based thinking and scientific rationale in validation activities
  • P-4: Staff contribute to improving the ASTM E2500 process and mentor others

Owner:

  • P-1: Validation program has a designated owner, but role is not well-defined
  • P-2: Clear ownership of the ASTM E2500 process with defined responsibilities
  • P-3: Owner actively manages and improves the ASTM E2500 process
  • P-4: Owner collaborates across departments to optimize the validation program

Infrastructure:

  • P-1: Basic tools in place to support ASTM E2500 activities
  • P-2: Integrated systems for managing requirements, risk assessments, and verification activities
  • P-3: Advanced tools for risk management and data analysis to support decision-making
  • P-4: Cutting-edge technology leveraged to enhance efficiency and effectiveness of the validation program

Metrics:

  • P-1: Basic metrics tracked for validation activities
  • P-2: Comprehensive set of metrics established to measure ASTM E2500 process performance
  • P-3: Metrics used to drive continuous improvement of the validation program
  • P-4: Predictive analytics used to anticipate and prevent issues in validation activities

Enterprise Capabilities

Leadership:

  • E-1: Leadership aware of ASTM E2500 principles
  • E-2: Leadership actively supports ASTM E2500 implementation
  • E-3: Leadership drives cultural change to fully embrace risk-based validation approach
  • E-4: Leadership promotes ASTM E2500 principles beyond the organization, influencing industry standards

Culture:

  • E-1: Some recognition of the importance of risk-based validation
  • E-2: Culture of quality and risk-awareness developing across the organization
  • E-3: Strong culture of scientific thinking and continuous improvement in validation activities
  • E-4: Innovation in validation approaches encouraged and rewarded

Expertise:

  • E-1: Basic understanding of ASTM E2500 principles among key staff
  • E-2: Dedicated team of ASTM E2500 experts established
  • E-3: Deep expertise in risk-based validation approaches across multiple departments
  • E-4: Organization recognized as thought leader in ASTM E2500 implementation

Governance:

  • E-1: Basic governance structure for validation activities in place
  • E-2: Clear governance model aligning ASTM E2500 with overall quality management system
  • E-3: Cross-functional governance ensuring consistent application of ASTM E2500 principles
  • E-4: Governance model that adapts to changing regulatory landscape and emerging best practices

To use this PEMM assessment:

  1. Evaluate your validation program against each enabler and capability, determining the current maturity level (P-1 to P-4 for process enablers, E-1 to E-4 for enterprise capabilities).
  2. Identify areas for improvement based on gaps between current and desired maturity levels.
  3. Develop action plans to address these gaps, focusing on moving to the next maturity level for each enabler and capability.
  4. Regularly reassess the program to track progress and adjust improvement efforts as needed.
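A minimal sketch of steps 1 and 2 above, with hypothetical ratings: record the current PEMM level for each process enabler and enterprise capability, then list the gaps against a desired level:

```python
# Hypothetical PEMM self-assessment: current ratings for process enablers (P) and
# enterprise capabilities (E), and the gaps against a desired maturity level.
process_enablers = {"Design": 2, "Performers": 1, "Owner": 2, "Infrastructure": 1, "Metrics": 1}
enterprise_capabilities = {"Leadership": 2, "Culture": 1, "Expertise": 2, "Governance": 1}
desired_level = 3  # target of P-3 / E-3


def maturity_gaps(ratings: dict[str, int], prefix: str, target: int) -> list[str]:
    """List every dimension rated below the target level, with its gap."""
    return [
        f"{name}: {prefix}-{level} -> {prefix}-{target} (gap of {target - level})"
        for name, level in ratings.items()
        if level < target
    ]


for line in maturity_gaps(process_enablers, "P", desired_level) + maturity_gaps(
    enterprise_capabilities, "E", desired_level
):
    print(line)
```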

Comparison Table

| Aspect | BPMM | PEMM |
| --- | --- | --- |
| Creator | Object Management Group (OMG) | Dr. Michael Hammer |
| Purpose | Assess and improve business process maturity | Roadmap and benchmarking for process-centricity |
| Structure | Five levels: Initial, Managed, Standardized, Predictable, Optimizing | Two components: Process Enablers (P0-P4), Organizational Capabilities (E0-E4) |
| Focus | Process-centric, incremental improvement | Process enablers and organizational capabilities |
| Assessment Method | Often requires external appraisers | Designed for self-assessment |
| Guiding Principles | Standardization, measurement, continuous improvement | Empirical evidence, simplicity, organizational engagement |
| Applications | Enterprise systems, business process improvement, benchmarking | Process reengineering, organizational engagement, benchmarking |

In summary, while both BPMM and PEMM aim to improve business processes, BPMM is more structured and detailed, often requiring external appraisers, and focuses on incremental process improvement across organizational boundaries. In contrast, PEMM is designed for simplicity and self-assessment, emphasizing the role of process enablers and organizational capabilities to foster a supportive environment for process improvement. Both have advantages, and keeping both in mind while developing processes is key.

Design Problem Solving into the Process

Good processes and systems have mechanisms designed into them to identify when a problem occurs and to ensure it gets the right rigor of problem-solving. A model like Art Smalley’s can be helpful here.

Each and every process should go through the following steps:

  1. Define which problems should be escalated and which should not. Everyone working in a process should have the same definition of what constitutes a problem. Oftentimes we end up with a hierarchy of issues: those solved within the process (Level 1) and those that go to a root cause process such as deviation/CAPA (Level 2).
  2. Identify the ways to notice a problem. Make the work as visual as possible so it is easier to detect the problem.
  3. Define the escalation method. There should be one clear way to surface a problem. There are many ways to create a signal, but it should be simple, timely, and very clear.

These three elements make up the request for help.

The next two steps make up the response to that request:

  4. Who is the right person to respond? Supervisor? Area management? Process Owner? Quality?
  5. How does the individual respond, and most importantly, when? This should be standardized so the other end of that help chain is not wondering whether, when, and in what form that help is going to arrive.

In order for this to work, it is important to identify clear ownership of the problem. There must always be one person clearly accountable, even if they are only responsible for parts of the problem, so they can push it forward.

It is easy for problem-solving to stall, so make sure progress is transparent. Knowing what is being worked on, and what is not, is critical.

Prioritization is key. Not every problem needs solving, so have a mechanism to ensure the right problems are being solved in the process.
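To show how such an escalation design might be made explicit, here is a minimal sketch assuming the two-level hierarchy described above; the routing criteria, owners, and response times are placeholder assumptions, not recommendations:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Problem:
    description: str
    product_impact: bool  # could this impact the product?
    recurring: bool       # has this been seen before / is it part of a trend?
    detected_at: datetime


def escalate(problem: Problem) -> dict:
    """Route a problem to Level 1 (in-process) or Level 2 (root cause process)."""
    if problem.product_impact or problem.recurring:
        return {
            "level": 2,                 # root cause process (deviation/CAPA)
            "owner": "Quality",         # the single accountable owner
            "respond_by": problem.detected_at + timedelta(hours=4),
        }
    return {
        "level": 1,                     # solved within the process
        "owner": "Area supervisor",
        "respond_by": problem.detected_at + timedelta(hours=1),
    }


# Example: a recurring issue with no product impact still goes to the root cause process.
issue = Problem("Balance printout missing", product_impact=False, recurring=True,
                detected_at=datetime(2022, 5, 16, 9, 30))
print(escalate(issue))
```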

Problem solving within a process

Enabling the Process Owner to Drive Improvement

The process owner is a central part of business process management, yet is often the one we take for granted. In this session, the speaker will share, through a case study, how organizations can build strong process owners and leverage them to drive improvement in a highly regulated environment. Participants in this session will learn:

  • How to identify process owners and competencies for success
  • How to build a change management program that leverages process owners as the guiding coalition
  • How to create and execute a training program for process owners

2022 ASQ WORLD CONFERENCE ON QUALITY & IMPROVEMENT

The presentation I gave at the 2022 World Conference on Quality & Improvement.