Compliance Is Not Waste: Reading Quality Through Lean and the Theory of Constraints

There is a conversation that happens, in various forms, in nearly every manufacturing organization I have observed over twenty-five years in this industry. It happens in budget reviews, in operational excellence steering committees, in the hallway outside a QA office, and — most damagingly — in the unexpressed assumptions that shape how an organization is actually structured and run.

The conversation goes something like this: We spend too much on compliance. If we could just get leaner — cut the forms, shrink the quality team, streamline the approvals — we would move faster, cost less, and be more competitive. Quality and compliance are the tax we pay for being in a regulated industry. They are necessary. But they are waste.

This belief is so deeply embedded in some organizations that it never even surfaces as a conversation. It is just the water they swim in. Quality exists to satisfy regulators. Lean exists to eliminate waste. Regulators require quality. Therefore, quality is irreducible waste that must be minimized subject to regulatory tolerance.

I want to argue that this framing is not merely incomplete — it is structurally wrong in a way that causes specific, traceable organizational failures. And I want to use the frameworks these organizations claim to love — lean thinking and the Theory of Constraints — to show exactly why.

The Problem With “Necessary Non-Value-Added”

Let’s start with the lean taxonomy, because the misreading begins there.

Lean thinking, as Womack and Jones articulated it in their 1996 codification of the Toyota Production System, begins with a deceptively simple question: what does the customer value? Value is defined as a capability delivered to the customer at the right time, at the right quality, at the right price — as the customer defines it, not as we do. Everything else is waste. And waste, in the lean vocabulary, comes in varieties that have been systematically catalogued as the seven forms of muda: overproduction, waiting, transport, over-processing, inventory, motion, and defects.

This taxonomy is useful. But the translation of lean from Toyota to regulated industries has consistently produced a subtle and damaging error: the misclassification of compliance activity.

Standard lean frameworks distinguish three types of activities:

  • Value-added (VA): transforms the product or service in a way the customer is willing to pay for, done right the first time
  • Necessary non-value-added (NNVA): does not directly create value, but cannot currently be eliminated — regulatory compliance, documentation, inspections
  • Pure non-value-added (NVA): contributes nothing to the customer and should be eliminated

The intent of this classification is sound. But in practice, the “necessary” in NNVA comes to be heard as “tolerated.” And tolerated waste, in organizations under cost pressure, becomes something to minimize — to satisfy the regulator with the least possible resource investment. The goal shifts from building quality into the process to performing the ritual that proves quality exists.

This is compliance theater. And it is not lean. It is the opposite of lean.

The lean enterprise insight that most organizations never reach is this: compliance activity, properly understood, is not in the NNVA category at all. When it is functioning correctly, it is in the value-added category — because patients, the ultimate customers of pharmaceutical manufacturing, explicitly require that their medicines be manufactured in a controlled, verified, and trustworthy way. Regulatory requirements are the formalized expression of what patients and society are, in fact, willing to pay for. Meeting them is not a tax on production. It is production’s purpose.

The Lean Enterprise Institute’s own post-Womack thinking, which increasingly frames lean around value creation rather than waste elimination, is instructive here: the institute has argued explicitly that it is better to focus on value, not waste. The insight is that waste-focused thinking is derivative. You identify waste by understanding value first. Organizations that never ask what quality really provides to the patient — what value their compliance system is actually creating — will inevitably misclassify it.


What the Theory of Constraints Sees

If lean thinking provides the value framework that should reframe compliance, the Theory of Constraints provides the systems lens that explains why misclassifying compliance is so operationally dangerous.

Eli Goldratt, who introduced TOC through his 1984 book The Goal, summarized his entire philosophy in a single word when challenged by an interviewer: focus. TOC’s central observation is that every system is limited in its throughput by a single constraint — the weakest link in the chain — and that improving anything other than the constraint does not improve the system. In fact, local optimization of non-constraint resources can actively harm the system by increasing WIP, creating queues at the constraint, and masking the real problem.

Goldratt’s five focusing steps are the operating framework:

  1. Identify the constraint — the single resource or process that limits system throughput
  2. Exploit the constraint — squeeze every unit of capacity from it without additional investment
  3. Subordinate everything else to the constraint — make all other decisions serve the constraint’s needs
  4. Elevate the constraint — if still limiting, invest to increase its capacity
  5. Repeat — never let inertia become the new constraint

The insight for quality and compliance comes from steps two and three, and it is counterintuitive.

Poor quality before the constraint wastes constraint capacity. Every defect, every rework event, every out-of-specification result that reaches the constraint forces the constraint to process something that should have been caught earlier, or to process it again. And because operating expense is largely fixed, every additional good unit through the constraint flows almost entirely to the bottom line: a 5% improvement in quality yield at the constraint — a modest target — can produce a 50% improvement in system profit. That is not a theoretical number. That is the arithmetic of constrained systems.
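A toy throughput-accounting calculation makes the leverage visible. Everything below is a hypothetical illustration, not data from any real operation: contribution scales with good units through the constraint while operating expense stays fixed, so a 5% gain in good units becomes a 50% gain in profit.

```python
# Toy throughput-accounting model of a constrained system.
# All figures are hypothetical, chosen only to make the leverage visible.

def weekly_profit(good_units: int, contribution_per_unit: float,
                  operating_expense: float) -> float:
    """Profit = throughput contribution minus fixed operating expense."""
    return good_units * contribution_per_unit - operating_expense

CONTRIBUTION = 100.0          # $ contribution per good unit at the constraint
OPERATING_EXPENSE = 90_000.0  # $ fixed weekly cost in the relevant range

before = weekly_profit(1000, CONTRIBUTION, OPERATING_EXPENSE)  # 1000 good units
after = weekly_profit(1050, CONTRIBUTION, OPERATING_EXPENSE)   # 5% more good units

print(f"profit before: ${before:,.0f}")                   # $10,000
print(f"profit after:  ${after:,.0f}")                    # $15,000
print(f"profit gain:   {(after - before) / before:.0%}")  # 50%
```

The exact ratio depends on how close total contribution sits to operating expense; the point is the amplification mechanism, not the specific numbers.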

Poor quality after the constraint is equally damaging. Rework events downstream consume capacity that was produced at the constraint — the most expensive capacity in the system. A batch that fails release review, a product recall, a regulatory hold — each of these destroys throughput that originated at the constraint and cannot be recovered.

Now run this logic through a pharmaceutical manufacturing operation and ask: what happens when the quality system is treated as a cost to minimize? When the Quality Unit is under-resourced, change control is a bureaucratic hurdle rather than a knowledge management tool, CAPA is reactive rather than preventive, and environmental monitoring produces aspirational data rather than representative data?

What happens is that the quality system stops protecting the constraint. Instead of catching defects early and cheaply, it catches them late and expensively — or not at all, until a regulator finds them. The cost of poor quality does not disappear when you reduce the quality function. It is deferred, and it compounds. A widely cited rule of thumb in manufacturing quality holds that the cost of a defect increases tenfold at each major processing point — and by a factor of one hundred if the defective product reaches distribution. The invisible ledger is always open. You are either paying now, in quality investment, or you are accruing a much larger liability for later.
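The tenfold rule can be made concrete with a toy escalation table. The stage names, the $10 base cost, and the assignment of multipliers to stages below are hypothetical illustrations of the rule, not industry figures.

```python
# The tenfold defect-cost escalation rule as a toy calculation.
# Stage names and the $10 base cost are hypothetical; the multipliers follow
# the rule cited above: 10x at each processing point, 100x on escape to market.

BASE_COST = 10.0  # $ to fix the defect at the first detection point

stage_cost = {
    "incoming inspection": BASE_COST,
    "in-process check": BASE_COST * 10,
    "final release review": BASE_COST * 100,
    "distribution (recall)": BASE_COST * 100 * 100,
}
for stage, cost in stage_cost.items():
    print(f"{stage:>22}: ${cost:,.0f}")
```

A $10 catch at incoming inspection becomes a $100,000 event once the same defect has to be pulled back from the market.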

Compliance as Variation Reduction — The Real Alignment

There is a deeper argument to be made here, one that goes beyond the accounting of defect costs.

Lean and compliance share a root cause.

Lean compliance theory, drawing on cybernetic systems thinking and promise theory, articulates it cleanly: waste is the manifestation of risk that has become reality. The root cause of both waste and risk is uncertainty — what lean practitioners call variation or variability. The act of regulation — through feedback and feedforward controls — reduces that variation. This is the fundamental principle underlying both Lean Six Sigma in operations and compliance functions like quality management and safety programs. Both regulate processes to reduce uncertainty. Both create the stable, predictable conditions that enable efficient production.

Think about what pharmaceutical GMP actually requires, stripped of its bureaucratic expression. It requires that processes be defined, controlled, verified, and improved. It requires that deviations be investigated and root causes addressed. It requires that changes be evaluated for their effect on quality before implementation. It requires that data be accurate, complete, and contemporaneous. These are not arbitrary regulatory preferences. They are the description of a system that has low variation, high predictability, and consequently high throughput.

In Womack and Jones’s framework, the third principle of lean thinking is flow — removing the obstacles that cause work to stop, wait, batch, and pile up. A quality system that works correctly is flow. It prevents the batch failures, the contamination events, the regulatory holds, the supply disruptions that break flow catastrophically. The lean practitioner who sees GMP documentation as an interruption to flow has misread both lean and GMP.

The 3Ms of waste in lean thinking — muda (waste), mura (unevenness), and muri (overburden) — are illuminating here. An underpowered, compliance-theater quality system does not eliminate any of these. It creates all three:

  • Muda in the form of failed batches, investigations, reprocessing, rework, and recalls — the most expensive forms of waste in pharmaceutical manufacturing
  • Mura in the form of uneven production flow punctuated by deviations, regulatory actions, and supply disruptions — exactly the opposite of what lean seeks to achieve
  • Muri in the form of overburden on operators and quality staff who are simultaneously trying to run a manufacturing operation and manage the fallout from a quality system that was never built to actually prevent problems

A compliance system that is properly resourced, well-designed, and genuinely embedded in operations reduces muda, mura, and muri. That is the lean outcome. The path to lean pharmaceutical manufacturing runs through quality, not around it.

The Failure Modes: Where Organizations Actually Go Wrong

Having established the theoretical case, let me be direct about what the failure modes actually look like. They are not hypothetical. They are documented, expensive, and recurring.

The Cost-Cutting Misapplication of Lean

The most visible example in recent history is Boeing’s 737 MAX program.

Boeing was once a genuine lean practitioner — an organization that had absorbed Toyota’s thinking deeply enough to produce an extraordinary engineering track record. What happened in the 737 MAX era was not lean. It was what lean practitioners have called L.A.M.E. — Lean As Misguidedly Executed. Leadership used the language and tools of lean to justify cost-cutting and schedule compression, while systematically stripping out the quality oversight that lean actually depends on.

Suppliers were pressured to cut costs by 15% under “Partnering for Success” programs. Engineers and quality specialists were eliminated. The FAA’s oversight authority was progressively delegated back to Boeing’s own employees. And when the 737 MAX-9 door plug blew out during an Alaska Airlines flight at 16,000 feet, a subsequent FAA audit found Boeing had failed 33 of 89 quality control standards.

The 737 MAX grounding alone cost over $20 billion in direct expenses, compensation, and legal settlements. Boeing’s market share in commercial aviation declined as Airbus surpassed them in orders and deliveries. Ongoing quality issues caused delivery halts and revenue losses. The cost of eliminating “unnecessary” quality oversight turned out to be far larger than the overhead that was eliminated.

The lean post-mortem is unambiguous: “Boeing executives failed to lead, waved off lean.” The failure was not that lean was applied — it was that the actual principles of lean were abandoned in favor of their most superficial interpretation (cut costs, move faster) while their substance (build quality in, respect people, create stable flow) was ignored. As one analysis put it plainly: “Lean isn’t about cost-cutting — it’s about flow, quality, and customer value. When Lean is used as a blunt instrument for savings, it destroys the very efficiencies it’s meant to create.”

The Compliance Theater Misapplication

If Boeing represents lean misapplied to destroy quality, Ranbaxy represents the complementary failure: a compliance system that was performed rather than practiced.

Ranbaxy Laboratories is now a standard case study in pharmaceutical regulatory enforcement. In 2013, Ranbaxy USA pleaded guilty to felony charges and agreed to pay $500 million to resolve charges relating to the manufacture and distribution of adulterated drugs. The specific violations tell the story precisely: stability testing conducted weeks or months after the dates reported to the FDA; stability tests run on the same day rather than at prescribed intervals months apart; samples stored in conditions that did not meet specifications, without disclosure. Batch records from all manufacturing sites were found deficient.

What happened at Ranbaxy was not a series of individual compliance lapses. It was a quality system that existed primarily as documentation — as evidence for regulators — rather than as a genuine operational control. The effort spent on making things look compliant vastly exceeded the effort spent on being compliant. That is the ultimate form of compliance theater: the appearance of quality activity without its substance.

The TOC lens is revealing here. If the quality system is not actually catching defects and preventing problems, where is the constraint? In the case of a compliance-theater operation, the constraint is regulatory scrutiny itself. The organization is spending significant resources managing the appearance of compliance, managing the relationship with regulators, responding to warning letters, and paying settlements — all of which are forms of waste so catastrophic they dwarf any savings that were made by underinvesting in the quality system. The “constraint” they failed to identify was their own integrity.

When Even Toyota Got Lost

Toyota’s own history over the last two decades is a reminder that no philosophy, however elegant, confers immunity. The company that codified the Toyota Production System and became synonymous with lean excellence has also experienced very public quality and compliance crises, most notably the 2009–2011 unintended acceleration recalls and a series of subsequent safety campaigns. These episodes are not just automotive gossip; for a regulated-industry audience, they are a case study in how even a mature lean culture can drift under growth pressure, global complexity, and an erosion of problem-solving discipline.

The 2009–2011 crisis centered on reports of sudden unintended acceleration involving millions of Toyota and Lexus vehicles worldwide, triggering recalls for floor mat entrapment, “sticking” accelerator pedals, and software updates for anti-lock braking in hybrids. U.S. regulators at NHTSA and NASA ultimately found no evidence of a systemic electronic throttle defect, but they did identify concrete mechanical and design issues (pedals slow to return to idle, floor mats trapping pedals) and criticized Toyota for delayed, fragmented defect reporting and recall initiation. In parallel, plaintiffs’ experts highlighted software safety weaknesses and single points of failure in throttle control logic, arguing that the company’s legendary jidoka had not fully migrated into software-era hazard analysis and safety-critical code practices.

Operationally, the recall crisis broke some of the myths around Toyota’s infallibility. At its peak, Toyota recalled nearly eight million vehicles in the U.S. for unintended acceleration-related issues, with multiple waves of actions as new failure modes and affected models were identified. Internal documents and U.S. Department of Transportation timelines show a pattern that should look uncomfortably familiar to anyone in pharma: early field signals treated as noise, hesitance to escalate to formal defect status, narrow-scope countermeasures that addressed symptoms (floor mats) while ignoring systemic design or process questions, and a compliance posture that was more defensive than transparent until the crisis forced a reset. The financial and reputational consequences were significant — billions in recall and litigation costs and a visible dent in Toyota’s carefully cultivated quality halo.

Nor did the challenges end there. In the 2010s and 2020s Toyota has continued to run substantial safety campaigns: Takata airbag inflator replacements across many models; software issues that could deactivate ABS and traction control in certain RAV4s; and repeated fuel pump recalls for stalling risk across Toyota and Lexus vehicles, including an expanded 2025 campaign to replace high-pressure fuel pumps with improved designs at no cost to customers. In each case, the factual pattern is that defects made it into production fleets at scale, often with a multi-year lag between field emergence and comprehensive corrective action. For a lean practitioner, this is the signature of a detection and escalation system that is no longer as hypersensitive as the original Toyota plants were in the era when any worker could and would pull the andon cord, and the company would swarm the problem until it was structurally addressed.

The internal and external post-mortems on the unintended acceleration crisis are blunt about cultural drift. Analyses from academics and management scholars describe how rapid global expansion, aggressive cost targets, and supply chain complexity strained Toyota’s traditional problem-solving routines and engineering review cycles. The incident forced a re-emphasis on the very principles the Toyota Way is built on — genchi genbutsu (go and see), nemawashi (consensus-building around facts), and a preference for stopping and fixing problems at the source rather than managing around them. Toyota has since tightened defect reporting to regulators, institutionalized global quality task forces, and expanded its use of standard work and software safety analysis as active problem-solving tools, not just documentation for compliance. The lesson for pharma is not that “even Toyota has recalls,” which is a trivial observation, but that even the originator of lean can drift into treating compliance and external reporting as transactional obligations when business pressure mounts — and that recovering from that drift requires a deliberate recommitment to treating safety and quality as constraints around which the system must be designed, not as externalities to be managed.

The Quality Unit Authority Problem

More recently, and closer to home in pharmaceutical manufacturing, the pattern of Quality Unit failures in FDA warning letters documents a systemic organizational failure that follows a recognizable logic.

In 2025, FDA issued warning letters to pharmaceutical companies in China, India, and Malaysia, each citing Quality Unit deficiencies. The Chinese firm failed to establish an adequate Quality Unit with authority to ensure compliance. The Indian firm’s Quality Unit failed to maintain data integrity — torn batch records, damaged testing chromatograms, improperly completed forms. The Malaysian facility’s Quality Unit failed to provide adequate oversight of its OTC products. FDA inspection data shows Quality Unit-related citations in 6.2% of US facilities versus 23.1% in Asian operations — reflecting not a cultural difference in rigor but a structural difference in how the Quality Unit is positioned within organizational hierarchies.

These failures have a common root. When the Quality Unit lacks authority — when it is organizationally subordinated to production, when its resistance to release decisions is treated as an obstacle rather than a protection, when its resource requests are chronically undermet — it cannot perform its function. And in TOC terms, this is precisely the problem of failing to subordinate everything to the constraint.

In a pharmaceutical manufacturing system, quality assurance of the product — the thing that makes it safe and effective for patients — is the constraint on throughput in the most important sense. Not in the sense that quality should be slow or bureaucratic. But in the sense that releasing a product that is not genuinely safe and effective is not throughput. It is waste of the most catastrophic variety. A Quality Unit with insufficient authority to slow or stop a release decision it has serious concerns about is a quality system that cannot prevent the worst outcomes.

The FDA’s position is explicit: the Quality Unit is “not just a compliance requirement, but a foundational function in pharmaceutical manufacturing,” and “[d]eficiencies in QU oversight are interpreted not as isolated failures, but as signs of systemic weaknesses in the quality management system.”

The Overinterpretation Problem: Lean Cuts in the Right Place

I want to be careful here not to construct an argument that justifies any amount of quality overhead as value-added. That would be equally wrong, and the pharmaceutical industry has its own version of this error.

Good Manufacturing Practice regulations are designed to ensure that products are consistently produced and controlled according to defined quality standards. But it is common for organizations to overinterpret regulations, leading to unnecessary processes that inflate costs and reduce efficiency without improving quality or patient safety. This is the mirror image of the compliance-theater failure: rather than cutting quality substance while maintaining quality appearance, these organizations build elaborate quality structures that are internally consistent but not actually calibrated to risk.

This is muri — overburden. And in TOC terms, it has a specific effect: it creates the appearance that quality is the constraint when it is not. When operations staff wait weeks for change control approvals on low-risk process improvements, when validation cycles run to years for straightforward equipment qualifications, when analysts spend more time in the quality system than at the bench — the quality function has become an organizational bottleneck. Not because quality itself is a bottleneck, but because the quality system is poorly designed.

This matters because it feeds the anti-quality narrative in organizations. When operations leaders experience quality as slow, expensive, and bureaucratically burdensome, their intuition that “quality is waste” feels confirmed. The correct response is not to strip the quality system further but to redesign it — to apply lean thinking to the quality system itself, asking what activities genuinely produce the outcomes (patient safety, regulatory confidence, process knowledge) that we are trying to achieve, and eliminating the administrative overhead that has accumulated without contributing to those outcomes.

The pharmaceutical industry has a specific version of this challenge in the regulatory change environment. When manufacturing objectives are primarily targeted toward compliance requirements rather than patient expectations, you get short-sighted decision-making. The CAPA system is a canonical example: set in motion primarily after failures rather than truly preventively, applied inconsistently, and treated as an administrative obligation rather than a learning mechanism.

Right-sizing the quality system is lean work. It requires honest value stream mapping of the quality system itself — every procedure, every review cycle, every approval gate — and the willingness to ask whether each step genuinely contributes to quality outcomes or whether it has calcified into ritual. Risk-based approaches to quality management, allocating rigorous controls to high-risk activities and lighter-touch approaches to lower-risk ones, are the lean answer to GMP over-engineering. They are not a compromise with compliance. They are what compliance looks like when it is designed well.

Applying the Five Focusing Steps to Your Quality System

Let me be concrete about what it looks like to apply TOC thinking to a quality and compliance system. Not as a theoretical exercise, but as an operational analysis tool.

Step 1: Identify the Constraint

What, in your current quality system, is genuinely limiting throughput — not fake throughput (releasing batches that will later fail), but real throughput (consistently delivering products that meet patient needs and regulatory requirements)?

In some organizations, the constraint is investigation capacity. The investigation queue grows faster than it can be cleared. Deviations sit open for months. Root cause analysis is shallow because the team is perpetually in triage. Every new excursion that enters the system competes for attention with fifty that are already open. This is a true quality constraint — and it cascades. Open deviations block batch releases. Shallow root cause analysis means the same problem recurs. The organization is perpetually fighting fires it never fully extinguishes.

In others, the constraint is change control. Every process improvement, every equipment modification, every procedure update must pass through a change control process that is under-resourced, lacking authority, and systematically slow. The result is operational stagnation — the organization cannot improve because the mechanism for capturing and implementing improvements is clogged.

In still others, the constraint is not quality function capacity at all, but quality culture. Operations staff that do not understand why quality controls exist — or that have learned to perform around them rather than with them — create a perpetual stream of deviations, documentation errors, and control failures that consume quality function capacity and prevent any sustainable improvement.

Identifying the real constraint requires honest data. Not the data in your quality system dashboard (which measures what you already decided to measure), but the data you get from spending time in the system: how long does a CAPA stay open? What fraction of investigations reach a root cause that is actually predictive — specific enough that preventing the cause would prevent the recurrence? Where do change requests die in the queue?
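Those questions reduce to queue metrics that can be computed directly from the quality system’s raw records. A minimal sketch, using an invented record format and invented figures:

```python
# Sketch of the constraint-identification questions as queue metrics.
# The record layout, dates, and stall counts below are all hypothetical.
from datetime import date
from statistics import median

# Each CAPA record: (opened, closed) where closed is None if still open.
capas = [
    (date(2024, 1, 5), date(2024, 9, 1)),
    (date(2024, 3, 12), None),
    (date(2024, 6, 2), None),
]
TODAY = date(2025, 1, 1)

# How long does a CAPA stay open? Age open records against today.
ages = [((closed or TODAY) - opened).days for opened, closed in capas]
print(f"median CAPA age: {median(ages)} days")

# Where do change requests die? Count requests stalled per queue stage.
stalled = {"impact assessment": 14, "QA review": 31, "approval": 6}
bottleneck = max(stalled, key=stalled.get)
print(f"largest change-control queue: {bottleneck}")
```

Nothing here is sophisticated; the point is that the constraint question is answerable from data the quality system already holds, rather than from the dashboard categories chosen in advance.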

Step 2: Exploit the Constraint

Before investing in more resources, what can be done to use the existing constraint capacity more effectively?

For an investigation-constrained quality system, this might mean risk-stratifying deviations more aggressively so that the team’s best analytical capacity is reserved for high-impact events rather than being consumed equally by every logbook discrepancy. It might mean developing better templates and analytical frameworks so that each investigation starts from a higher baseline. It might mean training operations staff to capture more complete and accurate initial event descriptions so that investigations start with better data.

For a change control-constrained system, it might mean implementing tiered review pathways — a fast track for low-risk changes with minimal documentation burden, a standard track for moderate-risk changes, and full review only for high-risk changes that warrant it. This is not a compromise with GMP; it is a GMP-endorsed approach. ICH Q10 and FDA’s process validation guidance both explicitly support risk-based approaches to managing change.
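Such a tiered pathway can be sketched as a simple routing rule. The tier names, risk criteria, and review requirements below are illustrative assumptions, not a GMP prescription:

```python
# Illustrative tiered change-control router. Tier names, risk criteria,
# and review requirements are assumptions for illustration, not a GMP rule.

def route_change(patient_impact: bool, validated_state_impact: bool,
                 regulatory_filing_impact: bool) -> str:
    """Map a change's risk assessment to a review pathway."""
    if patient_impact or regulatory_filing_impact:
        return "full review"      # cross-functional board, complete documentation
    if validated_state_impact:
        return "standard review"  # QA plus affected-department approval
    return "fast track"           # like-for-like or administrative change

# A like-for-like part replacement with no quality impact goes fast track:
print(route_change(False, False, False))  # fast track
```

The value of making the rule explicit is that the fast track is defensible: the low-documentation pathway exists because a documented risk assessment routed the change there, not because someone skipped the process.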

Exploitation, in Goldratt’s sense, means getting the most out of the constraint without additional investment. In a quality context, this is about eliminating waste from quality processes — the scheduling conflicts, the approval queues, the unnecessary review loops, the redundant documentation — so that the actual analytical and judgment work gets as much of the available time as possible.

Step 3: Subordinate Everything Else to the Constraint

This is the step that most organizations skip, and it is where the most significant organizational change is required.

If investigation capacity is the constraint, then everything else in the system should be designed to protect it. Operations practices should minimize the defect rate entering the investigation queue — not to avoid scrutiny but to ensure that when investigations are required, they address genuinely significant events rather than being consumed by administrative noise. Quality management review cycles should be scheduled around the investigation queue, not around calendar convenience. Resource allocation decisions should prioritize the investigation function.

If quality culture is the constraint, then everything else must serve the culture-building effort. Training programs, visual management, how leaders respond to deviations, whether the organizational response to an excursion is blame or learning — all of these must be subordinated to the culture goal. This is not soft management theory. It is the arithmetic of constrained systems: if you cannot change the constraint, the constraint governs everything.

The organizational corollary is pointed: if quality and compliance are genuinely in the value-creating part of the system — if they are what makes throughput real rather than illusory — then everything else should subordinate to them. Production schedules, headcount decisions, capital investment priorities. Not because quality is more important as an abstract value, but because optimizing around the constraint is the only rational strategy in a constrained system.

Step 4: Elevate the Constraint

When exploitation and subordination have been exhausted and the constraint still limits throughput, it is time to invest. In a quality context, this might mean increasing investigation staffing, implementing better analytical tools, investing in training programs, or redesigning quality system architecture.

The important discipline here is sequencing. Organizations that jump immediately to “elevate” — buying an expensive quality management software system, hiring a large team, deploying complex digital tools — before exploiting and subordinating the constraint often find that the investment does not move the needle. The constraint shifts, or the new resources are consumed by the same structural inefficiencies that created the constraint in the first place.

Pharma quality and IT investments offer endless examples of this error. EQMS implementations that automate a broken process rather than fixing it. Electronic batch records deployed over fundamentally flawed process designs. Environmental monitoring platforms generating beautifully formatted reports of data that was never representative to begin with. The complexity multiplies. The actual quality outcome does not improve. Quality teams drown in documentation while missing the real signals.

Step 5: Repeat

This is where the lean and TOC frameworks converge most explicitly: perfection is not a state; it is a direction of travel. Once the current constraint is broken, the next constraint emerges. The goal is not to eliminate all constraints — that is impossible — but to keep identifying them, keep improving, and never let inertia become the new constraint.

Goldratt’s warning in Step 5 is unusually direct: do not let inertia become the constraint. This is the failure mode of organizations that solved a quality problem once and then stopped. A CAPA that addressed the root cause but was never verified for effectiveness. A validation that was robust at implementation but never updated as the process evolved. An environmental monitoring program that was representative of operations as they existed three years ago but has never been revised to reflect current facility loading or process changes.

In lean terms, this is the pursuit of perfection — Womack and Jones’s fifth principle. Not as an abstract aspiration, but as an operational discipline of continuously questioning whether current controls are still calibrated to current risk.

The Culture Behind the Framework

All of this — the lean principles, the TOC analysis, the five focusing steps — is intellectual scaffolding. The organizations that consistently fail at compliance are not failing because they lack frameworks. They are failing because they have the wrong culture, and culture is upstream of systems.

In the organizations where lean is misapplied to eliminate quality (Boeing), where compliance is performed rather than practiced (Ranbaxy), where the Quality Unit lacks authority to function as a genuine check on production decisions (the 2025 warning letters), there is a common cultural feature: the short term is consistently prioritized over the long term. Schedule pressure defeats quality judgment. This quarter’s cost reduction defeats next quarter’s reliability investment. The immediate discomfort of a delayed release is weighted more heavily than the long-term cost of a recall.

This is not unique to any particular industry or geography. The 70% lean implementation failure rate documented in Industry Week surveys is not primarily a problem of methodology. Kaizen Institute research identifies it clearly: 30-40% of lean success is tools; 60-70% is people. Organizations that treat lean as a toolkit to deploy — rather than a philosophy to embody — get the tools without the outcomes.

The same is true of quality culture. FDA’s analysis of pharmaceutical quality management maturity consistently identifies culture as the decisive variable: “When manufacturing objectives are targeted to meet compliance requirements rather than patient expectations, you get short-sighted decision making.” The Quality Maturity Model that FDA has been developing through its quality metrics initiative is explicitly designed to measure and encourage quality culture that goes beyond cGMP requirements — to recognize that sustainable quality performance requires an organizational identity, not just a management system.

What does quality culture look like when it is working? It looks like operations leadership that treats a quality hold as information rather than obstruction. It looks like Quality Unit staff who understand what they are protecting and why — who can articulate the patient impact of the decisions they are making. It looks like investigations that are genuinely curious rather than defensively conclusory. It looks like change control that is used as a knowledge management tool, capturing what was learned from each change rather than just documenting that it happened.

It also looks like a willingness to spend real money on quality infrastructure — not because regulators require it, but because the organization understands that quality investment is throughput investment. FDA’s own economic analysis of pharmaceutical quality management is unambiguous: poor quality management practices have caused billions of dollars in lost revenue over two decades, with the annual labor cost of managing drug shortages alone running to $216–359 million. The firm-level economics are equally clear: failed batches, recalls, regulatory remediation programs, consent decrees — these costs vastly exceed the investment that would have prevented them.

What to Ask of Your Own Organization

If you want to stress-test whether your organization has the right mental model of compliance and quality, there are a few questions that cut to it quickly. Treat these not as a checklist but as conversation starters — the kind of conversations that reveal whether the water you are swimming in is the right water.

On classification and value

  • How does your organization describe quality in budget conversations? Is it a cost center or an investment? What evidence would change that framing?
  • If you were to map your quality system activities against the lean value taxonomy — value-added, necessary non-value-added, pure waste — where would the bulk of quality work fall? How confident are you in that assessment? Who made it, and were quality professionals part of the conversation?

On the constraint

  • Where does throughput (good product to patients) actually get limited in your system? Is the quality system one of those places? If so, is it limited because the quality function is under-resourced, or because the quality system is poorly designed?
  • What happens in your organization when a quality hold collides with production schedule pressure? Who wins? What are the cultural and structural forces that produce that outcome?
  • Where in your quality system is the most expensive rework — the events that consume the most time, consume the most analytical capacity, generate the most re-review? Are those events being prevented, or just managed after the fact?

On waste in the quality system

  • What fraction of your CAPA actions close with a genuine, specific root cause that is different from the proximate cause? What fraction close with “operator retraining” regardless of what the investigation found?
  • How long does it take to change a low-risk SOP? If the answer is three months, you have a change control system that is producing muri without reducing muda. What would it take to redesign that pathway?
  • Which of your GMP requirements are genuinely risk-proportionate, and which reflect accumulated regulatory overinterpretation? When was the last time your organization asked that question systematically?

On culture

  • If a quality professional in your organization identifies a serious concern about a batch and recommends a hold, how does that decision get made? What is the organizational pressure on that professional? What happens to them if they are wrong?
  • When deviations occur, is the first question “who is accountable?” or “what does this tell us about our system?” Both questions have their place. The sequence matters.
  • Does your organization treat the cost of poor quality as a real cost — tracked, reported, and weighed against quality investment decisions? Or does the accounting system make poor quality costs invisible while quality investment costs are highly visible?
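The last question above can be made concrete with a toy cost-of-poor-quality ledger. This is a minimal sketch, and every figure and category name in it is an invented placeholder; the point is the structure, in which failure costs are tracked side by side with the prevention and appraisal investment that would avoid them.

```python
# A toy cost-of-poor-quality (CoPQ) ledger. All figures below are
# invented placeholders, not benchmarks; the structure is what matters:
# failure costs made visible next to the quality investment that would
# prevent them.
copq = {
    "internal_failure": {"rejected_batches": 1_200_000, "rework": 300_000},
    "external_failure": {"recall": 4_500_000, "complaint_handling": 150_000},
}
quality_investment = {"prevention": 900_000, "appraisal": 600_000}

failure_cost = sum(v for bucket in copq.values() for v in bucket.values())
investment = sum(quality_investment.values())
print(f"failure cost ${failure_cost:,} vs quality investment ${investment:,}")
```

Even a crude ledger like this changes the conversation: the accounting system no longer makes poor-quality costs invisible while quality investment costs stay highly visible.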

A Different Synthesis

The organizations that get this right — that build quality and compliance systems that genuinely support lean performance rather than impeding it — share a set of operational beliefs that are worth naming explicitly.

They believe that quality is not a department. It is a property of the system. The Quality Unit has a specific role, authority, and set of responsibilities. But quality outcomes are produced by the entire organization — by operations staff who understand why controls exist, by engineering teams who build quality into process design, by leadership that treats quality data as decision-relevant information rather than audit risk management.

They believe that the cost of poor quality is always larger than the cost of good quality. Not in some abstract, long-run way, but in the specific arithmetic of their own operation. They track it. They use it in investment decisions. They make it visible.

They believe that compliance is not the ceiling of performance, it is the floor. FDA’s Quality Maturity Model, the ICH Q10 pharmaceutical quality system guidance, the latest revisions of Annex 1 and the proposed Annex 15 expansion — all of these are regulatory frameworks that explicitly contemplate continuous improvement beyond minimum compliance. Organizations that reach the floor and stop moving are not lean organizations. They are organizations waiting for the next deviation.

And they believe that lean thinking applies to the quality system itself. Not as an excuse to cut quality oversight, but as a discipline of honest evaluation: which quality activities genuinely contribute to patient outcomes and regulatory confidence, and which have accumulated as ritual? The right answer is not “all quality activity is valuable.” The right answer requires ongoing, rigorous inquiry.

The Conclusion That Is Not a Conclusion

I have been careful throughout this piece not to argue that compliance is easy, or that the regulatory burden on pharmaceutical manufacturing is always perfectly calibrated, or that every FDA requirement reflects ideal risk management. These are complicated, contentious questions that deserve their own treatment.

What I have argued is narrower and, I think, more robust: the belief that compliance and quality are categories of waste — necessary wastes, tolerated costs — is structurally wrong when examined through the frameworks that organizations claim to use. Lean thinking, correctly applied, classifies quality as value-creating when the customer (the patient) genuinely requires it. The Theory of Constraints shows that quality failures destroy constraint capacity and that protecting the constraint makes quality investment mandatory, not optional. The 3Ms of waste — muda, mura, muri — are produced by quality underinvestment, not by quality itself.

The organizations that have learned this the hardest way — Boeing through $20 billion in direct losses and two crashes, Ranbaxy through $500 million in fines and permanent reputational damage, dozens of pharmaceutical manufacturers through consent decrees and import alerts — did not fail because they over-invested in quality. They failed because they convinced themselves, using superficial applications of lean thinking, that quality was the waste to be minimized.

The frameworks were not wrong. The reading was.

The useful question is not “how little can we spend on compliance?” The useful question is “what does a quality system look like that genuinely creates value — that prevents the defects, controls the variation, captures the knowledge, and enables the throughput that makes patient outcomes and organizational sustainability possible simultaneously?”

That question is harder to answer. It requires real analysis, real investment, and a cultural commitment to treating quality outcomes as the measure of success rather than compliance checkboxes as the proxy for it.

But it is the only question that the lean tradition and the Theory of Constraints, correctly read, actually ask.

When 483s Reveal Zemblanity: The Catalent Investigation – A Case Study in Systemic Quality Failure

The Catalent Indiana Form 483 from July 2025 reads like a textbook example of my newest word, zemblanity: the patterned, preventable misfortune in risk management that accrues not from blind chance but from human agency and organizational design choices that quietly hardwire failure into our operations.

Twenty hair contamination deviations. Seven months to notify suppliers. Critical equipment failures dismissed as “not impacting SISPQ.” Media fill programs missing the very interventions they should validate. This isn’t random bad luck—it’s a quality system that has systematically normalized exactly the kinds of deviations that create inspection findings.

The Architecture of Inevitable Failure

Reading through the six major observations, three systemic patterns emerge that align perfectly with the hidden architecture of failure I discussed in my recent post on zemblanity.

Pattern 1: Investigation Theatre Over Causal Understanding

Observation 1 reveals what happens when investigations become compliance exercises rather than learning tools. The hair contamination trend—20 deviations spanning multiple product codes—received investigation resources proportional to internal requirements rather than actual risk. As I’ve written about causal reasoning versus negative reasoning, these investigations focused on what didn’t happen rather than understanding the causal mechanisms that allowed hair to systematically enter sterile products.

The tribal knowledge around plunger seating issues exemplifies this perfectly. Operators developed informal workarounds because the formal system failed them, yet when this surfaced during an investigation, it wasn’t captured as a separate deviation worthy of systematic analysis. The investigation closed the immediate problem without addressing the systemic failure that created the conditions for operator innovation in the first place.

Pattern 2: Trend Blindness and Pattern Fragmentation

The most striking aspect of this 483 is how pattern recognition failed across multiple observations. Twenty-three work orders on critical air handling systems. Ten work orders on a single critical water system. Recurring membrane failures. Each treated as isolated maintenance issues rather than signals of systematic degradation.

This mirrors what I’ve discussed about normalization of deviance—where repeated occurrences of problems that don’t immediately cause catastrophe gradually shift our risk threshold. The work orders document a clear pattern of equipment degradation, yet each was risk-assessed as “not impacting SISPQ” without apparent consideration of cumulative or interactive effects.

Pattern 3: Control System Fragmentation

Perhaps most revealing is how different control systems operated in silos. Visual inspection systems that couldn’t detect the very defects found during manual inspection. Environmental monitoring that didn’t include the most critical surfaces. Media fills that omitted interventions documented as root causes of previous failures.

This isn’t about individual system inadequacy—it’s about what happens when quality systems evolve as collections of independent controls rather than integrated barriers designed to work together.

Solutions: From Zemblanity to Serendipity

Drawing from the approaches I’ve developed on this blog, here’s how Catalent could transform their quality system from one that breeds inevitable failure to one that creates conditions for quality serendipity:

Implement Causally Reasoned Investigations

The Energy Safety Canada white paper I discussed earlier this year offers a powerful framework for moving beyond counterfactual analysis. Instead of concluding that operators “failed to follow procedure” regarding stopper installation, investigate why the procedure was inadequate for the equipment configuration. Instead of noting that supplier notification was delayed seven months, understand the systemic factors that made immediate notification unlikely.

Practical Implementation:

  • Retrain investigators in causal reasoning techniques
  • Require investigation sponsors (area managers) to set clear expectations for causal analysis
  • Implement structured causal analysis tools like Cause-Consequence Analysis
  • Focus on what actually happened and why it made sense to people at the time
  • Implement rubrics to guide consistency

Build Integrated Barrier Systems

The take-the-best heuristic I recently explored offers a powerful lens for barrier analysis. Rather than implementing multiple independent controls, identify the single most causally powerful barrier that would prevent each failure type, then design supporting barriers that enhance rather than compete with the primary control.

For hair contamination specifically:

  • Implement direct stopper surface monitoring as the primary barrier
  • Design visual inspection systems specifically to detect proteinaceous particles
  • Create supplier qualification that includes contamination risk assessment
  • Establish real-time trend analysis linking supplier lots to contamination events

Establish Dynamic Trend Integration

Traditional trending treats each system in isolation—environmental monitoring trends, deviation trends, CAPA trends, maintenance trends. The Catalent 483 shows what happens when these parallel trend systems fail to converge into integrated risk assessment.

Integrated Trending Framework:

  • Create cross-functional trend review combining all quality data streams
  • Implement predictive analytics linking maintenance patterns to quality risks
  • Establish trigger points where equipment degradation patterns automatically initiate quality investigations
  • Design Product Quality Reviews that explicitly correlate equipment performance with product quality data
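The trigger-point idea in the list above can be sketched in a few lines. This is a hedged illustration, not a production rule: the record format, the five-work-order threshold, and the one-year window are all assumptions a site would calibrate to its own equipment criticality and history.

```python
from collections import Counter
from datetime import date

# Hypothetical work-order records: (equipment_id, completion_date).
# In practice these would come from the maintenance system; the
# threshold and window are illustrative assumptions, not regulatory
# values.
WORK_ORDER_THRESHOLD = 5   # repeat work orders that should trigger review
WINDOW_DAYS = 365          # rolling window for counting repeats

def equipment_needing_investigation(work_orders, as_of,
                                    threshold=WORK_ORDER_THRESHOLD,
                                    window_days=WINDOW_DAYS):
    """Return equipment IDs whose repeat-work-order count within the
    rolling window meets or exceeds the trigger threshold."""
    recent = [eq for eq, completed in work_orders
              if (as_of - completed).days <= window_days]
    counts = Counter(recent)
    return sorted(eq for eq, n in counts.items() if n >= threshold)

# Example: a single air handler with six work orders in six months
# trips the trigger; a one-off water-system order does not.
orders = [("AHU-01", date(2025, m, 1)) for m in range(1, 7)] \
         + [("WFI-02", date(2025, 3, 1))]
print(equipment_needing_investigation(orders, date(2025, 7, 1)))  # → ['AHU-01']
```

The design point is that the trigger fires on the *pattern*, not on any single work order — exactly the convergence the 23-work-order air handling history never received.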

Transform CAPA from Compliance to Learning

The recurring failures documented in this 483—repeated hair findings after CAPA implementation, continued equipment failures after “repair”—reflect what I’ve called the effectiveness paradox. Traditional CAPA focuses on thoroughness over causal accuracy.

CAPA Transformation Strategy:

  • Implement a proper CAPA hierarchy, prioritizing elimination and replacement over detection and mitigation
  • Establish effectiveness criteria before implementation, not after
  • Create learning-oriented CAPA reviews that ask “What did this teach us about our system?”
  • Link CAPA effectiveness directly to recurrence prevention rather than procedural compliance
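The hierarchy in the first bullet can be made explicit as an ordered selection rule. A minimal sketch follows; the tier names are assumptions loosely modeled on the hierarchy of controls, so substitute your own site's taxonomy.

```python
# A sketch of the CAPA hierarchy described above, ordered from
# strongest (eliminate the error mode) to weakest (detect or mitigate
# after the fact). Tier names are illustrative assumptions.
CAPA_HIERARCHY = [
    "elimination",   # remove the failure mode entirely (e.g. design change)
    "replacement",   # substitute a less error-prone process or material
    "facilitation",  # engineering aids that make the right action easier
    "detection",     # inspection or monitoring that catches the defect
    "mitigation",    # procedures/retraining that reduce consequence
]

def strongest_feasible_action(candidate_actions):
    """Given proposed CAPA actions tagged by tier, return the one
    highest in the hierarchy. 'Operator retraining' always lands in
    the weakest tier, which is exactly why it should rarely win."""
    rank = {tier: i for i, tier in enumerate(CAPA_HIERARCHY)}
    return min(candidate_actions, key=lambda a: rank[a["tier"]])

actions = [
    {"tier": "mitigation", "action": "retrain operators on stopper seating"},
    {"tier": "replacement", "action": "switch to pre-seated stopper components"},
]
print(strongest_feasible_action(actions)["action"])
# → switch to pre-seated stopper components
```

Forcing every CAPA proposal through a ranking like this makes the default-to-retraining pattern visible in the data rather than buried in individual closure decisions.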

Build Anticipatory Quality Architecture

The most sophisticated element would be creating what I call “quality serendipity”—systems that create conditions for positive surprises rather than inevitable failures. This requires moving from reactive compliance to anticipatory risk architecture.

Anticipatory Elements:

  • Implement supplier performance modeling that predicts contamination risk before it manifests
  • Create equipment degradation models that trigger quality assessment before failure
  • Establish operator feedback systems that capture emerging risks in real-time
  • Design quality reviews that explicitly seek weak signals of system stress

The Cultural Foundation

None of these technical solutions will work without addressing the cultural foundation that allowed this level of systematic failure to persist. The 483’s most telling detail isn’t any single observation—it’s the cumulative picture of an organization where quality indicators were consistently rationalized rather than interrogated.

As I’ve written about quality culture, without psychological safety and learning orientation, people won’t commit to building and supporting robust quality systems. The tribal knowledge around plunger seating, the normalization of recurring equipment failures, the seven-month delay in supplier notification—these suggest a culture where adaptation to system inadequacy became preferable to system improvement.

The path forward requires leadership that creates conditions for quality serendipity: reward pattern recognition over problem solving, celebrate early identification of weak signals, and create systems that make the right choice the easy choice.

Beyond Compliance: Building Anti-Fragile Quality

The Catalent 483 offers more than a cautionary tale—it provides a roadmap for quality transformation. Every observation represents an invitation to build quality systems that become stronger under stress rather than more brittle.

Organizations that master this transformation—moving from zemblanity-generating systems to serendipity-creating ones—will find that quality becomes not just a regulatory requirement but a competitive advantage. They’ll detect risks earlier, respond more effectively, and create the kind of operational resilience that turns disruption into opportunity.

The choice is clear: continue managing quality as a collection of independent compliance activities, or build integrated systems designed to create the conditions for sustained quality success. The Catalent case shows us what happens when we choose poorly. The frameworks exist to choose better.


What patterns of “inevitable failure” do you see in your own quality systems? How might shifting from negative reasoning to causal understanding transform your approach to investigations? Share your thoughts—this conversation about quality transformation is one we need to have across the industry.

The Importance of a Quality Plan

In the ever-evolving landscape of pharmaceutical manufacturing, quality management has become a cornerstone of success. Two key frameworks guiding this pursuit of excellence are the ICH Q10 Pharmaceutical Quality System and the FDA’s Quality Management Maturity (QMM) program. At the heart of these initiatives lies the quality plan – a crucial document that outlines an organization’s approach to ensuring consistent product quality and continuous improvement.

What is a Quality Plan?

A quality plan serves as a roadmap for achieving quality objectives and ensuring that all stakeholders are aligned in their pursuit of excellence.

Key components of a quality plan typically include:

  1. Organizational objectives to drive quality
  2. Steps involved in the processes
  3. Allocation of resources, responsibilities, and authority
  4. Specific documented standards, procedures, and instructions
  5. Testing, inspection, and audit programs
  6. Methods for measuring achievement of quality objectives

Aligning with ICH Q10 Management Responsibilities

ICH Q10 provides a model for an effective pharmaceutical quality system that goes beyond the basic requirements of Good Manufacturing Practice (GMP). To meet ICH Q10 management responsibilities, a quality plan should address the following areas:

1. Management Commitment

The quality plan should clearly articulate top management’s commitment to quality. This includes allocating necessary resources, participating in quality system oversight, and fostering a culture of quality throughout the organization.

2. Quality Policy and Objectives

Align your quality plan with your organization’s overall quality policy. Define specific, measurable quality objectives that support the broader goals of quality realization, establishing and maintaining a state of control, and facilitating continual improvement.

3. Planning

Outline the strategic approach to quality management, including how quality considerations are integrated into product lifecycle stages from development through to discontinuation.

4. Resource Management

Detail how resources (human, financial, and infrastructural) will be allocated to support quality initiatives. This includes provisions for training and competency development of personnel.

5. Management Review

Establish a process for regular management review of the quality system’s performance. This should include assessing the need for changes to the quality policy, objectives, and other elements of the quality system.

Aligning with FDA’s Quality Management Maturity Model

The FDA’s QMM program aims to encourage pharmaceutical manufacturers to go beyond basic compliance and foster a culture of quality and continuous improvement. To align your quality plan with QMM principles, consider incorporating the following elements:

1. Quality Culture

Describe how your organization will foster a strong quality culture mindset. This includes promoting open communication, encouraging employee engagement in quality initiatives, and recognizing quality-focused behaviors.

2. Continuous Improvement

Detail processes for identifying areas where quality management practices can be enhanced. This might include regular assessments, benchmarking against industry best practices, and implementing improvement projects.

3. Risk Management

Outline a proactive approach to risk management that goes beyond basic compliance. This should include processes for identifying, assessing, and mitigating risks to product quality and supply chain reliability.

4. Performance Metrics

Define key performance indicators (KPIs) that will be used to measure and monitor quality performance. These metrics should align with the FDA’s focus on product quality, patient safety, and supply chain reliability.
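Two of the metrics most often discussed in the quality metrics conversation can be sketched directly. The metric names below echo FDA's quality metrics discussions (lot acceptance rate, invalidated OOS rate), but the record format is an illustrative assumption, not a standard schema.

```python
# Hedged sketch of two commonly cited quality KPIs computed from
# simple batch/lab records. Record shapes are assumptions.
def lot_acceptance_rate(lots):
    """Accepted lots / lots attempted, as a percentage."""
    accepted = sum(1 for lot in lots if lot["disposition"] == "accepted")
    return 100.0 * accepted / len(lots)

def invalidated_oos_rate(oos_results):
    """Invalidated OOS results / total OOS results, as a percentage.
    A high rate can signal investigations that explain results away
    rather than finding causes."""
    if not oos_results:
        return 0.0
    invalidated = sum(1 for r in oos_results if r["invalidated"])
    return 100.0 * invalidated / len(oos_results)

lots = [{"disposition": "accepted"}] * 9 + [{"disposition": "rejected"}]
oos = [{"invalidated": True}] + [{"invalidated": False}] * 3
print(f"Lot acceptance: {lot_acceptance_rate(lots):.1f}%")   # → 90.0%
print(f"Invalidated OOS: {invalidated_oos_rate(oos):.1f}%")  # → 25.0%
```

The value of these KPIs is in their trend and in the questions they provoke, not in any single quarter's number.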

5. Knowledge Management

Describe systems and processes for capturing, sharing, and utilizing knowledge gained throughout the product lifecycle. This supports informed decision-making and continuous improvement.

The SOAR Analysis

A SOAR Analysis is a strategic planning framework that focuses on an organization’s positive aspects and future potential. The acronym SOAR stands for Strengths, Opportunities, Aspirations, and Results.

Key Components

  1. Strengths: This quadrant identifies what the organization excels at, its assets, capabilities, and greatest accomplishments.
  2. Opportunities: This section explores external circumstances, potential for growth, and how challenges can be reframed as opportunities.
  3. Aspirations: This part focuses on the organization’s vision for the future, dreams, and what it aspires to achieve.
  4. Results: This quadrant outlines the measurable outcomes that will indicate success in achieving the organization’s aspirations.
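The four quadrants above map naturally onto a small data structure, which makes it easy to capture a brainstorming session and render it back as the 2×2 matrix. A minimal sketch, with the example entries invented for illustration:

```python
from dataclasses import dataclass, field

# The SOAR 2x2 as a data structure. Field names mirror the quadrants
# above; the example entries are invented placeholders.
@dataclass
class SoarAnalysis:
    strengths: list = field(default_factory=list)
    opportunities: list = field(default_factory=list)
    aspirations: list = field(default_factory=list)
    results: list = field(default_factory=list)

    def as_matrix(self):
        """Return the quadrants in the conventional 2x2 layout:
        present state (S, O) on the top row, future (A, R) below."""
        return [
            [("Strengths", self.strengths), ("Opportunities", self.opportunities)],
            [("Aspirations", self.aspirations), ("Results", self.results)],
        ]

soar = SoarAnalysis(
    strengths=["strong deviation closure discipline"],
    opportunities=["PAT tools maturing for real-time release"],
    aspirations=["be the benchmark site for right-first-time"],
    results=["lot acceptance rate >= 99% within two years"],
)
print([name for row in soar.as_matrix() for name, _ in row])
# → ['Strengths', 'Opportunities', 'Aspirations', 'Results']
```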

Characteristics and Benefits

  • Positive Focus: Unlike SWOT analysis, SOAR emphasizes strengths and opportunities rather than weaknesses and threats.
  • Collaborative Approach: It engages stakeholders at all levels of the organization, promoting a shared vision.
  • Action-Oriented: SOAR is designed to guide constructive conversations and lead to actionable strategies.
  • Future-Focused: While addressing current strengths and opportunities, SOAR also projects a vision for the future.

Application

SOAR analysis is typically conducted through team brainstorming sessions and visualized using a 2×2 matrix. It can be applied to various contexts, including business strategy, personal development, and organizational change.

By leveraging existing strengths and opportunities to pursue shared aspirations and measurable results, SOAR analysis provides a framework for positive organizational growth and strategic planning.

The SOAR Analysis for Quality Plan Writing

Utilizing a SOAR (Strengths, Opportunities, Aspirations, Results) analysis can be an effective approach to drive the writing of a quality plan. This strategic planning tool focuses on positive aspects and future potential, making it particularly useful for developing a forward-looking quality plan. Here’s how you can leverage SOAR analysis in this process:

Conducting the SOAR Analysis

Strengths

Begin by identifying your organization’s current strengths related to quality. Consider:

  • Areas where your organization excels in quality management
  • Significant quality-related accomplishments
  • Unique quality offerings that set you apart from competitors

Ask questions like:

  • What are our greatest quality-related assets and capabilities?
  • Where do we consistently meet or exceed quality standards?

Opportunities

Next, explore external opportunities that could enhance your quality initiatives. Look for:

  • Emerging technologies that could improve quality processes
  • Market trends that emphasize quality
  • Potential partnerships or collaborations to boost quality efforts

Consider:

  • How can we leverage external circumstances to improve our quality?
  • What new skills or resources could elevate our quality standards?

Aspirations

Envision your preferred future state for quality in your organization. This step involves:

  • Defining what you want to be known for in terms of quality
  • Aligning quality goals with overall organizational vision

Ask:

  • What is our ideal quality scenario?
  • How can we integrate quality excellence into our long-term strategy?

Results

Finally, determine measurable outcomes that will indicate success in your quality initiatives. This includes:

  • Specific, quantifiable quality metrics
  • Key performance indicators (KPIs) for quality improvement
  • Key behavior indicators (KBIs) and key risk indicators (KRIs)

Consider:

  • How will we measure progress towards our quality goals?
  • What tangible results will demonstrate our quality aspirations have been achieved?

Writing the Quality Plan

With the SOAR analysis complete, use the insights gained to craft your quality plan:

  1. Executive Summary: Provide an overview of your quality vision, highlighting key strengths and opportunities identified in the SOAR analysis.
  2. Quality Objectives: Translate your aspirations into concrete, measurable objectives. Ensure these align with the strengths and opportunities identified.
  3. Strategic Initiatives: Develop action plans that leverage your strengths to capitalize on opportunities and achieve your quality aspirations. For each initiative, specify:
    • Resources required
    • Timeline for implementation
    • Responsible parties
  4. Performance Metrics: Establish a system for tracking the results identified in your SOAR analysis. Include both leading and lagging indicators of quality performance.
  5. Continuous Improvement: Outline processes for regular review and refinement of the quality plan, incorporating feedback and new insights as they emerge.
  6. Resource Allocation: Based on the strengths and opportunities identified, detail how resources will be allocated to support quality initiatives.
  7. Training and Development: Address any skill gaps identified during the SOAR analysis, outlining plans for employee training and development in quality-related areas.
  8. Risk Management: While SOAR focuses on positives, acknowledge potential challenges and outline strategies to mitigate risks to quality objectives.

By utilizing the SOAR analysis framework, your quality plan will be grounded in your organization’s strengths, aligned with external opportunities, inspired by aspirational goals, and focused on measurable results. This approach ensures a positive, forward-looking quality strategy that engages stakeholders and drives continuous improvement.

A well-crafted quality plan serves as a bridge between regulatory requirements, industry best practices, and an organization’s specific quality goals. By aligning your quality plan with ICH Q10 management responsibilities and the FDA’s Quality Management Maturity model, you create a robust framework for ensuring product quality, fostering continuous improvement, and building a resilient, quality-focused organization.

Maturity Models, Utilizing the Validation Program as an Example

Maturity models offer significant benefits to organizations by providing a structured framework for benchmarking and assessment. Organizations can clearly understand their strengths and weaknesses by evaluating their current performance and maturity level in specific areas or processes. This assessment helps identify areas for improvement and sets a baseline for measuring progress over time. Benchmarking against industry standards or best practices also allows organizations to see how they compare to their peers, fostering a competitive edge.

One of the primary advantages of maturity models is their role in fostering a culture of continuous improvement. They provide a roadmap for growth and development, encouraging organizations to strive for higher maturity levels. This continuous improvement mindset helps organizations stay agile and adaptable in a rapidly changing business environment. By setting clear goals and milestones, maturity models guide organizations in systematically addressing deficiencies and enhancing their capabilities.

Standardization and consistency are also key benefits of maturity models. They help establish standardized practices across teams and departments, ensuring that processes are executed with the same level of quality and precision. This standardization reduces variability and errors, leading to more reliable and predictable outcomes. Maturity models create a common language and framework for communication, fostering collaboration and alignment toward shared organizational goals.

The use of maturity models significantly enhances efficiency and effectiveness. Organizations can increase productivity and make better use of their resources by streamlining operations and optimizing workflows. This leads to reduced errors, minimized rework, and improved process efficiency. The focus on continuous improvement also means that organizations are constantly seeking ways to refine and enhance their operations, leading to sustained gains in efficiency.

Maturity models play a crucial role in risk reduction and compliance. They assist organizations in identifying potential risks and implementing measures to mitigate them, ensuring compliance with relevant regulations and standards. This proactive approach to risk management helps organizations avoid costly penalties and reputational damage. Moreover, maturity models improve strategic planning and decision-making by providing a data-backed foundation for setting priorities and making informed choices.

Finally, maturity models improve communication and transparency within organizations. Providing a common communication framework increases transparency and builds trust among employees. This improved communication fosters a sense of shared purpose and collaboration, essential for achieving organizational goals. Overall, maturity models serve as valuable tools for driving continuous improvement, enhancing efficiency, and fostering a culture of excellence within organizations.

Business Process Maturity Model (BPMM)

The Business Process Maturity Model is a structured framework for assessing and improving the maturity of an organization’s business processes. It provides a systematic methodology for evaluating the effectiveness, efficiency, and adaptability of processes within an organization, guiding continuous improvement efforts.

Key Characteristics of BPMM

Assessment and Classification: BPMM helps organizations understand their current process maturity level and identify areas for improvement. It classifies processes into different maturity levels, each representing a progressive improvement in process management.

Guiding Principles: The model emphasizes a process-centric approach focusing on continuous improvement. Key principles include aligning improvements with business goals, standardization, measurement, stakeholder involvement, documentation, training, technology enablement, and governance.

Incremental Levels

BPMM typically consists of five levels, each building on the previous one:

1. Initial: Processes are ad hoc and chaotic, with little control or consistency.
2. Managed: Basic processes are established and documented, but results may vary.
3. Standardized: Processes are well-documented, standardized, and consistently executed across the organization.
4. Predictable: Processes are quantitatively measured and controlled, with data-driven decision-making.
5. Optimizing: Continuous process improvement is ingrained in the organization’s culture, focusing on innovation and optimization.
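The five levels above form an ordered scale, which makes them easy to represent in code. A minimal sketch in Python (the names and helper function are illustrative, not part of any BPMM tooling):

```python
from enum import IntEnum

class BPMMLevel(IntEnum):
    """The five BPMM maturity levels as an ordered scale."""
    INITIAL = 1       # ad hoc, chaotic processes
    MANAGED = 2       # basic and documented, but results vary
    STANDARDIZED = 3  # consistent execution across the organization
    PREDICTABLE = 4   # quantitatively measured and controlled
    OPTIMIZING = 5    # continuous improvement embedded in the culture

def next_target(current: BPMMLevel):
    """Return the next maturity level to aim for, or None at the top."""
    if current is BPMMLevel.OPTIMIZING:
        return None
    return BPMMLevel(current + 1)

# A process assessed at Managed should target Standardized next.
print(next_target(BPMMLevel.MANAGED).name)  # STANDARDIZED
```

Because the levels build on one another, the roadmap question at any assessment is always the same: what does the next level up require that the current level does not?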

Benefits of BPMM

• Improved Process Efficiency: By standardizing and optimizing processes, organizations can achieve higher efficiency and consistency, leading to better resource utilization and reduced errors.
• Enhanced Customer Satisfaction: Mature processes lead to higher product and service quality, which improves customer satisfaction.
• Better Change Management: Higher process maturity increases an organization’s ability to navigate change and realize project benefits.
• Readiness for Technology Deployment: BPMM helps ensure organizational readiness for new technology implementations, reducing the risk of failure.

Usage and Implementation

1. Assessment: Organizations can conduct BPMM assessments internally or with the help of external appraisers. These assessments involve reviewing process documentation, interviewing employees, and analyzing process outputs to determine maturity levels.
2. Roadmap for Improvement: Organizations can develop a roadmap for progressing to higher maturity levels based on the assessment results. This roadmap includes specific actions to address identified deficiencies and improve process capabilities.
3. Continuous Monitoring: Regular evaluations are crucial to ensure that processes remain effective and improvements are sustained over time.

A BPMM Example: Validation Program based on ASTM E2500

To apply the Business Process Maturity Model (BPMM) to a validation program aligned with ASTM E2500, we need to evaluate the program’s maturity across the five levels of BPMM while incorporating the key principles of ASTM E2500. Here’s how this application might look:

Level 1: Initial

At this level, the validation program is ad hoc and lacks standardization:

• Validation activities are performed inconsistently across different projects or departments.
• There’s limited understanding of ASTM E2500 principles.
• Risk assessment and scientific rationale for validation activities are not systematically applied.
• Documentation is inconsistent and often incomplete.

Level 2: Managed

The validation program shows some structure but lacks organization-wide consistency:

• Basic validation processes are established but may not fully align with ASTM E2500 guidelines.
• Some risk assessment tools are used, but not consistently across all projects.
• Subject Matter Experts (SMEs) are involved, but their roles are unclear.
• There’s increased awareness of the need for scientific justification in validation activities.

Level 3: Standardized

The validation program is well-defined and consistently implemented:

• Validation processes are standardized across the organization and align with ASTM E2500 principles.
• Risk-based approaches are consistently used to determine the scope and extent of validation activities.
• SMEs are systematically involved in the design review and verification processes.
• The concept of “verification” replaces traditional IQ/OQ/PQ, focusing on critical aspects that impact product quality and patient safety.
• Quality risk management tools (e.g., impact assessments, risk management) are routinely used to identify critical quality attributes and process parameters.

Level 4: Predictable

The validation program is quantitatively managed and controlled:

• Key Performance Indicators (KPIs) for validation activities are established and regularly monitored.
• Data-driven decision-making is used to continually improve the efficiency and effectiveness of validation processes.
• Advanced risk management techniques are employed to predict and mitigate potential issues before they occur.
• There’s a strong focus on leveraging supplier documentation and expertise to streamline validation efforts.
• Engineering procedures for quality activities (e.g., vendor technical assessments and installation verification) are formalized and consistently applied.

Level 5: Optimizing

The validation program is characterized by continuous improvement and innovation:

• There’s a culture of continuous improvement in validation processes, aligned with the latest industry best practices and regulatory expectations.
• Innovation in validation approaches is encouraged, always maintaining alignment with ASTM E2500 principles.
• The organization actively contributes to developing industry standards and best practices in validation.
• Validation activities are seamlessly integrated with other quality management systems, supporting a holistic approach to product quality and patient safety.
• Advanced technologies (e.g., artificial intelligence, machine learning) may be leveraged to enhance risk assessment and validation strategies.

Key Considerations for Implementation

1. Risk-Based Approach: At higher maturity levels, the validation program should fully embrace the risk-based approach advocated by ASTM E2500, focusing efforts on aspects critical to product quality and patient safety.
2. Scientific Rationale: As maturity increases, there should be a stronger emphasis on scientific understanding and justification for validation activities, moving away from a checklist-based approach.
3. SME Involvement: Higher maturity levels should see increased and earlier involvement of SMEs in the validation process, from equipment selection to verification.
4. Supplier Integration: More mature programs will leverage supplier expertise and documentation effectively, reducing redundant testing and improving efficiency.
5. Continuous Improvement: At the highest maturity level, the validation program should have mechanisms in place for continuous evaluation and improvement of processes, always aligned with ASTM E2500 principles and the latest regulatory expectations.

Process and Enterprise Maturity Model (PEMM)

The Process and Enterprise Maturity Model (PEMM), developed by Dr. Michael Hammer, is a comprehensive framework designed to help organizations assess and improve their process maturity. It serves as a corporate roadmap and benchmarking tool for companies aiming to become process-centric enterprises.

Key Components of PEMM

PEMM is structured around two main dimensions: Process Enablers and Organizational Capabilities. Each dimension is evaluated on a scale to determine the maturity level.

Process Enablers

These elements directly impact the performance and effectiveness of individual processes. They include:

• Design: The structure and documentation of the process.
• Performers: The individuals or teams executing the process.
• Owner: The person responsible for the process.
• Infrastructure: The tools, systems, and resources supporting the process.
• Metrics: The measurements used to evaluate process performance.

Organizational Capabilities

These capabilities create an environment that supports and sustains high-performance processes. They include:

• Leadership: The commitment and support from top management.
• Culture: The organizational values and behaviors that promote process excellence.
• Expertise: The skills and knowledge required to manage and improve processes.
• Governance: The mechanisms to oversee and guide process management activities.

Maturity Levels

Both Process Enablers and Organizational Capabilities are assessed on a scale from P0 to P4 (for processes) and E0 to E4 (for enterprise capabilities):

• P0/E0: Non-existent or ad hoc processes and capabilities.
• P1/E1: Basic, but inconsistent and poorly documented.
• P2/E2: Defined and documented, but not fully integrated.
• P3/E3: Managed and measured, with consistent performance.
• P4/E4: Optimized and continuously improved.
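A useful way to read a PEMM score sheet is that a dimension is only as strong as its weakest element: one lagging enabler holds the whole process back. A minimal self-scoring sketch in Python (the ratings below are hypothetical, for illustration only):

```python
# Hypothetical PEMM score sheet: each element rated on the 0-4 scale.
process_enablers = {"design": 3, "performers": 2, "owner": 3,
                    "infrastructure": 2, "metrics": 1}
enterprise_capabilities = {"leadership": 3, "culture": 2,
                           "expertise": 2, "governance": 3}

def overall_level(scores: dict) -> int:
    """The dimension is only as mature as its weakest element."""
    return min(scores.values())

def weakest(scores: dict) -> list:
    """Elements holding the overall level down -- the improvement targets."""
    floor = overall_level(scores)
    return [name for name, score in scores.items() if score == floor]

print(overall_level(process_enablers))  # 1
print(weakest(process_enablers))        # ['metrics']
```

Here, even though design and ownership score well, weak metrics cap the process at level 1; that is exactly the kind of diagnosis the model is meant to surface.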

Benefits of PEMM

• Self-Assessment: PEMM is designed to be simple enough for organizations to conduct their own assessments without needing external consultants.
• Empirical Evidence: It encourages the collection of data to support process improvements rather than relying on intuition.
• Engagement: Involves all levels of the organization in the process journey, turning employees into advocates for change.
• Roadmap for Improvement: Provides a clear path for organizations to follow in their process improvement efforts.

Application of PEMM

PEMM can be applied to any type of process within an organization, whether customer-facing or internal, core or support, transactional or knowledge-intensive. It helps organizations:

• Assess Current Maturity: Identify the current state of process and enterprise capabilities.
• Benchmark: Compare against industry standards and best practices.
• Identify Improvements: Pinpoint areas that need enhancement.
• Track Progress: Monitor the implementation and effectiveness of process improvements.

A PEMM Example: Validation Program based on ASTM E2500

To apply the Process and Enterprise Maturity Model (PEMM) to an ASTM E2500 validation program, we can evaluate the program’s maturity across the five process enablers and four enterprise capabilities defined in PEMM. Here’s how this application might look:

Process Enablers

Design:

• P-1: Basic ASTM E2500 approach implemented, but not consistently across all projects
• P-2: ASTM E2500 principles applied consistently, with clear definition of requirements, specifications, and verification activities
• P-3: Risk-based approach fully integrated into design process, with SME involvement from the start
• P-4: Continuous improvement of ASTM E2500 implementation based on lessons learned and industry best practices

Performers:

• P-1: Some staff trained on ASTM E2500 principles
• P-2: All relevant staff trained and understand their roles in the ASTM E2500 process
• P-3: Staff proactively apply risk-based thinking and scientific rationale in validation activities
• P-4: Staff contribute to improving the ASTM E2500 process and mentor others

Owner:

• P-1: Validation program has a designated owner, but role is not well-defined
• P-2: Clear ownership of the ASTM E2500 process with defined responsibilities
• P-3: Owner actively manages and improves the ASTM E2500 process
• P-4: Owner collaborates across departments to optimize the validation program

Infrastructure:

• P-1: Basic tools in place to support ASTM E2500 activities
• P-2: Integrated systems for managing requirements, risk assessments, and verification activities
• P-3: Advanced tools for risk management and data analysis to support decision-making
• P-4: Cutting-edge technology leveraged to enhance efficiency and effectiveness of the validation program

Metrics:

• P-1: Basic metrics tracked for validation activities
• P-2: Comprehensive set of metrics established to measure ASTM E2500 process performance
• P-3: Metrics used to drive continuous improvement of the validation program
• P-4: Predictive analytics used to anticipate and prevent issues in validation activities

Enterprise Capabilities

Leadership:

• E-1: Leadership aware of ASTM E2500 principles
• E-2: Leadership actively supports ASTM E2500 implementation
• E-3: Leadership drives cultural change to fully embrace risk-based validation approach
• E-4: Leadership promotes ASTM E2500 principles beyond the organization, influencing industry standards

Culture:

• E-1: Some recognition of the importance of risk-based validation
• E-2: Culture of quality and risk-awareness developing across the organization
• E-3: Strong culture of scientific thinking and continuous improvement in validation activities
• E-4: Innovation in validation approaches encouraged and rewarded

Expertise:

• E-1: Basic understanding of ASTM E2500 principles among key staff
• E-2: Dedicated team of ASTM E2500 experts established
• E-3: Deep expertise in risk-based validation approaches across multiple departments
• E-4: Organization recognized as thought leader in ASTM E2500 implementation

Governance:

• E-1: Basic governance structure for validation activities in place
• E-2: Clear governance model aligning ASTM E2500 with overall quality management system
• E-3: Cross-functional governance ensuring consistent application of ASTM E2500 principles
• E-4: Governance model that adapts to changing regulatory landscape and emerging best practices

To use this PEMM assessment:

1. Evaluate your validation program against each enabler and capability, determining the current maturity level (P-1 to P-4 for process enablers, E-1 to E-4 for enterprise capabilities).
2. Identify areas for improvement based on gaps between current and desired maturity levels.
3. Develop action plans to address these gaps, focusing on moving to the next maturity level for each enabler and capability.
4. Regularly reassess the program to track progress and adjust improvement efforts as needed.
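Steps 1 and 2 above amount to a gap analysis: rate each element, compare against the desired level, and rank the gaps. A minimal sketch in Python (all ratings and the uniform level-3 target are hypothetical):

```python
# Hypothetical current PEMM ratings for a validation program (scale 1-4).
current = {"design": 2, "performers": 1, "owner": 2, "infrastructure": 1,
           "metrics": 2, "leadership": 2, "culture": 1, "expertise": 2,
           "governance": 2}
# Illustrative target: reach level 3 across the board.
target = {name: 3 for name in current}

def gap_report(current: dict, target: dict) -> list:
    """List elements below target, largest gap first (step 2 of the process)."""
    gaps = {n: target[n] - current[n] for n in current if target[n] > current[n]}
    return sorted(gaps.items(), key=lambda item: -item[1])

for name, gap in gap_report(current, target):
    print(f"{name}: raise by {gap} level(s)")
```

Ranking by gap size is one reasonable way to prioritize action plans (step 3); an organization might equally weight elements by business criticality instead.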

Comparison Table

• Creator: BPMM, Object Management Group (OMG); PEMM, Dr. Michael Hammer
• Purpose: BPMM, assess and improve business process maturity; PEMM, roadmap and benchmarking for process-centricity
• Structure: BPMM, five levels (Initial, Managed, Standardized, Predictable, Optimizing); PEMM, two components, Process Enablers (P0-P4) and Organizational Capabilities (E0-E4)
• Focus: BPMM, process-centric, incremental improvement; PEMM, process enablers and organizational capabilities
• Assessment Method: BPMM, often requires external appraisers; PEMM, designed for self-assessment
• Guiding Principles: BPMM, standardization, measurement, continuous improvement; PEMM, empirical evidence, simplicity, organizational engagement
• Applications: BPMM, enterprise systems, business process improvement, benchmarking; PEMM, process reengineering, organizational engagement, benchmarking

In summary, while both BPMM and PEMM aim to improve business processes, BPMM is more structured and detailed, often requiring external appraisers, and focuses on incremental process improvement across organizational boundaries. In contrast, PEMM is designed for simplicity and self-assessment, emphasizing the role of process enablers and organizational capabilities to foster a supportive environment for process improvement. Both have advantages, and keeping both in mind while developing processes is key.

ISO 9000 and 10000 Series and Quality Culture

At the SQA’s Quality College, I presented a workshop on Quality Culture. In the interest of time, I glossed over the ISO standards there, and I want to come back and treat them in more detail.

ISO 9000 is a set of international standards on quality management and quality assurance developed to help companies effectively document the quality system elements needed to maintain an efficient quality system. Designed to be general in approach, they are not specific to any one industry and can be applied to organizations of any size.

There are some 25 standards in the 9000 series, with the core for this topic being:

• ISO 9000 Quality management systems – Fundamentals and vocabulary
• ISO 9001 Quality management systems – Requirements
• ISO 9004 Managing for the sustained success of an organization – A quality management approach

The ISO 10000 series supports the ISO 9000 series with more specific guidelines. Several of these are relevant to the question of quality culture:

• ISO 10010 Quality management – Guidance to understand, evaluate and improve organizational quality culture
• ISO 10015 Quality management – Guidelines for competence management and people development
• ISO 10018 Quality management – Guidelines on people involvement and competence