The Pre-Mortem

A pre-mortem is a proactive risk management exercise that enables pharmaceutical teams to anticipate and mitigate failures before they occur. This tool can transform compliance from a reactive checklist into a strategic asset for safeguarding product quality.


Pre-Mortems in Pharmaceutical Quality Systems

In GMP environments, where deviations in drug substance purity or drug product stability can cascade into global recalls, pre-mortems provide a structured framework to challenge assumptions. For example, a team developing a monoclonal antibody might hypothesize that aggregation occurred during drug substance purification due to inadequate temperature control in bioreactors. By contrast, a tablet manufacturing team might explore why dissolution specifications failed because of inconsistent API particle size distribution. These exercises align with ICH Q9’s requirement for systematic hazard analysis and ICH Q10’s emphasis on knowledge management, forcing teams to document tacit insights about process boundaries and failure modes.

Pre-mortems excel at identifying “unknown unknowns” through creative thinking; their value lies in uncovering risks that traditional assessments miss. The pre-mortem is best leveraged to identify areas of focus that may warrant a deeper tool, such as a Failure Mode and Effects Analysis (FMEA). In practice, pre-mortems and FMEAs are synergistic: a layered approach satisfies ICH Q9’s requirement for both creative hazard identification and structured risk evaluation, turning hypothetical failures into validated control strategies.

By combining pre-mortems’ exploratory power with FMEA’s rigor, teams can address both systemic and technical risks, ensuring compliance while advancing operational resilience.


Implementing Pre-Mortems

1. Scenario Definition and Stakeholder Engagement

Begin by framing the hypothetical failure as the risk question. For drug substances, this might involve declaring, “The API batch was rejected due to genotoxic impurity levels exceeding ICH M7 limits.” For drug products, consider, “Lyophilized vials failed sterility testing due to vial closure integrity breaches.” Assemble a team spanning technical operations, quality control, and regulatory affairs to ensure diverse viewpoints.

2. Failure Mode Elicitation

To overcome groupthink biases in traditional brainstorming, teams should begin with brainwriting—a silent, written idea-generation technique. The prompt is a request to list reasons behind the risk question, such as “List reasons why the API batch failed impurity specifications”. Participants anonymously write risks on structured templates for 10–15 minutes, ensuring all experts contribute equally.

The collected ideas are then synthesized into a fishbone (Ishikawa) diagram, categorizing causes into the relevant branches using the 6M technique (Man, Machine, Method, Material, Measurement, and Mother Nature/environment).

This method ensures comprehensive risk identification while maintaining traceability for regulatory audits.
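
To make the output concrete, here is a minimal Python sketch of how anonymized brainwriting entries might be grouped under the 6M branches. The branch names follow the standard 6Ms; the example causes are hypothetical illustrations, not findings from any actual assessment.

```python
from collections import defaultdict

# The six standard fishbone (Ishikawa) branches -- the "6 Ms".
SIX_M = ("Man", "Machine", "Method", "Material", "Measurement", "Mother Nature")

def build_fishbone(contributions):
    """Group anonymized brainwriting entries (branch, cause) into 6M branches."""
    fishbone = defaultdict(list)
    for branch, cause in contributions:
        if branch not in SIX_M:
            raise ValueError(f"Unknown 6M branch: {branch}")
        fishbone[branch].append(cause)
    return fishbone

# Hypothetical entries for the risk question
# "Why did the API batch fail impurity specifications?"
entries = [
    ("Method", "Inadequate nitrogen sparging during final isolation"),
    ("Machine", "Temperature excursion in the crystallization vessel"),
    ("Measurement", "HPLC method not validated for the genotoxic impurity"),
]

for branch, causes in build_fishbone(entries).items():
    print(branch, "->", causes)
```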

3. Risk Prioritization and Control Strategy Development

Risks identified during the pre-mortem are evaluated using a severity–probability–detectability matrix, structured similarly to an FMEA.
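
A minimal sketch of that prioritization step is shown below, assuming a conventional ordinal scale for each factor and a risk priority number (RPN) computed as severity × probability × detectability. The scales, action threshold, and example risks are illustrative assumptions, not prescriptions.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int       # 1 (negligible) .. 5 (critical impact on patient/product)
    probability: int    # 1 (rare) .. 5 (frequent)
    detectability: int  # 1 (certain to detect) .. 5 (unlikely to detect)

    @property
    def rpn(self) -> int:
        """Risk priority number, as in a conventional FMEA."""
        return self.severity * self.probability * self.detectability

risks = [
    Risk("Genotoxic impurity above ICH M7 limit", severity=5, probability=2, detectability=3),
    Risk("Vial closure integrity breach in lyophilized product", severity=5, probability=1, detectability=4),
    Risk("API particle size drift affecting dissolution", severity=3, probability=3, detectability=2),
]

# Work the highest-RPN risks first; the action threshold here is a hypothetical example.
ACTION_THRESHOLD = 20
for r in sorted(risks, key=lambda r: r.rpn, reverse=True):
    flag = "mitigate" if r.rpn >= ACTION_THRESHOLD else "monitor"
    print(f"RPN {r.rpn:3d}  [{flag}]  {r.description}")
```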

4. Integration into Pharmaceutical Quality Systems

Mitigation plans are formalized in control strategies and other quality system mechanisms, such as change controls, standard operating procedures, and validation plans, so that pre-mortem outputs are carried into the pharmaceutical quality system rather than remaining a one-off exercise.


Case Study: Preventing Drug Substance Oxidation in a Small Molecule API

A company developing an oxidation-prone API conducted a pre-mortem anticipating discoloration and potency loss. The exercise revealed:

  • Drug substance risk: Inadequate nitrogen sparging during final isolation led to residual oxygen in crystallization vessels.
  • Drug product risk: Blister packaging with insufficient moisture barrier exacerbated degradation.

Mitigations included installing dissolved oxygen probes in purification tanks and switching to aluminum-foil blisters with desiccants. Process validation batches showed a 90% reduction in oxidation byproducts, avoiding a potential FDA Postmarketing Commitment.

Assessing the Strength of Knowledge: A Framework for Decision-Making

ICH Q9(R1) emphasizes that knowledge is fundamental to effective risk management. The guideline states that “QRM is part of building knowledge and understanding risk scenarios, so that appropriate risk control can be decided upon for use during the commercial manufacturing phase.” 

We need to recognize the inverse relationship between knowledge and uncertainty in risk assessment. ICH Q9(R1) notes that uncertainty may be reduced “via effective knowledge management, which enables accumulated and new information (both internal and external) to be used to support risk-based decisions throughout the product lifecycle.”

To gauge confidence in a risk assessment, we need to assess the strength of the knowledge that underpins it.

The Spectrum of Knowledge Strength

Knowledge strength can be categorized into three levels: weak, medium, and strong. Each level is determined by specific criteria that assess the reliability, consensus, and depth of understanding surrounding a particular subject.

Indicators of Weak Knowledge

Knowledge is considered weak if it exhibits one or more of the following characteristics:

  1. Oversimplified Assumptions: The foundations of the knowledge rely on strong simplifications that may not accurately represent reality.
  2. Lack of Reliable Data: There is little to no data available, or the existing information is highly unreliable or irrelevant.
  3. Expert Disagreement: There is significant disagreement among experts in the field.
  4. Poor Understanding of Phenomena: The underlying phenomena are poorly understood, and available models are either non-existent or known to provide inaccurate predictions.
  5. Unexamined Knowledge: The knowledge has not been thoroughly scrutinized, potentially overlooking critical “unknown knowns.”

Hallmarks of Strong Knowledge

On the other hand, knowledge is deemed strong when it meets all of the following criteria (where relevant):

  1. Reasonable Assumptions: The assumptions made are considered very reasonable and well-grounded.
  2. Abundant Reliable Data: Large amounts of reliable and relevant data or information are available.
  3. Expert Consensus: There is broad agreement among experts in the field.
  4. Well-Understood Phenomena: The phenomena involved are well understood, and the models used provide predictions with the required accuracy.
  5. Thoroughly Examined: The knowledge has been rigorously examined and tested.

The Middle Ground: Medium Strength Knowledge

Cases that fall between weak and strong are classified as medium strength knowledge. The boundaries can be drawn flexibly; for example, a less strict scheme could classify knowledge as strong when at least one of the strong criteria is met and none of the weak criteria are present.

Strong vs Weak Knowledge

A Simplified Approach

For practical applications, a simplified version of this framework can be used (see the sketch after this list):

  • Strong: All criteria for strong knowledge are met.
  • Medium: One or two criteria for strong knowledge are not met.
  • Weak: Three or more criteria for strong knowledge are not met.
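
This counting rule is easy to operationalize. The Python sketch below assumes each of the five “strong” criteria is answered yes or no; the criterion names and the example assessment are illustrative assumptions.

```python
STRONG_CRITERIA = (
    "reasonable_assumptions",
    "abundant_reliable_data",
    "expert_consensus",
    "well_understood_phenomena",
    "thoroughly_examined",
)

def knowledge_strength(assessment: dict) -> str:
    """Classify knowledge strength from yes/no answers to the five strong criteria.

    Strong: all criteria met; Medium: one or two not met; Weak: three or more not met.
    """
    unmet = sum(1 for c in STRONG_CRITERIA if not assessment.get(c, False))
    if unmet == 0:
        return "strong"
    if unmet <= 2:
        return "medium"
    return "weak"

# Hypothetical assessment for an impurity-formation mechanism
example = {
    "reasonable_assumptions": True,
    "abundant_reliable_data": False,      # limited batch history
    "expert_consensus": True,
    "well_understood_phenomena": True,
    "thoroughly_examined": False,         # not yet challenged at scale
}
print(knowledge_strength(example))  # -> "medium"
```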

Implications for Decision-Making

Understanding the strength of our knowledge is crucial for effective decision-making. Strong knowledge provides a solid foundation for confident choices, while weak knowledge signals the need for caution and further investigation.

When faced with weak knowledge:

  • Seek additional information or expert opinions
  • Consider multiple scenarios and potential outcomes
  • Implement risk mitigation strategies

When working with strong knowledge:

  • Make decisions with greater confidence
  • Focus on implementation and optimization
  • Monitor outcomes to validate and refine understanding

Knowledge Strength and Uncertainty

The concept of knowledge strength aligns closely with the levels of uncertainty described by Walker et al. (2010) (see the Levels of Uncertainty section below).

Strong Knowledge and Low Uncertainty (Levels 1-2)

Strong knowledge typically corresponds to lower levels of uncertainty:

  • Level 1 Uncertainty: This aligns closely with strong knowledge, where outcomes can be estimated with reasonable accuracy within a single system model. Strong knowledge is characterized by reasonable assumptions, abundant reliable data, and well-understood phenomena, which enable accurate predictions.
  • Level 2 Uncertainty: While displaying alternative futures, this level still operates within a single system where probability estimates can be applied confidently. Strong knowledge often allows for this level of certainty, as it involves broad expert agreement and thoroughly examined information.

Medium Knowledge and Moderate Uncertainty (Level 3)

Medium strength knowledge often corresponds to Level 3 uncertainty:

  • Level 3 Uncertainty: This level involves “a multiplicity of plausible futures” with multiple interacting systems, but still within a known range of outcomes. Medium knowledge strength might involve some gaps or disagreements but still provides a foundation for identifying potential outcomes.

Weak Knowledge and Deep Uncertainty (Level 4)

Weak knowledge aligns most closely with the deepest level of uncertainty:

  • Level 4 Uncertainty: This level leads to an “unknown future” where we don’t understand the system and are aware of crucial unknowns. Weak knowledge, characterized by oversimplified assumptions, lack of reliable data, and poor understanding of phenomena, often results in this level of deep uncertainty.

Implications for Decision-Making

  1. When knowledge is strong and uncertainty is low (Levels 1-2), decision-makers can rely more confidently on predictions and probability estimates.
  2. As knowledge strength decreases and uncertainty increases (Levels 3-4), decision-makers must adopt more flexible and adaptive approaches to account for a wider range of possible futures.
  3. The principle that “uncertainty should always be considered at the deepest proposed level” unless proven otherwise aligns with the cautious approach of assessing knowledge strength. This ensures that potential weaknesses in knowledge are not overlooked (see the sketch below).
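
The correspondence above, combined with the “deepest proposed level” rule, can be captured in a small helper. This is a simplification for illustration: mapping strong knowledge to Level 2 (rather than 1) is a deliberately cautious assumption, not a requirement of the framework.

```python
# Default correspondence between knowledge strength and Walker et al. uncertainty levels.
# Strong knowledge maps to Level 2 here as a conservative default (strong ~ Levels 1-2).
STRENGTH_TO_LEVEL = {"strong": 2, "medium": 3, "weak": 4}

def working_uncertainty_level(knowledge: str, proposed_levels=()) -> int:
    """Return the uncertainty level to plan against.

    Start from the level implied by knowledge strength, then apply the rule that
    uncertainty is treated at the deepest level anyone has proposed, unless an
    evidence-based argument later justifies a shallower level.
    """
    implied = STRENGTH_TO_LEVEL[knowledge]
    return max([implied, *proposed_levels])

print(working_uncertainty_level("medium"))           # -> 3
print(working_uncertainty_level("strong", [1, 4]))   # -> 4 (deepest proposed level wins)
```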

Conclusion

By systematically evaluating the strength of our knowledge using this framework, we can make more informed decisions, identify areas that require further investigation, and better understand the limitations of our current understanding. Remember, the goal is not always to achieve perfect knowledge but to recognize the level of certainty we have and act accordingly.

Ambiguity

Ambiguity is present in virtually all real-life situations. It describes situations in which we do not have sufficient information to quantify the stochastic nature of the problem: a lack of knowledge of the ‘basic rules of the game’, where cause and effect are not understood and there is no precedent for making predictions about what to expect.

Ambiguity is often used, especially in the context of VUCA, to cover situations that have:

  • Doubt about the nature of cause and effect
  • Little to no historical information to predict the outcome
  • Difficulty in forecasting or planning

It is important to ask whether a lack of experience or predictability might affect the situation, and to interrogate our unknown unknowns.

People are ambiguity averse in that they prefer situations in which probabilities are perfectly known to situations in which they are unknown.

Ambiguity is best resolved by experimentation.

Levels of Uncertainty

Walker et al. (2010) developed a taxonomy of “levels of uncertainty”, ranging from Level 1 to Level 4,
which is useful in problem-solving:

  • Level 1 uncertainties are defined as relatively minor – as representing “a clear enough future” set within a “single system model” whereby outcomes can be estimated with reasonable accuracy;
  • Level 2 uncertainties display “alternative futures” but, again, within a single system in which probability estimates can be applied with confidence.

Levels 3 and 4 uncertainties are described as representing “deep uncertainty”.

  • Level 3 uncertainties are described as “a multiplicity of plausible futures”, in which multiple systems interact, but in which we can identify “a known range of outcomes”;
  • Level 4 uncertainties lead us to an “unknown future” in which we don’t understand the system: we know only that there is something, or are some things, that we know we don’t know.

This hierarchy can be useful to help us think carefully about whether the uncertainty behind a problem can be defined in terms of a Level 1 prediction, with parameters for variation. Or can it be resolved as a group of Level 2 possibilities, with probability estimates for each? Can the issue only be understood as a set of different Level 3 futures, each with a clear set of defined outcomes, or only by means of a Level 4 statement to the effect that we know only that there is something crucial that we don’t yet know?

There is often no clear or unanimous view of whether a particular uncertainty is set at a specific level. Uncertainty should always be considered at the deepest proposed level, unless or until those that propose this level can be convinced by an evidence-based argument that it should be otherwise.

Sources

  • Walker, W.E., Marchau, V.A.W.J. and Swanson, D. (2010) “Addressing Deep Uncertainty using Adaptive Policies: Introduction to Section 2”, Technological Forecasting & Social Change, 77: 917–23.

Types of Uncertainty

XKCD “Epistemic Uncertainty” https://xkcd.com/2440/

An important part of innovation, risk management, change management, and continuous improvement is overcoming the fear of the unknown. We humans are wired with an intense aversion to both risk and uncertainty. Research shows that the two have separate neural reactions, and that choices with ambiguous outcomes trigger a stronger fear response than risky choices do. Additional research shows that the risk itself isn’t so much the problem as the uncertainty: we are afraid primarily because we don’t know the outcome, and less so because of the risk itself.

There are three types of uncertainty:

  • Aleatoric Uncertainty: The uncertainty of quantifiable probabilities.
  • Epistemic Uncertainty: The uncertainty of knowledge. 
  • Knightian Uncertainty: The uncertainty of nonquantifiable risk.

A Two-Dimensional Framework for Characterizing Uncertainty, from “Distinguishing Two Dimensions of Uncertainty” by Craig R. Fox and Gülden Ülkümen
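
A toy numerical sketch of the first two types: aleatoric uncertainty is the irreducible run-to-run spread of a process, while epistemic uncertainty is our uncertainty about the process itself, which shrinks as data accumulate. Knightian uncertainty is, by definition, not quantifiable and so is not simulated. The “true” process values below are made up purely for illustration.

```python
import random
import statistics

random.seed(1)

# Hypothetical "true" assay process (% label claim) -- illustrative values only.
TRUE_MEAN, TRUE_SD = 99.0, 0.8

def sample_batches(n):
    """Simulate n batch assay results from the true process."""
    return [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n)]

for n in (5, 50, 500):
    data = sample_batches(n)
    mean = statistics.mean(data)
    sd = statistics.stdev(data)          # aleatoric: batch-to-batch spread, does not shrink
    sem = sd / n ** 0.5                  # epistemic: uncertainty about the true mean, shrinks with n
    print(f"n={n:3d}  mean {mean:5.2f}  "
          f"spread (aleatoric) ~ {sd:.2f}  uncertainty in mean (epistemic) ~ {sem:.2f}")
```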

I wrote more on this in my post “Uncertainty and Subjectivity in Risk Management.” This post mostly stems from wanting an excuse to share a funny comic.