Understanding the Distinction Between Impact and Risk

Two concepts, impact and risk, are often discussed but sometimes conflated within quality systems. While related, they serve distinct purposes and drive different decisions throughout the quality system. Let's explore.

The Fundamental Difference: Impact vs. Risk

The difference between impact and risk is fundamental to effective quality management. Impact is best thought of as "What do I need to do to make this change?" Risk is "What could go wrong in making this change?"

Impact assessment focuses on evaluating the effects of a proposed change on elements such as documentation, equipment, processes, and training. It helps identify the scope and reach of a change. Risk assessment, by contrast, looks ahead to identify potential failures that might occur because of the change; it is preventive and focused on possible consequences.

This distinction isn't merely academic; it directly affects how we approach actions and decisions in our quality systems, shaping core functions such as CAPA, Change Control, and Management Review.

| Aspect | Impact | Risk |
| --- | --- | --- |
| Definition | The effect or influence a change, event, or deviation has on product quality, a process, or a system | The probability and severity of harm or failure occurring as a result of a change, event, or deviation |
| Focus | What is affected and to what extent (scope and magnitude of consequences) | What could go wrong, how likely it is to happen, and how severe the outcome could be |
| Assessment Type | Evaluates the direct consequences of an action or event | Evaluates the likelihood and severity of potential adverse outcomes |
| Typical Use | Used in change control to determine which documents, systems, or processes are impacted | Used to prioritize actions, allocate resources, and implement controls to minimize negative outcomes |
| Measurement | Usually described qualitatively (e.g., minor, moderate, major, critical) | Often quantified by combining probability and impact scores to assign a risk level (e.g., low, medium, high) |
| Example | A change in raw material supplier impacts the manufacturing process and documentation. | The risk is that the new supplier's material could fail to meet quality standards, leading to product defects. |
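To make the Measurement row concrete, here is a minimal sketch of the quantitative combination. The 1-5 scales, the multiplicative model, and the thresholds are illustrative assumptions for this sketch, not values from any standard.

```python
# Illustrative only: the 1-5 scales and thresholds below are assumptions
# for this sketch, not values from any standard or guidance.

def risk_level(probability: int, impact: int) -> str:
    """Combine probability and impact scores into a qualitative risk level."""
    score = probability * impact  # simple multiplicative model
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Example: a likely (4) but minor (2) issue scores 8 -> "medium"
print(risk_level(probability=4, impact=2))
```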

Change Control: Different Questions, Different Purposes

Within change management, the PIC/S Recommendation PI 054-1 notes that “In some cases, especially for simple and minor/low risk changes, an impact assessment is sufficient to document the risk-based rationale for a change without the use of more formal risk assessment tools or approaches.”

Impact Assessment in Change Control

  • Determines what documentation requires updating
  • Identifies affected systems, equipment, and processes
  • Establishes validation requirements
  • Determines training needs

Risk Assessment in Change Control

  • Identifies potential failures that could result from the change
  • Evaluates possible consequences to product quality and patient safety
  • Determines likelihood of those consequences occurring
  • Guides preventive measures

A common mistake is conflating these concepts or shortcutting one assessment. For example, companies often rush to designate changes as “like-for-like” without supporting data, effectively bypassing proper risk assessment. This highlights why maintaining the distinction is crucial.

Validation: Complementary Approaches

In validation, the impact-risk distinction shapes our entire approach.

Impact in validation relates to identifying what aspects of product quality could be affected by a system or process. For example, when qualifying manufacturing equipment, we determine which critical quality attributes (CQAs) might be influenced by the equipment’s performance.

Risk assessment in validation explores what could go wrong with the equipment or process that might lead to quality failures. Risk management plays a pivotal role in validation by enabling a risk-based approach to defining validation strategies, ensuring regulatory compliance, mitigating product quality and safety risks, facilitating continuous improvement, and promoting cross-functional collaboration.

In Design Qualification, we verify that the critical aspects (CAs) and critical design elements (CDEs) necessary to control risks identified during the quality risk assessment (QRA) are present in the design. This illustrates how impact assessment (identifying critical aspects) works together with risk assessment (identifying what could go wrong).

When we perform Design Review and Design Qualification, we focus on critical aspects, prioritizing design elements that directly impact product quality and patient safety. Here, impact assessment identifies the critical aspects, while risk assessment helps prioritize them based on potential consequences.

Following Design Qualification, Verification activities such as Installation Qualification (IQ), Operational Qualification (OQ), and Performance Qualification (PQ) serve to confirm that the system or equipment performs as intended under actual operating conditions. Here, impact assessment identifies the specific parameters and functions that must be verified to ensure no critical quality attributes are compromised. Simultaneously, risk assessment guides the selection and extent of tests by focusing on areas with the highest potential for failure or deviation. This dual approach ensures that verification not only confirms the intended impact of the design but also proactively mitigates risks before routine use.

Validation does not end with initial qualification. Continuous Validation involves ongoing monitoring and trending of process performance and product quality to confirm that the validated state is maintained over time. Impact assessment plays a role in identifying which parameters and quality attributes require ongoing scrutiny, while risk assessment helps prioritize monitoring efforts based on the likelihood and severity of potential deviations. This continuous cycle allows quality systems to detect emerging risks early and implement corrective actions promptly, reinforcing a proactive, risk-based culture that safeguards product quality throughout the product lifecycle.

Data Integrity: A Clear Example

Data integrity offers perhaps the clearest illustration of the impact-risk distinction.

As I've previously noted, "Data quality is not a risk. It is a causal factor in the failure or severity." Poor data quality isn't itself a risk; rather, it's a factor that can influence the severity or likelihood of risks.

When assessing data integrity issues:

  • Impact assessment identifies what data is affected and which processes rely on that data
  • Risk assessment evaluates potential consequences of data integrity lapses

In my risk-based data integrity assessment methodology, I use a risk rating system that considers both impact and risk factors:

| Risk Rating | Risk Classification | Mitigation |
| --- | --- | --- |
| >25 | High Risk: Potential Impact to Patient Safety or Product Quality | Mandatory |
| 12-25 | Moderate Risk: No Impact to Patient Safety or Product Quality but Potential Regulatory Risk | Recommended |
| <12 | Negligible DI Risk | Not Required |

This system integrates both impact (on patient safety or product quality) and risk (likelihood and detectability of issues) to guide mitigation decisions.
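As a sketch of the rating logic above: since the table gives only the thresholds, I assume here that the rating is the product of three 1-5 factors (impact, likelihood, detectability); treat the formula as an assumption, not the methodology itself.

```python
# Assumption: the rating is the product of three 1-5 factors; the table
# above supplies only the thresholds, not the scoring formula.

def di_mitigation(impact: int, likelihood: int, detectability: int) -> str:
    """Map a data-integrity risk rating to a mitigation requirement."""
    rating = impact * likelihood * detectability
    if rating > 25:
        return "Mandatory"       # high risk: potential patient/product impact
    if rating >= 12:
        return "Recommended"     # moderate risk: potential regulatory risk
    return "Not Required"        # negligible DI risk

# Example: moderate impact (3), likely (3), hard to detect (4) -> 36 -> Mandatory
print(di_mitigation(3, 3, 4))
```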

The Golden Day: Impact and Risk in Deviation Management

The Golden Day concept for deviation management provides an excellent practical example. Within the first 24 hours of discovering a deviation, we conduct:

  1. An impact assessment to determine:
    • Which products, materials, or batches are affected
    • Potential effects on critical quality attributes
    • Possible regulatory implications
  2. A risk assessment to evaluate:
    • Patient safety implications
    • Product quality impact
    • Compliance with registered specifications
    • Level of investigation required

This impact assessment also serves as the initial risk assessment, helping guide the level of effort put into the deviation. It shows how the two concepts, while distinct, work together to inform quality decisions.

Quality Escalation: When Impact Triggers a Response

In quality escalation, we often use specific criteria based on both impact and risk:

| Escalation Criteria | Examples of Quality Events for Escalation |
| --- | --- |
| Potential to adversely affect quality, safety, efficacy, performance, or compliance of product | Contamination; product defect/deviation from process parameters or specification; significant GMP deviations |
| Product counterfeiting, tampering, theft | Product counterfeiting, tampering, or theft reportable to a Health Authority; lost/stolen IMP |
| Product shortage likely to disrupt patient care | Disruption of product supply due to product quality events |
| Potential to cause patient harm associated with a product quality event | Urgent Safety Measure; Serious Breach; significant product complaint |

These criteria demonstrate how we use both impact (what’s affected) and risk (potential consequences) to determine when issues require escalation.
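As an illustration of how such criteria could be encoded in a triage tool, here is a hypothetical escalation check; the event fields and their names are assumptions for this sketch, not part of any regulatory definition.

```python
# Hypothetical sketch: the field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class QualityEvent:
    affects_product_quality: bool   # contamination, spec deviation, GMP deviation
    security_incident: bool         # counterfeiting, tampering, theft
    supply_disruption: bool         # shortage likely to disrupt patient care
    potential_patient_harm: bool    # urgent safety measure, serious breach

def requires_escalation(event: QualityEvent) -> bool:
    """Escalate when any impact- or risk-based criterion is met."""
    return any([
        event.affects_product_quality,
        event.security_incident,
        event.supply_disruption,
        event.potential_patient_harm,
    ])

print(requires_escalation(QualityEvent(False, False, True, False)))  # True
```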

Both Are Essential

Understanding the difference between impact and risk fundamentally changes how we approach quality management. Impact assessment without risk assessment may identify what’s affected but fails to prevent potential issues. Risk assessment without impact assessment might focus on theoretical problems without understanding the actual scope.

The pharmaceutical quality system requires both perspectives:

  1. Impact tells us the scope – what’s affected
  2. Risk tells us the consequences – what could go wrong

By maintaining this distinction and applying both concepts appropriately across change control, validation, and data integrity management, we build more robust quality systems that not only comply with regulations but actually protect product quality and patient safety.

When to Widen the Investigation

“there is no retrospective review of batch records for batches within expiry, to identify any other process deviations performed without the appropriate corresponding documentation including risk assessment(s).” – 2025 Warning Letter from the US FDA to Sanofi

This observation concerns an instance where Sanofi deviated from the validated process by using an unvalidated single-use component. Instead of self-identifying, opening a deviation, and performing the appropriate change control activities, the company kept on deviating by using a non-controlled document.

This is a big problem for many reasons, from uncontrolled documents, to bypassing the change control system, to breaking the validated state. What the quoted language really raises is the question: when should we evaluate our records for other, similar instances so we can address them?

When a deviation investigation reveals recurring bad decision-making, it is crucial to expand the investigation and conduct a retrospective review of batch records. A practical cutoff is to review only batches within expiry. This expanded investigation helps identify other process deviations that may have occurred but were not discovered or documented at the time. Here's when and how to approach this situation:

Triggers for Expanding the Investigation

  1. Recurring Deviations: If the same or similar deviations are found to be recurring, it indicates a systemic issue that requires a broader investigation.
  2. Pattern of Human Errors: When a pattern of human errors or poor decision-making is identified, it suggests potential underlying issues in training, procedures, or processes.
  3. Critical Deviations: For deviations classified as critical, a more thorough investigation is typically warranted, including a retrospective review.
  4. Potential Impact on Product Quality: If there’s a strong possibility that undiscovered deviations could affect product quality or patient safety, an expanded investigation becomes necessary.

Conducting the Retrospective Review

  1. Timeframe: Review batch records for all batches within expiry, typically covering at least two years of production (see the sketch after this list). Similarly, for issues in the FUSE program you might look back to the last requalification, or decide to work backwards in concentric circles based on what you find.
  2. Scope: Examine not only the specific process where the deviation was found but also related processes or areas that could be affected; reviewing related processes is critical.
  3. Data Analysis: Utilize statistical tools and trending analysis techniques to identify patterns or anomalies in the historical data.
  4. Cross-Functional Approach: Involve a team of subject matter experts from relevant departments to ensure a comprehensive review.
  5. Documentation Review: Examine batch production records, laboratory control records, equipment logs, and any other relevant documentation.
  6. Root Cause Analysis: Apply root cause analysis techniques to understand the underlying reasons for the recurring issues.

Key Considerations

  • Risk Assessment: Prioritize the review based on the potential risk to product quality and patient safety.
  • Data Integrity: Ensure that any retrospective data used is reliable and has maintained its integrity.
  • Corrective Actions: Develop and implement corrective and preventive actions (CAPAs) based on the findings of the expanded investigation.
  • Regulatory Reporting: Assess the need for notifying regulatory authorities based on the severity and impact of the findings.

By conducting a thorough retrospective review when recurring bad decision-making is identified, companies can uncover hidden issues, improve their quality systems, and prevent future deviations. This proactive approach not only enhances compliance but also contributes to continuous improvement in pharmaceutical manufacturing processes.

When an issue rises to the level of a regulatory observation, this becomes a firm must. The agency has raised a significant concern, and it will want proof that the issue is limited or that you are dealing with it holistically across the organization.

Concentric Circles of Investigation

Each layer of the investigation may require its own holistic look. Using the example above, we have:

| Layer of Problem | Further Investigation to Answer |
| --- | --- |
| Use of unassessed component outside of GMP controls | What other unassessed components were used in the manufacturing process(es)? |
| Failure to document a temporary change | Where else were temporary changes not documented? |
| Deviated from validated process | Where else were there significant deviations from validated processes that were not reported? |
| Problems with components | What other components are having problems that are not being reported and addressed? |

Taking a risk-based approach here is critical.

Environmental Impact for Risk Assessments

Contamination occurs in two ways:

  • Environmental contamination results from the ingress of contaminants from the surrounding production areas or even from outside environments
  • Cross-contamination is defined as contamination of a starting material, intermediate product or finished product with another starting material or product during production.

Whether performing risk assessments or impact assessments, there are six factors to consider in order to determine environmental impact and inform contamination control:

  1. Amenability of equipment and surfaces to cleaning and sanitization
  2. Personnel presence and flow
  3. Material flow
  4. Proximity to open product or exposed direct product-contact material
  5. Interventions/operations by personnel and their complexity
  6. Frequency of interventions/process operations
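One hedged way to operationalize these six factors is a simple additive score; the 1-5 scale and equal weighting below are assumptions for illustration, not a validated model.

```python
# Hypothetical sketch: score a location against the six factors above to
# rank contamination-control priorities. The 1-5 scale (higher = worse)
# and the equal weighting are assumptions for illustration only.

FACTORS = [
    "cleanability",                # amenability to cleaning and sanitization
    "personnel_flow",              # personnel presence and flow
    "material_flow",               # material flow
    "proximity_to_open_product",   # proximity to open product or contact material
    "intervention_complexity",     # interventions by personnel and their complexity
    "intervention_frequency",      # frequency of interventions/process operations
]

def contamination_score(ratings: dict[str, int]) -> int:
    """Sum 1-5 ratings (higher = worse) across the six factors."""
    return sum(ratings[f] for f in FACTORS)

ratings = {f: 3 for f in FACTORS}
ratings["proximity_to_open_product"] = 5  # open product nearby
print(contamination_score(ratings))  # 20 of a possible 30
```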

Treating All Investigations the Same

Stephanie Gaulding, a colleague in the ASQ, recently wrote an excellent post for Redica on "How to Avoid Three Common Deviation Investigation Pitfalls," a subject near and dear to my heart.

The three pitfalls Stephanie gives are:

  1. Not getting to root cause
  2. Inadequate scoping
  3. Treating investigations the same

All three are right on the nose, and I’ve posted a bunch on the topics. Definitely go and read the post.

What I want to delve deeper into is Stephanie’s point that “Deviation systems should also be built to triage events into risk-based categories with sufficient time allocated to each category to drive risk-based investigations and focus the most time and effort on the highest risk and most complex events.”

That is an accurate breakdown, and exactly what regulators are asking for. However, the implementation of risk-based categories can sometimes lead to confusion, so it's worth spending some time unpacking the concept.

Risk is the possible effect of uncertainty. Risk is often described in terms of risk sources, potential events, their consequences, and their likelihoods (which is where we get likelihood × severity from).

But there are many types of uncertainty; IEC 31010, "Risk management – Risk assessment techniques," lists the following examples:

  • uncertainty as to the truth of assumptions, including presumptions about how people or systems might behave
  • variability in the parameters on which a decision is to be based
  • uncertainty in the validity or accuracy of models which have been established to make predictions about the future
  • events (including changes in circumstances or conditions) whose occurrence, character or consequences are uncertain
  • uncertainty associated with disruptive events
  • the uncertain outcomes of systemic issues, such as shortages of competent staff, that can have wide-ranging impacts which cannot be clearly defined
  • lack of knowledge, which arises when uncertainty is recognized but not fully understood
  • unpredictability
  • uncertainty arising from the limitations of the human mind, for example in understanding complex data, predicting situations with long-term consequences or making bias-free judgments.

Most of these are, at best, only obliquely relevant to risk-categorizing deviations.

So it is important to first build the risk categories on consequences. At the end of the day, these are the consequences that matter in the pharmaceutical/medical device world:

  • harm to the safety, rights, or well-being of patients, subjects or participants (human or non-human)
  • compromised data integrity so that confidence in the results, outcome, or decision dependent on the data is impacted

These are pretty hefty areas, and it is really hard for the average user to get their mind around them. This is why building good requirements and understanding how systems work is so critical. Building breadcrumbs into our procedures to let folks know which deviations fall into which category is a good practice.

There is nothing wrong with recognizing that different areas have different decision trees. Harm to safety in GMP can mean different things than safety in a GLP study.

The second place I've seen this go wrong has to do with likelihood, and folks confusing symptom with problem and problem with cause.

[Image: a bridge with a gap]

All deviations start with a situation that differs in some way from expected results. Deviations start with the symptom and, through analysis, end up at a root cause. So when building your decision tree, ensure it looks at symptoms and how each symptom is observed (see the sketch below). That is surprisingly hard to do, which is why a lot of deviation criticality scales tend to focus only on severity.

[Image: the four major types of symptoms]
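As a sketch of what such a decision tree might look like, the following triages on the observed symptom and the two consequence categories above; the vocabulary, parameters, and category labels are assumptions for illustration, not a prescribed scale.

```python
# Minimal sketch of a symptom-first triage tree built on the two
# consequence categories above; parameter names and category labels are
# illustrative assumptions.

def triage(symptom: str, patient_facing: bool, data_dependent: bool) -> str:
    """Categorize a deviation from the observed symptom, not the root cause."""
    if patient_facing:
        return "critical: potential harm to patient/subject safety, rights, or well-being"
    if data_dependent:
        return "major: confidence in data-dependent results or decisions may be compromised"
    return f"minor: '{symptom}' with no consequence-level trigger identified"

print(triage("out-of-range temperature alarm", patient_facing=False, data_dependent=True))
```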

Success/Failure Space, or Why We Can Sometimes Seem Pessimistic

When evaluating a system we can look at it in two ways. We can identify ways a thing can fail or the various ways it can succeed.

[Image: success/failure space diagram]

These are really just two sides of the same coin, with identifiable points in success space coinciding with analogous points in failure space. "Maximum anticipated success" in success space coincides with "minimum anticipated failure" in failure space.

Like everything, how we frame the question helps us find answers. Certain questions require us to think in terms of failure space, others in success. There are advantages in both, but in risk management, the failure space is incredibly valuable.

It is generally easier to attain concurrence on what constitutes failure than it is to agree on what constitutes success. We may desire a house that has great windows, high ceilings, and a nice yard. However, the one we buy can have a termite-infested foundation, bad electrical work, and a roof full of leaks. Whether the house is great is a matter of opinion, but we certainly know it is a failure based on the high repair bills we are going to accrue.

Success tends to be associated with the efficiency of a system, the amount of output, and the degree of usefulness. These characteristics are described by continuous variables that are not easily modeled in terms of simple discrete events, such as "water is not hot," which characterize the failure space. Failure, in particular complete failure, is generally easy to define, whereas the event of success may be more difficult to tie down.

Theoretically, the number of ways in which a system can fail and the number of ways in which a system can succeed are both infinite. From a practical standpoint, however, there are generally more ways to succeed than there are to fail: the size of the population in the failure space is smaller than the size of the population in the success space. This is why risk management focuses on the failure space.

The failure space maps really well to nominal scales for severity, which can be helpful as you build your own scales for risk assessments.
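For instance, a nominal severity scale built on discrete failure-space events might look like the following sketch; the labels and examples are illustrative assumptions, not a prescribed scale.

```python
# Sketch of a nominal severity scale expressed as discrete failure-space
# events; the labels and examples are illustrative assumptions.
from enum import Enum

class Severity(Enum):
    COMPLETE_FAILURE = 4   # e.g., batch rejected, system down
    MAJOR_FAILURE = 3      # e.g., critical parameter out of range
    PARTIAL_FAILURE = 2    # e.g., degraded output, rework required
    MINOR_FAILURE = 1      # e.g., cosmetic defect, no quality impact

# Discrete failure events ("water is not hot") are easier to place on a
# nominal scale than continuous measures of success.
print(Severity.MAJOR_FAILURE.name)
```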

For example, consider a morning commute.

[Image: example of the failure space for a morning commute]