Understanding the Distinction Between Impact and Risk

Two concepts, impact and risk, are often discussed but sometimes conflated within quality systems. While related, they serve distinct purposes and drive different decisions throughout the quality system. Let’s explore.

The Fundamental Difference: Impact vs. Risk

The difference between impact and risk is fundamental to effective quality management. Impact is best thought of as “What do I need to do to make this change?” Risk is “What could go wrong in making this change?”

Impact assessment focuses on evaluating the effects of a proposed change on elements such as documentation, equipment, processes, and training; it identifies the scope and reach of a change. Risk assessment, by contrast, looks ahead to identify potential failures that might occur because of the change; it is preventive and focused on possible consequences.

This distinction isn’t merely academic: it directly affects how we approach actions and decisions in our quality systems, shaping core functions such as CAPA, Change Control, and Management Review.

| Aspect | Impact | Risk |
| --- | --- | --- |
| Definition | The effect or influence a change, event, or deviation has on product quality, process, or system | The probability and severity of harm or failure occurring as a result of a change, event, or deviation |
| Focus | What is affected and to what extent (scope and magnitude of consequences) | What could go wrong, how likely it is to happen, and how severe the outcome could be |
| Assessment Type | Evaluates the direct consequences of an action or event | Evaluates the likelihood and severity of potential adverse outcomes |
| Typical Use | Used in change control to determine which documents, systems, or processes are impacted | Used to prioritize actions, allocate resources, and implement controls to minimize negative outcomes |
| Measurement | Usually described qualitatively (e.g., minor, moderate, major, critical) | Often quantified by combining probability and impact scores to assign a risk level (e.g., low, medium, high) |
| Example | A change in raw material supplier impacts the manufacturing process and documentation. | The risk is that the new supplier’s material could fail to meet quality standards, leading to product defects. |
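
To make the distinction concrete, here is a minimal sketch of how the two assessments might be captured as separate records. The field names, scales, and thresholds are illustrative assumptions, not a prescribed model: impact enumerates what is affected, while risk combines probability and severity into a rating.

```python
# Minimal sketch contrasting the two assessments. All field names and
# the probability/severity scales are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    """What is affected, and to what extent (scope and magnitude)."""
    affected_documents: list[str] = field(default_factory=list)
    affected_processes: list[str] = field(default_factory=list)
    magnitude: str = "minor"  # qualitative: minor/moderate/major/critical

@dataclass
class RiskAssessment:
    """What could go wrong, how likely, and how severe."""
    failure_mode: str = ""
    probability: int = 1  # 1 (rare) to 5 (frequent), illustrative scale
    severity: int = 1     # 1 (negligible) to 5 (patient harm), illustrative

    @property
    def risk_level(self) -> str:
        score = self.probability * self.severity
        return "high" if score > 15 else "medium" if score > 6 else "low"

# Supplier change example from the table above:
impact = ImpactAssessment(["Batch record", "Material spec"], ["Granulation"], "moderate")
risk = RiskAssessment("New supplier's material fails quality standards", 2, 4)
print(impact.magnitude, risk.risk_level)  # moderate medium
```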

Change Control: Different Questions, Different Purposes

Within change management, the PIC/S Recommendation PI 054-1 notes that “In some cases, especially for simple and minor/low risk changes, an impact assessment is sufficient to document the risk-based rationale for a change without the use of more formal risk assessment tools or approaches.”

Impact Assessment in Change Control

  • Determines what documentation requires updating
  • Identifies affected systems, equipment, and processes
  • Establishes validation requirements
  • Determines training needs

Risk Assessment in Change Control

  • Identifies potential failures that could result from the change
  • Evaluates possible consequences to product quality and patient safety
  • Determines likelihood of those consequences occurring
  • Guides preventive measures

A common mistake is conflating these concepts or shortcutting one assessment. For example, companies often rush to designate changes as “like-for-like” without supporting data, effectively bypassing proper risk assessment. This highlights why maintaining the distinction is crucial.

Validation: Complementary Approaches

In validation, the impact-risk distinction shapes our entire approach.

Impact in validation relates to identifying what aspects of product quality could be affected by a system or process. For example, when qualifying manufacturing equipment, we determine which critical quality attributes (CQAs) might be influenced by the equipment’s performance.

Risk assessment in validation explores what could go wrong with the equipment or process that might lead to quality failures. Risk management plays a pivotal role in validation by enabling a risk-based approach to defining validation strategies, ensuring regulatory compliance, mitigating product quality and safety risks, facilitating continuous improvement, and promoting cross-functional collaboration.

In Design Qualification, we verify that the critical aspects (CAs) and critical design elements (CDEs) necessary to control risks identified during the quality risk assessment (QRA) are present in the design. This illustrates how impact assessment (identifying critical aspects) works together with risk assessment (identifying what could go wrong).

When we perform Design Review and Design Qualification, we focus on critical aspects, prioritizing design elements that directly impact product quality and patient safety. Here, impact assessment identifies the critical aspects, while risk assessment helps prioritize them based on potential consequences.

Following Design Qualification, Verification activities such as Installation Qualification (IQ), Operational Qualification (OQ), and Performance Qualification (PQ) serve to confirm that the system or equipment performs as intended under actual operating conditions. Here, impact assessment identifies the specific parameters and functions that must be verified to ensure no critical quality attributes are compromised. Simultaneously, risk assessment guides the selection and extent of tests by focusing on areas with the highest potential for failure or deviation. This dual approach ensures that verification not only confirms the intended impact of the design but also proactively mitigates risks before routine use.
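
As a rough illustration of how these two inputs might scale verification effort, consider the following sketch. The scoring scale, thresholds, and test extents are assumptions for illustration only, not a prescribed qualification approach.

```python
# Hedged sketch: scaling verification effort from impact and risk.
# The scoring scale and test extents are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Parameter:
    name: str
    affects_cqa: bool   # impact: does this parameter influence a CQA?
    risk_score: int     # severity x likelihood of failure, 1-25 (illustrative)

def verification_extent(p: Parameter) -> str:
    """Pick a test extent: impact decides whether to test, risk decides how hard."""
    if not p.affects_cqa:
        return "no formal verification (document rationale)"
    if p.risk_score >= 15:
        return "full challenge testing at operating limits (worst case)"
    if p.risk_score >= 6:
        return "targeted testing across the normal operating range"
    return "single verification against specification"

for p in [Parameter("Mixing speed", True, 20),
          Parameter("Jacket temperature", True, 8),
          Parameter("HMI screen color", False, 2)]:
    print(f"{p.name}: {verification_extent(p)}")
```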

Validation does not end with initial qualification. Continuous Validation involves ongoing monitoring and trending of process performance and product quality to confirm that the validated state is maintained over time. Impact assessment plays a role in identifying which parameters and quality attributes require ongoing scrutiny, while risk assessment helps prioritize monitoring efforts based on the likelihood and severity of potential deviations. This continuous cycle allows quality systems to detect emerging risks early and implement corrective actions promptly, reinforcing a proactive, risk-based culture that safeguards product quality throughout the product lifecycle.

Data Integrity: A Clear Example

Data integrity offers perhaps the clearest illustration of the impact-risk distinction.

As I’ve previously noted, “Data quality is not a risk. It is a causal factor in the failure or severity.” Poor data quality isn’t itself a risk; rather, it’s a factor that can influence the severity or likelihood of risks.

When assessing data integrity issues:

  • Impact assessment identifies what data is affected and which processes rely on that data
  • Risk assessment evaluates potential consequences of data integrity lapses

In my risk-based data integrity assessment methodology, I use a risk rating system that considers both impact and risk factors:

| Risk Rating | Risk Level | Mitigation |
| --- | --- | --- |
| >25 | High Risk: Potential Impact to Patient Safety or Product Quality | Mandatory |
| 12-25 | Moderate Risk: No Impact to Patient Safety or Product Quality but Potential Regulatory Risk | Recommended |
| <12 | Negligible DI Risk | Not Required |

This system integrates both impact (on patient safety or product quality) and risk (likelihood and detectability of issues) to guide mitigation decisions.
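
A minimal sketch of how such a rating might be computed, assuming an RPN-style model in which severity, occurrence, and detectability are each scored 1-5 and multiplied. The factor names and scales are illustrative assumptions; only the thresholds and mitigation requirements come from the table above.

```python
# Hedged sketch: one way to compute a data-integrity risk rating.
# Assumes three factors scored 1-5 and multiplied (RPN-style); the
# factor names and scales are illustrative, not the exact methodology.

def di_risk_rating(severity: int, occurrence: int, detectability: int) -> tuple[int, str, str]:
    """Return (score, risk level, mitigation) using the thresholds from
    the table above (>25 high, 12-25 moderate, <12 negligible)."""
    for factor in (severity, occurrence, detectability):
        if not 1 <= factor <= 5:
            raise ValueError("each factor is assumed to be scored 1-5")
    score = severity * occurrence * detectability
    if score > 25:
        return score, "High Risk", "Mandatory"
    if score >= 12:
        return score, "Moderate Risk", "Recommended"
    return score, "Negligible DI Risk", "Not Required"

# Example: severe impact (5), occasional occurrence (3), poor detectability (4)
print(di_risk_rating(5, 3, 4))  # (60, 'High Risk', 'Mandatory')
```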

The Golden Day: Impact and Risk in Deviation Management

The Golden Day concept for deviation management provides an excellent practical example. Within the first 24 hours of discovering a deviation, we conduct:

  1. An impact assessment to determine:
    • Which products, materials, or batches are affected
    • Potential effects on critical quality attributes
    • Possible regulatory implications
  2. A risk assessment to evaluate:
    • Patient safety implications
    • Product quality impact
    • Compliance with registered specifications
    • Level of investigation required

The impact assessment also serves as the initial risk assessment, helping to guide the level of effort put into the deviation. This shows how the two concepts, while distinct, work together to inform quality decisions.

Quality Escalation: When Impact Triggers a Response

In quality escalation, we often use specific criteria based on both impact and risk:

| Escalation Criteria | Examples of Quality Events for Escalation |
| --- | --- |
| Potential to adversely affect quality, safety, efficacy, performance or compliance of product | Contamination; product defect/deviation from process parameters or specification; significant GMP deviations |
| Product counterfeiting, tampering, theft | Product counterfeiting, tampering, or theft reportable to Health Authority; lost/stolen IMP |
| Product shortage likely to disrupt patient care | Disruption of product supply due to product quality events |
| Potential to cause patient harm associated with a product quality event | Urgent Safety Measure, Serious Breach, Significant Product Complaint |

These criteria demonstrate how we use both impact (what’s affected) and risk (potential consequences) to determine when issues require escalation.
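
As an illustration, such criteria could be encoded as a simple screening function. The event attributes and matching logic below are assumptions; only the criteria labels come from the table above.

```python
# Hedged sketch: screening a quality event against the escalation
# criteria above. Event attributes and logic are illustrative.
from dataclasses import dataclass

@dataclass
class QualityEvent:
    description: str
    affects_product_quality: bool = False   # quality/safety/efficacy/compliance
    counterfeit_tamper_theft: bool = False
    shortage_disrupts_care: bool = False
    potential_patient_harm: bool = False

def escalation_reasons(event: QualityEvent) -> list[str]:
    """Return the escalation criteria this event triggers, if any."""
    criteria = [
        (event.affects_product_quality,
         "Potential to adversely affect quality, safety, efficacy, performance or compliance"),
        (event.counterfeit_tamper_theft, "Product counterfeiting, tampering, theft"),
        (event.shortage_disrupts_care, "Product shortage likely to disrupt patient care"),
        (event.potential_patient_harm, "Potential to cause patient harm"),
    ]
    return [label for triggered, label in criteria if triggered]

event = QualityEvent("Contamination found in bulk lot",
                     affects_product_quality=True, potential_patient_harm=True)
if reasons := escalation_reasons(event):
    print("Escalate:", "; ".join(reasons))
```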

Both Are Essential

Understanding the difference between impact and risk fundamentally changes how we approach quality management. Impact assessment without risk assessment may identify what’s affected but fails to prevent potential issues. Risk assessment without impact assessment might focus on theoretical problems without understanding the actual scope.

The pharmaceutical quality system requires both perspectives:

  1. Impact tells us the scope – what’s affected
  2. Risk tells us the consequences – what could go wrong

By maintaining this distinction and applying both concepts appropriately across change control, validation, and data integrity management, we build more robust quality systems that not only comply with regulations but actually protect product quality and patient safety.

The Hidden Pitfalls of Naïve Realism in Problem Solving, Risk Management, and Decision Making

Naïve realism—the unconscious belief that our perception of reality is objective and universally shared—acts as a silent saboteur in professional and personal decision-making. While this mindset fuels confidence, it also blinds us to alternative perspectives, amplifies cognitive biases, and undermines collaborative problem-solving. This blog post explores how this psychological trap distorts critical processes and offers actionable strategies to counteract its influence, drawing parallels to frameworks like the Pareto Principle and insights from risk management research.

Problem Solving: When Certainty Breeds Blind Spots

Naïve realism convinces us that our interpretation of a problem is the only logical one, leading to overconfidence in solutions that align with preexisting beliefs. For instance, teams often dismiss contradictory evidence in favor of data that confirms their assumptions. A startup scaling a flawed product because early adopters praised it—while ignoring churn data—exemplifies this trap. The Pareto Principle’s “vital few” heuristic can exacerbate this bias by oversimplifying complex issues. Organizations might prioritize frequent but low-impact problems, neglecting rare yet catastrophic risks, such as cybersecurity vulnerabilities masked by daily operational hiccups.

Functional fixedness, another byproduct of naïve realism, stifles innovation by assuming resources can only be used conventionally. To mitigate this pitfall, teams should actively challenge assumptions through adversarial brainstorming, asking questions like “Why will this solution fail?” Involving cross-functional teams or external consultants can also disrupt echo chambers, injecting fresh perspectives into problem-solving processes.

Risk Management: The Illusion of Objectivity

Risk assessments are inherently subjective, yet naïve realism convinces decision-makers that their evaluations are purely data-driven. Overreliance on historical data, such as prioritizing minor customer complaints over emerging threats, mirrors the Pareto Principle’s “static and historical bias” pitfall.

Reactive devaluation, discounting a proposal simply because of who raised it, further complicates risk management. Organizations can counteract these biases by leveraging risk management appropriately to drive out subjectivity while better accounting for uncertainty. Simulating worst-case scenarios, such as sudden supplier price hikes or regulatory shifts, also surfaces blind spots that static models overlook.

Decision Making: The Myth of the Rational Actor

Even in data-driven cultures, subjectivity stealthily shapes choices. Leaders often overestimate alignment within teams, mistaking silence for agreement. Individuals frequently insist their assessments are objective despite clear evidence of self-enhancement bias. This false consensus erodes trust and stifles dissent, as does the assumption that future preferences will mirror current ones.

To dismantle these myths, organizations must normalize dissent through anonymous voting or “red team” exercises in which designated critics scrutinize plans. Adopting probabilistic thinking, where outcomes are assigned likelihoods instead of binary predictions, reduces overconfidence.

Acknowledging Subjectivity: Three Practical Steps

1. Map Mental Models

Mapping mental models involves systematically documenting and challenging assumptions to ensure compliance, quality, and risk mitigation. For example, during risk assessments or deviation investigations, teams should explicitly outline their assumptions about processes, equipment, and personnel. Statements such as “We assume the equipment calibration schedule is sufficient to prevent deviations” or “We assume operator training is adequate to avoid errors” can be identified and critically evaluated.

Foster a culture of continuous improvement and accountability by stress-testing assumptions against real-world data—such as audit findings, CAPA (Corrective and Preventive Actions) trends, or process performance metrics—to reveal gaps that might otherwise go unnoticed. For instance, a team might discover that while calibration schedules meet basic requirements, they fail to account for unexpected environmental variables that impact equipment accuracy.

By integrating assumption mapping into routine GMP activities like risk assessments, change control reviews, and deviation investigations, organizations can ensure their decision-making processes are robust and grounded in evidence rather than subjective beliefs. This practice enhances compliance and strengthens the foundation for proactive quality management.

2. Institutionalize ‘Beginner’s Mind’

A beginner’s mindset is about approaching situations with openness, curiosity, and a willingness to learn as if encountering them for the first time. This mindset challenges the assumptions and biases that often limit creativity and problem-solving. In team environments, fostering a beginner’s mindset can unlock fresh perspectives, drive innovation, and create a culture of continuous improvement. However, building this mindset in teams requires intentional strategies and ongoing reinforcement to ensure it is actively utilized.

What is a Beginner’s Mindset?

At its core, a beginner’s mindset involves setting aside preconceived notions and viewing problems or opportunities with fresh eyes. Unlike experts who may rely on established knowledge or routines, individuals with a beginner’s mindset embrace uncertainty and ask fundamental questions such as “Why do we do it this way?” or “What if we tried something completely different?” This perspective allows teams to challenge the status quo, uncover hidden opportunities, and explore innovative solutions that might be overlooked.

For example, adopting this mindset in the workplace might mean questioning long-standing processes that no longer serve their purpose or rethinking how resources are allocated to align with evolving goals. By removing the constraints of “we’ve always done it this way,” teams can approach challenges with curiosity and creativity.

How to Build a Beginner’s Mindset in Teams

Fostering a beginner’s mindset within teams requires deliberate actions from leadership to create an environment where curiosity thrives. Here are some key steps to build this mindset:

  1. Model Curiosity and Openness
    Leaders play a critical role in setting the tone for their teams. By modeling curiosity—asking questions, admitting gaps in knowledge, and showing enthusiasm for learning—leaders demonstrate that it is safe and encouraged to approach work with an open mind. For instance, during meetings or problem-solving sessions, leaders can ask questions like “What haven’t we considered yet?” or “What would we do if we started from scratch?” This signals to team members that exploring new ideas is valued over rigid adherence to past practices.
  2. Encourage Questioning Assumptions
    Teams should be encouraged to question their assumptions regularly. Structured exercises such as “assumption audits” can help identify ingrained beliefs that may no longer hold true. By challenging assumptions, teams open themselves up to new insights and possibilities.
  3. Create Psychological Safety
    A beginner’s mindset flourishes in environments where team members feel safe taking risks and sharing ideas without fear of judgment or failure. Leaders can foster psychological safety by emphasizing that mistakes are learning opportunities rather than failures. For example, during project reviews, instead of focusing solely on what went wrong, leaders can ask, “What did we learn from this experience?” This shifts the focus from blame to growth and encourages experimentation.
  4. Rotate Roles and Responsibilities
    Rotating team members across roles or projects is an effective way to cultivate fresh perspectives. When individuals step into unfamiliar areas of responsibility, they are less likely to rely on habitual thinking and more likely to approach tasks with curiosity and openness. For instance, rotating quality assurance personnel into production oversight roles can reveal inefficiencies or risks that might have been overlooked due to overfamiliarity within silos.
  5. Provide Opportunities for Learning
    Continuous learning is essential for maintaining a beginner’s mindset. Organizations should invest in training programs, workshops, or cross-functional collaborations that expose teams to new ideas and approaches. For example, inviting external speakers or consultants to share insights from other industries can inspire innovative thinking within teams by introducing them to unfamiliar concepts or methodologies.
  6. Use Structured Exercises for Fresh Thinking
    Design Thinking exercises or brainstorming techniques like “reverse brainstorming” (where participants imagine how to create the worst possible outcome) can help teams break free from conventional thinking patterns. These activities force participants to look at problems from unconventional angles and generate novel solutions.

Ensuring Teams Utilize a Beginner’s Mindset

Building a beginner’s mindset is only half the battle; ensuring it is consistently applied requires ongoing reinforcement:

  • Integrate into Processes: Embed beginner’s mindset practices into regular workflows such as project kickoffs, risk assessments, or strategy sessions. For example, make it standard practice to start meetings by revisiting assumptions or brainstorming alternative approaches before diving into execution plans.
  • Reward Curiosity: Recognize and reward behaviors that reflect a beginner’s mindset—such as asking insightful questions, proposing innovative ideas, or experimenting with new approaches—even if they don’t immediately lead to success.
  • Track Progress: Use metrics like the number of new ideas generated during brainstorming sessions or the diversity of perspectives incorporated into decision-making processes to measure how well teams utilize a beginner’s mindset.
  • Reflect Regularly: Encourage teams to reflect on using the beginner’s mindset through retrospectives or debriefs after significant projects and events. Questions like “How did our openness to new ideas impact our results?” or “What could we do differently next time?” help reinforce the importance of maintaining this perspective.

Organizations can ensure their teams consistently leverage the power of a beginner’s mindset by cultivating curiosity, creating psychological safety, and embedding practices that challenge conventional thinking into daily operations. This drives innovation and fosters adaptability and resilience in an ever-changing business landscape.

3. Revisit Assumptions by Practicing Strategic Doubt

Assumptions are the foundation of decision-making, strategy development, and problem-solving. They represent beliefs or premises we take for granted, often without explicit evidence. While assumptions are necessary to move forward in uncertain environments, they are not static. Over time, new information, shifting circumstances, or emerging trends can render them outdated or inaccurate. Periodically revisiting core assumptions is essential to ensure decisions remain relevant, strategies stay robust, and organizations adapt effectively to changing realities.

Why Revisiting Assumptions Matters

Assumptions often shape the trajectory of decisions and strategies. When left unchecked, they can lead to flawed projections, misallocated resources, and missed opportunities. For example, Kodak’s assumption that film photography would dominate forever led to its downfall in the face of digital innovation. Similarly, many organizations assume their customers’ preferences or market conditions will remain stable, only to find themselves blindsided by disruptive changes. Revisiting assumptions allows teams to challenge these foundational beliefs and recalibrate their approach based on current realities.

Moreover, assumptions are frequently made with incomplete knowledge or limited data. As new evidence emerges, whether through research, technological advancements, or operational feedback, testing these assumptions against reality is critical. This process ensures that decisions are informed by the best available information rather than outdated or erroneous beliefs.

How to Periodically Revisit Core Assumptions

Revisiting assumptions requires a structured approach integrating critical thinking, data analysis, and collaborative reflection.

1. Document Assumptions from the Start

The first step is identifying and articulating assumptions explicitly during the planning stages of any project or strategy. For instance, a team launching a new product might document assumptions about market size, customer preferences, competitive dynamics, and regulatory conditions. By making these assumptions visible and tangible, teams create a baseline for future evaluation.

2. Establish Regular Review Cycles

Revisiting assumptions should be institutionalized as part of organizational processes rather than a one-off exercise. Build assumption audits into the quality management process. During these sessions, teams critically evaluate whether their assumptions still hold true in light of recent data or developments. This ensures that decision-making remains agile and responsive to change.

3. Use Feedback Loops

Feedback loops provide real-world insights into whether assumptions align with reality. Organizations can integrate mechanisms such as surveys, operational metrics, and trend analyses into their workflows to continuously test assumptions.

4. Test Assumptions Systematically

Not all assumptions carry equal weight; some are more critical than others. Teams can prioritize testing based on three parameters: severity (impact if the assumption is wrong), probability (likelihood of being inaccurate), and cost of resolution (resources required to validate or adjust). 
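
A minimal sketch of such a prioritization, assuming each parameter is scored 1-5 and that high-exposure, cheap-to-validate assumptions should surface first. The scales and weighting are illustrative assumptions, not a prescribed method.

```python
# Hedged sketch: ranking assumptions for testing using the three
# parameters named above. Scales and weighting are illustrative.
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    severity: int      # impact if the assumption is wrong (1-5)
    probability: int   # likelihood the assumption is inaccurate (1-5)
    cost: int          # cost of resolution: effort to validate/adjust (1-5)

def test_priority(a: Assumption) -> float:
    # Higher severity and probability raise priority; higher validation
    # cost lowers it, so cheap but risky assumptions surface first.
    return (a.severity * a.probability) / a.cost

assumptions = [
    Assumption("Calibration schedule prevents deviations", 4, 3, 2),
    Assumption("Operator training is adequate", 5, 2, 4),
    Assumption("Supplier CoA data is reliable", 5, 3, 3),
]
for a in sorted(assumptions, key=test_priority, reverse=True):
    print(f"{test_priority(a):.1f}  {a.statement}")
```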

5. Encourage Collaborative Reflection

Revisiting assumptions is most effective when diverse perspectives are involved. Bringing together cross-functional teams—including leaders, subject matter experts, and customer-facing roles—ensures that blind spots are uncovered and alternative viewpoints are considered. Collaborative workshops or strategy recalibration sessions can facilitate this process by encouraging open dialogue about what has changed since the last review.

6. Challenge Assumptions with Data

Assumptions should always be validated against evidence rather than intuition alone. Teams can leverage predictive analytics tools to assess whether their assumptions align with emerging trends or patterns. 

How Organizations Can Ensure Assumptions Are Utilized Effectively

To ensure revisited assumptions translate into actionable insights, organizations must integrate them into decision-making processes:

Monitor Continuously: Establish systems for continuously monitoring critical assumptions through dashboards or regular reporting mechanisms. This allows leadership to identify invalidated assumptions promptly and course-correct before significant risks materialize.

Update Strategies and Goals: Adjust goals and objectives based on revised assumptions to maintain alignment with current realities. 

Refine KPIs: Key Performance Indicators (KPIs) should evolve alongside updated assumptions to reflect shifting priorities and external conditions. Metrics that once seemed relevant may need adjustment as new data emerges.

Embed Assumption Testing into Culture: Encourage teams to view assumption testing as an ongoing practice rather than a reactive measure. Leaders can model this behavior by openly questioning their own decisions and inviting critique from others.

From Certainty to Curious Inquiry

Naïve realism isn’t a personal failing but a universal cognitive shortcut. By recognizing its influence—whether in misapplying the Pareto Principle or dismissing dissent—we can reframe conflicts as opportunities for discovery. The goal isn’t to eliminate subjectivity but to harness it, transforming blind spots into lenses for sharper, more inclusive decision-making.

The path to clarity lies not in rigid certainty but in relentless curiosity.

Quality Escalation Best Practices: Ensuring GxP Compliance and Patient Safety

Quality escalation is a critical process in maintaining the integrity of products, particularly in industries governed by Good Practices (GxP) such as pharmaceuticals and biotechnology. Effective escalation ensures that issues are addressed promptly, preventing potential risks to product quality and patient safety. This blog post will explore best practices for quality escalation, focusing on GxP compliance and the implications for regulatory notifications.

Understanding Quality Escalation

Quality escalation involves raising unresolved issues to higher management levels for timely resolution. This process is essential in environments where compliance with GxP regulations is mandatory. The primary goal is to ensure that products are manufactured, tested, and distributed in a manner that maintains their quality and safety.

This is a requirement across all of the GxP regulations, including clinical. ICH E6(R3) emphasizes the importance of effective monitoring and oversight to ensure that clinical trials are conducted in compliance with GCP and regulatory requirements, which includes identifying and addressing issues promptly.

Key Triggers for Escalation

Identifying triggers for escalation is crucial. Common triggers include:

  • Regulatory Compliance Issues: Non-compliance with regulatory requirements can lead to product quality issues and necessitate escalation.
  • Quality Control Failures: Failures in quality control processes, such as testing or inspection, can impact product safety and quality.
  • Data Integrity: Significant concerns about, or failures in, the quality of data.
  • Supply Chain Disruptions: Disruptions in the supply chain can affect the availability of critical components or materials, potentially impacting product quality.
  • Patient Safety Concerns: Any issues related to patient safety, such as adverse events or potential safety risks, should be escalated immediately.
| Escalation Criteria | Examples of Quality Events for Escalation |
| --- | --- |
| Potential to adversely affect quality, safety, efficacy, performance or compliance of product (commercial or clinical) | Contamination (product, raw material, equipment, micro, environmental); product defect/deviation from process parameters or specification (on file with agencies, e.g., CQAs and CPPs); significant GMP deviations; incorrect/deficient labeling; product complaints (significant PC, trends in PCs); OOS/OOT (e.g., stability) |
| Product counterfeiting, tampering, theft | Product counterfeiting, tampering, or theft reportable to Health Authority (HA); lost/stolen IMP; fraud or misconduct associated with counterfeiting, tampering, theft; potential to impact product supply (e.g., removal, correction, recall) |
| Product shortage likely to disrupt patient care and/or reportable to HA | Disruption of product supply due to product quality events, natural disasters (business continuity disruption), OOS impact, capacity constraints |
| Potential to cause patient harm associated with a product quality event | Urgent Safety Measure, Serious Breach, Significant Product Complaint, or Safety Signal determined to be associated with a product quality event |
| Significant GMP non-compliance/event | Non-compliance or non-conformance event with potential to impact product performance meeting specification, safety, efficacy, or regulatory requirements |
| Regulatory Compliance Event | Significant (critical, repeat) regulatory inspection findings; lack of commitment adherence; notification of directed/for-cause inspection; notification of Health Authority correspondence indicating potential regulatory action |

Best Practices for Quality Escalation

  1. Proactive Identification: Encourage a culture where team members proactively identify potential issues. Early detection can prevent minor problems from escalating into major crises.
  2. Clear Communication Channels: Establish clear communication channels and protocols for escalating issues. This ensures that the right people are informed promptly and can take appropriate action.
  3. Documentation and Tracking: Use a central repository to document and track issues. This helps in identifying trends, implementing corrective actions, and ensuring compliance with regulatory requirements.
  4. Collaborative Resolution: Foster collaboration between different departments and stakeholders to resolve issues efficiently. This includes involving quality assurance, quality control, and regulatory affairs teams as necessary.
  5. Regulatory Awareness: Be aware of regulatory requirements and ensure that all escalations are handled in a manner that complies with these regulations. This includes maintaining confidentiality when necessary and ensuring transparency with regulatory bodies.

GxP Impact and Regulatory Notifications

In industries governed by GxP, any significant quality issues may require notification to regulatory bodies. This includes situations where product quality or patient safety is compromised. Best practices for handling such scenarios include:

  • Prompt Notification: Notify regulatory bodies promptly if there is a risk to public health or if regulatory requirements are not met.
  • Comprehensive Reporting: Ensure that all reports to regulatory bodies are comprehensive, including details of the issue, actions taken, and corrective measures implemented.
  • Continuous Improvement: Use escalations as opportunities to improve processes and prevent future occurrences. This includes conducting root cause analyses and implementing preventive actions.

Fit with Quality Management Review

Quality escalation fits within the Quality Management Review band as an ad hoc, triggered review of significant issues, ensuring appropriate leadership attention and allowing key decisions to be made in a timely manner.

Conclusion

Quality escalation is a vital component of maintaining product quality and ensuring patient safety in GxP environments. By implementing best practices such as proactive issue identification, clear communication, and collaborative resolution, organizations can effectively manage risks and comply with regulatory requirements. Understanding when and how to escalate issues is crucial for preventing potential crises and ensuring that products meet the highest standards of quality and safety.

Escalation of Critical Events

Event management systems need an escalation mechanism so that critical events are quickly elevated to a senior level, enabling timely, organization-wide reactions.

Consistent Event Reporting

There are many reasons for fast escalation:

  • Events that trigger reporting to Regulatory Agencies (e.g. Serious Breach, Urgent Safety Measures (UK), Field Alerts, Biological Product Deviation, Medical Device Report)
  • Events that require immediate action across the organization to prevent additional harm
  • Events that require marshalling resources from large parts of the organization

Typical escalation triggers by GxP area:

| GxP Area | Escalation Triggers |
| --- | --- |
| GMP | Impact to data integrity; impact to product quality/supply; Product Quality/CMC events in accordance with MRB criteria (or other events of similar scope of impact); recurring event with broad scope of impact; impact to program milestones & corporate goals |
| GCP | Impact to data integrity; data/privacy breach; impact to study integrity; impact to subject’s safety, rights or welfare; recurring event with broad scope of impact; impact to program milestones & corporate goals |
| GPVP | Event impacting on-time compliance rates (not isolated/steady state); gaps in reporting/collection of potential AEs; recurring event with broad scope of impact; impact to program milestones & corporate goals |
| GLP | Impact to data integrity; impact to study integrity; recurring event with broad scope of impact; impact to program milestones & corporate goals |
| Research | Impact to data integrity; impact to study integrity; recurring event with broad scope of impact; impact to program milestones & corporate goals |
| IT | Reference GxP area for impact resulting from/linked to system error/failure; system design, testing, deployment, upgrade, etc. event impacting GxP data integrity or regulatory compliance; recurring event with broad scope of impact |

Across GxP areas, events such as the following also warrant escalation:

  • Potential Falsified or Counterfeit Product
  • Potential Fraud or Misconduct
  • Credible Risk of Product Shortage
  • Quality event with patient safety risk/gap
  • GxP Data Breach
  • Potential Product Recall
  • Significant Quality Event Notified to Regulatory Authority
  • System error or failure with significant GxP compliance impact
  • Potential Critical Finding Resulting from Regulatory Authority Inspection or Audit by External Body/Third Party
  • Quality Event/Observation Classified as Critical (event or internal audit)
  • Notification from Regulatory Authority or other External Authority of findings of significant/critical quality deficiency (through inspection or otherwise), e.g., Refusal to File, Notification of Inadequate Response to Inspection Findings (e.g., Official Action Indicated (FDA classification)), Warning Letter

You can drill down to a lower, more practical level, like this:

| Escalation Criteria | Examples of Quality Events for Escalation |
| --- | --- |
| Potential to adversely affect quality, safety, efficacy, performance or compliance of product (commercial or clinical) | Contamination (product, raw material, equipment, micro, environmental); product defect/deviation from process parameters or specification (on file with agencies); significant GMP deviations; incorrect/deficient labeling; product complaints (significant PC, trends in PCs); OOS/OOT (e.g., stability) |
| Product counterfeiting, tampering, theft | Product counterfeiting, tampering, or theft reportable to Health Authority (HA); lost/stolen IMP; fraud or misconduct associated with counterfeiting, tampering, theft; potential to impact product supply (e.g., removal, correction, recall) |
| Product shortage likely to disrupt patient care and/or reportable to HA | Disruption of product supply due to product quality events, natural disasters (business continuity disruption), OOS impact, capacity constraints |
| Potential to cause patient harm associated with a product quality event | Urgent Safety Measure, Serious Breach, Significant Product Complaint, or Safety Signal determined to be associated with a product quality event |
| Significant GMP non-compliance/event | Non-compliance or non-conformance event with potential to impact product performance meeting specification, safety, efficacy, or regulatory requirements |
| Regulatory Compliance Event | Significant (critical, repeat) regulatory inspection findings; lack of commitment adherence; notification of directed/for-cause inspection; notification of HA correspondence indicating potential regulatory action |

An updated and expanded version of this is found here.

Management Review – a Structured Analysis of Reality

What is Management Review?

ISO 9001:2015 states: “Top management shall review the organization’s quality management system, at planned intervals, to ensure its continuing suitability, adequacy, effectiveness and alignment with the strategic direction of the organization.”

Management review takes inputs of system performance and converts them to outputs that drive improvement.

Just about every standard and guidance aligns with the ISO 9001:2015 structure.

The Use of PowerPoint in Management Review

Everyone makes fun of PowerPoint, and yet it is still with us. As a mechanism for formal communication it is the go-to form, and I do not believe that will change anytime soon.

One of the best pieces of research on PowerPoint and management review is Kaplan’s examination of PowerPoint slides used in a manufacturing firm. Kaplan found that generating slides was “embedded in the discursive practices of strategic knowledge production” and made up “part of the epistemic machinery that undergirds the knowledge production culture.” Further, “the affordances of PowerPoint,” Kaplan pointed out, “enabled the difficult task of collaborating to negotiate meaning in an uncertain environment, creating spaces for discussion, making recombinations possible, [and] allowing for adjustments as ideas evolved.” She concluded that PowerPoint slide decks should be regarded not as merely effective or ineffective reports but rather as an essential part of strategic decision making.

Kaplan’s findings are not isolated; there is a broad wealth of relevant research in the fields of genre and composition studies, as well as research on material objects, that draws similar conclusions. PowerPoint, as a method of formal communication, can be effective.

Management Review as Formal Communication

Management review is a formal communication. By understanding how such formal communications participate in the fixed and emergent conditions of knowledge work, as prescribed, as being-composed, and as materialized-texts-in-use, we can better structure our knowledge sharing.

Management review mediates between Work-As-Imagined and Work-As-Done.

As-Prescribed

The quality management reviews have “fixity” and bring a reliable structure to the knowledge-work process by specifying what needs to become known and by when, forming a step-by-step learning process.

As-Being-Composed

Quality management always starts with a plan for activities, but in the process of providing analysis through management review, the organization learns much more about the topic, discovers new ideas, and uncovers inconsistencies in its thinking that cause it to step back, refine, and sometimes radically change the plan. By engaging in the writing of these presentations, we make tacit knowledge explicit.

A successful management review imagines the audience who needs the information, asks questions, raises objections, and brings to the presentation a body of experience and a perspective that differs from the party line. Management review should be a process of dialogue that draws inferences, constructs relationships between ideas, applies logic to build complex arguments, reformulates ideas, reflects on what is already known, and comes to understand the material in a new way.

As-Materialized

Management review is a textually mediated conversation that enables knowledge integration within and across groups inside and outside the organization. The records of management review are focal points around which users can discuss what they have learned, discover diverse understandings, and depersonalize debate. These records drive the process of incorporating the different domain-specific knowledge of various decision makers and experts into some form of systemic group knowledge that can be applied to decision making and action.

Sources

  • Alvesson, M. (2004). Knowledge work and knowledge-intensive firms. Oxford University Press.
  • Bazerman, C. (2003). What is not institutionally visible does not count: The problem of making activity assessable, accountable, and plannable. In C. Bazerman & D. Russell (Eds.), Writing selves/writing societies: Research from activity perspectives (pp. 428–482). WAC Clearinghouse.
  • Edmondson, A. C. (2012). Teaming: How organizations learn, innovate, and compete in the knowledge economy. Jossey-Bass.
  • Kaplan, S. (2011). Strategy and PowerPoint: An inquiry into the epistemic culture and machinery of strategy making. Organization Science, 22(2), 320–346.
  • Levitin, D. J. (2014). The organized mind: Thinking straight in the age of information overload. Penguin.
  • Mengis, J. (2007). Integrating knowledge through communication: The case of experts and decision makers. In Proceedings of the 2007 International Conference on Organizational Knowledge, Learning, and Capabilities (pp. 699–720). OLKC. https://warwick.ac.uk/fac/soc/wbs/conf/olkc/archive/olkc2/papers/mengis.pdf