Reducing Subjectivity in Quality Risk Management: Aligning with ICH Q9(R1)

In a previous post, I discussed how overcoming subjectivity in risk management and decision-making requires fostering a culture of quality and excellence. This issue deserves continued evaluation and a continued push for improvement.

The revised ICH Q9(R1) guideline, finalized in January 2023, introduces critical updates to Quality Risk Management (QRM) practices, emphasizing the need to address subjectivity, enhance formality, improve risk-based decision-making, and manage product availability risks. These revisions aim to ensure that QRM processes are more science-driven, knowledge-based, and effective in safeguarding product quality and patient safety. Two years later, it remains important to build on key strategies for reducing subjectivity in QRM and to align with the updated requirements.

Understanding Subjectivity in QRM

Subjectivity in QRM arises from personal opinions, biases, heuristics, or inconsistent interpretations of risks by stakeholders. This can impact every stage of the QRM process—from hazard identification to risk evaluation and mitigation. The revised ICH Q9(R1) explicitly addresses this issue by introducing a new subsection, “Managing and Minimizing Subjectivity,” which emphasizes that while subjectivity cannot be entirely eliminated, it can be controlled through structured approaches.

The guideline highlights that subjectivity often stems from poorly designed scoring systems, differing perceptions of hazards and risks among stakeholders, and cognitive biases. To mitigate these challenges, organizations must adopt robust strategies that prioritize scientific knowledge and data-driven decision-making.

Strategies to Reduce Subjectivity

Leveraging Knowledge Management

ICH Q9(R1) underscores the importance of knowledge management as a tool to reduce uncertainty and subjectivity in risk assessments. Effective knowledge management involves systematically capturing, organizing, and applying internal and external knowledge to inform QRM activities. This includes maintaining centralized repositories for technical data, fostering real-time information sharing across teams, and learning from past experiences through structured lessons-learned processes.

By integrating knowledge management into QRM, organizations can ensure that decisions are based on comprehensive data rather than subjective estimations. For example, using historical data on process performance or supplier reliability can provide objective insights into potential risks.

To integrate knowledge management (KM) more effectively into quality risk management (QRM), organizations can implement several strategies to ensure decisions are based on comprehensive data rather than subjective estimations:

Establish Robust Knowledge Repositories

Create centralized, easily accessible repositories for storing and organizing historical data, lessons learned, and best practices. These repositories should include:

  • Process performance data
  • Supplier reliability metrics
  • Deviation and CAPA records
  • Audit findings and inspection observations
  • Technology transfer documentation

By maintaining these repositories, organizations can quickly access relevant historical information when conducting risk assessments.

Implement Knowledge Mapping

Conduct knowledge mapping exercises to identify key sources of knowledge within the organization, including where critical expertise and data reside. Use the resulting knowledge maps to guide risk assessment teams to relevant information and expertise.

Develop Data Analytics Capabilities

Invest in data analytics tools and capabilities to extract meaningful insights from historical data. For example:

  • Use statistical process control to identify trends in manufacturing performance
  • Apply machine learning algorithms to predict potential quality issues based on historical patterns
  • Utilize data visualization tools to present complex risk data in an easily understandable format

These analytics can provide objective, data-driven insights into potential risks and their likelihood of occurrence.
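For instance, a basic individuals control chart can flag atypical batches in historical data. The sketch below uses illustrative assay values, not real product data, and the standard 2.66 moving-range constant for individuals charts:

```python
# Minimal sketch of statistical process control on historical batch data.
# Batch assay values are illustrative, not from any real product.
assays = [99.1, 98.7, 99.4, 98.9, 99.0, 99.2, 98.5, 99.3, 102.5, 99.0]

mean = sum(assays) / len(assays)

# Average moving range between consecutive batches
mr = [abs(b - a) for a, b in zip(assays, assays[1:])]
mr_bar = sum(mr) / len(mr)

# Individuals chart limits: mean +/- 2.66 * average moving range
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

# Flag any batch outside the control limits for risk review
out_of_control = [(i, x) for i, x in enumerate(assays) if x > ucl or x < lcl]
print(f"UCL={ucl:.2f}, LCL={lcl:.2f}, signals={out_of_control}")
```

Here the atypical batch at index 8 falls above the upper control limit, giving an objective trigger for investigation rather than a gut-feel judgment.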

Integrate KM into QRM Processes

Embed KM activities directly into QRM processes to ensure consistent use of available knowledge:

  • Include a knowledge gathering step at the beginning of risk assessments
  • Require risk assessment teams to document the sources of knowledge used in their analysis
  • Implement a formal process for capturing new knowledge generated during risk assessments

This integration helps ensure that all relevant knowledge is considered and that new insights are captured for future use.

Foster a Knowledge-Sharing Culture

Encourage a culture of knowledge sharing and collaboration within the organization:

  • Implement mentoring programs to facilitate the transfer of tacit knowledge
  • Establish communities of practice around key risk areas
  • Recognize and reward employees who contribute valuable knowledge to risk management efforts

By promoting knowledge sharing, organizations can tap into the collective expertise of their workforce to improve risk assessments.

Implementing Structured Risk-Based Decision-Making

The revised guideline introduces a dedicated section on risk-based decision-making, emphasizing the need for structured approaches that consider the complexity, uncertainty, and importance of decisions. Organizations should establish clear criteria for decision-making processes, define acceptable risk tolerance levels, and use evidence-based methods to evaluate options.

Structured decision-making tools can help standardize how risks are assessed and prioritized. Additionally, calibrating expert opinions through formal elicitation techniques can further reduce variability in judgments.

Addressing Cognitive Biases

Cognitive biases—such as overconfidence or anchoring—can distort risk assessments and lead to inconsistent outcomes. To address this, organizations should provide training on recognizing common biases and their impact on decision-making. Encouraging diverse perspectives within risk assessment teams can also help counteract individual biases.

For example, using cross-functional teams ensures that different viewpoints are considered when evaluating risks, leading to more balanced assessments. Regularly reviewing risk assessment outputs for signs of bias or inconsistencies can further enhance objectivity.

Enhancing Formality in QRM

ICH Q9(R1) introduces the concept of a “formality continuum,” which aligns the level of effort and documentation with the complexity and significance of the risk being managed. This approach allows organizations to allocate resources effectively by applying less formal methods to lower-risk issues while reserving rigorous processes for high-risk scenarios.

For instance, routine quality checks may require minimal documentation compared to a comprehensive risk assessment for introducing new manufacturing technologies. By tailoring formality levels appropriately, organizations can ensure consistency while avoiding unnecessary complexity.
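As an illustration only (the scoring factors and thresholds below are hypothetical, not taken from ICH Q9(R1)), a simple decision aid can make an organization's position on the formality continuum explicit and repeatable:

```python
# Hypothetical sketch of a formality-continuum decision aid.
# The three factors and the cutoffs are illustrative assumptions.
def formality_level(risk_impact: int, complexity: int, novelty: int) -> str:
    """Each factor is scored 1 (low) to 3 (high)."""
    score = risk_impact + complexity + novelty
    if score >= 7:
        return "high formality: cross-functional team, formal tool, full documentation"
    if score >= 5:
        return "medium formality: structured assessment, summary documentation"
    return "low formality: rule-based or SOP-embedded risk control"

# Routine quality check vs. introducing new manufacturing technology
print(formality_level(1, 1, 1))  # low end of the continuum
print(formality_level(3, 3, 3))  # high end of the continuum
```

Encoding the decision rule this way forces the criteria to be stated up front, which itself reduces subjectivity in deciding how much rigor a given assessment deserves.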

Calibrating Expert Opinions

ICH Q9(R1) recognizes the importance of expert knowledge in QRM activities but also acknowledges the potential for subjectivity and bias in expert judgments. To manage this, we need to ensure we:

  • Implement formal processes for expert opinion elicitation
  • Use techniques to calibrate expert judgments, especially when estimating probabilities
  • Provide training on common cognitive biases and their impact on risk assessment
  • Employ diverse teams to counteract individual biases
  • Regularly review risk assessment outputs for signs of bias or inconsistencies

Calibration techniques may include:

  • Structured elicitation protocols that break down complex judgments into more manageable components
  • Feedback and training to help experts align their subjective probability estimates with actual frequencies of events
  • Using multiple experts and aggregating their judgments through methods like Cooke’s classical model
  • Employing facilitation techniques to mitigate groupthink and encourage independent thinking

By calibrating expert opinions, organizations can leverage valuable expertise while minimizing subjectivity in risk assessments.

Utilizing Cooke’s Classical Model

Cooke’s Classical Model is a rigorous method for evaluating and combining expert judgments to quantify uncertainty. Here are the key steps for using the Classical Model to evaluate expert judgment:

1. Select and calibrate experts:
  • Choose 5-10 experts in the relevant field
  • Have experts assess uncertain quantities (“calibration questions”) for which true values are known or will be known soon
  • These calibration questions should come from the experts’ domain of expertise

2. Elicit expert assessments:
  • Have experts provide probabilistic assessments (usually 5%, 50%, and 95% quantiles) for both calibration questions and questions of interest
  • Document experts’ reasoning and rationales

3. Score expert performance on two measures:
  • Statistical accuracy: how well their probabilistic assessments match the true values of calibration questions
  • Informativeness: how precise and focused their uncertainty ranges are

4. Calculate performance-based weights:
  • Derive weights for each expert based on their statistical accuracy and informativeness scores
  • Experts performing poorly on calibration questions receive little or no weight

5. Combine expert assessments:
  • Use the performance-based weights to aggregate experts’ judgments on the questions of interest
  • This creates a “Decision Maker” combining the experts’ assessments

6. Validate the combined assessment:
  • Evaluate the performance of the weighted combination (“Decision Maker”) using the same scoring as for individual experts
  • Compare it to the equal-weight combination and to the best-performing individual experts

7. Conduct robustness checks:
  • Perform cross-validation by using subsets of calibration questions to form weights
  • Assess how well performance on calibration questions predicts performance on questions of interest

The Classical Model aims to create an optimal aggregate assessment that outperforms both equal-weight combinations and individual experts. By using objective performance measures from calibration questions, it provides a scientifically defensible method for evaluating and synthesizing expert judgment under uncertainty.
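A heavily simplified sketch of the weighting idea follows. Real applications use the full Classical Model (chi-square-based calibration scores multiplied by an information score, with an optimized weight cutoff); here statistical accuracy alone drives the weights, and the experts and numbers are entirely hypothetical:

```python
import math

# Expected mass in each inter-quantile interval for (5%, 50%, 95%) quantiles:
# below q05, q05-q50, q50-q95, above q95
EXPECTED = [0.05, 0.45, 0.45, 0.05]

def interval_counts(quantiles, realizations):
    """Count realizations falling in each inter-quantile interval.
    quantiles: per-question (q05, q50, q95) tuples from one expert."""
    counts = [0, 0, 0, 0]
    for (q05, q50, q95), x in zip(quantiles, realizations):
        if x < q05:
            counts[0] += 1
        elif x < q50:
            counts[1] += 1
        elif x < q95:
            counts[2] += 1
        else:
            counts[3] += 1
    return counts

def calibration_score(quantiles, realizations):
    """exp(-N * KL(empirical || expected)); 1.0 means perfectly calibrated."""
    n = len(realizations)
    emp = [c / n for c in interval_counts(quantiles, realizations)]
    kl = sum(p * math.log(p / q) for p, q in zip(emp, EXPECTED) if p > 0)
    return math.exp(-n * kl)

# Two hypothetical experts, five calibration questions with known answers.
# Expert A gives honest, wide intervals; Expert B is overconfident and off.
realizations = [10.0, 22.0, 5.0, 48.0, 31.0]
expert_a = [(8, 11, 14), (18, 23, 27), (3, 5, 8), (40, 47, 55), (25, 30, 36)]
expert_b = [(9.9, 10.0, 10.1), (30, 31, 32), (5.5, 6, 6.5), (60, 61, 62), (1, 2, 3)]

scores = [calibration_score(e, realizations) for e in (expert_a, expert_b)]
weights = [s / sum(scores) for s in scores]
print(weights)  # the well-calibrated expert dominates the aggregate
```

Even this toy version shows the core mechanism: the overconfident expert's narrow, miscalibrated intervals earn a near-zero weight, so the "Decision Maker" is driven by demonstrated performance rather than seniority or assertiveness.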

Using Data to Support Decisions

ICH Q9(R1) emphasizes the importance of basing risk management decisions on scientific knowledge and data. The guideline encourages organizations to:

  • Develop robust knowledge management systems to capture and maintain product and process knowledge
  • Create standardized repositories for technical data and information
  • Implement systems to collect and convert data into usable knowledge
  • Gather and analyze relevant data to support risk-based decisions
  • Use quantitative methods where feasible, such as statistical models or predictive analytics

Specific approaches for using data in QRM may include:

  • Analyzing historical data on process performance, deviations, and quality issues to inform risk assessments
  • Employing statistical process control and process capability analysis to evaluate and monitor risks
  • Utilizing data mining and machine learning techniques to identify patterns and potential risks in large datasets
  • Implementing real-time data monitoring systems to enable proactive risk management
  • Conducting formal data quality assessments to ensure decisions are based on reliable information

Digitalization and emerging technologies can support data-driven decision making, but remember that validation requirements for these technologies should not be overlooked.

Improving Risk Assessment Tools

The design of risk assessment tools plays a critical role in minimizing subjectivity. Tools with well-defined scoring criteria and clear guidance on interpreting results can reduce variability in how risks are evaluated. For example, using quantitative methods where feasible—such as statistical models or predictive analytics—can provide more objective insights compared to qualitative scoring systems.

Organizations should also validate their tools periodically to ensure they remain fit-for-purpose and aligned with current regulatory expectations.

Leverage Good Risk Questions

A well-formulated risk question can significantly help reduce subjectivity in quality risk management (QRM) activities. Here’s how a good risk question contributes to reducing subjectivity:

Clarity and Focus

A good risk question provides clarity and focus for the risk assessment process. By clearly defining the scope and context of the risk being evaluated, it helps align all participants on what specifically needs to be assessed. This alignment reduces the potential for individual interpretations and subjective assumptions about the risk scenario.

Specific and Measurable Terms

Effective risk questions use specific and measurable terms rather than vague or ambiguous language. For example, instead of asking “What are the risks to product quality?”, a better question might be “What are the potential causes of out-of-specification dissolution results for Product X in the next 6 months?”. The specificity in the latter question helps anchor the assessment in objective, measurable criteria.

Factual Basis

A well-crafted risk question encourages the use of factual information and data rather than opinions or guesses. It should prompt the risk assessment team to seek out relevant data, historical information, and scientific knowledge to inform their evaluation. This focus on facts and evidence helps minimize the influence of personal biases and subjective judgments.

Standardized Approach

Using a consistent format for risk questions across different assessments promotes a standardized approach to risk identification and analysis. This consistency reduces variability in how risks are framed and evaluated, thereby decreasing the potential for subjective interpretations.

Objective Criteria

Good risk questions often incorporate or imply objective criteria for risk evaluation. For instance, a question like “What factors could lead to a deviation from the acceptable range of 5-10% for impurity Y?” sets clear, objective parameters for the assessment, reducing the room for subjective interpretation of what constitutes a significant risk.

Promotes Structured Thinking

Well-formulated risk questions encourage structured thinking about potential hazards, their causes, and consequences. This structured approach helps assessors focus on objective factors and causal relationships rather than relying on gut feelings or personal opinions.

Facilitates Knowledge Utilization

A good risk question should prompt the assessment team to utilize available knowledge effectively. It encourages the team to draw upon relevant data, past experiences, and scientific understanding, thereby grounding the assessment in objective information rather than subjective impressions.

By crafting risk questions that embody these characteristics, QRM practitioners can significantly reduce the subjectivity in risk assessments, leading to more reliable, consistent, and scientifically sound risk management decisions.

Fostering a Culture of Continuous Improvement

Reducing subjectivity in QRM is an ongoing process that requires a commitment to continuous improvement. Organizations should regularly review their QRM practices to identify areas for enhancement and incorporate feedback from stakeholders. Investing in training programs that build competencies in risk assessment methodologies and decision-making frameworks is essential for sustaining progress.

Moreover, fostering a culture that values transparency, collaboration, and accountability can empower teams to address subjectivity proactively. Encouraging open discussions about uncertainties or disagreements during risk assessments can lead to more robust outcomes.

Conclusion

The revisions introduced in ICH Q9(R1) represent a significant step forward in addressing long-standing challenges associated with subjectivity in QRM. By leveraging knowledge management, implementing structured decision-making processes, addressing cognitive biases, enhancing formality levels appropriately, and improving risk assessment tools, organizations can align their practices with the updated guidelines while ensuring more reliable and science-based outcomes.

It has been two years, and it is long past time to be addressing these practices in your risk management process and quality system.

Ultimately, reducing subjectivity not only strengthens compliance with regulatory expectations but also enhances the quality of pharmaceutical products and safeguards patient safety—a goal that lies at the heart of effective Quality Risk Management.

Assessing the Strength of Knowledge: A Framework for Decision-Making

ICH Q9(R1) emphasizes that knowledge is fundamental to effective risk management. The guideline states that “QRM is part of building knowledge and understanding risk scenarios, so that appropriate risk control can be decided upon for use during the commercial manufacturing phase.”

We need to recognize the inverse relationship between knowledge and uncertainty in risk assessment. ICH Q9(R1) notes that uncertainty may be reduced “via effective knowledge management, which enables accumulated and new information (both internal and external) to be used to support risk-based decisions throughout the product lifecycle.”

To gauge confidence in a risk assessment, we first need to assess the strength of our knowledge.

The Spectrum of Knowledge Strength

Knowledge strength can be categorized into three levels: weak, medium, and strong. Each level is determined by specific criteria that assess the reliability, consensus, and depth of understanding surrounding a particular subject.

Indicators of Weak Knowledge

Knowledge is considered weak if it exhibits one or more of the following characteristics:

1. Oversimplified Assumptions: The foundations of the knowledge rely on strong simplifications that may not accurately represent reality.
2. Lack of Reliable Data: There is little to no data available, or the existing information is highly unreliable or irrelevant.
3. Expert Disagreement: There is significant disagreement among experts in the field.
4. Poor Understanding of Phenomena: The underlying phenomena are poorly understood, and available models are either non-existent or known to provide inaccurate predictions.
5. Unexamined Knowledge: The knowledge has not been thoroughly scrutinized, potentially overlooking critical “unknown knowns.”

Hallmarks of Strong Knowledge

On the other hand, knowledge is deemed strong when it meets all of the following criteria (where relevant):

1. Reasonable Assumptions: The assumptions made are considered very reasonable and well-grounded.
2. Abundant Reliable Data: Large amounts of reliable and relevant data or information are available.
3. Expert Consensus: There is broad agreement among experts in the field.
4. Well-Understood Phenomena: The phenomena involved are well understood, and the models used provide predictions with the required accuracy.
5. Thoroughly Examined: The knowledge has been rigorously examined and tested.

The Middle Ground: Medium Strength Knowledge

Cases that fall between weak and strong are classified as medium strength knowledge. This category can be defined flexibly, allowing a broader range of scenarios to be considered strong; for example, knowledge could be classified as strong if at least one of the strong criteria is met while none of the weak criteria are present.

Strong vs Weak Knowledge

A Simplified Approach

For practical applications, a simplified version of this framework can be used:

  • Strong: All criteria for strong knowledge are met.
  • Medium: One or two criteria for strong knowledge are not met.
  • Weak: Three or more criteria for strong knowledge are not met.
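This simplified scheme is easy to encode; the criterion names below paraphrase the five hallmarks of strong knowledge listed earlier:

```python
# Sketch of the simplified knowledge-strength scheme.
# Criterion names paraphrase the five "strong knowledge" hallmarks.
STRONG_CRITERIA = [
    "reasonable_assumptions",
    "abundant_reliable_data",
    "expert_consensus",
    "well_understood_phenomena",
    "thoroughly_examined",
]

def knowledge_strength(met: set[str]) -> str:
    """Classify by how many strong-knowledge criteria are NOT met."""
    unmet = len([c for c in STRONG_CRITERIA if c not in met])
    if unmet == 0:
        return "strong"
    if unmet <= 2:
        return "medium"
    return "weak"

print(knowledge_strength(set(STRONG_CRITERIA)))                        # strong
print(knowledge_strength({"expert_consensus", "thoroughly_examined"}))  # weak
```

Recording which specific criteria were unmet, not just the resulting label, makes the assessment auditable and points directly at the knowledge gaps to close.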

Implications for Decision-Making

Understanding the strength of our knowledge is crucial for effective decision-making. Strong knowledge provides a solid foundation for confident choices, while weak knowledge signals the need for caution and further investigation.

When faced with weak knowledge:

  • Seek additional information or expert opinions
  • Consider multiple scenarios and potential outcomes
  • Implement risk mitigation strategies

When working with strong knowledge:

  • Make decisions with greater confidence
  • Focus on implementation and optimization
  • Monitor outcomes to validate and refine understanding

Knowledge Strength and Uncertainty

The concept of knowledge strength aligns closely with the levels of uncertainty.

Strong Knowledge and Low Uncertainty (Levels 1-2)

Strong knowledge typically corresponds to lower levels of uncertainty:

  • Level 1 Uncertainty: This aligns closely with strong knowledge, where outcomes can be estimated with reasonable accuracy within a single system model. Strong knowledge is characterized by reasonable assumptions, abundant reliable data, and well-understood phenomena, which enable accurate predictions.
  • Level 2 Uncertainty: While presenting alternative futures, this level still operates within a single system where probability estimates can be applied confidently. Strong knowledge often allows for this level of certainty, as it involves broad expert agreement and thoroughly examined information.

Medium Knowledge and Moderate Uncertainty (Level 3)

Medium strength knowledge often corresponds to Level 3 uncertainty:

  • Level 3 Uncertainty: This level involves “a multiplicity of plausible futures” with multiple interacting systems, but still within a known range of outcomes. Medium knowledge strength might involve some gaps or disagreements but still provides a foundation for identifying potential outcomes.

Weak Knowledge and Deep Uncertainty (Level 4)

Weak knowledge aligns most closely with the deepest level of uncertainty:

  • Level 4 Uncertainty: This level leads to an “unknown future” where we don’t understand the system and are aware of crucial unknowns. Weak knowledge, characterized by oversimplified assumptions, lack of reliable data, and poor understanding of phenomena, often results in this level of deep uncertainty.

Implications for Decision-Making

1. When knowledge is strong and uncertainty is low (Levels 1-2), decision-makers can rely more confidently on predictions and probability estimates.
2. As knowledge strength decreases and uncertainty increases (Levels 3-4), decision-makers must adopt more flexible and adaptive approaches to account for a wider range of possible futures.
3. The principle that “uncertainty should always be considered at the deepest proposed level” unless proven otherwise aligns with the cautious approach of assessing knowledge strength. This ensures that potential weaknesses in knowledge are not overlooked.
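The mapping above, including the default-to-deepest principle for knowledge that has not been assessed, can be sketched as:

```python
# Sketch mapping knowledge strength to the uncertainty levels above.
# Defaulting to level 4 reflects the principle that uncertainty should
# be considered at the deepest proposed level unless proven otherwise.
LEVELS = {"strong": (1, 2), "medium": (3, 3), "weak": (4, 4)}

def uncertainty_range(strength: str) -> tuple[int, int]:
    """Return the (lowest, highest) plausible uncertainty level."""
    return LEVELS.get(strength, (4, 4))

print(uncertainty_range("strong"))      # strong knowledge: levels 1-2
print(uncertainty_range("unassessed"))  # unknown strength: default to level 4
```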

Conclusion

By systematically evaluating the strength of our knowledge using this framework, we can make more informed decisions, identify areas that require further investigation, and better understand the limitations of our current understanding. Remember, the goal is not always to achieve perfect knowledge but to recognize the level of certainty we have and act accordingly.

Assessing the Quality of Our Risk Management Activities

Twenty years on, risk management in the pharmaceutical world continues to be challenging. We must ensure that risk assessments are systematic, structured, and based on scientific knowledge. A large part of the ICH Q9(R1) revision was written to address continued struggles with subjectivity, formality, and decision-making. And quite frankly, it’s clear to me that we, as an industry, are still working to absorb those messages two years later.

A big challenge is that we struggle to measure the effectiveness of our risk assessments. This is a great place for a rubric.

Luckily, we have a good tool out there to adopt: the Risk Analysis Quality Test (RAQT1.0), developed by the Society for Risk Analysis (SRA). This comprehensive framework is designed to evaluate and improve the quality of risk assessments. We can apply this tool to meet the requirements of the International Council for Harmonisation (ICH) Q9, which outlines quality risk management principles for the pharmaceutical industry. From there, we can drive continued improvement in our risk management activities.

Components of RAQT1.0

The Risk Analysis Quality Test consists of 76 questions organized into 15 categories:

  • Framing the Analysis and Its Interface with Decision Making
  • Capturing the Risk Generating Process (RGP)
  • Communication
  • Stakeholder Involvement
  • Assumptions and Scope Boundary Issues
  • Proactive Creation of Alternative Courses of Action
  • Basis of Knowledge
  • Data Limitations
  • Analysis Limitations
  • Uncertainty
  • Consideration of Alternative Analysis Approaches
  • Robustness and Resilience of Action Strategies
  • Model and Analysis Validation and Documentation
  • Reporting
  • Budget and Schedule Adequacy

Application to ICH Q9 Requirements

ICH Q9 emphasizes the importance of a systematic and structured risk assessment process. The RAQT can be used to ensure that risk assessments are thorough and meet quality standards. For example, Category G (Basis of Knowledge) and Category H (Data Limitations) help in evaluating the scientific basis and data quality of the risk assessment, aligning with ICH Q9’s requirement for using available knowledge and data.

The RAQT’s Category B (Capturing the Risk Generating Process) and Category C (Communication) can help in identifying and communicating risks effectively. This aligns with ICH Q9’s requirement to identify potential risks based on scientific knowledge and understanding of the process.

Categories such as Category I (Analysis Limitations) and Category J (Uncertainty) in the RAQT help in analyzing the risks and addressing uncertainties, which is a key aspect of ICH Q9. These categories ensure that the analysis is robust and considers all relevant factors.

The RAQT’s Category A (Framing the Analysis and Its Interface with Decision Making) and Category F (Proactive Creation of Alternative Courses of Action) are crucial for evaluating risks and developing mitigation strategies. This aligns with ICH Q9’s requirement to evaluate risks and determine the need for risk reduction.

Categories like Category L (Robustness and Resilience of Action Strategies) and Category M (Model and Analysis Validation and Documentation) in the RAQT help in ensuring that the risk control measures are robust and well-documented. This is consistent with ICH Q9’s emphasis on implementing and reviewing controls.

Category D (Stakeholder Involvement) of the RAQT ensures that stakeholders are engaged in the risk management process, which is a requirement under ICH Q9 for effective communication and collaboration.

The RAQT can be applied both retrospectively and prospectively, allowing for the evaluation of past risk assessments and the planning of future ones. This aligns with ICH Q9’s requirement for periodic review and continuous improvement of the risk management process.

Creating a Rubric

To make this actionable we need a tool, a rubric, that allows folks to evaluate what good looks like. I would insert this tool into the quality oversight of risk management.

Category A: Framing the Analysis and Its Interface With Decision Making

| Criteria | Excellent (4) | Good (3) | Fair (2) | Poor (1) |
| --- | --- | --- | --- | --- |
| Problem Definition | Clearly and comprehensively defines the problem, including all relevant aspects and stakeholders | Adequately defines the problem with most relevant aspects considered | Partially defines the problem with some key aspects missing | Poorly defines the problem or misses critical aspects |
| Analytical Approach | Selects and justifies an optimal analytical approach, demonstrating deep understanding of methodologies | Chooses an appropriate analytical approach with reasonable justification | Selects a somewhat relevant approach with limited justification | Chooses an inappropriate approach or provides no justification |
| Data Collection and Management | Thoroughly identifies all necessary data sources and outlines a comprehensive data management plan | Identifies most relevant data sources and provides an adequate data management plan | Identifies some relevant data sources and offers a basic data management plan | Fails to identify key data sources or lacks a coherent data management plan |
| Stakeholder Identification | Comprehensively identifies all relevant stakeholders and their interests | Identifies most key stakeholders and their primary interests | Identifies some stakeholders but misses important ones or their interests | Fails to identify major stakeholders or their interests |
| Decision-Making Context | Provides a thorough analysis of the decision-making context, including constraints and opportunities | Adequately describes the decision-making context with most key factors considered | Partially describes the decision-making context, missing some important factors | Poorly describes or misunderstands the decision-making context |
| Alignment with Organizational Goals | Demonstrates perfect alignment between the analysis and broader organizational objectives | Shows good alignment with organizational goals, with minor gaps | Partially aligns with organizational goals, with significant gaps | Fails to align with or contradicts organizational goals |
| Communication Strategy | Develops a comprehensive strategy for communicating results to all relevant decision-makers | Outlines a good communication strategy covering most key decision-makers | Provides a basic communication plan with some gaps | Lacks a clear strategy for communicating results to decision-makers |

This rubric provides a framework for assessing the quality of work in framing an analysis and its interface with decision-making. It covers key aspects such as problem definition, analytical approach, data management, stakeholder consideration, decision-making context, alignment with organizational goals, and communication strategy. Each criterion is evaluated on a scale from 1 (Poor) to 4 (Excellent), allowing for nuanced assessment of performance in each area.

To use this rubric effectively:

1. Adjust the criteria and descriptions as needed to fit your specific context or requirements.
2. Ensure that the expectations for each level (Excellent, Good, Fair, Poor) are clear and distinguishable.

My next steps will be to add specific examples or indicators for each level to provide more guidance to both assessors and those being assessed.

I also may, depending on internal needs, assign different weights to each criterion based on their relative importance in a specific context. In this case I think each ends up being pretty similar.

I would then go and add the other sections. For example, here is category B with some possible weighting.

        Category B: Capturing the Risk Generating Process (RGP)

        Component | Weight Factor | Excellent | Satisfactory | Needs Improvement | Poor
        B1. Comprehensiveness | 4 | The analysis includes: i) A structured taxonomy of hazards/events demonstrating comprehensiveness ii) Each scenario spelled out with causes and types of change iii) Explicit addressing of potential “Black Swan” events iv) Clear description of implications of such events for risk management | The analysis includes 3 out of 4 elements from the Excellent criteria, with minor gaps that do not significantly impact understanding | The analysis includes only 2 out of 4 elements from the Excellent criteria, or has significant gaps in comprehensiveness | The analysis includes 1 or fewer elements from the Excellent criteria, severely lacking in comprehensiveness
        B2. Basic Structure of RGP | 2 | Clearly identifies and accounts for the basic structure of the RGP (e.g. linear, chaotic, complex adaptive) AND Uses appropriate mathematical structures (e.g. linear, quadratic, exponential) that match the RGP structure | Identifies the basic structure of the RGP BUT does not fully align mathematical structures with the RGP | Attempts to identify the RGP structure but does so incorrectly or incompletely OR Uses mathematical structures that do not align with the RGP | Does not identify or account for the basic structure of the RGP
        B3. Complexity of RGP | 3 | Lists all important causal and associative links in the RGP AND Demonstrates how each link is accounted for in the analysis | Lists most important causal and associative links in the RGP AND Demonstrates how most links are accounted for in the analysis | Lists some causal and associative links but misses key elements OR Does not adequately demonstrate how links are accounted for in the analysis | Does not list causal and associative links or account for them in the analysis
        B4. Early Warning Detection | 3 | Includes a clear process for detecting early warnings of potential surprising risk aspects, beyond just concrete events | Includes a process for detecting early warnings, but it may be limited in scope or not fully developed | Mentions the need for early warning detection but does not provide a clear process | Does not address early warning detection
        B5. System Changes | 2 | Fully considers the possibility of system changes AND Establishes adequate mechanisms to detect those changes | Considers the possibility of system changes BUT mechanisms to detect changes are not fully developed | Mentions the possibility of system changes but does not adequately consider or establish detection mechanisms | Does not consider or address the possibility of system changes
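
The weighted components above can be rolled up into a single category score. Here is a minimal sketch, assuming the Category B weights shown, a 1-4 rating scale with Poor = 1 and Excellent = 4, and a simple weighted average as the roll-up rule (the roll-up rule is my illustrative choice, not something mandated by the rubric):

```python
# Weighted rubric scoring sketch. The weights mirror Category B above
# (B1=4, B2=2, B3=3, B4=3, B5=2); the weighted-average roll-up is an
# illustrative assumption, not part of any standard.

WEIGHTS = {"B1": 4, "B2": 2, "B3": 3, "B4": 3, "B5": 2}

def weighted_score(ratings: dict) -> float:
    """Weighted average rating (1.0-4.0) across all components."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS) / total_weight

# Example: strong comprehensiveness, weaker structural work.
ratings = {"B1": 4, "B2": 2, "B3": 3, "B4": 3, "B5": 2}
print(weighted_score(ratings))  # -> 3.0
```

Because B1 carries double the weight of B2 or B5, a gap in comprehensiveness pulls the category score down faster than a gap elsewhere, which is exactly the behavior the weighting is meant to encode.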

          I definitely need to go back and add more around structure requirements. The SRA RAQT tool needs some more interpretation here.

          Category C: Risk Communication

          Component | Weight Factor | Excellent | Satisfactory | Needs Improvement | Poor
          C1. Integration of Communication into Risk Analysis | 3 | Communication is fully integrated into the risk analysis following established norms. All aspects of the methodology are clearly addressed including context establishment, risk assessment (identification, analysis, evaluation), and risk treatment. There is clear evidence of pre-assessment, management, appraisal, characterization and evaluation. Knowledge about the risk is thoroughly categorized. | Communication is integrated into the risk analysis following most aspects of established norms. Most key elements of methodologies like ISO 31000 or IRGC are addressed, but some minor aspects may be missing or unclear. Knowledge about the risk is categorized, but may lack some detail. | Communication is partially integrated into the risk analysis, but significant aspects of established norms are missing. Only some elements of methodologies like ISO 31000 or IRGC are addressed. Knowledge categorization about the risk is incomplete or unclear. | There is little to no evidence of communication being integrated into the risk analysis following established norms. Methodologies like ISO 31000 or IRGC are not followed. Knowledge about the risk is not categorized.
          C2. Adequacy of Risk Communication | 3 | All considerations for effective risk communication have been applied to ensure adequacy between analysts and decision makers, analysts and other stakeholders, and decision makers and stakeholders. There is clear evidence that all parties agree the communication is adequate. | Most considerations for effective risk communication have been applied. Communication appears adequate between most parties, but there may be minor gaps or areas where agreement on adequacy is not explicitly stated. | Some considerations for effective risk communication have been applied, but there are significant gaps. Communication adequacy is questionable between one or more sets of parties. There is limited evidence of agreement on communication adequacy. | Few to no considerations for effective risk communication have been applied. There is no evidence of adequate communication between analysts, decision makers, and stakeholders. There is no indication of agreement on communication adequacy.

          Category D: Stakeholder Involvement

          Criteria | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1)
          Stakeholder Identification | 4 | All relevant stakeholders are systematically and comprehensively identified | Most relevant stakeholders are identified, with minor omissions | Some relevant stakeholders are identified, but significant groups are missed | Few or no relevant stakeholders are identified
          Stakeholder Consultation | 3 | All identified stakeholders are thoroughly consulted, with their perceptions and concerns fully considered | Most identified stakeholders are consulted, with their main concerns considered | Some stakeholders are consulted, but consultation is limited in scope or depth | Few or no stakeholders are consulted
          Stakeholder Engagement | 3 | Stakeholders are actively engaged throughout the entire risk management process, including problem framing, decision-making, and implementation | Stakeholders are engaged in most key stages of the risk management process | Stakeholders are engaged in some aspects of the risk management process, but engagement is inconsistent | Stakeholders are minimally engaged or not engaged at all in the risk management process
          Effectiveness of Involvement | 2 | All stakeholders would agree that they were effectively consulted and engaged | Most stakeholders would agree that they were adequately consulted and engaged | Some stakeholders may feel their involvement was insufficient or ineffective | Most stakeholders would likely feel their involvement was inadequate or ineffective

          Category E: Assumptions and Scope Boundary Issues

          Criterion | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1)
          E1. Important assumptions and implications listed | 4 | All important assumptions and their implications for risk management are systematically listed in clear language understandable to decision makers. Comprehensive and well-organized. | Most important assumptions and implications are listed in language generally clear to decision makers. Some minor omissions or lack of clarity. | Some important assumptions and implications are listed, but significant gaps exist. Language is not always clear to decision makers. | Few or no important assumptions and implications are listed. Language is unclear or incomprehensible to decision makers.
          E2. Risks of assumption deviations evaluated | 3 | Risks of all significant assumptions deviating from the actual Risk Generating Process are thoroughly evaluated. Consequences and implications are clearly communicated to decision makers. | Most risks of significant assumption deviations are evaluated. Consequences and implications are generally communicated to decision makers, with minor gaps. | Some risks of assumption deviations are evaluated, but significant gaps exist. Communication to decision makers is incomplete or unclear. | Few or no risks of assumption deviations are evaluated. Little to no communication of consequences and implications to decision makers.
          E3. Scope boundary issues and implications listed | 3 | All important scope boundary issues and their implications for risk management are systematically listed in clear language understandable to decision makers. Comprehensive and well-organized. | Most important scope boundary issues and implications are listed in language generally clear to decision makers. Some minor omissions or lack of clarity. | Some important scope boundary issues and implications are listed, but significant gaps exist. Language is not always clear to decision makers. | Few or no important scope boundary issues and implications are listed. Language is unclear or incomprehensible to decision makers.

          Category F: Proactive Creation of Alternative Courses of Action

          Criteria | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1)
          Systematic generation of alternatives | 4 | A comprehensive and structured process is used to systematically generate a wide range of alternative courses of action, going well beyond initially considered options | A deliberate process is used to generate multiple alternative courses of action beyond those initially considered | Some effort is made to generate alternatives, but the process is not systematic or comprehensive | Little to no effort is made to generate alternatives beyond those initially considered
          Goal-focused creation | 3 | All generated alternatives are clearly aligned with and directly address the stated goals of the analysis | Most generated alternatives align with the stated goals of the analysis | Some generated alternatives align with the goals, but others seem tangential or unrelated | Generated alternatives (if any) do not align with or address the stated goals
          Consideration of robust/resilient options | 3 | Multiple robust and resilient alternatives are developed to address various uncertainty scenarios | At least one robust or resilient alternative is developed to address uncertainty | Robustness and resilience are considered, but not fully incorporated into alternatives | Robustness and resilience are not considered in alternative generation
          Examination of unintended consequences | 2 | Thorough examination of potential unintended consequences for each alternative, including action-reaction spirals | Some examination of potential unintended consequences for most alternatives | Limited examination of unintended consequences for some alternatives | No consideration of potential unintended consequences
          Documentation of alternative creation process | 1 | The process of alternative generation is fully documented, including rationale for each alternative | The process of alternative generation is mostly documented | The process of alternative generation is partially documented | The process of alternative generation is not documented

          Category G: Basis of Knowledge

          Criterion | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1)
          G1. Characterization of knowledge basis | 4 | All inputs are clearly characterized (empirical, expert elicitation, testing, modeling, etc.). Distinctions between broadly accepted and novel analyses are explicitly stated. | Most inputs are characterized, with some minor omissions. Distinctions between accepted and novel analyses are mostly clear. | Some inputs are characterized, but significant gaps exist. Limited distinction between accepted and novel analyses. | Little to no characterization of knowledge basis. No distinction between accepted and novel analyses.
          G2. Strength of knowledge adequacy | 3 | Strength of knowledge is thoroughly characterized in terms of its adequacy to support risk management decisions. Limitations are clearly articulated. | Strength of knowledge is mostly characterized, with some minor gaps in relating to decision support adequacy. | Limited characterization of knowledge strength. Unclear how it relates to decision support adequacy. | No characterization of knowledge strength or its adequacy for decision support.
          G3. Communication of knowledge limitations | 4 | All knowledge limitations and their implications for risk management are clearly communicated to decision makers in understandable language. | Most knowledge limitations and implications are communicated, with minor clarity issues. | Some knowledge limitations are communicated, but significant gaps exist in clarity or completeness. | Knowledge limitations are not communicated or are presented in a way decision makers cannot understand.
          G4. Consideration of surprises and unforeseen events | 3 | Thorough consideration of potential surprises and unforeseen events (Black Swans). Their importance is clearly articulated. | Consideration of surprises and unforeseen events is present, with some minor gaps in articulating their importance. | Limited consideration of surprises and unforeseen events. Their importance is not clearly articulated. | No consideration of surprises or unforeseen events.
          G5. Conflicting expert opinions | 2 | All conflicting expert opinions are systematically considered and reported to decision makers as a source of uncertainty. | Most conflicting expert opinions are considered and reported, with minor omissions. | Some conflicting expert opinions are considered, but significant gaps exist in reporting or consideration. | Conflicting expert opinions are not considered or reported.
          G6. Consideration of unconsidered knowledge | 2 | Explicit measures are implemented to check for knowledge outside the analysis group (e.g., independent review). | Some measures are in place to check for outside knowledge, but they may not be comprehensive. | Limited consideration of knowledge outside the analysis group. No formal measures in place. | No consideration of knowledge outside the analysis group.
          G7. Consideration of disregarded low-probability events | 1 | Explicit measures are implemented to check for events disregarded due to low probabilities based on critical assumptions. | Some consideration of low-probability events, but measures may not be comprehensive. | Limited consideration of low-probability events. No formal measures in place. | No consideration of events disregarded due to low probabilities.

          Once complete, this rubric is a tool to guide assessment and provide feedback. It should be flexible enough to accommodate the unique aspects of individual work while maintaining consistent standards across evaluations. I would embed it in the quality approval step.

          Requirements for Knowledge Management

          I was recently reviewing the updated Q9(R1) Annex 1, the Q8/Q9/Q10 Questions & Answers (R5) related to ICH Q9(R1) Quality Risk Management (QRM), approved on 30 October 2024, and what it says about knowledge management. While there are some fun new questions asked, I particularly like “Do regulatory agencies expect to see a formal knowledge management approach during inspections?”

          To which the answer was: “No. There is no regulatory requirement for a formal knowledge management system. However, it is expected that knowledge from different processes and systems is appropriately utilised. Note: ‘formal’ in this context means a structured approach using a recognised methodology or (IT-) tool, executing and documenting something in a transparent and detailed manner.”

          What does “appropriately utilised” mean? What is the standard for determining it? The agencies are quite willing to leave that to you to figure out.

          As usual I think it is valuable to agree upon a few core assumptions for what appropriate utilization of knowledge management might look like.

          Accessibility and Sharing

          Knowledge should be easily accessible to those who need it within the organization. This means:

          • Implementing centralized knowledge repositories or databases
          • Ensuring information is structured and organized for easy retrieval
          • Fostering a culture of knowledge sharing among employees
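
To make the accessibility points concrete, here is a minimal sketch of a tag-indexed repository. The `KnowledgeEntry` and `Repository` names are hypothetical, invented for illustration; in practice this capability would live in a document management or knowledge management platform rather than hand-rolled code:

```python
# Toy centralized knowledge repository: entries are indexed by tag so
# that knowledge is structured and easy to retrieve. Illustrative only;
# the entry content below is made up.

from dataclasses import dataclass, field

@dataclass
class KnowledgeEntry:
    title: str
    content: str
    tags: set = field(default_factory=set)

class Repository:
    def __init__(self) -> None:
        self._entries = []
        self._index = {}  # tag -> list of entry positions

    def add(self, entry: KnowledgeEntry) -> None:
        pos = len(self._entries)
        self._entries.append(entry)
        for tag in entry.tags:
            self._index.setdefault(tag, []).append(pos)

    def find(self, tag: str) -> list:
        """Return every entry carrying the given tag."""
        return [self._entries[i] for i in self._index.get(tag, [])]

repo = Repository()
repo.add(KnowledgeEntry("Lyophilization lessons learned",
                        "Shelf temperature ramp contributed to cake collapse.",
                        {"lyophilization", "lessons-learned"}))
print(len(repo.find("lessons-learned")))  # -> 1
```

The point of the index is the first bullet above: knowledge that cannot be retrieved by the people who need it is not being appropriately utilized, no matter how carefully it was captured.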

          Relevance and Accuracy

          Appropriately utilized knowledge is:

          • Up-to-date and accurate
          • Relevant to the specific needs of the organization and its employees
          • Regularly reviewed and updated to maintain its value

          Integration into Processes

          Knowledge should be integrated into the organization’s workflows and decision-making processes:

          • Incorporated into standard operating procedures
          • Used to inform strategic planning and problem-solving
          • Applied to improve efficiency and productivity

          Measurable Impact

          Appropriate utilization of knowledge should result in tangible benefits:

          • Improved decision-making
          • Increased productivity and efficiency
          • Enhanced innovation and problem-solving capabilities
          • Reduced duplication of efforts

          Continuous Improvement

          Appropriate utilization of knowledge includes a commitment to ongoing improvement:

          • Regular assessment of knowledge management processes
          • Gathering feedback from users
          • Adapting strategies based on changing organizational needs

          Process Mapping to Process Modeling – The Next Step

          In the last two posts (here and here) I’ve been talking about how process mapping is a valuable set of techniques for creating a visual representation of the processes within an organization. These are fundamental tools, and every quality professional should be fluent in them.

          The next level of maturity is process modeling, which involves creating a digital representation of a process that can be analyzed, simulated, and optimized. It is far more comprehensive and, frankly, very hard to do and maintain.

          Process Map | Process Model | Why is this Important?
          Notation ambiguous | Standardized notation convention | Standardized notation conventions for process modeling, such as Business Process Model and Notation (BPMN), drive clarity, consistency, communication and process improvements.
          Precision usually lacking | As precise as needed | Precision drives model accuracy and effectiveness. Too often process maps are all over the place.
          Icons (representing process components) made up or loosely defined | Icons are objectively defined and standardized | The use of common modeling conventions ensures that all process creators represent models consistently, regardless of who in the organization created them.
          Relationship of icons portrayed visually | Icon relationships definite and explained in annotations, process model glossary, and process narratives | Reducing ambiguity, improving standardization and easing knowledge transfer are the whole goal here. And frankly, the average process map falls really short.
          Limited to portrayal of simple ideas | Can depict appropriate complexity | We need to strive to represent complex workflows in a visually comprehensible manner, striking a balance between detail and clarity. The value of scalable detail cannot be overstated.
          One-time snapshot | Can grow, evolve, mature | How many times have you sat down to a project and started fresh with a process map? Enough said.
          May be created with simple drawing tools | Created with a tool appropriate to the need | The right tool for the right job.
          Difficult to use for the simplest manual simulations | May provide manual or automated process simulation | In a world of more and more automation, being able to do a good process simulation is critical.
          Difficult to link with related diagram or map | Vertical and horizontal linking, showing relationships among processes and different process levels | Processes don’t stand alone; they are interconnected in a variety of ways. Being able to move up and down in detail and across the process family is great for diagnosing problems.
          Uses simple file storage with no inherent relationships | Uses a repository of related models within a BPM system | It is fairly common to do process maps and keep them separate, maybe in an SOP, but more often in a dozen different, unconnected places, making it difficult to put your hands on them. Process modeling maturity moves us toward a library approach, which drives knowledge management.
          Appropriate for quick capture of ideas | Appropriate for any level of process capture, analysis and design | Processes are living and breathing; our tools should take that into account.
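
One row in the comparison deserves a concrete illustration: process simulation. A toy Monte Carlo model of a two-step document review process shows the basic idea; the step durations and the 30% rework probability are invented assumptions for illustration, and real BPMN simulation tools offer far richer semantics (resources, queues, branching gateways):

```python
# Toy process simulation: estimate average cycle time for a technical
# review step followed by QA approval, with a 30% chance of rework.
# All durations and probabilities are illustrative, not real data.

import random

def simulate_cycle_time(runs: int = 10_000, seed: int = 42) -> float:
    """Average end-to-end days across simulated process instances."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        review = rng.uniform(1.0, 5.0)                 # technical review
        rework = rng.uniform(0.0, 2.0) if rng.random() < 0.3 else 0.0
        approval = rng.uniform(0.5, 2.0)               # QA approval
        total += review + rework + approval
    return total / runs

print(round(simulate_cycle_time(), 2))  # roughly 4.5 days on average
```

Even a sketch like this lets you ask "what happens to cycle time if rework drops to 10%?", a question a static process map simply cannot answer.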

          This is all about moving to a process repository and away from a document mindset. I think it is a great shame that the eQMS players don’t consider this part of their core mission, largely because most quality units don’t see it as part of theirs. As quality leaders we should treat process management as critical for future success: it is profound knowledge, and utilizing it drives true improvement.