Reducing Subjectivity in Quality Risk Management: Aligning with ICH Q9(R1)

In a previous post, I discussed how overcoming subjectivity in risk management and decision-making requires fostering a culture of quality and excellence. This issue deserves continued evaluation and a continued push for improvement.

The revised ICH Q9(R1) guideline, finalized in January 2023, introduces critical updates to Quality Risk Management (QRM) practices, emphasizing the need to address subjectivity, enhance formality, improve risk-based decision-making, and manage product availability risks. These revisions aim to ensure that QRM processes are more science-driven, knowledge-based, and effective in safeguarding product quality and patient safety. Two years later, it remains important to continue building on key strategies for reducing subjectivity in QRM and aligning with the updated requirements.

Understanding Subjectivity in QRM

Subjectivity in QRM arises from personal opinions, biases, heuristics, or inconsistent interpretations of risks by stakeholders. This can impact every stage of the QRM process—from hazard identification to risk evaluation and mitigation. The revised ICH Q9(R1) explicitly addresses this issue by introducing a new subsection, “Managing and Minimizing Subjectivity,” which emphasizes that while subjectivity cannot be entirely eliminated, it can be controlled through structured approaches.

The guideline highlights that subjectivity often stems from poorly designed scoring systems, differing perceptions of hazards and risks among stakeholders, and cognitive biases. To mitigate these challenges, organizations must adopt robust strategies that prioritize scientific knowledge and data-driven decision-making.

Strategies to Reduce Subjectivity

Leveraging Knowledge Management

ICH Q9(R1) underscores the importance of knowledge management as a tool to reduce uncertainty and subjectivity in risk assessments. Effective knowledge management involves systematically capturing, organizing, and applying internal and external knowledge to inform QRM activities. This includes maintaining centralized repositories for technical data, fostering real-time information sharing across teams, and learning from past experiences through structured lessons-learned processes.

By integrating knowledge management into QRM, organizations can ensure that decisions are based on comprehensive data rather than subjective estimations. For example, using historical data on process performance or supplier reliability can provide objective insights into potential risks.

To integrate knowledge management (KM) more effectively into quality risk management (QRM), organizations can implement several strategies to ensure decisions are based on comprehensive data rather than subjective estimations:

Establish Robust Knowledge Repositories

Create centralized, easily accessible repositories for storing and organizing historical data, lessons learned, and best practices. These repositories should include:

  • Process performance data
  • Supplier reliability metrics
  • Deviation and CAPA records
  • Audit findings and inspection observations
  • Technology transfer documentation

By maintaining these repositories, organizations can quickly access relevant historical information when conducting risk assessments.
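As a simple illustration of how repository data can feed a risk assessment, the sketch below tallies hypothetical deviation records by process step. The record fields (`process_step`, `classification`) and the records themselves are invented for illustration; a real repository would be a validated system, not a Python list.

```python
from collections import Counter

# Hypothetical deviation records pulled from a centralized repository;
# the fields and values shown are illustrative only.
deviations = [
    {"id": "DEV-001", "process_step": "granulation", "classification": "major"},
    {"id": "DEV-002", "process_step": "compression", "classification": "minor"},
    {"id": "DEV-003", "process_step": "granulation", "classification": "major"},
    {"id": "DEV-004", "process_step": "coating", "classification": "minor"},
]

def deviation_frequency(records, classification=None):
    """Count deviations per process step, optionally filtered by classification."""
    filtered = [r for r in records
                if classification is None or r["classification"] == classification]
    return Counter(r["process_step"] for r in filtered)

# Objective input for a risk assessment: which steps generate major deviations?
print(deviation_frequency(deviations, classification="major"))
```

Even a query this simple replaces "I feel granulation is risky" with "granulation accounts for both major deviations in this data set".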

Implement Knowledge Mapping

Conduct knowledge mapping exercises to identify key sources of knowledge within the organization. Use the resulting knowledge maps to guide risk assessment teams to relevant information and expertise.

Develop Data Analytics Capabilities

Invest in data analytics tools and capabilities to extract meaningful insights from historical data. For example:

  • Use statistical process control to identify trends in manufacturing performance
  • Apply machine learning algorithms to predict potential quality issues based on historical patterns
  • Utilize data visualization tools to present complex risk data in an easily understandable format

These analytics can provide objective, data-driven insights into potential risks and their likelihood of occurrence.
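As an example of the first point, a minimal Shewhart-style individuals chart can be built from nothing but historical results. The assay values below are invented; a real implementation would also apply run rules and rational subgrouping.

```python
import statistics

# Illustrative historical assay results (% label claim); values are made up.
history = [99.1, 100.2, 99.8, 100.5, 99.6, 100.1, 99.9, 100.3, 99.7, 100.0]

mean = statistics.mean(history)
sigma = statistics.stdev(history)

# Shewhart-style control limits at +/- 3 standard deviations.
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

def out_of_control(value):
    """Flag a new result that falls outside the control limits."""
    return value > ucl or value < lcl

print(f"LCL={lcl:.2f}, UCL={ucl:.2f}")
print(out_of_control(101.9))  # well above the UCL for this data set
```

The point is that "is this result unusual?" gets answered by a pre-defined statistical rule rather than by whoever happens to be reviewing the batch record.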

Integrate KM into QRM Processes

Embed KM activities directly into QRM processes to ensure consistent use of available knowledge:

  • Include a knowledge gathering step at the beginning of risk assessments
  • Require risk assessment teams to document the sources of knowledge used in their analysis
  • Implement a formal process for capturing new knowledge generated during risk assessments

This integration helps ensure that all relevant knowledge is considered and that new insights are captured for future use.

Foster a Knowledge-Sharing Culture

Encourage a culture of knowledge sharing and collaboration within the organization:

  • Implement mentoring programs to facilitate the transfer of tacit knowledge
  • Establish communities of practice around key risk areas
  • Recognize and reward employees who contribute valuable knowledge to risk management efforts

By promoting knowledge sharing, organizations can tap into the collective expertise of their workforce to improve risk assessments.

Implementing Structured Risk-Based Decision-Making

The revised guideline introduces a dedicated section on risk-based decision-making, emphasizing the need for structured approaches that consider the complexity, uncertainty, and importance of decisions. Organizations should establish clear criteria for decision-making processes, define acceptable risk tolerance levels, and use evidence-based methods to evaluate options.

Structured decision-making tools can help standardize how risks are assessed and prioritized. Additionally, calibrating expert opinions through formal elicitation techniques can further reduce variability in judgments.

Addressing Cognitive Biases

Cognitive biases—such as overconfidence or anchoring—can distort risk assessments and lead to inconsistent outcomes. To address this, organizations should provide training on recognizing common biases and their impact on decision-making. Encouraging diverse perspectives within risk assessment teams can also help counteract individual biases.

For example, using cross-functional teams ensures that different viewpoints are considered when evaluating risks, leading to more balanced assessments. Regularly reviewing risk assessment outputs for signs of bias or inconsistencies can further enhance objectivity.

Enhancing Formality in QRM

ICH Q9(R1) introduces the concept of a “formality continuum,” which aligns the level of effort and documentation with the complexity and significance of the risk being managed. This approach allows organizations to allocate resources effectively by applying less formal methods to lower-risk issues while reserving rigorous processes for high-risk scenarios.

For instance, routine quality checks may require minimal documentation compared to a comprehensive risk assessment for introducing new manufacturing technologies. By tailoring formality levels appropriately, organizations can ensure consistency while avoiding unnecessary complexity.

Calibrating Expert Opinions

We need to recognize the importance of expert knowledge in QRM activities while also acknowledging the potential for subjectivity and bias in expert judgments. We need to ensure we:

  • Implement formal processes for expert opinion elicitation
  • Use techniques to calibrate expert judgments, especially when estimating probabilities
  • Provide training on common cognitive biases and their impact on risk assessment
  • Employ diverse teams to counteract individual biases
  • Regularly review risk assessment outputs for signs of bias or inconsistencies

Calibration techniques may include:

  • Structured elicitation protocols that break down complex judgments into more manageable components
  • Feedback and training to help experts align their subjective probability estimates with actual frequencies of events
  • Using multiple experts and aggregating their judgments through methods like Cooke’s classical model
  • Employing facilitation techniques to mitigate groupthink and encourage independent thinking

By calibrating expert opinions, organizations can leverage valuable expertise while minimizing subjectivity in risk assessments.
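One simple calibration check is to compare the coverage an expert claims with the coverage actually observed. The sketch below scores hypothetical 90% intervals (5% and 95% quantiles) against realized values; the data are invented for illustration. An observed hit rate far below 90% signals overconfidence, which is exactly the feedback a calibration program should surface.

```python
# Each tuple: (stated 5% quantile, stated 95% quantile, realized value).
# These elicitation results are illustrative, not real data.
judgments = [
    (10, 20, 15), (5, 8, 9), (100, 150, 120), (30, 40, 33),
    (2, 4, 3), (50, 60, 70), (12, 18, 14), (7, 9, 8),
    (200, 260, 210), (1, 3, 2),
]

def interval_hit_rate(assessments):
    """Fraction of realized values that fell inside the stated 90% interval."""
    hits = sum(low <= actual <= high for low, high, actual in assessments)
    return hits / len(assessments)

rate = interval_hit_rate(judgments)
print(f"Stated coverage: 90%, observed coverage: {rate:.0%}")
```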

Utilizing Cooke’s Classical Model

Cooke’s Classical Model is a rigorous method for evaluating and combining expert judgments to quantify uncertainty. Here are the key steps for using the Classical Model to evaluate expert judgment:

1. Select and calibrate experts:
  • Choose 5-10 experts in the relevant field
  • Have experts assess uncertain quantities (“calibration questions”) for which true values are known or will be known soon
  • These calibration questions should be from the experts’ domain of expertise
2. Elicit expert assessments:
  • Have experts provide probabilistic assessments (usually 5%, 50%, and 95% quantiles) for both calibration questions and questions of interest
  • Document experts’ reasoning and rationales
3. Score expert performance:
  • Evaluate experts on two measures:
    a) Statistical accuracy: How well their probabilistic assessments match the true values of calibration questions
    b) Informativeness: How precise and focused their uncertainty ranges are
4. Calculate performance-based weights:
  • Derive weights for each expert based on their statistical accuracy and informativeness scores
  • Experts performing poorly on calibration questions receive little or no weight
5. Combine expert assessments:
  • Use the performance-based weights to aggregate experts’ judgments on the questions of interest
  • This creates a “Decision Maker” combining the experts’ assessments
6. Validate the combined assessment:
  • Evaluate the performance of the weighted combination (“Decision Maker”) using the same scoring as for individual experts
  • Compare to equal-weight combination and best-performing individual experts
7. Conduct robustness checks:
  • Perform cross-validation by using subsets of calibration questions to form weights
  • Assess how well performance on calibration questions predicts performance on questions of interest

The Classical Model aims to create an optimal aggregate assessment that outperforms both equal-weight combinations and individual experts. By using objective performance measures from calibration questions, it provides a scientifically defensible method for evaluating and synthesizing expert judgment under uncertainty.
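The weighting logic can be sketched in Python. This is a deliberately simplified stand-in for the Classical Model: the real method uses a chi-square calibration statistic and Shannon relative information, while this sketch uses interval hit rate and interval width as proxies, plus a hard calibration cutoff. All expert data are invented.

```python
# Known answers to the calibration questions (illustrative values).
true_values = [12, 45, 3, 80]

# Per expert: (5% quantile, 95% quantile) for each calibration question.
experts = {
    "expert_A": [(10, 15), (40, 50), (2, 5), (70, 90)],   # wide but accurate
    "expert_B": [(13, 14), (44, 46), (1, 2), (79, 81)],   # narrow, misses twice
}

def calibration(intervals):
    """Proxy for statistical accuracy: hit rate of the stated 90% intervals."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, true_values))
    return hits / len(true_values)

def informativeness(intervals):
    """Proxy for informativeness: inverse of the mean interval width."""
    return 1 / (sum(hi - lo for lo, hi in intervals) / len(intervals))

def performance_weight(intervals, cutoff=0.75):
    """Experts below the calibration cutoff get zero weight, echoing Cooke's model."""
    cal = calibration(intervals)
    return cal * informativeness(intervals) if cal >= cutoff else 0.0

raw = {name: performance_weight(iv) for name, iv in experts.items()}
total = sum(raw.values())
weights = {name: score / total for name, score in raw.items()}
print(weights)  # expert_B's overconfident, miss-prone intervals earn zero weight
```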

Using Data to Support Decisions

ICH Q9(R1) emphasizes the importance of basing risk management decisions on scientific knowledge and data. The guideline encourages organizations to:

  • Develop robust knowledge management systems to capture and maintain product and process knowledge
  • Create standardized repositories for technical data and information
  • Implement systems to collect and convert data into usable knowledge
  • Gather and analyze relevant data to support risk-based decisions
  • Use quantitative methods where feasible, such as statistical models or predictive analytics

Specific approaches for using data in QRM may include:

  • Analyzing historical data on process performance, deviations, and quality issues to inform risk assessments
  • Employing statistical process control and process capability analysis to evaluate and monitor risks
  • Utilizing data mining and machine learning techniques to identify patterns and potential risks in large datasets
  • Implementing real-time data monitoring systems to enable proactive risk management
  • Conducting formal data quality assessments to ensure decisions are based on reliable information

Digitalization and emerging technologies can support data-driven decision making, but remember that validation requirements for these technologies should not be overlooked.

Improving Risk Assessment Tools

The design of risk assessment tools plays a critical role in minimizing subjectivity. Tools with well-defined scoring criteria and clear guidance on interpreting results can reduce variability in how risks are evaluated. For example, using quantitative methods where feasible—such as statistical models or predictive analytics—can provide more objective insights compared to qualitative scoring systems.

Organizations should also validate their tools periodically to ensure they remain fit-for-purpose and aligned with current regulatory expectations.
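As a sketch of what "well-defined scoring criteria" can look like, the example below anchors severity to named patient-impact categories and occurrence to a measured deviation rate, so two assessors facing the same facts produce the same score. All categories and thresholds are invented for illustration, not regulatory values.

```python
# Severity anchored to explicit patient-impact definitions, not opinion.
SEVERITY = {
    "no_patient_impact": 1,
    "reversible_effect": 2,
    "serious_harm": 3,
}

def occurrence_score(events, batches):
    """Occurrence anchored to a measured frequency rather than gut feel."""
    rate = events / batches
    if rate < 0.01:
        return 1
    if rate < 0.05:
        return 2
    return 3

def risk_score(severity_category, events, batches):
    """Risk score = severity x occurrence, both from documented criteria."""
    return SEVERITY[severity_category] * occurrence_score(events, batches)

# 2 relevant deviations in 150 batches, reversible patient effect:
print(risk_score("reversible_effect", 2, 150))  # 2 * 2 = 4
```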

Leverage Good Risk Questions

A well-formulated risk question can significantly help reduce subjectivity in quality risk management (QRM) activities. Here’s how a good risk question contributes to reducing subjectivity:

Clarity and Focus

A good risk question provides clarity and focus for the risk assessment process. By clearly defining the scope and context of the risk being evaluated, it helps align all participants on what specifically needs to be assessed. This alignment reduces the potential for individual interpretations and subjective assumptions about the risk scenario.

Specific and Measurable Terms

Effective risk questions use specific and measurable terms rather than vague or ambiguous language. For example, instead of asking “What are the risks to product quality?”, a better question might be “What are the potential causes of out-of-specification dissolution results for Product X in the next 6 months?”. The specificity in the latter question helps anchor the assessment in objective, measurable criteria.

Factual Basis

A well-crafted risk question encourages the use of factual information and data rather than opinions or guesses. It should prompt the risk assessment team to seek out relevant data, historical information, and scientific knowledge to inform their evaluation. This focus on facts and evidence helps minimize the influence of personal biases and subjective judgments.

Standardized Approach

Using a consistent format for risk questions across different assessments promotes a standardized approach to risk identification and analysis. This consistency reduces variability in how risks are framed and evaluated, thereby decreasing the potential for subjective interpretations.

Objective Criteria

Good risk questions often incorporate or imply objective criteria for risk evaluation. For instance, a question like “What factors could lead to a deviation from the acceptable range of 5-10% for impurity Y?” sets clear, objective parameters for the assessment, reducing the room for subjective interpretation of what constitutes a significant risk.

Promotes Structured Thinking

Well-formulated risk questions encourage structured thinking about potential hazards, their causes, and consequences. This structured approach helps assessors focus on objective factors and causal relationships rather than relying on gut feelings or personal opinions.

Facilitates Knowledge Utilization

A good risk question should prompt the assessment team to utilize available knowledge effectively. It encourages the team to draw upon relevant data, past experiences, and scientific understanding, thereby grounding the assessment in objective information rather than subjective impressions.

By crafting risk questions that embody these characteristics, QRM practitioners can significantly reduce the subjectivity in risk assessments, leading to more reliable, consistent, and scientifically sound risk management decisions.

Fostering a Culture of Continuous Improvement

Reducing subjectivity in QRM is an ongoing process that requires a commitment to continuous improvement. Organizations should regularly review their QRM practices to identify areas for enhancement and incorporate feedback from stakeholders. Investing in training programs that build competencies in risk assessment methodologies and decision-making frameworks is essential for sustaining progress.

Moreover, fostering a culture that values transparency, collaboration, and accountability can empower teams to address subjectivity proactively. Encouraging open discussions about uncertainties or disagreements during risk assessments can lead to more robust outcomes.

Conclusion

The revisions introduced in ICH Q9(R1) represent a significant step forward in addressing long-standing challenges associated with subjectivity in QRM. By leveraging knowledge management, implementing structured decision-making processes, addressing cognitive biases, enhancing formality levels appropriately, and improving risk assessment tools, organizations can align their practices with the updated guidelines while ensuring more reliable and science-based outcomes.

It has been two years; it is long past time to be addressing these changes in your risk management process and quality system.

Ultimately, reducing subjectivity not only strengthens compliance with regulatory expectations but also enhances the quality of pharmaceutical products and safeguards patient safety—a goal that lies at the heart of effective Quality Risk Management.

Multi-Criteria Decision-Making to Drive Risk Control

To be honest, too often we perform a risk assessment not to make a decision but to justify a decision that has already been made. The risk assessment may help define a few additional action items and determine how rigorous to be about a few things, but it makes little impact on the already-decided path forward. This is poor risk management and decision-making.

For highly important decisions with high uncertainty or complexity, it is useful to consider the options/alternatives that exist and assess the benefits and risks of each before deciding on a path forward. Thoroughly identifying options/alternatives and assessing the benefits and risks of each can help the decision-making process and ultimately reduce risk.

An effective, highly structured decision-making process can help answer the question, “How can we compare the consequences of the various options before deciding?”

The most challenging risk decisions are characterized by having several different, important things to consider in an environment where there are often multiple stakeholders and, often, multiple decision-makers.

In Multi-Criteria Decision-Making (MCDM), the primary objective is the structured consideration of the available alternatives (options) for achieving the objectives in order to make the most informed decision, leading to the best outcome.

In a Quality Risk Management context, the decision-making concerns making informed decisions in the face of uncertainty about risks related to the quality (and/or availability) of medicines.

Key Concepts of MCDM

1. Conflicting Criteria: MCDM deals with situations where criteria conflict. For example, when purchasing a car, one might need to balance cost, comfort, safety, and fuel economy, which often do not align perfectly.
2. Explicit Evaluation: Unlike intuitive decision-making, MCDM involves a structured approach to explicitly evaluate multiple criteria, which is crucial when the stakes are high, such as deciding whether to build additional manufacturing capacity for a product under development.
3. Types of Problems:
  • Multiple-Criteria Evaluation Problems: These involve a finite number of alternatives known at the beginning. The goal is to find the best alternative or a set of good alternatives based on their performance across multiple criteria.
  • Multiple-Criteria Design Problems: In these problems, alternatives are not explicitly known and must be found by solving a mathematical model. The number of alternatives can be very large, often growing exponentially.
4. Preference Information: The methods used in MCDM often require preference information from decision-makers (DMs) to differentiate between solutions. This can be provided at various stages of the decision-making process; prior articulation of preferences, for example, transforms the problem into a single-criterion problem.

MCDM addresses risk and uncertainty by explicitly weighing criteria and the trade-offs between them. MCDM differs from traditional decision-making methods in several key ways:

1. Explicit Consideration of Multiple Criteria: Traditional decision-making often focuses on a single criterion like cost or profit. MCDM explicitly considers multiple criteria simultaneously, which may be conflicting, such as cost, quality, safety, and environmental impact. This allows for a more comprehensive evaluation of alternatives.
2. Structured Approach: MCDM provides a structured framework for evaluating alternatives against multiple criteria rather than relying solely on intuition or experience. It involves techniques like weighting criteria, scoring alternatives, and aggregating scores to rank or choose the best option.
3. Transparency and Consistency: MCDM methods aim to make decision-making more transparent, consistent, and less susceptible to individual biases. The criteria, weights, and evaluation process are explicitly defined, allowing for better justification and reproducibility of decisions.
4. Quantitative Analysis: Many MCDM methods employ quantitative techniques, such as mathematical models, optimization algorithms, and decision support systems. This enables a more rigorous and analytical approach compared to traditional qualitative methods.
5. Handling Complexity: MCDM is particularly useful for complex decision problems involving many alternatives, conflicting objectives, and multiple stakeholders. Traditional methods may struggle to handle such complexity effectively.
6. Stakeholder Involvement: Some MCDM methods, like the Analytic Hierarchy Process (AHP), facilitate the involvement of multiple stakeholders and the incorporation of their preferences and judgments. This can lead to more inclusive and accepted decisions.
7. Trade-off Analysis: MCDM techniques often involve analyzing trade-offs between criteria, helping decision-makers understand the implications of prioritizing certain criteria over others. This can lead to more informed and balanced decisions.

While traditional decision-making methods rely heavily on experience, intuition, and qualitative assessments, MCDM provides a more structured, analytical, and comprehensive approach, particularly in complex situations with conflicting criteria.

Multi-Criteria Decision-Making (MCDM) is typically performed following these steps:

1. Define the Decision Problem: Clearly state the problem or decision to be made, identify the stakeholders involved, and determine the desired outcome or objective.
2. Establish Criteria: Identify the relevant criteria that will be used to evaluate the alternatives. These criteria should be measurable, independent, and aligned with the objectives. Involve stakeholders in selecting and validating the criteria.
3. Generate Alternatives: Develop a comprehensive list of potential alternatives or options that could solve the problem. Use techniques like brainstorming, benchmarking, or scenario analysis to generate diverse alternatives.
4. Gather Performance Data: Assess how each alternative performs against each criterion. This may involve quantitative data, expert judgments, or qualitative assessments.
5. Assign Criteria Weights: By assigning weights, determine each criterion’s relative importance or priority. This can be done through methods like pairwise comparisons, swing weighting, or direct rating. Stakeholder input is crucial here.
6. Apply MCDM Method: Choose an appropriate MCDM technique based on the problem’s nature and the available data. Some popular methods include: Analytic Hierarchy Process (AHP); Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS); ELimination and Choice Expressing REality (ELECTRE); Preference Ranking Organization METHod for Enrichment of Evaluations (PROMETHEE); and Multi-Attribute Utility Theory (MAUT).
7. Evaluate and Rank Alternatives: Apply the chosen MCDM method to evaluate and rank the alternatives based on their performance against the weighted criteria. This may involve mathematical models, software tools, or decision support systems.
8. Sensitivity Analysis: Perform sensitivity analysis to assess the robustness of the results and understand how changes in criteria weights or performance scores might affect the ranking or choice of alternatives.
9. Make the Decision: Based on the MCDM analysis, select the most preferred alternative or develop an action plan based on the ranking of alternatives. Involve stakeholders in the final decision-making process.
10. Monitor and Review: Implement the chosen alternative and monitor its performance. Review the decision periodically, and if necessary, repeat the MCDM process to adapt to changing circumstances or new information.

MCDM is an iterative process; stakeholder involvement, transparency, and clear communication are crucial. Additionally, the specific steps and techniques may vary depending on the problem’s complexity, the data’s availability, and the decision-maker’s preferences.
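Steps 4 through 8 can be sketched with a simple weighted-sum model, including a quick sensitivity check on the weights. The alternatives, criteria, scores, and weights below are all invented for illustration.

```python
# Criteria weights elicited from stakeholders (step 5); must sum to 1.
criteria_weights = {"quality_risk": 0.5, "cost": 0.3, "time_to_implement": 0.2}

# Performance scores on a common 1-5 scale, 5 = best (step 4).
alternatives = {
    "upgrade_existing_line": {"quality_risk": 4, "cost": 4, "time_to_implement": 5},
    "build_new_facility":    {"quality_risk": 5, "cost": 2, "time_to_implement": 1},
    "outsource_to_cmo":      {"quality_risk": 3, "cost": 5, "time_to_implement": 4},
}

def weighted_score(scores, weights):
    """Weighted-sum aggregation (steps 6-7)."""
    return sum(scores[c] * w for c, w in weights.items())

ranking = sorted(alternatives,
                 key=lambda a: weighted_score(alternatives[a], criteria_weights),
                 reverse=True)
print(ranking)

# Step 8, sensitivity analysis: does the top choice survive a shift in weights?
shifted = {"quality_risk": 0.7, "cost": 0.2, "time_to_implement": 0.1}
print(sorted(alternatives,
             key=lambda a: weighted_score(alternatives[a], shifted),
             reverse=True))
```

With these numbers, the top-ranked alternative is unchanged under the shifted weights while the second and third places swap, which is exactly the kind of robustness insight step 8 is meant to surface.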

Popular MCDM Techniques

Analytic Hierarchy Process (AHP)
  • Description: A structured technique for organizing and analyzing complex decisions, using mathematics and psychology.
  • Application: Widely used in business, government, and healthcare for prioritizing and decision-making.
  • Key Features: Pairwise comparisons, consistency checks, and hierarchical structuring of criteria and alternatives.

Technique for Order Preference by Similarity to Ideal Solution (TOPSIS)
  • Description: Based on the concept that the chosen alternative should have the shortest geometric distance from the positive ideal solution and the longest geometric distance from the negative ideal solution.
  • Application: Frequently used in engineering, management, and human resource management for ranking and selection problems.
  • Key Features: Compensatory aggregation, normalization of criteria, and calculation of geometric distances.

Elimination and Choice Expressing Reality (ELECTRE)
  • Description: An outranking method that compares alternatives by considering both qualitative and quantitative criteria. It uses a pairwise comparison approach to eliminate less favorable alternatives.
  • Application: Commonly used in project selection, resource allocation, and environmental management.
  • Key Features: Use of concordance and discordance indices, handling of both qualitative and quantitative data, and ability to deal with incomplete rankings.

Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE)
  • Description: An outranking method that uses preference functions to compare alternatives based on multiple criteria. It provides a complete ranking of alternatives.
  • Application: Applied in various fields such as logistics, finance, and environmental management.
  • Key Features: Preference functions, visual interactive modules (GAIA), and sensitivity analysis.

Multi-Attribute Utility Theory (MAUT)
  • Description: Involves converting multiple criteria into a single utility function, which is then used to evaluate and rank alternatives. It takes into account the decision-maker’s risk preferences and uncertainties.
  • Application: Used in complex decision-making scenarios involving risk and uncertainty, such as policy analysis and strategic planning.
  • Key Features: Utility functions, probabilistic weights, and handling of uncertainty.
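To make the TOPSIS entry concrete, here is a bare-bones implementation of its distance-to-ideal logic for three invented alternatives scored on three benefit criteria (higher is better). Real applications also handle cost-type criteria and more elaborate normalization schemes.

```python
import math

# Illustrative decision matrix: rows are alternatives, columns are
# three benefit criteria; weights (sum to 1) are invented.
weights = [0.5, 0.3, 0.2]
matrix = {
    "option_A": [4.0, 4.0, 5.0],
    "option_B": [5.0, 2.0, 1.0],
    "option_C": [3.0, 5.0, 4.0],
}

# 1. Vector-normalize each criterion column, then apply the weights.
cols = list(zip(*matrix.values()))
norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
weighted = {a: [w * v / n for v, w, n in zip(row, weights, norms)]
            for a, row in matrix.items()}

# 2. Ideal (best per criterion) and anti-ideal (worst per criterion) points.
wcols = list(zip(*weighted.values()))
ideal = [max(c) for c in wcols]
anti = [min(c) for c in wcols]

def dist(p, q):
    """Euclidean (geometric) distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

# 3. Closeness coefficient: higher = closer to ideal, farther from anti-ideal.
closeness = {a: dist(v, anti) / (dist(v, ideal) + dist(v, anti))
             for a, v in weighted.items()}
print(sorted(closeness, key=closeness.get, reverse=True))
```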

Self-Checking in Work-As-Done

Self-checking is one of the most effective tools we can teach and use. Rooted in the four aspects of risk-based thinking (anticipate, monitor, respond, and learn), it refers to the procedures and checks that employees perform as part of their routine tasks to ensure the quality and accuracy of their work. This practice is often implemented in industries where precision is critical, and errors can lead to significant consequences. For instance, in manufacturing or engineering, workers might perform self-checks to verify that their work meets the required specifications before moving on to the next production stage.

A proactive approach enhances the reliability, safety, and quality of various systems and practices by allowing for immediate detection and correction of errors, thereby preventing potential failures or flaws from escalating into more significant issues.

The memory aid STAR (stop, think, act, review) helps the user recall the thoughts and actions associated with self-checking.

1. Stop – Just before conducting a task, pause to:
  • Eliminate distractions.
  • Focus attention on the task.
2. Think – Understand what will happen when the action is performed.
  • Verify the action is appropriate.
  • Recall the critical parameters and the action’s expected result(s).
  • Consider contingencies to mitigate harm if an unexpected result occurs.
  • If there is any doubt, STOP and get help.
3. Act – Perform the task per work-as-prescribed.
4. Review – Verify that the expected result is obtained.
  • Verify the desired change in critical parameters.
  • Stop work if criteria are not met.
  • Perform the contingency if an unexpected result occurs.

Risk-Based Thinking

Risk-based thinking is a crucial component of modern quality management systems and consists of four key aspects: anticipate, monitor, respond, and learn. Each aspect ensures an organization can effectively manage and mitigate risks, enhancing overall performance and reliability.

Anticipate

Anticipating risks involves proactively identifying and analyzing potential risks that could impact the organization’s operations or objectives. This step is about foreseeing problems before they occur and planning how to address them. It requires a thorough understanding of the organization’s processes, the external and internal factors that could affect these processes, and the potential consequences of various risks. By anticipating risks, organizations can prepare more effectively and prevent many issues from occurring.

Monitor

Monitoring involves continuously observing and tracking the operational environment to detect risk indicators early. This ongoing process helps catch deviations from expected outcomes or standards, which could indicate the emergence of a risk. Effective monitoring relies on establishing metrics that help to quickly and accurately identify when things are starting to veer off course. This real-time data collection is crucial for enabling timely responses to potential threats.

Respond

Responding to risks is about taking appropriate actions to manage or mitigate identified risks based on their severity and potential impact. This step involves implementing the planned risk responses that were developed during the anticipation phase. The effectiveness of these responses often depends on the speed and decisiveness of the actions taken. Responses can include adjusting processes, reallocating resources, or activating contingency plans. The goal is to minimize the negative impact on the organization and its stakeholders.

Learn

Learning from the management of risks is a critical component that closes the loop of risk-based thinking. This aspect involves analyzing the outcomes of risk responses and understanding what worked well and what did not. Learning from these experiences is essential for continuous improvement. It helps organizations refine risk management processes, improve response strategies, and better prepare for future risks. This iterative learning process ensures that risk management efforts are increasingly effective over time.

The four aspects of risk-based thinking—anticipate, monitor, respond, and learn—form a continuous cycle that helps organizations manage uncertainties proactively. This approach protects the organization from potential downsides and enables it to seize opportunities that arise from a well-understood risk landscape. Organizations can enhance their resilience and adaptability by embedding these practices into everyday operations.

        Implementing Risk-Based Thinking

        1. Understand the Concept of Risk-Based Thinking

        Risk-based thinking involves a proactive approach to identifying, analyzing, and addressing risks. This mindset should be ingrained in the organization’s culture and used as a basis for decision-making.

        2. Identify Risks and Opportunities

        Identify potential risks and opportunities. This can be achieved through various methods such as SWOT analysis, brainstorming sessions, and process mapping. It’s crucial to involve people at all levels of the organization since they can provide diverse perspectives on potential risks and opportunities.

        3. Analyze and Prioritize Risks

        Once risks and opportunities are identified, they should be analyzed to understand their potential impact and likelihood. This analysis will help prioritize which risks need immediate attention and which opportunities should be pursued.

        4. Plan and Implement Responses

        After prioritizing, develop strategies to address these risks and opportunities. Plans should include preventive measures for risks and proactive steps to seize opportunities. Integrating these plans into the organization’s overall strategy and daily operations is important to ensure they are effective.

        5. Monitor and Review

        Implementing risk-based thinking is not a one-time activity but an ongoing process. Regular monitoring and reviewing of risks, opportunities, and the effectiveness of responses are crucial. This can be done through regular audits, performance evaluations, and feedback mechanisms. Adjustments should be made based on these reviews to improve the risk management process.

        6. Learn and Improve

        Organizations should learn from their experiences in managing risks and opportunities. This involves analyzing what worked well and what didn’t and using this information to improve future risk management efforts. Continuous improvement should be a key goal, aligning with the Plan-Do-Check-Act (PDCA) cycle.

        7. Documentation and Compliance

        Maintaining proper documentation is essential for tracking and managing risk-based thinking activities. Documents such as risk registers, action plans, and review reports should be updated and readily available.

        8. Training and Culture

        Training and cultural adaptation are necessary to implement risk-based thinking effectively. All employees should be trained on the principles of risk-based thinking and how to apply them in their roles. Creating a culture encouraging open communication about risks and supporting risk-taking within defined limits is also vital.

        Evaluating Controls as Part of Risk Management

        When I teach an introductory risk management class, I usually use an icebreaker of “What is the riskiest activity you can think of doing. Inevitably you will get some version of skydiving, swimming with sharks, jumping off bridges. This activity is great because it starts all conversations around likelihood and severity. At heart, the question brings out the concept of risk important activities and the nature of controls.

        The things people think of, such as skydiving, are great examples of activities that are surrounded by activities that control risk. The very activity is based on accepting reducing risk as low as possible and then proceeding in the safest possible pathway. These risk important activities are the mechanism just before a critical step that:

        1. Ensure the appropriate transfer of information and skill
        2. Ensure the appropriate number of actions to reduce risk
        3. Influence the presence or effectiveness of barriers
        4. Influence the ability to maintain positive control of the moderation of hazards

        Risk important activities is a concept important to safety-thought and are at the center of a lot of human error reduction tools and practices. Risk important activities are all about thinking through the right set of controls, building them into the procedure, and successfully executing them before reaching the critical step of no return. Checklists are a great example of this mindset at work, but there are a ton of ways of doing them.

        In the hospital they use a great thought process, “Five rights of Safe Medication Practices” that are: 1) right patient, 2) right drug, 3) right dose, 4) right route, and 5) right time. Next time you are getting medication in the doctor’s office or hospital evaluate just what your caregiver is doing and how it fits into that process. Those are examples of risk important activities.

        Assessing controls during risk assessment

        Risk is affected by the overall effectiveness of any controls that are in place.

        The key aspects of controls are:

        • the mechanism by which the controls are intended to modify risk
        • whether the controls are in place, are capable of operating as intended, and are achieving the expected results
        • whether there are shortcomings in the design of controls or the way they are applied
        • whether there are gaps in controls
        • whether controls function independently, or if they need to function collectively to be effective
        • whether there are factors, conditions, vulnerabilities or circumstances that can reduce or eliminate control effectiveness including common cause failures
        • whether controls themselves introduce additional risks.

        A risk can have more than one control and controls can affect more than one risk.

        We always want to distinguish between controls that change likelihood, consequences or both, and controls that change how the burden of risk is shared between stakeholders

        Any assumptions made during risk analysis about the actual effect and reliability of controls should be validated where possible, with a particular emphasis on individual or combinations of controls that are assumed to have a substantial modifying effect. This should take into account information gained through routine monitoring and review of controls.

        Risk Important Activities, Critical Steps and Process

        Critical steps are the way we meet our critical-to-quality requirements. The activities that ensure our product/service meets the needs of the organization.

        These critical steps are the points of no-return, the point where the work-product is transformed into something else. Risk important activities are what we do to remove the danger of executing that critical step.

        Beyond that critical step, you have rejection or rework. When I am cooking there is a lot of prep work which can be a mixture of critical steps, from which there is no return. I break the egg wrong and get eggshells in my batter, there is a degree of rework necessary. This is true for all our processes.

        The risk-based approach to the process is to understand the critical steps and mitigate controls.

        We are thinking through the following:

        • Critical Step: The action that triggers irreversibility. Think in terms of critical-to-quality attributes.
        • Input: What came before in the process
        • Output: The desired result (positive) or the possible difficulty (negative)
        • Preconditions: Technical conditions that must exist before the critical step
        • Resources: What is needed for the critical step to be completed
        • Local factors: Things that could influence the critical step. When human beings are involved, this is usually what can influence the performer’s thinking and actions before and during the critical step
        • Defenses: Controls, barriers and safeguards

        Risk Management Mindset

        Good risk management requires a mindset that includes the following attributes:

        • Expect to be surprised: Our processes are usually underspecified and there is a lot of hidden knowledge. Risk management serves to interrogate the unknowns
        • Possess a chronic sense of unease: There is no such thing as perfect processes, procedures, training, design, planning. Past performance is not a guarantee of future success.
        • Bend, not break: Everything is dynamic, especially risk. Quality comes from adaptability.
        • Learn: Learn from what goes well, from mistakes, have a learning culture
        • Embrace humility: No one knows everything, bring those in who know what you do not.
        • Acknowledge differences between work-as-imagined and work-as-done: Work to reduce the differences.
        • Value collaboration: Diversity of input
        • Drive out subjectivity: Understand how opinions are formed and decisions are made.
        • Systems Thinking: Performance emerges from complex, interconnected and interdependent systems and their components

        The Role of Monitoring

        One cannot control risk, or even successfully identify it unless a system is able flexibly to monitor both its own performance (what happens inside the system’s boundary) and what happens in the environment (outside the system’s boundary). Monitoring improves the ability to cope with possible risks

        When performing the risk assessment, challenge existing monitoring and ensure that the right indicators are in place. But remember, monitoring itself is a low-effectivity control.

        Ensure that there are leading indicators, which can be used as valid precursors for changes and events that are about to happen.

        For each monitoring control, as yourself the following:

        IndicatorHow have the indicators been defined? (By analysis, by tradition, by industry consensus, by the regulator, by international standards, etc.)
        RelevanceWhen was the list created? How often is it revised? On which basis is it revised? Who is responsible for maintaining the list?
        TypeHow many of the indicators are of the ‘leading,’ type and how many are of the lagging? Do indicators refer to single or aggregated measurements?
        ValidityHow is the validity of an indicator established (regardless of whether it is leading or lagging)? Do indicators refer to an articulated process model, or just to ‘common sense’?
        DelayFor lagging indicators, how long is the typical lag? Is it acceptable?
        Measurement typeWhat is the nature of the measurements? Qualitative or quantitative? (If quantitative, what kind of scaling is used?)
        Measurement frequencyHow often are the measurements made? (Continuously, regularly, every now and then?)
        AnalysisWhat is the delay between measurement and analysis/interpretation? How many of the measurements are directly meaningful and how many require analysis of some kind? How are the results communicated and used?
        StabilityAre the measured effects transient or permanent?
        Organization SupportIs there a regular inspection scheme or -schedule? Is it properly resourced? Where does this measurement fit into the management review?

        Key risk indicators come into play here.

        Hierarchy of Controls

        Not every control is the same. This principle applies to both current control and planning future controls.