Reducing Subjectivity in Quality Risk Management: Aligning with ICH Q9(R1)

In a previous post, I discussed how overcoming subjectivity in risk management and decision-making requires fostering a culture of quality and excellence. This remains an issue worth evaluating regularly and pushing to improve.

The revised ICH Q9(R1) guideline, finalized in January 2023, introduces critical updates to Quality Risk Management (QRM) practices, emphasizing the need to address subjectivity, enhance formality, improve risk-based decision-making, and manage product availability risks. These revisions aim to ensure that QRM processes are more science-driven, knowledge-based, and effective in safeguarding product quality and patient safety. Two years later, it is important to continue building on key strategies for reducing subjectivity in QRM and aligning with the updated requirements.

Understanding Subjectivity in QRM

Subjectivity in QRM arises from personal opinions, biases, heuristics, or inconsistent interpretations of risks by stakeholders. This can impact every stage of the QRM process—from hazard identification to risk evaluation and mitigation. The revised ICH Q9(R1) explicitly addresses this issue by introducing a new subsection, “Managing and Minimizing Subjectivity,” which emphasizes that while subjectivity cannot be entirely eliminated, it can be controlled through structured approaches.

The guideline highlights that subjectivity often stems from poorly designed scoring systems, differing perceptions of hazards and risks among stakeholders, and cognitive biases. To mitigate these challenges, organizations must adopt robust strategies that prioritize scientific knowledge and data-driven decision-making.

Strategies to Reduce Subjectivity

Leveraging Knowledge Management

ICH Q9(R1) underscores the importance of knowledge management as a tool to reduce uncertainty and subjectivity in risk assessments. Effective knowledge management involves systematically capturing, organizing, and applying internal and external knowledge to inform QRM activities. This includes maintaining centralized repositories for technical data, fostering real-time information sharing across teams, and learning from past experiences through structured lessons-learned processes.

By integrating knowledge management into QRM, organizations can ensure that decisions are based on comprehensive data rather than subjective estimations. For example, using historical data on process performance or supplier reliability can provide objective insights into potential risks.

To integrate knowledge management (KM) more effectively into quality risk management (QRM), organizations can implement several strategies to ensure decisions are based on comprehensive data rather than subjective estimations:

Establish Robust Knowledge Repositories

Create centralized, easily accessible repositories for storing and organizing historical data, lessons learned, and best practices. These repositories should include:

  • Process performance data
  • Supplier reliability metrics
  • Deviation and CAPA records
  • Audit findings and inspection observations
  • Technology transfer documentation

By maintaining these repositories, organizations can quickly access relevant historical information when conducting risk assessments.
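
To make this concrete, here is a minimal sketch in Python of how a team might pull historical deviation and CAPA records from such a repository at the start of a risk assessment. The schema, table, and column names are hypothetical; a real implementation would query the organization's validated quality system rather than an in-memory demo database.

```python
import sqlite3
from collections import Counter

# Illustrative schema: stands in for the organization's central deviation/CAPA repository.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE deviations
                (product TEXT, opened_on TEXT, severity TEXT, root_cause TEXT, capa_id TEXT)""")
conn.executemany(
    "INSERT INTO deviations VALUES (?, ?, ?, ?, ?)",
    [("Product X", "2023-04-02", "major", "operator error", "CAPA-101"),
     ("Product X", "2024-01-15", "minor", "filter clogging", "CAPA-117"),
     ("Product X", "2024-09-30", "major", "filter clogging", "CAPA-129")],
)

def deviation_history(connection, product, since="2020-01-01"):
    """Pull historical deviation records for a product to ground a risk assessment."""
    query = """SELECT opened_on, severity, root_cause, capa_id
               FROM deviations WHERE product = ? AND opened_on >= ? ORDER BY opened_on"""
    return connection.execute(query, (product, since)).fetchall()

# Summarize recurrence by root cause before the assessment starts
records = deviation_history(conn, "Product X")
by_cause = Counter(root_cause for _, _, root_cause, _ in records)
for cause, count in by_cause.most_common():
    print(f"{cause}: {count} deviation(s) on record")
```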

Implement Knowledge Mapping

Conduct knowledge mapping exercises to identify key sources of knowledge within the organization, then use the resulting knowledge maps to guide risk assessment teams to relevant information and expertise.

Develop Data Analytics Capabilities

Invest in data analytics tools and capabilities to extract meaningful insights from historical data. For example:

  • Use statistical process control to identify trends in manufacturing performance
  • Apply machine learning algorithms to predict potential quality issues based on historical patterns
  • Utilize data visualization tools to present complex risk data in an easily understandable format

These analytics can provide objective, data-driven insights into potential risks and their likelihood of occurrence.
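
As an illustration of the first bullet, here is a minimal individuals-chart (SPC) sketch in Python that flags batches outside ±3-sigma control limits. The assay values are purely illustrative, and a validated statistics package would normally be used in practice.

```python
import statistics

def i_chart_signals(values):
    """Flag points outside ±3-sigma individuals-chart limits.
    Sigma is estimated from the average moving range (MR-bar / 1.128)."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = statistics.mean(moving_ranges) / 1.128  # d2 constant for subgroup size 2
    center = statistics.mean(values)
    ucl, lcl = center + 3 * sigma, center - 3 * sigma
    return [(i, v) for i, v in enumerate(values) if v > ucl or v < lcl]

# Illustrative assay results (% label claim) for consecutive batches
assay = [99.8, 100.1, 99.6, 100.3, 99.9, 100.0, 99.7, 100.2, 99.9, 100.1, 103.0, 100.0]
for idx, value in i_chart_signals(assay):
    print(f"Batch {idx}: {value} is outside the control limits - review for risk")
```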

Integrate KM into QRM Processes

Embed KM activities directly into QRM processes to ensure consistent use of available knowledge:

  • Include a knowledge gathering step at the beginning of risk assessments
  • Require risk assessment teams to document the sources of knowledge used in their analysis
  • Implement a formal process for capturing new knowledge generated during risk assessments

This integration helps ensure that all relevant knowledge is considered and that new insights are captured for future use.

Foster a Knowledge-Sharing Culture

Encourage a culture of knowledge sharing and collaboration within the organization:

  • Implement mentoring programs to facilitate the transfer of tacit knowledge
  • Establish communities of practice around key risk areas
  • Recognize and reward employees who contribute valuable knowledge to risk management efforts

By promoting knowledge sharing, organizations can tap into the collective expertise of their workforce to improve risk assessments.

Implementing Structured Risk-Based Decision-Making

The revised guideline introduces a dedicated section on risk-based decision-making, emphasizing the need for structured approaches that consider the complexity, uncertainty, and importance of decisions. Organizations should establish clear criteria for decision-making processes, define acceptable risk tolerance levels, and use evidence-based methods to evaluate options.

Structured decision-making tools can help standardize how risks are assessed and prioritized. Additionally, calibrating expert opinions through formal elicitation techniques can further reduce variability in judgments.

Addressing Cognitive Biases

Cognitive biases—such as overconfidence or anchoring—can distort risk assessments and lead to inconsistent outcomes. To address this, organizations should provide training on recognizing common biases and their impact on decision-making. Encouraging diverse perspectives within risk assessment teams can also help counteract individual biases.

For example, using cross-functional teams ensures that different viewpoints are considered when evaluating risks, leading to more balanced assessments. Regularly reviewing risk assessment outputs for signs of bias or inconsistencies can further enhance objectivity.

Enhancing Formality in QRM

ICH Q9(R1) introduces the concept of a “formality continuum,” which aligns the level of effort and documentation with the complexity and significance of the risk being managed. This approach allows organizations to allocate resources effectively by applying less formal methods to lower-risk issues while reserving rigorous processes for high-risk scenarios.

For instance, routine quality checks may require minimal documentation compared to a comprehensive risk assessment for introducing new manufacturing technologies. By tailoring formality levels appropriately, organizations can ensure consistency while avoiding unnecessary complexity.

Calibrating Expert Opinions

We need to recognize the importance of expert knowledge in QRM activities while also acknowledging the potential for subjectivity and bias in expert judgments. We need to ensure we:

  • Implement formal processes for expert opinion elicitation
  • Use techniques to calibrate expert judgments, especially when estimating probabilities
  • Provide training on common cognitive biases and their impact on risk assessment
  • Employ diverse teams to counteract individual biases
  • Regularly review risk assessment outputs for signs of bias or inconsistencies

Calibration techniques may include:

  • Structured elicitation protocols that break down complex judgments into more manageable components
  • Feedback and training to help experts align their subjective probability estimates with actual frequencies of events
  • Using multiple experts and aggregating their judgments through methods like Cooke’s classical model
  • Employing facilitation techniques to mitigate groupthink and encourage independent thinking

By calibrating expert opinions, organizations can leverage valuable expertise while minimizing subjectivity in risk assessments.
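
As a simple illustration of the aggregation idea (not a full elicitation protocol), the sketch below pools several experts' probability estimates for an event using performance-based weights; the estimates and weights shown are hypothetical.

```python
def pooled_probability(estimates, weights):
    """Linear opinion pool: weighted average of expert probability estimates."""
    total = sum(weights)
    return sum(p * w for p, w in zip(estimates, weights)) / total

# Hypothetical: three experts estimate the probability of a lot rejection due to impurity Y
expert_estimates = [0.10, 0.02, 0.05]
# Weights derived from prior calibration performance (better-calibrated experts weigh more)
calibration_weights = [0.5, 0.2, 0.3]

print(f"Pooled estimate: {pooled_probability(expert_estimates, calibration_weights):.3f}")
```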

Utilizing Cooke’s Classical Model

Cooke’s Classical Model is a rigorous method for evaluating and combining expert judgments to quantify uncertainty. Here are the key steps for using the Classical Model to evaluate expert judgment:

Select and calibrate experts:
  • Choose 5-10 experts in the relevant field
  • Have experts assess uncertain quantities (“calibration questions”) for which true values are known or will be known soon
  • These calibration questions should be from the experts’ domain of expertise

Elicit expert assessments:
  • Have experts provide probabilistic assessments (usually 5%, 50%, and 95% quantiles) for both calibration questions and questions of interest
  • Document experts’ reasoning and rationales

Score expert performance:
  • Evaluate experts on two measures:
    a) Statistical accuracy: how well their probabilistic assessments match the true values of calibration questions
    b) Informativeness: how precise and focused their uncertainty ranges are

Calculate performance-based weights:
  • Derive weights for each expert based on their statistical accuracy and informativeness scores
  • Experts performing poorly on calibration questions receive little or no weight

Combine expert assessments:
  • Use the performance-based weights to aggregate experts’ judgments on the questions of interest
  • This creates a “Decision Maker” combining the experts’ assessments

Validate the combined assessment:
  • Evaluate the performance of the weighted combination (“Decision Maker”) using the same scoring as for individual experts
  • Compare it to the equal-weight combination and the best-performing individual experts

Conduct robustness checks:
  • Perform cross-validation by using subsets of calibration questions to form weights
  • Assess how well performance on calibration questions predicts performance on questions of interest

The Classical Model aims to create an optimal aggregate assessment that outperforms both equal-weight combinations and individual experts. By using objective performance measures from calibration questions, it provides a scientifically defensible method for evaluating and synthesizing expert judgment under uncertainty.
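
The full scoring mathematics of the Classical Model is beyond the scope of this post, but the deliberately simplified sketch below illustrates the weighting idea: score each expert by how often the true values of the calibration questions fall inside their 5%-95% intervals, then normalize those scores into weights. The numbers are hypothetical, and the real model uses a likelihood-based calibration statistic plus an information score rather than a raw hit rate.

```python
def interval_hit_rate(assessments, true_values):
    """Fraction of calibration questions whose true value falls inside
    the expert's 5%-95% interval (a crude stand-in for statistical accuracy)."""
    hits = sum(1 for (lo, _med, hi), truth in zip(assessments, true_values)
               if lo <= truth <= hi)
    return hits / len(true_values)

def performance_weights(experts, true_values):
    """Normalize hit rates into weights; poorly calibrated experts get little weight."""
    scores = {name: interval_hit_rate(a, true_values) for name, a in experts.items()}
    total = sum(scores.values()) or 1.0
    return {name: score / total for name, score in scores.items()}

# Hypothetical calibration questions with known answers
true_values = [12.0, 0.8, 150.0]
experts = {
    "Expert A": [(10, 13, 16), (0.5, 0.7, 1.0), (120, 140, 180)],  # (5%, 50%, 95%) quantiles
    "Expert B": [(13, 14, 15), (0.1, 0.2, 0.3), (200, 220, 260)],
}
print(performance_weights(experts, true_values))
```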

Using Data to Support Decisions

ICH Q9(R1) emphasizes the importance of basing risk management decisions on scientific knowledge and data. The guideline encourages organizations to:

  • Develop robust knowledge management systems to capture and maintain product and process knowledge
  • Create standardized repositories for technical data and information
  • Implement systems to collect and convert data into usable knowledge
  • Gather and analyze relevant data to support risk-based decisions
  • Use quantitative methods where feasible, such as statistical models or predictive analytics

Specific approaches for using data in QRM may include:

  • Analyzing historical data on process performance, deviations, and quality issues to inform risk assessments
  • Employing statistical process control and process capability analysis to evaluate and monitor risks
  • Utilizing data mining and machine learning techniques to identify patterns and potential risks in large datasets
  • Implementing real-time data monitoring systems to enable proactive risk management
  • Conducting formal data quality assessments to ensure decisions are based on reliable information

Digitalization and emerging technologies can support data-driven decision making, but remember that validation requirements for these technologies should not be overlooked.
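
As one example of the last bullet above, here is a minimal data quality check in Python, with hypothetical field names, ranges, and thresholds, that screens a dataset for completeness, plausible values, and age before it feeds a risk assessment.

```python
from datetime import date

def assess_data_quality(records, required_fields, value_ranges, max_age_days=365):
    """Basic completeness, range, and timeliness checks before using a dataset
    in a risk assessment (fields, ranges, and thresholds are illustrative)."""
    issues = []
    for i, rec in enumerate(records):
        for field in required_fields:
            if rec.get(field) in (None, ""):
                issues.append((i, f"missing {field}"))
        for field, (lo, hi) in value_ranges.items():
            value = rec.get(field)
            if value is not None and not (lo <= value <= hi):
                issues.append((i, f"{field}={value} outside expected range"))
        if (date.today() - rec["sampled_on"]).days > max_age_days:
            issues.append((i, "record older than review horizon"))
    return issues

records = [
    {"batch": "B101", "assay": 99.5, "sampled_on": date(2024, 11, 3)},
    {"batch": "B102", "assay": None, "sampled_on": date(2022, 1, 15)},
]
print(assess_data_quality(records, ["batch", "assay"], {"assay": (95.0, 105.0)}))
```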

Improving Risk Assessment Tools

The design of risk assessment tools plays a critical role in minimizing subjectivity. Tools with well-defined scoring criteria and clear guidance on interpreting results can reduce variability in how risks are evaluated. For example, using quantitative methods where feasible—such as statistical models or predictive analytics—can provide more objective insights compared to qualitative scoring systems.

Organizations should also validate their tools periodically to ensure they remain fit-for-purpose and aligned with current regulatory expectations.
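
To make "well-defined scoring criteria" concrete, here is a sketch of an FMEA-style scoring function whose criteria are spelled out as measurable definitions, so that two assessors mapping the same evidence arrive at the same number. The criteria and scale are hypothetical, not a recommended standard.

```python
# Hypothetical, explicitly documented scoring criteria. Anchoring scores to
# measurable definitions (frequencies, detection methods, impact) reduces the
# room for individual interpretation compared with vague low/medium/high labels.
OCCURRENCE = {
    "more than once per 10 batches": 5,
    "once per 10-100 batches": 3,
    "less than once per 100 batches": 1,
}
DETECTION = {
    "no routine test detects it": 5,
    "detected by end-product testing only": 3,
    "detected in-process before impact": 1,
}
SEVERITY = {
    "potential patient harm": 5,
    "batch rejection, no patient impact": 3,
    "documentation/administrative impact": 1,
}

def risk_priority(occurrence: str, detection: str, severity: str) -> int:
    """FMEA-style risk priority number computed from the defined criteria above."""
    return OCCURRENCE[occurrence] * DETECTION[detection] * SEVERITY[severity]

rpn = risk_priority("once per 10-100 batches",
                    "detected by end-product testing only",
                    "batch rejection, no patient impact")
print(f"Risk priority number: {rpn}")  # 3 * 3 * 3 = 27
```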

Leverage Good Risk Questions

A well-formulated risk question can significantly help reduce subjectivity in quality risk management (QRM) activities. Here’s how a good risk question contributes to reducing subjectivity:

Clarity and Focus

A good risk question provides clarity and focus for the risk assessment process. By clearly defining the scope and context of the risk being evaluated, it helps align all participants on what specifically needs to be assessed. This alignment reduces the potential for individual interpretations and subjective assumptions about the risk scenario.

Specific and Measurable Terms

Effective risk questions use specific and measurable terms rather than vague or ambiguous language. For example, instead of asking “What are the risks to product quality?”, a better question might be “What are the potential causes of out-of-specification dissolution results for Product X in the next 6 months?”. The specificity in the latter question helps anchor the assessment in objective, measurable criteria.

Factual Basis

A well-crafted risk question encourages the use of factual information and data rather than opinions or guesses. It should prompt the risk assessment team to seek out relevant data, historical information, and scientific knowledge to inform their evaluation. This focus on facts and evidence helps minimize the influence of personal biases and subjective judgments.

Standardized Approach

Using a consistent format for risk questions across different assessments promotes a standardized approach to risk identification and analysis. This consistency reduces variability in how risks are framed and evaluated, thereby decreasing the potential for subjective interpretations.

Objective Criteria

Good risk questions often incorporate or imply objective criteria for risk evaluation. For instance, a question like “What factors could lead to a deviation from the acceptable range of 5-10% for impurity Y?” sets clear, objective parameters for the assessment, reducing the room for subjective interpretation of what constitutes a significant risk.

Promotes Structured Thinking

Well-formulated risk questions encourage structured thinking about potential hazards, their causes, and consequences. This structured approach helps assessors focus on objective factors and causal relationships rather than relying on gut feelings or personal opinions.

Facilitates Knowledge Utilization

A good risk question should prompt the assessment team to utilize available knowledge effectively. It encourages the team to draw upon relevant data, past experiences, and scientific understanding, thereby grounding the assessment in objective information rather than subjective impressions.

By crafting risk questions that embody these characteristics, QRM practitioners can significantly reduce the subjectivity in risk assessments, leading to more reliable, consistent, and scientifically sound risk management decisions.

Fostering a Culture of Continuous Improvement

Reducing subjectivity in QRM is an ongoing process that requires a commitment to continuous improvement. Organizations should regularly review their QRM practices to identify areas for enhancement and incorporate feedback from stakeholders. Investing in training programs that build competencies in risk assessment methodologies and decision-making frameworks is essential for sustaining progress.

Moreover, fostering a culture that values transparency, collaboration, and accountability can empower teams to address subjectivity proactively. Encouraging open discussions about uncertainties or disagreements during risk assessments can lead to more robust outcomes.

Conclusion

The revisions introduced in ICH Q9(R1) represent a significant step forward in addressing long-standing challenges associated with subjectivity in QRM. By leveraging knowledge management, implementing structured decision-making processes, addressing cognitive biases, enhancing formality levels appropriately, and improving risk assessment tools, organizations can align their practices with the updated guidelines while ensuring more reliable and science-based outcomes.

It has been two years; it is long past time to be addressing these changes in your risk management process and quality system.

Ultimately, reducing subjectivity not only strengthens compliance with regulatory expectations but also enhances the quality of pharmaceutical products and safeguards patient safety—a goal that lies at the heart of effective Quality Risk Management.

The Lack of Objectivity in Quality Management

ICH Q9(R1) can be read as a revision that addresses long-standing issues of subjectivity in risk management. Subjectivity is a widespread problem throughout the quality sphere, posing significant challenges because it introduces personal biases, emotions, and opinions into decision-making processes that should ideally be driven by objective data and facts.

  • Inconsistent Decision-Making: Subjective decision-making can lead to inconsistencies because different individuals may have varying opinions and biases. This inconsistency can result in unpredictable outcomes and make it challenging to establish standardized processes. For example, one manager might prioritize customer satisfaction based on personal experiences, while another might focus on cost-cutting, leading to conflicting strategies within the same organization.
  • Bias and Emotional Influence: Subjectivity often involves emotional influence, which can cloud judgment and lead to decisions not in the organization’s best interest. For instance, a business owner might make decisions based on a personal attachment to a product or service rather than its market performance or profitability. This emotional bias can prevent the business from making necessary changes or investments, ultimately harming its growth and sustainability.
  • Risk Management Issues: In risk assessments, subjectivity can significantly impact the identification and evaluation of risks. Subjective assessments may overlook critical risks or overemphasize less significant ones, leading to inadequate risk management strategies. Objective, data-driven risk assessments are essential to accurately identify and mitigate potential threats to the business. See ICH Q9(R1).
  • Difficulty in Measuring Performance: Subjective criteria are often more complicated to quantify and measure, making it challenging to track performance and progress accurately. Objective metrics, such as key performance indicators (KPIs), provide clear, measurable data that can be used to assess the effectiveness of business processes and make informed decisions.
  • Potential for Misalignment: Subjective decision-making can lead to misalignment between business goals and outcomes. For example, if subjective opinions drive project management decisions, the project may deviate from its original scope, timeline, or budget, resulting in unmet objectives and dissatisfied stakeholders.
  • Impact on Team Dynamics: Subjectivity can also affect team dynamics and morale. Decisions perceived as biased or unfair can lead to dissatisfaction and conflict among team members. Objective decision-making, based on transparent criteria and data, helps build trust and ensures that all team members are aligned with the business’s goals.

Every organization I’ve been in has a huge problem with subjectivity, and I’m confident in asserting that none of us are doing enough to deal with this lack of objectivity. We mostly rely on intuition instead of objective guidelines that would create unambiguous, holistic, and universally usable models.

Understand the Decisions We Make

Every day, we make many decisions, sometimes without even noticing it. These decisions fall into four categories:

  • Acceptances: a binary choice between accepting or rejecting
  • Choices: opting for a subset from a group of alternatives
  • Constructions: creating an ideal solution given accessible resources
  • Evaluations: statements of worth backed by a commitment to act

These decisions can be simple or complex, with multiple criteria and several perspectives. Decision-making is the process of choosing an option among multiple alternatives.

The Fallacy of Expert Immunity is a Major Source of Subjectivity

There is a widespread but incorrect belief that experts are impartial and immune to biases. However, the truth is that no one is immune to bias, not even experts. In many ways, experts are more susceptible to certain biases. The very development of expertise creates and underpins many of them. For example, experience and training lead experts to engage in more selective attention, use chunking and schemas (typical activities and their sequences), and rely on heuristics and expectations arising from past base-rate experience, drawing on a whole range of top-down cognitive processes that create a priori assumptions and expectations.

These cognitive processes often enable experts to make quick and accurate decisions. However, the same mechanisms also create bias that can lead them in the wrong direction. Whatever the utility (and vulnerability) of such cognitive processing, it does not make experts immune from bias; indeed, expertise and experience may actually increase (or even cause) certain biases. Experts across domains are subject to cognitive vulnerabilities.

Even when experts are made aware of and acknowledge their biases, they nevertheless think they can overcome them by mere willpower. This is the illusion of control. Combating and countering these biases requires taking specific steps; willpower alone is inadequate to deal with the various manifestations of bias.

In fact, trying to deal with bias through the illusion of control may actually increase the bias due to “ironic processing” or “ironic rebound”: trying to minimize bias by willpower makes you think of it more and increases its effect. This is similar to a judge instructing jurors to disregard specific evidence; by doing so, the judge makes the jurors notice this evidence even more.

These fallacious beliefs prevent us from dealing with biases because they dismiss their power and existence. We need to acknowledge the impact of biases and understand their sources so we can take appropriate measures, when needed and where possible, to combat their effects.

| Fallacy | Incorrect Belief |
| --- | --- |
| Ethical Issues | It only happens to corrupt and unscrupulous individuals; an issue of morals and personal integrity, a question of personal character. |
| Bad Apples | It only happens to corrupt and unscrupulous individuals. It is an issue of morals and personal integrity, a question of personal character. |
| Expert Immunity | Experts are impartial and are not affected because bias does not impact competent experts doing their job with integrity. |
| Technological Protection | Using technology, instrumentation, automation, or artificial intelligence guarantees protection from human biases. |
| Blind Spot | Other experts are affected by bias, but not me. I am not biased; it is the other experts who are biased. |
| Illusion of Control | I am aware that bias impacts me, and therefore I can control and counter its effect. I can overcome bias by mere willpower. |

Six Fallacies that Increase Subjectivity

Mitigating Subjectivity

There are four basic strategies to mitigate the impact of subjectivity.

Data-Driven Decision Making

Utilize data and analytics to inform decisions, reducing reliance on personal opinions and biases.

  • Establish clear metrics with key performance indicators (KPIs), key behavior indicators (KBIs), and key risk indicators (KRIs) that are aligned with objectives (see the sketch after this list).
  • Implement robust data collection and analysis systems to gather relevant, high-quality data.
  • Use data visualization tools to present information in an easily digestible format.
  • Train employees on data literacy and interpretation to ensure proper use of data insights.
  • Regularly review and update data sources to maintain relevance and accuracy.
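
A minimal sketch of what such metrics might look like in code, with hypothetical records, KPI definitions, and thresholds:

```python
def right_first_time(batches):
    """KPI: share of batches released without any deviation (illustrative definition)."""
    clean = sum(1 for b in batches if b["deviations"] == 0)
    return clean / len(batches)

def kri_flag(metric_value, threshold, higher_is_better=True):
    """KRI: flag when a metric crosses its pre-agreed risk threshold."""
    return metric_value < threshold if higher_is_better else metric_value > threshold

batches = [{"id": "B1", "deviations": 0}, {"id": "B2", "deviations": 2},
           {"id": "B3", "deviations": 0}, {"id": "B4", "deviations": 0}]
rft = right_first_time(batches)
print(f"Right-first-time: {rft:.0%}, escalate: {kri_flag(rft, threshold=0.90)}")
```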

Standardized Processes

Implement standardized processes and procedures to ensure consistency and fairness in decision-making.

  • Document and formalize decision-making procedures across the organization.
  • Create standardized templates, checklists, and rubrics for evaluating options and making decisions.
  • Implement a consistent review and approval process for major decisions.
  • Regularly audit and update standardized processes to ensure they remain effective and relevant.

Education, Training, and Awareness

Educate and train employees and managers on the importance of objective decision-making and recognizing and minimizing personal biases.

  • Conduct regular training sessions on cognitive biases and their impact on decision-making.
  • Provide resources and tools to help employees recognize and mitigate their own biases.
  • Encourage a culture of open discussion and constructive challenge to promote diverse perspectives.
  • Implement mentoring programs to share knowledge and best practices for objective decision-making.

Digital Tools

Leverage digital tools and software to automate and streamline processes, reducing the potential for subjective influence. The last two items below are still more aspiration than reality.

  • Implement workflow management tools to ensure consistent application of standardized processes.
  • Use collaboration platforms to facilitate transparent and inclusive decision-making processes.
  • Adopt decision support systems that use algorithms and machine learning to provide recommendations based on data analysis.
  • Leverage artificial intelligence and predictive analytics to identify patterns and trends that may not be apparent to human decision-makers.

Accountable People

We tend to jumble forms of accountability in an organization, often conflating the people manager with the technical manager. I think it’s very important to differentiate between the two.

People managers deal with human resources and team dynamics, while technical managers deal with managing design, execution, and improvement. They can be the same person, but we need to recognize the differences and resource them appropriately. Too often we blur the two roles, and as a result neither is done well.

I’ve talked on this blog about a few of the technical manager types: Process Owners, the ASTM E2500 SME/Molecule Steward, and Knowledge Owners. There are certainly others out there. In the table below I added two more for comparison:

  • A qualified person as defined by OSHA, because I think this is a great generic look at the concept
  • The EU Qualified Person, which is industry relevant and one that often gets confused in execution
| Aspect | Qualified Person (OSHA Definition) | Qualified Person (EU) | Knowledge Owner | ASTM E2500 SME | Process Owner |
| --- | --- | --- | --- | --- | --- |
| Primary Focus | Ensuring compliance with safety standards and solving technical problems | Certifying that each batch of a medicinal product meets all required provisions | Managing and maintaining knowledge within a specific domain | Ensuring manufacturing systems meet quality and safety standards | Managing and optimizing a specific business process |
| Key Responsibilities | Solve or resolve problems related to the subject matter, work, or project; design and install systems to improve safety; ensure compliance with laws and standards; may not have the authority to stop work | Certify batches meet GMP and regulatory standards; ensure compliance with market authorization requirements; oversee quality control and assurance processes; conduct audits and inspections | Maintain and update the knowledge base; validate and broadcast new knowledge; provide training and support; monitor and update knowledge assets | Define system needs and identify critical aspects; develop and execute verification strategies; review system designs and manage risks; lead quality risk management efforts | Define process goals, purpose, and KPIs; communicate with key players and stakeholders; analyze process performance and identify improvements; ensure process compliance with regulations and standards |
| Skills Required | Technical expertise in the area; certification, degree, or other professional recognition; ability to solve technical problems | Degree in pharmacy, biology, chemistry, or related field; several years of experience in pharmaceutical manufacturing; registered with the competent authority in the EU member state | Subject matter expertise in the specific knowledge domain; analytical and validation skills; training and support skills | Technical understanding of manufacturing systems and equipment; risk management and verification skills; continuous improvement and change management skills | Leadership and communication skills; analytical and problem-solving skills; ability to define and monitor KPIs |
| Authority | Authority to design and install safety systems | Authority to certify batches and ensure compliance | Authority over knowledge management processes and content | Authority to define and verify critical aspects of systems | Authority to make decisions and implement changes in the process |
| Interaction with Others | Collaborates with production and quality control teams | Works with quality control, assurance, and regulatory teams | Works with various departments to ensure knowledge is shared and utilized | Collaborates with project stakeholders and engineering teams | Communicates with project leaders, process users, and other stakeholders |
| Examples of Activities | Reviewing batch documentation and certifying products; ensuring compliance with GMP and regulatory standards; overseeing investigations related to quality issues | Certifying each batch of medicinal products before release; ensuring compliance with GMP and regulatory standards; overseeing quality control and assurance processes | Validating new knowledge submissions; providing training on knowledge management systems; updating and maintaining knowledge databases | Conducting quality risk analyses and verification tests; reviewing system designs and managing changes; leading continuous improvement efforts | Defining process objectives and mission statements; monitoring process performance and compliance; identifying and implementing process improvements |
| Industry Context | Primarily in construction, manufacturing, and safety-critical industries | Pharmaceutical and biotechnology industries within the EU | Applicable across various industries, especially information-heavy sectors | Primarily in pharmaceutical and biotechnology industries | Applicable in any industry with defined business processes |

Comparison table

  • Qualified Person (OSHA Definition): Focuses on ensuring compliance with safety standards and solving technical problems. They possess technical expertise and professional recognition and are responsible for designing and installing safety systems.
  • Qualified Person (EU): Ensures that each batch of medicinal products meets all required provisions before release. They are responsible for compliance with GMP and regulatory standards and must be registered with the competent authority in the EU member state.
  • Knowledge Owner: Manages and disseminates knowledge within an organization. They ensure that knowledge is accurate, up-to-date, and accessible, and they provide training and support to facilitate knowledge sharing.
  • ASTM E2500 SME: Ensures that manufacturing systems meet quality and safety standards. They define system needs, develop verification strategies, manage risks, and lead continuous improvement efforts.
  • Process Owner: Manages and optimizes specific business processes. They define process goals, monitor performance, ensure compliance with standards, and implement improvements to enhance efficiency and effectiveness.

Common Themes

Subject Matter Expertise

  • All roles require a high level of subject matter expertise in their respective domains, whether it’s technical knowledge, regulatory compliance, manufacturing processes, or business processes.
  • This expertise is typically gained through formal education, certifications, extensive training, and practical experience.

Ensuring Compliance and Quality

  • A key responsibility across these roles is ensuring compliance with relevant laws, regulations, standards, and quality requirements.

Risk Identification and Management

  • These roles are all responsible for identifying potential risks, hazards, or process inefficiencies.
  • They are expected to develop and implement strategies to mitigate or eliminate these risks, ensuring the safety of operations and the quality of products or processes.

Continuous Improvement and Change Management

  • They are involved in continuous improvement efforts, identifying areas for optimization and implementing changes to enhance efficiency, quality, and knowledge sharing.
  • They are responsible for managing change processes, ensuring smooth transitions, and minimizing disruptions.

Authority and Decision-Making

  • Most of these roles have a certain level of authority and decision-making power within their respective domains.

Collaboration and Knowledge Sharing

  • Effective collaboration and knowledge sharing are essential for these roles to succeed.

While these roles have distinct responsibilities and focus areas, they share common goals of ensuring compliance, managing risks, driving continuous improvement, and leveraging subject matter expertise to achieve organizational objectives and maintain high standards of quality and safety. They are more similar than dissimilar and should be looked at holistically within the organization.

Build Your Knowledge Base

Engaging with knowledge and knowledge management are critical parts of development. The ability to navigate the flood of available data to find accurate information is tied directly to individuals’ existing knowledge and their skills at distinguishing credible information from misleading content.

There is ample evidence that many individuals lack the ability to accurately judge their understanding or the quality and accuracy of their performance (i.e., calibration). To truly develop our knowledge, we need to be engaged in deliberate practice. But true calibration requires feedback, guidance, and coaching that we may not have access to within our organizations. This requires effort and the deliberate building of systems and processes.

Information can be found with little mental effort, but without critical analysis of its legitimacy or validity, that ease can actually work against the development of deeper-processing strategies. It is easy to go online and get an answer, but unless learners put themselves in positions to struggle cognitively with an issue, and have occasions to transform or reframe problems, their progress toward competence is jeopardized.

The more learners forge principled knowledge in a professional domain, the greater their reported interest in and identity with that field. Therefore, without the active pursuit of knowledge, these individuals’ interest in professional development may wane and their progress toward expertise may stall. This is why I find professional societies so critical, and why I am always pushing people to step up.

My constant goal as a mentor is to help people do the following:

  • Refuse to be lulled into accepting a role as passive consumers of information, striving instead to be active producers of knowledge
  • Probe and critically analyze the information they encounter, rather than accepting quick, simple answers
  • Forge a meaningful interest in the profession and personal connections to members of professional communities, instead of relying on moment-by-moment stimulation and superficial relationships

If we are going to step up to the challenges ahead of us, to address the skill gaps we are seeing, we each need to be deliberate in how we develop and deliberate in how we build our organizations to support development.

Expert Intuition and Risk Management

(Comic: Saturday Morning Breakfast Cereal, http://smbc-comics.com/comic/horrible)

Risk management is a crucial aspect of any organization or project. However, it is often subject to human errors in subjective risk judgments. This is because most risk assessment methods rely on subjective inputs from experts. Without certain precautions, experts can make consistent errors in judgment about uncertainty and risk.

There are methods that can correct the systematic errors that people make, but very few organizations implement them. As a result, there is often an almost universal understatement of risk. We need to keep in mind a few rules about experience and expertise.

  • Experience is a nonrandom, nonscientific sample of events throughout our lifetime.
  • Experience is memory-based, and we are very selective regarding what we choose to remember.
  • What we conclude from our experience can be full of logical errors.
  • Unless we get reliable feedback on past decisions, there is no reason to believe our experience will tell us much.

No matter how much experience we accumulate, we seem to be very inconsistent in its application.

Experts have unconscious heuristics and biases that impact their judgment; some important ones include:

  • Misconceptions of chance: If you flip a coin six times, which result is more likely (H = heads, T = tails): HHHTTT or HTHTTH? They are equally likely, but many people assume that because the first series looks “less random” than the second, it must be less likely. This is an example of representativeness bias. We appear to judge odds based on what we assume to be representative scenarios. Human beings easily confuse patterns and randomness.
  • The conjunction fallacy: We often see specific events as more likely than broader categories of events.
  • Irrational belief in small samples
  • Disregarding variance in small samples: small samples have more random variance than large samples, yet we treat them as more reliable than they are.
  • Insensitivity to prior probabilities: People tend to ignore the past and focus on new information when making subjective estimates.

All of this feeds expert overconfidence, which consistently leads to underestimating risks.

What are some ways to overcome this? I recommend the following be built into your risk management system.

  • Pretend you are in the future looking back at failure. Start with the assumption that a major disaster did happen and describe how it happened.
  • Look to risks from others. Gather a list of related failures, for example, regulatory agency observations, and think of risks in relation to those.
  • Include everyone. Your organization has numerous experts on all sorts of specific risks. Make the effort to survey representatives of just about every job level.
  • Do peer reviews. Check assumptions by showing them to peers who are not immersed in the assessment.
  • Implement metrics for performance. The Brier score evaluates predictions both by how often the team was right and by the probability they assigned to the correct answer (see the sketch below).
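
A minimal sketch of the Brier score calculation, with hypothetical forecasts and outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between predicted probabilities and what actually
    happened (1 = event occurred, 0 = it did not). Lower is better; always
    guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical: probabilities the team assigned to risk events that were later resolved
forecasts = [0.9, 0.2, 0.7, 0.1]
outcomes = [1, 0, 0, 0]  # which events actually occurred
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
```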

Further Reading

Here are some sources that discuss the topic of human errors and subjective judgments in risk management: