Communication Loops and Silos: A Barrier to Effective Decision Making in Complex Industries

In complex industries such as aviation and biotechnology, effective communication is crucial for ensuring safety, quality, and efficiency. However, communication loops and silos can significantly hinder these efforts. The “Tower of Babel” problem, explored in the aviation sector by Follet, Lasa, and Mieusset in HindSight issue 36 (HS36), highlights how different professional groups develop their own languages and operate within isolated loops, leading to misunderstandings and disconnections. The article got me thinking about similar issues in my own industry.

The Tower of Babel Problem: A Thought-Provoking Perspective

The HS36 article provides a thought-provoking perspective on the “Tower of Babel” problem, where each aviation professional feels in control of their work but operates within their own loop. This phenomenon is reminiscent of the biblical story where a common language becomes fragmented, causing confusion and separation among people. In modern industries, this translates into different groups using their own jargon and working in isolation, making it difficult for them to understand each other’s perspectives and challenges.

For instance, in aviation, air traffic controllers (ATCOs), pilots, and managers each have their own “loop,” believing they are in control of their work. However, when these loops are disconnected, it can lead to miscommunication, especially when each group uses different terminology and operates under different assumptions about how work should be done (work-as-prescribed vs. work-as-done). This issue is equally pertinent in the biotech industry, where scientists, quality assurance teams, and regulatory affairs specialists often work in silos, which can impede the development and approval of new products.

Tower of Babel by Joos de Momper, Old Masters Museum

Impact on Decision Making

Decision making in biotech is heavily influenced by Good Practice (GxP) guidelines, which emphasize quality, safety, and compliance – and I often find that aviation, a fellow highly regulated industry, is a great place from which to draw perspective.

When communication loops are disconnected, decisions may not fully consider all relevant perspectives. For example, in GMP (Good Manufacturing Practice) environments, quality control teams might focus on compliance with regulatory standards, while research and development teams prioritize innovation and efficiency. If these groups do not effectively communicate, decisions might overlook critical aspects, such as the practicality of implementing new manufacturing processes or the impact on product quality.

Furthermore, the ICH Q9(R1) guideline emphasizes the importance of reducing subjectivity in Quality Risk Management (QRM) processes. Subjectivity can arise from personal opinions, biases, or inconsistent interpretations of risks by stakeholders, impacting every stage of QRM. To combat this, organizations must adopt structured approaches that prioritize scientific knowledge and data-driven decision-making. Effective knowledge management is crucial in this context, as it involves systematically capturing, organizing, and applying internal and external knowledge to inform QRM activities.

Academic Research on Communication Loops

Research in organizational behavior and communication highlights the importance of bridging these silos. Studies have shown that informal interactions and social events can significantly improve relationships and understanding among different professional groups (Katz & Fodor, 1963). In the biotech industry, fostering a culture of open communication can help ensure that GxP decisions are well-rounded and effective.

Moreover, the concept of “work-as-done” versus “work-as-prescribed” is relevant in biotech as well. Operators may adapt procedures to fit practical realities, which can lead to discrepancies between intended and actual practices. This gap can be bridged by encouraging feedback and continuous improvement processes, ensuring that decisions reflect both regulatory compliance and operational feasibility.

Case Studies and Examples

  1. Aviation Example: The HS36 article provides a compelling example of how disconnected loops can hinder effective decision making in aviation: when a standardized phraseology was introduced, frontline operators felt that the change did not account for their operational needs, leading to resistance and potential safety issues.
  2. Product Development: In the development of a new biopharmaceutical, different teams might have varying priorities. If the quality assurance team focuses solely on regulatory compliance without fully understanding the manufacturing challenges faced by production teams, this could lead to delays or quality issues. By fostering cross-functional communication, these teams can align their efforts to ensure both compliance and operational efficiency.
  3. ICH Q9(R1) Example: The revised ICH Q9(R1) guideline emphasizes the need to manage and minimize subjectivity in QRM. For instance, in assessing the risk of a new manufacturing process, a structured approach using historical data and scientific evidence can help reduce subjective biases. This ensures that decisions are based on comprehensive data rather than personal opinions.
  4. Technology Deployment: A recent FDA Warning Letter to Sanofi highlighted the importance of timely technological upgrades to equipment and facility infrastructure, emphasizing that staying current with technological advancements is essential for maintaining regulatory compliance and ensuring product quality. However, disconnected decision-making loops among development, operations, and quality teams can lead to major missteps here.

Strategies for Improvement

To overcome the challenges posed by communication loops and silos, organizations can implement several strategies:

  • Promote Cross-Functional Training: Encourage professionals to explore other roles and challenges within their organization. This can help build empathy and understanding across different departments.
  • Foster Informal Interactions: Organize social events and informal meetings where professionals from different backgrounds can share experiences and perspectives. This can help bridge gaps between silos and improve overall communication.
  • Define Core Knowledge: Establish a minimum level of core knowledge that all stakeholders should possess. This can help ensure that everyone has a basic understanding of each other’s roles and challenges.
  • Implement Feedback Loops: Encourage continuous feedback and improvement processes. This allows organizations to adapt procedures to better reflect both regulatory requirements and operational realities.
  • Leverage Knowledge Management: Implement robust knowledge management systems to reduce subjectivity in decision-making processes. This involves capturing, organizing, and applying internal and external knowledge to inform QRM activities.

Combating Subjectivity in Decision Making

In addition to bridging communication loops, reducing subjectivity in decision making is crucial for ensuring quality and safety. The revised ICH Q9(R1) guideline provides several strategies for this:

  • Structured Approaches: Use structured risk assessment tools and methodologies to minimize personal biases and ensure that decisions are based on scientific evidence.
  • Data-Driven Decision Making: Prioritize data-driven decision making by leveraging historical data and real-time information to assess risks and opportunities.
  • Cognitive Bias Awareness: Train stakeholders to recognize and mitigate cognitive biases that can influence risk assessments and decision-making processes.

Conclusion

In complex industries, effective communication is essential for ensuring safety, quality, and efficiency. The presence of communication loops and silos can lead to misunderstandings and poor decision making. By promoting cross-functional understanding, fostering informal interactions, and implementing feedback mechanisms, organizations can bridge these gaps and improve overall performance. Additionally, reducing subjectivity in decision making through structured approaches and data-driven decision making is critical for ensuring compliance with GxP guidelines and maintaining product quality. As industries continue to evolve, addressing these communication challenges will be crucial for achieving success in an increasingly interconnected world.


References:

  • Follet, S., Lasa, S., & Mieusset, L. (n.d.). The Tower of Babel Problem in Aviation. In HindSight Magazine, HS36. Retrieved from https://skybrary.aero/sites/default/files/bookshelf/hs36/HS36-Full-Magazine-Hi-Res-Screen-v3.pdf
  • Katz, D., & Fodor, J. (1963). The Structure of a Semantic Theory. Language, 39(2), 170–210.
  • Dekker, S. W. A. (2014). The Field Guide to Understanding Human Error. Ashgate Publishing.
  • Shorrock, S. (2023). Editorial. Who are we to judge? From work-as-done to work-as-judged. HindSight, 35, Just Culture…Revisited. Brussels: EUROCONTROL.

Reducing Subjectivity in Quality Risk Management: Aligning with ICH Q9(R1)

In a previous post, I discussed how overcoming subjectivity in risk management and decision-making requires fostering a culture of quality and excellence. It is an issue worth continuing to evaluate and push for further improvement on.

The revised ICH Q9(R1) guideline, finalized in January 2023, introduces critical updates to Quality Risk Management (QRM) practices, emphasizing the need to address subjectivity, enhance formality, improve risk-based decision-making, and manage product availability risks. These revisions aim to ensure that QRM processes are more science-driven, knowledge-based, and effective in safeguarding product quality and patient safety. Two years later it is important to continue to build on key strategies for reducing subjectivity in QRM and aligning with the updated requirements.

Understanding Subjectivity in QRM

Subjectivity in QRM arises from personal opinions, biases, heuristics, or inconsistent interpretations of risks by stakeholders. This can impact every stage of the QRM process—from hazard identification to risk evaluation and mitigation. The revised ICH Q9(R1) explicitly addresses this issue by introducing a new subsection, “Managing and Minimizing Subjectivity,” which emphasizes that while subjectivity cannot be entirely eliminated, it can be controlled through structured approaches.

The guideline highlights that subjectivity often stems from poorly designed scoring systems, differing perceptions of hazards and risks among stakeholders, and cognitive biases. To mitigate these challenges, organizations must adopt robust strategies that prioritize scientific knowledge and data-driven decision-making.

Strategies to Reduce Subjectivity

Leveraging Knowledge Management

ICH Q9(R1) underscores the importance of knowledge management as a tool to reduce uncertainty and subjectivity in risk assessments. Effective knowledge management involves systematically capturing, organizing, and applying internal and external knowledge to inform QRM activities. This includes maintaining centralized repositories for technical data, fostering real-time information sharing across teams, and learning from past experiences through structured lessons-learned processes.

By integrating knowledge management into QRM, organizations can ensure that decisions are based on comprehensive data rather than subjective estimations. For example, using historical data on process performance or supplier reliability can provide objective insights into potential risks.

To integrate knowledge management (KM) more effectively into quality risk management (QRM), organizations can implement several strategies to ensure decisions are based on comprehensive data rather than subjective estimations:

Establish Robust Knowledge Repositories

Create centralized, easily accessible repositories for storing and organizing historical data, lessons learned, and best practices. These repositories should include:

  • Process performance data
  • Supplier reliability metrics
  • Deviation and CAPA records
  • Audit findings and inspection observations
  • Technology transfer documentation

By maintaining these repositories, organizations can quickly access relevant historical information when conducting risk assessments.
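As a minimal sketch of what such a queryable repository might look like (the record structure, products, and entries below are entirely hypothetical, and a real system would sit in a database or QMS module):

```python
from dataclasses import dataclass

# Hypothetical record structure for a centralized knowledge repository.
@dataclass
class QualityRecord:
    product: str
    record_type: str   # e.g. "deviation", "CAPA", "audit_finding"
    year: int
    summary: str

# Tiny in-memory repository with invented entries.
repository = [
    QualityRecord("Product X", "deviation", 2023, "OOS dissolution result, batch 42"),
    QualityRecord("Product X", "CAPA", 2023, "Revised mixing time after deviation"),
    QualityRecord("Product Y", "audit_finding", 2024, "Incomplete logbook entries"),
]

def query(product, record_type):
    """Return all records matching a product and record type."""
    return [r for r in repository if r.product == product and r.record_type == record_type]

# A risk assessment team preparing for Product X pulls its deviation history:
for record in query("Product X", "deviation"):
    print(record.year, record.summary)
```

The point is not the storage mechanism but the retrieval step: a risk assessment that starts by pulling the relevant deviation and CAPA history is anchored in records rather than recollection.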

Implement Knowledge Mapping

Conduct knowledge mapping exercises to identify key sources of knowledge within the organization, then use the resulting knowledge maps to guide risk assessment teams to relevant information and expertise.

Develop Data Analytics Capabilities

Invest in data analytics tools and capabilities to extract meaningful insights from historical data. For example:

  • Use statistical process control to identify trends in manufacturing performance
  • Apply machine learning algorithms to predict potential quality issues based on historical patterns
  • Utilize data visualization tools to present complex risk data in an easily understandable format

These analytics can provide objective, data-driven insights into potential risks and their likelihood of occurrence.
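As a simple sketch of the statistical process control idea (the batch data is invented), control limits derived from a baseline period can flag later batches that fall outside mean ± 3 standard deviations:

```python
import statistics

# Invented assay results (% of target) for a baseline period of batches.
baseline = [99.8, 100.1, 99.9, 100.2, 99.7, 100.0]

mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
ucl = mean + 3 * sd  # upper control limit
lcl = mean - 3 * sd  # lower control limit

def out_of_control(new_results):
    """Return the results that fall outside the baseline control limits."""
    return [x for x in new_results if x > ucl or x < lcl]

print(out_of_control([100.0, 104.5, 99.9]))  # the 104.5 batch is flagged
```

Real SPC implementations use standard chart constants and run rules rather than a raw 3-sigma cut, but the principle is the same: the limits come from data, not from an assessor's judgment.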

Integrate KM into QRM Processes

Embed KM activities directly into QRM processes to ensure consistent use of available knowledge:

  • Include a knowledge gathering step at the beginning of risk assessments
  • Require risk assessment teams to document the sources of knowledge used in their analysis
  • Implement a formal process for capturing new knowledge generated during risk assessments

This integration helps ensure that all relevant knowledge is considered and that new insights are captured for future use.

Foster a Knowledge-Sharing Culture

Encourage a culture of knowledge sharing and collaboration within the organization:

  • Implement mentoring programs to facilitate the transfer of tacit knowledge
  • Establish communities of practice around key risk areas
  • Recognize and reward employees who contribute valuable knowledge to risk management efforts

By promoting knowledge sharing, organizations can tap into the collective expertise of their workforce to improve risk assessments.

Implementing Structured Risk-Based Decision-Making

The revised guideline introduces a dedicated section on risk-based decision-making, emphasizing the need for structured approaches that consider the complexity, uncertainty, and importance of decisions. Organizations should establish clear criteria for decision-making processes, define acceptable risk tolerance levels, and use evidence-based methods to evaluate options.

Structured decision-making tools can help standardize how risks are assessed and prioritized. Additionally, calibrating expert opinions through formal elicitation techniques can further reduce variability in judgments.
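One common structured tool is a weighted decision matrix, in which options are scored against pre-agreed criteria and weights so the trade-offs are explicit and auditable. The criteria, weights, and scores below are purely illustrative:

```python
# Illustrative weighted decision matrix for comparing risk mitigation options.
# In a real QRM process, the criteria, weights, and scoring anchors would be
# defined and documented before any option is evaluated.
criteria_weights = {"patient_safety": 0.5, "implementation_cost": 0.2, "time_to_deploy": 0.3}

options = {
    "Upgrade equipment": {"patient_safety": 5, "implementation_cost": 2, "time_to_deploy": 2},
    "Add in-process test": {"patient_safety": 4, "implementation_cost": 4, "time_to_deploy": 4},
}

def weighted_score(scores):
    """Sum of criterion scores multiplied by their agreed weights."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Rank options by weighted score, highest first.
ranked = sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(name, round(weighted_score(scores), 2))
```

Because the weights are fixed in advance, two assessors who agree on the individual scores will reach the same ranking, which is precisely the variability reduction the guideline is after.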

Addressing Cognitive Biases

Cognitive biases—such as overconfidence or anchoring—can distort risk assessments and lead to inconsistent outcomes. To address this, organizations should provide training on recognizing common biases and their impact on decision-making. Encouraging diverse perspectives within risk assessment teams can also help counteract individual biases.

For example, using cross-functional teams ensures that different viewpoints are considered when evaluating risks, leading to more balanced assessments. Regularly reviewing risk assessment outputs for signs of bias or inconsistencies can further enhance objectivity.

Enhancing Formality in QRM

ICH Q9(R1) introduces the concept of a “formality continuum,” which aligns the level of effort and documentation with the complexity and significance of the risk being managed. This approach allows organizations to allocate resources effectively by applying less formal methods to lower-risk issues while reserving rigorous processes for high-risk scenarios.

For instance, routine quality checks may require minimal documentation compared to a comprehensive risk assessment for introducing new manufacturing technologies. By tailoring formality levels appropriately, organizations can ensure consistency while avoiding unnecessary complexity.

Calibrating Expert Opinions

We need to recognize the importance of expert knowledge in QRM activities while acknowledging the potential for subjectivity and bias in expert judgments. We need to ensure we:

  • Implement formal processes for expert opinion elicitation
  • Use techniques to calibrate expert judgments, especially when estimating probabilities
  • Provide training on common cognitive biases and their impact on risk assessment
  • Employ diverse teams to counteract individual biases
  • Regularly review risk assessment outputs for signs of bias or inconsistencies

Calibration techniques may include:

  • Structured elicitation protocols that break down complex judgments into more manageable components
  • Feedback and training to help experts align their subjective probability estimates with actual frequencies of events
  • Using multiple experts and aggregating their judgments through methods like Cooke’s classical model
  • Employing facilitation techniques to mitigate groupthink and encourage independent thinking

By calibrating expert opinions, organizations can leverage valuable expertise while minimizing subjectivity in risk assessments.

Utilizing Cooke’s Classical Model

Cooke’s Classical Model is a rigorous method for evaluating and combining expert judgments to quantify uncertainty. Here are the key steps for using the Classical Model to evaluate expert judgment:

  1. Select and calibrate experts:
     • Choose 5-10 experts in the relevant field
     • Have experts assess uncertain quantities (“calibration questions”) for which true values are known or will be known soon
     • These calibration questions should be from the experts’ domain of expertise
  2. Elicit expert assessments:
     • Have experts provide probabilistic assessments (usually 5%, 50%, and 95% quantiles) for both calibration questions and questions of interest
     • Document experts’ reasoning and rationales
  3. Score expert performance:
     • Evaluate experts on two measures: (a) statistical accuracy, how well their probabilistic assessments match the true values of calibration questions; and (b) informativeness, how precise and focused their uncertainty ranges are
  4. Calculate performance-based weights:
     • Derive weights for each expert based on their statistical accuracy and informativeness scores
     • Experts performing poorly on calibration questions receive little or no weight
  5. Combine expert assessments:
     • Use the performance-based weights to aggregate experts’ judgments on the questions of interest
     • This creates a “Decision Maker” combining the experts’ assessments
  6. Validate the combined assessment:
     • Evaluate the performance of the weighted combination (“Decision Maker”) using the same scoring as for individual experts
     • Compare to equal-weight combination and best-performing individual experts
  7. Conduct robustness checks:
     • Perform cross-validation by using subsets of calibration questions to form weights
     • Assess how well performance on calibration questions predicts performance on questions of interest

The Classical Model aims to create an optimal aggregate assessment that outperforms both equal-weight combinations and individual experts. By using objective performance measures from calibration questions, it provides a scientifically defensible method for evaluating and synthesizing expert judgment under uncertainty.
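The performance-weighting idea can be illustrated with a heavily simplified numerical sketch. Cooke's actual scoring rules are more sophisticated (statistical accuracy uses a likelihood-based score over all quantile bins); here, accuracy is just the fraction of true values captured by each expert's 90% intervals, and informativeness is the inverse of their mean interval width:

```python
# Simplified, illustrative sketch of performance-based expert weighting in the
# spirit of Cooke's classical model. The experts, questions, and values are
# invented. Each expert gives (5%, 50%, 95%) quantiles for calibration
# questions whose true values are known.
true_values = [10.0, 20.0, 30.0]

# expert name -> list of (q05, q50, q95) assessments, one per question
experts = {
    "A": [(8, 10, 12), (18, 21, 24), (25, 31, 35)],  # narrow and accurate
    "B": [(0, 5, 9), (10, 12, 15), (40, 45, 50)],    # misses every interval
}

def accuracy(assessments):
    """Fraction of true values falling inside the expert's 90% intervals."""
    hits = sum(1 for (lo, _, hi), t in zip(assessments, true_values) if lo <= t <= hi)
    return hits / len(true_values)

def informativeness(assessments):
    """Crude proxy: narrower intervals score higher."""
    mean_width = sum(hi - lo for lo, _, hi in assessments) / len(assessments)
    return 1.0 / mean_width

# Weight = accuracy * informativeness, normalized across experts.
raw = {name: accuracy(a) * informativeness(a) for name, a in experts.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}
print(weights)  # expert A takes all the weight; expert B gets zero
```

Even this crude version shows the key property: an expert whose intervals systematically miss the known answers is driven toward zero weight, regardless of how confident they sound.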

Using Data to Support Decisions

ICH Q9(R1) emphasizes the importance of basing risk management decisions on scientific knowledge and data. The guideline encourages organizations to:

  • Develop robust knowledge management systems to capture and maintain product and process knowledge
  • Create standardized repositories for technical data and information
  • Implement systems to collect and convert data into usable knowledge
  • Gather and analyze relevant data to support risk-based decisions
  • Use quantitative methods where feasible, such as statistical models or predictive analytics

Specific approaches for using data in QRM may include:

  • Analyzing historical data on process performance, deviations, and quality issues to inform risk assessments
  • Employing statistical process control and process capability analysis to evaluate and monitor risks
  • Utilizing data mining and machine learning techniques to identify patterns and potential risks in large datasets
  • Implementing real-time data monitoring systems to enable proactive risk management
  • Conducting formal data quality assessments to ensure decisions are based on reliable information
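Process capability analysis is one of the more direct ways to replace a gut-feel risk estimate with a number. A minimal Cpk calculation, using invented batch data and hypothetical specification limits:

```python
import statistics

# Invented batch results and hypothetical specification limits.
results = [7.1, 7.3, 7.0, 7.2, 7.4, 7.1, 7.2, 7.3]
lsl, usl = 6.5, 8.0  # lower / upper specification limits

mean = statistics.mean(results)
sd = statistics.stdev(results)

# Cpk: distance from the mean to the nearest spec limit, in units of 3 sigma.
cpk = min(usl - mean, mean - lsl) / (3 * sd)
print(round(cpk, 2))
```

A Cpk comfortably above the conventional 1.33 threshold supports a lower occurrence rating with evidence, rather than an assessor's impression that the process "seems stable".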

Digitalization and emerging technologies can support data-driven decision making, but remember that validation requirements for these technologies should not be overlooked.

Improving Risk Assessment Tools

The design of risk assessment tools plays a critical role in minimizing subjectivity. Tools with well-defined scoring criteria and clear guidance on interpreting results can reduce variability in how risks are evaluated. For example, using quantitative methods where feasible—such as statistical models or predictive analytics—can provide more objective insights compared to qualitative scoring systems.

Organizations should also validate their tools periodically to ensure they remain fit-for-purpose and aligned with current regulatory expectations.
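As a sketch of what "well-defined scoring criteria" can mean in practice, an FMEA-style risk priority number can be computed from documented, discrete scales instead of free-form ratings. The anchor definitions and the action threshold below are hypothetical:

```python
# FMEA-style scoring against documented, discrete criteria. The scale anchors
# and the action threshold are hypothetical; a real tool would define each
# level in the QRM procedure so assessors cannot score by feel.
SEVERITY = {"negligible": 1, "minor": 3, "major": 7, "critical": 10}
OCCURRENCE = {"rare": 1, "occasional": 4, "frequent": 8}
DETECTABILITY = {"always_detected": 1, "usually_detected": 4, "rarely_detected": 9}

ACTION_THRESHOLD = 100  # RPNs at or above this value require mitigation

def rpn(severity, occurrence, detectability):
    """Risk priority number from the three documented scales."""
    return SEVERITY[severity] * OCCURRENCE[occurrence] * DETECTABILITY[detectability]

score = rpn("major", "occasional", "usually_detected")
print(score, "mitigate" if score >= ACTION_THRESHOLD else "accept")
```

Forcing assessors to pick a named level (with a written definition behind it) is what turns the score into something two teams can reproduce; note that RPN multiplication has known weaknesses, which is one reason the tool design itself should be validated periodically.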

Leverage Good Risk Questions

A well-formulated risk question can significantly help reduce subjectivity in quality risk management (QRM) activities. Here’s how a good risk question contributes to reducing subjectivity:

Clarity and Focus

A good risk question provides clarity and focus for the risk assessment process. By clearly defining the scope and context of the risk being evaluated, it helps align all participants on what specifically needs to be assessed. This alignment reduces the potential for individual interpretations and subjective assumptions about the risk scenario.

Specific and Measurable Terms

Effective risk questions use specific and measurable terms rather than vague or ambiguous language. For example, instead of asking “What are the risks to product quality?”, a better question might be “What are the potential causes of out-of-specification dissolution results for Product X in the next 6 months?” The specificity in the latter question helps anchor the assessment in objective, measurable criteria.

Factual Basis

A well-crafted risk question encourages the use of factual information and data rather than opinions or guesses. It should prompt the risk assessment team to seek out relevant data, historical information, and scientific knowledge to inform their evaluation. This focus on facts and evidence helps minimize the influence of personal biases and subjective judgments.

Standardized Approach

Using a consistent format for risk questions across different assessments promotes a standardized approach to risk identification and analysis. This consistency reduces variability in how risks are framed and evaluated, thereby decreasing the potential for subjective interpretations.

Objective Criteria

Good risk questions often incorporate or imply objective criteria for risk evaluation. For instance, a question like “What factors could lead to a deviation from the acceptable range of 5-10% for impurity Y?” sets clear, objective parameters for the assessment, reducing the room for subjective interpretation of what constitutes a significant risk.

Promotes Structured Thinking

Well-formulated risk questions encourage structured thinking about potential hazards, their causes, and consequences. This structured approach helps assessors focus on objective factors and causal relationships rather than relying on gut feelings or personal opinions.

Facilitates Knowledge Utilization

A good risk question should prompt the assessment team to utilize available knowledge effectively. It encourages the team to draw upon relevant data, past experiences, and scientific understanding, thereby grounding the assessment in objective information rather than subjective impressions.

By crafting risk questions that embody these characteristics, QRM practitioners can significantly reduce the subjectivity in risk assessments, leading to more reliable, consistent, and scientifically sound risk management decisions.

Fostering a Culture of Continuous Improvement

Reducing subjectivity in QRM is an ongoing process that requires a commitment to continuous improvement. Organizations should regularly review their QRM practices to identify areas for enhancement and incorporate feedback from stakeholders. Investing in training programs that build competencies in risk assessment methodologies and decision-making frameworks is essential for sustaining progress.

Moreover, fostering a culture that values transparency, collaboration, and accountability can empower teams to address subjectivity proactively. Encouraging open discussions about uncertainties or disagreements during risk assessments can lead to more robust outcomes.

Conclusion

The revisions introduced in ICH Q9(R1) represent a significant step forward in addressing long-standing challenges associated with subjectivity in QRM. By leveraging knowledge management, implementing structured decision-making processes, addressing cognitive biases, enhancing formality levels appropriately, and improving risk assessment tools, organizations can align their practices with the updated guidelines while ensuring more reliable and science-based outcomes.

It has been two years; it is long past time to be addressing these in your risk management process and quality system.

Ultimately, reducing subjectivity not only strengthens compliance with regulatory expectations but also enhances the quality of pharmaceutical products and safeguards patient safety—a goal that lies at the heart of effective Quality Risk Management.

Subject Matter Expert in Validation

In ASTM E2500, a Subject Matter Expert (SME) is an individual with specialized knowledge and technical understanding of critical aspects of manufacturing systems and equipment. The SME plays a crucial role throughout the project lifecycle, from defining needs to verifying and accepting systems. They are responsible for identifying critical aspects, reviewing system designs, developing verification strategies, and leading quality risk management efforts. SMEs ensure manufacturing systems are designed and verified to meet product quality and patient safety requirements.

In the ASTM E2500 process, the Subject Matter Expert (SME) has several key responsibilities critical to successfully implementing the standard. These responsibilities include:

  1. Definition of Needs: SMEs define the system’s needs and identify critical aspects that impact product quality and patient safety.
  2. Risk Management: SMEs participate in risk management activities, helping to identify, assess, and manage risks throughout the project lifecycle. This includes conducting quality risk analyses and consistently applying risk management principles.
  3. Verification Strategy Development: SMEs are responsible for planning and defining verification strategies. This involves selecting appropriate test methods, defining acceptance criteria, and ensuring that verification activities are aligned with the project’s critical aspects.
  4. System Design Review: SMEs review system designs to ensure they meet specified requirements and address identified risks. This includes participating in design reviews and providing technical input to optimize system functionality and compliance.
  5. Execution of Verification Tests: SMEs lead the execution of verification tests, ensuring that tests are conducted accurately and that results are thoroughly reviewed. They may also leverage vendor documentation and test results as part of the verification process, provided the vendor’s quality system and technical capabilities are deemed acceptable.
  6. Change Management: SMEs play a crucial role in change management, ensuring that any modifications to the system are properly evaluated, documented, and implemented. This helps maintain the system’s validated state and ensures continuous compliance with regulatory requirements.
  7. Continuous Improvement: SMEs are involved in continuous process improvement efforts, using operational and performance data to identify opportunities for enhancements. They also conduct root-cause analyses of failures and implement technically sound improvements based on gained product knowledge and understanding.

These responsibilities highlight the SME’s integral role in ensuring that manufacturing systems are designed, verified, and maintained to meet the highest standards of quality and safety, as outlined in ASTM E2500.

The ASTM E2500 SME is a Process Owner

ASTM E2500 uses the term SME in the same way we discuss process owners, or what is sometimes called product or molecule stewards. The term should probably be changed to reflect the special role of the SME and the relationship with other stakeholders.

A Molecule Steward has a specialized role within pharmaceutical and biotechnology companies and oversees the lifecycle of a specific molecule or drug product. This role involves a range of responsibilities, including:

  1. Technical Expertise: Acting as the subject matter expert per ASTM E2500.
  2. Product Control Strategies: Implementing appropriate product control strategies across development and manufacturing sites based on anticipated needs.
  3. Lifecycle Management: Providing end-to-end accountability for a given molecule, from development to late-stage lifecycle management.

A Molecule Steward ensures a drug product’s successful development, manufacturing, and lifecycle management, maintaining high standards of quality and compliance throughout the process.

The ASTM E2500 SME (Molecule Steward) and Stakeholders

In the ASTM E2500 approach, the Subject Matter Expert (Molecule Steward) collaborates closely with various project players to ensure the successful implementation of manufacturing systems.

Definition of Needs and Requirements

  • Collaboration with Project Teams: SMEs work with project teams from the beginning to define the system’s needs and requirements. This involves identifying critical aspects that impact product quality and patient safety.
  • Input from Multiple Departments: SMEs gather input from different departments, including product/process development, engineering, automation, and validation, to ensure that all critical quality attributes (CQAs) and critical process parameters (CPPs) are considered.

Risk Management

  • Quality Risk Analysis: SMEs lead the quality risk analysis process, collaborating with QA and other stakeholders to identify and assess risks. This helps focus on critical aspects and consistently apply risk management principles.
  • Vendor Collaboration: SMEs often work with vendors to leverage their expertise in conducting risk assessments and ensuring that vendor documentation meets quality requirements.

System Design Review

  • Design Review Meetings: SMEs participate in design review meetings with suppliers and project teams to ensure the system design meets the defined needs and critical aspects. This collaborative effort helps in reducing the need for modifications and repeat tests.
  • Supplier Engagement: SMEs engage with suppliers to ensure their design solutions are understood and integrated into the project. This includes reviewing supplier documentation and ensuring compliance with regulatory requirements.

Verification Strategy Development

  • Developing Verification Plans: SMEs collaborate with QA and engineering teams to develop verification strategies and plans. This involves selecting appropriate test methods, defining acceptance criteria, and ensuring verification activities align with project goals.
  • Execution of Verification Tests: SMEs may work with suppliers to conduct verification tests at the supplier’s site, ensuring that tests are performed accurately and efficiently. This collaboration helps achieve the “right test” at the “right time” objective.

Change Management

  • Managing Changes: SMEs play a crucial role in the change management process, working with project teams to evaluate, document, and implement changes. This ensures that the system remains in a validated state and continues to meet regulatory requirements.
  • Continuous Improvement: SMEs collaborate with other stakeholders to identify opportunities for process improvements and implement changes based on operational and performance data.

Documentation and Communication

  • Clear Communication: SMEs ensure clear communication and documentation of all verification activities and acceptance criteria. This involves working closely with QA to validate all critical aspects and ensure compliance with regulatory standards.

        Expert Intuition and Risk Management

        [Comic: Saturday Morning Breakfast Cereal, source: http://smbc-comics.com/comic/horrible]

        Risk management is a crucial aspect of any organization or project. However, it is often subject to human errors in subjective risk judgments. This is because most risk assessment methods rely on subjective inputs from experts. Without certain precautions, experts can make consistent errors in judgment about uncertainty and risk.

        There are methods that can correct the systematic errors people make, but very few organizations implement them. As a result, there is an almost universal understatement of risk. We need to keep in mind a few rules about experience and expertise.

        • Experience is a nonrandom, nonscientific sample of events from our lifetime.
        • Experience is memory-based, and we are very selective about what we choose to remember.
        • What we conclude from our experience can be full of logical errors.
        • Unless we get reliable feedback on past decisions, there is no reason to believe our experience tells us much.

        No matter how much experience we accumulate, we seem to be very inconsistent in its application.

        Experts have unconscious heuristics and biases that impact their judgment; some important ones include:

        • Misconceptions of chance: If you flip a coin six times, which result is more likely (H = heads, T = tails): HHHTTT or HTHTTH? Both are equally likely, but many people assume that because the first series looks “less random” than the second, it must be less likely. This is an example of representativeness bias: we judge odds based on what we assume to be representative scenarios, and human beings easily confuse patterns with randomness.
        • The conjunction fallacy: We often judge specific events (e.g., “a flood caused by an earthquake”) as more likely than the broader categories that contain them (“a flood”).
        • Irrational belief in small samples: assuming that a handful of observations reliably mirrors the whole population.
        • Disregarding variance in small samples: small samples show more random variance than large ones, yet we treat their results as more reliable than they are.
        • Insensitivity to prior probabilities: People tend to ignore base rates and focus on new information when making subjective estimates.
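        The coin-flip claim above can be checked directly: any specific six-flip sequence has probability (1/2)^6 ≈ 0.0156, no matter how “random” it looks. A minimal simulation sketch (illustrative only; the function name is my own):

```python
import random

def sequence_probability(seq: str, trials: int = 200_000) -> float:
    """Estimate the probability of observing an exact flip sequence by simulation."""
    hits = 0
    for _ in range(trials):
        # Flip a fair coin len(seq) times and compare against the target sequence
        flips = "".join(random.choice("HT") for _ in range(len(seq)))
        if flips == seq:
            hits += 1
    return hits / trials

# Both the "patterned" and the "random-looking" sequence converge
# to the same exact probability, (1/2)**6 = 0.015625
for seq in ("HHHTTT", "HTHTTH"):
    print(seq, round(sequence_probability(seq), 3))
```

        Running it shows both estimates hovering around 0.016, which is the point of the bias: our sense of what a “representative” random outcome looks like has no bearing on the actual odds.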

        These biases add up to expert overconfidence, which consistently understates risk.

        What are some ways to overcome this? I recommend the following be built into your risk management system.

        • Pretend you are in the future looking back at failure. Start with the assumption that a major disaster did happen and describe how it happened.
        • Look to risks from others. Gather a list of related failures, for example, regulatory agency observations, and think of risks in relation to those.
        • Include Everyone. Your organization has numerous experts on all sorts of specific risks. Make the effort to survey representatives of just about every job level.
        • Do peer reviews. Check assumptions by showing them to peers who are not immersed in the assessment.
        • Implement metrics for performance. The Brier score evaluates predictions both by how often the team was right and by the probability it estimated for each answer.
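        For the last point, the Brier score is simply the mean squared difference between forecast probabilities and binary outcomes: 0.0 is perfect, 0.25 is what always guessing 50% earns, and 1.0 is the worst possible. A minimal sketch (the example estimates are hypothetical):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and binary outcomes.

    Lower is better: 0.0 is perfect calibration, 1.0 is maximally wrong.
    """
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must be the same length")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A risk team's probability estimates vs. whether each risk materialized (1) or not (0)
estimates = [0.9, 0.7, 0.2, 0.1]
actuals   = [1,   1,   0,   0]
print(round(brier_score(estimates, actuals), 4))  # 0.0375
```

        Tracking this score over time tells you whether the team is calibrated; a chronically overconfident team (probabilities pushed toward 0 and 1) scores measurably worse than a well-calibrated one.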


        Communities of Practice

        Knowledge management is a key enabler for quality, and should firmly be part of our standards of practice and competencies. There is a host of practices, and one tool that should be in our toolboxes as quality professionals is the Community of Practice (COP).

        What is a Community of Practice?

        Wenger, Trayner, and de Laat (2011) defined a Community of Practice as a “learning partnership among people who find it useful to learn from and with each other about a particular domain. They use each other’s experience of practice as a learning resource.” Etienne Wenger originated the concept of the Community of Practice and has driven much of its subsequent development.

        Communities of practice are groups of people who share a passion for something that they know how to do, and who interact regularly in order to learn how to do it better. As such, they are a great tool for continuous improvement.

        These communities can be defined by disciplines, by problems, or by situations. They can be internal or external. A group of deviation investigators who want to perform better investigations, contamination control experts sharing across sites, the list is probably endless for whenever there is a shared problem to be solved.

        The idea is to enable practitioners to manage knowledge. Practitioners have a special connection with each other because they share actual experiences. They understand each other’s stories, difficulties, and insights. This allows them to learn from each other and build on each other’s expertise.

        There are three fundamental characteristics of communities:

        • Domain: the area of knowledge that brings the community together, gives it its identity, and defines the key issues that members need to address. A community of practice is not just a personal network: it is about something. Its identity is defined not just by a task, as it would be for a team, but by an “area” of knowledge that needs to be explored and developed.
        • Community: the group of people for whom the domain is relevant, the quality of the relationships among members, and the definition of the boundary between the inside and the outside. A community of practice is not just a Web site or a library; it involves people who interact and who develop relationships that enable them to address problems and share knowledge.
        • Practice: the body of knowledge, methods, tools, stories, cases, documents, which members share and develop together. A community of practice is not merely a community of interest. It brings together practitioners who are involved in doing something. Over time, they accumulate practical knowledge in their domain, which makes a difference to their ability to act individually and collectively.

        The combination of domain, community, and practice is what enables communities of practice to manage knowledge. Domain provides a common focus; community builds relationships that enable collective learning; and practice anchors the learning in what people do. Cultivating communities of practice requires paying attention to all three elements.

        Communities of Practice are different from work groups and project teams.

        | | What’s the purpose? | Who belongs? | What holds it together? | How long does it last? |
        |---|---|---|---|---|
        | Community of Practice | To develop members’ capabilities; to build and exchange knowledge | Members who share the domain and community | Commitment from the organization; identification with the group’s expertise; passion | As long as there is interest in maintaining the group |
        | Formal work group | To deliver a product or service | Everyone who reports to the group’s manager | Job requirements and common goals | Until the next reorganization |
        | Project team | To accomplish a specific task | Employees assigned by management | The project’s milestones and goals | Until the project has been completed |
        | Informal network | To collect and pass on business information | Friends and business acquaintances | Mutual needs | As long as people have a reason to connect |

        Types of organizing blocks

        Establishing a Community of Practice

        Sponsorship

        For a Community of Practice to thrive, the organization must provide adequate sponsorship. Sponsors are leaders who see that a community can deliver value, and who make sure the community has the resources it needs to function and that its ideas and proposals find their way into the organization. While there is often one specific sponsor, it is more useful to think about the sponsorship structure that enables communities to thrive and have an impact on the performance of the organization. This includes high-level executive sponsorship as well as the sponsorship of line managers who control how employees spend their time. The role of sponsorship includes:

        • Translating strategic imperatives into a knowledge-centric vision of the organization
        • Legitimizing the work of communities in terms of strategic priorities
        • Channeling appropriate resources to ensure sustained success
        • Giving a voice to the insights and proposals of communities so they affect the way business is conducted
        • Negotiating accountability between line operations and communities (e.g., who decides which “best practices” to adopt)

        Support Structure

        Communities of Practice need organizational support to function. This support includes:

        • A few explicit roles, some of which are recognized by the formal organization and resourced with dedicated time
        • Direct resources for the nurturing of the community infrastructure including meeting places, travel funds, and money for specific projects
        • Technological infrastructure that enables members to communicate regularly and to accumulate documents

        When communities of practice are used systematically, it pays to assemble a small “support team” of internal consultants who provide logistics and process advice: coaching community leaders, running educational activities to raise awareness and skills, facilitating sessions, communicating with management, and coordinating across the various communities of practice. This is helpful, but not strictly necessary.

        Process Owners and Communities of Practice go hand-in-hand. Often it is either the Process Owner in a governance or organizing role; or the community of practice is made up of process owners across the network.

        Recognition Structure

        Communities of Practice allow participants to build reputation, a crucial asset in the knowledge economy. Such reputation building depends on both peer and organizational recognition.

        • Peer recognition: community-based feedback and acknowledgement mechanisms that celebrate community participation
        • Organizational recognition: a rubric in performance appraisals for community contributions, and career paths for people who take on community leadership