Assessing the Quality of Our Risk Management Activities

Twenty years on, risk management in the pharmaceutical world continues to be challenging. ICH Q9 asks us to ensure that risk assessments are systematic, structured, and based on scientific knowledge, yet a large part of the ICH Q9(R1) revision was written to address continued struggles with subjectivity, formality, and decision-making. And quite frankly, two years after that revision, it’s clear to me that we, as an industry, are still working to absorb its messages.

A big challenge is that we struggle to measure the effectiveness of our risk assessments. This is a great place for a rubric.

Luckily, we have a good tool out there to adopt: the Risk Analysis Quality Test (RAQT1.0), developed by the Society for Risk Analysis (SRA). This comprehensive framework is designed to evaluate and improve the quality of risk assessments. We can apply it to meet the requirements of International Council for Harmonisation (ICH) Q9, which outlines quality risk management principles for the pharmaceutical industry, and from there drive continued improvement in our risk management activities.

Components of RAQT1.0

The Risk Analysis Quality Test consists of 76 questions organized into 15 categories:

  • Framing the Analysis and Its Interface with Decision Making
  • Capturing the Risk Generating Process (RGP)
  • Communication
  • Stakeholder Involvement
  • Assumptions and Scope Boundary Issues
  • Proactive Creation of Alternative Courses of Action
  • Basis of Knowledge
  • Data Limitations
  • Analysis Limitations
  • Uncertainty
  • Consideration of Alternative Analysis Approaches
  • Robustness and Resilience of Action Strategies
  • Model and Analysis Validation and Documentation
  • Reporting
  • Budget and Schedule Adequacy

Application to ICH Q9 Requirements

ICH Q9 emphasizes the importance of a systematic and structured risk assessment process. The RAQT can be used to ensure that risk assessments are thorough and meet quality standards. For example, Category G (Basis of Knowledge) and Category H (Data Limitations) help in evaluating the scientific basis and data quality of the risk assessment, aligning with ICH Q9’s requirement for using available knowledge and data.

The RAQT’s Category B (Capturing the Risk Generating Process) and Category C (Communication) can help in identifying and communicating risks effectively. This aligns with ICH Q9’s requirement to identify potential risks based on scientific knowledge and understanding of the process.

Categories such as Category I (Analysis Limitations) and Category J (Uncertainty) in the RAQT help in analyzing the risks and addressing uncertainties, which is a key aspect of ICH Q9. These categories ensure that the analysis is robust and considers all relevant factors.

The RAQT’s Category A (Framing the Analysis and Its Interface with Decision Making) and Category F (Proactive Creation of Alternative Courses of Action) are crucial for evaluating risks and developing mitigation strategies. This aligns with ICH Q9’s requirement to evaluate risks and determine the need for risk reduction.

Categories like Category L (Robustness and Resilience of Action Strategies) and Category M (Model and Analysis Validation and Documentation) in the RAQT help in ensuring that the risk control measures are robust and well-documented. This is consistent with ICH Q9’s emphasis on implementing and reviewing controls.

Category D (Stakeholder Involvement) of the RAQT ensures that stakeholders are engaged in the risk management process, which is a requirement under ICH Q9 for effective communication and collaboration.

The RAQT can be applied both retrospectively and prospectively, allowing for the evaluation of past risk assessments and the planning of future ones. This aligns with ICH Q9’s requirement for periodic review and continuous improvement of the risk management process.

Creating a Rubric

To make this actionable we need a tool, a rubric, that allows folks to evaluate what good looks like. I would insert this tool into the quality oversight of risk management.

Category A: Framing the Analysis and Its Interface With Decision Making

| Criteria | Excellent (4) | Good (3) | Fair (2) | Poor (1) |
| --- | --- | --- | --- | --- |
| Problem Definition | Clearly and comprehensively defines the problem, including all relevant aspects and stakeholders | Adequately defines the problem with most relevant aspects considered | Partially defines the problem with some key aspects missing | Poorly defines the problem or misses critical aspects |
| Analytical Approach | Selects and justifies an optimal analytical approach, demonstrating deep understanding of methodologies | Chooses an appropriate analytical approach with reasonable justification | Selects a somewhat relevant approach with limited justification | Chooses an inappropriate approach or provides no justification |
| Data Collection and Management | Thoroughly identifies all necessary data sources and outlines a comprehensive data management plan | Identifies most relevant data sources and provides an adequate data management plan | Identifies some relevant data sources and offers a basic data management plan | Fails to identify key data sources or lacks a coherent data management plan |
| Stakeholder Identification | Comprehensively identifies all relevant stakeholders and their interests | Identifies most key stakeholders and their primary interests | Identifies some stakeholders but misses important ones or their interests | Fails to identify major stakeholders or their interests |
| Decision-Making Context | Provides a thorough analysis of the decision-making context, including constraints and opportunities | Adequately describes the decision-making context with most key factors considered | Partially describes the decision-making context, missing some important factors | Poorly describes or misunderstands the decision-making context |
| Alignment with Organizational Goals | Demonstrates perfect alignment between the analysis and broader organizational objectives | Shows good alignment with organizational goals, with minor gaps | Partially aligns with organizational goals, with significant gaps | Fails to align with or contradicts organizational goals |
| Communication Strategy | Develops a comprehensive strategy for communicating results to all relevant decision-makers | Outlines a good communication strategy covering most key decision-makers | Provides a basic communication plan with some gaps | Lacks a clear strategy for communicating results to decision-makers |

This rubric provides a framework for assessing the quality of work in framing an analysis and its interface with decision-making. It covers key aspects such as problem definition, analytical approach, data management, stakeholder consideration, decision-making context, alignment with organizational goals, and communication strategy. Each criterion is evaluated on a scale from 1 (Poor) to 4 (Excellent), allowing for nuanced assessment of performance in each area.

To use this rubric effectively:

  1. Adjust the criteria and descriptions as needed to fit your specific context or requirements (one way to keep them adjustable is to hold the rubric as data, as sketched below).
  2. Ensure that the expectations for each level (Excellent, Good, Fair, Poor) are clear and distinguishable.
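Because those criteria and level descriptions will evolve, one option is to hold the rubric as structured data rather than as prose buried in a document. Here is a minimal sketch in Python; the class and field names are my own invention (nothing below comes from the SRA tool), the descriptors are abbreviated, and only two Category A criteria are shown:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One rubric row: a name plus a descriptor for each scoring level."""
    name: str
    levels: dict[int, str]  # 4 = Excellent ... 1 = Poor

# Two Category A criteria, with abbreviated descriptors for the sketch.
CATEGORY_A = [
    Criterion("Problem Definition", {
        4: "Clearly and comprehensively defines the problem",
        3: "Adequately defines the problem",
        2: "Partially defines the problem",
        1: "Poorly defines the problem or misses critical aspects",
    }),
    Criterion("Analytical Approach", {
        4: "Selects and justifies an optimal analytical approach",
        3: "Chooses an appropriate approach with reasonable justification",
        2: "Selects a somewhat relevant approach with limited justification",
        1: "Chooses an inappropriate approach or gives no justification",
    }),
]

# Print a blank scoring sheet an assessor can work from.
for criterion in CATEGORY_A:
    print(criterion.name)
    for score in sorted(criterion.levels, reverse=True):
        print(f"  [{score}] {criterion.levels[score]}")
```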

My next steps will be to add specific examples or indicators for each level to provide more guidance to both assessors and those being assessed.

I also may, depending on internal needs, want to assign different weights to each criterion based on their relative importance in a specific context. In this case I think each ends up weighted pretty similarly.

I would then go and add the other sections. For example, here is Category B with some possible weighting.

Category B: Capturing the Risk Generating Process (RGP)

| Component | Weight Factor | Excellent | Satisfactory | Needs Improvement | Poor |
| --- | --- | --- | --- | --- | --- |
| B1. Comprehensiveness | 4 | The analysis includes: i) a structured taxonomy of hazards/events demonstrating comprehensiveness; ii) each scenario spelled out with causes and types of change; iii) explicit addressing of potential “Black Swan” events; iv) a clear description of the implications of such events for risk management | The analysis includes 3 of the 4 elements from the Excellent criteria, with minor gaps that do not significantly impact understanding | The analysis includes only 2 of the 4 elements from the Excellent criteria, or has significant gaps in comprehensiveness | The analysis includes 1 or fewer elements from the Excellent criteria, severely lacking in comprehensiveness |
| B2. Basic Structure of RGP | 2 | Clearly identifies and accounts for the basic structure of the RGP (e.g., linear, chaotic, complex adaptive) AND uses appropriate mathematical structures (e.g., linear, quadratic, exponential) that match the RGP structure | Identifies the basic structure of the RGP BUT does not fully align mathematical structures with the RGP | Attempts to identify the RGP structure but does so incorrectly or incompletely, OR uses mathematical structures that do not align with the RGP | Does not identify or account for the basic structure of the RGP |
| B3. Complexity of RGP | 3 | Lists all important causal and associative links in the RGP AND demonstrates how each link is accounted for in the analysis | Lists most important causal and associative links in the RGP AND demonstrates how most links are accounted for in the analysis | Lists some causal and associative links but misses key elements, OR does not adequately demonstrate how links are accounted for in the analysis | Does not list causal and associative links or account for them in the analysis |
| B4. Early Warning Detection | 3 | Includes a clear process for detecting early warnings of potential surprising risk aspects, beyond just concrete events | Includes a process for detecting early warnings, but it may be limited in scope or not fully developed | Mentions the need for early warning detection but does not provide a clear process | Does not address early warning detection |
| B5. System Changes | 2 | Fully considers the possibility of system changes AND establishes adequate mechanisms to detect those changes | Considers the possibility of system changes BUT mechanisms to detect changes are not fully developed | Mentions the possibility of system changes but does not adequately consider or establish detection mechanisms | Does not consider or address the possibility of system changes |

I definitely need to go back and add more around structure requirements. The SRA RAQT tool needs some more interpretation here.
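To make the weighting concrete, here is a minimal sketch of how a weighted Category B score might be rolled up during quality oversight. It assumes each component is rated 1 (Poor) to 4 (Excellent) against the table above; the function and constant names are illustrative, not part of the SRA RAQT:

```python
# Hypothetical helper for weighted rubric scoring; not part of the SRA RAQT.
# Weight factors come from the Category B table above.
CATEGORY_B_WEIGHTS = {
    "B1. Comprehensiveness": 4,
    "B2. Basic Structure of RGP": 2,
    "B3. Complexity of RGP": 3,
    "B4. Early Warning Detection": 3,
    "B5. System Changes": 2,
}

def weighted_score(ratings: dict[str, int], weights: dict[str, int]) -> float:
    """Weighted average of 1-4 ratings, normalized back to the 1-4 scale."""
    for name, rating in ratings.items():
        if not 1 <= rating <= 4:
            raise ValueError(f"{name}: rating must be 1-4, got {rating}")
    total_weight = sum(weights[name] for name in ratings)
    weighted_sum = sum(weights[name] * rating for name, rating in ratings.items())
    return weighted_sum / total_weight

# Example: one assessor's ratings for a risk assessment under review.
ratings = {
    "B1. Comprehensiveness": 3,
    "B2. Basic Structure of RGP": 2,
    "B3. Complexity of RGP": 4,
    "B4. Early Warning Detection": 2,
    "B5. System Changes": 3,
}
print(f"Category B weighted score: {weighted_score(ratings, CATEGORY_B_WEIGHTS):.2f}")
```

Normalizing back to the 1-4 scale keeps scores comparable across categories with different total weights.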

Category C: Risk Communication

| Component | Weight Factor | Excellent | Satisfactory | Needs Improvement | Poor |
| --- | --- | --- | --- | --- | --- |
| C1. Integration of Communication into Risk Analysis | 3 | Communication is fully integrated into the risk analysis following established norms. All aspects of the methodology are clearly addressed, including context establishment, risk assessment (identification, analysis, evaluation), and risk treatment. There is clear evidence of pre-assessment, management, appraisal, characterization and evaluation. Knowledge about the risk is thoroughly categorized. | Communication is integrated into the risk analysis following most aspects of established norms. Most key elements of methodologies like ISO 31000 or IRGC are addressed, but some minor aspects may be missing or unclear. Knowledge about the risk is categorized, but may lack some detail. | Communication is partially integrated into the risk analysis, but significant aspects of established norms are missing. Only some elements of methodologies like ISO 31000 or IRGC are addressed. Knowledge categorization about the risk is incomplete or unclear. | There is little to no evidence of communication being integrated into the risk analysis following established norms. Methodologies like ISO 31000 or IRGC are not followed. Knowledge about the risk is not categorized. |
| C2. Adequacy of Risk Communication | 3 | All considerations for effective risk communication have been applied to ensure adequacy between analysts and decision makers, analysts and other stakeholders, and decision makers and stakeholders. There is clear evidence that all parties agree the communication is adequate. | Most considerations for effective risk communication have been applied. Communication appears adequate between most parties, but there may be minor gaps or areas where agreement on adequacy is not explicitly stated. | Some considerations for effective risk communication have been applied, but there are significant gaps. Communication adequacy is questionable between one or more sets of parties. There is limited evidence of agreement on communication adequacy. | Few to no considerations for effective risk communication have been applied. There is no evidence of adequate communication between analysts, decision makers, and stakeholders. There is no indication of agreement on communication adequacy. |

Category D: Stakeholder Involvement

| Criteria | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1) |
| --- | --- | --- | --- | --- | --- |
| Stakeholder Identification | 4 | All relevant stakeholders are systematically and comprehensively identified | Most relevant stakeholders are identified, with minor omissions | Some relevant stakeholders are identified, but significant groups are missed | Few or no relevant stakeholders are identified |
| Stakeholder Consultation | 3 | All identified stakeholders are thoroughly consulted, with their perceptions and concerns fully considered | Most identified stakeholders are consulted, with their main concerns considered | Some stakeholders are consulted, but consultation is limited in scope or depth | Few or no stakeholders are consulted |
| Stakeholder Engagement | 3 | Stakeholders are actively engaged throughout the entire risk management process, including problem framing, decision-making, and implementation | Stakeholders are engaged in most key stages of the risk management process | Stakeholders are engaged in some aspects of the risk management process, but engagement is inconsistent | Stakeholders are minimally engaged or not engaged at all in the risk management process |
| Effectiveness of Involvement | 2 | All stakeholders would agree that they were effectively consulted and engaged | Most stakeholders would agree that they were adequately consulted and engaged | Some stakeholders may feel their involvement was insufficient or ineffective | Most stakeholders would likely feel their involvement was inadequate or ineffective |

Category E: Assumptions and Scope Boundary Issues

| Criterion | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1) |
| --- | --- | --- | --- | --- | --- |
| E1. Important assumptions and implications listed | 4 | All important assumptions and their implications for risk management are systematically listed in clear language understandable to decision makers. Comprehensive and well-organized. | Most important assumptions and implications are listed in language generally clear to decision makers. Some minor omissions or lack of clarity. | Some important assumptions and implications are listed, but significant gaps exist. Language is not always clear to decision makers. | Few or no important assumptions and implications are listed. Language is unclear or incomprehensible to decision makers. |
| E2. Risks of assumption deviations evaluated | 3 | Risks of all significant assumptions deviating from the actual Risk Generating Process are thoroughly evaluated. Consequences and implications are clearly communicated to decision makers. | Most risks of significant assumption deviations are evaluated. Consequences and implications are generally communicated to decision makers, with minor gaps. | Some risks of assumption deviations are evaluated, but significant gaps exist. Communication to decision makers is incomplete or unclear. | Few or no risks of assumption deviations are evaluated. Little to no communication of consequences and implications to decision makers. |
| E3. Scope boundary issues and implications listed | 3 | All important scope boundary issues and their implications for risk management are systematically listed in clear language understandable to decision makers. Comprehensive and well-organized. | Most important scope boundary issues and implications are listed in language generally clear to decision makers. Some minor omissions or lack of clarity. | Some important scope boundary issues and implications are listed, but significant gaps exist. Language is not always clear to decision makers. | Few or no important scope boundary issues and implications are listed. Language is unclear or incomprehensible to decision makers. |

Category F: Proactive Creation of Alternative Courses of Action

| Criteria | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1) |
| --- | --- | --- | --- | --- | --- |
| Systematic generation of alternatives | 4 | A comprehensive and structured process is used to systematically generate a wide range of alternative courses of action, going well beyond initially considered options | A deliberate process is used to generate multiple alternative courses of action beyond those initially considered | Some effort is made to generate alternatives, but the process is not systematic or comprehensive | Little to no effort is made to generate alternatives beyond those initially considered |
| Goal-focused creation | 3 | All generated alternatives are clearly aligned with and directly address the stated goals of the analysis | Most generated alternatives align with the stated goals of the analysis | Some generated alternatives align with the goals, but others seem tangential or unrelated | Generated alternatives (if any) do not align with or address the stated goals |
| Consideration of robust/resilient options | 3 | Multiple robust and resilient alternatives are developed to address various uncertainty scenarios | At least one robust or resilient alternative is developed to address uncertainty | Robustness and resilience are considered, but not fully incorporated into alternatives | Robustness and resilience are not considered in alternative generation |
| Examination of unintended consequences | 2 | Thorough examination of potential unintended consequences for each alternative, including action-reaction spirals | Some examination of potential unintended consequences for most alternatives | Limited examination of unintended consequences for some alternatives | No consideration of potential unintended consequences |
| Documentation of alternative creation process | 1 | The process of alternative generation is fully documented, including rationale for each alternative | The process of alternative generation is mostly documented | The process of alternative generation is partially documented | The process of alternative generation is not documented |

Category G: Basis of Knowledge

| Criterion | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1) |
| --- | --- | --- | --- | --- | --- |
| G1. Characterization of knowledge basis | 4 | All inputs are clearly characterized (empirical, expert elicitation, testing, modeling, etc.). Distinctions between broadly accepted and novel analyses are explicitly stated. | Most inputs are characterized, with some minor omissions. Distinctions between accepted and novel analyses are mostly clear. | Some inputs are characterized, but significant gaps exist. Limited distinction between accepted and novel analyses. | Little to no characterization of knowledge basis. No distinction between accepted and novel analyses. |
| G2. Strength of knowledge adequacy | 3 | Strength of knowledge is thoroughly characterized in terms of its adequacy to support risk management decisions. Limitations are clearly articulated. | Strength of knowledge is mostly characterized, with some minor gaps in relating to decision support adequacy. | Limited characterization of knowledge strength. Unclear how it relates to decision support adequacy. | No characterization of knowledge strength or its adequacy for decision support. |
| G3. Communication of knowledge limitations | 4 | All knowledge limitations and their implications for risk management are clearly communicated to decision makers in understandable language. | Most knowledge limitations and implications are communicated, with minor clarity issues. | Some knowledge limitations are communicated, but significant gaps exist in clarity or completeness. | Knowledge limitations are not communicated or are presented in a way decision makers cannot understand. |
| G4. Consideration of surprises and unforeseen events | 3 | Thorough consideration of potential surprises and unforeseen events (Black Swans). Their importance is clearly articulated. | Consideration of surprises and unforeseen events is present, with some minor gaps in articulating their importance. | Limited consideration of surprises and unforeseen events. Their importance is not clearly articulated. | No consideration of surprises or unforeseen events. |
| G5. Conflicting expert opinions | 2 | All conflicting expert opinions are systematically considered and reported to decision makers as a source of uncertainty. | Most conflicting expert opinions are considered and reported, with minor omissions. | Some conflicting expert opinions are considered, but significant gaps exist in reporting or consideration. | Conflicting expert opinions are not considered or reported. |
| G6. Consideration of unconsidered knowledge | 2 | Explicit measures are implemented to check for knowledge outside the analysis group (e.g., independent review). | Some measures are in place to check for outside knowledge, but they may not be comprehensive. | Limited consideration of knowledge outside the analysis group. No formal measures in place. | No consideration of knowledge outside the analysis group. |
| G7. Consideration of disregarded low-probability events | 1 | Explicit measures are implemented to check for events disregarded due to low probabilities based on critical assumptions. | Some consideration of low-probability events, but measures may not be comprehensive. | Limited consideration of low-probability events. No formal measures in place. | No consideration of events disregarded due to low probabilities. |

This rubric, once complete, is a tool to guide assessment and provide feedback. It should be flexible enough to accommodate the unique aspects of individual work while maintaining consistent standards across evaluations. I’d embed it in the quality approval step.

Share your stories

As we move through our careers we all have endless incidents that can either be denied and suppressed or acknowledged and framed as “falls,” “failures,” or “mistakes.” These so-called falls all enhance our professional growth. By focusing on the process of falling, and then rising back up, we gain a greater understanding of the choices we have made and their consequences.

Sharing and bearing witness to stories of failure from our professional and personal lives provides opportunities for us to explore and get closer to the underlying meaning of our work, to the question of what it is we are trying to accomplish as quality professionals. Our missteps allow us to identify paths we needed to take, or let new stories and new pathways emerge within the context of our work. As we share stories of tensions, struggles, and falling down, we realize how important these experiences are in the process of learning, of crafting one’s presence as a human being among human beings, of becoming a quality professional.

We may not have asked for a journey of struggle when we decided to become quality professionals, but the process of becoming tacitly involves struggle and difficulty. There is a clear pattern among individuals who demonstrate the ability to rise strong from pain and adversity: they are able to describe their experiences and make meaning of them.

It is important to recognize that simply acknowledging and affirming struggle, or that something is not going as it should, does not necessarily lead to productive change. To make a change and to work toward a culture of excellence we must recognize that emotions and feelings are in the game. Learning to lead is an emotionally laden process, and early-stage professionals feel exceptionally vulnerable within it. This field requires early-stage professionals to hone their interpersonal, technical, and organizational skills, all while turning their gaze inward to understand how their position in the organization can be utilized for change. Novice professionals often struggle with communicating ideas orally or in writing, managing multiple tasks at once, staying on top of their technical content, or even thinking critically about who they are in the broader world. Early-stage professionals are always on the brink of vulnerability.

Share your stories. Help others share theirs.

I’m organizing a PechaKucha/Ignite event as part of the ASQ’s Team and Workplace Excellence Forum to sharpen our stories. More details coming soon. Start thinking of your stories to share!


Lessons Learned as a Developing Leader

My six years at Sanofi were really the transition from manager to leader. It wasn’t always easy, but this is where I started to truly apply self-awareness to my work and expanded my perspective to move beyond the day-to-day and focus on the strategic needs of building a quality organization.

I came into the organization focused on the immediate needs of building serious change management and change control. This was a site under a consent decree, and I felt pressure to deliver results fast.

Over time, as the consent decree moved into its later stages, I shifted focus away from the day-to-day and toward implementing continuous improvements and driving a vision of what quality and excellence really could be.

I made mistakes. I had successes. I’m leaving quite proud of what I’ve done and the relationships I’ve built. Relationships I am confident will continue.

I often joke with folks that I started this blog as a public form of journaling. That remains true, and will continue in the future. As I move into my next position, here are my key things to remember:

1. Focus on outcomes, not deliverables, with the long-term goal of building a quality culture through innovative digital solutions, and thus helping shape not only my organization but others beyond it.
2. Don’t just instruct, but inspire. Strive to inspire, to motivate, and to communicate the overall quality philosophy at every opportunity. If my coworkers are truly inspired by and proud of the ideals and values that I help communicate, then they will drive even more improvements.
3. Communicate Big Quality Ideas. In addition to setting a digital agenda, utilize the platform to create wider strategies for quality and define the tone for quality culture by crafting effective, clear, transparent, and consistent messaging that inspires the best.
4. Slow down. Be humble. Understand that I do not need to prove myself as the smartest person in every room. Encourage people to speak up, respect differences of opinion, and champion the best ideas. Breathe.

Finally, remember the relationships I have and lean into them.

Not sure if these two posts looking forward and back are useful to anyone else, but they certainly position me for starting my new position on Monday.

Lessons Learned and Change Management

One of the hallmarks of a quality culture is learning from our past experiences, to eliminate repeat mistakes and to reproduce success. The more times you do an activity, the more you learn, and the better you get (within limits, for simple activities). Knowledge management is an enabler of quality systems, in part, because it focuses on learning and thus accelerates learning across the organization as a whole, not just one person or a team.

This is where the “lessons learned” process comes in. There are a lot of definitions of lessons learned out there, but the one I keep returning to is that a lesson learned is a change in personal or organizational behavior as a result of learning from experience. Ideally this is a permanent, institutionalized change, and this is where our quality systems can really drive continuous improvement.

[Diagram: lessons learned runs from activity, to lessons identified, to updated processes.]

Part of Knowledge Management

The lessons learned process is an application of knowledge management. “Lessons identified” covers generating, assessing, and sharing the learning; “updated processes” (and documents) covers contextualizing, applying, and updating it.

[Diagram: lessons learned in the context of knowledge management.]

Identify Lessons Learned

Identifying lessons needs to be done regularly, and the closer to the actual change management and control activities, the better. The formality of this exercise depends on the scale of the change. There are basically a few major forms:

• After action reviews: Held daily (or on another regular cycle) for high-intensity learning. Tends to be very focused on the questions of the day.
• Retrospectives: Held at specific periods (for example, project gates or change control status changes). Tends to have a specific focus on a single project.
• Consistency discussions: Held periodically among a community of practice, such as quality reviewers or multiple site process owners. This form looks holistically at all changes over a period of time (weekly, monthly, quarterly). Very effective when linked to a set of leading and lagging indicators.
• Incidents and events: Deviations happen. Make sure you learn the lessons and implement solutions.

The chosen formality should be based on the level of change. A healthy organization will utilize all of these.

| Level of Change | Form of Lesson Learned |
| --- | --- |
| Transactional | Consistency discussion; after action (when things go wrong) |
| Organizational | Retrospective; after action (weekly, daily as needed) |
| Transformational | Retrospective; after action (daily) |
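If this mapping lived inside a digital change-control workflow, selecting the form could be as simple as a lookup. A hypothetical sketch (the names are mine, not from any specific eQMS):

```python
# Hypothetical mapping from level of change to lessons-learned forms,
# mirroring the table above; names are illustrative only.
FORMS_BY_CHANGE_LEVEL = {
    "transactional": ["consistency discussion", "after action (when things go wrong)"],
    "organizational": ["retrospective", "after action (weekly, daily as needed)"],
    "transformational": ["retrospective", "after action (daily)"],
}

def lessons_learned_forms(change_level: str) -> list[str]:
    """Return the lessons-learned forms appropriate for a level of change."""
    forms = FORMS_BY_CHANGE_LEVEL.get(change_level.lower())
    if forms is None:
        raise ValueError(f"Unknown level of change: {change_level!r}")
    return forms

print(lessons_learned_forms("Organizational"))
# ['retrospective', 'after action (weekly, daily as needed)']
```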

Successful lessons learned:

• Are based on solid performance data: facts and the analysis of facts.
• Look at positive and negative experiences.
• Refer back to the change management process, the objectives of the change, and other success criteria.
• Separate experience from opinion as much as possible. A lesson arises from actual experience and is an objective reflection on the results.
• Generate distinct lessons from which others can learn and take action. A good action avoids generalities.

In practice there are a lot of similarities between the techniques for facilitating a good lessons learned session and a root cause analysis. Start with a good core of questions, beginning with the what:

• What were some of the key issues?
• What were the success factors?
• What worked well?
• What did not work well?
• What were the challenges and pitfalls?
• What would you approach differently if you ever did this again?

From these what questions, we can continue to narrow in on the learnings by asking why and how questions. Ask open questions, and utilize all the techniques of root cause analysis here.

Then, once you are at (or close to) a defined issue for the learning (a root cause), ask a future-tense question to make it actionable, such as:

• What would your advice be for someone doing this in the future?
• What would you do next time?

Press for specifics. If it is not actionable, it is not really a learning.

Update the Process

Learning implies memory, and an organization’s memories usually require procedures, job aids, and other tools to be updated and created. In short, lessons should evolve your process. This is often the responsibility of the change management process owner. You need to make sure the lesson actually takes hold.

Differences Between Effectiveness Reviews and Lessons Learned

There are three things to answer in every change:

1. Was the change effective? Did it meet the intended purposes?
2. Did the change have any unexpected effects?
3. What can we learn from this change for the next change?

Effectiveness reviews answer 1 and 2 (following a risk-based approach), while lessons learned answers 3. Lessons learned contributes to the health of the system and drives continuous improvement in how we make changes.
