Assessing the Quality of Our Risk Management Activities

Twenty years on, risk management in the pharmaceutical world continues to be challenging. Risk assessments should be systematic, structured, and based on scientific knowledge, yet a large part of the ICH Q9(R1) revision was written to address our continued struggles with subjectivity, formality, and decision-making. Quite frankly, in the two years since that revision we, as an industry, are still working to absorb those messages.

A big challenge is that we struggle to measure the effectiveness of our risk assessments. This is a great place for a rubric.

Luckily, we have a good tool out there to adopt: the Risk Analysis Quality Test (RAQT 1.0), developed by the Society for Risk Analysis (SRA). This comprehensive framework is designed to evaluate and improve the quality of risk assessments. We can apply this tool to meet the requirements of the International Council for Harmonisation (ICH) Q9, which outlines quality risk management principles for the pharmaceutical industry. From that, we can drive continued improvement in our risk management activities.

Components of RAQT1.0

The Risk Analysis Quality Test consists of 76 questions organized into 15 categories:

  • Framing the Analysis and Its Interface with Decision Making
  • Capturing the Risk Generating Process (RGP)
  • Communication
  • Stakeholder Involvement
  • Assumptions and Scope Boundary Issues
  • Proactive Creation of Alternative Courses of Action
  • Basis of Knowledge
  • Data Limitations
  • Analysis Limitations
  • Uncertainty
  • Consideration of Alternative Analysis Approaches
  • Robustness and Resilience of Action Strategies
  • Model and Analysis Validation and Documentation
  • Reporting
  • Budget and Schedule Adequacy

Application to ICH Q9 Requirements

ICH Q9 emphasizes the importance of a systematic and structured risk assessment process. The RAQT can be used to ensure that risk assessments are thorough and meet quality standards. For example, Category G (Basis of Knowledge) and Category H (Data Limitations) help in evaluating the scientific basis and data quality of the risk assessment, aligning with ICH Q9’s requirement for using available knowledge and data.

The RAQT’s Category B (Capturing the Risk Generating Process) and Category C (Communication) can help in identifying and communicating risks effectively. This aligns with ICH Q9’s requirement to identify potential risks based on scientific knowledge and understanding of the process.

Categories such as Category I (Analysis Limitations) and Category J (Uncertainty) in the RAQT help in analyzing the risks and addressing uncertainties, which is a key aspect of ICH Q9. These categories ensure that the analysis is robust and considers all relevant factors.

The RAQT’s Category A (Framing the Analysis and Its Interface with Decision Making) and Category F (Proactive Creation of Alternative Courses of Action) are crucial for evaluating risks and developing mitigation strategies. This aligns with ICH Q9’s requirement to evaluate risks and determine the need for risk reduction.

Categories like Category L (Robustness and Resilience of Action Strategies) and Category M (Model and Analysis Validation and Documentation) in the RAQT help in ensuring that the risk control measures are robust and well-documented. This is consistent with ICH Q9’s emphasis on implementing and reviewing controls.

Category D (Stakeholder Involvement) of the RAQT ensures that stakeholders are engaged in the risk management process, which is a requirement under ICH Q9 for effective communication and collaboration.

The RAQT can be applied both retrospectively and prospectively, allowing for the evaluation of past risk assessments and the planning of future ones. This aligns with ICH Q9’s requirement for periodic review and continuous improvement of the risk management process.

Creating a Rubric

To make this actionable we need a tool, a rubric, that allows folks to evaluate what good looks like. I would insert this tool into the quality oversight of risk management.

Category A: Framing the Analysis and Its Interface With Decision Making

| Criteria | Excellent (4) | Good (3) | Fair (2) | Poor (1) |
|---|---|---|---|---|
| Problem Definition | Clearly and comprehensively defines the problem, including all relevant aspects and stakeholders | Adequately defines the problem with most relevant aspects considered | Partially defines the problem with some key aspects missing | Poorly defines the problem or misses critical aspects |
| Analytical Approach | Selects and justifies an optimal analytical approach, demonstrating deep understanding of methodologies | Chooses an appropriate analytical approach with reasonable justification | Selects a somewhat relevant approach with limited justification | Chooses an inappropriate approach or provides no justification |
| Data Collection and Management | Thoroughly identifies all necessary data sources and outlines a comprehensive data management plan | Identifies most relevant data sources and provides an adequate data management plan | Identifies some relevant data sources and offers a basic data management plan | Fails to identify key data sources or lacks a coherent data management plan |
| Stakeholder Identification | Comprehensively identifies all relevant stakeholders and their interests | Identifies most key stakeholders and their primary interests | Identifies some stakeholders but misses important ones or their interests | Fails to identify major stakeholders or their interests |
| Decision-Making Context | Provides a thorough analysis of the decision-making context, including constraints and opportunities | Adequately describes the decision-making context with most key factors considered | Partially describes the decision-making context, missing some important factors | Poorly describes or misunderstands the decision-making context |
| Alignment with Organizational Goals | Demonstrates perfect alignment between the analysis and broader organizational objectives | Shows good alignment with organizational goals, with minor gaps | Partially aligns with organizational goals, with significant gaps | Fails to align with or contradicts organizational goals |
| Communication Strategy | Develops a comprehensive strategy for communicating results to all relevant decision-makers | Outlines a good communication strategy covering most key decision-makers | Provides a basic communication plan with some gaps | Lacks a clear strategy for communicating results to decision-makers |

This rubric provides a framework for assessing the quality of work in framing an analysis and its interface with decision-making. It covers key aspects such as problem definition, analytical approach, data management, stakeholder consideration, decision-making context, alignment with organizational goals, and communication strategy. Each criterion is evaluated on a scale from 1 (Poor) to 4 (Excellent), allowing for nuanced assessment of performance in each area.

To use this rubric effectively:

  1. Adjust the criteria and descriptions as needed to fit your specific context or requirements.
  2. Ensure that the expectations for each level (Excellent, Good, Fair, Poor) are clear and distinguishable.

My next steps will be to add specific examples or indicators for each level to provide more guidance to both assessors and those being assessed.

I also may, depending on internal needs, want to assign different weights to each criterion based on its relative importance in our specific context. In this case I think each ends up weighted about the same.

I would then go and add the other sections. For example, here is category B with some possible weighting.

Category B: Capturing the Risk Generating Process (RGP)

| Component | Weight Factor | Excellent | Satisfactory | Needs Improvement | Poor |
|---|---|---|---|---|---|
| B1. Comprehensiveness | 4 | The analysis includes: i) a structured taxonomy of hazards/events demonstrating comprehensiveness; ii) each scenario spelled out with causes and types of change; iii) explicit addressing of potential “Black Swan” events; iv) clear description of implications of such events for risk management | The analysis includes 3 out of 4 elements from the Excellent criteria, with minor gaps that do not significantly impact understanding | The analysis includes only 2 out of 4 elements from the Excellent criteria, or has significant gaps in comprehensiveness | The analysis includes 1 or fewer elements from the Excellent criteria, severely lacking in comprehensiveness |
| B2. Basic Structure of RGP | 2 | Clearly identifies and accounts for the basic structure of the RGP (e.g. linear, chaotic, complex adaptive) AND uses appropriate mathematical structures (e.g. linear, quadratic, exponential) that match the RGP structure | Identifies the basic structure of the RGP BUT does not fully align mathematical structures with the RGP | Attempts to identify the RGP structure but does so incorrectly or incompletely, OR uses mathematical structures that do not align with the RGP | Does not identify or account for the basic structure of the RGP |
| B3. Complexity of RGP | 3 | Lists all important causal and associative links in the RGP AND demonstrates how each link is accounted for in the analysis | Lists most important causal and associative links in the RGP AND demonstrates how most links are accounted for in the analysis | Lists some causal and associative links but misses key elements, OR does not adequately demonstrate how links are accounted for in the analysis | Does not list causal and associative links or account for them in the analysis |
| B4. Early Warning Detection | 3 | Includes a clear process for detecting early warnings of potential surprising risk aspects, beyond just concrete events | Includes a process for detecting early warnings, but it may be limited in scope or not fully developed | Mentions the need for early warning detection but does not provide a clear process | Does not address early warning detection |
| B5. System Changes | 2 | Fully considers the possibility of system changes AND establishes adequate mechanisms to detect those changes | Considers the possibility of system changes BUT mechanisms to detect changes are not fully developed | Mentions the possibility of system changes but does not adequately consider or establish detection mechanisms | Does not consider or address the possibility of system changes |

    I definitely need to go back and add more around structure requirements. The SRA RAQT tool needs some more interpretation here.
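One piece of that interpretation is how the weight factors roll up into a category score. Here is a minimal sketch, in Python, of one way to do it; the weighted-average aggregation, the band cut-offs, and the example ratings are my own assumptions, not part of the SRA RAQT or ICH Q9.

```python
# Minimal sketch: rolling up Category B ratings using the weight factors above.
# The weighted-average aggregation, band cut-offs, and example ratings are
# illustrative assumptions, not part of the SRA RAQT or ICH Q9.

CATEGORY_B_WEIGHTS = {
    "B1. Comprehensiveness": 4,
    "B2. Basic Structure of RGP": 2,
    "B3. Complexity of RGP": 3,
    "B4. Early Warning Detection": 3,
    "B5. System Changes": 2,
}

# Assumed cut-offs for translating a weighted score back to a qualitative band
BANDS = [(3.5, "Excellent"), (2.5, "Satisfactory"), (1.5, "Needs Improvement"), (0.0, "Poor")]


def weighted_score(ratings: dict[str, int], weights: dict[str, int]) -> float:
    """Weighted average of 1-4 ratings (4 = Excellent ... 1 = Poor)."""
    if set(ratings) != set(weights):
        raise ValueError("Every rubric component needs a rating")
    return sum(weights[c] * ratings[c] for c in weights) / sum(weights.values())


def band(score: float) -> str:
    """Map a weighted score to its qualitative band."""
    return next(label for cutoff, label in BANDS if score >= cutoff)


# Hypothetical review of a single risk assessment
ratings = {
    "B1. Comprehensiveness": 3,
    "B2. Basic Structure of RGP": 2,
    "B3. Complexity of RGP": 3,
    "B4. Early Warning Detection": 4,
    "B5. System Changes": 2,
}
score = weighted_score(ratings, CATEGORY_B_WEIGHTS)
print(f"Category B: {score:.2f} / 4 ({band(score)})")
```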

    Category C: Risk Communication

| Component | Weight Factor | Excellent | Satisfactory | Needs Improvement | Poor |
|---|---|---|---|---|---|
| C1. Integration of Communication into Risk Analysis | 3 | Communication is fully integrated into the risk analysis following established norms. All aspects of the methodology are clearly addressed, including context establishment, risk assessment (identification, analysis, evaluation), and risk treatment. There is clear evidence of pre-assessment, management, appraisal, characterization, and evaluation. Knowledge about the risk is thoroughly categorized. | Communication is integrated into the risk analysis following most aspects of established norms. Most key elements of methodologies like ISO 31000 or IRGC are addressed, but some minor aspects may be missing or unclear. Knowledge about the risk is categorized, but may lack some detail. | Communication is partially integrated into the risk analysis, but significant aspects of established norms are missing. Only some elements of methodologies like ISO 31000 or IRGC are addressed. Knowledge categorization about the risk is incomplete or unclear. | There is little to no evidence of communication being integrated into the risk analysis following established norms. Methodologies like ISO 31000 or IRGC are not followed. Knowledge about the risk is not categorized. |
| C2. Adequacy of Risk Communication | 3 | All considerations for effective risk communication have been applied to ensure adequacy between analysts and decision makers, analysts and other stakeholders, and decision makers and stakeholders. There is clear evidence that all parties agree the communication is adequate. | Most considerations for effective risk communication have been applied. Communication appears adequate between most parties, but there may be minor gaps or areas where agreement on adequacy is not explicitly stated. | Some considerations for effective risk communication have been applied, but there are significant gaps. Communication adequacy is questionable between one or more sets of parties. There is limited evidence of agreement on communication adequacy. | Few to no considerations for effective risk communication have been applied. There is no evidence of adequate communication between analysts, decision makers, and stakeholders. There is no indication of agreement on communication adequacy. |

    Category D: Stakeholder Involvement

| Criteria | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1) |
|---|---|---|---|---|---|
| Stakeholder Identification | 4 | All relevant stakeholders are systematically and comprehensively identified | Most relevant stakeholders are identified, with minor omissions | Some relevant stakeholders are identified, but significant groups are missed | Few or no relevant stakeholders are identified |
| Stakeholder Consultation | 3 | All identified stakeholders are thoroughly consulted, with their perceptions and concerns fully considered | Most identified stakeholders are consulted, with their main concerns considered | Some stakeholders are consulted, but consultation is limited in scope or depth | Few or no stakeholders are consulted |
| Stakeholder Engagement | 3 | Stakeholders are actively engaged throughout the entire risk management process, including problem framing, decision-making, and implementation | Stakeholders are engaged in most key stages of the risk management process | Stakeholders are engaged in some aspects of the risk management process, but engagement is inconsistent | Stakeholders are minimally engaged or not engaged at all in the risk management process |
| Effectiveness of Involvement | 2 | All stakeholders would agree that they were effectively consulted and engaged | Most stakeholders would agree that they were adequately consulted and engaged | Some stakeholders may feel their involvement was insufficient or ineffective | Most stakeholders would likely feel their involvement was inadequate or ineffective |

    Category E: Assumptions and Scope Boundary Issues

| Criterion | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1) |
|---|---|---|---|---|---|
| E1. Important assumptions and implications listed | 4 | All important assumptions and their implications for risk management are systematically listed in clear language understandable to decision makers. Comprehensive and well-organized. | Most important assumptions and implications are listed in language generally clear to decision makers. Some minor omissions or lack of clarity. | Some important assumptions and implications are listed, but significant gaps exist. Language is not always clear to decision makers. | Few or no important assumptions and implications are listed. Language is unclear or incomprehensible to decision makers. |
| E2. Risks of assumption deviations evaluated | 3 | Risks of all significant assumptions deviating from the actual Risk Generating Process are thoroughly evaluated. Consequences and implications are clearly communicated to decision makers. | Most risks of significant assumption deviations are evaluated. Consequences and implications are generally communicated to decision makers, with minor gaps. | Some risks of assumption deviations are evaluated, but significant gaps exist. Communication to decision makers is incomplete or unclear. | Few or no risks of assumption deviations are evaluated. Little to no communication of consequences and implications to decision makers. |
| E3. Scope boundary issues and implications listed | 3 | All important scope boundary issues and their implications for risk management are systematically listed in clear language understandable to decision makers. Comprehensive and well-organized. | Most important scope boundary issues and implications are listed in language generally clear to decision makers. Some minor omissions or lack of clarity. | Some important scope boundary issues and implications are listed, but significant gaps exist. Language is not always clear to decision makers. | Few or no important scope boundary issues and implications are listed. Language is unclear or incomprehensible to decision makers. |

    Category F: Proactive Creation of Alternative Courses of Action

| Criteria | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1) |
|---|---|---|---|---|---|
| Systematic generation of alternatives | 4 | A comprehensive and structured process is used to systematically generate a wide range of alternative courses of action, going well beyond initially considered options | A deliberate process is used to generate multiple alternative courses of action beyond those initially considered | Some effort is made to generate alternatives, but the process is not systematic or comprehensive | Little to no effort is made to generate alternatives beyond those initially considered |
| Goal-focused creation | 3 | All generated alternatives are clearly aligned with and directly address the stated goals of the analysis | Most generated alternatives align with the stated goals of the analysis | Some generated alternatives align with the goals, but others seem tangential or unrelated | Generated alternatives (if any) do not align with or address the stated goals |
| Consideration of robust/resilient options | 3 | Multiple robust and resilient alternatives are developed to address various uncertainty scenarios | At least one robust or resilient alternative is developed to address uncertainty | Robustness and resilience are considered, but not fully incorporated into alternatives | Robustness and resilience are not considered in alternative generation |
| Examination of unintended consequences | 2 | Thorough examination of potential unintended consequences for each alternative, including action-reaction spirals | Some examination of potential unintended consequences for most alternatives | Limited examination of unintended consequences for some alternatives | No consideration of potential unintended consequences |
| Documentation of alternative creation process | 1 | The process of alternative generation is fully documented, including rationale for each alternative | The process of alternative generation is mostly documented | The process of alternative generation is partially documented | The process of alternative generation is not documented |

    Category G: Basis of Knowledge

| Criterion | Weight | Excellent (4) | Satisfactory (3) | Needs Improvement (2) | Poor (1) |
|---|---|---|---|---|---|
| G1. Characterization of knowledge basis | 4 | All inputs are clearly characterized (empirical, expert elicitation, testing, modeling, etc.). Distinctions between broadly accepted and novel analyses are explicitly stated. | Most inputs are characterized, with some minor omissions. Distinctions between accepted and novel analyses are mostly clear. | Some inputs are characterized, but significant gaps exist. Limited distinction between accepted and novel analyses. | Little to no characterization of knowledge basis. No distinction between accepted and novel analyses. |
| G2. Strength of knowledge adequacy | 3 | Strength of knowledge is thoroughly characterized in terms of its adequacy to support risk management decisions. Limitations are clearly articulated. | Strength of knowledge is mostly characterized, with some minor gaps in relating to decision support adequacy. | Limited characterization of knowledge strength. Unclear how it relates to decision support adequacy. | No characterization of knowledge strength or its adequacy for decision support. |
| G3. Communication of knowledge limitations | 4 | All knowledge limitations and their implications for risk management are clearly communicated to decision makers in understandable language. | Most knowledge limitations and implications are communicated, with minor clarity issues. | Some knowledge limitations are communicated, but significant gaps exist in clarity or completeness. | Knowledge limitations are not communicated or are presented in a way decision makers cannot understand. |
| G4. Consideration of surprises and unforeseen events | 3 | Thorough consideration of potential surprises and unforeseen events (Black Swans). Their importance is clearly articulated. | Consideration of surprises and unforeseen events is present, with some minor gaps in articulating their importance. | Limited consideration of surprises and unforeseen events. Their importance is not clearly articulated. | No consideration of surprises or unforeseen events. |
| G5. Conflicting expert opinions | 2 | All conflicting expert opinions are systematically considered and reported to decision makers as a source of uncertainty. | Most conflicting expert opinions are considered and reported, with minor omissions. | Some conflicting expert opinions are considered, but significant gaps exist in reporting or consideration. | Conflicting expert opinions are not considered or reported. |
| G6. Consideration of unconsidered knowledge | 2 | Explicit measures are implemented to check for knowledge outside the analysis group (e.g., independent review). | Some measures are in place to check for outside knowledge, but they may not be comprehensive. | Limited consideration of knowledge outside the analysis group. No formal measures in place. | No consideration of knowledge outside the analysis group. |
| G7. Consideration of disregarded low-probability events | 1 | Explicit measures are implemented to check for events disregarded due to low probabilities based on critical assumptions. | Some consideration of low-probability events, but measures may not be comprehensive. | Limited consideration of low-probability events. No formal measures in place. | No consideration of events disregarded due to low probabilities. |

    This rubric, once done, is a tool to guide assessment and provide feedback. It should be flexible enough to accommodate unique aspects of individual work while maintaining consistent standards across evaluations. I’d embed it in the quality approval step.

    Metrics Scoring

    As I develop metrics for FUSE, it is important to have a method for rating a metric's effectiveness. Here's the rubric I'll be using.

| Rating | Relevance | Measurability | Precision | Actionability | Presence of Baseline |
|---|---|---|---|---|---|
| Guiding question | How strongly does this metric connect to business objectives? | How much effort would it take to track this metric? | How often and by what margin does the metric change? | Can we clearly articulate actions we would take in response to this metric? | Does internal or external baseline data exist to indicate good/poor performance for this metric? |
| 5 | Empirically Direct – Data proves the metric directly supports at least one business objective | Almost None – Data already collected and visualized in a centralized system | Highly Predictable – Metric fluctuates narrowly and infrequently | Clear consensus on action, and capability currently exists to take action | Baseline can be based on both internal and external data |
| 4 | Logically Direct – Clear logic shows how the metric directly supports at least one business objective | Low – Data collected and measured consistently, but not aggregated in a central system | Somewhat Predictable – Metric fluctuates either narrowly or infrequently | Some consensus on action, and capability currently exists to take action | Baseline can be based on either internal or external data |
| 3 | Empirically Indirect – Data proves the metric indirectly supports at least one business objective | Medium – Data exists but in local systems; minor collection or measurement challenges may exist | Neither Volatile nor Predictable | Some consensus on action, and capability to take action expected in the future | Baseline must be based on incomplete or directional data |
| 2 | Logically Indirect – Clear logic shows how the metric indirectly supports at least one business objective | High – Inconsistent measurements across sites; data not being collected regularly | Somewhat Volatile – Metric fluctuates either widely or frequently | Some consensus on action, but no current or expected future capability to take action | No data exists to establish baseline, but data can be generated within six months |
| 1 | Unclear – Connection to business objective is unclear | Potentially Prohibitive – No defined measurement or collection method in place | Highly Volatile – Metric fluctuates widely and frequently | No consensus on action | No data exists to establish baseline, and data needed will take more than a year to generate |
| Weights | 25% | 20% | 20% | 25% | 10% |
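To make the scoring mechanics concrete, here is a minimal sketch of applying the rubric to a few candidate metrics. The weights come from the table above; the candidate metrics and their 1-5 ratings are hypothetical examples, not actual FUSE metrics.

```python
# Minimal sketch: applying the metric-effectiveness rubric above.
# Weights come from the table; candidate metrics and ratings are hypothetical.

WEIGHTS = {
    "relevance": 0.25,
    "measurability": 0.20,
    "precision": 0.20,
    "actionability": 0.25,
    "baseline": 0.10,
}


def effectiveness(ratings: dict[str, int]) -> float:
    """Weighted 1-5 effectiveness score for a candidate metric."""
    if set(ratings) != set(WEIGHTS):
        raise ValueError(f"Rate every dimension: {sorted(WEIGHTS)}")
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)


candidates = {
    "% recurring issues": {"relevance": 5, "measurability": 3, "precision": 4,
                           "actionability": 4, "baseline": 2},
    "CAPA cycle time": {"relevance": 3, "measurability": 4, "precision": 3,
                        "actionability": 4, "baseline": 4},
    "Number of CAPAs opened": {"relevance": 2, "measurability": 5, "precision": 2,
                               "actionability": 1, "baseline": 4},
}

# Rank candidate metrics by weighted effectiveness, highest first
for name, ratings in sorted(candidates.items(), key=lambda kv: -effectiveness(kv[1])):
    print(f"{name}: {effectiveness(ratings):.2f} / 5")
```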

    Measuring Training Effectiveness for Organizational Performance

    When designing training we want to make sure four things happen:

    • Training is used correctly as a solution to a performance problem
    • Training has the right content, objectives, and methods
    • Trainees are sent to training for which they do have the basic skills, prerequisite skills, or confidence needed to learn
    • Training delivers the expected learning

    Training is a useful lever in organizational change and improvement. We want to make sure the training drives organizational metrics. And like everything else, you need to be able to measure it in order to improve it.

    The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training), and while other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, the model has been in use for over 50 years, refined through application by learning and development professionals around the world, and it remains the most recognized method of evaluating the effectiveness of training programs. It has stood the test of time because it breaks a complex subject into manageable levels and accommodates any style of training, formal or informal.

    Level 1: Reaction

    Kirkpatrick’s first level measures the learners’ reaction to the training. A level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed the time spent and found it valuable. Level 1 evaluations, colloquially called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation will concentrate on three elements: course content, the physical environment, and the instructor’s presentation/skills.

    Level 2: Learning

    Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is through the use of a pre- and posttest. Pre- and posttests are key to ascertaining whether the participants learned anything in the learning event. Identical pre- and posttests are essential because the difference between the pre- and posttest scores indicates the amount of learning that took place. Without a pretest, one does not know if the trainees knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.

    Level 3: Behavior

    Level 3 measures whether the learning is transferred into practice in the workplace.

    Level 4: Results

    Level 4 measures the effect of the training on the business environment: did we meet our objectives?

| Evaluation Level | Characteristics | Examples |
|---|---|---|
| Level 1: Reaction | Reaction evaluation is how the delegates felt, and their personal reactions to the training or learning experience, for example: Did the trainees consider the training relevant? Did they like the venue, equipment, timing, domestics, etc.? Did the trainees like and enjoy the training? Was it a good use of their time? Level of participation; ease and comfort of experience. | Feedback forms based on subjective personal reaction to the training experience; verbal reaction which can be analyzed; post-training surveys or questionnaires; online evaluation or grading by delegates; subsequent verbal or written reports given by delegates to managers back at their jobs; typically “happy sheets.” |
| Level 2: Learning | Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience: Did the trainees learn what was intended to be taught? Did the trainees experience what was intended for them to experience? What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended? | Interviews or observation can be used before and after, although this is time-consuming and can be inconsistent; typically assessments or tests before and after the training; methods of assessment need to be closely related to the aims of the learning; reliable, clear scoring and measurements need to be established; hard-copy, electronic, online, or interview-style assessments are all possible. |
| Level 3: Behavior | Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior, immediately or several months after the training, depending on the situation: Did the trainees put their learning into effect when back on the job? Were the relevant skills and knowledge used? Was there noticeable and measurable change in the activity and performance of the trainees when back in their roles? Would the trainees be able to transfer their learning to another person? Are the trainees aware of their change in behavior, knowledge, or skill level? Was the change in behavior and new level of knowledge sustained? | Observation and interviews over time are required to assess change, relevance of change, and sustainability of change; assessments need to be designed to reduce the subjective judgment of the observer; 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgment as to change after training, and this can be analyzed for groups of respondents and trainees; online and electronic assessments are more difficult to incorporate; assessments tend to be more successful when integrated within existing management and coaching protocols. |
| Level 4: Results | Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee; it is the acid test. Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, retention, etc. | The challenge is to identify which results relate to the trainee’s input and influence, and how. Therefore it is important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured; this process overlays normal good management practice and simply needs linking to the training input; for senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training. |

4 Levels of Training Effectiveness

    Example in Practice – CAPA

    When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA program, we have two key aims, which we can apply measures against.

| Behavior | Measure |
|---|---|
| Investigate to find root cause | % recurring issues |
| Implement actions to eliminate root cause | Preventive to corrective action ratio |

    To support each of these top-level measures we define a set of behavior indicators, such as cycle time, right the first time, etc. To support these, a review rubric is implemented.
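As a rough illustration of how the two top-level measures could be computed from CAPA records, here is a minimal sketch. The record fields, the example data, and the definition of a "recurring issue" (a root cause seen in an earlier CAPA) are my own assumptions; a real quality system will define these more rigorously.

```python
# Minimal sketch of the two top-level CAPA measures named above.
# The record fields and example data are hypothetical; real systems will differ.
from dataclasses import dataclass


@dataclass
class CapaRecord:
    root_cause: str
    actions_preventive: int
    actions_corrective: int


def pct_recurring_issues(records: list[CapaRecord]) -> float:
    """Share of CAPAs whose root cause has already been seen before (%)."""
    seen: set[str] = set()
    recurring = 0
    for rec in records:
        if rec.root_cause in seen:
            recurring += 1
        seen.add(rec.root_cause)
    return 100.0 * recurring / len(records) if records else 0.0


def preventive_to_corrective_ratio(records: list[CapaRecord]) -> float:
    """Ratio of preventive to corrective actions across all CAPAs."""
    preventive = sum(r.actions_preventive for r in records)
    corrective = sum(r.actions_corrective for r in records)
    return preventive / corrective if corrective else float("inf")


# Hypothetical CAPA history
capas = [
    CapaRecord("seal degradation", actions_preventive=2, actions_corrective=1),
    CapaRecord("mislabeled reagent", actions_preventive=0, actions_corrective=2),
    CapaRecord("seal degradation", actions_preventive=1, actions_corrective=1),
]
print(f"% recurring issues: {pct_recurring_issues(capas):.0f}%")
print(f"Preventive:corrective ratio: {preventive_to_corrective_ratio(capas):.2f}")
```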

    Our four levels to measure training effectiveness will now look like this:

| Level | Measure |
|---|---|
| Level 1: Reaction | Personal action plan and a happy sheet |
| Level 2: Learning | Completion of rubric on a sample event |
| Level 3: Behavior | Continued performance and improvement against the rubric and the key review behavior indicators |
| Level 4: Results | Improvements in % recurring issues and an increase in preventive to corrective actions |

    This is all about measuring the effectiveness of the transfer of behaviors.

| Strong Signals of Transfer Expectations in the Organization | Signals that Weaken Transfer Expectations in the Organization |
|---|---|
| Training participants are required to attend follow-up sessions and other transfer interventions. What it indicates: individuals and teams are committed to the change and to obtaining the intended benefits. | Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization. What it indicates: the key factor for a trainee is attendance, not behavior change. |
| The training description specifies transfer goals (e.g., “Trainee increases CAPA success by driving down recurrence of root cause”). What it indicates: the organization has a clear vision and expectation of what the training should accomplish. | The training description only roughly outlines training goals (e.g., “Trainee improves their root cause analysis skills”). What it indicates: the organization has only a vague idea of what the training should accomplish. |
| Supervisors take time to support transfer (e.g., through pre- and post-training meetings). Transfer support is part of regular agendas. What it indicates: transfer is considered important in the organization and supported by supervisors and managers, all the way to the top. | Supervisors do not invest in transfer support. Transfer support is not part of the supervisor role. What it indicates: transfer is not considered very important in the organization; managers have more important things to do. |
| Each training ends with careful planning of individual transfer intentions. What it indicates: defining transfer intentions is a central component of the training. | Transfer planning at the end of the training does not take place, or happens only sporadically. What it indicates: defining transfer intentions is not (or not an essential) part of the training. |

    Good training, and thus good and consistent transfer, builds that into the process. It is why I am such a fan of utilizing a rubric to drive consistent performance.

    Team Effectiveness

    With much of the work in organizations accomplished through teams, it is important to determine the factors that lead to effective as well as ineffective team processes and to better specify how, why, and when they contribute. It doesn’t matter if the team is brought together for a specific project and then disbands, or if it is a fairly permanent part of the organization; similar principles are at work.

    Input-Process-Output model

    The input-process-output model of teams is a great place to start. While simplistic, it offers a good model of what makes teams work and is applicable to the different types of teams.

    Input factors are the organizational context, team composition, and task design that influence the team. Process factors are what mediate between the inputs and the desired outputs.

    • Leadership:  The leadership style(s) (participative, facilitative, transformational, directive, etc) of the team leader influences the team toward the achievement of goals.
    • Management support refers to the help or effort provided by senior management to assist the project team, including managerial involvement and resource support.
    • Rewards are the recompense that the organization gives in return for good work.
    • Knowledge/skills are the knowledge, experience and capability of team members to process, interpret, manipulate and use information.
    • Team diversity includes functional diversity as well as overall diversity.
    • Goal clarity is the degree to which the goals of the project are well defined and the importance of the goals to the organization is clearly communicated to all team members.
    • Cooperation is the measure of how well team members work with each other and with other groups.
    • Communication is the exchange of knowledge and information related to tasks within the team (internal) or between team members and external stakeholders (external).
    • Learning activities are the process by which a team takes action, obtains feedback, and makes changes to improve. Under this fits the PDCA lifecycle, including Lean, Six Sigma, and similar problem-solving methodologies.
    • Cohesion is the spirit of togetherness and support for other team members that helps team members quickly resolve conflicts without residual hard feelings, also referred to as team trust, team spirit, team member support or team member involvement.
    • Effort includes the amount of time that team members devote to the project.
    • Commitment refers to the condition where team members are bound emotionally or intellectually to the project and to each other during the team process.

    Process factors are usually the focus of team excellence frameworks, such as those from ASQ or PMI.

    Outputs, or outcomes, are the consequences of the team’s actions or activities:

    • Effectiveness is the extent to which a project achieves the performance expectations of key project stakeholders. Expectations are usually different for different projects and across different stakeholders; thus, various measures have been used to evaluate effectiveness, usually quality, functionality, or reliability. Effectiveness can be meeting customer/user requirements, meeting project goals, or some other related set of measures.
    • Efficiency is the ability of the project team to meet its budget and schedule goals and utilize resources within constraints. Measures include adherence to budget, adherence to schedule, resource utilization within constraints, etc.
    • Innovation is the creative accomplishment of teams in generating new ideas, methods, approaches, inventions, or applications and the degree to which the project outputs were novel.

    Under this model we can find various levers to improve our outcomes and enhance the culture of our teams.
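If we wanted to operationalize this, the input and process factors above could be captured as a simple team assessment checklist. The sketch below is my own illustration; the 1-5 scale and the idea of surfacing the lowest-scoring factors as improvement levers are assumptions, not part of any formal framework.

```python
# Minimal sketch: the input and process factors above as a team assessment checklist.
# The 1-5 scale and the "weakest levers" idea are illustrative assumptions.
from dataclasses import dataclass, fields


@dataclass
class TeamAssessment:
    # Input factors (rated 1-5)
    leadership: int
    management_support: int
    rewards: int
    knowledge_skills: int
    team_diversity: int
    goal_clarity: int
    # Process factors (rated 1-5)
    cooperation: int
    communication: int
    learning_activities: int
    cohesion: int
    effort: int
    commitment: int

    def weakest_levers(self, n: int = 3) -> list[str]:
        """Return the n lowest-scoring factors as candidate improvement levers."""
        scores = [(f.name, getattr(self, f.name)) for f in fields(self)]
        return [name for name, _ in sorted(scores, key=lambda item: item[1])[:n]]


# Hypothetical assessment of one team
team = TeamAssessment(
    leadership=4, management_support=3, rewards=2, knowledge_skills=4,
    team_diversity=3, goal_clarity=2, cooperation=4, communication=3,
    learning_activities=2, cohesion=4, effort=4, commitment=5,
)
print("Focus areas:", team.weakest_levers())
```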

    Lessons Learned and Change Management

    One of the hallmarks of a quality culture is learning from our past experiences, to eliminate repeat mistakes and to reproduce success. The more times you do an activity, the more you learn, and the better you get (within limits for simple activities). Knowledge management is an enabler of quality systems in part because it focuses on learning, and thus accelerates learning across the organization as a whole, not just within one person or a team.

    This is where the “lessons learned” process comes in. There are a lot of definitions of lessons learned out there, but the definition I keep returning to is that a lesson learned is a change in personal or organizational behavior resulting from learning from experience. Ideally, this is a permanent, institutionalized change, and this is often where our quality systems can really drive continuous improvement.

    Lessons learned flow from an activity, to lessons identified, to updated processes.

    Part of Knowledge Management

    The lessons learned process is an application of knowledge management.

    Identifying lessons involves generating, assessing, and sharing them.

    Updating processes (and documents) involves contextualizing, applying, and incorporating that knowledge.

    Lessons Learned in the Context of Knowledge Management

    Identify Lessons Learned

    Identifying lessons needs to be done regularly, the closer to actual change management and control activities the better. The formality of this exercise depends on the scale of the change. There are basically a few major forms:

    • After action reviews: held daily (or other regular cycle) for high intensity learning. Tends to be very focused on questions of the day.
    • Retrospective: Held at specific periods (for example, project gates or change control status changes). Tends to have a specific focus on a single project.
    • Consistency discussions: Held periodically among a community of practice, such as quality reviewers or multiple site process owners. This form looks holistically at all changes over a period of time (weekly, monthly, quarterly). Very effective when linked to a set of leading and lagging indicators.
    • Incident and events: Deviations happen. Make sure you learn the lessons and implement solutions.

    The chosen formality should be based on the level of change. A healthy organization will be utilizing all of these.

| Level of Change | Form of Lesson Learned |
|---|---|
| Transactional | Consistency discussion; after action (when things go wrong) |
| Organizational | Retrospective; after action (weekly, daily as needed) |
| Transformational | Retrospective; after action (daily) |

    Successful lessons learned:

    • Are based on solid performance data: Based on facts and the analysis of facts.
    • Look at positive and negative experiences.
    • Refer back to the change management process, objectives of the change, and other success criteria
    • Separate experience from opinion as much as possible. A lesson arises from actual experience and is an objective reflection on the results.
    • Generate distinct lessons from which others can learn and take action. A good action avoids generalities.

    In practice there are a lot of similarities between the techniques to facilitate a good lessons learned and a root cause analysis. Start with a good core of questions, starting with the what:

    • What were some of the key issues?
    • What were the success factors?
    • What worked well?
    • What did not work well?
    • What were the challenges and pitfalls?
    • What would you approach differently if you ever did this again?

    From these what questions, we can continue to narrow in on the learnings by asking why and how questions. Ask open questions, and utilize all the techniques of root cause analysis here.

    Then once you are at (or close) to a defined issue for the learning (a root cause), ask a future-tense question to make it actionable, such as:

    • What would your advice be for someone doing this in the future?
    • What would you do next time?

    Press for specifics. If it is not actionable, it is not really a learning.

    Update the Process

    Learning implies memory, and an organization’s memories usually require procedures, job aids and other tools to be updated and created. In short, lessons should evolve your process. This is often the responsibility of the change management process owner. You need to make sure the lesson actually takes hold.

    Differences between effectiveness reviews and lessons learned

    There are three things to answer in every change:

    1. Was the change effective: did it meet the intended purposes?
    2. Did the change have any unexpected effects?
    3. What can we learn from this change for the next change?

    Effectiveness reviews answer 1 and 2 (based on a risk-based approach), while lessons learned answers 3. Lessons learned contributes to the health of the system and drives continuous improvement in how we make changes.
