Build Key Risk Indicators

We perform risk assessments and execute risk mitigations, and we end up with four types of treated risks (the terms in parentheses apply to opportunities) in our risk register:

  1. Mitigated (or enhanced)
  2. Avoided (or exploited)
  3. Transferred (or shared)
  4. Accepted

We’ve built a set of risk response plans to ensure we continue to treat these risks. Now we need to monitor the effectiveness of our risk plans and ensure that the risks are behaving in the manner anticipated during risk treatment.

The living risk assessment is designed to reassess risks after treatment and continuously throughout the life cycle. However, not all systems and risks need to be reassessed continually; the organization should prioritize which systems to reassess on a schedule.

Identify indicators that inform the organization about the status of the risk without having to conduct a full risk assessment every time. The trending status of these indicators can act as a flag for investigations, which may result in complete risk assessments.

A risk indicator is a metric that reflects the current level of risk. It is important to note that not all indicators show the exact level of risk exposure; many instead provide a trend of the drivers, causes or intermediary effects of risk.

The most important risks can be categorized as key risks, and the indicators for these key risks are known as key risk indicators (KRIs), which can be defined as: a metric that provides a leading or lagging indication of the current state of risk exposure on key objectives. KRIs can be used to continually assess current risk exposures and predict potential ones.

These KRIs need to have a strong relationship with the key performance indicators of the organization.

KRIs are monitored through Quality Management Review.

A good rule of thumb: as you identify the key performance indicators to assess the performance of a specific process, product, system or function, also identify the risks and the KRIs for that objective.

Strive to have leading indicators that measure the elements that influence risk performance. Lagging indicators measure the actual performance of the risk controls.

These KRIs qualitatively or quantitatively present the risk exposure by having a strong relationship with the risk, its intermediate outputs or its drivers.

Let’s think in terms of a pharmaceutical supply chain. We’ve done our risk assessments and end up with a top level view like this:

For the risk column we should have good probabilities, impacts and mitigations in place. We can then choose some KRIs to monitor, such as:

  1. Nonconformance rate
  2. Supplier score card
  3. Lab error rate
  4. Product Complaints

As we develop, our KRIs can get more specific and focused. A good KRI is:

  • Quantifiable
  • Measurable (accurately and precisely)
  • Able to be validated (with a high level of confidence)
  • Relevant (measuring the right thing, associated with decisions)
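
To make this concrete, here is a minimal sketch of monitoring one such KRI in code. The nonconformance-rate thresholds and names are hypothetical, chosen only to illustrate threshold escalation and trend flagging:

```python
from dataclasses import dataclass

@dataclass
class KRIReading:
    period: str   # e.g., "2024-01"
    value: float  # nonconformance rate, nonconformances per 1,000 units

# Illustrative thresholds tied to risk appetite (hypothetical numbers)
AMBER_THRESHOLD = 2.0  # investigate the trend
RED_THRESHOLD = 4.0    # trigger a full risk reassessment

def kri_status(reading: KRIReading) -> str:
    """Map a KRI reading to an escalation level."""
    if reading.value >= RED_THRESHOLD:
        return "RED: trigger risk reassessment"
    if reading.value >= AMBER_THRESHOLD:
        return "AMBER: investigate trend"
    return "GREEN: within risk appetite"

def trending_up(history: list[KRIReading], window: int = 3) -> bool:
    """Flag a KRI whose last `window` readings are strictly increasing --
    a trend can warrant investigation even while every point is green."""
    values = [r.value for r in history[-window:]]
    return len(values) == window and all(a < b for a, b in zip(values, values[1:]))
```

Note that the trend check can fire while every individual reading is still green, which is exactly the early-warning behavior we want from a leading indicator.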

In developing a KRI to serve as a leading indicator for potential future occurrences of a risk, it can be helpful to think through the chain of events that leads to the risk event, so that management can uncover the ultimate driver (i.e., the root cause or causes) of the event. When KRIs for root-cause events and intermediate events are monitored, we are in an enviable position to identify early mitigation strategies that can begin to reduce or eliminate the impact of an emerging risk event.

These KRIs will help us monitor and quantify our risk exposure. They help our organizations compare business objectives and strategy to actual performance to isolate changes, measure the effectiveness of processes or projects, and demonstrate changes in the frequency or impact of a specific risk event.

Effective KRIs can provide value to the organization in a variety of ways. Potential value may be derived from each of the following contributions:

  • Risk Appetite – KRIs require the determination of appropriate thresholds for action at different levels within the organization. By mapping KRI measures to identified risk appetite and tolerance levels, KRIs can be a useful tool for better articulating the risk appetite that best represents the organizational mindset.
  • Risk and Opportunity Identification – KRIs can be designed to alert management to trends that may adversely affect the achievement of organizational objectives or may indicate the presence of new opportunities.
  • Risk Treatment – KRIs can initiate action to mitigate developing risks by serving as triggering mechanisms. KRIs can serve as controls by defining limits to certain actions.

The Risk Register

Every organization should ask itself seven questions about the health of its risk management program.

  1. Do you have a risk management plan?
  2. Have you identified and captured your risks in a risk register?
  3. How have you evaluated and prioritized your risks?
  4. Have you engaged the appropriate stakeholders in the risk identification and evaluation processes?
  5. What about risk owners? Does each risk have a risk owner?
  6. Have the risk owners developed risk response plans for the highest risks?
  7. Are you facilitating a review of your risks periodically, resulting in updates to the risk register and effective risk responses?

At the heart of this program sits the Risk Register, which brings together information about risks to inform those exposed to risks and those who have responsibility for their management. A risk register is used to record and track information about individual risks and how they are being controlled. It can be used to communicate information about risks to stakeholders and highlight particularly important risks. While it can be used at any level of the organization where there are a large number of risks, controls and treatments that need to be tracked, a risk register really shines as a central component of a quality management review. The risk register includes:

  • List of risks, failure modes or hazards and expected outcomes
  • A statement about the probability of consequences occurring
  • Sources or causes of the risk
  • Priority or risk levels
  • What is currently being done to control the risk
  • Risk owner
  • Actual outcome, if and when available

Risks are generally listed individually as separate events, but interdependencies should be flagged.
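
As a sketch, the contents above map naturally onto a simple record structure. The field names here are illustrative, not a prescribed schema; an `interdependencies` field covers the flagging of related risks:

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str             # risk, failure mode or hazard, and expected outcome
    sources: list[str]           # sources or causes of the risk
    probability: str             # statement about likelihood of consequences occurring
    priority: str                # priority or risk level
    current_controls: list[str]  # what is currently being done to control the risk
    risk_owner: str
    interdependencies: list[str] = field(default_factory=list)  # related risk_ids
    actual_outcome: str | None = None  # recorded if and when available
```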

In recording information about risks, the distinction between risks (the potential effects of what might happen) and risk sources (how or why it might happen) and controls that might fail should be explicit. It can also be useful to indicate the early warning signs that an event might be about to occur.

Many risk registers also include some rating of the significance of a risk, an indication of whether a risk is considered to be acceptable or tolerable, or whether further treatment is needed and the reasons for this decision. Where a significance rating is applied to a risk based on consequences and their likelihood, this should take account of the possibility that controls will fail. A level of risk should not be allocated for the failure of a control as if it were an independent risk.

A risk register is used as the basis for tracking implementation of proposed treatments, so it should contain information about treatments and how they will be implemented, or reference other documents or databases with this information (such information can include risk owners, actions, action owners, action business case summaries, budgets, timelines, etc.). This living document can usually roll up into (or even serve as) the Quality Plan.

Strengths of risk registers include the following.

  • Information about risks is brought together in a form where actions required can be identified and tracked.
  • Information about different risks is presented in a comparable format, which can be used to indicate priorities and is relatively easy to interrogate.
  • The construction of a risk register usually involves many people and raises general awareness of the need to manage risk.

By doing this, the risk register serves as a central underpinning for the organization as it builds a risk culture, driving transparency and accountability.

Building risk-based thinking in the organization requires a strong governance structure.


Pay attention to the following limitations:

  • Risks captured in risk registers are typically based on events, which can make it difficult to accurately characterize some forms of risk.
  • The apparent ease of use can give misplaced confidence in the information, because it is difficult to describe risks consistently; sources of risk, risks, and weaknesses in controls are often confused.
  • There are many different ways to describe a risk and any priority allocated will depend on the way the risk is described and the level of disaggregation of the issue.
  • Considerable effort is required to keep a risk register up to date (for example, all proposed treatments should be listed as current controls once they are implemented, new risks should be continually added and those that no longer exist removed).
  • Risks are typically captured in risk registers individually. This can make it difficult to consolidate information to develop an overall treatment program.

Artifacts, like the risk register, both demonstrate and channel culture. Invest the time in your organization’s register, and you will reap dividends toward developing a risk-friendly culture.

Measuring Training Effectiveness for Organizational Performance

When designing training we want to make sure four things happen:

  • Training is used correctly as a solution to a performance problem
  • Training has the right content, objectives and methods
  • Trainees are sent to training for which they have the basic skills, prerequisite skills, and confidence needed to learn
  • Training delivers the expected learning

Training is a useful lever in organizational change and improvement. We want to make sure the training drives organizational metrics. And like everything else, you need to be able to measure it in order to improve it.

The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training), and while other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, the model has been in use for over 50 years, evolving over multiple decades through application by learning and development professionals around the world, and it is the most recognized method of evaluating the effectiveness of training programs. It has stood the test of time and became popular due to its ability to break a complex subject into manageable levels. It accommodates any style of training, both informal and formal.

Level 1: Reaction

Kirkpatrick’s first level measures the learners’ reaction to the training. A level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed the time spent and found it valuable. Level 1 evaluations, euphemistically called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation will concentrate on three elements: course content, the physical environment, and the instructor’s presentation skills.

Level 2: Learning

Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is through the use of a pre- and posttest. Pre- and posttests are key to ascertaining whether the participants learned anything in the learning event. Identical pre- and posttests are essential because the difference between the pre- and posttest scores indicates the amount of learning that took place. Without a pretest, one does not know if the trainees knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.
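
Because the pre- and posttests are identical, the amount of learning can be summarized numerically. A minimal sketch, using the normalized-gain convention (my choice of summary statistic, not part of Kirkpatrick’s model):

```python
def learning_gain(pre_score: float, post_score: float, max_score: float = 100.0) -> float:
    """Normalized gain: how much of the possible improvement was achieved.
    Returns 0.0 when there was no room to improve."""
    room = max_score - pre_score
    if room <= 0:
        return 0.0
    return (post_score - pre_score) / room

# Example: a trainee scoring 60 before and 90 after achieved 75% of the possible gain
print(learning_gain(60, 90))  # 0.75
```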

Level 3: Behavior

Level 3 measures whether the learning is transferred into practice in the workplace.

Level 4: Results

Level 4 measures the effect on the business environment: do we meet our objectives?

Level 1: Reaction
Characteristics: Reaction evaluation is how the delegates felt, and their personal reactions to the training or learning experience, for example:
  ▪ Did the trainees consider the training relevant?
  ▪ Did they like the venue, equipment, timing, domestics, etc.?
  ▪ Did the trainees like and enjoy the training?
  ▪ Was it a good use of their time?
  ▪ Level of participation
  ▪ Ease and comfort of experience
Examples:
  ▪ Feedback forms based on subjective personal reaction to the training experience
  ▪ Verbal reaction, which can be analyzed
  ▪ Post-training surveys or questionnaires
  ▪ Online evaluation or grading by delegates
  ▪ Subsequent verbal or written reports given by delegates to managers back at their jobs
  ▪ Typically “happy sheets”

Level 2: Learning
Characteristics: Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience:
  ▪ Did the trainees learn what was intended to be taught?
  ▪ Did the trainees experience what was intended for them to experience?
  ▪ What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?
Examples:
  ▪ Typically assessments or tests before and after the training
  ▪ Interview or observation can be used before and after, although this is time-consuming and can be inconsistent
  ▪ Methods of assessment need to be closely related to the aims of the learning
  ▪ Reliable, clear scoring and measurements need to be established
  ▪ Hard-copy, electronic, online or interview-style assessments are all possible

Level 3: Behavior
Characteristics: Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior, which can be measured immediately and several months after the training, depending on the situation:
  ▪ Did the trainees put their learning into effect when back on the job?
  ▪ Were the relevant skills and knowledge used?
  ▪ Was there noticeable and measurable change in the activity and performance of the trainees when back in their roles?
  ▪ Would the trainees be able to transfer their learning to another person? Are they aware of their change in behavior, knowledge and skill level?
  ▪ Was the change in behavior and new level of knowledge sustained?
Examples:
  ▪ Observation and interview over time are required to assess change, relevance of change, and sustainability of change
  ▪ Assessments need to be designed to reduce the subjective judgment of the observer
  ▪ 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgment as to change after training, and this can be analyzed for groups of respondents and trainees
  ▪ Online and electronic assessments are more difficult to incorporate; assessments tend to be more successful when integrated within existing management and coaching protocols

Level 4: Results
Characteristics: Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee; it is the acid test. Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, retention, etc.
Examples:
  ▪ The challenge is to identify which measures relate to the trainee’s input and influence, and how. It is therefore important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured
  ▪ This process overlays normal good management practice; it simply needs linking to the training input
  ▪ For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training

4 Levels of Training Effectiveness

Example in Practice – CAPA

When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA program, we have three key aims, which we can apply measures against.

Behavior | Measure
--- | ---
Investigate to find root cause | % recurring issues
Implement actions to eliminate root cause | Preventive to corrective action ratio

To support each of these top-level measures we define a set of behavior indicators, such as cycle time, right-first-time, etc. To support these, a review rubric is implemented.
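
As an illustrative sketch, the two top-level measures could be computed from CAPA records like this (the record fields and classification are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class CapaRecord:
    capa_id: str
    root_cause: str
    action_type: str  # "preventive" or "corrective"

def recurring_issue_rate(records: list[CapaRecord]) -> float:
    """% of CAPAs whose root cause has been seen before --
    a proxy for whether investigations truly reach root cause."""
    seen: set[str] = set()
    recurring = 0
    for r in records:
        if r.root_cause in seen:
            recurring += 1
        seen.add(r.root_cause)
    return 100.0 * recurring / len(records) if records else 0.0

def preventive_to_corrective_ratio(records: list[CapaRecord]) -> float:
    """Ratio of preventive to corrective actions; a rising ratio is better."""
    preventive = sum(1 for r in records if r.action_type == "preventive")
    corrective = sum(1 for r in records if r.action_type == "corrective")
    return preventive / corrective if corrective else float("inf")
```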

Our four levels to measure training effectiveness will now look like this:

Level | Measure
--- | ---
Level 1: Reaction | Personal action plan and a happy sheet
Level 2: Learning | Completion of the rubric on a sample event
Level 3: Behavior | Continued performance and improvement against the rubric and the key review behavior indicators
Level 4: Results | Improvements in % of recurring issues and an increase in preventive to corrective actions

This is all about measuring the effectiveness of the transfer of behaviors.

Strong Signals of Transfer Expectations in the Organization vs. Signals that Weaken Transfer Expectations in the Organization

Strong signal: Training participants are required to attend follow-up sessions and other transfer interventions.
What it indicates: Individuals and teams are committed to the change and to obtaining the intended benefits.
Weak signal: Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization.
What it indicates: The key factor for a trainee is attendance, not behavior change.

Strong signal: The training description specifies transfer goals (e.g. “Trainee increases CAPA success by driving down recurrence of root cause”).
What it indicates: The organization has a clear vision and expectation of what the training should accomplish.
Weak signal: The training description only roughly outlines training goals (e.g. “Trainee improves their root cause analysis skills”).
What it indicates: The organization only has a vague idea of what the training should accomplish.

Strong signal: Supervisors take time to support transfer (e.g. through pre- and post-training meetings). Transfer support is part of regular agendas.
What it indicates: Transfer is considered important in the organization and supported by supervisors and managers, all the way to the top.
Weak signal: Supervisors do not invest in transfer support. Transfer support is not part of the supervisor role.
What it indicates: Transfer is not considered very important in the organization. Managers have more important things to do.

Strong signal: Each training ends with careful planning of individual transfer intentions.
What it indicates: Defining transfer intentions is a central component of the training.
Weak signal: Transfer planning at the end of the training does not take place, or takes place only sporadically.
What it indicates: Defining transfer intentions is not an essential part of the training, or not part of it at all.

Good training, and thus good and consistent transfer, builds that into the process. It is why I am such a fan of using a rubric to drive consistent performance.

Site Training Needs

Institute training on the job.

Principle 6, W. Edwards Deming

(a) Each person engaged in the manufacture, processing, packing, or holding of a drug product shall have education, training, and experience, or any combination thereof, to enable that person to perform the assigned functions. Training shall be in the particular operations that the employee performs and in current good manufacturing practice (including the current good manufacturing practice regulations in this chapter and written procedures required by these regulations) as they relate to the employee’s functions. Training in current good manufacturing practice shall be conducted by qualified individuals on a continuing basis and with sufficient frequency to assure that employees remain familiar with CGMP requirements applicable to them.

(b) Each person responsible for supervising the manufacture, processing, packing, or holding of a drug product shall have the education, training, and experience, or any combination thereof, to perform assigned functions in such a manner as to provide assurance that the drug product has the safety, identity, strength, quality, and purity that it purports or is represented to possess.

(c) There shall be an adequate number of qualified personnel to perform and supervise the manufacture, processing, packing, or holding of each drug product.

US FDA 21 CFR 211.25

All parts of the Pharmaceutical Quality system should be adequately resourced with competent personnel, and suitable and sufficient premises, equipment and facilities.

EU EMA/INS/GMP/735037/2014, 2.1

The organization shall determine and provide the resources needed for the establishment, implementation, maintenance and continual improvement of the quality management system. The organization shall consider:

a) the capabilities of, and constraints on, existing internal resources;
b) what needs to be obtained from external providers.

ISO 9001:2015 requirement 7.1.1

It is critical to have enough people with the appropriate level of training to execute their tasks.

It is fairly easy to define the individual training plan, stemming from the job description and the process training requirements. In the aggregate we get the ability to track overdue training and a forward look at what training is coming due. Frankly, though, these are lagging indicators that show success at completing assigned training but give no insight into the central question: do we have enough qualified individuals to do the work?

To get proactive, we start with the resource plan: what operations need to happen in a given time frame, and what resources are needed? We then compare that to the training requirements for those operations.

We can then evaluate current training status and retention levels and determine how many instructors we will need to ensure adequate training.

We perform a gap assessment to determine what new training needs exist.

We then take a forward look at what new improvements are planned and ensure appropriate training is forecasted.

Now we have a good picture of what an “adequate number” is. We can now set a leading KPI to ensure that training is truly proactive.
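
A minimal sketch of that leading KPI, assuming we can count, per operation, the people currently qualified against the headcount the resource plan demands (all names and numbers are illustrative):

```python
def qualified_coverage(demand: dict[str, int], qualified: dict[str, int]) -> dict[str, float]:
    """Leading KPI: for each planned operation, the ratio of currently
    qualified people to the headcount the resource plan requires.
    A ratio below 1.0 flags a training gap before it hits the schedule."""
    return {
        operation: qualified.get(operation, 0) / needed
        for operation, needed in demand.items()
        if needed > 0
    }

# Example: aseptic filling is under-covered and needs training (or hiring) now
demand = {"aseptic_filling": 12, "visual_inspection": 8}
qualified = {"aseptic_filling": 9, "visual_inspection": 10}
print(qualified_coverage(demand, qualified))
# {'aseptic_filling': 0.75, 'visual_inspection': 1.25}
```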

Structured What-If Technique as a Risk Assessment Tool

The structured what-if technique, SWIFT, is a high-level and less formal risk identification technique that can be used independently, or as part of a staged approach to make bottom-up methods such as FMEA more efficient. SWIFT uses structured brainstorming in a facilitated workshop where a predetermined set of guidewords (timing, amount, etc.) are combined with prompts elicited from participants that often begin with phrases such as “what if?” or “how could?”.

At the heart of a SWIFT is a list of guidewords to enable a comprehensive review of risks or sources of risk. At the start of the workshop the context, scope and purpose of the SWIFT is discussed and criteria for success articulated. Using the guidewords and “what if?” prompts, the facilitator asks the participants to raise and discuss issues such as:

  • known risks
  • risk sources and drivers
  • previous experience, successes and incidents
  • known and existing controls
  • regulatory requirements and constraints

The list of guidewords is utilized by the facilitator to monitor the discussion and to suggest additional issues and scenarios for the team to discuss. The team considers whether controls are adequate and if not considers potential treatments. During this discussion, further “what if?” questions are posed.

Often the list of risks generated can be used to fuel a qualitative or semi-quantitative risk assessment method, such as an FMEA.

A SWIFT analysis allows participants to look at the system response to problems rather than just examining the consequences of component failure. As such, it can be used to identify opportunities for improving processes and systems, and to identify actions that enhance the probability of success.

What-If Analysis

What-If Analysis is a structured brainstorming method of determining what things can go wrong and judging the likelihood and consequences of those situations occurring. The answers to these questions form the basis for making judgments regarding the acceptability of those risks and determining a recommended course of action for those risks judged to be unacceptable. An experienced review team can effectively and productively discern major issues concerning a process or system. Led by an energetic and focused facilitator, each member of the review team participates in assessing what can go wrong based on their past experiences and knowledge of similar situations.

What If? | Answer | Likelihood | Severity | Recommendations
--- | --- | --- | --- | ---
What could go wrong? | What would happen if it did? | How likely? | Consequences | What will we do about them? Again: prevent and monitor

What-If Analysis
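
The likelihood and severity judgments captured in a what-if worksheet are commonly folded into a single priority band. Here is a minimal sketch of that scoring step, using an illustrative 3x3 scale of my own choosing rather than any mandated one:

```python
LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
SEVERITY = {"minor": 1, "moderate": 2, "major": 3}

def risk_priority(likelihood: str, severity: str) -> str:
    """Combine what-if judgments into a coarse priority band."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 6:
        return "unacceptable: recommend treatment"
    if score >= 3:
        return "tolerable: monitor"
    return "acceptable"

print(risk_priority("high", "moderate"))  # unacceptable: recommend treatment
```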

Steps in a SWIFT Analysis

SWIFT Risk Assessment
  1. Prepare the guide words: The facilitator should select a set of guide words to be used in the SWIFT.
  2. Assemble the team: Select participants for the SWIFT workshop based on their knowledge of the system/process being assessed and the degree to which they represent the full range of stakeholder groups.
  3. Background: Describe the trigger for the SWIFT (e.g., a regulatory change, an adverse event, etc.).
  4. Articulate the purpose: Clearly explain the purpose to be served by the SWIFT (e.g., to improve effectiveness of the process).
  5. Define the requirements: Articulate the criteria for success.
  6. Describe the system: Provide appropriate-level textual and graphical descriptions of the system or process to be risk assessed. A clear understanding is necessary and can be established through interviews, gathering a multifunctional team, and the study of documents, plans and other records.
  7. Identify the risks/hazards: This is where the structured what-if technique is applied. Use the guide words/headings with each system, high-level subsystem, or process step in turn. Participants should use prompts starting with the phrases like “What if…” or “How could…” to elicit potential risks/hazards associated with the guide word. For instance, if the process is “Receipt of samples,” and the guide word is “time, timing or speed,” prompts might include: “What if the sample is delivered at a shift change” (wrong time) or “How could the sample be left waiting too long in ambient conditions?” (wrong timing).
  8. Assess the risks: With the use of either a generic approach or a supporting risk analysis technique, estimate the risk associated with the identified hazards. In light of existing controls, assess the likelihood that they could lead to harm and the severity of harm they might cause. Evaluate the acceptability of these risk levels, and identify any aspects of the system that may require more detailed risk identification and analysis.
  9. Propose actions: Propose risk control action plans to reduce the identified risks to an acceptable level.
  10. Review the process: Determine whether the SWIFT met its objectives, or whether a more detailed risk assessment is required for some parts of the system.
  11. Document: Produce an overview document to communicate the results of the SWIFT.
  12. Additional risk assessment: Conduct additional risk assessments using more detailed or quantitative techniques, if required. The SWIFT Analysis is really effective as a filtering mechanism to focus effort on the most valuable areas.

Guideword Examples

The facilitator and process owner can choose any guide words that seem appropriate. Guidewords usually center on the following:

  • Wrong: Person or people
  • Wrong: Place, location, site, or environment
  • Wrong: Thing or things
  • Wrong: Idea, information, or understanding
  • Wrong: Time, timing, or speed
  • Wrong: Process
  • Wrong: Amount
  • Failure: Control or Detection
  • Failure: Equipment

If your organization has invested time to create root cause categories and sub-categories, the guidewords can easily start there.
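
As a closing sketch, guidewords combine almost mechanically with process steps to seed the workshop prompts. The phrasing and function below are illustrative, not a standard tool:

```python
GUIDEWORDS = [
    "wrong person or people",
    "wrong place, location, site, or environment",
    "wrong thing or things",
    "wrong idea, information, or understanding",
    "wrong time, timing, or speed",
    "wrong process",
    "wrong amount",
    "failure of control or detection",
    "failure of equipment",
]

def swift_prompts(process_step: str) -> list[str]:
    """Generate 'What if...?' seed questions for one process step."""
    return [f'What if "{process_step}" involves {gw}?' for gw in GUIDEWORDS]

for prompt in swift_prompts("receipt of samples")[:3]:
    print(prompt)
```

In a real workshop the generated prompts are only seeds; the value comes from the facilitated discussion they provoke.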