Build Key Risk Indicators

We perform risk assessments, execute risk mitigations, and end up with four categories of treated risks (with the corresponding opportunity responses in parentheses) in our risk register:

  1. Mitigated (or enhanced)
  2. Avoided (or exploited)
  3. Transferred (or shared)
  4. Accepted

We’ve built a set of risk response plans to ensure we continue to treat these risks. Now we need to monitor the effectiveness of those plans and confirm that the risks are behaving in the manner anticipated during risk treatment.

The living risk assessment is designed to reassess risks after treatment and continuously throughout the life cycle. However, not all systems and risks need continual reassessment; the organization should prioritize which systems to reassess and on what schedule.

Identify indicators that inform the organization about the status of a risk without requiring a full risk assessment every time. The trend of these indicators can act as a flag for investigation, which may result in a complete risk assessment.

A risk indicator is a metric that reflects the level of risk. It is important to note that not all indicators show the exact level of risk exposure; many instead trend the drivers, causes, or intermediate effects of the risk.

The most important risks can be categorized as key risks, and the indicators for these key risks are known as key risk indicators (KRIs), which can be defined as: a metric that provides a leading or lagging indication of the current state of risk exposure against key objectives. KRIs can be used to continually assess current risk exposure and to predict potential exposure.

These KRIs need to have a strong relationship with the key performance indicators of the organization.

KRIs are monitored through Quality Management Review.

A good rule of thumb: as you identify the key performance indicators for a specific process, product, system, or function, identify the risks and the KRIs for that same objective.

Strive to have leading indicators that measure the elements that influence risk performance. Lagging indicators measure the actual performance of the risk controls.

These KRIs qualitatively or quantitatively present the risk exposure by having a strong relationship with the risk, its intermediate outputs, or its drivers.

Let’s think in terms of a pharmaceutical supply chain. We’ve done our risk assessments and end up with a top-level view like this:

For the risk column we should have solid probabilities, impacts, and mitigations in place. We can then choose some KRIs to monitor, such as the following (a worked computation of the first follows the list):

  1. Nonconformance rate
  2. Supplier scorecard
  3. Lab error rate
  4. Product complaints
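
To make the first of these concrete, here is a trivial sketch in Python. The figures and the choice of denominator are hypothetical; each organization would define the ratio against its own data:

```python
# Hypothetical monthly figures for the nonconformance-rate KRI
nonconformances = 14      # nonconformance records opened this month
batches_released = 1120   # batches dispositioned this month

nc_rate_pct = 100 * nonconformances / batches_released
print(f"Nonconformance rate: {nc_rate_pct:.2f}%")  # -> 1.25%
```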

As we develop, our KRIs can get more specific and focused. A good KRI is (a minimal sketch of these criteria in practice follows the list):

  • Quantifiable
  • Measurable (accurately and precisely)
  • Able to be validated (with a high level of confidence)
  • Relevant (measuring the right thing, tied to the decisions it supports)
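
These criteria become operational once each KRI carries explicit thresholds tied to action. Here is a minimal sketch in Python; the class, field names, and threshold values are all hypothetical and assume a "higher is worse" indicator:

```python
from dataclasses import dataclass

@dataclass
class KRI:
    """A key risk indicator with thresholds that trigger escalating action."""
    name: str
    warning: float  # above this, flag the risk for investigation
    limit: float    # above this, trigger a full risk reassessment

    def status(self, value: float) -> str:
        """Classify the latest measurement against the thresholds."""
        if value >= self.limit:
            return "reassess"
        if value >= self.warning:
            return "investigate"
        return "acceptable"

# The 1.25% nonconformance rate computed above would flag an investigation
nc_rate = KRI("nonconformance rate (%)", warning=1.0, limit=2.0)
print(nc_rate.status(1.25))  # -> "investigate"
```

The thresholds are where risk appetite becomes concrete: the warning level marks where tolerance starts to erode, and the limit marks where a full reassessment is warranted.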

In developing a KRI to serve as a leading indicator of a risk’s potential future occurrence, it can be helpful to think through the chain of events that leads to the risk event, so that management can uncover its ultimate driver (i.e., root cause(s)). When KRIs for root-cause events and intermediate events are monitored, we are in an enviable position to identify early mitigation strategies that can begin to reduce or eliminate the impact associated with an emerging risk event.

These KRIs will help us monitor and quantify our risk exposure. They help our organizations compare business objectives and strategy to actual performance to isolate changes, measure the effectiveness of processes or projects, and demonstrate changes in the frequency or impact of a specific risk event.
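
Because trends matter more than single readings, even a simple comparison of recent observations against earlier ones can surface an emerging risk before any threshold is breached. The sketch below is one deliberately simple approach (not a prescribed method) with hypothetical data; a real implementation might use control-chart rules or a regression slope instead:

```python
from statistics import mean

def adverse_trend(history: list[float], window: int = 6) -> bool:
    """Flag a leading signal: has the mean of the most recent window
    of observations risen above the mean of the window before it?"""
    if len(history) < 2 * window:
        return False  # not enough data to compare two windows
    recent = mean(history[-window:])
    prior = mean(history[-2 * window:-window])
    return recent > prior

# Twelve hypothetical monthly nonconformance rates (%)
rates = [0.8, 0.7, 0.9, 0.8, 0.7, 0.8, 0.9, 1.0, 1.0, 1.1, 1.2, 1.25]
print(adverse_trend(rates))  # -> True: investigate before the limit is hit
```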

Effective KRIs can provide value to the organization in a variety of ways. Potential value may be derived from each of the following contributions:

  • Risk Appetite – KRIs require the determination of appropriate thresholds for action at different levels within the organization. By mapping KRI measures to identified risk appetite and tolerance levels, KRIs can be a useful tool for better articulating the risk appetite that best represents the organizational mindset.
  • Risk and Opportunity Identification – KRIs can be designed to alert management to trends that may adversely affect the achievement of organizational objectives or may indicate the presence of new opportunities.
  • Risk Treatment – KRIs can initiate action to mitigate developing risks by serving as triggering mechanisms. KRIs can serve as controls by defining limits to certain actions.

Level of Effort for Planning

Risk-based approach for planning

In the post “Design Lifecycle within PDCA – Planning” I laid out a design thinking approach to planning a change.

Like most activities, the level of effort should be commensurate with the level of risk. Above, I provide some different activities that can happen based on the risk inherent in the process and the problem being evaluated.

This is a great reason why Living Risk Assessments are so critical to an organization.

Living vs Ad hoc risk assessments

Pandemics and the failure to think systematically

As it turns out, the reality-based, science-friendly communities and information sources many of us depend on also largely failed. We had time to prepare for this pandemic at the state, local, and household level, even if the government was terribly lagging, but we squandered it because of widespread asystemic thinking: the inability to think about complex systems and their dynamics. We faltered because of our failure to consider risk in its full context, especially when dealing with coupled risk—when multiple things can go wrong together. We were hampered by our inability to think about second- and third-order effects and by our susceptibility to scientism—the false comfort of assuming that numbers and percentages give us a solid empirical basis. We failed to understand that complex systems defy simplistic reductionism.

Zeynep Tufekci, “What Really Doomed America’s Coronavirus Response,” published 24-Mar-2020 in The Atlantic

On-point analysis. It hits many of the themes of this blog, including system thinking, complexity, and risk, and makes some excellent points that all of us in quality should be thinking deeply upon.

COVID-19 is not a black swan. Pandemics like this have been well predicted. This event reflects a different set of failures that, on a hopefully smaller scale, most of us are unfortunately familiar with in our organizations.

I certainly didn’t break out of the mainstream narrative. I traveled in February, went to a conference and then held a small event on the 29th.

The article stresses the importance of considering the trade-offs between resilience, efficiency, and redundancy within the system, and how the second- and third-order impacts can reverberate. It’s well worth reading for the analysis of the growth of COVID-19, and more importantly our reaction to it, from a systems perspective.

Building Experts

Subject matter experts have explicit knowledge from formal education and embedded in reports, manuals, websites, memos, and other corporate documents. But their implicit and tacit knowledge, based on their experience, is perhaps the source of their greatest value — whether the subject-matter expert with decades of experience who is lightning fast with a diagnosis and almost always spot-on or the manager whose team everyone wants to be on because she’s so good at motivating and mentoring.

Experts, no matter the domain, tend to have very similar attributes. Understanding these attributes allows us to start understanding how we build expertise.

What experts demonstrate, by dimension:

Cognitive

  • Critical know-how and “know-what”: managerial, technical, or both; superior, experience-based techniques and processes; extraordinary factual knowledge
  • System thinking: knowing interdependencies, anticipating consequences, understanding interactions
  • Judgement: rapid, wise decision making
  • Context awareness: ability to take context into account
  • Pattern recognition: swift recognition of a phenomenon, situation, or process that has been encountered before

Behavioral

  • Networking (“know-who”): building and maintaining an extensive network of professionally important individuals
  • Interpersonal: ability to deal with individuals, including motivating and leading them; comfort with intellectual disagreement
  • Communication: ability to construct, tailor, and deliver messages through one or more media to build logical and persuasive arguments
  • Diagnosis and cue seeking: ability to actively identify cues in a situation that would confirm or challenge a familiar pattern; ability to distinguish signal from noise

Physical

  • Sensory: ability to diagnose, interpret, or predict through appropriate senses

Attributes of an Expert

One of the critical parts of being a subject matter expert is being able to help others absorb knowledge and gain wisdom through learn-by-doing techniques: guided practice, observation, problem solving, and experimentation.

Think of this as an apprenticeship program that provides deliberate practice with expert feedback, which is fundamental to the development of expertise.

Do your organizations have this sort of organized way to train an expert? How does it work?

ASQ Audit Conference – Day 1 Morning

Day 1 of the 2019 Audit Conference.

Grace Duffy is the keynote speaker. I’ve known Grace for years and consider her a mentor and I’m always happy to hear her speak. Grace has been building on a theme around her Modular Kaizen approach and the use of the OODA Loop, and this presentation built nicely on what she presented at the Lean Six Sigma Conference in Phoenix, at WCQI and in other places.

Audits as a form of sustainability is an important point to stress, and hopefully this will be a central theme throughout the conference.

The intended purpose is to build a systems view in preparation for an effective audit, using the OODA loop to approach evolutionary and revolutionary change.

John Boyd’s OODA loop

Grace starts with a brief overview of system and process, then moves from vision to strategy to daily work, and how that forms a Möbius strip of macro, meso, micro, and individual. She talks a little about the difference between Deming’s and Juran’s approaches and does some what-if thinking about how Lean would have developed if Juran had gone to Japan instead of Deming.

Breaking down OODA (Observe, Orient, Decide, Act) as “Where am I and where is the organization?” and then feeding into decision making. She stresses how Orient addresses culture and the need to understand it. Her link to Lean is a little tenuous in my mind.

She then discusses Tom Pearson’s knowledge management model: Local Action; Management Action; Exploratory Analysis; Knowledge Building; Complex Systems; Knowledge Management; Scientific Creativity. She unites all this with system thinking and psychology. “We’re going to share shamelessly because that’s how we learn.” “If we can’t have fun with this stuff it’s no good.”

Uniting the two, she describes the knowledge management model as part of Orient.

Puts revolutionary and evolutionary change in light of Juran’s Breakthrough versus Continuous Improvement. From here she covers Modular Kaizen, starting with incremental change versus process redesign. From there she breaks it down into a DMAIC model and goes into how much she loves the Measure phase. She discusses how the human brain is better at connections, which nicely reinforces the OODA model.

Breaks down a culture model of Culture/Beliefs, Visions/Goals, and Activities/Plans-and-actions influenced by external events, and how evolutionary improvements stem from compatibility with those. OODA is the tool to help determine that compatibility.

Briefly discusses how standardization fits into systems and urges looking at it from a stability perspective.

Goes back to the culture model but now adds idea generation and quality testing, with decisions off of them leading to revolutionary improvements. Links back to OODA.

Then quickly covers DMAIC versus DMADV and how that is another way of thinking about these concepts.

Covers Gino Wickman’s concept of visionary and integrator from Traction.

Ties OODA back to effective auditing: focus on patterns and not just numbers, grasp the bigger picture, be adaptive.

This is a big, sprawling topic for a keynote, and at times it felt like a firehose. Keynotes often benefit from much more laser focus; OODA alone would have been enough. My head is reeling, and I am comfortable with this material. Grace is an amazing, passionate educator and she finds this material exciting. I hope most of the audience picked that up in this big-gulp approach. This systems approach, building on culture and strategy, is critical.

OODA as an audit tool is relevant, and it is a tool I think we should be teaching better. It might be a good tool for TWEF, as it ties into the team/workplace excellence approach. OODA and situational awareness are really united in my mind, and that deserves a separate post.

Concurrent Sessions

After the keynote come the breakout sessions. As always, I end up having too many options and must make some decisions. I can never complain about having too many options during a conference.

First Impressions: The Myth of the Objective & Impartial Audit

First session is “First Impressions: The Myth of the Objective & Impartial Audit” by William Taraszewski. I met Bill back at the 2018 World Conference on Quality and Improvement.

Bill starts by discussing subjectivity and first impressions, and how they shape audits from the very start.

Covers the science of first impressions, pointing to research on bias, how negative behavior weighs more than positive, and how this can be contextual. Draws from Amy Cuddy’s work and lays a good foundation on trust and competence and their importance in work and life in general.

Brings this back to ISO 19011:2018 “Guidelines for auditing management systems” and clause 7.2 on determining auditor competence, which places personal behavior ahead of knowledge and skills.

Brings up video auditing: the impressions generated from video versus in-person are pretty similar, but the magnitude of bad impressions is greater and the magnitude of positive ones is lower. That was an interesting point, and I will need to follow up on that research.

Moves to discussing impartiality in the context of ISO 19011:2018, pointing out the halo and horn effects.

Discusses prejudice versus experience as an auditor, covering confirmation bias and how selective exposure and selective perception fit into our psychology, with the need to be careful since the negative outweighs the positive.

Moves into objective evidence and how it fits into an audit.

Provides top tips for making a good auditor first impression, including body language and eye contact. Most important: how to check your attitude.

This was a good session on the fundamentals, reinforcing the basics and going back to the research. Quality as a profession really needs to understand how objectivity and impartiality are virtually impossible and how we can overcome bias.

Auditing Risk Management

Barry Craner presented on “Are you ready for an audit of your risk management system?”

Starts with how risk management is here to stay and present in most industries. The presenter focuses on medical devices, but the concepts are very general.

“As far as possible” as a concept is discussed, along with residual risk, at a high level.

Covers at a high level the standard risk management process (risk identification, risk analysis, risk control, risk monitoring, risk reporting), asking: “Is the RM system acceptable? Can you describe and defend it?”

Provides an example of a risk management file sequence that matches the concept of living risk assessments: a flow from Preliminary Hazard Analysis to Fault Tree Analysis (FTA) to FMEA. With the focus on medical devices, he talks about design and process for both the FTA and the FMEA. This all flows from the question “Can you describe and defend your risk management program?”

In laying out the risk management program, he focused on personnel qualification as pivotal, asking “Are these ready for audit?” When discussing the plan, he asks: “Is your risk management plan documented and reasonable? Ready to audit? Is the SOP followed by your company?”

When discussing risk impact, he breaks it down to “Is the risk acceptable or not?” He goes on to discuss how important it is to defend the scoring rubric, asking “Is it well defined, and can we defend it?”

Goes back and discusses some basic concepts of hazard and harm. Asks: “Did you do this hazard assessment with enough thoroughness? Were the right hazards identified?” Recommends building an example hazards table, which is good advice. From there, answer the question “Do your hazard analyses yield reasonable, useful information? Do you use it?”

Provides a nice example of how to build a mitigation plan out of a fault tree analysis.

The discussion on FMEAs faltered on detection; it probably could have gone into controls a lot deeper here.

With both the FTA and FMEA, he discussed how the results need to be defensible.

Risk management review, with the right metrics, is discussed at a high level. This could easily be a session on its own.

Asks the question “Were there actionable tasks? Progress on these tasks?”

It is time to stop having such general overviews at conferences, especially at a conference that is not targeted at junior personnel.