Expert Intuition and Risk Management

(Comic: Saturday Morning Breakfast Cereal, http://smbc-comics.com/comic/horrible)

Risk management is a crucial aspect of any organization or project, yet it is often undermined by human error in subjective risk judgments. Most risk assessment methods rely on subjective inputs from experts, and without certain precautions, experts make consistent errors in judgment about uncertainty and risk.

There are methods that can correct the systematic errors people make, but very few organizations implement them. As a result, there is an almost universal understatement of risk. We need to keep in mind a few rules about experience and expertise.

  • Experience is a nonrandom, nonscientific sample of events throughout our lifetime.
  • Experience is memory-based, and we are very selective regarding what we choose to remember.
  • What we conclude from our experience can be full of logical errors.
  • Unless we get reliable feedback on past decisions, there is no reason to believe our experience will tell us much.

No matter how much experience we accumulate, we seem to be very inconsistent in its application.

Experts have unconscious heuristics and biases that impact their judgment. Some important ones include:

  • Misconceptions of chance: If you flip a coin six times, which result is more likely (H = heads, T = tails): HHHTTT or HTHTTH? Both are equally likely, but many people assume that because the first series looks “less random” than the second, it must be less likely. This is an example of representativeness bias: we judge odds based on what we assume to be representative scenarios. Human beings easily confuse patterns and randomness (see the sketch after this list).
  • The conjunction fallacy: We often see specific events as more likely than broader categories of events – for example, judging “a flood caused by a dam failure” as more likely than “a flood,” even though the former is a subset of the latter.
  • Irrational belief in small samples: We expect even a handful of observations to closely mirror the underlying population.
  • Disregarding variance in small samples: Small samples have more random variance than large samples, but we give this far less weight than we should (the sketch after this list illustrates the point).
  • Insensitivity to prior probabilities: People tend to ignore the past and focus on new information when making subjective estimates.
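
To make the coin-flip and small-sample points concrete, here is a minimal Python sketch; the function names and sample sizes are mine, chosen purely for illustration:

```python
import random

def sequence_probability(seq: str) -> float:
    """Probability of any specific sequence of fair coin flips."""
    return 0.5 ** len(seq)

# Both sequences have identical probability, despite one "looking" less random.
print(sequence_probability("HHHTTT"))  # 0.015625
print(sequence_probability("HTHTTH"))  # 0.015625

def proportion_heads(n: int) -> float:
    """Proportion of heads in n simulated fair coin flips."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Small samples swing far more widely than large ones.
small = [proportion_heads(10) for _ in range(1000)]
large = [proportion_heads(1000) for _ in range(1000)]
print(min(small), max(small))  # typically something like 0.1 ... 0.9
print(min(large), max(large))  # typically something like 0.45 ... 0.55
```

The wide spread in the 10-flip samples is pure noise, yet it is exactly the kind of variation we are tempted to read a story into.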

These heuristics all feed expert overconfidence, which consistently underestimates risk.

What are some ways to overcome this? I recommend the following be built into your risk management system.

  • Conduct a premortem: Pretend you are in the future looking back at failure. Start with the assumption that a major disaster did happen and describe how it happened.
  • Look to risks from others: Gather a list of related failures – for example, regulatory agency observations – and think of risks in relation to those.
  • Include everyone: Your organization has numerous experts on all sorts of specific risks. Make the effort to survey representatives of just about every job level.
  • Do peer reviews: Check assumptions by showing them to peers who are not immersed in the assessment.
  • Implement performance metrics: The Brier score evaluates predictions both by how often the team was right and by the probability they estimated for each answer (see the sketch after this list).
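
Here is a minimal sketch of the Brier score; the function and the example numbers are mine, for illustration only:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes.

    forecasts: probabilities assigned to "the event happens" (0.0 to 1.0)
    outcomes:  1 if the event happened, 0 if it did not
    Lower is better: 0.0 is perfect; always saying 50% scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Confident and right scores well...
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # 0.02
# ...while confident and wrong is penalized heavily.
print(brier_score([0.9, 0.8, 0.1], [0, 0, 1]))  # ~0.753
```

The score rewards calibration: a team that states honest probabilities and is right often beats a team that is merely overconfident.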

Bias

There are many forms of bias that we must be cognizant of during problem solving and decision making.

The full landscape of biases can be a little daunting. I’m just going to mention three of the more common ones.

  • Attribution bias: When we do something well, we tend to think it’s because of our own merit. When we do something poorly, we tend to believe it was due to external factors (e.g. other people’s actions). When it comes to other people, we tend to think the opposite – if they did something well, we consider them lucky, and if they did something poorly, we tend to think it’s due to their personality or lack of skills.
  • Confirmation bias: The tendency to seek out evidence that supports decisions and positions we’ve already embraced – regardless of whether the information is true – while putting less weight on facts that contradict them.
  • Hindsight bias: The tendency to believe an event was predictable or preventable when looking back at the sequence of events. This can result in oversimplification of cause and effect and an exaggerated view that a person involved could have prevented the event. The people involved didn’t know the outcome as you do now, and likely couldn’t have predicted it with the information available at the time.

A few ways to address our biases include:

  • Bounce ideas off others, especially those not involved in the discussion or decision.
  • Surround yourself with a diverse group of people and do not be afraid to consider dissenting views. Actively listen.
  • Imagine yourself in others’ shoes.
  • Be mindful of your internal environment. If you’re struggling with a decision, take a moment to breathe. Don’t make decisions tired, hungry or stressed.
  • Consider who is impacted by your decision (or lack of decision). Sometimes, looking at how others will be impacted by a given decision will help to clarify the decision for you.

The advantage of focusing on decision quality is that we have a process that allows us to ensure we are doing the right things consistently. By building mindfulness we can strive for good decisions, reduced subjectivity and effective problem-solving.

Decision Quality

The decisions we make are often complex and uncertain. A good decision-making process is critical to success – knowing how we make decisions, and how to confirm we are making good ones, allows us to bring quality to our decisions. To do this we need to understand what a quality decision looks like and how to obtain it.

There is no universal best process or set of steps to follow in making good decisions. However, any good decision process needs to have the idea of decision-quality as the measurable destination.

Decisions do not come ready to be made. They must be shaped, starting by declaring what decision must be made. All decisions have one thing in common – the best choice creates the best possibility of what you truly want. To find that best choice, you need decision-quality, and you must recognize it as the destination when you get there. You cannot reach a good decision – achieve decision-quality – if you are unable to visualize or describe it. Nor can you say you have accomplished it if you cannot recognize it when it is achieved.

What makes a Good Decision?

The six requirements for a good decision are: (1) an appropriate frame, (2) creative alternatives, (3) relevant and reliable information, (4) clear values and trade-offs, (5) sound reasoning, and (6) commitment to action. To judge the quality of any decision before you act, each requirement must be met and addressed with quality. I like representing it as a chain, because a decision is no better than the weakest link.
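
The chain metaphor can be expressed directly: score each link, and the decision’s quality is capped by the lowest score. This is a minimal sketch; the 0–100 scale and the scores are hypothetical, purely for illustration:

```python
# Hypothetical 0-100 scores a team might assign to each requirement.
links = {
    "appropriate frame": 90,
    "creative alternatives": 40,  # the weak link in this example
    "relevant, reliable information": 85,
    "clear values and trade-offs": 75,
    "sound reasoning": 80,
    "commitment to action": 70,
}

# A decision is no better than its weakest link.
weakest = min(links, key=links.get)
print(f"Decision quality is capped at {links[weakest]} by '{weakest}'")
```

The point of scoring this way is that averaging would hide the gap; a brilliant frame cannot compensate for a thin set of alternatives.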

The frame specifies the problem or opportunity you are tackling, asking what is to be decided. It has three parts: the purpose in making the decision; the scope of what will be included and left out; and your perspective, including your point of view, how you want to approach the decision, what conversations will be needed, and with whom. Agreement on framing is essential, especially when more than one party is involved in decision making. What is important is to find the frame that is most appropriate for the situation. If you get the frame wrong, you will be solving the wrong problem or not dealing with the opportunity in the correct way.

The next three links are: alternatives – defining what you can do; information – capturing what you know and believe (but cannot control); and values – representing what you want and hope to achieve. These are the basis of the decision and are combined using sound reasoning, which guides you to the best choice (the alternative that gets you the most of what you want in light of what you know). With sound reasoning, you reach clarity of intention and are ready for the final element – commitment to action.

Asking “What is the decision I should be making?” is not a simple question, and asking “On what decision should I be focusing?” is particularly challenging. It is, however, an important question to ask, because you must know what decision you are making. It defines the range within which you have creative and compelling alternatives. It defines constraints. It defines what is possible. Many organizations fail to create a rich set of alternatives and simply debate whether to accept or reject a proposal. The problem with this approach is that people frequently latch on to ideas that are easily accessible, familiar or aligned directly with their experiences.

Exploring alternatives is a combination of analysis, rigor, technology and judgement. The analysis covers the past and present – additional judgement is required to anticipate future consequences. What we know about the future is uncertain and therefore needs to be described with possibilities and probabilities. Questions like “What might happen?” and “How likely is it to happen?” are difficult and often compound. To produce reliable judgements about future outcomes and probabilities you must gather facts, study trends and interview experts while avoiding distortions from biases and decision traps. When one alternative provides everything desired, the choice among alternatives is not difficult. Trade-offs must be made when alternatives do not provide everything desired: you must then decide how much of one value you are willing to give up to receive more of another.
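
One common way to combine possibilities and probabilities when comparing alternatives is a simple expected-value calculation. This is a minimal sketch of that reasoning step, not a prescribed method; the alternatives and numbers are invented:

```python
# Invented alternatives, each with possible (value, probability) outcomes.
alternatives = {
    "expand in-house": [(120, 0.5), (40, 0.5)],
    "partner": [(90, 0.7), (60, 0.3)],
    "do nothing": [(50, 1.0)],
}

def expected_value(outcomes):
    """Probability-weighted average of the possible outcome values."""
    return sum(value * prob for value, prob in outcomes)

for name, outcomes in alternatives.items():
    print(f"{name}: expected value {expected_value(outcomes):.1f}")
# expand in-house: 80.0 / partner: 81.0 / do nothing: 50.0
```

Note how close the first two alternatives score: the real work is in the judgements behind the values and probabilities, which is where the trade-offs live.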

Commitment to action is reached by involving the right people in the decision efforts. The right people must include individuals who have the authority and resources to commit to the decision and to make it stick (the decision makers) and those who will be asked to execute the decided-upon actions (the implementers). Decision makers are frequently not the implementers and much of a decision’s value can be lost in the handoff to implementers. It is important to always consider the resource requirements and challenges for implementation.

These six requirements of decision-quality can be used to judge the quality of the decision at the time it is made. There is no need to wait six months or six years to assess its outcome before declaring the decision’s quality. By meeting the six requirements you know, at the time of the decision, that you made a high-quality choice. You cannot simply say “I did all the right steps” – you have to be able to judge the decision itself, not just how you got to it. Asking “How good is this decision if we make it now?” must be a very big part of your process. Any missing piece may lie in the material and the research, and that is a piece that must go right.

Decision-quality is all about reducing comfort-zone bias – people doing what they know how to do rather than what is needed to make a strong, high-quality decision. You overcome comfort-zone bias by figuring out where the gaps are. Say the gap is with alternatives: your process then becomes primarily a creative process to generate alternatives instead of gathering a great deal more data. Or maybe we are awash in a sea of information but have not done the reasoning, modelling and understanding of the consequences; that becomes more of an analytical effort. The specific gaps define where you should put your attention to improve the quality of the decision.

Leadership needs clearly defined decision rights and must understand that its role is assembling the right people to make quality decisions. Once you know how to recognize decision-quality, you need an effective and efficient process to get there. That process involves many things, including structured interactions between the decision maker and decision staff, remembering that productive discussions result when multiple parties are involved in the decision process and differences in judgement are present.

Beware Advocacy

The most common decision process tends to be an advocacy process – you are asking somebody to sell you an answer. Once you are in advocacy mode, you are no longer in decision-quality mode, and you cannot get the best choice out of an advocacy process. Advocacy suppresses alternatives. Advocacy forces confirming-evidence bias – selective attention to what supports your position. Once in advocacy mode, you are really in sales mode, and it becomes a competition between people.

When you want quality in a decision, you want the alternatives to compete, not the people. From the decision board’s perspective, you want multiple alternatives in front of you, and you want to figure out which of them beats the others in terms of the full consequences in risk, uncertainty and return. Among the alternatives, one will show up better. If you can make this happen, then it is not the advocate selling it; it is you determining which alternative gives the most value for the investment.

The Role Outcomes Play in Measuring Decision Quality

Always think of decisions and outcomes as separate because when you make decisions in an uncertain world, you cannot fully control the outcomes. When looking back from an outcome to a decision, the only thing you can really tell is if you had a good outcome or a bad outcome. Hindsight bias is strong, and once triggered, it is hard to put yourself back into understanding what decisions should have been made with what you knew, or could have known, at the time.

In understanding how we use outcomes to evaluate decisions, you need to understand the importance of documenting the decision and the decision quality at the time of the decision. Ask yourself: if you were going to look back two years from now, what in this decision file answers the questions “Did we make a good decision?” and “What can we learn about the things we had questions about?” This kind of documentation is different from what people usually produce. What is usually documented is the approval and the working process. There is usually no documentation answering the question: “If we are going to look back in the future, what would we need to know to be able to learn about making better decisions?”

The reason you want to look back is that this is how you learn and improve the whole decision process. It is not for blaming; in the end, what you are trying to show in the documentation is: “We made the best decision we could then. Here is what we thought about the uncertainties. Here is what we thought were the driving factors.” It’s about having a learning culture.

When decision makers and individuals understand the importance of reaching quality in each of the six requirements, they come to see meeting those requirements as a decision-making right that should be demanded as part of the decision process. To be in a position to make a good decision, they know they deserve a good frame and significantly different alternatives; without these, they cannot reach a powerful, correct conclusion. From a decision-maker’s perspective, these are indeed needs and rights to be thought about. From a decision-support perspective, these needs and rights are required to position the decision maker to make a good choice.

Building decision-quality enables measurable value creation, and its framework can be learned, implemented and measured. Decision-quality helps you navigate the complexity and uncertainty of significant, strategic choices and avoid mega-biases and big decision traps.

ASQ Audit Conference – Day 1 Morning

Day 1 of the 2019 Audit Conference.

Grace Duffy is the keynote speaker. I’ve known Grace for years and consider her a mentor and I’m always happy to hear her speak. Grace has been building on a theme around her Modular Kaizen approach and the use of the OODA Loop, and this presentation built nicely on what she presented at the Lean Six Sigma Conference in Phoenix, at WCQI and in other places.

Audits as a form of sustainability is an important point to stress, and hopefully this will be a central theme throughout the conference.

The intended purpose is to build a systems view in preparation for an effective audit, using the OODA loop to approach both evolutionary and revolutionary change.

John Boyd’s OODA loop

Grace starts with a brief overview of system and process, then moves from vision to strategy to daily work, and how that forms a Möbius strip of macro, meso, micro and individual. She talks a little about the difference between Deming’s and Juran’s approaches and does some what-if thinking about how Lean would have developed if Juran had gone to Japan instead of Deming.

Breaks down OODA (Observe, Orient, Decide, Act) as “Where am I, and where is the organization?” which then feeds into decision making. Stresses how Orient covers culture and the need to understand it. Her link to Lean is a little tenuous in my mind.

She then discusses Tom Pearson’s knowledge management model: Local Action; Management Action; Exploratory Analysis; Knowledge Building; Complex Systems; Knowledge Management; and Scientific Creativity. Unites all this with systems thinking and psychology. “We’re going to share shamelessly because that’s how we learn.” “If we can’t have fun with this stuff it’s no good.”

Uniting the two, she describes the knowledge management model as part of Orient.

Puts revolutionary and evolutionary change in light of Juran’s breakthrough versus continuous improvement. From here she covers Modular Kaizen, starting with incremental change versus process redesign. She then breaks it down into a DMAIC model and goes into how much she loves the Measure phase. She discusses how the human brain is better at connections, which is a good reinforcement of the OODA model.

Breaks down a culture model of Culture/Beliefs, Visions/Goals and Activities/Plans-and-actions influenced by external events, and how evolutionary improvements stem from compatibility with those. OODA is the tool to help determine that compatibility.

Briefly discusses how standardization fits into systems and pushes for looking at it from a stability perspective.

Goes back to the culture model but now adds idea generation and a quality test, with decisions off of them leading to revolutionary improvements. Links back to OODA.

Then quickly covers DMAIC versus DMADV and how that is another way of thinking about these concepts.

Covers Gino Wickman’s concept of visionary and integrator from Traction.

Ties OODA back to effective auditing: focus on patterns and not just numbers, grasp the bigger picture, be adaptive.

This is a big, sprawling topic for a keynote, and at times it felt like a firehose. Keynotes often benefit from a laser focus; OODA alone would have been enough. My head is reeling, and I’m already comfortable with this material. Grace is an amazing, passionate educator and she finds this material exciting. I hope most of the audience picked that up in this big-gulp approach. This systems approach, building on culture and strategy, is critical.

OODA as an audit tool is relevant, and it is a tool I think we should be teaching better. It might be a good tool for TWEF, as it ties into the team/workplace-excellence approach. OODA and situational awareness are really united in my mind, and that deserves a separate post.

Concurrent Sessions

After the keynote come the breakout sessions. As always, I end up with too many options and must make some decisions. I can never complain about having too many options at a conference.

First Impressions: The Myth of the Objective & Impartial Audit

The first session is “First Impressions: The Myth of the Objective & Impartial Audit” by William Taraszewski. I met Bill back at the 2018 World Conference on Quality and Improvement.

Bill starts by discussing subjectivity and first impressions, and how they shape audits from the very start.

Covers the science of first impressions, pointing to research on bias, how negative behavior weighs more than positive, and how this can be contextual. Draws from Amy Cuddy’s work and lays a good foundation on trust and competence and their importance in work and life in general.

Brings this back to ISO 19011:2018, “Guidelines for auditing management systems,” and clause 7.2 on determining auditor competence, which places personal behavior over knowledge and skills.

Brings up video auditing: the impressions generated from video versus in-person are pretty similar, but the magnitude of bad impressions is greater and the magnitude of positive ones is lower. That was an interesting point, and I will need to follow up on that research.

Moves to discussing impartiality in the context of ISO 19011:2018, pointing out the halo and horn effects.

Discusses prejudice versus experience as an auditor, and covers confirmation bias and how selective exposure and selective perception fit into our psychology, with the need to be careful since negative impressions outweigh positive ones.

Moves into objective evidence and how it fits into an audit.

Provides top tips for good auditor first impressions, with body language and eye contact. Most important: how to check your attitude.

This was a good session on the fundamentals, reinforcing some basics and going back to the research. Quality as a profession really needs to understand how objectivity and impartiality are virtually impossible, and how we can overcome bias.

Auditing Risk Management

Barry Craner presented on “Are you ready for an audit of your risk management system?”

Starts with how risk management is here to stay and is present in most industries. The presenter focuses on medical devices, but the concepts are very general.

Discusses the concept of “as far as possible” and residual risk. Covers this at a high level.

Covers at a high level the standard risk management process (risk identification, risk analysis, risk control, risk monitoring, risk reporting), asking: “Is the RM system acceptable? Can you describe and defend it?”

Provides an example of a risk management file sequence that matches the concept of living risk assessments. The flow goes from preliminary hazard analysis (PHA) to fault tree analysis (FTA) to FMEA. With the focus on medical devices, talks about design and process versions of both the FTA and the FMEA. This all flows from the question “Can you describe and defend your risk management program?”

In laying out the risk management program, focuses on personnel qualification as pivotal. Discusses answering the question “Are these ready for audit?” When discussing the plan, asks: “Is your risk management plan documented and reasonable? Is it ready to audit? Is the SOP followed by your company?”

When discussing risk impact, breaks it down to “Is the risk acceptable or not?” Goes on to discuss how important it is to defend the scoring rubric, asking: “Is it well defined, and can we defend it?”
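
A rubric is easier to defend when it is written down explicitly. Here is a minimal sketch of what such a rubric might look like in code; the 1–5 scales and the acceptability threshold are hypothetical, not from the presentation:

```python
# Hypothetical 1-5 scales and acceptability threshold, written down
# explicitly so the rubric itself can be shown and defended in an audit.
ACCEPTABLE_MAX = 8  # scores above this require mitigation

def risk_score(severity: int, occurrence: int) -> int:
    """Basic risk-matrix scoring: severity times occurrence."""
    assert 1 <= severity <= 5 and 1 <= occurrence <= 5
    return severity * occurrence

def acceptable(severity: int, occurrence: int) -> bool:
    return risk_score(severity, occurrence) <= ACCEPTABLE_MAX

print(acceptable(severity=4, occurrence=3))  # False -> mitigate
print(acceptable(severity=2, occurrence=2))  # True
```

Whatever the actual scales, the point is the same: the threshold and the scoring rules should exist somewhere an auditor can inspect, not just in the assessors’ heads.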

Goes back and discusses some basic concepts of hazard and harm. Asks: “Did you do this hazard assessment with enough thoroughness? Were the right hazards identified?” Recommends building an example hazards table, which is good advice. From there, answer the questions “Do your hazard analyses yield reasonable, useful information? Do you use it?”

Provides a nice example of how to build a mitigation plan out of a fault tree analysis.

The discussion on FMEAs faltered on detection; it probably could have gone a lot deeper into controls here.

For both the FTA and the FMEA, discussed how the results need to be defensible.

Risk management review, with the right metrics, is discussed at a high level. This could easily be a session of its own.

Asks the question “Were there actionable tasks? Progress on these tasks?”

It is time to stop having such general overviews at conferences, especially at a conference that is not targeted at junior personnel.