Decision Quality

The decisions we make are often complex and uncertain. Improving the decision-making process is critical to success, and yet too often we do not think about how we make decisions or how to confirm we are making good ones. To bring quality to our decisions, we need to understand what quality looks like and how to obtain it.

There is no universal best process or set of steps to follow in making good decisions. However, any good decision process needs to embed the idea of decision-quality as the measurable destination.

Decisions do not come ready to be made. You must shape them and declare the decision that must be made. All decisions have one thing in common – the best choice creates the best possibility of what you truly want. To find that best choice, you need decision quality, and you must recognize it as the destination when you get there. You cannot achieve decision quality if you are unable to visualize or describe it, nor can you claim to have achieved it if you cannot recognize it when you arrive.

What makes a Good Decision?

The six requirements for a good decision are: (1) an appropriate frame, (2) creative alternatives, (3) relevant and reliable information, (4) clear values and trade-offs, (5) sound reasoning, and (6) commitment to action. To judge the quality of any decision before you act, each requirement must be met and addressed with quality. I like representing it as a chain, because a decision is no better than the weakest link.

The frame specifies the problem or opportunity you are tackling, asking what is to be decided. It has three parts: the purpose in making the decision; the scope of what will be included and left out; and your perspective, including your point of view, how you want to approach the decision, what conversations will be needed, and with whom. Agreement on framing is essential, especially when more than one party is involved in decision making. What matters is finding the frame most appropriate for the situation. If you get the frame wrong, you will be solving the wrong problem or not dealing with the opportunity in the correct way.

The next three links are: alternatives – defining what you can do; information – capturing what you know and believe (but cannot control), and values – representing what you want and hope to achieve. These are the basis of the decision and are combined using sound reasoning, which guides you to the best choice (the alternative that gets you the most of what you want and in light of what you know). With sound reasoning, you reach clarity of intention and are ready for the final element – commitment to action.

Asking “What is the decision I should be making?” is not a simple question, and asking “On what decision should I be focusing?” is particularly challenging. It is a question that must be asked, however, because you must know what decision you are making. It defines the range within which you have creative and compelling alternatives. It defines constraints. It defines what is possible. Many organizations fail to create a rich set of alternatives and simply debate whether to accept or reject a proposal. The problem with this approach is that people frequently latch on to ideas that are easily accessible, familiar or aligned directly with their experiences.

Exploring alternatives is a combination of analysis, rigor, technology and judgement. Information captures the past and present; additional judgement is required to anticipate future consequences. What we know about the future is uncertain and therefore needs to be described with possibilities and probabilities. Questions like “What might happen?” and “How likely is it to happen?” are difficult and often compound. To produce reliable judgements about future outcomes and probabilities you must gather facts, study trends and interview experts while avoiding distortions from biases and decision traps. When one alternative provides everything desired, the choice among alternatives is not difficult. Trade-offs must be made when alternatives do not provide everything desired. You must then decide how much of one value you are willing to give up to receive more of another.
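One common way to make such trade-offs explicit is a weighted-scoring sketch. The alternatives, value dimensions and weights below are hypothetical placeholders, not anything from the text; the point is only that stating the weights forces the "how much of one value for another" conversation into the open.

```python
# Hypothetical trade-off sketch: each alternative is scored 0-10 on each
# value dimension, and the weights state how much of one value we are
# willing to trade for another.
weights = {"revenue": 0.5, "risk_reduction": 0.3, "time_to_market": 0.2}

alternatives = {
    "expand_existing_line": {"revenue": 6, "risk_reduction": 8, "time_to_market": 9},
    "launch_new_product":   {"revenue": 9, "risk_reduction": 4, "time_to_market": 3},
    "partner_with_vendor":  {"revenue": 7, "risk_reduction": 6, "time_to_market": 7},
}

def weighted_score(scores, weights):
    """Combine attribute scores into a single figure of merit."""
    return sum(weights[attr] * value for attr, value in scores.items())

# Rank alternatives by their weighted score, best first.
ranked = sorted(alternatives,
                key=lambda a: weighted_score(alternatives[a], weights),
                reverse=True)
for name in ranked:
    print(name, round(weighted_score(alternatives[name], weights), 2))
```

The ranking is only as good as the weights, which is exactly the point: the trade-offs become something the decision board can inspect and argue about, rather than an advocate's implicit preference.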

Commitment to action is reached by involving the right people in the decision efforts. The right people must include individuals who have the authority and resources to commit to the decision and to make it stick (the decision makers) and those who will be asked to execute the decided-upon actions (the implementers). Decision makers are frequently not the implementers and much of a decision’s value can be lost in the handoff to implementers. It is important to always consider the resource requirements and challenges for implementation.

These six requirements of decision quality can be used to judge the quality of the decision at the time it is made. There is no need to wait six months or six years to assess its outcome before declaring the decision’s quality. By meeting the six requirements you know, at the time of the decision, that you made a high-quality choice. You cannot simply say: “I did all the right steps.” You have to be able to judge the decision itself, not just how you got to it. Asking “How good is this decision if we make it now?” must be a central part of your process, because the missing piece may lie in the material and the research, and that is a piece that must go right.

Decision-quality is all about reducing comfort zone bias – when people do what they know how to do, rather than what is needed to make a strong, high-quality decision. You overcome the comfort zone bias by figuring out where there are gaps. Let us say the gap is with alternatives. Your process then becomes primarily a creative process to generate alternatives instead of gathering a great deal more data. Maybe we are awash in a sea of information, but we just have not done the reasoning and modelling and understanding of the consequences. This becomes more of an analytical effort. The specific gaps define where you should put your attention to improve the quality of the decision.

Leadership needs clearly defined decision rights and must understand that its role is assembling the right people to make quality decisions. Once you know how to recognize decision quality, you need an effective and efficient process to get there. That process involves many things, including structured interactions between the decision maker and the decision staff, remembering that productive discussions result when multiple parties are involved in the decision process and differences in judgement are present.

Beware Advocacy

The most common decision process tends to be an advocacy decision process – you are asking somebody to sell you an answer. Once you are in advocacy mode, you are no longer in a decision-quality mode and you cannot get the best choice out of an advocacy decision process. Advocacy suppresses alternatives. Advocacy forces confirming evidence bias and means selective attention to what supports your position. Once in advocacy mode, you are really in a sales mode and it becomes a people competition.

When you want quality in a decision, you want the alternatives to compete, not the people. From the decision board’s perspective, you want multiple alternatives in front of you, and you want to figure out which alternative beats the others in terms of the full consequences in risk, uncertainty and return. One alternative will show up better than the rest. If you can make this happen, then it is not the advocate selling an answer; it is you working out which option gives the most value for the investment.

The role outcomes play in the measuring of decision quality

Always think of decisions and outcomes as separate because when you make decisions in an uncertain world, you cannot fully control the outcomes. When looking back from an outcome to a decision, the only thing you can really tell is if you had a good outcome or a bad outcome. Hindsight bias is strong, and once triggered, it is hard to put yourself back into understanding what decisions should have been made with what you knew, or could have known, at the time.

In understanding how we use outcomes in terms of evaluating decisions, you need to understand the importance of documenting the decision and the decision quality at the time of the decision. Ask yourself, if you were going to look back two years from now, what about this decision file answers the questions: “Did we make a decision that was good?” and “What can we learn about the things about which we had some questions?” This kind of documentation is different from what people usually do. What is usually documented is the approval and the working process. There is usually no documentation answering the question: “If we are going to look back in the future, what would we need to know to be able to learn about making better decisions?”

The reason you want to look back is because that is the way you learn and improve the whole decision process. It is not for blaming; in the end, what you are trying to show in documentation is: “We made the best decision we could then. Here is what we thought about the uncertainties. Here is what we thought were the driving factors.” It’s about having a learning culture.

When decision makers and individuals understand the importance of reaching quality in each of the six requirements, they come to see meeting those requirements as a decision-making right that should be demanded as part of the decision process. To be in a position to make a good decision, they know they deserve a good frame and significantly different alternatives; without them, they cannot reach a powerful, correct conclusion. From a decision maker’s perspective, these are indeed needs and rights to be thought about. From a decision-support perspective, meeting these needs and rights is required to position the decision maker to make a good choice.

Building decision quality enables measurable value creation, and its framework can be learned, implemented and measured. Decision quality helps you navigate the complexity and uncertainty of significant, strategic choices and avoid mega-biases and big decision traps.

ASQ Audit Conference – Day 1 Morning

Day 1 of the 2019 Audit Conference.

Grace Duffy is the keynote speaker. I’ve known Grace for years and consider her a mentor and I’m always happy to hear her speak. Grace has been building on a theme around her Modular Kaizen approach and the use of the OODA Loop, and this presentation built nicely on what she presented at the Lean Six Sigma Conference in Phoenix, at WCQI and in other places.

Audits as a form of sustainability is an important point to stress, and hopefully this will be a central theme throughout the conference.

The intended purpose is to build a systems view in preparation for an effective audit, using the OODA loop to approach both evolutionary and revolutionary change.

John Boyd’s OODA loop

Grace starts with a brief overview of system and process, then moves from vision to strategy to daily work, and how that forms a Möbius strip of macro, meso, micro and individual. She talks a little about the difference between Deming’s and Juran’s approaches and does some what-if thinking about how Lean would have developed if Juran had gone to Japan instead of Deming.

Breaking down OODA (Observe, Orient, Decide, Act) as “Where am I and where is the organization,” which then feeds into decision making. She stresses how Orient covers culture and understanding the culture. Her link to Lean is a little tenuous in my mind.

She then discusses Tom Pearson’s knowledge management model with: Local Action; Management Action; Exploratory Analysis; Knowledge Building; Complex Systems; Knowledge Management; Scientific Creativity. She unites all this with systems thinking and psychology. “We’re going to share shamelessly because that’s how we learn.” “If we can’t have fun with this stuff it’s no good.”

Uniting the two, she describes the knowledge management model as part of Orient.

Puts revolutionary and evolutionary change in light of Juran’s breakthrough versus continuous improvement. From here she covers Modular Kaizen, starting with incremental change versus process redesign. From there she breaks it down into a DMAIC model and goes into how much she loves the Measure phase. She discusses how the human brain is better at connections, which is a good reinforcement of the OODA model.

Breaks down a culture model of Culture/Beliefs, Visions/Goals and Activities/Plans-and-actions influenced by external events and how evolutionary improvements stem out of compatibility with those. OODA is the tool to help determine that compatibility.

Discusses briefly how standardization fits into systems and pushes a look from a stability standpoint.

Goes back to the culture model but now adds idea generation and quality test with decisions off of it that lead to revolutionary improvements. Links back to OODA.

Then quickly covers DMAIC versus DMADV and how that is another way of thinking about these concepts.

Covers Gino Wickman’s concept of visionary and integrator from Traction.

Ties OODA back to effective auditing: focus on patterns and not just numbers, grasp the bigger picture, be adaptive.

This is a big, sprawling topic for a keynote and at times it felt like a firehose. Keynotes often benefit from a lot more laser focus; OODA alone would have been enough. My head is reeling, and I already feel comfortable with this material. Grace is an amazing, passionate educator and she finds this material exciting. I hope most of the audience picked that up in this big-gulp approach. This systems approach, building on culture and strategy, is critical.

OODA as an audit tool is relevant, and it is a tool I think we should be teaching better. Might be a good tool to do for TWEF as it ties into the team/workplace excellence approach. OODA and situational awareness are really united in my mind and that deserves a separate post.

Concurrent Sessions

After the keynote there are the breakout sessions. As always, I end up having too many options and must make some decisions. I can never complain about having too many options during a conference.

First Impressions: The Myth of the Objective & Impartial Audit

First session is “First Impressions: The Myth of the Objective & Impartial Audit” by William Taraszewski. I met Bill back at the 2018 World Conference on Quality and Improvement.

Bill starts by discussing subjectivity and first impressions and how they shape audits from the very start.

Covers the science of first impressions, pointing to research on bias, how negative behavior weighs more than positive, and how this can be contextual. Draws from Amy Cuddy’s work and lays a good foundation of trust and competence and their importance in work and life in general.

Brings this back to ISO 19011:2018 “Guidelines for auditing management systems” and clause 7.2 determining auditor competence placing personal behavior over knowledge and skills.

Brings up video auditing: the impressions generated from video versus in person are pretty similar, but the magnitude of bad impressions is greater and the magnitude of positive impressions is lower. That was an interesting point and I will need to follow up on that research.

Moves to discussing impartiality in context of ISO 19011:2018, pointing out the halo and horn effects.

Discusses prejudice versus experience as an auditor and covers confirmation bias and how selective exposure and selective perception fit into our psychology, with the need to be careful since negative outweighs positive.

Moves into objective evidence and how it fits into an audit.

Provides top tips for good auditor first impressions with body language and eye contact. Most important, how to check your attitude.

This was a good session on the fundamentals, reinforcing some basics and going back to the research. Quality as a profession really needs to understand how objectivity and impartiality are virtually impossible and how we can overcome bias.

Auditing Risk Management

Barry Craner presented on “Are you ready for an audit of your risk management system?”

Starts with how risk management is here to stay and is present in most industries. The presenter is focused on medical devices but the concepts are very general.

“As far as possible” as a concept is discussed, along with residual risk. Covers this at a high level.

Covers at a high level the standard risk management process (risk identification, risk analysis, risk control, risk monitoring, risk reporting), asking: “Is the RM system acceptable? Can you describe and defend it?”

Provides an example of a risk management file sequence that matches the concept of living risk assessments. This is a flow that goes from Preliminary Hazard analysis to Fault Tree Analysis (FTA) to FMEA. With the focus on medical devices talks about design and process for both the FTA and the FMEA. This is all from the question “Can you describe and defend your risk management program?”
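The FMEA end of that flow rests on simple arithmetic: a risk priority number (RPN) is the product of severity, occurrence and detection ratings. A minimal sketch, with made-up failure modes, scales and an illustrative action threshold (any real rubric and cut-off would need to be defined and defended, as the talk stresses):

```python
# Illustrative FMEA arithmetic: severity, occurrence and detection are each
# rated 1-10, and RPN = S x O x D. Failure modes and ratings are invented.
failure_modes = [
    {"mode": "seal leak",      "severity": 8, "occurrence": 3, "detection": 4},
    {"mode": "sensor drift",   "severity": 5, "occurrence": 6, "detection": 7},
    {"mode": "label misprint", "severity": 3, "occurrence": 2, "detection": 2},
]

ACTION_THRESHOLD = 100  # hypothetical cut-off; the real rubric must be defendable

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]
    fm["action_required"] = fm["rpn"] >= ACTION_THRESHOLD

# Report highest-risk modes first.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(fm["mode"], fm["rpn"], "action" if fm["action_required"] else "monitor")
```

Note how a high-severity mode ("seal leak") can score below a moderate one ("sensor drift") once occurrence and detection are multiplied in; this is exactly the kind of rubric behavior an auditor will ask you to defend.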

In laying out the risk management program, he focuses on personnel qualification as pivotal. Discusses answering the question “Are these ready for audit?” When discussing the plan, asks: “Is your risk management plan documented and reasonable? Ready to audit? Is the SOP followed by your company?”

When discussing risk impact, breaks it down to “Is the risk acceptable or not?” Goes on to discuss how important it is to defend the scoring rubric, asking: “Well defined, can we defend?”

Goes back and discusses some basic concepts of hazard and harm. Asks: “Did you do this hazard assessment with enough thoroughness? Were the right hazards identified?” Recommends building an example-of-hazards table, which is good advice. From there, answer the question: “Do your hazard analyses yield reasonable, useful information? Do you use it?”

Provides a nice example of how to build a mitigation plan out of a fault tree analysis.

Discussion on FMEAs faltered on detection; he probably could have gone into controls a lot deeper here.

With both the FTA and FMEA, he discussed how the results need to be defendable.

Risk management review, with the right metrics, is discussed at a high level. This could easily be a session on its own.

Asks the question “Were there actionable tasks? Progress on these tasks?”

It is time to stop having such general overviews at conferences, especially at a conference that is not targeted at junior personnel.

Overcoming Subjectivity in Risk Management and Decision Making Requires a Culture of Quality and Excellence

Risk assessments, problem solving and making good decisions need teams, but any team has groupthink challenges it must overcome. Ensuring your facilitators, team leaders and sponsors are aware of and trained on these biases will help them deal with subjectivity, understand uncertainty and drive toward better outcomes. But no matter how much work you do there, it won’t make enough of a difference until you’ve built a culture of quality and excellence.

The mindsets we are trying to build into our culture will strive to overcome a few biases in our teams that lead to subjectivity.

Bias Toward Fitting In

We have a natural desire to want to fit in. This tendency leads to two challenges:

Challenge #1: Believing we need to conform. Early in life, we realize that there are tangible benefits to be gained from following social and organizational norms and rules. As a result, we make a significant effort to learn and adhere to written and unwritten codes of behavior at work. But here’s the catch: Doing so limits what we bring to the organization.

Challenge #2: Failure to use one’s strengths. When employees conform to what they think the organization wants, they are less likely to be themselves and to draw on their strengths. When people feel free to stand apart from the crowd, they can exercise their signature strengths (such as curiosity, love for learning, and perseverance), identify opportunities for improvement, and suggest ways to exploit them. But all too often, individuals are afraid of rocking the boat.

We need to use several methods to combat the bias toward fitting in. These need to start at the cultural level. Risk management, problem solving and decision making only overcome biases when embedded in a wider, effective culture.

Encourage people to cultivate their strengths. To motivate and support employees, some companies allow them to spend a certain portion of their time doing work of their own choosing. Although this is a great idea, we need to build our organization to help individuals apply their strengths every day as a normal part of their jobs.

Managers need to help individuals identify and develop their fortes, and not just by discussing them in annual performance reviews, which are horribly ineffective. Just using an “appreciation jolt” of positive feedback can start to improve the culture. It’s particularly potent when friends, family, mentors, and coworkers share stories about how the person excels. These stories trigger positive emotions, cause us to realize the impact that we have on others, and make us more likely to continue capitalizing on our signature strengths rather than just trying to fit in.

Managers should ask themselves the following questions: Do I know what my employees’ talents and passions are? Am I talking to them about what they do well and where they can improve? Do our goals and objectives include making maximum use of employees’ strengths?

Increase awareness and engage workers. If people don’t see an issue, you can’t expect them to speak up about it.  

Model good behavior. Employees take their cues from the managers who lead them.

Bias Toward Experts

This is going to sound counter-intuitive, especially since expertise is so critical. Yet our biases about experts can cause a few challenges.

Challenge #1: An overly narrow view of expertise. Organizations tend to define “expert” too narrowly, relying on indicators such as titles, degrees, and years of experience. However, experience is a multidimensional construct. Different types of experience—including time spent on the front line, with a customer or working with particular people—contribute to understanding a problem in detail and creating a solution.

A bias toward experts can also lead people to misunderstand the potential drawbacks that come with increased time and practice in the job. Though experience improves efficiency and effectiveness, it can also make people more resistant to change and more likely to dismiss information that conflicts with their views.

Challenge #2: Inadequate frontline involvement. Frontline employees—the people directly involved in creating, selling, delivering, and servicing offerings and interacting with customers—are frequently in the best position to spot and solve problems. Too often, though, they aren’t empowered to do so.

The following tactics can help organizations overcome weaknesses of the expert bias.

Encourage workers to own problems that affect them. Make sure that your organization is adhering to the principle that the person who experiences a problem should fix it when and where it occurs. This prevents workers from relying too heavily on experts and helps them avoid making the same mistakes again. Tackling the problem immediately, when the relevant information is still fresh, increases the chances that it will be successfully resolved. Build a culture rich with problem-solving and risk management skills and behaviors.

Give workers different kinds of experience. Recognize that both doing the same task repeatedly (“specialized experience”) and switching between different tasks (“varied experience”) have benefits. Over the course of a single day, a specialized approach is usually fastest. But over time, switching activities across days promotes learning and keeps workers more engaged. Both specialization and variety are important to continuous learning.

Empower employees to use their experience. Organizations should aggressively seek to identify and remove barriers that prevent individuals from using their expertise. Solving the customer’s problems in innovative, value-creating ways—not navigating organizational impediments— should be the challenging part of one’s job.

In short, we need to build the capability to leverage experts at all levels, not just a few in their ivory towers.

These two biases can be overcome and through that we can start building the mindsets to deal effectively with subjectivity and uncertainty. Going further, build the following as part of our team activities as sort of a quality control checklist:

  1. Check for self-interest bias.
  2. Check for the affect heuristic. Has the team fallen in love with its own output?
  3. Check for groupthink. Were dissenting views explored adequately?
  4. Check for saliency bias. Is this rooted in past successes?
  5. Check for confirmation bias.
  6. Check for availability bias.
  7. Check for anchoring bias.
  8. Check for the halo effect.
  9. Check for the sunk cost fallacy and endowment effect.
  10. Check for overconfidence, the planning fallacy, optimistic biases and competitor neglect.
  11. Check for disaster neglect. Have the team conduct a pre-mortem: imagine that the worst has happened and develop a story about its causes.
  12. Check for loss aversion.

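The checklist above can be turned into a lightweight review artifact rather than a verbal once-over. A sketch, with purely illustrative names, that flags any check the team has not written a note against:

```python
# Sketch of a bias-check review record: each check must carry a short note
# from the team, and anything blank is flagged as still open.
BIAS_CHECKS = [
    "self-interest bias",
    "affect heuristic",
    "groupthink",
    "saliency bias",
    "confirmation bias",
    "availability bias",
    "anchoring bias",
    "halo effect",
    "sunk cost fallacy / endowment effect",
    "overconfidence and planning fallacy",
    "disaster neglect (pre-mortem)",
    "loss aversion",
]

def open_checks(findings):
    """findings maps each check to a note; return checks with no real note."""
    return [check for check in BIAS_CHECKS if not findings.get(check, "").strip()]

# Usage: only one check has been addressed, so eleven remain open.
open_items = open_checks({"confirmation bias": "sought disconfirming QA data"})
print(len(open_items), "checks still open")
```

The value is in the documentation trail: the completed record becomes part of the decision file discussed earlier, something you can look back at in two years.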
Uncertainty and Subjectivity in Risk Management

The July 2019 monthly gift to members of the ASQ is a lot of material on Failure Mode and Effect Analysis (FMEA). Reading through the material got me thinking about subjectivity in risk management.

Risk assessments have a core of the subjective to them, frequently including assumptions about the nature of the hazard, possible exposure pathways, and judgments for the likelihood that alternative risk scenarios might occur. Gaps in the data and information about hazards, uncertainty about the most likely projection of risk, and incomplete understanding of possible scenarios contribute to uncertainties in risk assessment and risk management. You can go even further and say that risk is socially constructed, and that risk is at once both objectively verifiable and what we perceive or feel it to be. Then again, the same can be said of most of science.

Risk is a future chance of loss given exposure to a hazard. Risk estimates, or qualitative ratings of risk, are necessarily projections of future consequences. Thus, the true probability of the risk event and its consequences cannot be known in advance. This creates a need for subjective judgments to fill-in information about an uncertain future. In this way risk management is rightly seen as a form of decision analysis, a form of making decisions against uncertainty.

Everyone has a mental picture of risk, but the formal mathematics of risk analysis are inaccessible to most, relying on probability theory with two major schools of thought: the frequency school and the subjective probability school. The frequency school says probability is a count of the number of successes divided by the total number of trials. Uncertainty that is readily characterized using frequentist probability methods is “aleatory” – due to randomness (or random sampling in practice). Frequentist methods give an estimate of “measured” uncertainty; however, they are arguably trapped in the past because they do not lend themselves easily to predicting future successes.

In risk management we tend to measure uncertainty with a combination of frequentist and subjectivist probability distributions. For example, a manufacturing process risk assessment might begin with classical statistical control data and analyses. But projecting the risks from a process change might call for expert judgments of, for example, possible failure modes and the probability that failures might occur during a defined period. The risk assessors bring prior expert knowledge and, if we are lucky, some prior data, and start to focus the target of the risk decision using subjective judgments of probabilities.
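One standard way to combine the two schools is a Beta-Binomial update: the expert's subjective judgment becomes a prior, and observed frequentist data updates it. The numbers below are invented for illustration only.

```python
# Sketch: combine a subjective expert prior with frequentist batch data
# via a Beta-Binomial update. All numbers are hypothetical.

# Expert judgment: failure rate "around 5%", held with the weight of
# roughly 20 batches of evidence -> Beta(1, 19), mean = 1/20 = 0.05.
alpha_prior, beta_prior = 1.0, 19.0

# Observed process data: 2 failures in 50 batches (frequentist estimate 0.04).
failures, batches = 2, 50

# Conjugate update: add failures to alpha, non-failures to beta.
alpha_post = alpha_prior + failures
beta_post = beta_prior + (batches - failures)
posterior_mean = alpha_post / (alpha_post + beta_post)

print(f"frequentist estimate: {failures / batches:.3f}")
print(f"posterior mean:       {posterior_mean:.3f}")
```

The posterior mean lands between the expert's 5% and the data's 4%, weighted by how much evidence each side carries, which is the sense in which risk assessment "focuses the target" by blending prior knowledge with data.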

Some have argued that a failure to formally control subjectivity, in relation to probability judgments, is the failure of risk management. This was an argument that some made during WCQI, for example. Subjectivity cannot be eliminated, nor is it an inherent limitation. Rather, the “problem with subjectivity” more precisely concerns two elements:

  1. A failure to recognize where and when subjectivity enters and might create problems in risk assessment and risk-based decision making; and
  2. A failure to implement controls on subjectivity where it is known to occur.

Because risk is about the chance of adverse outcomes of events that are yet to occur, subjective judgments of one form or another will always be required in both risk assessment and risk management decision-making.

We control subjectivity in risk management by:

  • Raising awareness of where/when subjective judgments of probability occur in risk assessment and risk management
  • Identifying heuristics and biases where they occur
  • Improving the understanding of probability among the team and individual experts
  • Calibrating experts individually
  • Applying knowledge from formal expert elicitation
  • Using expert group facilitation when group probability judgments are sought

Each of these deserves its own future post.

Self Awareness and Problem Solving

We often try to solve problems as if we are outside them. When people describe a problem you will see them pointing away from themselves – you hear the word “them” a lot. “They” are seen as the problem. However, truly hard problems are system problems, and if you are part of the system (hint – you are) then you are part of the problem.

Being inside the problem means we have to understand bias and our blind spots – as individuals, as teams and as organizations.

Understanding our blind spots

An easy tool to start thinking about this is the Johari window, a technique that helps people better understand their relationship with themselves and others. There are two axes, self and others, forming four quadrants:

  • Arena – What is known by both self and others. It is also often referred to as the Public Area.
  • Blind spot – This region deals with knowledge unknown to self but visible to others, such as shortcomings or annoying habits.
  • Façade – This includes the features and knowledge of the individual which are not known to others. I prefer when this is called the Hidden. It was originally called the facade because it can include things that are untrue, sustained only by the individual’s claims.
  • Unknown – The characteristics of the person that are unknown to both self and others.
The original Johari Window (based on Luft, 1969)
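The quadrant logic is really just a lookup on two flags. A tiny sketch, with placeholder traits invented for illustration:

```python
# The Johari Window as a lookup: (known to self, known to others) -> quadrant.
QUADRANTS = {
    (True, True):   "Arena",
    (False, True):  "Blind spot",
    (True, False):  "Facade (Hidden)",
    (False, False): "Unknown",
}

def johari_quadrant(known_to_self: bool, known_to_others: bool) -> str:
    return QUADRANTS[(known_to_self, known_to_others)]

# Placeholder traits purely for illustration.
traits = {
    "listens well":            (True, True),
    "interrupts in meetings":  (False, True),
    "worried about the reorg": (True, False),
}
for trait, flags in traits.items():
    print(trait, "->", johari_quadrant(*flags))
```

Trivial as the mapping is, writing it out makes the later advice concrete: growth means moving items out of the blind spot and unknown cells and into the arena.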

An example of a basic Johari Window (my own) can be found here.

Users are advised to reduce the area of the ‘blind spot’ and the ‘unknown’ while expanding the ‘arena’. The premise is that the smaller the hidden personality, the better the person becomes at relating to other people.

The Johari Window is popular among business coaches as a cognitive tool for understanding intrapersonal and interpersonal relationships. There isn’t much value in this tool as an empirical framework, and it hasn’t held up to academic rigor. Still, like many such things, it can bring to light the central point that we need to understand our hidden biases.

Another good tool to start understanding biases is a personal audit.

Using the Johari Window for Teams

Teams and organizations have blind spots, think of them as negative input factors or as procedural negatives.

The Johari Window can also be applied to knowledge transparency, and it fits nicely with the concepts of tacit and explicit knowledge, bringing to light knowledge-seeking and knowledge-sharing behavior. For example, the ‘arena’ can simply become the ‘unknown’ if there is no demand for the knowledge from the recipient, or no offer to share it from the owner.

The Johari Window transforms, with the four quadrants changing to:

  • Arena – What the organization knows it knows. Contains knowledge available to the team as well as related organizations. Realizing such improvements is usually demanded by network partners and should be a priority for implementation.
  • Façade – What the organization does not know it knows. Knowledge that is only available to parts of the focal organization. Derived improvements are unexpected, but beneficial for the organization and its collaborations.
  • Blind Spot – What the organization knows it does not know. Knowledge only available to other organizations – internal and external. This area should be investigated with the highest priority, to benefit from insights and to maintain effectiveness.
  • Unknown – What the organization does not know it does not know, and what the organization believes it knows but does not actually know. Knowledge about opportunities for improvement that is not available to anyone. Its identification leads to the Façade sector.

We are firmly in the land of uncertainty, ignorance and surprise, and we are starting to perform a risk based approach to our organization blind spots. At the heart, knowledge management, problem solving and risk management are all very closely intertwined.