Identifying Waste in Risk Management

Risk management often devolves into a check-the-box, non-value-added activity. While many organizations ensure they have the right processes in place, they still end up not protecting themselves against risk effectively. Many of our organizations struggle to understand risk and to apply a risk mindset in productive ways.

As quality professionals, we should apply the same improvement tools to our risk management processes that we apply to everything else.

To improve a process, we first need to understand the value it delivers. Risk management is the identification, evaluation, and prioritization of risks (defined in ISO 31000 as “the effect of uncertainty on objectives”), followed by the coordinated and economical application of resources to minimize, monitor, and control the probability or impact of unfortunate events, or to maximize the realization of opportunities.

Risk management, then, is an application of decision quality to reduce the effect of uncertainty on objectives.
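The probability-and-impact framing behind most risk assessments can be sketched as a simple scoring matrix. The scales, thresholds, and risks below are illustrative assumptions for demonstration, not taken from any standard:

```python
# Illustrative sketch: scoring risks as probability x impact, then
# prioritizing. Scales, thresholds, and the example risks are assumptions
# for demonstration only.

def risk_score(probability, impact):
    """Score a risk on 1-5 ordinal scales for probability and impact."""
    assert 1 <= probability <= 5 and 1 <= impact <= 5
    return probability * impact

def classify(score):
    """Bucket a score into an action category (illustrative cut-offs)."""
    if score >= 15:
        return "mitigate now"
    if score >= 8:
        return "monitor and plan controls"
    return "accept and review periodically"

risks = [
    ("supplier single-source", 4, 5),
    ("data-entry error", 3, 2),
    ("regulation change", 2, 4),
]

# Rank the highest-scoring risks first
for name, p, i in sorted(risks, key=lambda r: risk_score(r[1], r[2]), reverse=True):
    s = risk_score(p, i)
    print(f"{name}: score={s}, action={classify(s)}")
```

The numeric scores are, of course, only as good as the subjective judgments feeding them, which is exactly where the waste and bias discussed below creep in.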

Risk evaluation is the step where the knowledge base is assessed and a summary judgment is reached on the risks and uncertainties involved in the case under investigation. This evaluation must take the values of the decision-makers into account, along with a careful understanding of the practical burden of proof in the particular decision.

Does risk management, then, create the value perceived by its stakeholders? Can we apply a value-stream approach and look to reduce wastes? Some common ones include:

Common wastes in risk management:

  • Defective Information — Example: “The thing that hurts you is never in a risk matrix”; “You have to deliver a risk matrix, but how you got there doesn’t matter.” Reflects: missing stakeholder viewpoints, a poor risk management process, failure to consider multiple sources of uncertainty, poor input data, lack of information sharing.
  • Overproduction — Example: “If it is just a checklist sitting somewhere, then people don’t use it, and it becomes a wasted effort.” Reflects: missing standardization, serial processing and creation of similar documents, reports that go unused after creation.
  • Stockpiling Information — Example: “We’re uncertain what the effects of the risk are at this early stage; I think it would make more sense to do it after.” Reflects: documented risks lying around unutilized during a project, change, or operations.
  • Unnecessary Movement of People — Example: “It can be time-consuming walking around to get information about risk.” Reflects: lack of documentation; risks retrievable only by going around asking employees.
  • Rework — Example: “Time spent in risk identification is always little at the beginning of a project, because everybody wants to start and do the first part as quickly as possible.” Reflects: low-quality initial work, tick-the-box risk management.
  • Information Rot — Example: “Risk reports are always out of date.” Reflects: documents that were supposed to be updated and re-evaluated but were not, becoming partially obsolete over time.

Once we understand waste in risk management, we can identify when it happens and engage in improvement activities. We should do this based on the principles of decision quality, remaining keenly aware of the role uncertainty plays.

References

  • Anjum, Rani Lill, and Elena Rocca. “From Ideal to Real Risk: Philosophy of Causation Meets Risk Analysis.” Risk Analysis, vol. 39, no. 3, 2019, pp. 729–740, doi:10.1111/risa.13187.
  • Hansson, Sven Ove, and Terje Aven. “Is Risk Analysis Scientific?” Risk Analysis, vol. 34, no. 7, 2014, pp. 1173–1183, doi:10.1111/risa.12230.
  • Walker, Warren E., et al. “Deep Uncertainty.” Encyclopedia of Operations Research and Management Science, 2013, pp. 395–402, doi:10.1007/978-1-4419-1153-7_1140.
  • Willumsen, Pelle, et al. “Value Creation through Project Risk Management.” International Journal of Project Management, 2019, doi:10.1016/j.ijproman.2019.01.007.

VUCA – Accented Just Right, It Is a Profanity

Talk about strategy, risk management, or change, and it is inevitable that the acronym VUCA (short for volatility, uncertainty, complexity, and ambiguity) will come up. VUCA is basically a catchall for “Hey, it’s crazy out there!” And like many catchalls, it is misleading: VUCA conflates four distinct types of challenges that demand four distinct types of responses. VUCA can quickly become a crutch, a way to throw off the hard work of strategy and planning; after all, you can’t prepare for a VUCA world, right?

The mistake folks often make here is treating these four traits as a single idea, which leads to poorer decision making.

VUCA really isn’t a tool. It’s a checklist of four things that hopefully your system is paying attention to. All four represent distinct elements that make our environment and organization harder to grasp and control. 

Overcoming Subjectivity in Risk Management and Decision Making Requires a Culture of Quality and Excellence

Risk assessments, problem solving, and good decision making all need teams, but every team has groupthink challenges it must overcome. Ensuring your facilitators, team leaders, and sponsors are aware of and trained on these biases will help them deal with subjectivity, understand uncertainty, and drive to better outcomes. But no matter how much work you do there, it won’t make enough of a difference until you’ve built a culture of quality and excellence.

The mindsets we are trying to build into our culture strive to overcome a few biases that introduce subjectivity into our teams.

Bias Toward Fitting In

We have a natural desire to want to fit in. This tendency leads to two challenges:

Challenge #1: Believing we need to conform. Early in life, we realize that there are tangible benefits to be gained from following social and organizational norms and rules. As a result, we make a significant effort to learn and adhere to written and unwritten codes of behavior at work. But here’s the catch: Doing so limits what we bring to the organization.

Challenge #2: Failure to use one’s strengths. When employees conform to what they think the organization wants, they are less likely to be themselves and to draw on their strengths. When people feel free to stand apart from the crowd, they can exercise their signature strengths (such as curiosity, love for learning, and perseverance), identify opportunities for improvement, and suggest ways to exploit them. But all too often, individuals are afraid of rocking the boat.

We need to use several methods to combat the bias toward fitting in. These need to start at the cultural level. Risk management, problem solving and decision making only overcome biases when embedded in a wider, effective culture.

Encourage people to cultivate their strengths. To motivate and support employees, some companies allow them to spend a certain portion of their time doing work of their own choosing. Although this is a great idea, we need to build our organization to help individuals apply their strengths every day as a normal part of their jobs.

Managers need to help individuals identify and develop their fortes—and not just by discussing them in annual performance reviews, which are horribly ineffective. Simply using an “appreciation jolt” of positive feedback can start to improve the culture. It’s particularly potent when friends, family, mentors, and coworkers share stories about how the person excels. These stories trigger positive emotions, cause us to realize the impact we have on others, and make us more likely to continue capitalizing on our signature strengths rather than just trying to fit in.

Managers should ask themselves the following questions: Do I know what my employees’ talents and passions are? Am I talking to them about what they do well and where they can improve? Do our goals and objectives include making maximum use of employees’ strengths?

Increase awareness and engage workers. If people don’t see an issue, you can’t expect them to speak up about it.  

Model good behavior. Employees take their cues from the managers who lead them.

Bias Toward Experts

This is going to sound counter-intuitive, especially since expertise is so critical. Yet our biases about experts can cause a few challenges.

Challenge #1: An overly narrow view of expertise. Organizations tend to define “expert” too narrowly, relying on indicators such as titles, degrees, and years of experience. However, experience is a multidimensional construct. Different types of experience—including time spent on the front line, with a customer or working with particular people—contribute to understanding a problem in detail and creating a solution.

A bias toward experts can also lead people to misunderstand the potential drawbacks that come with increased time and practice in the job. Though experience improves efficiency and effectiveness, it can also make people more resistant to change and more likely to dismiss information that conflicts with their views.

Challenge #2: Inadequate frontline involvement. Frontline employees—the people directly involved in creating, selling, delivering, and servicing offerings and interacting with customers—are frequently in the best position to spot and solve problems. Too often, though, they aren’t empowered to do so.

The following tactics can help organizations overcome weaknesses of the expert bias.

Encourage workers to own problems that affect them. Make sure that your organization is adhering to the principle that the person who experiences a problem should fix it when and where it occurs. This prevents workers from relying too heavily on experts and helps them avoid making the same mistakes again. Tackling the problem immediately, when the relevant information is still fresh, increases the chances that it will be successfully resolved. Build a culture rich with problem-solving and risk management skills and behaviors.

Give workers different kinds of experience. Recognize that both doing the same task repeatedly (“specialized experience”) and switching between different tasks (“varied experience”) have benefits. Yes, over the course of a single day, a specialized approach is usually fastest. But over time, switching activities across days promotes learning and keeps workers more engaged. Both specialization and variety are important to continuous learning.

Empower employees to use their experience. Organizations should aggressively seek to identify and remove barriers that prevent individuals from using their expertise. Solving the customer’s problems in innovative, value-creating ways, not navigating organizational impediments, should be the challenging part of one’s job.

In short, we need to build the capability to leverage experts at all levels, not just a few in their ivory towers.

These two biases can be overcome, and through that we can start building the mindsets needed to deal effectively with subjectivity and uncertainty. Going further, build the following into our team activities as a sort of quality-control checklist:

  1. Check for self-interest bias
  2. Check for the affect heuristic. Has the team fallen in love with its own output?
  3. Check for group think. Were dissenting views explored adequately?
  4. Check for saliency bias. Is this rooted in past successes?
  5. Check for confirmation bias.
  6. Check for availability bias
  7. Check for anchoring bias
  8. Check for halo effect
  9. Check for sunk cost fallacy and endowment effect
  10. Check for overconfidence, planning fallacy, optimistic biases, competitor neglect
  11. Check for disaster neglect. Have the team conduct a premortem: imagine that the worst has happened and develop a story about its causes.
  12. Check for loss aversion

Uncertainty and Subjectivity in Risk Management

The July 2019 monthly gift to members of the ASQ is a lot of material on Failure Mode and Effects Analysis (FMEA). Reading through the material got me thinking about subjectivity in risk management.
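FMEA makes the subjectivity concrete: severity, occurrence, and detection are each rated on ordinal scales (conventionally 1–10) and multiplied into a Risk Priority Number. A minimal sketch, with hypothetical failure modes:

```python
# Minimal FMEA sketch: Risk Priority Number = severity x occurrence x detection.
# The 1-10 scales are conventional; the failure modes and ratings below are
# hypothetical examples.

def rpn(severity, occurrence, detection):
    """Risk Priority Number; higher means higher priority."""
    for rating in (severity, occurrence, detection):
        assert 1 <= rating <= 10
    return severity * occurrence * detection

failure_modes = [
    ("seal leak",      8, 3, 4),
    ("label misprint", 5, 4, 2),
    ("sensor drift",   6, 5, 7),
]

# Work the highest-RPN failure modes first
for name, s, o, d in sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True):
    print(f"{name}: RPN={rpn(s, o, d)}")
```

Every one of those ratings is a judgment call by the team, which is precisely why the subjectivity discussed below matters.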

Risk assessments have a core of the subjective to them, frequently including assumptions about the nature of the hazard, possible exposure pathways, and judgments for the likelihood that alternative risk scenarios might occur. Gaps in the data and information about hazards, uncertainty about the most likely projection of risk, and incomplete understanding of possible scenarios contribute to uncertainties in risk assessment and risk management. You can go even further and say that risk is socially constructed, and that risk is at once both objectively verifiable and what we perceive or feel it to be. Then again, the same can be said of most of science.

Risk is a future chance of loss given exposure to a hazard. Risk estimates, or qualitative ratings of risk, are necessarily projections of future consequences. Thus, the true probability of the risk event and its consequences cannot be known in advance. This creates a need for subjective judgments to fill in information about an uncertain future. In this way, risk management is rightly seen as a form of decision analysis: making decisions under uncertainty.

Everyone has a mental picture of risk, but the formal mathematics of risk analysis are inaccessible to most, relying on probability theory with its two major schools of thought: the frequency school and the subjective probability school. The frequency school says probability is the count of successes divided by the total number of trials. Uncertainty that is readily characterized using frequentist methods is “aleatory”—due to randomness (or random sampling in practice). Frequentist methods give an estimate of “measured” uncertainty; however, they are arguably trapped in the past, because they do not lend themselves easily to predicting future successes.

In risk management we tend to measure uncertainty with a combination of frequentist and subjectivist probability distributions. For example, a manufacturing-process risk assessment might begin with classical statistical control data and analyses. But projecting the risks from a process change might call for expert judgments of, say, possible failure modes and the probability that failures might occur during a defined period. The risk assessors bring prior expert knowledge and, if we are lucky, some prior data, and start to focus the target of the risk decision using subjective judgments of probabilities.
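One hedged way to picture combining the two schools is a beta-binomial update: the process data supplies the counts, while the expert's belief enters as a Beta prior. The counts and prior parameters below are illustrative assumptions:

```python
# Sketch: blending frequentist data with a subjective expert prior via a
# beta-binomial update. All numbers are illustrative assumptions.

def frequentist_estimate(failures, trials):
    """Classical point estimate: observed failure fraction."""
    return failures / trials

def posterior_mean(failures, trials, prior_alpha, prior_beta):
    """Mean of Beta(prior_alpha + failures, prior_beta + trials - failures)."""
    return (prior_alpha + failures) / (prior_alpha + prior_beta + trials)

failures, trials = 2, 50   # e.g. counts from statistical control data
alpha, beta = 1.0, 19.0    # expert's prior belief: roughly a 5% failure rate

print(frequentist_estimate(failures, trials))          # data alone
print(posterior_mean(failures, trials, alpha, beta))   # data blended with prior
```

With little data the subjective prior dominates; as trials accumulate, the posterior converges toward the frequentist estimate, which mirrors how expert judgment should gradually yield to evidence.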

Some have argued that a failure to formally control subjectivity in probability judgments is the failure of risk management; it was an argument some made during WCQI, for example. But subjectivity cannot be eliminated, nor is it an inherent limitation. Rather, the “problem with subjectivity” more precisely concerns two elements:

  1. A failure to recognize where and when subjectivity enters and might create problems in risk assessment and risk-based decision making; and
  2. A failure to implement controls on subjectivity where it is known to occur.

Because risk is about the chance of adverse outcomes of events that are yet to occur, subjective judgments of one form or another will always be required in both risk assessment and risk management decision-making.

We control subjectivity in risk management by:

  • Raising awareness of where/when subjective judgments of probability occur in risk assessment and risk management
  • Identifying heuristics and biases where they occur
  • Improving the understanding of probability among the team and individual experts
  • Calibrating experts individually
  • Applying knowledge from formal expert elicitation
  • Using expert group facilitation when group probability judgments are sought

Each one of these is its own future post.
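To give a taste of what calibrating an expert involves: one standard measure is the Brier score, which compares probability judgments with what actually happened. The forecast/outcome pairs below are made up for illustration:

```python
# Sketch: scoring an expert's probability judgments with the Brier score
# (mean squared difference between forecast and 0/1 outcome; lower is
# better, 0 is perfect). Forecasts and outcomes are made-up examples.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# The expert's probability that each risk event would occur, and what happened.
forecasts = [0.9, 0.7, 0.2, 0.1]
outcomes  = [1,   1,   0,   0]

print(round(brier_score(forecasts, outcomes), 4))  # 0.0375
```

Tracking scores like this over time lets you see whether an expert's “80% likely” really means 80%, and feed that back into training.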