Identifying Waste in Risk Management

Risk management often devolves into a check-the-box activity that adds no value to the organization. Many organizations put the right processes in place yet still fail to protect themselves against risk effectively, because they struggle to understand risk and apply a risk mindset in productive ways.

As quality professionals, we should apply the same improvement tools to our risk management processes that we apply to everything else.

To improve a process, we first need to understand the value from the process. Risk management is the identification, evaluation, and prioritization of risks (defined in ISO 31000 as the effect of uncertainty on objectives) followed by coordinated and economical application of resources to minimize, monitor, and control the probability or impact of unfortunate events or to maximize the realization of opportunities.

Risk management, then, is an application of decision quality to reduce uncertainty on objectives.

Risk evaluation is the step where the knowledge base is evaluated and a summary judgment is reached on the risks and uncertainties involved in the case under investigation. This evaluation must take the values of the decision-makers into account, and it requires a careful understanding of what the practical burden of proof is in the particular decision.

Does risk management, then, create the value perceived by the stakeholders? Can we apply a value-stream approach and look to reduce wastes? Some common ones include:

  • Defective Information. Example: “The thing that hurts you is never in a risk matrix”; “You have to deliver a risk matrix, but how you got there doesn’t matter.” Reflects: missing stakeholder viewpoints, a poor risk management process, failure to consider multiple sources of uncertainty, poor input data, and a lack of information sharing.
  • Overproduction. Example: “If it is just a checklist sitting somewhere, then people don’t use it, and it becomes a wasted effort.” Reflects: missing standardization, serial processing and creation of similar documents, and reports that are not used after creation.
  • Stockpiling Information. Example: “We’re uncertain what the effects of the risk are at this early stage; I think it would make more sense to do it after.” Reflects: documented risks lying around unutilized during a project, change, or operations.
  • Unnecessary Movement of People. Example: “It can be time consuming walking around to get information about risk.” Reflects: lack of documentation; risks only retrievable by going around asking employees.
  • Rework. Example: “Time spent in risk identification is always little in the beginning of a project because everybody wants to start and then do the first part as quickly as possible.” Reflects: low-quality initial work and tick-the-box risk management.
  • Information Rot. Example: “Risk reports are always out of date.” Reflects: documents that were supposed to be updated and re-evaluated but were not, becoming partially obsolete over time.

Common wastes in risk management

Once we understand waste in risk management, we can identify when it happens and engage in improvement activities. We should do this based on the principles of decision quality, remaining very aware of the role uncertainty plays.

References

  • Anjum, Rani Lill, and Elena Rocca. “From Ideal to Real Risk: Philosophy of Causation Meets Risk Analysis.” Risk Analysis, vol. 39, no. 3, 2018, pp. 729–740. doi:10.1111/risa.13187.
  • Hansson, Sven Ove, and Terje Aven. “Is Risk Analysis Scientific?” Risk Analysis, vol. 34, no. 7, 2014, pp. 1173–1183. doi:10.1111/risa.12230.
  • Walker, Warren E., et al. “Deep Uncertainty.” Encyclopedia of Operations Research and Management Science, 2013, pp. 395–402. doi:10.1007/978-1-4419-1153-7_1140.
  • Willumsen, Pelle, et al. “Value Creation through Project Risk Management.” International Journal of Project Management, 2019. doi:10.1016/j.ijproman.2019.01.007.

Risk Based Data Integrity Assessment

A quick overview: the risk-based approach uses three factors: data criticality, existing controls, and level of detection.

When assessing current controls, technical controls (properly implemented) are stronger than operational or organizational controls as they can eliminate the potential for data falsification or human error rather than simply reducing/detecting it. 

For criticality, it helps to build a table based on what the data is used for. For example:

For controls, use a table like the one below. Rank each column and then multiply the numbers together to get a final control ranking.  For example, if a process has Esign (1), no access control (3), and paper archival (2) then the control ranking would be 6 (1 x 3 x 2). 

Determine detectability using the table below: rank each column, then multiply the numbers together to get a final detectability ranking.

Another way to look at these scores:

Multiply the scores above to determine a risk ranking and move ahead with mitigations. Mitigation should drive risk as low as possible, though the following table can be used to help determine priority.

  • Risk rating >25: High Risk (potential impact to patient safety or product quality); mitigation mandatory
  • Risk rating 12-25: Moderate Risk (no impact to patient safety or product quality, but potential regulatory risk); mitigation recommended
  • Risk rating <12: Negligible data integrity (DI) risk; mitigation not required
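The arithmetic described above can be sketched in a few lines. This is an illustrative sketch only: the factor values below are hypothetical, and real rankings would come from your own criticality, control, and detectability tables.

```python
def risk_ranking(criticality: int, controls: int, detectability: int) -> int:
    """Multiply the three factor rankings to get an overall risk rating."""
    return criticality * controls * detectability

def action(rating: int) -> str:
    """Map a rating to the mitigation bands from the table above."""
    if rating > 25:
        return "High Risk: mitigation mandatory"
    if rating >= 12:
        return "Moderate Risk: mitigation recommended"
    return "Negligible DI Risk: mitigation not required"

# Control ranking 6 (1 x 3 x 2, as in the text), with a hypothetical
# criticality of 3 and detectability of 2, gives 36: mitigation mandatory.
print(action(risk_ranking(3, 6, 2)))
```

The multiplication mirrors how the control and detectability rankings are themselves built (rank each column, multiply across), so the final rating grows quickly when several factors are weak at once.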

In the case of long-term risk remediation actions, risk reducing short-term actions shall be implemented to reduce risk and provide an acceptable level of governance until the long-term remediation actions are completed.

Relevant site procedures (e.g., change control, validation policy) should outline the scope of additional testing through the change management process.

Reassessment of the system may be completed following the completion of remediation activities. The reassessment may be done at any time during the remediation process to document the impact of the remediation actions.

Once final remediation is complete, a reassessment of the equipment/system should be completed to demonstrate that the risk rating has been mitigated by the remediation actions taken. Think living risk assessment.

Uncertainty and Subjectivity in Risk Management

The July 2019 monthly gift to ASQ members is a collection of material on Failure Mode and Effects Analysis (FMEA). Reading through it got me thinking about subjectivity in risk management.

Risk assessments have a subjective core, frequently including assumptions about the nature of the hazard, possible exposure pathways, and judgments of the likelihood that alternative risk scenarios might occur. Gaps in the data and information about hazards, uncertainty about the most likely projection of risk, and incomplete understanding of possible scenarios all contribute to uncertainties in risk assessment and risk management. You can go even further and say that risk is socially constructed: risk is at once both objectively verifiable and what we perceive or feel it to be. Then again, the same can be said of most of science.

Risk is a future chance of loss given exposure to a hazard. Risk estimates, or qualitative ratings of risk, are necessarily projections of future consequences; the true probability of the risk event and its consequences cannot be known in advance. This creates a need for subjective judgments to fill in information about an uncertain future. In this way risk management is rightly seen as a form of decision analysis: making decisions against uncertainty.

Everyone has a mental picture of risk, but the formal mathematics of risk analysis are inaccessible to most, relying on probability theory with two major schools of thought: the frequency school and the subjective probability school. The frequency school says probability is a count of the number of successes divided by the total number of trials. Uncertainty that is readily characterized using frequentist probability methods is “aleatory”: due to randomness (or random sampling in practice). Frequentist methods give an estimate of “measured” uncertainty; however, they are arguably trapped in the past because they do not lend themselves easily to predicting future successes.

In risk management we tend to measure uncertainty with a combination of frequentist and subjectivist probability distributions. For example, a manufacturing process risk assessment might begin with classical statistical control data and analyses. But projecting the risks from a process change might call for expert judgments of e.g. possible failure modes and the probability that failures might occur during a defined period. The risk assessor(s) bring prior expert knowledge and, if we are lucky, some prior data, and start to focus the target of the risk decision using subjective judgments of probabilities.
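The combination described above, prior expert knowledge updated with process data, can be sketched with a simple beta-binomial update. This is an illustrative choice of model, not one prescribed by the text, and the prior parameters are invented expert judgments.

```python
# Expert prior: failures are believed rare, encoded as Beta(1, 19)
# (prior mean failure probability = 1 / 20 = 5%). These numbers are
# hypothetical stand-ins for elicited expert judgment.
alpha, beta = 1.0, 19.0

# Observed frequentist data from the process: 2 failures in 100 runs.
failures, runs = 2, 100

# Bayesian update: posterior is Beta(alpha + failures, beta + successes).
post_alpha = alpha + failures
post_beta = beta + (runs - failures)

posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"Posterior mean failure probability: {posterior_mean:.3f}")  # 0.025
```

The posterior blends the two schools: with little data the expert prior dominates, and as runs accumulate the frequentist evidence takes over.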

Some have argued that a failure to formally control subjectivity in probability judgments is the failure of risk management (an argument some made during WCQI, for example). Subjectivity cannot be eliminated, nor is it an inherent limitation. Rather, the “problem with subjectivity” more precisely concerns two elements:

  1. A failure to recognize where and when subjectivity enters and might create problems in risk assessment and risk-based decision making; and
  2. A failure to implement controls on subjectivity where it is known to occur.

Because risk is about the chance of adverse outcomes of events that are yet to occur, subjective judgments of one form or another will always be required in both risk assessment and risk management decision-making.

We control subjectivity in risk management by:

  • Raising awareness of where/when subjective judgments of probability occur in risk assessment and risk management
  • Identifying heuristics and biases where they occur
  • Improving the understanding of probability among the team and individual experts
  • Calibrating experts individually
  • Applying knowledge from formal expert elicitation
  • Using expert group facilitation when group probability judgments are sought
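One of these controls, calibrating experts individually, can be illustrated with a small sketch: scoring each expert's probability judgments against realized outcomes with the Brier score (lower is better). The forecasts and outcomes here are invented for illustration.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records: Expert A hedges everything at 50%,
# Expert B commits to sharper, better-calibrated judgments.
outcomes = [1, 0, 1, 1, 0]
expert_a = [0.5, 0.5, 0.5, 0.5, 0.5]
expert_b = [0.9, 0.2, 0.8, 0.7, 0.1]

print(brier_score(expert_a, outcomes))  # 0.25
print(brier_score(expert_b, outcomes))  # 0.038
```

Tracking a score like this over successive risk assessments gives each expert concrete feedback on how well their subjective probabilities match reality.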

Each of these is its own future post.

Likelihood of occurrence in risk estimation

People use imprecise words to describe the chance of events all the time — “It’s likely to rain,” or “There’s a real possibility they’ll launch before us,” or “It’s doubtful the nurses will strike.” Not only are such probabilistic terms subjective, but they also can have widely different interpretations. One person’s “pretty likely” is another’s “far from certain.” Our research shows just how broad these gaps in understanding can be and the types of problems that can flow from these differences in interpretation.

“If You Say Something Is ‘Likely,’ How Likely Do People Think It Is?” by Andrew Mauboussin and Michael J. Mauboussin

Risk estimation is based on two components:

  • The probability of the occurrence of harm
  • The consequences of that harm

A third element, the detectability of the harm, is used in many tools.

Oftentimes we simplify the probability of occurrence into likelihood. The article quoted above is a good, simple primer on why we should be careful with that. It offers three recommendations that I want to talk about. Go read the article and then come back.

I. Use probabilities instead of words to avoid misinterpretation

Avoid simplified qualitative probability levels such as “likely to happen,” “frequent,” “can happen, but not frequently,” “rare,” “remote,” and “unlikely to happen.” Instead, determine numeric probability levels. Even if you are relying heavily on expert opinion to drive probabilities, give ranges of numbers such as “<10% of the time,” “20-60% of the time,” and “greater than 60% of the time.”

It helps to have several sets of scales.
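A numeric likelihood scale like this amounts to a simple lookup. The cut points below follow the example ranges in the text (<10%, 20-60%, >60%), adjusted here to be contiguous; a real scale would come from your own procedure.

```python
def likelihood_band(p: float) -> str:
    """Map a probability estimate to an explicit numeric range label."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if p < 0.10:
        return "<10% of the time"
    if p <= 0.60:
        return "10-60% of the time"
    return "greater than 60% of the time"

print(likelihood_band(0.05))  # <10% of the time
print(likelihood_band(0.45))  # 10-60% of the time
```

Because the bands are explicit numbers rather than words, two assessors handed the same estimate will land in the same band, which is precisely the misinterpretation problem the article describes.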

The article has an excellent chart that really shows why we should avoid words.


II. Use structured approaches to set probabilities

Ideally, pressure-test these using a Delphi approach or something similar, like paired comparisons or absolute probability judgments. Using historical data and expert opinion, spend the time to make sure your probabilities actually capture reality.

Be aware that when using historical data, if an event has occurred with very low frequency historically, any estimate of its probability will be uncertain. In these cases it's important to use predictive techniques and simulations. Monte Carlo, anyone?
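A toy Monte Carlo sketch of the idea: estimating the probability that a multi-step process fails at least once when the historical record is too sparse to count frequencies directly. The per-step failure rates here are invented for illustration.

```python
import random

random.seed(42)  # fixed seed for a reproducible illustration

STEP_FAILURE_PROBS = [0.01, 0.005, 0.02]  # hypothetical per-step rates
TRIALS = 100_000

failures = sum(
    1 for _ in range(TRIALS)
    if any(random.random() < p for p in STEP_FAILURE_PROBS)
)

estimate = failures / TRIALS
# Analytic check: 1 - (0.99 * 0.995 * 0.98) is about 0.0347,
# so the simulated estimate should land close to that.
print(f"Estimated probability of at least one failure: {estimate:.4f}")
```

For genuinely rare events you would want far more trials (or variance-reduction techniques), but the structure, propagate uncertain inputs through the process model and count outcomes, is the same.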

III. Seek feedback to improve your forecasting

Risk management is a lifecycle approach, and you need to apply good knowledge management to that lifecycle. Have a mechanism to learn from the risk assessments you conduct, and feed that learning back into your scales. These scales should never be a once-and-done effort.

In Conclusion

Risk management is not new. It's been around long enough that many companies have the elements in place. What we need to be doing is driving toward consistency: drive out the vague and build best practices that will give the best results. When it comes to likelihood, there is a wide body of research on the subject, and we should draw from it as we work to improve our risk management.

Move beyond setting your scales at the beginning of each risk assessment. Scales should exist as a living library that is drawn upon for specific risk evaluations. This will help ensure that all participants in the risk assessment have a working vocabulary of the criteria, and it will keep us honest, preventing intentional or unintentional manipulation of the criteria based on an expected outcome.
