Uncertainty and Subjectivity in Risk Management

The July 2019 monthly gift to ASQ members is a collection of material on Failure Mode and Effects Analysis (FMEA). Reading through the material got me thinking about subjectivity in risk management.

Risk assessments have a core of the subjective to them, frequently including assumptions about the nature of the hazard, possible exposure pathways, and judgments about the likelihood that alternative risk scenarios might occur. Gaps in the data and information about hazards, uncertainty about the most likely projection of risk, and incomplete understanding of possible scenarios all contribute to uncertainty in risk assessment and risk management. You can go even further and say that risk is socially constructed, and that risk is at once both objectively verifiable and what we perceive or feel it to be. Then again, the same can be said of most of science.

Risk is a future chance of loss given exposure to a hazard. Risk estimates, or qualitative ratings of risk, are necessarily projections of future consequences, so the true probability of the risk event and its consequences cannot be known in advance. This creates a need for subjective judgments to fill in information about an uncertain future. In this way risk management is rightly seen as a form of decision analysis: making decisions under uncertainty.
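
To make the idea of a qualitative risk rating concrete, here is a minimal sketch assuming the common FMEA convention of scoring severity, occurrence, and detection on 1-to-10 scales and multiplying them into a risk priority number; the failure modes and scores below are invented for illustration:

```python
# Hypothetical FMEA line items: (failure mode, severity, occurrence, detection),
# each scored 1-10 by the assessment team. All values are illustrative only.
failure_modes = [
    ("Seal leak",      8, 3, 4),
    ("Sensor drift",   5, 6, 7),
    ("Label misprint", 3, 2, 2),
]

# Risk priority number (RPN) = severity x occurrence x detection.
# The ranking, not the absolute number, drives prioritization, and every
# score is ultimately a subjective judgment about an uncertain future.
for mode, severity, occurrence, detection in failure_modes:
    rpn = severity * occurrence * detection
    print(f"{mode:15s} RPN = {rpn}")
```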

Everyone has a mental picture of risk, but the formal mathematics of risk analysis is inaccessible to most, relying on probability theory with two major schools of thought: the frequency school and the subjective probability school. The frequency school says probability is the number of successes divided by the total number of trials. Uncertainty that is readily characterized using frequentist probability methods is “aleatory”: due to randomness (or random sampling in practice). Frequentist methods give an estimate of “measured” uncertainty; however, they are arguably trapped in the past because they do not lend themselves easily to predicting future successes.

In risk management we tend to measure uncertainty with a combination of frequentist and subjectivist probability distributions. For example, a manufacturing process risk assessment might begin with classical statistical control data and analyses. But projecting the risks from a process change might call for expert judgments of, for example, possible failure modes and the probability that failures might occur during a defined period. The risk assessors bring prior expert knowledge and, if we are lucky, some prior data, and start to focus the target of the risk decision using subjective judgments of probabilities.
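
As a concrete sketch of that combination (all numbers are invented for illustration), an expert's prior belief about a failure rate can be encoded as a Beta distribution and updated with observed batch counts; the frequentist estimate is just the raw ratio, while the Bayesian posterior blends the subjective prior with the data:

```python
from scipy import stats

# Expert judgment encoded as a Beta prior: roughly "about 2 failures per 100 batches".
prior_alpha, prior_beta = 2, 98

# Frequentist data from process records (hypothetical): 3 failures in 150 batches.
failures, batches = 3, 150

# Frequentist point estimate: count of "successes" (here, failures) divided by trials.
freq_estimate = failures / batches

# Bayesian update: posterior is Beta(alpha + failures, beta + non-failures).
posterior = stats.beta(prior_alpha + failures, prior_beta + (batches - failures))

print(f"Frequentist estimate of failure probability: {freq_estimate:.3f}")
print(f"Posterior mean: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```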

Some have argued that a failure to formally control subjectivity, particularly in probability judgments, is the failure of risk management. This was an argument some made during WCQI, for example. Subjectivity cannot be eliminated, nor is it an inherent limitation. Rather, the “problem with subjectivity” more precisely concerns two elements:

  1. A failure to recognize where and when subjectivity enters and might create problems in risk assessment and risk-based decision making; and
  2. A failure to implement controls on subjectivity where it is known to occur.

Because risk is about the chance of adverse outcomes of events that are yet to occur, subjective judgments of one form or another will always be required in both risk assessment and risk management decision-making.

We control subjectivity in risk management by:

  • Raising awareness of where/when subjective judgments of probability occur in risk assessment and risk management
  • Identifying heuristics and biases where they occur
  • Improving the understanding of probability among the team and individual experts
  • Calibrating experts individually (see the sketch after this list)
  • Applying knowledge from formal expert elicitation
  • Using expert group facilitation when group probability judgments are sought
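
One simple way to put numbers on individual calibration, offered here purely as an illustrative sketch, is the Brier score: the mean squared difference between the probabilities an expert stated and what actually happened. The forecasts and outcomes below are invented:

```python
# Minimal Brier score sketch for expert calibration (hypothetical data).

def brier_score(forecasts, outcomes):
    """Mean squared difference between stated probabilities and actual outcomes.
    Lower is better; a well-calibrated forecaster scores closer to 0."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Probabilities an expert assigned to five failure scenarios occurring,
# and whether each scenario actually occurred (1) or not (0).
stated = [0.9, 0.2, 0.7, 0.1, 0.5]
actual = [1, 0, 0, 0, 1]

print(f"Brier score: {brier_score(stated, actual):.3f}")
```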

Each of these is its own post, which I will try to address over the month of July.

Risk Management is about reducing uncertainty

Risk management is all about eliminating surprise. To truly understand our risks, we need to understand uncertainty; we need to understand the unknowns. Borrowing from Andreas Schamanek’s Taxonomies of the unknown, let’s explore a few of the various taxonomies of what is not known.

Ignorance Map

I’m pretty sure Ann Kerwin first gave us the “known unknowns” and the “unknown knowns” that people still find amusing in connection with former defense secretary Rumsfeld.

  • Known knowns: what we know we know
  • Known unknowns (conscious ignorance): what we know we do not know
  • Unknown knowns (tacit knowledge): what we do not know we know
  • Unknown unknowns (meta-ignorance): what we do not know we do not know

Understanding uncertainty involves knowledge management, which is why a rigorous knowledge management program is a prerequisite for an effective quality management system.

Risk management is then a way of teasing out the unknowns and allowing us to take action:

  1. Risk assessments most easily focus on the ignorance that we are aware of, the ‘known unknowns’.
  2. Risk assessments can also serve as a tool for teasing out the ‘unknown knowns’. This is why participation of subject matter experts is so critical. Through the formal methodology of the risk assessment we expose and explore tacit knowledge.
  3. The third kind of ignorance is what we do not know we do not know, the ‘unknown unknowns’. We generally become aware of unknown unknowns in two ways: hindsight (deviations) and purposefully expanding our horizons. This expansion includes diversity and also good experimentation. It is the hardest, but perhaps most valuable, part of risk management.

Taxonomy of Ignorance

Different Kinds of Unknowns, Source: Smithson (1989, p. 9); also in Bammer et al. (2008, p. 294).

Smithson distinguishes between passive and active ignorance. Passive ignorance involves areas that we are ignorant of, whereas active ignorance refers to areas we ignore. He uses the term ‘error’ for the unknowns encompassed by passive ignorance and ‘irrelevance’ for active ignorance.

Taboo is fascinating because it gets to the heart of our cultural blindness, those parts of our organization that are closed to scrutiny.

Smithson can help us understand why risk assessments are both a qualitative and a quantitative endeavor. While dealing with the unknown is the bread and butter of statistics, only a small part of the terrain of uncertainty is covered. Under Smithson’s typology, statistics primarily operates in the area of incompleteness, across probability and some kinds of vagueness. In terms of its consideration of sampling bias, statistics also has some overlap with inaccuracy. But, as the typology shows, there is much more to unknowns than the areas statistics deals with. This is another reason that subject matter experts, and different ways of thinking, are a must.

Ensuring wide and appropriate expert participation gives additional perspectives on unknowns. There are also synergies in finding unrecognized similarities between disciplines and stakeholders in the unknowns they deal with, and there may be great benefit in combining forces. It is important to use these concerns to enrich thinking about unknowns, rather than ruling them out as irrelevant.

Sources of Surprise

Risk management is all about managing surprise. It helps to break surprise down into three types: risk, uncertainty, and ignorance.

  • Risk: The condition in which the events, processes, or outcomes and the probability that each will occur are known.
    • Issue: In reality, complete knowledge of the probabilities and the range of potential outcomes or consequences is not usually available and is sometimes unattainable.
  • Uncertainty: The condition in which the event, process, or outcome is known (factually or hypothetically) but the probabilities that it will occur are not known.
    • Issue: The probabilities assigned, if any, are subjective, and ways to establish reliability for different subjective probability estimates are debatable.
  • Ignorance: The condition in which the event, process, or outcome is not known or expected.
    • Issue: How can we anticipate the unknown, improve the chances of anticipating, and, therefore, improve the chances of reducing vulnerability?

Effective use of the methodology ideally moves from ignorance, through uncertainty, to risk.


The types of ignorance, their descriptions, and methods of mitigation:

Closed Ignorance
  • Description: Information is available, but SMEs are unwilling or unable to consider that some outcomes are unknown to them.
  • Mitigation: A self-audit process, regular third-party audits, and an open and transparent system with global participation.

Open Ignorance
Information is available and SMEs are willing to recognize and consider that some outcomes are unknown. It takes several forms:

Personal
  • Description: Surprise occurs because an individual SME lacks knowledge or awareness of the available information.
  • Mitigation: Effective teams explore multiple perspectives by including a diverse set of individuals and data sources for data gathering and analysis; transparency in process.

Communal
  • Description: Surprise occurs because a group of SMEs has only similar viewpoints represented, or may be less willing to consider views from outside the community.
  • Mitigation: Diversity of viewpoints and use of tools to overcome group-think and “tribal” knowledge.

Novelty
  • Description: Surprise occurs because the SMEs are unable to anticipate and prepare for external shocks or internal changes in preferences, technologies, and institutions.
  • Mitigation: Simulating impacts and gaming alternative outcomes under different conditions (Blue Team/Red Team exercises).

Complexity
  • Description: Surprise occurs when inadequate forecasting tools are used to analyze the available data, so that inter-relationships, hidden dependencies, and feedback loops are missed, leading to an inadequate or incomplete understanding of the data.
  • Mitigation: Systems thinking; tracking changes and interrelationships across systems to discover potential macro-level forces; the 12 levers.


Risk management is all about understanding surprise and working to reduce uncertainty and ignorance so that we can reduce, eliminate, and sometimes accept risk. As a methodology it is effective at avoiding surrender and denial. With innovation we can even contemplate exploitation. As organizations mature, it is important to understand these concepts and utilize them.

References

  • Gigerenzer, Gerd and Garcia-Retamero, Rocio. Cassandra’s Regret: The Psychology of Not Wanting to Know (March 2017), Psychological Review, 2017, Vol. 124, No. 2, 179–196.
  • House, Robert J., Paul J. Hanges, Mansour Javidan, Peter Dorfman, and Vipin Gupta, eds. 2004. Culture, Leadership, and Organizations: The GLOBE Study of 62 Societies. Thousand Oaks, Calif.: Sage Publications.
  • Kerwin, A. (1993). None Too Solid: Medical Ignorance. Knowledge, 15(2), 166–185.
  • Smithson, M. (1989) Ignorance and Uncertainty: Emerging Paradigms, New York: Springer-Verlag.
  • Smithson, M. (1993) “Ignorance and Science”, Knowledge: Creation, Diffusion, Utilization, 15(2) December: 133-156.

Making Learning a Part of Everyday Work

Cultivating expertise (in short, learning) is critical to building a quality culture. Yet the urgency of work easily trumps learning, and it can be difficult to carve out time for it in the inexorable flow of daily tasks. We all know how learning ends up in the lowest box of the 2×2 Eisenhower matrix, or however you prefer to prioritize your tasks.

For learning to really happen, it must fit around and align itself to our working days. We need to build our systems so that learning is an inevitable result of doing work. There are also things we as individuals can practice to make learning happen.

What we as individuals can do

Practice mindfulness. As you go about your daily job, be present and aware, using it as an opportunity to learn and develop. Don’t just sit in on that audit; notice and learn the auditor’s tactics and techniques as you engage with her. Ask product managers about product features; ask experts about industry trends; ask peers for feedback on your presentation skills. These kinds of inquiries are learning experiences, and most peers love to tell you what they know.

Keep a to-learn list. Keep a list of concepts, thoughts, practices, and vocabulary you want to explore, and work a few off the list when you have a few moments to reflect, maybe during your commute or at other times when you have space.

Build learning into your calendar. Many of us schedule email time, time for project updates, time to do administrative work. Make sure you dedicate time for learning.

Share meaningfully. Share with others, but don’t just spread links. Discuss why you are sharing it, what you learned, and why you think it is important. This blog is a good example of that.

What we can build into our systems

Make sure our learning and knowledge management systems are built into everything we do. Make them easy to use. Ensure content is shared internally and leads to continuous improvement.

Ensure learning is valued.

Plan for short-term wins. There is no nirvana, no perfect state. Ensure you have lots of little victories and shareable moments. Plan for this as part of your schedules and cycles.

Learning is a very effective lever for system improvement. At the very least it gives us the power to “add, change, evolve or self-organize system structure” (lever 4) and can also start giving us ways to change the paradigm (lever 2) and eventually even transcend paradigms (lever 1).

Lessons Learned and Change Management

One of the hallmarks of a quality culture is learning from our past experiences, to eliminate repeat mistakes and to reproduce success. The more times you do an activity, the more you learn, and the better you get (within limits, for simple activities). Knowledge management is an enabler of quality systems in part because it focuses on learning, accelerating learning across the organization as a whole and not just within one person or a team.

This is where the “lessons learned” process comes in. There are a lot of definitions of lessons learned out there, but the definition I keep returning to is that a lesson learned is a change in personal or organizational behavior that results from learning from experience. Ideally, this is a permanent, institutionalized change, and this is often where our quality systems can really drive continuous improvement.

The lessons learned flow moves from activity, to lessons identified, to updated processes.

Part of Knowledge Management

The lessons learned process is an application of knowledge management.

Identifying lessons means generating, assessing, and sharing knowledge.

Updating processes (and documents) means contextualizing, applying, and updating that knowledge.
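
Purely as an illustrative sketch (the stage names and record fields below are assumptions, not a prescribed model), the flow can be pictured as a lesson record moving through those knowledge management stages:

```python
from dataclasses import dataclass, field

# Stages a lesson moves through, mirroring the flow described above.
STAGES = ["identified", "assessed", "shared", "contextualized", "applied", "process_updated"]

@dataclass
class Lesson:
    """A single lessons learned record (illustrative fields only)."""
    summary: str
    source_activity: str          # e.g. a change control, deviation, or project
    recommended_action: str
    stage: str = "identified"
    history: list = field(default_factory=list)

    def advance(self) -> None:
        """Move the lesson to the next stage and keep a simple audit trail."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.history.append(self.stage)
            self.stage = STAGES[i + 1]

lesson = Lesson(
    summary="Pre-change walkthroughs caught configuration gaps early",
    source_activity="Change control CC-1234 (hypothetical)",
    recommended_action="Add a walkthrough step to the change checklist",
)
lesson.advance()  # identified -> assessed
print(lesson.stage, lesson.history)
```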

Lessons Learned in the Context of Knowledge Management

Identify Lessons Learned

Identifying lessons needs to be done regularly, the closer to actual change management and control activities the better. The formality of this exercise depends on the scale of the change. There are basically a few major forms:

  • After-action reviews: Held daily (or on another regular cycle) for high-intensity learning. Tend to be very focused on the questions of the day.
  • Retrospectives: Held at specific points (for example, project gates or change control status changes). Tend to have a specific focus on a single project.
  • Consistency discussions: Held periodically among a community of practice, such as quality reviewers or multiple site process owners. This form looks holistically at all changes over a period of time (weekly, monthly, quarterly). Very effective when linked to a set of leading and lagging indicators.
  • Incidents and events: Deviations happen. Make sure you learn the lessons and implement solutions.

The chosen formality should be based on the level of change. A healthy organization will be utilizing all of these.

Level of change and the corresponding form of lessons learned:

  • Transactional: Consistency discussions; after-action reviews when things go wrong.
  • Organizational: Retrospectives; after-action reviews weekly, or daily as needed.
  • Transformational: Retrospectives; after-action reviews daily.

Successful lessons learned:

  • Are based on solid performance data: facts and the analysis of facts.
  • Look at positive and negative experiences.
  • Refer back to the change management process, the objectives of the change, and other success criteria.
  • Separate experience from opinion as much as possible. A lesson arises from actual experience and is an objective reflection on the results.
  • Generate distinct lessons from which others can learn and take action. A good action avoids generalities.

In practice there are a lot of similarities between the techniques used to facilitate a good lessons learned session and a root cause analysis. Start with a good core of questions, beginning with the what:

  • What were some of the key issues?
  • What were the success factors?
  • What worked well?
  • What did not work well?
  • What were the challenges and pitfalls?
  • What would you approach differently if you ever did this again?

From these what questions, we can continue to narrow in on the learnings by asking why and how questions. Ask open questions, and utilize all the techniques of root cause analysis here.

Then, once you are at (or close to) a defined issue for the learning (a root cause), ask a future-tense question to make it actionable, such as:

  • What would your advice be for someone doing this in the future?
  • What would you do next time?

Press for specifics. If it is not actionable, it is not really a learning.

Update the Process

Learning implies memory, and an organization’s memories usually require procedures, job aids and other tools to be updated and created. In short, lessons should evolve your process. This is often the responsibility of the change management process owner. You need to make sure the lesson actually takes hold.

Differences between effectiveness reviews and lessons learned

There are three things to answer for every change:

  1. Was the change effective? Did it meet the intended purposes?
  2. Did the change have any unexpected effects?
  3. What can we learn from this change for the next change?

Effectiveness reviews address questions 1 and 2 (following a risk-based approach), while lessons learned address question 3. Lessons learned contribute to the health of the system and drive continuous improvement in how we make changes.

Citations

  • Lesson learned management model for solving incidents. (2017). 2017 12th Iberian Conference on Information Systems and Technologies (CISTI), 1.
  • Fowlin, J., & Cennamo, K. (2017). Approaching Knowledge Management Through the Lens of the Knowledge Life Cycle: A Case Study Investigation. TechTrends: Linking Research & Practice to Improve Learning, 61(1), 55–64.
  • Michell, V., & McKenzie, J. (2017). Lessons learned: Structuring knowledge codification and abstraction to provide meaningful information for learning. VINE: The Journal of Information & Knowledge Management Systems, 47(3), 411–428.
  • Milton, N. J. (2010). The Lessons Learned Handbook: Practical Approaches to Learning From Experience. Burlington: Chandos Publishing.
  • Carlile, P. R. (2004). Transferring, Translating, and Transforming: An Integrative Framework for Managing Knowledge across Boundaries. Organization Science, 15(5), 555–568.
  • Secchi, P. (Ed.) (1999). Proceedings of Alerts and Lessons Learned: An Effective Way to Prevent Failures and Problems (Technical Report WPP-167). Noordwijk, The Netherlands: ESTEC.

Coherence and Quality

Sonja Blignaut, on More Beyond, wrote a good post on coherence, “All that jazz … making coherence coherent”, in which she states at the end: “In order to remain competitive and thrive in the new world of work, we need to focus our organisation design, leadership and strategic efforts on the complex contexts and create the conditions for coherence.”

Ms. Blignaut defines coherence mainly through analogy and metaphor, so I strongly recommend reading the original post.

In my post “Forget the technology, Quality 4.0 is all about thinking” I spelled out some principles of system design.

  • Balance: The system creates value for multiple stakeholders. While the ideal is to develop a design that maximizes value for all the key stakeholders, the designer often has to compromise and balance the needs of the various stakeholders.
  • Congruence: The degree to which the system components are aligned and consistent with each other and with the other organizational systems, culture, plans, processes, information, resource decisions, and actions.
  • Convenience: The system is designed to be as convenient as possible for the participants to implement (a.k.a. user friendly). The system includes specific processes, procedures, and controls only when necessary.
  • Coordination: System components are interconnected and harmonized with the other (internal and external) components, systems, plans, processes, information, and resource decisions toward common action or effort. This goes beyond congruence and is achieved when the individual components of a system operate as a fully interconnected unit.
  • Elegance: Complexity versus benefit: the system includes only as much complexity as is necessary to meet the stakeholders’ needs. In other words, keep the design as simple as possible, and no more complex than necessary, while delivering the desired benefits. This often requires looking at the system in new ways.
  • Human: Participants in the system are able to find joy, purpose, and meaning in their work.
  • Learning: Knowledge management, with opportunities for reflection and learning (learning loops), is designed into the system. Reflection and learning are built into the system at key points to encourage single- and double-loop learning from experience, to improve future implementation, and to systematically evaluate the design of the system itself.
  • Sustainability: The system effectively meets the near- and long-term needs of the current stakeholders without compromising the ability of future generations of stakeholders to meet their own needs.

I used the term congruence to summarize the point Ms. Blignaut is reaching with alignment and coherence. I love her mapping these against the Cynefin framework; it makes a great deal of sense to see alignment fitting the obvious domain and the need for coherence arising from the complex domain.

So what might driving for coherence look like? Well, if we start with coherence as long-range order (the jazz analogy), we are building systems that create order through their function: they learn and are sustainable.

To apply this within the framework of ICH Q10 or the US FDA’s “Guidance for Industry: Quality Systems Approach to Pharmaceutical CGMP Regulations,” one way to drive for coherence is to use similar building blocks across our systems: risk management, data integrity, and knowledge management are all examples of that.