ASQ Audit Conference – Day 2 Afternoon

“Risk: What is it? Prove it, show me” by Larry Litke

At this point I may be a glutton for sessions about risk. While I am fascinated by how people are poking at this beast, and sometimes dismayed by how far behind our thinking is on the subject, it may simply be that at an audit conference a lot of the other topics are not really aligned to my interests.

Started by covering a high-level definition of risk and then moved into ISO 9001:2015’s risk-based thinking, mostly by reading from the standard.

It is good that succession planning is specifically discussed as part of risk-based thinking.

“Above all it is communication” is good advice for every change.

It is an important point that the evidence of risk-based thinking is the actual results and not a separate thing.

This presentation’s strength was when it focused on business continuity as a form of risk-based thinking.

“Auditing the Quality System for Data Integrity” by Jeremiah Genest

My second presentation of the conference is here.

Overall Impressions

This year’s Audit Division conference was pretty small. I was in sessions with 10 people, and we didn’t fill a medium-sized ballroom. I’m told this was smaller than in past years, and I sincerely hope this will be a bigger conference next year, when it is back in Orlando. My daughter will be thrilled, and I may be back just to meet that set of user requirements.

I think this conference could benefit from the rigor the LSS Conference and WCQI apply to presentation development. I was certainly guilty here, but way too many presentations were wall-to-wall text.

Risk Based Data Integrity Assessment

A quick overview. The risk-based approach uses three factors: data criticality, existing controls, and level of detection.

When assessing current controls, technical controls (properly implemented) are stronger than operational or organizational controls as they can eliminate the potential for data falsification or human error rather than simply reducing/detecting it. 

For criticality, it helps to build a table based on what the data is used for.

For controls, build a similar table: rank each column and then multiply the numbers together to get a final control ranking. For example, if a process has Esign (1), no access control (3), and paper archival (2), then the control ranking would be 6 (1 x 3 x 2).

Determine detectability the same way: rank each column and then multiply the numbers together to get a final detectability ranking.


Multiply the rankings above to determine a risk rating and move ahead with mitigations. Mitigations should drive risk as low as possible, though the following table can be used to help determine priority.

Risk Rating | Action | Mitigation
>25 | High Risk: Potential Impact to Patient Safety or Product Quality | Mandatory
12-25 | Moderate Risk: No Impact to Patient Safety or Product Quality but Potential Regulatory Risk | Recommended
<12 | Negligible DI Risk | Not Required
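To make the arithmetic concrete, here is a minimal Python sketch of the scoring scheme described above. The Esign/access-control/archival worked example and the rating thresholds come from the text; the criticality and detectability inputs at the bottom, and the function names, are my own hypothetical additions, since the original scoring tables were presented as slides.

```python
def control_ranking(esign: int, access_control: int, archival: int) -> int:
    """Multiply the individual control scores (lower = stronger control)."""
    return esign * access_control * archival

def risk_rating(criticality: int, controls: int, detectability: int) -> int:
    """Multiply the three factors to get an overall risk rating."""
    return criticality * controls * detectability

def mitigation_action(rating: int) -> str:
    """Map a risk rating to an action per the table above."""
    if rating > 25:
        return "High risk: mitigation mandatory"
    if rating >= 12:
        return "Moderate risk: mitigation recommended"
    return "Negligible DI risk: mitigation not required"

# Worked example from the text: Esign (1) x no access control (3)
# x paper archival (2) = control ranking of 6.
controls = control_ranking(esign=1, access_control=3, archival=2)
# Hypothetical criticality and detectability values for illustration.
rating = risk_rating(criticality=3, controls=controls, detectability=1)
print(controls, rating, mitigation_action(rating))
# 6 18 Moderate risk: mitigation recommended
```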

In the case of long-term risk remediation actions, risk reducing short-term actions shall be implemented to reduce risk and provide an acceptable level of governance until the long-term remediation actions are completed.

Relevant site procedures (e.g., change control, validation policy) should outline the scope of additional testing through the change management process.

Reassessment of the system may be completed following the completion of remediation activities. The reassessment may be done at any time during the remediation process to document the impact of the remediation actions.

Once final remediation is complete, a reassessment of the equipment/system should be completed to demonstrate that the risk rating has been mitigated by the remediation actions taken. Think living risk assessment.

ASQ Audit Conference – Day 1 Morning

Day 1 of the 2019 Audit Conference.

Grace Duffy is the keynote speaker. I’ve known Grace for years and consider her a mentor and I’m always happy to hear her speak. Grace has been building on a theme around her Modular Kaizen approach and the use of the OODA Loop, and this presentation built nicely on what she presented at the Lean Six Sigma Conference in Phoenix, at WCQI and in other places.

Audits as a form of sustainability is an important point to stress, and hopefully this will be a central theme throughout the conference.

The intended purpose is to build a systems view in preparation for an effective audit, and to use the OODA loop to approach evolutionary and revolutionary change.

John Boyd’s OODA loop

Grace starts with a brief overview of system and process, then goes from vision to strategy to daily work, and how that forms a Möbius strip of macro, meso, micro, and individual. She talks a little about the difference between Deming’s and Juran’s approaches and does a little what-if thinking about how Lean would have developed if Juran had gone to Japan instead of Deming.

Breaking down OODA (Observe, Orient, Decide, Act) as “Where am I and where is the organization?” which then feeds into decision making. Stresses how Orient is about culture and understanding it. Her link to Lean is a little tenuous in my mind.

She then discusses Tom Pearson’s knowledge management model: Local Action; Management Action; Exploratory Analysis; Knowledge Building; Complex Systems; Knowledge Management; Scientific Creativity. Unites all this with systems thinking and psychology. “We’re going to share shamelessly because that’s how we learn.” “If we can’t have fun with this stuff it’s no good.”

Uniting the two, she describes the knowledge management model as part of Orient.

Puts revolutionary and evolutionary change in light of Juran’s breakthrough versus continuous improvement. From here she covers Modular Kaizen, starting with incremental change versus process redesign. From there she breaks it down into a DMAIC model and goes into how much she loves the Measure phase. She discusses how the human brain is better at connections, which is a good reinforcement of the OODA model.

Breaks down a culture model of Culture/Beliefs, Visions/Goals, and Activities/Plans-and-Actions, influenced by external events, and how evolutionary improvements stem from compatibility with those. OODA is the tool to help determine that compatibility.

Discusses briefly how standardization fits into systems and pushes for looking at it through the lens of stability.

Goes back to the culture model but now adds idea generation and a quality test, with decisions coming off of them that lead to revolutionary improvements. Links back to OODA.

Then quickly covers DMAIC versus DMADV and how that is another way of thinking about these concepts.

Covers Gino Wickman’s concept of visionary and integrator from Traction.

Ties OODA back to effective auditing: focus on patterns and not just numbers, grasp the bigger picture, be adaptive.

This is a big, sprawling topic for a keynote, and at times it felt like a firehose. Keynotes often benefit from a lot more laser focus; OODA alone would have been enough. My head is reeling, and I feel comfortable with this material. Grace is an amazing, passionate educator and she finds this material exciting. I hope most of the audience picked that up in this big-gulp approach. This systems approach, building on culture and strategy, is critical.

OODA as an audit tool is relevant, and it is a tool I think we should be teaching better. It might be a good tool to present for TWEF, as it ties into the team/workplace excellence approach. OODA and situational awareness are really united in my mind, and that deserves a separate post.

Concurrent Sessions

After the keynote come the breakout sessions. As always, I end up having too many options and must make some decisions. I can never complain about having too many options during a conference.

First Impressions: The Myth of the Objective & Impartial Audit

First session is “First Impressions: The Myth of the Objective & Impartial Audit” by William Taraszewski. I met Bill back at the 2018 World Conference on Quality and Improvement.

Bill starts by discussing subjectivity and first impressions, and how they shape audits from the very start.

Covers the science of first impressions, pointing to research on bias, how negative behavior weighs more than positive, and how this can be contextual. Draws from Amy Cuddy’s work and lays a good foundation of trust and competence and their importance in work and life in general.

Brings this back to ISO 19011:2018, “Guidelines for auditing management systems,” and clause 7.2 on determining auditor competence, which places personal behavior ahead of knowledge and skills.

Brings up video auditing: the impressions generated from video versus in-person are pretty similar, but the magnitude of the bad impressions is greater and the magnitude of the positive is lower. That was an interesting point, and I will need to follow up on that research.

Moves to discussing impartiality in context of ISO 19011:2018, pointing out the halo and horn effects.

Discusses prejudice versus experience as an auditor, and covers confirmation bias and how selective exposure and selective perception fit into our psychology, with the need to be careful since the negative outweighs the positive.

Moves into objective evidence and how it fits into an audit.

Provides top tips for good auditor first impressions, covering body language, eye contact, and, most important, how to check your attitude.

This was a good session on the fundamentals that reinforces some basics and goes back to the research. Quality as a profession really needs to understand how objectivity and impartiality are virtually impossible and how we can overcome bias.

Auditing Risk Management

Barry Craner presented on “Are you ready for an audit of your risk management system?”

Starts with how risk management is here to stay and how it is in most industries. The presenter is focused on medical devices but the concepts are very general.

“As far as possible” as a concept is discussed, along with residual risk. Covers this at a high level.

Covers at a high level the standard risk management process (risk identification, risk analysis, risk control, risk monitoring, risk reporting), asking the questions “Is the RM system acceptable? Can you describe and defend it?”

Provides an example of a risk management file sequence that matches the concept of living risk assessments: a flow that goes from Preliminary Hazard Analysis (PHA) to Fault Tree Analysis (FTA) to FMEA. With the focus on medical devices, talks about design and process for both the FTA and the FMEA. This all flows from the question “Can you describe and defend your risk management program?”

In laying out the risk management program, he focused in on personnel qualification as pivotal, and discusses answering the question “Are these ready for audit?” When discussing the plan, asks: “Is your risk management plan documented and reasonable? Ready to audit? An SOP followed by your company?”

When discussing risk impact, breaks it down to “Is the risk acceptable or not?” Goes on to discuss how important it is to defend the scoring rubric, asking “Well defined, can we defend it?”

Goes back and discusses some basic concepts of hazard and harm. Asks the questions “Did you do this hazard assessment with enough thoroughness? Were the right hazards identified?” Recommends building an example hazards table, which is good advice. From there, answer the questions “Do your hazard analyses yield reasonable, useful information? Do you use it?”

Provides a nice example of how to build a mitigation plan out of a fault tree analysis.
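The fault-tree example itself did not survive into these notes, so as a stand-in, here is a minimal sketch of the standard FTA arithmetic: basic-event probabilities combine through AND gates (multiplication, assuming independence) and OR gates. The events and probabilities below are hypothetical, not from the presentation.

```python
from functools import reduce

def and_gate(*probs: float) -> float:
    """Probability that all input events occur, assuming independence."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(*probs: float) -> float:
    """Probability that at least one input event occurs (independent events)."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical tree: the top event requires the backup system to fail
# AND (the sensor to fail OR the operator to miss the alarm).
p_top = and_gate(or_gate(1e-3, 5e-2), 1e-2)
print(f"Top-event probability: {p_top:.2e}")  # ~5.10e-04
```

The mitigation plan then targets whichever basic events dominate the top-event probability.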

The discussion on FMEAs faltered on detection; it probably could have gone into controls a lot deeper here.

With both the FTA and the FMEA, discussed how the results need to be defendable.

Risk management review, with the right metrics, is discussed at a high level. This could easily be a session on its own.

Asks the question “Were there actionable tasks? Progress on these tasks?”

It is time to stop having such general overviews at conferences, especially at a conference that is not targeted to junior personnel.

Overcoming Subjectivity in Risk Management and Decision Making Requires a Culture of Quality and Excellence

Risk assessments, problem solving, and making good decisions need teams, but any team has groupthink challenges it must overcome. Ensuring your facilitators, team leaders, and sponsors are aware of and trained on these biases will help them deal with subjectivity, understand uncertainty, and drive to better outcomes. But no matter how much work you do there, it won’t make enough of a difference until you’ve built a culture of quality and excellence.

The mindsets we are trying to build into our culture will strive to overcome a few biases in our teams that lead to subjectivity.

Bias Toward Fitting In

We have a natural desire to want to fit in. This tendency leads to two challenges:

Challenge #1: Believing we need to conform. Early in life, we realize that there are tangible benefits to be gained from following social and organizational norms and rules. As a result, we make a significant effort to learn and adhere to written and unwritten codes of behavior at work. But here’s the catch: Doing so limits what we bring to the organization.

Challenge #2: Failure to use one’s strengths. When employees conform to what they think the organization wants, they are less likely to be themselves and to draw on their strengths. When people feel free to stand apart from the crowd, they can exercise their signature strengths (such as curiosity, love for learning, and perseverance), identify opportunities for improvement, and suggest ways to exploit them. But all too often, individuals are afraid of rocking the boat.

We need to use several methods to combat the bias toward fitting in. These need to start at the cultural level. Risk management, problem solving and decision making only overcome biases when embedded in a wider, effective culture.

Encourage people to cultivate their strengths. To motivate and support employees, some companies allow them to spend a certain portion of their time doing work of their own choosing. Although this is a great idea, we need to build our organization to help individuals apply their strengths every day as a normal part of their jobs.

Managers need to help individuals identify and develop their fortes, and not just by discussing them in annual performance reviews, which are horribly ineffective. Just using an “appreciation jolt” of positive feedback can start to improve the culture. It’s particularly potent when friends, family, mentors, and coworkers share stories about how the person excels. These stories trigger positive emotions, cause us to realize the impact that we have on others, and make us more likely to continue capitalizing on our signature strengths rather than just trying to fit in.

Managers should ask themselves the following questions: Do I know what my employees’ talents and passions are? Am I talking to them about what they do well and where they can improve? Do our goals and objectives include making maximum use of employees’ strengths?

Increase awareness and engage workers. If people don’t see an issue, you can’t expect them to speak up about it.  

Model good behavior. Employees take their cues from the managers who lead them.

Bias Toward Experts

This is going to sound counter-intuitive, especially since expertise is so critical. Yet our biases about experts can cause a few challenges.

Challenge #1: An overly narrow view of expertise. Organizations tend to define “expert” too narrowly, relying on indicators such as titles, degrees, and years of experience. However, experience is a multidimensional construct. Different types of experience—including time spent on the front line, with a customer or working with particular people—contribute to understanding a problem in detail and creating a solution.

A bias toward experts can also lead people to misunderstand the potential drawbacks that come with increased time and practice in the job. Though experience improves efficiency and effectiveness, it can also make people more resistant to change and more likely to dismiss information that conflicts with their views.

Challenge #2: Inadequate frontline involvement. Frontline employees—the people directly involved in creating, selling, delivering, and servicing offerings and interacting with customers—are frequently in the best position to spot and solve problems. Too often, though, they aren’t empowered to do so.

The following tactics can help organizations overcome weaknesses of the expert bias.

Encourage workers to own problems that affect them. Make sure that your organization is adhering to the principle that the person who experiences a problem should fix it when and where it occurs. This prevents workers from relying too heavily on experts and helps them avoid making the same mistakes again. Tackling the problem immediately, when the relevant information is still fresh, increases the chances that it will be successfully resolved. Build a culture rich with problem-solving and risk management skills and behaviors.

Give workers different kinds of experience. Recognize that both doing the same task repeatedly (“specialized experience”) and switching between different tasks (“varied experience”) have benefits. Over the course of a single day, a specialized approach is usually fastest, but over time, switching activities across days promotes learning and keeps workers more engaged. Both specialization and variety are important to continuous learning.

Empower employees to use their experience. Organizations should aggressively seek to identify and remove barriers that prevent individuals from using their expertise. Solving the customer’s problems in innovative, value-creating ways, not navigating organizational impediments, should be the challenging part of one’s job.

In short, we need to build the capability to leverage all levels of expertise, and not just a few experts in their ivory tower.

These two biases can be overcome, and through that we can start building the mindsets to deal effectively with subjectivity and uncertainty. Going further, build the following into our team activities as a sort of quality control checklist:

  1. Check for self-interest bias
  2. Check for the affect heuristic. Has the team fallen in love with its own output?
  3. Check for groupthink. Were dissenting views explored adequately?
  4. Check for saliency bias. Is this rooted in past successes?
  5. Check for confirmation bias.
  6. Check for availability bias
  7. Check for anchoring bias
  8. Check for halo effect
  9. Check for sunk cost fallacy and endowment effect
  10. Check for overconfidence, planning fallacy, optimistic biases, competitor neglect
  11. Check for disaster neglect. Have the team conduct a premortem: imagine that the worst has happened and develop a story about its causes.
  12. Check for loss aversion

Uncertainty and Subjectivity in Risk Management

The July 2019 monthly gift to members of ASQ is a lot of material on Failure Mode and Effects Analysis (FMEA). Reading through the material got me thinking about subjectivity in risk management.

Risk assessments have a subjective core, frequently including assumptions about the nature of the hazard, possible exposure pathways, and judgments of the likelihood that alternative risk scenarios might occur. Gaps in the data and information about hazards, uncertainty about the most likely projection of risk, and incomplete understanding of possible scenarios all contribute to uncertainty in risk assessment and risk management. You can go even further and say that risk is socially constructed, and that risk is at once both objectively verifiable and what we perceive or feel it to be. Then again, the same can be said of most of science.

Risk is a future chance of loss given exposure to a hazard. Risk estimates, or qualitative ratings of risk, are necessarily projections of future consequences, so the true probability of the risk event and its consequences cannot be known in advance. This creates a need for subjective judgments to fill in information about an uncertain future. In this way risk management is rightly seen as a form of decision analysis: a way of making decisions against uncertainty.

Everyone has a mental picture of risk, but the formal mathematics of risk analysis is inaccessible to most, relying on probability theory with its two major schools of thought: the frequency school and the subjective probability school. The frequency school says probability is the count of the number of successes divided by the total number of trials. Uncertainty that is readily characterized using frequentist probability methods is “aleatory”: due to randomness (or random sampling in practice). Frequentist methods give an estimate of “measured” uncertainty; however, they are arguably trapped in the past because they do not lend themselves easily to predicting future successes.

In risk management we tend to measure uncertainty with a combination of frequentist and subjectivist probability distributions. For example, a manufacturing process risk assessment might begin with classical statistical control data and analyses, but projecting the risks from a process change might call for expert judgments of, say, possible failure modes and the probability that failures might occur during a defined period. The risk assessors bring prior expert knowledge and, if we are lucky, some prior data, and start to focus the target of the risk decision using subjective judgments of probabilities.
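As one concrete illustration of that blend (my own sketch, not from the conference or the ASQ material), a conjugate Beta-Binomial update is a standard way to combine an expert’s subjective prior on a failure probability with observed frequentist data. All numbers below are invented.

```python
# Expert's subjective prior: failure rate around 2% -> Beta(2, 98).
prior_alpha, prior_beta = 2.0, 98.0

# Frequentist data from statistical control records: 3 failures in 500 units.
failures, trials = 3, 500

# Conjugate Beta-Binomial update: add failures and successes to the prior.
post_alpha = prior_alpha + failures
post_beta = prior_beta + (trials - failures)

print(f"Frequentist estimate: {failures / trials:.4f}")                # 0.0060
print(f"Posterior mean: {post_alpha / (post_alpha + post_beta):.4f}")  # 0.0083
```

The posterior mean sits between the expert’s prior (2%) and the raw frequency (0.6%), which is exactly the blending of the two schools described above.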

Some have argued that a failure to formally control subjectivity in relation to probability judgments is the failure of risk management; this was an argument some made during WCQI, for example. Subjectivity cannot be eliminated, nor is it an inherent limitation. Rather, the “problem with subjectivity” more precisely concerns two elements:

  1. A failure to recognize where and when subjectivity enters and might create problems in risk assessment and risk-based decision making; and
  2. A failure to implement controls on subjectivity where it is known to occur.

Because risk is about the chance of adverse outcomes of events that are yet to occur, subjective judgments of one form or another will always be required in both risk assessment and risk management decision-making.

We control subjectivity in risk management by:

  • Raising awareness of where/when subjective judgments of probability occur in risk assessment and risk management
  • Identifying heuristics and biases where they occur
  • Improving the understanding of probability among the team and individual experts
  • Calibrating experts individually (see the sketch at the end of this post)
  • Applying knowledge from formal expert elicitation
  • Using expert group facilitation when group probability judgments are sought

Each one of these is its own, future post.
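Until those posts arrive, here is one small illustration of what calibrating an expert individually can look like: scoring the expert’s past probability judgments against actual outcomes with a Brier score. The function and the track record below are my own hypothetical sketch, not a formal elicitation protocol.

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probability forecasts and 0/1 outcomes.

    0.0 is a perfect score; always answering 50% scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: the expert's stated probability that each of
# six risk events would occur, and whether each actually did (1) or not (0).
forecasts = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
outcomes = [1, 1, 0, 0, 1, 0]

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.213
```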