Causal Factor

A causal factor is a significant contributor to an incident, event, or problem that, if eliminated or addressed, would have prevented the occurrence or reduced its severity or frequency. Here are the key points to understand about causal factors:

  1. Definition: A causal factor is a major unplanned, unintended contributor to an incident (a negative event or undesirable condition) that, if eliminated, would have either prevented the occurrence of the incident or reduced its severity or frequency.
  2. Distinction from root cause: While a causal factor contributes to an incident, it is not necessarily the primary driver. The root cause, on the other hand, is the fundamental reason for the occurrence of a problem or event (keeping in mind the deficiencies of any single-root-cause model).
  3. Multiple contributors: An incident may have multiple causal factors, and eliminating one causal factor might not prevent the incident entirely but could reduce its likelihood or impact. This is the insight behind the Swiss Cheese model, discussed below.
  4. Identification methods: Causal factors can be identified through various techniques, including root cause analysis (with tools such as fishbone (Ishikawa) diagrams or the Why-Why technique), Causal Learning Cycle (CLC) analysis, and causal factor charting (see the sketch after this list).
  5. Importance in problem-solving: Identifying causal factors is crucial for developing effective preventive measures and improving safety, quality, and efficiency.
  6. Characteristics: Causal factors must be mistakes, errors, or failures that directly lead to an incident or fail to mitigate its consequences. They should not contain other causal factors within them.
  7. Distinction from root causes: It’s important to note that root causes are not causal factors but rather lead to causal factors. Examples of root causes often mistaken for causal factors include inadequate procedures, improper training, or poor work culture.
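
To make the Why-Why technique and causal factor charting slightly more concrete, here is a minimal, hypothetical sketch in Python. The incident, the answers to each "why," and the class and field names are all invented for illustration; real charting tools and templates will differ.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WhyWhyChain:
    """A single Why-Why (5 Whys) chain for one causal factor of an incident."""
    incident: str                                   # the negative event or undesirable condition
    causal_factor: str                              # the mistake, error, or failure that directly led to it
    whys: List[str] = field(default_factory=list)   # successive answers to "why?"

    def root_cause(self) -> Optional[str]:
        """The deepest 'why' reached; root causes sit beneath causal factors."""
        return self.whys[-1] if self.whys else None

# Hypothetical example: one of possibly several causal factors for the same incident.
chain = WhyWhyChain(
    incident="Batch record review completed late",
    causal_factor="Reviewer missed the review due date",
    whys=[
        "The due date was not visible in the reviewer's queue.",
        "The scheduling tool does not flag approaching deadlines.",
        "The procedure never defined how review deadlines are tracked.",  # candidate root cause
    ],
)
print(f"{chain.causal_factor} <- root cause: {chain.root_cause()}")
```

Note how the sketch keeps the two concepts distinct: the causal factor is the failure closest to the incident, while the root cause is whatever sits at the bottom of the "why" chain.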

Human Factors are not always Causal Factors, but can be!

Human factors and human error are related concepts, but they are not the same. A human error is always a causal factor; human factors explain why human errors can happen.

Human Error

Human error refers to an unintentional action or decision that fails to achieve the intended outcome. It encompasses mistakes, slips, lapses, and violations that can lead to accidents or incidents. There are two types:

  • Unintentional Errors include slips (attentional failures) and lapses (memory failures), often caused by distractions, interruptions, fatigue, or stress, as well as mistakes (flawed plans or judgments).
  • Intentional Errors are violations in which an individual knowingly deviates from safe practices, procedures, or regulations. They are often categorized into routine, situational, or exceptional violations.

Human Factors

Human factors is a broader field that studies how humans interact with various system elements, including tools, machines, environments, and processes. It aims to optimize human well-being and overall system performance by understanding human capabilities, limitations, behaviors, and characteristics.

  • Physical Ergonomics focuses on human anatomical, anthropometric, physiological, and biomechanical characteristics.
  • Cognitive Ergonomics deals with mental processes such as perception, memory, reasoning, and motor response.
  • Organizational Ergonomics involves optimizing organizational structures, policies, and processes to improve overall system performance and worker well-being.

Relationship Between Human Factors and Human Error

  • Causal Relationship: Human factors delve into the underlying reasons why human errors occur. They consider the conditions and systems that contribute to errors, such as poor design, inadequate training, high workload, and environmental factors.
  • Error Prevention: By addressing human factors, organizations can design systems and processes that minimize the likelihood of human errors. This includes implementing error-proofing solutions, improving ergonomics, and enhancing training and supervision.

Key Differences

  • Focus:
    • Human Error: Focuses on the outcome of an action or decision that fails to achieve the intended result.
    • Human Factors: Focuses on the broader context and conditions that influence human performance and behavior.
  • Approach:
    • Human Error: Often addressed through training, disciplinary actions, and procedural changes.
    • Human Factors: Involves a multidisciplinary approach to design systems, environments, and processes that support optimal human performance and reduce the risk of errors.

Fostering Critical Thinking

As a leader, fostering critical thinking in my team and beyond is a core part of my job. Fostering critical thinking means taking an approach that encourages open-mindedness, curiosity, and structured problem-solving.

Encourage Questioning and Healthy Debate

It is essential to create an environment where team members feel comfortable questioning assumptions and engaging in constructive debates. Encourage them to ask “why” and explore different perspectives. This open dialogue promotes deeper thinking and prevents groupthink.

Foster a Culture of Curiosity

Inspire your team to ask questions and seek deeper understanding. Role model this behavior by starting meetings with thought-provoking “what if” scenarios or sharing your own curiosities. Celebrate curiosity and reward those who think outside the box.

Assign Stretch Assignments

Provide your team with challenging tasks that push them beyond their comfort zones. These stretch assignments force them to think critically, analyze information from multiple angles, and develop innovative solutions.

Promote Diverse Perspectives

Encourage diversity of thought within your team. Diverse backgrounds, experiences, and viewpoints can challenge assumptions and biases, leading to a more comprehensive understanding and better decision-making.

Engage in Collaborative Problem-Solving

Involve your team in decision-making processes and problem-solving exercises. Techniques like role reversal debates, where team members argue a point they disagree with, can help them understand different perspectives and refine their argumentative skills.

Provide Training and Resources

Offer training sessions on critical thinking techniques, such as SWOT analysis, root cause analysis, and logical fallacies. Equip your team with the tools and frameworks they need to think critically.

Lead by Example

As a leader, model critical thinking behaviors. Discuss your thought processes openly, question your assumptions, and show the value of critical evaluation in real-time decision-making. Your team will be more likely to emulate these habits.

Encourage Continuous Learning

Recommend learning resources, such as courses, articles, and books from diverse fields. Continuous learning can broaden perspectives and foster multifaceted thinking.

Embrace Feedback and Mistakes

Establish feedback loops within the team and create a safe environment where mistakes are treated as learning opportunities. Receiving and giving feedback helps refine understanding and overcome biases.

Implement Role-Playing Scenarios

Use role-playing scenarios to simulate real-world challenges. This helps team members practice critical thinking in a controlled environment, enhancing their ability to apply these skills in actual situations.

Build Into the Team Charter

Building these expectations into the team charter holds you and your team accountable.

Value: Regulatory Intelligence

Definition: Stay current on industry regulations and guidances. 

Desired Behaviors:

  1. I will dedicate time to reading industry-related guidance and regulation publications related to my job.
  2. I will share publications that I find interesting or applicable to my job with the team
  3. I will present to the team on at least one topic per year to share learnings with the team (or wider organization)

Value: Learning Culture

Definition: Share lessons learned from projects so the team can grow together and remain aligned.  Engage in knowledge-sharing sessions.

Desired Behaviors:

  1. I will share lessons learned from each project with the wider team via the team channel and/or weekly team meeting.
  2. I will encourage team members to openly share their experiences, successes, and challenges without fear of judgement.
  3. I will update the RAID (Risks, Actions, Issues, Decisions) log with decisions made by the team.
  4. I will identify possible process improvements and update the process improvement tracker.

Value: Team Collaboration

Definition: Willingness to help teammates when they reach out for input/help

Desired Behaviors:

  1. I will be supportive of my teammate’s requests for assistance
  2. I will engage and offer my SME advice when asked or help identify another SME to assist 
  3. I will not ignore requests for input/help
  4. I will contribute to an environment where teammates can request help

Thinking of Swiss Cheese: Reason’s Theory of Active and Latent Failures

The Theory of Active and Latent Failures was proposed by James Reason in his book, Human Error. Reason stated that accidents within most complex systems, such as health care, are caused by a breakdown or absence of safety barriers across four levels within a system. These levels can best be described as Unsafe Acts, Preconditions for Unsafe Acts, Supervisory Factors, and Organizational Influences. Reason used the term “active failures” to describe factors at the Unsafe Acts level, whereas “latent failures” described unsafe conditions higher up in the system.

This theory is represented by the Swiss Cheese model, which has become very popular in root cause analysis and risk management circles and is widely applied beyond the safety world.

Swiss Cheese Model

In the Swiss Cheese model, the holes in the cheese depict the failure or absence of barriers within a system. Such occurrences represent failures that threaten the overall integrity of the system. If such failures never occurred within a system (i.e., if the system were perfect), then there would not be any holes in the cheese. We would have a nice Engelberg cheddar.

Not every hole that exists in a system will lead to an error. Sometimes holes may be inconsequential. Other times, holes in the cheese may be detected and corrected before something bad happens. This process of detecting and correcting errors occurs all the time.

The holes in the cheese are dynamic, not static. They open and close over time due to many factors, allowing the system to function appropriately without catastrophe. This is what human factors engineers call “resilience.” A resilient system is one that can adapt and adjust to changes or disturbances.

Holes in the cheese open and close at different rates. The rate at which holes pop up or disappear is determined by the type of failure the hole represents.

  1. Holes at the Unsafe Acts level, and even some at the Preconditions level, represent active failures. Active failures usually occur during the work itself and are directly linked to the bad outcome. They open and close over time as people make errors, catch their errors, and correct them.
  2. Latent failures occur higher up in the system, above the Unsafe Acts level — the Organizational, Supervisory, and Preconditions levels. These failures are referred to as “latent” because when they occur or open, they often go undetected. They can lie “dormant” or “latent” in the system for an extended period of time before they are recognized. Unlike active failures, latent failures do not close or disappear quickly.

Most events (harms) are associated with multiple active and latent failures. Unlike the typical Swiss Cheese diagram, which shows an arrow flying through one hole at each level of the system, there can be a variety of failures at each level that interact to produce an event. In other words, several failures at the Organizational, Supervisory, Preconditions, and Unsafe Acts levels can all lead to harm. Holes associated with events tend to be most numerous at the Unsafe Acts and Preconditions levels and (usually) become fewer as one progresses upward through the Supervisory and Organizational levels.

Given the frequency and dynamic nature of frontline work, holes open up more often at the Unsafe Acts and Preconditions levels, and more holes are typically identified at these levels during root cause investigations and risk assessments.

The way the holes in the cheese interact across levels is important:

  • One-to-many mapping of causal factors: a hole at a higher level (e.g., Preconditions) results in several holes at a lower level (e.g., Unsafe Acts).
  • Many-to-one mapping of causal factors: multiple holes at a higher level (e.g., Preconditions) interact to produce a single hole at a lower level (e.g., Unsafe Acts), as sketched below.
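
As a rough illustration of how these mappings can be represented, here is a minimal Python sketch of holes at Reason's four levels. The specific holes, the names, and the idea of linking them with a contributes_to list are my own illustrative assumptions, not part of Reason's model itself.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Level(Enum):
    ORGANIZATIONAL = "Organizational Influences"     # latent failures
    SUPERVISORY = "Supervisory Factors"              # latent failures
    PRECONDITIONS = "Preconditions for Unsafe Acts"  # latent and active failures
    UNSAFE_ACTS = "Unsafe Acts"                      # active failures

@dataclass
class Hole:
    """A failed or absent barrier: a hole in one slice of the cheese."""
    level: Level
    description: str
    contributes_to: List["Hole"] = field(default_factory=list)  # holes at lower levels

# Hypothetical event chain. One organizational hole opens several precondition
# holes (one-to-many), and two precondition holes combine to open a single
# unsafe act (many-to-one).
no_workload_standard = Hole(Level.ORGANIZATIONAL, "No standard for balancing review workload")
high_workload = Hole(Level.PRECONDITIONS, "Reviewer juggling three overdue batches")
fatigue = Hole(Level.PRECONDITIONS, "End-of-shift fatigue")
skipped_check = Hole(Level.UNSAFE_ACTS, "Second-person verification skipped")

no_workload_standard.contributes_to += [high_workload, fatigue]  # one-to-many
high_workload.contributes_to.append(skipped_check)               # many-to-one:
fatigue.contributes_to.append(skipped_check)                     # two preconditions, one unsafe act
```

Walking the contributes_to links downward from the organizational hole reproduces the one-to-many branching, while the two precondition holes both pointing at the same unsafe act captures the many-to-one convergence.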

By understanding the Swiss Cheese model, and Reason’s wider work on active and latent failures, we can strengthen our approach to problem-solving.

Plus cheese is cool.


Call a Band-Aid a Band-Aid: Corrections and Problem-Solving

A common mistake in problem-solving, especially within the deviation process, is not giving enough forethought to band-aids. As I discussed in the post “Treating All Investigations the Same,” it is important to be able to determine which problems need deep root cause analysis and which should be more catch and release.

For catch and release you usually correct, document, and close. In these cases the problem is inherently small enough, and the experience suggesting a possible course of action (the correction) is sound enough, that you can proceed without root cause analysis and a formal solution. If those problems persist, and experience- and intuition-driven corrections prove ineffective, then we might decide to engage in structured problem-solving for a more effective solution and outcome.

In the post “When troubleshooting causes trouble” I laid out the 4Cs: Concern, Cause, Countermeasure, Check Results. It is during the Countermeasure step that we determine what immediate or temporary countermeasures can be taken to reduce or eliminate the problem; this is where we apply corrections and immediate actions.

It helps to agree on what a correction is, especially as it relates to corrective actions. Folks often get confused here. A correction addresses the problem; it does not address the cause.

Fixing a tire, rebooting a computer, doing the dishes. These are all corrections.

As I discussed in “Design Problem Solving into the Process,” good process design involves anticipating as many problems as possible, identifying ways to notice those problems, and having clear escalation paths. For low-risk issues, that is often just fix, record, move on. I talk a lot more about this in the post “Managing Events Systematically.”

A good problem-solving system is built to help people decide when to apply these band-aids and when to engage in more structured problem-solving. Building this situational awareness into the organization is key.

Design Problem Solving into the Process

Good processes and systems have ways designed into them to identify when a problem occurs, and ensure it gets the right rigor of problem-solving. A model like Art Smalley’s can be helpful here.

Each and every process should go through the following steps:

  1. Define which problems should be escalated and which should not. Everyone working in a process should have the same definition of what constitutes a problem. Often we end up with a hierarchy: issues that are solved within the process (Level 1) and those that go to a root cause process such as deviation/CAPA (Level 2); see the triage sketch after the response steps below.
  2. Identify the ways to notice a problem. Make the work as visual as possible so it is easier to detect the problem.
  3. Define the escalation method. There should be one clear way to surface a problem. There are many ways to create a signal, but it should be simple, timely, and very clear.

These three elements make up the request for help.

The next two steps make up the response to that request.

  1. Who is the right person to respond? Supervisor? Area management? Process Owner? Quality?
  2. How does the individual respond, and most importantly when? This should be standardized so the other end of that help chain is not wondering whether, when, and in what form that help is going to arrive.
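
To illustrate how the request for help and its response might be made explicit, here is a small, hypothetical Python sketch of a two-level triage rule and a standardized response table. The triage criteria, responders, and response times are invented placeholders; each process will define its own.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    LEVEL_1 = "Solve within the process: correct, document, close"
    LEVEL_2 = "Escalate to the root cause process (deviation/CAPA)"

@dataclass
class Problem:
    description: str
    recurring: bool        # has the same issue been corrected before?
    product_impact: bool   # could it affect product quality or patient safety?

def triage(problem: Problem) -> Tier:
    """Toy escalation rule: anything recurring or with product impact goes to Level 2."""
    if problem.recurring or problem.product_impact:
        return Tier.LEVEL_2
    return Tier.LEVEL_1

# Hypothetical response standard: who responds and by when, so the person asking
# for help is never left wondering whether, when, and in what form help will arrive.
RESPONSE_STANDARD = {
    Tier.LEVEL_1: {"responder": "Area supervisor", "respond_within": "same shift"},
    Tier.LEVEL_2: {"responder": "Quality / process owner", "respond_within": "1 business day"},
}

issue = Problem("Label printer jammed mid-run", recurring=True, product_impact=False)
tier = triage(issue)
print(tier.value, "->", RESPONSE_STANDARD[tier])
```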

In order for this to work, it is important to identify clear ownership of the problem. There must always be one person clearly accountable, even if responsibility for individual pieces is shared, so they can push the problem forward.

It is easy for problem-solving to stall, so make sure progress is transparent. Knowing what is being worked on, and what is not, is critical.

Prioritization is key. Not every problem needs solving, so have a mechanism to ensure the right problems are being solved in the process.

Problem solving within a process