Cause-Consequence Analysis (CCA): A Powerful Tool for Risk Assessment

Cause-Consequence Analysis (CCA) is a versatile and comprehensive risk assessment technique that combines elements of fault tree analysis and event tree analysis. It allows analysts to examine both the causes and the potential consequences of critical events, providing a holistic view of risk scenarios.

What is Cause-Consequence Analysis?

Cause-Consequence Analysis is a graphical method that integrates two key aspects of risk assessment:

  1. Cause analysis: Identifying and analyzing the potential causes of a critical event using fault tree-like structures.
  2. Consequence analysis: Evaluating the possible outcomes and their probabilities using event tree-like structures.

The result is a comprehensive diagram that visually represents the relationships between causes, critical events, and their potential consequences.

When to Use Cause-Consequence Analysis

CCA is particularly useful in the following situations:

  1. Complex systems analysis: When dealing with intricate systems where multiple factors can interact to produce various outcomes.
  2. Safety-critical industries: In sectors such as nuclear power, chemical processing, and aerospace, where understanding both causes and consequences is crucial.
  3. Multiple outcome scenarios: When a critical event can lead to various consequences depending on the success or failure of safety systems or interventions.
  4. Comprehensive risk assessment: When a thorough understanding of both the causes and potential impacts of risks is required.
  5. Decision support: To aid in risk management decisions by providing a clear picture of risk pathways and potential outcomes.

How to Implement Cause-Consequence Analysis

Implementing CCA involves several key steps:

1. Identify the Critical Event

Start by selecting a critical event – an undesired occurrence that could lead to significant consequences. This event serves as the focal point of the analysis.

2. Construct the Cause Tree

Working backwards from the critical event, develop a fault tree-like structure to identify and analyze the potential causes. This involves:

  • Identifying primary, secondary, and root causes
  • Using logic gates (AND, OR) to show how causes combine
  • Assigning probabilities to basic events

3. Develop the Consequence Tree

Moving forward from the critical event, create an event tree-like structure to map out potential consequences:

  • Identify safety functions and barriers
  • Determine possible outcomes based on the success or failure of these functions
  • Include time delays where relevant

4. Integrate Cause and Consequence Trees

Combine the cause and consequence trees around the critical event to create a complete CCA diagram.

5. Analyze Probabilities

Calculate the probabilities of different outcome scenarios by combining the probabilities from both the cause and consequence portions of the diagram.

6. Evaluate and Interpret Results

Assess the overall risk picture, identifying the most critical pathways and potential areas for risk reduction.
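The quantification in steps 2 through 5 can be sketched in Python. This is a minimal illustration, not a tool: the gate logic assumes independent basic events, and every probability and event name below is a hypothetical placeholder.

```python
# Minimal sketch of CCA quantification (all probabilities hypothetical).
# Cause side: fault-tree gates, assuming independent basic events.
def or_gate(*probs):
    """P(at least one input event occurs) for independent inputs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(*probs):
    """P(all input events occur) for independent inputs."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Cause tree: critical event if (seal fails AND sensor fails) OR operator error.
p_critical = or_gate(and_gate(0.01, 0.05), 0.002)

# Consequence tree: two safety barriers, each of which works or fails.
p_barrier1_fails = 0.02   # e.g., automatic shutdown (hypothetical)
p_barrier2_fails = 0.10   # e.g., secondary containment (hypothetical)

outcomes = {
    "contained":     p_critical * (1 - p_barrier1_fails),
    "minor release": p_critical * p_barrier1_fails * (1 - p_barrier2_fails),
    "major release": p_critical * p_barrier1_fails * p_barrier2_fails,
}

for name, p in outcomes.items():
    print(f"{name}: {p:.2e}")
```

Note that the outcome probabilities sum to the critical-event probability, which is a useful consistency check on any CCA diagram.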

Benefits of Cause-Consequence Analysis

CCA offers several advantages:

  • Comprehensive view: Provides a complete picture of risk scenarios from causes to consequences.
  • Flexibility: Can be applied to various types of systems and risk scenarios.
  • Visual representation: Offers a clear, graphical depiction of risk pathways.
  • Quantitative analysis: Allows for probability calculations and risk quantification.
  • Decision support: Helps identify critical areas for risk mitigation efforts.

Challenges and Considerations

While powerful, CCA does have some limitations to keep in mind:

  • Complexity: For large systems, CCA diagrams can become very complex and time-consuming to develop.
  • Expertise required: Proper implementation requires a good understanding of both fault tree and event tree analysis techniques.
  • Data needs: Accurate probability data for all events may not always be available.
  • Static representation: The basic CCA model doesn’t capture dynamic system behavior over time.

Cause-Consequence Analysis is a valuable tool in the risk assessment toolkit, offering a comprehensive approach to understanding and managing risk. By integrating cause analysis with consequence evaluation, CCA provides decision-makers with a powerful means of visualizing risk scenarios and identifying critical areas for intervention. While it requires some expertise to implement effectively, the insights gained from CCA can be invaluable in developing robust risk management strategies across a wide range of industries and applications.

Cause-Consequence Analysis Example

| Process Step | Potential Cause | Consequence | Mitigation Strategy |
|---|---|---|---|
| Upstream Bioreactor Operation | Leak in single-use bioreactor bag | Contamination risk, batch loss | Use reinforced bags with pressure sensors + secondary containment |
| Cell Culture | Failure to maintain pH/temperature | Reduced cell viability, lower mAb yield | Real-time monitoring with automated control systems |
| Harvest Clarification | Pump malfunction during depth filtration | Cell lysis releasing impurities | Redundant pumping systems + surge tanks |
| Protein A Chromatography | Loss of column integrity | Inefficient antibody capture | Regular integrity testing + parallel modular columns |
| Viral Filtration | Membrane fouling | Reduced throughput, extended processing time | Pre-filtration + optimized flow rates |
| Formulation | Improper mixing during buffer exchange | Product aggregation, inconsistent dosing | Automated mixing systems with density sensors |
| Aseptic Filling | Breach in sterile barrier | Microbial contamination | Closed system transfer devices (CSTDs) + PUPSIT testing |
| Cold Chain Storage | Temperature deviation during freezing | Protein denaturation | Controlled rate freeze-thaw systems + temperature loggers |

Key Risk Areas and Systemic Impacts

1. Contamination Cascade
Single-use system breaches can lead to:

  • Direct product loss ($500k-$2M per batch)
  • Facility downtime for decontamination (2-4 weeks)
  • Regulatory audit triggers
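As a rough illustration, the ranges above can be turned into an expected-loss estimate. The batch-loss and downtime ranges come from the bullets; the breach probability and the per-week downtime cost are invented for the example.

```python
# Rough expected-loss sketch for a single-use breach (illustrative numbers).
p_breach_per_batch = 0.005               # assumed breach probability, not from data
batch_loss = (500_000 + 2_000_000) / 2   # midpoint of the $500k-$2M range
downtime_weeks = (2 + 4) / 2             # midpoint of the 2-4 week range
cost_per_week_downtime = 250_000         # assumed facility cost, hypothetical

expected_loss = p_breach_per_batch * (batch_loss + downtime_weeks * cost_per_week_downtime)
print(f"Expected loss per batch: ${expected_loss:,.0f}")
```

Even crude numbers like these help rank mitigation spending against the contamination cascade they prevent.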

2. Supply Chain Interdependencies
Delayed delivery of single-use components causes:

  • Production schedule disruptions
  • Increased inventory carrying costs
  • Potential quality variability between suppliers

3. Environmental Tradeoffs
While reducing water/energy use by 30-40% vs stainless steel, single-use systems introduce:

  • Plastic waste generation (300-500 kg/batch)
  • Supply chain carbon footprint from polymer production

Mitigation Effectiveness Analysis

| Control Measure | Risk Reduction (%) | Cost Impact |
|---|---|---|
| Automated monitoring systems | 45-60 | High initial investment |
| Redundant fluid paths | 30-40 | Moderate |
| Supplier qualification | 25-35 | Low |
| Staff training programs | 15-25 | Recurring |
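If the reductions in the table act independently, layered controls can be combined multiplicatively. The sketch below takes the midpoint of each range from the table; the baseline event probability is a hypothetical placeholder.

```python
# Residual-risk sketch for layered controls (midpoints of the table's ranges).
reductions = {
    "automated monitoring": 0.525,    # midpoint of 45-60%
    "redundant fluid paths": 0.35,    # midpoint of 30-40%
    "supplier qualification": 0.30,   # midpoint of 25-35%
    "staff training": 0.20,           # midpoint of 15-25%
}

baseline_risk = 0.05  # assumed baseline probability of a contamination event

residual = baseline_risk
for name, r in reductions.items():
    residual *= (1.0 - r)  # each control removes its fraction of remaining risk

overall_reduction = 1.0 - residual / baseline_risk
print(f"Residual risk: {residual:.4f} (overall reduction {overall_reduction:.0%})")
```

The independence assumption is optimistic; in practice controls overlap, so a combined estimate like this is an upper bound on effectiveness.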

This analysis demonstrates that single-use mAb manufacturing offers flexibility and contamination reduction benefits, but requires rigorous control of material properties, process parameters, and supply chain logistics. Modern solutions like closed-system automation and modular facility designs help mitigate key risks while maintaining the environmental advantages of single-use platforms.

Determining Causative Laboratory Error in Bioburden, Endotoxin, and Environmental Monitoring OOS Results

In the previous post, we discussed the critical importance of thorough investigations into deviations, as highlighted by the recent FDA warning letter to Sanofi. Let us delve deeper into a specific aspect of these investigations: determining whether an invalidated out-of-specification (OOS) result for bioburden, endotoxin, or environmental monitoring action limit excursions conclusively demonstrates causative laboratory error.

When faced with an OOS result in microbiological testing, it’s crucial to conduct a thorough investigation before invalidating the result. The FDA expects companies to provide scientific justification and evidence that conclusively demonstrates a causative laboratory error if a result is to be invalidated.

Key Steps in Evaluating Laboratory Error

1. Review of Test Method and Procedure

  • Examine the standard operating procedure (SOP) for the test method
  • Verify that all steps were followed correctly
  • Check for any deviations from the established procedure

2. Evaluation of Equipment and Materials

Evaluation of Equipment and Materials is a critical step in determining whether laboratory error caused an out-of-specification (OOS) result, particularly for bioburden, endotoxin, or environmental monitoring tests. Here’s a detailed approach to performing this evaluation:

Equipment Assessment

Functionality Check
  • Run performance verification tests on key equipment used in the analysis
  • Review equipment logs for any recent malfunctions or irregularities
  • Verify that all equipment settings were correct for the specific test performed
Calibration Review
  • Check calibration records to ensure equipment was within its calibration period
  • Verify that calibration standards used were traceable and not expired
  • Review any recent calibration data for trends or shifts
Maintenance Evaluation
  • Examine maintenance logs for adherence to scheduled maintenance
  • Look for any recent repairs or adjustments that could affect performance
  • Verify that all preventive maintenance tasks were completed as required

Materials Evaluation

Reagent Quality Control
  • Check expiration dates of all reagents used in the test
  • Review storage conditions to ensure reagents were stored properly
  • Verify that quality control checks were performed on reagents before use
Media Assessment (for Bioburden and Environmental Monitoring)
  • Review growth promotion test results for culture media
  • Check pH and sterility of prepared media
  • Verify that media was stored at the correct temperature
Water Quality (for Endotoxin Testing)
  • Review records of water quality used for reagent preparation
  • Check for any recent changes in water purification systems
  • Verify endotoxin levels in water used for testing

Environmental Factors

Laboratory Conditions
  • Review temperature and humidity logs for the testing area
  • Check for any unusual events (e.g., power outages, HVAC issues) around the time of testing
  • Verify that environmental conditions met the requirements for the test method
Contamination Control
  • Examine cleaning logs for the laboratory area and equipment
  • Review recent environmental monitoring results for the testing area
  • Check for any breaches in aseptic technique during testing

Documentation Review

Standard Operating Procedures (SOPs)
  • Verify that the most current version of the SOP was used
  • Check for any recent changes to the SOP that might affect the test
  • Ensure all steps in the SOP were followed and documented
Equipment and Material Certifications
  • Review certificates of analysis for critical reagents and standards
  • Check equipment qualification documents (IQ/OQ/PQ) for compliance
  • Verify that all required certifications were current at the time of testing

By thoroughly evaluating equipment and materials using these detailed steps, laboratories can more conclusively determine whether an OOS result was due to laboratory error or represents a true product quality issue. This comprehensive approach helps ensure the integrity of microbiological testing and supports robust quality control in pharmaceutical manufacturing.

3. Assessment of Analyst Performance

Here are key aspects to consider when evaluating analyst performance during an OOS investigation:

Review Training Records

  • Examine the analyst’s training documentation to ensure they are qualified to perform the specific test method.
  • Verify that the analyst has completed all required periodic refresher training.
  • Check if the analyst has demonstrated proficiency in the particular test method recently.

Evaluate Recent Performance History

  • Review the analyst’s performance on similar tests over the past few months.
  • Look for any patterns or trends in the analyst’s results, such as consistently high or low readings.
  • Compare the analyst’s results with those of other analysts performing the same tests.

Conduct Interviews

  • Interview the analyst who performed the test to gather detailed information about the testing process.
  • Ask open-ended questions to encourage the analyst to describe any unusual occurrences or deviations from standard procedures.
  • Inquire about the analyst’s workload and any potential distractions during testing.

Observe Technique

  • If possible, have the analyst demonstrate the test method while being observed by a supervisor or senior analyst.
  • Pay attention to the analyst’s technique, including sample handling, reagent preparation, and equipment operation.
  • Note any deviations from standard operating procedures (SOPs) or good practices.

Review Documentation Practices

  • Examine the analyst’s laboratory notebooks and test records for completeness and accuracy.
  • Verify that all required information was recorded contemporaneously.
  • Check for any unusual notations or corrections in the documentation.

Assess Knowledge of Method and Equipment

  • Quiz the analyst on critical aspects of the test method and equipment operation.
  • Verify their understanding of acceptance criteria, potential sources of error, and troubleshooting procedures.
  • Ensure the analyst is aware of recent changes to SOPs or equipment calibration requirements.

Evaluate Workload and Environment

  • Consider the analyst’s workload at the time of testing, including any time pressures or competing priorities.
  • Assess the laboratory environment for potential distractions or interruptions that could have affected performance.
  • Review any recent changes in the analyst’s responsibilities or work schedule.

Perform Comparative Testing

  • Have another qualified analyst repeat the test using the same sample and equipment, if possible.
  • Compare the results to determine if there are significant discrepancies between analysts.
  • If discrepancies exist, investigate potential reasons for the differences.

Review Equipment Use Records

  • Check equipment logbooks to verify proper use and any noted issues during the time of testing.
  • Confirm that the analyst used the correct equipment and that it was properly calibrated and maintained.

Consider Human Factors

  • Assess any personal factors that could have affected the analyst’s performance, such as fatigue, illness, or personal stress.
  • Review the analyst’s work schedule leading up to the OOS result for any unusual patterns or extended hours.

By thoroughly assessing analyst performance using these methods, investigators can determine whether human error contributed to the OOS result and identify areas for improvement in training, procedures, or work environment. It’s important to approach this assessment objectively and supportively, focusing on systemic improvements rather than individual blame.

4. Examination of Environmental Factors

  • Review environmental monitoring data for the testing area
  • Check for any unusual events or conditions that could have affected the test

5. Data Analysis and Trending

  • Compare the OOS result with historical data and trends
  • Look for any patterns or anomalies that might explain the result
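The trending step can be sketched as a simple control-limit check against historical counts. The data and the mean-plus-three-sigma rule here are illustrative only, not a compendial method.

```python
import statistics

# Illustrative bioburden trending: flag a result above mean + 3 sigma
# of historical counts (CFU). The data below are invented for the example.
historical_cfu = [12, 8, 15, 10, 9, 14, 11, 13, 7, 10]

mean = statistics.mean(historical_cfu)
sigma = statistics.stdev(historical_cfu)   # sample standard deviation
upper_limit = mean + 3 * sigma

def flag_result(cfu):
    """Return True if the result exceeds the trending limit."""
    return cfu > upper_limit

print(f"Trending limit: {upper_limit:.1f} CFU; 45 CFU flagged: {flag_result(45)}")
```

A result far outside its historical distribution strengthens the case for a true excursion; a result just past the limit argues for more investigation, not invalidation.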

Conclusive vs. Inconclusive Evidence

Conclusive Evidence of Laboratory Error

To conclusively demonstrate laboratory error, you should be able to:

  • Identify a specific, documented error in the testing process
  • Reproduce the error and show how it leads to the OOS result
  • Demonstrate that correcting the error leads to an in-specification result

Examples of conclusive evidence might include:

  • Documented use of an expired reagent
  • Verified malfunction of testing equipment
  • Confirmed contamination of a negative control

Inconclusive Evidence

If the investigation reveals potential issues but cannot definitively link them to the OOS result, the evidence is considered inconclusive. This might include:

  • Minor deviations from SOPs that don’t clearly impact the result
  • Slight variations in environmental conditions
  • Analyst performance issues that aren’t directly tied to the specific test

Special Considerations for Microbiological Testing

Bioburden, endotoxin, and environmental monitoring tests present unique challenges due to their biological nature.

Bioburden Testing

  • Consider the possibility of sample contamination during collection or processing
  • Evaluate the recovery efficiency of the test method
  • Assess the potential for microbial growth during sample storage

Endotoxin Testing

  • Review the sample preparation process, including any dilution steps
  • Evaluate the potential for endotoxin masking or enhancement
  • Consider the impact of product formulation on the test method

Environmental Monitoring

  • Assess the sampling technique and equipment used
  • Consider the potential for transient environmental contamination
  • Evaluate the impact of recent cleaning or maintenance activities

Documenting the Investigation

Regardless of the outcome, it’s crucial to thoroughly document the investigation process. This documentation should include:

  • A clear description of the OOS result and initial observations
  • Detailed accounts of all investigative steps taken
  • Raw data and analytical results from the investigation
  • A comprehensive analysis of the evidence
  • A scientifically justified conclusion

Conclusion

Determining whether an invalidated OOS result conclusively demonstrates causative laboratory error requires a systematic, thorough, and well-documented investigation. For microbiological tests like bioburden, endotoxin, and environmental monitoring, this process can be particularly challenging due to the complex and sometimes variable nature of biological systems.

Remember, the goal is not to simply invalidate OOS results, but to understand the root cause and implement corrective and preventive actions. Only through rigorous investigation and continuous improvement can we ensure the quality and safety of pharmaceutical products. When investigating environmental and in-process results we are investigating the whole house of contamination control.

Causal Factor

A causal factor is a significant contributor to an incident, event, or problem that, if eliminated or addressed, would have prevented the occurrence or reduced its severity or frequency. Here are the key points to understand about causal factors:

  1. Definition: A causal factor is a major unplanned, unintended contributor to an incident (a negative event or undesirable condition) that, if eliminated, would have either prevented the occurrence of the incident or reduced its severity or frequency.
  2. Distinction from root cause: While a causal factor contributes to an incident, it is not necessarily the primary driver. The root cause, on the other hand, is the fundamental reason for the occurrence of a problem or event. (Pay attention to the deficiencies of the model)
  3. Multiple contributors: An incident may have multiple causal factors, and eliminating one causal factor might not prevent the incident entirely but could reduce its likelihood or impact (the Swiss-Cheese Model).
  4. Identification methods: Causal factors can be identified through various techniques, including root cause analysis (with tools such as fishbone (Ishikawa) diagrams or the Why-Why technique), Causal Learning Cycle (CLC) analysis, and causal factor charting.
  5. Importance in problem-solving: Identifying causal factors is crucial for developing effective preventive measures and improving safety, quality, and efficiency.
  6. Characteristics: Causal factors must be mistakes, errors, or failures that directly lead to an incident or fail to mitigate its consequences. They should not contain other causal factors within them.
  7. Distinction from root causes: It’s important to note that root causes are not causal factors but rather lead to causal factors. Examples of root causes often mistaken for causal factors include inadequate procedures, improper training, or poor work culture.
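The Swiss-Cheese idea in point 3 can be sketched numerically: an incident requires the "holes" in every barrier to line up, and strengthening one layer reduces, but does not eliminate, the incident probability. The barriers and their failure probabilities below are hypothetical, and independence between layers is assumed.

```python
from math import prod

# Swiss-Cheese sketch: an incident requires all independent barriers to fail.
barrier_failure = {
    "design control": 0.05,
    "procedure": 0.10,
    "supervision": 0.20,
    "final inspection": 0.15,
}

p_incident = prod(barrier_failure.values())

# Addressing one causal factor (strengthening one barrier) lowers,
# but does not eliminate, the incident probability.
improved = dict(barrier_failure, procedure=0.01)
p_improved = prod(improved.values())

print(f"Baseline: {p_incident:.2e}, after fixing one layer: {p_improved:.2e}")
```

This is why eliminating a single causal factor rarely "solves" an incident: the other layers' holes are still there.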

Human Factors are not always Causal Factors, but can be!

Human factors and human error are related concepts, but they are not the same. A human error is always a causal factor; human factors explain why human errors can happen.

Human Error

Human error refers to an unintentional action or decision that fails to achieve the intended outcome. It encompasses mistakes, slips, lapses, and violations that can lead to accidents or incidents. There are two types:

  • Unintentional Errors include slips (attentional failures) and lapses (memory failures) caused by distractions, interruptions, fatigue, or stress.
  • Intentional Errors are violations in which an individual knowingly deviates from safe practices, procedures, or regulations. They are often categorized into routine, situational, or exceptional violations.

Human Factors

Human factors is a broader field that studies how humans interact with various system elements, including tools, machines, environments, and processes. It aims to optimize human well-being and overall system performance by understanding human capabilities, limitations, behaviors, and characteristics.

  • Physical Ergonomics focuses on human anatomical, anthropometric, physiological, and biomechanical characteristics.
  • Cognitive Ergonomics deals with mental processes such as perception, memory, reasoning, and motor response.
  • Organizational Ergonomics involves optimizing organizational structures, policies, and processes to improve overall system performance and worker well-being.

Relationship Between Human Factors and Human Error

  • Causal Relationship: Human factors delve into the underlying reasons why human errors occur. They consider the conditions and systems that contribute to errors, such as poor design, inadequate training, high workload, and environmental factors.
  • Error Prevention: By addressing human factors, organizations can design systems and processes that minimize the likelihood of human errors. This includes implementing error-proofing solutions, improving ergonomics, and enhancing training and supervision.

Key Differences

  • Focus:
    • Human Error: Focuses on the outcome of an action or decision that fails to achieve the intended result.
    • Human Factors: Focuses on the broader context and conditions that influence human performance and behavior.
  • Approach:
    • Human Error: Often addressed through training, disciplinary actions, and procedural changes.
    • Human Factors: Involves a multidisciplinary approach to design systems, environments, and processes that support optimal human performance and reduce the risk of errors.

What prevents us from improving systems?

Improvement is a process, and sometimes it can feel like a one-step-forward-two-steps-back sort of shuffle. And just like any dance, knowing the steps to avoid can be critical. Here are some important ones to consider. In many ways these pitfalls form an onion: we systematically address one layer of the problem and then work our way to the next.

Human-error-as-cause

The vague, ambiguous and poorly defined bucket concept called human error is just a mess. Human error is never the root cause; it is a category, an output that needs to be understood. Why did the human error occur? Was it because the technology was difficult to use or that the procedure was confusing? Those answers are things that are “actionable”—you can address them with a corrective action.

The only action you can take when you say “human error” is to get rid of the people. As an explanation, the concept is widely misused and abused.

Human performance instead of human error
| Attribute | Person Approach | System Approach |
|---|---|---|
| Focus | Errors and violations | Humans are fallible; errors are to be expected |
| Presumed Cause | Forgetfulness, inattention, carelessness, negligence | “Upstream” failures, error traps; organizational failures that contribute to these |
| Countermeasure to apply | Fear, more/longer procedures, retraining, disciplinary measures, shaming | Establish system defenses and barriers |
Options to avoid human error

Human error has been a focus for a long time, and many companies have been building programmatic approaches to avoiding this pitfall. But we still have others to grapple with.

Causal Chains

We like to build domino cascades that imply a linear ordering of cause and effect – look no further than the ubiquitous 5-Whys. Causal chains force people to reduce complex systems to linear sequences, when what we often need to grapple with is those systems’ non-linearity, temporariness of influence, and emergence.

This is where taking risk into consideration and having robust, adaptive problem-solving techniques is critical. Approach everything like a simple problem and nothing will ever get fixed; treat every problem as needing a full-on approach and you are paralyzed. As we mature, we need a mindset that distinguishes types of problems and the ability to differentiate and move between them easily.

Root cause(s)

Once we remove human-error-as-cause and stop over-relying on causal chains, the next layer of the onion is to take a hard look at the concept of a root cause. The idea of a root cause “that, if removed, prevents recurrence” is pretty nonsensical. Novice practitioners of root cause analysis usually go right to the heart of the problem when they ask, “How do I know I reached the root cause?” To which the oft-used stopping point “that management can control” is, quite frankly, fairly absurd. The concept encourages the idea of a single root cause, ignoring multiple, jointly necessary contributory causes, let alone causal loops and emergent, synergistic, or holistic effects. The idea of a root cause is just an efficiency-thoroughness trade-off, and we are better off understanding that and applying risk thinking when deciding between thoroughness and resource constraints.

In conclusion

Our problem solving needs to strive to drive out monolithic explanations, which act as proxies for real understanding: big ideas wrapped in simple labels. The labels are ill-defined and come in and out of fashion – poor or lacking quality culture, lack of process, human error – and tend to give some reassurance while allowing the problem to be passed on and ‘managed’, for instance via training or “transformations”. And yes, maybe there is some irony in that I tend to think of the problems of problem solving in light of these ways of problem solving.