Barriers and Root Cause Analysis: A Comprehensive Framework

Barriers, or controls, are a fundamental element of root cause analysis. By understanding barriers, including their types and functions, we can understand both why a problem happened and how it can be prevented in the future. Evaluating current process controls as part of root cause analysis helps determine whether all the barriers pertaining to the problem under investigation were present and effective.

Understanding Barrier Analysis

At its simplest, barrier analysis is a three-part brainstorm that examines the status and effectiveness of safety measures:

  • Barriers that failed
  • Barriers that were not used
  • Barriers that did not exist

The key to this brainstorming session is to try to find all of the failed, unused, or nonexistent barriers. Do not be concerned if you are not certain which category they belong in initially.

Types of Barriers: Technical, Human, and Organizational

Most forms of barrier analysis examine two primary types: technical and administrative. Administrative barriers can be further broken down into “human” and “organizational” categories.

Choose Technical if a technical or engineering control exists. Examples:

  • Separation among manufacturing or packaging lines
  • Emergency power supply
  • Dedicated equipment
  • Barcoding
  • Keypad-controlled doors
  • Separated storage for components
  • Software that prevents a workflow from proceeding if a field is not completed
  • Redundant designs

Choose Human if the control relies on a human reviewer or operator. Examples:

  • Training and certifications
  • Use of checklists
  • Verification of a critical task by a second person

Choose Organizational if the control involves a transfer of responsibility, for example a document reviewed by both manufacturing and quality. Examples:

  • Clear procedures and policies
  • Adequate supervision
  • Adequate workload
  • Periodic process audits

Preventive vs. Mitigative Barriers: A Critical Distinction

A fundamental aspect of barrier analysis involves understanding the difference between preventive and mitigative barriers. This distinction is crucial for comprehensive risk management and aligns with widely used frameworks such as bow-tie analysis.

Preventive Barriers

Preventive barriers are measures designed to prevent the top event from occurring. These barriers:

  • Focus on stopping incidents before they happen
  • Act as the first line of defense against threats
  • Aim to reduce the likelihood that a risk will materialize
  • Are proactive in nature, addressing potential causes before they can lead to unwanted events

Examples of preventive barriers include:

  • Regular equipment maintenance programs
  • Training and certification programs
  • Access controls and authentication systems
  • Equipment qualification protocols (IQ/OQ/PQ) validating proper installation and operation

Mitigative Barriers

Mitigative barriers are designed to reduce the impact and severity of consequences after the top event has occurred. These barriers:

  • Focus on damage control rather than prevention
  • Act to minimize harm when preventive measures have failed
  • Reduce the severity or substantially decrease the likelihood of consequences occurring
  • Are reactive in nature, coming into play after a risk has materialized

Examples of mitigative barriers include:

  • Alarm systems and response procedures
  • Containment measures for hazards
  • Emergency response teams and protocols
  • Backup power systems for critical operations

Timeline and Implementation Differences

The timing of barrier implementation and failure differs significantly between preventive and mitigative barriers:

  • Preventive barriers often fail over days, weeks, or years before the top event occurs, providing more opportunities for identification and intervention
  • Mitigative barriers often fail over minutes or hours after the top event occurs, requiring higher reliability and immediate effectiveness
  • This timing difference leads to higher reliance on mitigative barriers working correctly the first time

Enhanced Barrier Analysis Framework

Building on the traditional three-part analysis, organizations should incorporate the preventive vs. mitigative distinction into their barrier evaluation (a minimal classification sketch follows the list below):

  • Preventive barriers that failed
  • Preventive barriers that were not used
  • Preventive barriers that did not exist
  • Mitigative barriers that failed
  • Mitigative barriers that were not used
  • Mitigative barriers that did not exist
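
To make the six categories concrete, here is a minimal Python sketch of the enhanced analysis. The enum values, the grouping function, and the example barriers are illustrative assumptions rather than part of any standard; the point is simply that each brainstormed barrier gets both a function (preventive or mitigative) and a status (failed, not used, did not exist):

```python
from collections import defaultdict
from dataclasses import dataclass
from enum import Enum


class Function(Enum):
    PREVENTIVE = "preventive"      # acts before the top event
    MITIGATIVE = "mitigative"      # acts after the top event


class Status(Enum):
    FAILED = "failed"
    NOT_USED = "were not used"
    DID_NOT_EXIST = "did not exist"


@dataclass
class Barrier:
    name: str
    function: Function
    status: Status


def group_findings(barriers: list[Barrier]) -> dict[tuple[Function, Status], list[str]]:
    """Group brainstormed barriers into the six enhanced-analysis buckets."""
    buckets: dict[tuple[Function, Status], list[str]] = defaultdict(list)
    for barrier in barriers:
        buckets[(barrier.function, barrier.status)].append(barrier.name)
    return buckets


# Hypothetical findings from a brainstorm, for illustration only.
findings = [
    Barrier("Second-person verification of line clearance", Function.PREVENTIVE, Status.NOT_USED),
    Barrier("Barcoding of components", Function.PREVENTIVE, Status.DID_NOT_EXIST),
    Barrier("Deviation response procedure", Function.MITIGATIVE, Status.FAILED),
]

for (function, status), names in group_findings(findings).items():
    print(f"{function.value} barriers that {status.value}: {', '.join(names)}")
```

A structure like this also supports the practical steps described below, since the same records can carry the technical, human, or organizational type and feed the balance check between preventive and mitigative measures.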

Integration with Risk Assessment

These barriers are the same as the current controls evaluated in risk assessment, a key element of a wide variety of risk assessment tools. The optimal approach balances preventive and mitigative barriers without relying on just one type. Some companies favor prevention by placing high confidence in their systems and practices, while others emphasize mitigation through reactive policies; neither approach alone is advisable, since each results in over-reliance on one type of barrier.

Practical Application

When conducting barrier analysis as part of root cause investigation:

  1. Identify all relevant barriers that were supposed to protect against the incident
  2. Classify each barrier as preventive or mitigative based on its intended function
  3. Determine the barrier type: technical, human, or organizational
  4. Assess barrier status: failed, not used, or did not exist
  5. Evaluate the balance between preventive and mitigative measures
  6. Develop corrective actions that address gaps in both preventive and mitigative barriers

This comprehensive approach to barrier analysis provides a more nuanced understanding of how incidents occur and how they can be prevented or their consequences minimized in the future. By understanding both the preventive and mitigative functions of barriers, organizations can develop more robust risk management strategies that address threats at multiple points in the incident timeline.

The Role of the HACCP

Reading Strukmyer LLC’s recent FDA Warning Letter, and reflecting back to last year’s Colgate-Palmolive/Tom’s of Maine, Inc. Warning Letter, has me thinking about the common language in both warning letters, where the FDA asks for “A comprehensive, independent assessment of the design and control of your firm’s manufacturing operations, with a detailed and thorough review of all microbiological hazards.”

It is hard to read that as anything other than a clarion call to use a HACCP.

If that isn’t a call for a HACCP, I don’t know what is. Given the FDA’s rich history and connection to the tool, it is difficult to imagine them thinking of any other tool. Sure, I could invent about seven other ways to do that, but why bother when there is a great tool, full of powerful uses, that the regulators pretty much have in their DNA?

The Evolution of HACCP in FDA Regulation: A Journey to Enhanced Food Safety

The Hazard Analysis and Critical Control Points (HACCP) system has a fascinating history that is deeply intertwined with FDA regulations. Initially developed in the 1960s by NASA, the Pillsbury Company, and the U.S. Army, HACCP was designed to ensure safe food for space missions. This pioneering collaboration aimed to prevent food safety issues by identifying and controlling critical points in food processing. The success of HACCP in space missions soon led to its application in commercial food production.

In the 1970s, Pillsbury applied HACCP to its commercial operations, driven by incidents such as the contamination of farina with glass. This prompted Pillsbury to adopt HACCP more widely across its production lines. A significant event in 1971 was a panel discussion at the National Conference on Food Protection, which led to the FDA’s involvement in promoting HACCP for food safety inspections. The FDA recognized the potential of HACCP to enhance food safety standards and began to integrate it into its regulatory framework.

As HACCP gained prominence as a food safety standard in the 1980s and 1990s, the National Advisory Committee on Microbiological Criteria for Foods (NACMCF) refined its principles. The committee added preliminary steps and solidified the seven core principles of HACCP, which include hazard analysis, critical control points identification, establishing critical limits, monitoring procedures, corrective actions, verification procedures, and record-keeping. This structured approach helped standardize HACCP implementation across different sectors of the food industry.

A major milestone in the history of HACCP was the implementation of the Pathogen Reduction/HACCP Systems rule by the USDA’s Food Safety and Inspection Service (FSIS) in 1996. This rule mandated HACCP in meat and poultry processing facilities, marking a significant shift towards preventive food safety measures. By the late 1990s, HACCP had become a requirement across much of the food industry, with some exceptions for smaller operations. This widespread adoption underscored the importance of proactive food safety management.

The Food Safety Modernization Act (FSMA) of 2011 further emphasized preventive controls, including HACCP, to enhance food safety across the industry. FSMA shifted the focus from responding to food safety issues to preventing them, aligning with the core principles of HACCP. Today, HACCP remains a cornerstone of food safety management globally, with ongoing training and certification programs available to ensure compliance with evolving regulations. The FDA continues to support HACCP as part of its broader efforts to protect public health through safe food production and processing practices. As the food industry continues to evolve, the principles of HACCP remain essential for maintaining high standards of food safety and quality.

Why is a HACCP Useful in Biotech Manufacturing?

The HACCP seeks to map a process – the manufacturing process, one cleanroom, a series of interlinked cleanrooms, or the water system – and to identify hazards (points of contamination) by understanding the personnel, material, waste, and other parts of the operational flow. These hazards are assessed at each step in the process for their likelihood and severity. Mitigations are taken to reduce the risk the hazard presents (a “contamination control point”). Where a risk cannot be adequately minimized (in terms of its likelihood of occurrence, its severity, or both), this “contamination control point” should be subject to a form of detection so that the facility understands whether the microbial hazard was potentially present at a given time, for a given operation. In other words, the “critical control point” provides a reasoned basis for selecting a monitoring location. For aseptic processing, for example, the target is elimination, even if this cannot be absolutely demonstrated.

The HACCP approach can readily be applied to pharmaceutical manufacturing, where it proves very useful for microbial control. Alternative risk tools exist, such as Failure Modes and Effects Analysis, but for microbial control the HACCP approach is the better fit.

The HACCP is a core part of an effective layers of control analysis.

Conducting a HACCP

HACCP provides a systematic approach to identifying and controlling potential hazards throughout the production process.

Step 1: Conduct a Hazard Analysis

  1. List All Process Steps: Begin by detailing every step involved in your biotech manufacturing process, from raw material sourcing to final product packaging. Make sure to walk down the process thoroughly.
  2. Identify Potential Hazards: At each step, identify potential biological, chemical, and physical hazards. Biological hazards might include microbial contamination, while chemical hazards could involve chemical impurities or inappropriate reagents. Physical hazards might include particulates or inappropriate packaging materials.
  3. Evaluate Severity and Likelihood: Assess the severity and likelihood of each identified hazard. This evaluation helps prioritize which hazards require immediate attention (see the scoring sketch after this list).
  4. Determine Preventive Measures: Develop strategies to control significant hazards. This might involve adjusting process conditions, improving cleaning protocols, or enhancing monitoring systems.
  5. Document Justifications: Record the rationale behind including or excluding hazards from your analysis. This documentation is essential for transparency and regulatory compliance.
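
As a sketch of the prioritization in step 3, the snippet below scores each hazard on simple 1-to-3 severity and likelihood scales. The scales, the threshold, and the hazards themselves are assumptions for illustration, not a prescribed scheme; most firms define their own scales and significance criteria:

```python
from dataclasses import dataclass


@dataclass
class Hazard:
    step: str          # process step where the hazard can occur
    description: str
    severity: int      # illustrative scale: 1 (minor) to 3 (critical)
    likelihood: int    # illustrative scale: 1 (rare) to 3 (frequent)

    @property
    def risk_score(self) -> int:
        # Simple severity x likelihood scoring for prioritization.
        return self.severity * self.likelihood


# Hypothetical hazards for illustration only.
hazards = [
    Hazard("Buffer preparation", "Microbial contamination of buffer", severity=3, likelihood=2),
    Hazard("Raw material receipt", "Wrong reagent accepted", severity=3, likelihood=1),
    Hazard("Final packaging", "Particulates from packaging material", severity=2, likelihood=1),
]

# Rank hazards so the most significant ones get preventive measures first (step 4).
for hazard in sorted(hazards, key=lambda h: h.risk_score, reverse=True):
    priority = "significant" if hazard.risk_score >= 4 else "acceptable with routine controls"
    print(f"{hazard.step}: {hazard.description} -> score {hazard.risk_score} ({priority})")
```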

Step 2: Determine Critical Control Points (CCPs)

  1. Identify Control Points: Any step where biological, chemical, or physical factors can be controlled is considered a control point.
  2. Determine CCPs: Use a decision tree to identify which control points are critical; a sketch of this decision logic follows the comparison below. A CCP is a step at which control can be applied and is essential to prevent or eliminate a hazard or reduce it to an acceptable level.
  3. Establish Critical Limits: For each CCP, define the maximum or minimum values to which parameters must be controlled. These limits ensure that hazards are effectively managed.
Control Points:

  • Process steps where a control measure (mitigation activity) is necessary to prevent the hazard from occurring
  • Are not necessarily critical control points (CCPs)
  • Determined from the risk associated with the hazard

Critical Control Points:

  • Process steps where both control and monitoring are necessary to assure product quality and patient safety
  • Are also control points
  • Determined through a decision tree
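
The decision tree referenced in step 2 is usually a variant of the four-question Codex-style tree. The sketch below paraphrases that logic in Python; the question wording is simplified (in particular, the handling of steps with no control measure) and the example answers are hypothetical, so treat it as an illustration of the flow rather than a definitive implementation:

```python
def is_ccp(q1_control_measure_exists: bool,
           q2_step_eliminates_or_reduces_hazard: bool,
           q3_contamination_could_exceed_acceptable: bool,
           q4_later_step_controls_hazard: bool) -> bool:
    """Paraphrased Codex-style decision tree for one hazard at one process step."""
    if not q1_control_measure_exists:
        # No control measure at this step: control must be provided elsewhere
        # (or the step/process modified), so this step is not a CCP.
        return False
    if q2_step_eliminates_or_reduces_hazard:
        # The step itself eliminates the hazard or reduces it to an acceptable level.
        return True
    if not q3_contamination_could_exceed_acceptable:
        # The hazard cannot reach unacceptable levels at this step.
        return False
    # Contamination could exceed acceptable levels: the step is a CCP only if
    # no subsequent step will eliminate or adequately reduce the hazard.
    return not q4_later_step_controls_hazard


# Hypothetical example: bioburden control ahead of sterile filtration.
print(is_ccp(
    q1_control_measure_exists=True,
    q2_step_eliminates_or_reduces_hazard=False,
    q3_contamination_could_exceed_acceptable=True,
    q4_later_step_controls_hazard=False,
))  # True: both control and monitoring are needed at this step
```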

Step 3: Establish Monitoring Procedures

  1. Develop Monitoring Plans: Create detailed plans for monitoring each CCP. This includes specifying what to monitor, how often, and who is responsible.
  2. Implement Monitoring Tools: Use appropriate tools and equipment to monitor CCPs effectively. This might include temperature sensors, microbial testing kits, or chemical analyzers.
  3. Record Monitoring Data: Ensure that all monitoring data is accurately recorded and stored for future reference (a minimal limit-check sketch follows this list).
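
As a small illustration of how monitoring connects to critical limits (and hands off to the corrective actions in Step 4), the sketch below compares a single monitoring result against the limits defined for a CCP. The CCP, the limits, and the reading are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class CriticalLimit:
    ccp: str
    parameter: str
    minimum: float
    maximum: float


@dataclass
class MonitoringRecord:
    limit: CriticalLimit
    value: float
    recorded_at: datetime
    recorded_by: str

    def within_limits(self) -> bool:
        return self.limit.minimum <= self.value <= self.limit.maximum


# Hypothetical CCP: cold storage of an intermediate, temperature monitored in degrees C.
limit = CriticalLimit(ccp="Intermediate cold storage", parameter="temperature (C)",
                      minimum=2.0, maximum=8.0)
record = MonitoringRecord(limit=limit, value=9.4,
                          recorded_at=datetime.now(timezone.utc), recorded_by="Operator A")

if record.within_limits():
    print(f"{limit.ccp}: {record.value} {limit.parameter} within limits; record retained.")
else:
    # Out of limits: trigger the corrective action defined in Step 4 and document it.
    print(f"{limit.ccp}: {record.value} outside {limit.minimum}-{limit.maximum}; "
          "initiate corrective action and assess affected material.")
```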

Step 4: Establish Corrective Actions

  1. Define Corrective Actions: Develop procedures for when monitoring indicates that a CCP is not within its critical limits. These actions should restore control and prevent hazards.
  2. Proceduralize: You are establishing alternative control strategies here, so make sure they are appropriately verified and controlled by process and procedure within the quality system.
  3. Train Staff: Ensure that all personnel understand and can implement corrective actions promptly.

Step 5: Establish Verification Procedures

  1. Regular Audits: Conduct regular audits to verify that the HACCP system is functioning correctly. This includes reviewing monitoring data and observing process operations.
  2. Validation Studies: Perform validation studies to confirm that CCPs are effective in controlling hazards.
  3. Continuous Improvement: Use audit findings to improve the HACCP system over time.

Step 6: Establish Documentation and Record-Keeping

  1. Maintain Detailed Records: Keep comprehensive records of all aspects of the HACCP system, including hazard analyses, CCPs, monitoring data, corrective actions, and verification activities.
  2. Ensure Traceability: Use documentation to ensure traceability throughout the production process, facilitating quick responses to any safety issues.

Step 7: Implement and Review the HACCP Plan

  1. Implement the Plan: Ensure that all personnel involved in biotech manufacturing understand and follow the HACCP plan.
  2. Regular Review: Regularly review and update the HACCP plan to reflect changes in processes, new hazards, or lessons learned from audits and incidents.