The Minimal Viable Risk Assessment Team

Ineffective quality systems revolve around superficial risk management. The core issue? Teams designed for compliance as a check-the-box activity rather than cognitive rigor. These gaps create systematic blind spots that no checklist can fix. The solution isn’t more assessors; it’s fewer, more competent ones anchored in science, patient impact, and lived process reality.

Core Roles: The Non-Negotiables

1. Process Owner: The Reality Anchor

Not a title, a lived experience. This role requires daily engagement with the process, not just signature authority. Superficial ownership breeds unjustified assumptions that go unchallenged.

2. ASTM E2500 Molecule Steward: The Patient’s Advocate

Beyond “SME”—the protein whisperer. This role demands provable knowledge of degradation pathways, critical quality attributes (CQAs), and patient impact. Contrast this with generic “subject matter experts” who lack molecule-specific insights. Without this anchor, assessments overlook patient-centric failure modes.

3. Technical System Owner: The Engineer

The value of the Technical System Owner—often the engineer—lies in their unique ability to bridge the worlds of design, operations, and risk control throughout the pharmaceutical lifecycle. Far from being a mere custodian of equipment, the system owner is the architect who understands not just how a system is built, but how it behaves under real-world conditions and how it integrates with the broader manufacturing program.

4. Quality: The Cognitive Warper

Forget the auditor—this is your bias disruptor. Quality’s value lies in forcing cross-functional dialogue, challenging tacit assumptions, and documenting debates. When Quality fails to interrogate assumptions, hazards go unidentified. Their real role: Mandate “assumption logs” where every “We’ve always done it this way” must produce data or die.

[Figure: Venn diagram of three overlapping circles, one per core role.

Process Owner – The Reality Anchor: daily engagement; lived experience.

Molecule Steward – The Patient’s Advocate: molecule-specific insights; patient-centric.

Technical System Owner – The Engineer: the how’s; technical understanding.

At the center, where all three circles overlap, sits Quality – The Cognitive Warper: bias disruptor; interrogates assumptions. The diagram emphasizes that quality emerges from the intersection of these roles through cognitive diversity.]

Team Design as Knowledge Preservation

Team design in the context of risk management is fundamentally an act of knowledge preservation, not just an exercise in filling seats or meeting compliance checklists. Every effective risk team is a living repository of the organization’s critical process insights, technical know-how, and nuanced operational experience. When teams are thoughtfully constructed to include individuals with deep, hands-on familiarity—process owners, technical system engineers, molecule stewards, and quality integrators—they collectively safeguard the hard-won lessons and tacit knowledge that are so often lost when people move on or retire. This approach ensures that risk assessments are not just theoretical exercises but are grounded in the practical realities that only those with lived experience can provide.

Combating organizational forgetting requires more than documentation or digital knowledge bases; it demands intentional, cross-functional team design that fosters active knowledge transfer. When a risk team brings together diverse experts who routinely interact, challenge each other’s assumptions, and share context from their respective domains, they create a dynamic environment where critical information is surfaced, scrutinized, and retained. This living dialogue is far more effective than static records, as it allows for the continuous updating and contextualization of knowledge in response to new challenges, regulatory changes, and operational shifts. In this way, team design becomes a strategic defense against the silent erosion of expertise that can leave organizations exposed to avoidable risks.

Ultimately, investing in team design as a knowledge preservation strategy is about building organizational resilience. It means recognizing that the greatest threats often arise not from what is known, but from what is forgotten or never shared. By prioritizing teams that embody both breadth and depth of experience, organizations create a robust safety net—one that catches subtle warning signs, adapts to evolving risks, and ensures that critical knowledge endures beyond any single individual’s tenure. This is how organizations move from reactive problem-solving to proactive risk management, turning collective memory into a competitive advantage and a foundation for sustained quality.

Call to Action: Build the Risk Team

Moving from compliance theater to true protection starts with assembling a team designed for cognitive rigor, knowledge depth, and psychological safety.

Start with a Clear Charter, Not a Checklist

An excellent risk team exists to frame, analyze, and communicate uncertainty so that the business can make science-based, patient-centered decisions. Assigning authorities and accountabilities is a leadership duty, not an afterthought. Before naming people, write down:

  • the decisions the team must enable,
  • the degree of formality those decisions demand, and
  • the resources (time, data, tools) management will guarantee.

Without this charter, even star performers will default to box-ticking.

Fill Four Core Seats – And Prove Competence

ICH Q9 is blunt: risk work should be done by interdisciplinary teams that include experts from quality, engineering, operations and regulatory affairs. ASTM E2500 translates that into a requirement for documented subject-matter experts (SMEs) who own critical knowledge throughout the lifecycle. Map those expectations onto four non-negotiable roles.

  • Process Owner – The Reality Anchor: This individual has lived the operation in the last 90 days, not just signed SOPs. They carry the authority to change methods, budgets and training, and enough hands-on credibility to spot when a theoretical control will never work on the line. Authentic owners dismantle assumptions by grounding every risk statement in current shop-floor facts.
  • Molecule Steward – The Patient’s Advocate: Too often “SME” is shorthand for “the person available.” The molecule steward is different: a scientist who understands how the specific product fails and can translate deviations into patient impact. When temperature drifts two degrees during freeze-drying, the steward can explain whether a monoclonal antibody will aggregate or merely lose a day of shelf life. Without this anchor, the team inevitably under-scores hazards that never appear in a generic FMEA template.
  • Technical System Owner – The Engineering Interpreter: Equipment does not care about meeting minutes; it obeys physics. The system owner must articulate functional requirements, design limits and integration logic. Where a tool-focused team may obsess over gasket leaks, the system owner points out that a single-loop PLC has no redundancy and that a brief voltage dip could push an entire batch outside critical parameters—a classic case of method over physics.
  • Quality Integrator – The Bias Disruptor: Quality’s mission is to force cross-functional dialogue and preserve evidence. That means writing assumption logs, challenging confirmation bias and ensuring that dissenting voices are heard. The quality lead also maintains the knowledge repository so future teams are not condemned to repeat forgotten errors.

Secure Knowledge Accessibility, Not Just Possession

A credentialed expert who cannot be reached when the line is down at 2 a.m. is as useful as no expert at all. Conduct a Knowledge Accessibility Index audit before every major assessment.
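One way to make the Knowledge Accessibility Index concrete is a simple scoring sketch. The dimensions, scale, and threshold below are illustrative assumptions, not a published standard:

```python
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    role: str
    reachable_off_hours: int   # 0 = no, 1 = with delay, 2 = on call
    documented_backup: int     # 0 = none, 1 = partial, 2 = trained deputy
    knowledge_captured: int    # 0 = tacit only, 1 = partial, 2 = current SOPs/reports

def accessibility_index(e: Expert) -> float:
    """Average the three 0-2 dimensions onto a 0-1 scale."""
    return (e.reachable_off_hours + e.documented_backup + e.knowledge_captured) / 6

def audit(team: list[Expert], threshold: float = 0.5) -> list[str]:
    """Return roles whose knowledge is possessed but not accessible."""
    return [e.role for e in team if accessibility_index(e) < threshold]

# Hypothetical team: the steward's knowledge is a single point of failure.
team = [
    Expert("A. Rivera", "Molecule Steward", 0, 0, 1),
    Expert("B. Chen", "Technical System Owner", 2, 2, 2),
]
print(audit(team))  # ['Molecule Steward']
```

The point is not the arithmetic but the conversation it forces: every role below the threshold needs a backup plan before the assessment begins.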

Embed Psychological Safety to Unlock the Team’s Brainpower

No amount of SOPs compensates for a culture that punishes bad news. Staff speak up only when leaders are approachable, intolerant of blame and transparent about their own fallibility. Leaders must therefore:

  • Invite dissent early: begin meetings with “What might we be overlooking?”
  • Model vulnerability: share personal errors and how the system, not individuals, failed.
  • Reward candor: recognize the engineer who halted production over a questionable trend.

Psychological safety converts silent observers into active risk sensors.

Choose Methods Last, After Understanding the Science

Excellent teams let the problem dictate the tool, not vice versa. They build a fault tree or block diagram first, then decide whether FMEA, FTA or bow-tie analysis will illuminate the weak spot. If the team defaults to a method because “it’s in the SOP,” stop and reassess. Tool selection is a decision, not a reflex.

Provide Time and Resources Proportionate to Uncertainty

ICH Q9 asks decision-makers to ensure resources match the risk question. Complex, high-uncertainty topics demand longer workshops, more data and external review, while routine changes may only need a rapid check. Resist the urge to shoehorn every assessment into a one-hour meeting because calendars are overloaded.

Institutionalize Learning Loops

Great teams treat every assessment as both analysis and experiment. They:

  1. Track prediction accuracy: did the “medium”-ranked hazard occur?
  2. Compare expected versus actual detectability: were controls as effective as assumed?
  3. Feed insights into updated templates and training so the next team starts smarter.

The loop closes when the knowledge base evolves at the same pace as the plant.
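The three steps above can be sketched as a simple comparison of predicted versus observed outcomes. The record structure and metrics are illustrative assumptions, not a prescribed format:

```python
# Each record: (hazard, predicted_rank, occurred, detected_by_planned_control)
predictions = [
    ("Filter breach", "medium", True, False),
    ("Label mix-up", "low", False, True),
    ("Sensor drift", "high", True, True),
]

def prediction_accuracy(records):
    """Fraction of hazards ranked medium-or-above that actually occurred."""
    flagged = [r for r in records if r[1] in ("medium", "high")]
    if not flagged:
        return None
    return sum(1 for r in flagged if r[2]) / len(flagged)

def detectability_gaps(records):
    """Hazards that occurred but slipped past the planned control."""
    return [r[0] for r in records if r[2] and not r[3]]

print(prediction_accuracy(predictions))  # 1.0 -- both flagged hazards occurred
print(detectability_gaps(predictions))   # ['Filter breach'] -> update the template
```

Each gap found here feeds step 3: the next team's template starts with the lesson already encoded.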

When to Escalate – The Abort-Mission Rule

If a risk scenario involves patient safety and novel technology, and the molecule steward is unavailable, stop. The assessment waits until a proper team is in the room. Rushing ahead satisfies schedules, not safety.

Conclusion

Excellence in risk management is rarely about adding headcount; it is about curating brains with complementary lenses and giving them the culture, structure and time to think. Build that environment and the monsters stay on the storyboard, never in the plant.

The Pre-Mortem

A pre-mortem is a proactive risk management exercise that enables pharmaceutical teams to anticipate and mitigate failures before they occur. This tool can transform compliance from a reactive checklist into a strategic asset for safeguarding product quality.


Pre-Mortems in Pharmaceutical Quality Systems

In GMP environments, where deviations in drug substance purity or drug product stability can cascade into global recalls, pre-mortems provide a structured framework to challenge assumptions. For example, a team developing a monoclonal antibody might hypothesize that aggregation occurred during drug substance purification due to inadequate temperature control in bioreactors. By contrast, a tablet manufacturing team might explore why dissolution specifications failed because of inconsistent API particle size distribution. These exercises align with ICH Q9’s requirement for systematic hazard analysis and ICH Q10’s emphasis on knowledge management, forcing teams to document tacit insights about process boundaries and failure modes.

Pre-mortems excel at identifying “unknown unknowns” through creative thinking. Their value lies in uncovering risks traditional assessments miss. As a tool, the pre-mortem is well suited to flagging areas that warrant a deeper tool, such as an FMEA. In practice, pre-mortems and FMEA are synergistic: a layered approach satisfies ICH Q9’s requirement for both creative hazard identification and structured risk evaluation, turning hypothetical failures into validated control strategies.

By combining pre-mortems’ exploratory power with FMEA’s rigor, teams can address both systemic and technical risks, ensuring compliance while advancing operational resilience.


Implementing Pre-Mortems

1. Scenario Definition and Stakeholder Engagement

Begin by framing the hypothetical failure as the risk question. For drug substances, this might involve declaring, “The API batch was rejected due to genotoxic impurity levels exceeding ICH M7 limits.” For drug products, consider, “Lyophilized vials failed sterility testing due to vial closure integrity breaches.” Assemble a team spanning technical operations, quality control, and regulatory affairs to ensure diverse viewpoints.

2. Failure Mode Elicitation

To overcome groupthink biases in traditional brainstorming, teams should begin with brainwriting—a silent, written idea-generation technique. The prompt is a request to list reasons behind the risk question, such as “List reasons why the API batch failed impurity specifications”. Participants anonymously write risks on structured templates for 10–15 minutes, ensuring all experts contribute equally.

The collected ideas are then synthesized into a fishbone (Ishikawa) diagram, categorizing causes into relevant branches using the 6M technique (Man, Machine, Material, Method, Measurement, Mother Nature/Environment).

This method ensures comprehensive risk identification while maintaining traceability for regulatory audits.
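The sorting of brainwriting cards onto 6M branches can be sketched as follows; the branch assignments and example causes are illustrative, and in practice a facilitator makes the call during the session:

```python
# The six classic fishbone branches.
SIX_M = ["Man", "Machine", "Material", "Method", "Measurement", "Mother Nature"]

def fishbone(assignments):
    """Group (branch, cause) pairs into the six branches."""
    diagram = {branch: [] for branch in SIX_M}
    for branch, cause in assignments:
        diagram[branch].append(cause)
    return diagram

# Hypothetical brainwriting output for a failed-impurity risk question.
cards = [
    ("Material", "Starting material lot change not assessed"),
    ("Machine", "Reactor jacket temperature overshoot"),
    ("Method", "Hold time exceeded between steps"),
    ("Man", "New operator, training incomplete"),
]
for branch, causes in fishbone(cards).items():
    if causes:
        print(f"{branch}: {causes}")
```

Empty branches are as informative as full ones: a diagram with nothing under Measurement may mean the team has a detection blind spot.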

3. Risk Prioritization and Control Strategy Development

Risks identified during the pre-mortem are evaluated using a severity-probability-detectability matrix, structured similarly to Failure Mode and Effects Analysis (FMEA).
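A minimal sketch of such a matrix, collapsed into an FMEA-style risk priority number (RPN); the 1–5 scales, example risks, and scores are illustrative assumptions:

```python
def rpn(severity: int, probability: int, detectability: int) -> int:
    """FMEA-style score; higher detectability = harder to detect."""
    for v in (severity, probability, detectability):
        if not 1 <= v <= 5:
            raise ValueError("scores must be 1-5")
    return severity * probability * detectability

# Hypothetical pre-mortem outputs: (severity, probability, detectability)
risks = {
    "Genotoxic impurity above ICH M7 limit": (5, 2, 4),
    "Vial closure integrity breach": (5, 3, 2),
    "Cosmetic vial defect": (1, 3, 1),
}
ranked = sorted(risks, key=lambda k: rpn(*risks[k]), reverse=True)
print(ranked[0])  # highest-priority risk drives the control strategy
```

The ranking, not the absolute number, is what matters: it tells the team where to spend its mitigation budget first.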

4. Integration into Pharmaceutical Quality Systems

Mitigation plans are formalized in control strategies and other quality system mechanisms.


Case Study: Preventing Drug Substance Oxidation in a Small Molecule API

A company developing an oxidation-prone API conducted a pre-mortem anticipating discoloration and potency loss. The exercise revealed:

  • Drug substance risk: Inadequate nitrogen sparging during final isolation led to residual oxygen in crystallization vessels.
  • Drug product risk: Blister packaging with insufficient moisture barrier exacerbated degradation.

Mitigations included installing dissolved oxygen probes in purification tanks and switching to aluminum-foil blisters with desiccants. Process validation batches showed a 90% reduction in oxidation byproducts, avoiding a potential FDA Postmarketing Commitment.

The Role of the HACCP

Reading Strukmyer LLC’s recent FDA Warning Letter, and reflecting on last year’s Colgate-Palmolive/Tom’s of Maine, Inc. Warning Letter, has me thinking about the common language in both: the FDA asks for “A comprehensive, independent assessment of the design and control of your firm’s manufacturing operations, with a detailed and thorough review of all microbiological hazards.”

It is hard to read that as anything other than a clarion call to use a HACCP.

Given the FDA’s rich history with the tool, it is difficult to imagine them having any other in mind. Sure, I could invent half a dozen other ways to meet that request, but why bother when a great tool, full of powerful uses, is waiting to be used, one the regulators practically have in their DNA.

The Evolution of HACCP in FDA Regulation: A Journey to Enhanced Food Safety

The Hazard Analysis and Critical Control Points (HACCP) system has a fascinating history that is deeply intertwined with FDA regulations. Initially developed in the 1960s by NASA, the Pillsbury Company, and the U.S. Army, HACCP was designed to ensure safe food for space missions. This pioneering collaboration aimed to prevent food safety issues by identifying and controlling critical points in food processing. The success of HACCP in space missions soon led to its application in commercial food production.

In the 1970s, Pillsbury applied HACCP to its commercial operations, driven by incidents such as the contamination of farina with glass. This prompted Pillsbury to adopt HACCP more widely across its production lines. A significant event in 1971 was a panel discussion at the National Conference on Food Protection, which led to the FDA’s involvement in promoting HACCP for food safety inspections. The FDA recognized the potential of HACCP to enhance food safety standards and began to integrate it into its regulatory framework.

As HACCP gained prominence as a food safety standard in the 1980s and 1990s, the National Advisory Committee on Microbiological Criteria for Foods (NACMCF) refined its principles. The committee added preliminary steps and solidified the seven core principles of HACCP, which include hazard analysis, critical control points identification, establishing critical limits, monitoring procedures, corrective actions, verification procedures, and record-keeping. This structured approach helped standardize HACCP implementation across different sectors of the food industry.

A major milestone in the history of HACCP was the implementation of the Pathogen Reduction/HACCP Systems rule by the USDA’s Food Safety and Inspection Service (FSIS) in 1996. This rule mandated HACCP in meat and poultry processing facilities, marking a significant shift towards preventive food safety measures. By the late 1990s, HACCP became a requirement for all food businesses, with some exceptions for smaller operations. This widespread adoption underscored the importance of proactive food safety management.

The Food Safety Modernization Act (FSMA) of 2011 further emphasized preventive controls, including HACCP, to enhance food safety across the industry. FSMA shifted the focus from responding to food safety issues to preventing them, aligning with the core principles of HACCP. Today, HACCP remains a cornerstone of food safety management globally, with ongoing training and certification programs available to ensure compliance with evolving regulations. The FDA continues to support HACCP as part of its broader efforts to protect public health through safe food production and processing practices. As the food industry continues to evolve, the principles of HACCP remain essential for maintaining high standards of food safety and quality.

Why is a HACCP Useful in Biotech Manufacturing

The HACCP seeks to map a process – the manufacturing process, one cleanroom, a series of interlinked cleanrooms, or the water system – and to identify hazards (points of contamination) by understanding the personnel, material, waste, and other parts of the operational flow. These hazards are assessed at each step in the process for their likelihood and severity. Mitigations are taken to reduce the risk the hazard presents (a “contamination control point”). Where a risk cannot be adequately minimized (in terms of its likelihood of occurrence, the severity of its nature, or both), this control point should be subject to a form of detection so that the facility understands whether the microbial hazard was potentially present at a given time, for a given operation. In other words, the “critical control point” provides a reasoned basis for selecting a monitoring location. For aseptic processing, for example, the target is elimination, even if this cannot be absolutely demonstrated.

The HACCP approach can readily be applied to pharmaceutical manufacturing, where it proves very useful for microbial control. Although alternative risk tools exist, such as Failure Modes and Effects Analysis (FMEA), HACCP is better suited to microbial control because it follows the operational flow – where contamination can enter and move – rather than cataloguing component failure modes.

HACCP is also a core part of an effective layers-of-control analysis.

Conducting a HACCP

HACCP provides a systematic approach to identifying and controlling potential hazards throughout the production process.

Step 1: Conduct a Hazard Analysis

  1. List All Process Steps: Begin by detailing every step involved in your biotech manufacturing process, from raw material sourcing to final product packaging. Make sure to walk down the process thoroughly.
  2. Identify Potential Hazards: At each step, identify potential biological, chemical, and physical hazards. Biological hazards might include microbial contamination, while chemical hazards could involve chemical impurities or inappropriate reagents. Physical hazards might include particulates or inappropriate packaging materials.
  3. Evaluate Severity and Likelihood: Assess the severity and likelihood of each identified hazard. This evaluation helps prioritize which hazards require immediate attention.
  4. Determine Preventive Measures: Develop strategies to control significant hazards. This might involve adjusting process conditions, improving cleaning protocols, or enhancing monitoring systems.
  5. Document Justifications: Record the rationale behind including or excluding hazards from your analysis. This documentation is essential for transparency and regulatory compliance.

Step 2: Determine Critical Control Points (CCPs)

  1. Identify Control Points: Any step where biological, chemical, or physical factors can be controlled is considered a control point.
  2. Determine CCPs: Use a decision tree to identify which control points are critical. A CCP is a step at which control can be applied and is essential to prevent or eliminate a hazard or reduce it to an acceptable level.
  3. Establish Critical Limits: For each CCP, define the maximum or minimum values to which parameters must be controlled. These limits ensure that hazards are effectively managed.
Control Points:

  • Process steps where a control measure (mitigation activity) is necessary to prevent the hazard from occurring
  • Are not necessarily critical control points (CCPs)
  • Determined from the risk associated with the hazard

Critical Control Points:

  • Process steps where both control and monitoring are necessary to assure product quality and patient safety
  • Are also control points
  • Determined through a decision tree
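The decision tree can be sketched as a small function. This follows the classic Codex-style four-question logic; the question wording and the examples are simplified assumptions:

```python
def is_ccp(q1_control_exists: bool,
           q2_step_designed_to_eliminate: bool,
           q3_hazard_can_exceed_acceptable: bool,
           q4_later_step_controls: bool) -> bool:
    """Return True if the step is a critical control point."""
    if not q1_control_exists:
        return False          # modify the step or process instead
    if q2_step_designed_to_eliminate:
        return True           # e.g., sterile filtration, terminal sterilization
    if not q3_hazard_can_exceed_acceptable:
        return False          # hazard not significant at this step
    return not q4_later_step_controls  # CCP only if nothing downstream catches it

# Sterile filtration: designed to eliminate the microbial hazard -> CCP
print(is_ccp(True, True, False, False))  # True
# Buffer prep with a downstream bioburden-reduction step -> not a CCP
print(is_ccp(True, False, True, True))   # False
```

Walking each control point through these questions, and recording the answers, gives the traceable rationale auditors expect for why a step was or was not designated a CCP.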

Step 3: Establish Monitoring Procedures

  1. Develop Monitoring Plans: Create detailed plans for monitoring each CCP. This includes specifying what to monitor, how often, and who is responsible.
  2. Implement Monitoring Tools: Use appropriate tools and equipment to monitor CCPs effectively. This might include temperature sensors, microbial testing kits, or chemical analyzers.
  3. Record Monitoring Data: Ensure that all monitoring data is accurately recorded and stored for future reference.

Step 4: Establish Corrective Actions

  1. Define Corrective Actions: Develop procedures for when monitoring indicates that a CCP is not within its critical limits. These actions should restore control and prevent hazards.
  2. Proceduralize: You are establishing alternative control strategies here, so make sure they are appropriately verified and controlled by process/procedure in the quality system.
  3. Train Staff: Ensure that all personnel understand and can implement corrective actions promptly.

Step 5: Establish Verification Procedures

  1. Regular Audits: Conduct regular audits to verify that the HACCP system is functioning correctly. This includes reviewing monitoring data and observing process operations.
  2. Validation Studies: Perform validation studies to confirm that CCPs are effective in controlling hazards.
  3. Continuous Improvement: Use audit findings to improve the HACCP system over time.

Step 6: Establish Documentation and Record-Keeping

  1. Maintain Detailed Records: Keep comprehensive records of all aspects of the HACCP system, including hazard analyses, CCPs, monitoring data, corrective actions, and verification activities.
  2. Ensure Traceability: Use documentation to ensure traceability throughout the production process, facilitating quick responses to any safety issues.

Step 7: Implement and Review the HACCP Plan

  1. Implement the Plan: Ensure that all personnel involved in biotech manufacturing understand and follow the HACCP plan.
  2. Regular Review: Regularly review and update the HACCP plan to reflect changes in processes, new hazards, or lessons learned from audits and incidents.

Computer Software Assurance Draft

The FDA published on 13-Sep-2022 the long-awaited draft guidance “Computer Software Assurance for Production and Quality System Software,” and you may, based on all the emails and postings, be wondering just how radical a change this is.

It’s not. This guidance is just one big “calm down, people” letter from the agency. They publish this sort of guidance every now and then because we as an industry can sometimes learn the wrong lessons.

This guidance states:

  1. Determine intended use
  2. Perform a risk assessment
  3. Perform activities to the required level

I wrote about this approach in “Risk Based Data Integrity Assessment,” and it has existed in GAMP5 and other approaches for years.

So read the guidance, but don’t panic. You are either following it already or you just need to spend some time getting better at risk assessments and creating some matrix approaches.
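A matrix approach can be as simple as mapping intended use and process risk to an assurance activity. The categories and mapping below are an illustrative assumption in the spirit of the draft guidance, not its literal table:

```python
# Higher process risk -> more rigorous assurance activity.
# Category names are illustrative; adapt to your own procedure.
ASSURANCE = {
    ("direct_impact", "high"): "scripted testing with documented evidence",
    ("direct_impact", "low"): "unscripted/exploratory testing with summary record",
    ("indirect_impact", "high"): "unscripted/exploratory testing with summary record",
    ("indirect_impact", "low"): "vendor assurance plus record of intended use",
}

def assurance_level(intended_use: str, process_risk: str) -> str:
    """Look up the assurance activity for a software function."""
    return ASSURANCE[(intended_use, process_risk)]

print(assurance_level("direct_impact", "high"))
```

The value of writing the matrix down is consistency: two assessors facing the same software function should land on the same level of effort.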

Preliminary Hazard Analysis

The Preliminary Hazard Analysis (PHA) is a risk tool used during initial design and development (hence “preliminary”) to identify systematic hazards that affect the intended function of a design, providing an opportunity to modify requirements and avoid issues later in the design.

Like a fair number of the tools used in risk, the PHA was created by the US Army. ANSI/ASSP Z590.3, “Prevention through Design: Guidelines for Addressing Occupational Hazards and Risks in Design and Redesign Processes,” makes this one of the eight risk assessment tools everyone should know.

Taking the time to perform a PHA early on in the design will speed up the design process and avoid costly mistakes. Any identified hazards that cannot be avoided or eliminated are then controlled so that the risk is reduced to an acceptable level.

PHAs can also be used to examine existing systems, prioritize risk levels and select those systems requiring further study. A single PHA may also be appropriate for simple, less complex systems.

Main steps of PHA

A. Identify Hazards

Like a Structured What-If, the Preliminary Hazard Analysis benefits from an established list of general categories:

  • by the source of risk: raw materials, environmental, equipment, usability and human factors, safety hazards, etc.
  • by consequence: aspects or dimensions of objectives or performance

Based on the established list, a preliminary hazard list is identified which lists the potential, significant hazards associated with a design. The purpose of the preliminary hazard list is to initially identify the most evident or worst-credible hazards that could occur in the system being designed. Such hazards may be inherent to the design or created by the interaction with other systems/environment/etc.

A cross-functional team should be involved in collecting and reviewing the list.

B. Sequence of Events

Once the hazards are identified, the sequence of events that leads from each hazard to various hazardous situations is identified.

C. Hazardous Situation

For each sequence of events, we identify one or more hazardous situations.

D. Impact

For each hazardous situation, we identify one or more outcomes (or harms).

E. Severity and occurrence of the impact

Based on the identified outcomes/harms the severity is determined. An occurrence or probability is determined for each sequence of events that leads from the hazard to the hazardous situation to the outcome.

Based on severity and likelihood of occurrence a risk level is determined.

[Figure: from a single hazard, through sequences of events and hazardous situations, to a variety of harms.]

I tend to favor a 5×5 matrix for a PHA, though some use 3×3, and I’ve even seen 4×5.

Likelihood of occurrence ratings: 1 = Very unlikely, 2 = Unlikely, 3 = Possible, 4 = Likely, 5 = Very likely.

Severity ratings (impact-to-failure scale): 5 = Complete failure, 4 = Maximum tolerable failure, 3 = Maximum anticipated failure, 2 = Minimum anticipated failure, 1 = Negligible.

Risk score = Severity × Likelihood:

  Severity \ Likelihood             1    2    3    4    5
  5 – Complete failure              5   10   15   20   25
  4 – Maximum tolerable failure     4    8   12   16   20
  3 – Maximum anticipated failure   3    6    9   12   15
  2 – Minimum anticipated failure   2    4    6    8   10
  1 – Negligible                    1    2    3    4    5

Very high risk: 15 or greater; High risk: 9–14; Medium risk: 5–8; Low risk: 1–4.
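The matrix reduces to a small function; the bands follow the thresholds given above:

```python
def risk_level(severity: int, likelihood: int) -> str:
    """Score a hazard on the 5x5 PHA matrix: risk = severity x likelihood."""
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("ratings must be 1-5")
    score = severity * likelihood
    if score >= 15:
        return "Very high"
    if score >= 9:
        return "High"
    if score >= 5:
        return "Medium"
    return "Low"

print(risk_level(5, 3))  # Very high -- complete failure, possible
print(risk_level(2, 2))  # Low -- minimum anticipated failure, unlikely
```

Encoding the bands once, rather than reading them off a slide each time, keeps scoring consistent across assessments.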

 

F. Risk Control Measures

Based on the risk level, risk controls are developed and applied. These risk controls help the design team create new requirements that will drive the design.

Ongoing risks should be evaluated for inclusion in the risk register.