Beyond “Knowing Is Half the Battle”

Dr. Valerie Mulholland’s recent exploration of the GI Joe Bias gets to the heart of a fundamental challenge in pharmaceutical quality management: the persistent belief that awareness of cognitive biases is sufficient to overcome them. I find Valerie’s analysis particularly compelling because it connects directly to the practical realities we face when implementing ICH Q9(R1)’s mandate to actively manage subjectivity in risk assessment.

Valerie’s observation that “awareness of a bias does little to prevent it from influencing our decisions” shows us that the GI Joe Bias underlies a critical gap between intellectual understanding and practical application—a gap that pharmaceutical organizations must bridge if they hope to achieve the risk-based decision-making excellence that ICH Q9(R1) demands.

The Expertise Paradox: Why Quality Professionals Are Particularly Vulnerable

Valerie correctly identifies that quality risk management facilitators are often better at spotting biases in others than in themselves. This observation connects to a deeper challenge I’ve previously explored: the fallacy of expert immunity. Our expertise in pharmaceutical quality systems creates cognitive patterns that simultaneously enable rapid, accurate technical judgments while increasing our vulnerability to specific biases.

The very mechanisms that make us effective quality professionals—pattern recognition, schema-based processing, heuristic shortcuts derived from base rate experiences—are the same cognitive tools that generate bias. When I conduct investigations or facilitate risk assessments, my extensive experience with similar events creates expectations and assumptions that can blind me to novel failure modes or unexpected causal relationships. This isn’t a character flaw; it’s an inherent part of how expertise develops and operates.

Valerie’s emphasis on the need for trained facilitators in high-formality QRM activities reflects this reality. External facilitation isn’t just about process management—it’s about introducing cognitive diversity and bias detection capabilities that internal teams, no matter how experienced, cannot provide for themselves. The facilitator serves as a structured intervention against the GI Joe fallacy, embodying the systematic approaches that awareness alone cannot deliver.

From Awareness to Architecture: Building Bias-Resistant Quality Systems

The critical insight from both Valerie’s work and my writing about structured hypothesis formation is that effective bias management requires architectural solutions, not individual willpower. ICH Q9(R1)’s introduction of the “Managing and Minimizing Subjectivity” section represents recognition that regulatory compliance requires systematic approaches to cognitive bias management.

In my post on reducing subjectivity in quality risk management, I identified four strategies that directly address the limitations Valerie highlights about the GI Joe Bias:

  1. Leveraging Knowledge Management: Rather than relying on individual awareness, effective bias management requires systematic capture and application of objective information. When risk assessors can access structured historical data, supplier performance metrics, and process capability studies, they’re less dependent on potentially biased recollections or impressions.
  2. Good Risk Questions: The formulation of risk questions represents a critical intervention point. Well-crafted questions can anchor assessments in specific, measurable terms rather than vague generalizations that invite subjective interpretation. Instead of asking “What are the risks to product quality?”, effective risk questions might ask “What are the potential causes of out-of-specification dissolution results for Product X in the next 6 months based on the last three years of data?”
  3. Cross-Functional Teams: Valerie’s observation that we’re better at spotting biases in others translates directly into team composition strategies. Diverse, cross-functional teams naturally create the external perspective that individual bias recognition cannot provide. The manufacturing engineer, quality analyst, and regulatory specialist bring different cognitive frameworks that can identify blind spots in each other’s reasoning.
  4. Structured Decision-Making Processes: The tools Valerie mentions—PHA, FMEA, Ishikawa, bow-tie analysis—serve as external cognitive scaffolding that guides thinking through systematic pathways rather than relying on intuitive shortcuts that may be biased.

The Formality Framework: When and How to Escalate Bias Management

One of the most valuable aspects of ICH Q9(R1) is its introduction of the formality concept—the idea that different situations require different levels of systematic intervention. Valerie’s article implicitly addresses this by noting that “high formality QRM activities” require trained facilitators. This suggests a graduated approach to bias management that scales intervention intensity with decision importance.

The formality framework should include bias management guidance that organizations can use to determine when and how intensively to apply bias mitigation strategies:

  • Low Formality Situations: Routine decisions with well-understood parameters, limited stakeholders, and reversible outcomes. Basic bias awareness training and standardized checklists may be sufficient.
  • Medium Formality Situations: Decisions involving moderate complexity, uncertainty, or impact. These require cross-functional input, structured decision tools, and documentation of rationales.
  • High Formality Situations: Complex, high-stakes decisions with significant uncertainty, multiple conflicting objectives, or diverse stakeholders. These demand external facilitation, systematic bias checks, and formal documentation of how potential biases were addressed.

This framework acknowledges that the GI Joe fallacy is most dangerous in high-formality situations where the stakes are highest and the cognitive demands greatest. It’s precisely in these contexts that our confidence in our ability to overcome bias through awareness becomes most problematic.

The Cultural Dimension: Creating Environments That Support Bias Recognition

Valerie’s emphasis on fostering humility, encouraging teams to acknowledge that “no one is immune to bias, even the most experienced professionals” connects to my observations about building expertise in quality organizations. Creating cultures that can effectively manage subjectivity requires more than tools and processes; it requires psychological safety that allows bias recognition without professional threat.

I’ve noted in past posts that organizations advancing beyond basic awareness levels demonstrate “systematic recognition of cognitive bias risks” with growing understanding that “human judgment limitations can affect risk assessment quality.” However, the transition from awareness to systematic application requires cultural changes that make bias discussion routine rather than threatening.

This cultural dimension becomes particularly important when we consider the ironic processing effects that Valerie references. When organizations create environments where acknowledging bias is seen as admitting incompetence, they inadvertently increase bias through suppression attempts. Teams that must appear confident and decisive may unconsciously avoid bias recognition because it threatens their professional identity.

The solution is creating cultures that frame bias recognition as professional competence rather than limitation. Just as we expect quality professionals to understand statistical process control or regulatory requirements, we should expect them to understand and systematically address their cognitive limitations.

Practical Implementation: Moving Beyond the GI Joe Fallacy

Building on Valerie’s recommendations for structured tools and systematic approaches, here are some specific implementation strategies that organizations can adopt to move beyond bias awareness toward bias management:

  • Bias Pre-mortems: Before conducting risk assessments, teams explicitly discuss what biases might affect their analysis and establish specific countermeasures. This makes bias consideration routine rather than reactive.
  • Devil’s Advocate Protocols: Systematic assignment of team members to challenge prevailing assumptions and identify information that contradicts emerging conclusions.
  • Perspective-Taking Requirements: Formal requirements to consider how different stakeholders (patients, regulators, operators) might view risks differently from the assessment team.
  • Bias Audit Trails: Documentation requirements that capture not just what decisions were made, but how potential biases were recognized and addressed during the decision-making process.
  • External Review Requirements: For high-formality decisions, mandatory review by individuals who weren’t involved in the initial assessment and can provide fresh perspectives.

These interventions acknowledge that bias management is not about eliminating human judgment—it’s about scaffolding human judgment with systematic processes that compensate for known cognitive limitations.

The Broader Implications: Subjectivity as Systemic Challenge

Valerie’s analysis of the GI Joe Bias connects to broader themes in my work about the effectiveness paradox and the challenges of building rigorous quality systems in an age of pop psychology. The pharmaceutical industry’s tendency to adopt appealing frameworks without rigorous evaluation extends to bias management strategies. Organizations may implement “bias training” or “awareness programs” that create the illusion of progress while failing to address the systematic changes needed for genuine improvement.

The GI Joe Bias serves as a perfect example of this challenge. It’s tempting to believe that naming the bias—recognizing that awareness isn’t enough—somehow protects us from falling into the awareness trap. But the bias is self-referential: knowing about the GI Joe Bias doesn’t automatically prevent us from succumbing to it when implementing bias management strategies.

This is why Valerie’s emphasis on systematic interventions rather than individual awareness is so crucial. Effective bias management requires changing the decision-making environment, not just the decision-makers’ knowledge. It requires building systems, not slogans.

A Call for Systematic Excellence in Bias Management

Valerie’s exploration of the GI Joe Bias provides a crucial call for advancing pharmaceutical quality management beyond the illusion that awareness equals capability. Her work, combined with ICH Q9(R1)’s explicit recognition of subjectivity challenges, creates an opportunity for the industry to develop more sophisticated approaches to cognitive bias management.

The path forward requires acknowledging that bias management is a core competency for quality professionals, equivalent to understanding analytical method validation or process characterization. It requires systematic approaches that scaffold human judgment rather than attempting to eliminate it. Most importantly, it requires cultures that view bias recognition as professional strength rather than weakness.

As I continue to build frameworks for reducing subjectivity in quality risk management and developing structured approaches to decision-making, Valerie’s insights about the limitations of awareness provide essential grounding. The GI Joe Bias reminds us that knowing is not half the battle—it’s barely the beginning.

The real battle lies in creating pharmaceutical quality systems that systematically compensate for human cognitive limitations while leveraging human expertise and judgment. That battle is won not through individual awareness or good intentions, but through systematic excellence in bias management architecture.

What structured approaches has your organization implemented to move beyond bias awareness toward systematic bias management? Share your experiences and challenges as we work together to advance the maturity of risk management practices in our industry.


Meet Valerie Mulholland

Dr. Valerie Mulholland is transforming how our industry thinks about quality risk management. As CEO and Principal Consultant at GMP Services in Ireland, Valerie brings over 25 years of hands-on experience auditing and consulting across biopharmaceutical, pharmaceutical, medical device, and blood transfusion industries throughout the EU, US, and Mexico.

But what truly sets Valerie apart is her unique combination of practical expertise and cutting-edge research. She recently earned her PhD from TU Dublin’s Pharmaceutical Regulatory Science Team, focusing on “Effective Risk-Based Decision Making in Quality Risk Management”. Her groundbreaking research has produced 13 academic papers, with four publications specifically developed to support ICH’s work—research that’s now incorporated into the official ICH Q9(R1) training materials. This isn’t theoretical work gathering dust on academic shelves; it’s research that’s actively shaping global regulatory guidance.

Why Risk Revolution Deserves Your Attention

The Risk Revolution podcast, co-hosted by Valerie alongside Nuala Calnan (25-year pharmaceutical veteran and Arnold F. Graves Scholar) and Dr. Lori Richter (Director of Risk Management at Ultragenyx with 21+ years industry experience), represents something unique in pharmaceutical podcasting. This isn’t your typical regulatory update show—it’s a monthly masterclass in advancing risk management maturity.

In an industry where staying current isn’t optional—it’s essential for patient safety—Risk Revolution offers the kind of continuing education that actually advances your professional capabilities. These aren’t recycled conference presentations; they’re conversations with the people shaping our industry’s future.

The Lotus Blossom Brainstorming Technique

In the world of creative problem-solving and idea generation, the Lotus Blossom technique stands out as a powerful and structured approach to brainstorming. Developed by Yasuo Matsumura, a Japanese management consultant, this method combines the free-flowing nature of traditional brainstorming with a systematic framework that encourages deeper exploration of ideas.

How It Works

The Lotus Blossom technique uses a visual diagram resembling a lotus flower, hence its name. Here’s a step-by-step breakdown of the process:

  1. Start with a central idea or problem in the middle of a 3×3 grid.
  2. Surround the central concept with eight related ideas or themes.
  3. Take each of these eight ideas and make them the center of their own 3×3 grids.
  4. Generate eight new ideas for each of these secondary grids.
  5. Repeat the process until you have a fully bloomed “lotus” of ideas.

By the end of this process, you’ll have generated up to 64 ideas stemming from your original concept.
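The five steps above can be sketched as a small data structure. This is a minimal illustration of the grid-of-grids idea, not an implementation from any library; the class and method names (LotusBlossom, add_theme, add_idea) are my own.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class LotusBlossom:
    """Minimal model of a Lotus Blossom diagram: one central concept,
    up to eight primary themes, and up to eight ideas per theme."""
    central: str
    themes: List[str] = field(default_factory=list)            # the 8 cells around the center
    ideas: Dict[str, List[str]] = field(default_factory=dict)  # theme -> its 8 secondary ideas

    def add_theme(self, theme: str) -> None:
        # Step 2: surround the central concept with up to eight themes.
        if len(self.themes) >= 8:
            raise ValueError("a Lotus Blossom holds only eight primary themes")
        self.themes.append(theme)
        self.ideas[theme] = []

    def add_idea(self, theme: str, idea: str) -> None:
        # Steps 3-4: each theme becomes the center of its own 3x3 grid.
        if len(self.ideas[theme]) >= 8:
            raise ValueError(f"theme '{theme}' already has eight ideas")
        self.ideas[theme].append(idea)

    def total_ideas(self) -> int:
        # A fully bloomed lotus yields 8 themes x 8 ideas = 64 ideas.
        return sum(len(v) for v in self.ideas.values())
```

Filling all eight themes with eight ideas each reproduces the “up to 64 ideas” count mentioned above, which is one easy way to check whether a session actually bloomed the full lotus.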

Benefits of the Lotus Blossom Technique

Structured Creativity: Unlike traditional brainstorming, which can sometimes feel chaotic, the Lotus Blossom method provides a clear structure for idea generation.

Depth and Breadth: This technique encourages both broad thinking and deep exploration of specific themes.

Visual Organization: The diagram format helps visualize connections between ideas and keeps the brainstorming process organized.

Flexibility: It can be used individually or in small groups, making it versatile for various settings.

Tips for Success

To make the most of the Lotus Blossom technique, consider these tips:

  • Embrace All Ideas: Don’t self-censor. Even seemingly unrelated or far-fetched ideas can spark innovation.
  • Time Management: Set time limits for each phase to maintain momentum and prevent overthinking.
  • Iterate and Refine: After completing the diagram, review and refine your ideas. Look for patterns or combinations that might lead to breakthrough solutions.

Who-What Matrix

Effective organizations assign people to particular roles, such as Process Owners, to solve problems better and make choices faster. Yet it is frighteningly easy to exclude the right people from problem-solving. Who plays what role is not always clear, and in organizations where specialized knowledge and expertise are widely distributed, different parts of the organization can see different problems in the same situation. Getting the right people to the whiteboard is essential.

The Who-What Matrix is a great tool to ensure the right people are involved.

By including a wider set of people, the Who-What Matrix assists in creating trust, commitment, and a sense of procedural justice, and thus enhances the likelihood of success. The matrix can also integrate people across functions, hierarchy, business units, locations, and partner organizations.

Once the need to problem-solve is identified, the matrix can be used to determine which people and organizations should be involved in which roles and whose interests should be taken into account in the deliberations. Players may provide input (information, ideas, resources); be part of the solving process (formulating the problem, gathering data, doing analyses, generating solution options, supporting the work); be among those making choices; or execute them. Considering the interests of all players during problem-solving can lead to better choices and outcomes.

The aim is to use the framework’s categories to think broadly but be selective in deciding which players play what role. A lengthy collection of players can be so overwhelming as to lead to neglect. The same player can play more than one role, and roles played can change over time. Players can come and go as problem-solving proceeds and circumstances change.

By deliberately bringing people into problem-solving, we are showing how to give people a meaningful role in the learning culture.

Who-What Matrix

The roles break down as:

  • Input: Provide input, provide data gathering, data sources
  • Recommend: Evaluate problem, recommend solutions and path forward
  • Decide: Make the final decision and commit the organization to action
  • Perform: Be accountable for making the decision happen once made
  • Agree: Formally approve a decision, implies veto power
  • Outcome: Accountable for the outcome of problem solving, results over time
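A minimal sketch of how these role assignments might be tracked and sanity-checked in practice. The class, method names, and the specific checks are my own illustration of the matrix, not a published tool; adapt the rules (for example, how many decision-makers you allow) to your organization.

```python
from collections import defaultdict
from typing import List, Set

# The six Who-What Matrix roles, in the order listed above.
ROLES = ("Input", "Recommend", "Decide", "Perform", "Agree", "Outcome")


class WhoWhatMatrix:
    """Tracks which players hold which problem-solving roles.
    A player may hold several roles, and roles can change over time."""

    def __init__(self) -> None:
        self._roles = defaultdict(set)  # role -> set of player names

    def assign(self, player: str, role: str) -> None:
        if role not in ROLES:
            raise ValueError(f"unknown role: {role}")
        self._roles[role].add(player)

    def players_for(self, role: str) -> Set[str]:
        return set(self._roles.get(role, set()))

    def roles_for(self, player: str) -> List[str]:
        return [r for r in ROLES if player in self._roles[r]]

    def check(self) -> List[str]:
        """Flag unfilled roles, and warn when Decide is over-filled."""
        issues = [f"no one assigned to {role}" for role in ROLES
                  if not self._roles[role]]
        if len(self._roles["Decide"]) > 1:
            issues.append("more than one decision-maker may slow the choice")
        return issues
```

The `check` method captures the point above about being broad but selective: an empty role means a perspective is missing, while a crowded Decide role is one plausible sign that the player list has become overwhelming.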

Interviewing

One of the great tools of root cause analysis, planning, process improvement, and knowledge management is the interview. Properly used, the interview allows one to gather a great deal of information and perspective and to ferret out hidden information.

For interviews to be truly effective, we have to understand how they function and apply a process. Cognitive Interviewing, originally created for law enforcement and later adopted during accident investigations by the National Transportation Safety Board (NTSB), provides an effective framework. I was first introduced to this at my previous company, where it has become a real linchpin, so I share it here.

The two principles here are:

  • Witnesses need time and encouragement to recall information
  • Retrieval cues enhance memory recall

Based on these two principles there are four components:

Four Components of Cognitive Interviewing:

  • Mental Reinstatement: Encourage the interviewee to mentally recreate the environment and people involved.
  • In-Depth Reporting: Encourage the reporting of all the details, even if minor or not directly related to the purpose of the interview. This is intended to improve the detail and accuracy of memory. For example, if investigating a computer error, you would encourage the interviewee to discuss everything they were doing around the event. You would hold the interview at the station where the error happened, ideally using the computer as much as possible.
  • Multiple Perspectives: Ask the interviewee to recall the event from others’ points of view. For example, the person upstream or downstream, or a partner or observer.
  • Several Orders: Ask the interviewee to recount the timeline in different ways: beginning to end, end to beginning.

A key part of this is that retrieval cues access memory. This is why doing the interview on the scene (or Gemba) is so effective.

The basic behaviors you want to bring to bear are:

  • Recreate the original context; have them outline and walk you through the process to explain how they work.
  • Tell the witness to actively generate information and not wait passively for the interviewer to ask questions.
  • Adopt the witness’s perspective; ask eyewitness-compatible questions.
  • Perform the interview at the Gemba, the place where the work happens.
  • Listen actively, do not interrupt, and pause after the witness’s response.
  • Ask open-ended questions, utilize short phrases when possible.
  • Encourage the witness to use imagery. Explicitly request detailed descriptions.
  • Follow the sequence of the cognitive interview major components.
  • Bring support materials such as attachments, procedures, and copies of relevant documents.
  • Establish a connection with the witness; demeanor has a big impact.
  • Remember, active listening.
  • Do not tell the interviewee how they made the mistake, blame, or assume.

Active listening is key here.

The Active Listening Funnel

At the mouth of the funnel we begin with an ‘open’ question. This question is intended to give the interviewee the widest possible scope for responding. Sometimes it may be necessary to repeat or rephrase this question to give the interviewee more thinking time and further opportunities to raise information. Working down the narrowing body of the funnel we use a series of probing questions to draw out further specific information and help complete the picture. Closed questions then have their place to draw out, check or confirm specific pieces of information, or to get the interviewee to commit on a point more precisely. This then brings us to the bottom of the funnel where we clarify, using a short summary, what we have got out of the discussion, aiming to check our understanding of the main points. The question sequence might go something like this:

  • ‘Tell me how you went about…?’ (open)
  • ‘How did you prepare?’ (open – secondary)
  • ‘What was your starting point?’ (probe)
  • ‘So, what happened next?’ (probe)
  • ‘Who else was involved?’ (probe)
  • ‘And how did they respond?’ (probe)
  • ‘What were your thoughts at that stage?’ (probe)
  • ‘What were the main outcomes?’ (probe)
  • ‘So, that took a total of 30 minutes?’ (closed – clarifying)
  • ‘And the task was completed?’ (closed – clarifying)
  • ‘So, let me see if I’ve followed you…’ (checking – summary)

A good interview requires preparation. Have opening questions ready, ensure you have all the right props and the right people involved. That extra hour or two will pay dividends.

Here is a helpful worksheet.

ASQ Audit Conference – Day 2 Morning

Jay Arthur “The Future of Quality”

Starts with “our heroes are gone” and “it is time to stand on our two feet.”

Focuses on the time and effort to train people on lean and six sigma, and how many people do not actually do projects. Basic point is that we use the tools in old ways which are not nimble and aligned to today’s needs. The tools we use versus the tools we are taught.

Hacking lean six sigma is along a similar line to Art Smalley’s four problems.

Applying the spirit of hacking to quality.

Covers value stream mapping and spaghetti diagrams with a focus on “the delays in between.” Talks about how control charts are not more standard. Basic point is people don’t spend enough time with the tools of quality. A point I have opinions on that will end up in another post.

Overcooked data versus raw data – summarized data has little or no nutritional value.

Brings this back to the issue of lack of problem diagnosis and not problem solving. Comes back to a need for a few easy tools and not the long-tail of six sigma.

This talk is very focused on LSS and the use of very specific tools, which seems like an odd choice at an Audit conference.

“Objectives and Process Measures: ISO 13485:2016 and ISO 9001:2015” by Nancy Pasquan

I appreciate it when the session manager (person who introduces the speaker and manages time) does a safety moment. Way to practice what we preach. Seriously, it should be a norm at all conferences.

Connects with the audience with a confession that the speaker is here to share her pain.

Objective – where we are going. Provides a flow chart of mission/vision (scope) -> establish process -> right direction? -> monitor and measure.

Objectives should challenge the organization. Should not be too easy. References SMART. Covers objectives in very standard way. “Remember the purpose is to focus the effort of the entire organization toward these goals.” Links process objectives to the overall company objectives.

Process measures are harder. Uses training as an example. Which tells me adult learning practice is not as embedded in the QBOK way of thinking as I would like. Kirkpatrick is a pretty well-known model.

The idea that process measures will not tell us if we have the right process is a pretty loaded concept. Being careful of what you measure is good advice.

“Auditing Current Trends in Cleaning Validation” by Cathelene Compton

One of the trends in 2019 FDA Warning letters has been cleaning. While not one of the four big ones, cleaning validation always seems relevant and I’m looking forward to this presentation.

Starting with the fact that 15% of all observations on 483 forms relate to cleaning validation and documentation.

Reviews the three stages from the 2011 FDA Process Validation Guidance and then delves into a deeper validation lifecycle flowchart.

Some highlights:

Stage 1 – choosing the right cleaning agent; different manufacturers of cleaning agents; long-term damage to equipment parts and cleaning agent compatibility. Vendor study for cleaning agent; concentration levels; challenge the cleaning process with different concentrations.

Delves more into cleaning acceptance limits and the importance of calculating them in multiple ways. Stresses the importance of involving a toxicologist. Stresses the use of Permitted Daily Exposure and how it can be difficult to get the F-factors.
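For context on the PDE-based limits the speaker mentions, here is a hedged sketch of the carry-over calculation commonly derived from the EMA guideline on health-based exposure limits: the Maximum Allowable Carry-Over (MACO) of the previous product into the next product's batch. The function name, parameter names, and worked numbers are illustrative, not from the presentation.

```python
def maco_from_pde(pde_prev_mg_per_day: float,
                  min_batch_size_next_mg: float,
                  max_daily_dose_next_mg: float) -> float:
    """Maximum Allowable Carry-Over (mg) of the previous product
    into the next product's batch, based on the previous product's
    Permitted Daily Exposure (PDE).

        MACO = PDE_previous * MBS_next / MDD_next

    pde_prev_mg_per_day:    PDE of the previous product (mg/day)
    min_batch_size_next_mg: minimum batch size of the next product (mg)
    max_daily_dose_next_mg: maximum daily dose of the next product (mg/day)
    """
    return (pde_prev_mg_per_day * min_batch_size_next_mg
            / max_daily_dose_next_mg)


# Illustrative numbers only: PDE of 0.5 mg/day, a 200 kg batch,
# and a 1000 mg/day maximum daily dose for the next product.
maco_mg = maco_from_pde(0.5, 200_000_000, 1000)  # 100,000 mg total carry-over
```

This total is then divided over the shared equipment surface area to set per-swab acceptance limits, which is one reason the talk stresses calculating limits in multiple ways and involving a toxicologist to set the PDE.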

Ensure that analytical methods meet ICH Q2(R1). Recovery studies on materials of construction. For the cleaning agent, look for a target marker, and check whether other components in the laboratory also use this marker. A common pitfall is a glassware washer that is not validated.

Trends around recovery factors; for example, recoveries for stainless steel should be 90%.

Discusses matrix rationales from the Mylan 483, stressing the need to ensure all toxicity levels are determined and pharmacological potency is accounted for.

Stage 2 – all studies should include visual inspection, micro, and analytical testing. Materials of construction and surface area calculations, and swabs on hard-to-clean or water hold-up locations. Chromatography must be assessed for extraneous peaks.

Validation vs verification – validation always preferred.

Training – qualify the individuals who swab. Qualify visual inspectors.

Should see campaign studies, clean hold studies and dirty equipment hold studies.

Stage 3 – continued monitoring is so critical; it is where folks fall flat. Perform every 6 months, and no more than a year for manual cleaning. CIP should be under a periodic review of mechanical aspects, which means requalification can be 2–3 years out.