AI and Quality Profession Work

AI and its capabilities are big in the news right now, and inevitably folks start asking, “What will this mean for my profession?”

The pharmaceutical GxP world is inherently slow to adopt new technologies. How many of my CxOs are still using 1990s technology? All of them. However, AI/ML has been showing up in more and more places, so it is worth examining its potential impact on the Quality profession.

It may seem counter-intuitive, but the first place AI-powered software is making a difference is in improving the speed, accuracy, and efficiency of document review. From the eTMF to lab results to all the forms still used on the manufacturing floor, AI is already reviewing documents at a rate much faster than humans (and more reliably). Expect these options to grow and become a standard part of product offerings, such as your eTMF.
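To give a flavor of what this looks like under the hood, here is a toy sketch of ML-assisted document triage: a classifier that flags free-text records for human follow-up. The snippets, labels, and scikit-learn pipeline are my own illustration, not how any particular eTMF or review product actually works; real systems train on far larger corpora with much richer models.

```python
# A toy sketch of ML-assisted document review: a classifier that
# triages free-text deviation records for human follow-up.
# Training snippets and labels are invented for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

records = [
    "Temperature excursion during storage, product quarantined",
    "Routine calibration completed, all results within tolerance",
    "Operator signature missing on batch record page 3",
    "Preventive maintenance performed per schedule",
]
needs_review = [1, 0, 1, 0]  # 1 = flag for a human reviewer

# TF-IDF features feeding a logistic regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(records, needs_review)

new_record = "Entry missing initials and date on cleaning log"
print("Flag for review:", bool(model.predict([new_record])[0]))
```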

Trending is another place where AI/ML software is more capable than humans. Basically, anything that involves analysis will eventually be done by AI software. We’re already seeing more and more automation built into commercial off-the-shelf (COTS) products, and we will continue to see this trend grow.
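Much of this trending work is already mechanical. As a minimal sketch (my own illustration, not any particular product), here is an automated check in the spirit of Nelson Rule 3, which flags six consecutive results that are steadily increasing or decreasing:

```python
# A minimal sketch of an automated trending check in the spirit of
# Nelson Rule 3 (six points in a row steadily increasing or
# decreasing). Data and run length are illustrative only.

from typing import List

RUN_LENGTH = 6  # classic Nelson Rule 3 run length

def find_trends(results: List[float], run_length: int = RUN_LENGTH) -> List[int]:
    """Return the start index of every strictly monotonic run of
    `run_length` consecutive results."""
    flagged = []
    for i in range(len(results) - run_length + 1):
        window = results[i : i + run_length]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            flagged.append(i)
    return flagged

# Illustrative assay results: the last six drift steadily upward.
assay = [99.8, 100.1, 99.9, 100.2, 100.0, 100.3, 100.6, 100.9, 101.3, 101.7]
for start in find_trends(assay):
    print(f"Possible trend starting at result {start}:",
          assay[start : start + RUN_LENGTH])
```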

So what won’t AI take the place of in the Quality profession? Our real value is to strategize, understand causation, be creative, empathize, and use common sense.

There’s a long way to go before AI can even do the things above. Have you asked one to write a policy on the use of AI in GxP? Said policy would result in no AI ever being used in the GxPs.

Here is an example of a GxP policy for the use of AI to write SAS scripts, provided by an AI:

GxP Policy for Use of AI to Write SAS Scripts

  1. Purpose: The purpose of this policy is to provide guidance on the use of AI to write SAS scripts in compliance with GxP regulations.
  2. Scope: This policy applies to all AI systems and applications used to write SAS scripts for the analysis of data subject to GxP regulations.
  3. Responsibilities: It is the responsibility of all employees to comply with this policy and to report any non-compliance to the appropriate management.
  4. Compliance with GxP Regulations: All AI systems and applications used to write SAS scripts must be designed, developed, validated, and maintained in compliance with applicable GxP regulations.
  5. Data Integrity: All data used by AI systems to write SAS scripts must be accurate, complete, consistent, and attributable. Data must be protected against unauthorized access or modification.
  6. Validation: All AI systems used to write SAS scripts must be validated to ensure that they produce accurate and reliable scripts.
  7. Change Control: Any changes to AI systems used to write SAS scripts must be documented and approved through a formal change control process.
  8. Training: All employees involved in the use of AI systems to write SAS scripts must receive appropriate training on their use and on this policy.
  9. Audits and Inspections: All AI systems used to write SAS scripts are subject to internal and external audits and inspections to ensure compliance with this policy and with applicable GxP regulations.
  10. Continuous Improvement: This policy will be reviewed regularly and updated as necessary to ensure its continued effectiveness.

Awesome. I’ll circulate this beauty for approval right away.

Bow-Tie Diagram

The bow-tie method is a powerful tool for visualizing and managing risks. Named after its distinctive shape, this tool is used to analyze the causes and consequences of potential risks.

At the center of the bow-tie diagram is the “top event,” which represents the risk being analyzed. On the left side of the diagram are the potential causes of the top event, while on the right side are the potential consequences. The diagram also includes barriers or controls that can be put in place to prevent or mitigate the risk.

To create a bow-tie diagram, first identify the “top event” representing the risk being analyzed. This is placed at the center of the diagram.

Next, you identify the potential causes of the top event and place them on the left side of the diagram. These causes can be further broken down into sub-causes if necessary.

On the right side of the diagram, you identify the potential consequences of the top event. These can also be further broken down into sub-consequences if necessary.

Once you have identified the causes and consequences of the top event, you can then add barriers or controls to the diagram. These are measures that can be put in place to prevent or mitigate the risk. Barriers can be placed between the causes and the top event to prevent it from occurring, while controls can be placed between the top event and its consequences to mitigate their impact.
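To make the shape concrete, here is a minimal sketch of a bow-tie captured as a simple data structure. The class names, fields, and the example risk are all my own illustration, not taken from any bow-tie tool:

```python
# A minimal sketch of a bow-tie as a data structure. Causes carry
# preventive barriers (left side), consequences carry mitigating
# controls (right side), and the top event sits in the middle.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Cause:
    description: str
    barriers: List[str] = field(default_factory=list)  # prevent the top event

@dataclass
class Consequence:
    description: str
    controls: List[str] = field(default_factory=list)  # mitigate the impact

@dataclass
class BowTie:
    top_event: str
    causes: List[Cause] = field(default_factory=list)
    consequences: List[Consequence] = field(default_factory=list)

    def render(self) -> str:
        lines = [f"TOP EVENT: {self.top_event}"]
        for c in self.causes:
            lines.append(f"  cause: {c.description} | barriers: {', '.join(c.barriers) or 'none'}")
        for q in self.consequences:
            lines.append(f"  consequence: {q.description} | controls: {', '.join(q.controls) or 'none'}")
        return "\n".join(lines)

# Example: a data-integrity risk in a GxP lab
bt = BowTie(
    top_event="Uncontrolled change to lab results",
    causes=[
        Cause("Shared login on the instrument PC",
              barriers=["Unique user accounts", "Periodic access review"]),
        Cause("Audit trail disabled", barriers=["Locked system configuration"]),
    ],
    consequences=[
        Consequence("Unreliable batch disposition decision",
                    controls=["Second-person review", "Audit trail review"]),
    ],
)
print(bt.render())
```

The text rendering is just a stand-in for the diagram itself; the point is the structure, with barriers sitting between causes and the top event, and controls sitting between the top event and its consequences.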

The bow-tie method works by providing a clear and concise visual representation of a risk and its potential impacts. This allows stakeholders to better understand the risk and identify areas where additional controls may be needed.

This tool also works nicely with desirable consequences, for example when analyzing opportunities rather than threats.


GAMP’s Biggest Problem is the Name

GAMP 5 is pretty clear in its ambition:

This Guide applies to computerized systems used in regulated activities covered by:

•Good Manufacturing Practice (GMP) (pharmaceutical, including Active Pharmaceutical Ingredient (API), veterinary, and blood)

•Good Clinical Practice (GCP)

•Good Laboratory Practice (GLP)

•Good Distribution Practice (GDP)

•Good Pharmacovigilance Practices (GVP)

•Medical Device Regulations (where applicable and appropriate, e.g., for systems used as part of production or the quality system, and for some examples of Software as a Medical Device (SaMD))

GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems (2nd edition)

The biggest problem with GAMP is that when you search for GAMP, the top result spells the acronym out as “Good Automated Manufacturing Practice.” That’s right: the ISPE telling you that GAMP is all about manufacturing. A point that Wikipedia is more than happy to reinforce: https://en.wikipedia.org/wiki/Good_automated_manufacturing_practice

This means that I spend a lot of time explaining why GAMP is relevant outside of manufacturing, to a lot of skeptical people who already struggle with the idea that GCP or GLP isn’t some special and unique flower.

To add to that, it is structured like a GxP. When I see a G-some-letters-P, I instantly think Good <Something> Practice. It is how my brain, and the brain of every single person who works in the GxPs, has been trained.

Second, what is that 5? What does it mean? It’s the sort of esoteric lore that I have to spend yet more time explaining, for absolutely no value.

And then last, I inevitably have to deal with skepticism about something published by the International Society for Pharmaceutical Engineering being even remotely relevant to the work a study investigator is doing.

Without a doubt, GAMP is a powerful methodology and toolbox. It just shoots itself in the foot every time. It is unfortunate that with the 2nd edition the ISPE did not take a deep breath and rebrand, maybe as GDIP or something.

Robert Morris and Koko are Violators of International Standards

The Declaration of Helsinki is the bedrock of international principles in human research, and the foundation of governmental practices, including ICH E6 Good Clinical Practice. The core principle is respect for the individual (Article 8) and their right to self-determination and to make informed decisions (Articles 20, 21, and 22) regarding participation in research, both initially and during the course of the research. These are the principles Dr. Robert Morris violated when his firm, Koko, used artificial intelligence to engage in medical research on uninformed participants. The man and his company deserve the full force of international censure, including disbarment by the NHS and all other international bodies with even a shred of oversight of health practices.

I’m infuriated by this. AI is already an ethically ambiguous area full of concerns, and for this callous individual and his company to waltz in and break a fundamental principle of human research is unconscionable.

This is another reason why we need serious regulatory oversight of AI. We won’t see this from the US, so hopefully the EU gets its act together and pushes forward. GDPR may not be perfect, but we are in a better place with something rather than nothing, and as the actions of callous companies like Koko show, we are in desperate need of protection when it comes to the ‘promises’ of AI.

Also, shame on Stony Brook’s Institutional Review Board. While not a case of IRB shopping, they sure did their best to avoid grappling with the issues behind the study.

I am pretty sure this AI counts as software as a medical device, in which case a whole lot of regulations were broken.

“Move fast and break things” is a horrible mantra, especially when health and well-being are involved. Robert Morris, like Elizabeth Holmes, is an example of why we need a strong oversight regime when it comes to scientific research, and why technology on its own is never the solution.