AI Can Create Value, but Only If You Bring Employees Along

A great article in HBR by Behnam Tabrizi and Babak Pahlavan, “Companies That Replace People with AI Will Get Left Behind,” makes excellent points about how companies should look to AI to free up employees to create new value, not to replace them.

Automation has been a constant throughout my career. Organizations that leveraged automation to create value consistently outperformed those that used it as an excuse to cut jobs.

As we move, oh so quickly, toward dealing with the impact of hyper-automation on our organizations, it is important to have a vision and a strategy. Apply quality principles, and remember to drive out fear throughout the strategic execution.

AI and Quality Profession Work

AI and its capabilities are big in the news right now, and inevitably folks start asking, “What will this mean for my profession?”

The pharmaceutical GxP world is inherently slow to adopt new technologies. How many of my CxOs are still using 1990s technology? All of them. However, AI/ML has been showing up in more and more places, so it is worth examining its potential impact on the Quality profession.

It may seem counter-intuitive, but the first place AI-powered software is making a difference is in improving the speed, accuracy, and efficiency of document review. From the eTMF to lab results to all the forms still used on the manufacturing floor, AI is already reviewing documents much faster than humans (and more reliably). Expect these options to grow and become a larger part of offerings such as your eTMF.

Trending is another place where AI/ML software is more capable than humans. Basically, anything that involves analysis will eventually be done by AI software. We’re already seeing more and more automation built into COTS offerings, and this trend will continue to grow.
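To make the trending point concrete, here is a minimal, hypothetical sketch of the kind of routine trend-flagging that such tools automate: establish control limits from a qualified baseline, then flag later results that fall outside them. The function name, baseline size, and assay values are illustrative assumptions, not any specific product's API or real data.

```python
# Illustrative sketch of automated trending: flag results that fall
# outside 3-sigma control limits computed from an initial baseline.
# All names and values here are hypothetical examples.
from statistics import mean, stdev

def flag_out_of_trend(results, baseline_n=5, sigma_limit=3.0):
    """Return indices of post-baseline results outside the control limits.

    The first `baseline_n` results establish the center line and limits;
    every later result is checked against mean +/- sigma_limit * sd.
    """
    baseline = results[:baseline_n]
    center = mean(baseline)
    spread = stdev(baseline)
    lower = center - sigma_limit * spread
    upper = center + sigma_limit * spread
    return [i for i, x in enumerate(results[baseline_n:], start=baseline_n)
            if x < lower or x > upper]

# Example: assay results (% label claim) with one obvious excursion
batch_results = [99.8, 100.1, 99.9, 100.2, 100.0, 99.7, 100.3, 92.5]
print(flag_out_of_trend(batch_results))  # -> [7]
```

A rule this simple is one line of a Shewhart-style control chart; the value of ML-enabled trending tools is layering many such rules, learning baselines automatically, and running them continuously across thousands of data streams.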

So what won’t AI take the place of in the Quality profession? Our real value is to strategize, understand causation, be creative, empathize, and use common sense.

There’s a long way to go before AI can even do the things above. Have you asked one to write a policy on the use of AI in GxP? Said policy would result in no AI ever being used in the GxPs.

Here is an example of a GxP policy for the use of AI to write SAS scripts, provided by an AI:

GxP Policy for Use of AI to Write SAS Scripts

  1. Purpose: The purpose of this policy is to provide guidance on the use of AI to write SAS scripts in compliance with GxP regulations.
  2. Scope: This policy applies to all AI systems and applications used to write SAS scripts for the analysis of data subject to GxP regulations.
  3. Responsibilities: It is the responsibility of all employees to comply with this policy and to report any non-compliance to the appropriate management.
  4. Compliance with GxP Regulations: All AI systems and applications used to write SAS scripts must be designed, developed, validated, and maintained in compliance with applicable GxP regulations.
  5. Data Integrity: All data used by AI systems to write SAS scripts must be accurate, complete, consistent, and attributable. Data must be protected against unauthorized access or modification.
  6. Validation: All AI systems used to write SAS scripts must be validated to ensure that they produce accurate and reliable scripts.
  7. Change Control: Any changes to AI systems used to write SAS scripts must be documented and approved through a formal change control process.
  8. Training: All employees involved in the use of AI systems to write SAS scripts must receive appropriate training on their use and on this policy.
  9. Audits and Inspections: All AI systems used to write SAS scripts are subject to internal and external audits and inspections to ensure compliance with this policy and with applicable GxP regulations.
  10. Continuous Improvement: This policy will be reviewed regularly and updated as necessary to ensure its continued effectiveness.

Awesome. I’ll circulate this beauty for approval right away.

Robert Morris and Koko are Violators of International Standards

The Declaration of Helsinki is the bedrock of international principles in human research, and the foundation of governmental practices, including the ICH E6 Good Clinical Practice guideline. The core principle is respect for the individual (Article 8) and their right to self-determination and to make informed decisions (Articles 20, 21, and 22) regarding participation in research, both initially and during the course of the research. These are principles that Dr. Robert Morris violated when his firm, Koko, used artificial intelligence to conduct medical research on uninformed participants. The man and his company deserve the full force of international censure, including disbarment by the NHS and all other international bodies with even a shred of oversight over health practices.

I’m infuriated by this. AI is already an ethically ambiguous area full of concerns, and for this callous individual and his company to waltz in and break a fundamental principle of human research is unconscionable.

Another reason why we need serious regulatory oversight of AI. We won’t see this from the US, so hopefully the EU gets its act together and pushes forward. GDPR may not be perfect, but we are in a better place with something rather than nothing, and as the actions of callous companies like Koko show, we are in desperate need of protection when it comes to the ‘promises’ of AI.

Also, shame on Stony Brook’s Institutional Review Board. While this was not a case of IRB shopping, they sure did their best to avoid grappling with the issues behind the study.

I am pretty sure this AI counts as software as a medical device, in which case a whole lot of regulations were broken.

“Move fast and break things” is a horrible mantra, especially when health and well-being are involved. Robert Morris, like Elizabeth Holmes, is an example of why we need a strong oversight regime when it comes to scientific research and why technology on its own is never the solution.

AI/ML-Based SaMD Framework

The US Food and Drug Administration’s proposed regulatory framework for artificial intelligence (AI)- and machine learning (ML)-based software as a medical device (SaMD) is fascinating in what it exposes about the uncertainty around the near-term future of many Industry 4.0 initiatives in pharmaceuticals and medical devices.

While focused on medical devices, this proposal is an interesting read for folks interested in applying machine learning and artificial intelligence to other regulated areas, such as manufacturing.

We are seeing the early stages of consensus building around the concept of Good Machine Learning Practices (GMLP): the idea of applying quality system practices to the unique challenges of machine learning.