AI and Quality Profession Work

AI and its capabilities are big in the news right now, and inevitably folks start asking, “What will this mean for my profession?”

The pharmaceutical GxP world is inherently slow to adopt new technologies. How many of my CxOs are still using 1990s technology? All of them. However, AI/ML has been showing up in more and more places, so it is good to examine its potential impact on the Quality profession.

It may seem counter-intuitive, but the first place AI-powered software is making a difference is in improving the speed, accuracy, and efficiency of document review. From the eTMF to lab results to all the forms still used on the manufacturing floor, AI is already reviewing documents at a rate much faster than humans (and more reliably). Expect these options to grow and become a standard part of offerings such as your eTMF.

Trending is another place where AI/ML software is more capable than humans. Basically, anything that involves routine analysis will eventually be done by AI software. We’re already seeing more and more automation built into COTS products, and we will continue to see this trend grow.

So what won’t AI take the place of in the Quality profession? Our real value is to strategize, understand causation, be creative, empathize, and use common sense.

There’s a long way to go before AI can even do the things above. Have you asked one to write a policy on use of AI in GxP? Said policy would result in no AI ever being used in the GxPs.

Here is an example of a GxP policy for the use of AI to write SAS scripts, provided by an AI:

GxP Policy for Use of AI to Write SAS Scripts

  1. Purpose: The purpose of this policy is to provide guidance on the use of AI to write SAS scripts in compliance with GxP regulations.
  2. Scope: This policy applies to all AI systems and applications used to write SAS scripts for the analysis of data subject to GxP regulations.
  3. Responsibilities: It is the responsibility of all employees to comply with this policy and to report any non-compliance to the appropriate management.
  4. Compliance with GxP Regulations: All AI systems and applications used to write SAS scripts must be designed, developed, validated, and maintained in compliance with applicable GxP regulations.
  5. Data Integrity: All data used by AI systems to write SAS scripts must be accurate, complete, consistent, and attributable. Data must be protected against unauthorized access or modification.
  6. Validation: All AI systems used to write SAS scripts must be validated to ensure that they produce accurate and reliable scripts.
  7. Change Control: Any changes to AI systems used to write SAS scripts must be documented and approved through a formal change control process.
  8. Training: All employees involved in the use of AI systems to write SAS scripts must receive appropriate training on their use and on this policy.
  9. Audits and Inspections: All AI systems used to write SAS scripts are subject to internal and external audits and inspections to ensure compliance with this policy and with applicable GxP regulations.
  10. Continuous Improvement: This policy will be reviewed regularly and updated as necessary to ensure its continued effectiveness.

Awesome. I’ll circulate this beauty for approval right away.

AI/ML-Based SaMD Framework

The US Food and Drug Administration’s proposed regulatory framework for artificial intelligence- (AI) and machine learning- (ML) based software as a medical device (SaMD) is fascinating in what it exposes about the uncertainty around the near-term future of a lot of Industry 4.0 initiatives in pharmaceuticals and medical devices.

While focused on medical devices, this proposal is an interesting read for anyone interested in applying machine learning and artificial intelligence to other regulated areas, such as manufacturing.

We are seeing the early stages of consensus building around the concept of Good Machine Learning Practices (GMLP): the idea of applying quality system practices to the unique challenges of machine learning.

WCQI Day 2 – morning

My day 2 at WCQI is Day 1 of the conference proper. I’m going to try to live blog.

Morning keynote

Today’s morning keynote is the same futurist as at the LSS conference in Phoenix last month, Patrick Schwerdtfeger. Not only was I dismayed that it was the exact same talk, I was reminded yet again how much I dislike futurists. I’m all about thinking of the future, but futurists seem to be particularly bad at it. It is all woowoo and bro-slapping and never ever a serious consideration of the impact of technology. Futurists are grifters.

These grifters profit by obscuring facts for personal gain. They are all working an angle: the health gurus and the life hackers peddling easy solutions to difficult problems, the futurists who simply state current trends as revelations. They are all trying to pull off the ultimate con: persuading people they really matter.

They are selling themselves: their books, their podcasts, their websites, their supplements, their claims to some secret knowledge about how the world works. But I fundamentally doubt that anyone who gave 40 talks in the last year has the bandwidth to do anything that really matters. It is all snake-oil. And as quality professionals, individuals who are dedicated to process and transparency and continuous improvement, we deserve better.

I’m not sure how these keynotes are selected, but I think we need to look holistically at what we want to be as a society and at the pillars we want for our conferences.

Anyone know a good article that compares futurists and life hackers to the prosperity gospel? They seem to be coming from a similar place in the American psyche.

Ooh, artificial intelligence. Don’t get me wrong, there is real potential (maybe not the potential people feel there is), but most discussions of artificial intelligence are hype and bluster, and this presentation is no different. Autonomous vehicles, blockchains. Hype and bluster.

There are definitely people thinking about this seriously, offering real insights and tools. We’re just not there. The speaker admits he gets 90% of his income from speaking. Pretty sure he isn’t actually doing that much. He might be an aggregator (most of his slides with real content were attributed), but I keep struggling to see value here.

Gratuitous Steve Jobs picture.

After this there is some white space for vendor stuff.

At least I could multi-task and did some work.

Quality 4.0 Talks

I didn’t attend many of these last year because they were all standing-room and had a thrown-together feeling. This year the ASQ seems to have upped the game. Shorter sessions can be good if the presentation is tight. At 15 minutes, that is a hard bar to clear.

First up we have Nicole Radziwill on “Mapping Quality Problems to Machine Learning Solutions.” Nicole is very active in the software section, which under the new membership model I’ll start paying a lot more attention to.

In this short presentation Nicole focuses on hitting the points of her 2018 Quality Progress article, talking about Quality 4.0’s path from Taylorism as “discovery & learning.”

She references Jim Duarte’s article Data Disruption.

She hit on big data hubris, the importance of statistics and analysis, and the importance of defining models before we use them.

From “Let’s Get Digital” in Quality Progress.

Covers machine learning problem types at a high level.

  1. Prediction
  2. Classification
  3. Pattern Identification (Clustering)
  4. Data Reduction
  5. Anomaly Detection
  6. Pathfinding

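As a sketch of how one of these problem types shows up in everyday quality data, here is a minimal anomaly-detection example. It is pure Python; the assay values and the 2-sigma threshold are hypothetical illustrations, not a validated method:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return indices of points more than `threshold` standard
    deviations from the mean: a deliberately simple stand-in for
    the 'Anomaly Detection' problem type. Real deviation-trending
    tools use far richer models."""
    mu = mean(values)
    sigma = stdev(values)
    return [i for i, v in enumerate(values)
            if sigma > 0 and abs(v - mu) / sigma > threshold]

# Hypothetical assay results with one out-of-trend point
assays = [99.8, 100.1, 100.3, 99.9, 100.0, 112.5, 100.2, 99.7]
print(flag_anomalies(assays))  # flags index 5 (the 112.5 result)
```

A z-score only suits roughly normal, independent data; the clustering and pattern-identification types above exist precisely for the messier cases.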
Her high-level recommendations covered domain expertise, statistical expertise, data quality, and awareness of human bias.

Next up is Beverly Daniels on “Risk and Industry 4.0.”

It is telling that so many of the speakers make Millennial jokes. As a Gen-Xer I am both annoyed, because no one ever made Gen-X jokes (the Boomers never even noticed we were at the conference), AND frustrated, because it says something about the graying of the ASQ.

A quick review of risk as more than probability. She hit on human beings being eternally optimistic and thus horrible at estimating probability. As this was a quick talk, it left more questions than answers. Getting rid of probability from risk is something I need to think about more, but her points align with my thoughts on uncertainty.

Focusing on impact and mitigation is interesting. I liked the line “All it does is make your management feel good about not making a decision.”

Considerations when validating and auditing algorithms

The future is now. Industry 4.0 probably means you have algorithms in your process. For example, if you aren’t using algorithms to analyze deviations, you probably soon will be.

And with those algorithms comes a whole host of questions about how to validate them and how to ensure they work properly over time. The FDA has indicated that “we want to get an understanding of your general idea for model maintenance.” The FDA also wants to know the “trigger” for updating the model, the criteria for recalibration, and the level of validation of the model.
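As an illustrative sketch of what such a recalibration “trigger” could look like, here is a Population Stability Index (PSI) check that compares live input data against the training baseline. The thresholds, bin count, and function names are assumptions to be tuned per system, not anything FDA-prescribed:

```python
import math

def psi(baseline, live, bins=4):
    """Population Stability Index between training and production data.

    Common rule of thumb (an assumption, not a regulatory value):
    PSI < 0.1 stable, 0.1-0.25 watch, > 0.25 open a recalibration review.
    """
    lo, hi = min(baseline), max(baseline)
    # equal-width bin edges derived from the training baseline
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(data):
        counts = [0] * bins
        for x in data:
            counts[sum(x > e for e in edges)] += 1
        # small epsilon avoids log(0) on empty bins
        return [(c + 1e-6) / (len(data) + bins * 1e-6) for c in counts]

    p, q = frac(baseline), frac(live)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

def recalibration_triggered(baseline, live, threshold=0.25):
    """The documented 'trigger': drift beyond a pre-agreed PSI limit."""
    return psi(baseline, live) > threshold
```

In a GxP setting the point is less the statistic itself than that the trigger, its threshold, and the response are documented and approved before go-live.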

Kate Crawford at Microsoft speaks about “data fundamentalism”: the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. It shouldn’t take much to see why this trap can produce some very bad decision making. Our algorithms have biases, just as human beings have biases. They are dependent on the data models used to build and refine them.

Based on reported FDA thinking, and given where European regulators are in other areas, it is very clear we need to be able to explain and justify our algorithmic decisions. Machine learning is here now and will only grow more important.

Basic model of data science

Ask an Interesting Question

The first step is to be very clear on why there is a need for this system and what problem it is trying to solve. Alignment across all the stakeholders is key to guaranteeing the entire team is working toward the same purpose. This is where we start building a framework.

Get the Data

The solution will only be as good as what it learns from. As the common saying goes, “garbage in, garbage out”: the problem is usually not the machine learning tool itself, but how it has been trained and what data it is learning from.

Explore the Data

Look at the raw data. Look at data summary. Visualize the data. Do it all again a different way. Notice things. Do it again. Probably get more data.  Design experiments with the data.
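A first pass at “look at a data summary” can be as small as this pure-Python sketch (the example values are hypothetical; real exploration would add plots, cross-tabs, and repeated passes as described above):

```python
from statistics import mean, median, stdev

def summarize(values):
    """Quick numeric summary: the first 'notice things' pass
    over a column of data before any modeling."""
    return {
        "n": len(values),
        "min": min(values),
        "max": max(values),
        "mean": mean(values),
        "median": median(values),
        "sd": stdev(values),
    }

# Hypothetical assay results; the large max vs. median already
# hints there is something worth investigating.
print(summarize([99.8, 100.1, 100.3, 99.9, 112.5]))
```

Even this crude summary forces the "notice things, then look again" loop: a max far from the median sends you back for more data.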

Model the Data

The only true way to validate a model is to observe, iterate, and audit. If we take a traditional CSV (computer system validation) approach to machine learning, we are in for a lot of hurt. We need to take the framework we built and validate against it, ensure there are mechanisms to observe against this framework, and audit performance over time.
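One minimal sketch of auditing performance over time (the class name, the 90% acceptance criterion, and the window size are all hypothetical; in practice the criterion comes from the validation framework agreed before go-live, never picked after the fact):

```python
class PerformanceAudit:
    """Rolling accuracy check against a pre-agreed acceptance criterion."""

    def __init__(self, acceptance_threshold=0.90, window=50):
        self.threshold = acceptance_threshold
        self.window = window
        self.records = []  # (predicted, actual) pairs

    def log(self, predicted, actual):
        """Record a prediction once its ground truth becomes known."""
        self.records.append((predicted, actual))

    def current_accuracy(self):
        recent = self.records[-self.window:]
        if not recent:
            return None
        return sum(p == a for p, a in recent) / len(recent)

    def out_of_tolerance(self):
        """True when recent performance falls below the criterion."""
        acc = self.current_accuracy()
        return acc is not None and acc < self.threshold

audit = PerformanceAudit(acceptance_threshold=0.90)
for predicted, actual in [("ok", "ok")] * 9 + [("ok", "deviation")] * 3:
    audit.log(predicted, actual)
# 9 of 12 correct: accuracy 0.75, below the 0.90 criterion
print(audit.current_accuracy(), audit.out_of_tolerance())
```

The mechanism matters more than the metric: whatever is observed, it should trace back to the framework built in the first step.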