Considerations when validating and auditing algorithms

The future is now. Industry 4.0 probably means you have algorithms in your process. For example, if you aren’t already using algorithms to analyze deviations, you probably will be soon.

And with those algorithms comes a whole host of questions about how to validate them and how to ensure they work properly over time. The FDA has indicated that “we want to get an understanding of your general idea for model maintenance.” The FDA also wants to know the “trigger” for updating the model, the criteria for recalibration, and the level of validation of the model.

Kate Crawford at Microsoft speaks about “data fundamentalism” – the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. It shouldn’t take much to see why this trap can produce some very bad decision making. Our algorithms have biases, just as human beings have biases. They are dependent on the data and the models used to build and refine them.
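To make the trap concrete, here is a toy sketch in plain Python (all numbers hypothetical) of how a model trained on skewed historical data can report excellent accuracy while never catching the event you actually care about:

```python
# Toy illustration of bias inherited from training data; numbers are made up.
history = ["pass"] * 95 + ["fail"] * 5   # 95% of historical lots passed

# A naive learner simply picks the most common label it has seen.
majority = max(set(history), key=history.count)

new_lots = ["pass"] * 19 + ["fail"]      # one real failure among 20 new lots
predictions = [majority] * len(new_lots)

accuracy = sum(p == a for p, a in zip(predictions, new_lots)) / len(new_lots)
print(f"Accuracy: {accuracy:.0%}")       # 95% -- looks objective and reliable
failures_caught = sum(p == a == "fail" for p, a in zip(predictions, new_lots))
print(f"Failures caught: {failures_caught}")  # 0 -- invisible in the headline metric
```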

Based on reported FDA thinking, and given where European regulators are heading in other areas, it is very clear we need to be able to explain and justify our algorithmic decisions. Machine learning is here now and will only grow more important.

Basic model of data science

Ask an Interesting Question

The first step is to be very clear on why there is a need for this system and what problem it is trying to solve. Alignment across all the stakeholders is key to ensuring the entire team is working toward the same purpose. This is where we start building a framework.

Get the Data

The solution will only be as good as what it learns from. As the common saying goes, “garbage in, garbage out”: the problem is usually not the machine learning tool itself; it lies in how the tool has been trained and what data it is learning from.
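As a minimal sketch of what checking the garbage at the door can look like in practice (the column names and thresholds here are hypothetical, not a standard):

```python
import pandas as pd

def basic_data_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality findings; an empty list means none found."""
    findings = []
    # Completeness: flag columns with excessive missing values.
    for col in df.columns:
        missing = df[col].isna().mean()
        if missing > 0.05:  # 5% threshold is an assumption; tune per system
            findings.append(f"{col}: {missing:.0%} missing")
    # Uniqueness: duplicate records silently over-weight some observations.
    dupes = int(df.duplicated().sum())
    if dupes:
        findings.append(f"{dupes} duplicate rows")
    # Plausibility: a hypothetical range check on a numeric field.
    if "ph" in df.columns and not df["ph"].dropna().between(0, 14).all():
        findings.append("ph values outside 0-14")
    return findings

# Usage (hypothetical file): print(basic_data_checks(pd.read_csv("deviations.csv")))
```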

Explore the Data

Look at the raw data. Look at a data summary. Visualize the data. Do it all again a different way. Notice things. Do it again. Probably get more data. Design experiments with the data.
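For those who want a starting point, a short pandas sketch of that loop; the file and column names are hypothetical placeholders:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("deviations.csv")        # hypothetical extract

# Look at the raw data and a summary.
print(df.head())
print(df.describe(include="all"))

# Visualize it, then visualize it a different way; each view surfaces
# different problems (outliers, gaps, impossible values).
df.hist(figsize=(10, 6))
plt.tight_layout()
plt.savefig("distributions.png")

# Notice things: e.g., does a key metric behave differently by site?
print(df.groupby("site")["cycle_time"].agg(["count", "mean", "std"]))
```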

Model the Data

The only true way to validate a model is to observe, iterate, and audit. If we take a traditional CSV approach to machine learning, we are in for a lot of hurt. We need to take the framework we built and validate against it, and ensure there are mechanisms to observe the model against this framework and audit its performance over time.
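One possible mechanism, tying back to the FDA’s interest in recalibration “triggers”: monitor whether live inputs still look like the data the model was validated on. Below is a minimal sketch using the population stability index (PSI); the thresholds are a common industry rule of thumb, not a regulatory requirement.

```python
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between validation-time and live inputs."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf        # cover the full real line
    e = np.histogram(expected, cuts)[0] / len(expected)
    o = np.histogram(observed, cuts)[0] / len(observed)
    e, o = np.clip(e, 1e-6, None), np.clip(o, 1e-6, None)  # avoid log(0)
    return float(np.sum((o - e) * np.log(o / e)))

# Simulated data standing in for a real feature; seeded for reproducibility.
rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, 5000)  # distribution at validation
live_feature = rng.normal(0.4, 1.0, 5000)      # drifting production inputs

# Pre-defined trigger (rule of thumb): < 0.1 stable, 0.1-0.25 investigate,
# > 0.25 fires the recalibration trigger regulators ask about.
print(f"PSI = {psi(training_feature, live_feature):.3f}")
```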

Computer verification and validation – or what I actually find myself worrying about every day

xkcd “Voting Software” (https://xkcd.com/2030/)

This xkcd comic basically sums up my recent life. WFI system? Never seems to be a problem. Bioreactors? Work like clockwork. Cell growth? We’ve got that covered. The list goes on. And then we get to pure software systems, and I spend all my time and effort on them. I wish it was just my company, but let’s be frank: this stuff is harder than it should be, and don’t trust a single software company or consultant who wants to tell you otherwise.

I am both terrified and ecstatic as everything moves to the cloud. Terrified because these are the same people who can’t get stuff like time clocks right; ecstatic because maybe when we all have the exact same problem we will see some changes (misery loves company, which is why we all go to software conferences).

So, confessional moment done, let us turn to a few elements of a competent computer systems validation (CSV) program.

Remember your system is more than software and hardware

Any system is made up of Process, Technology, People and Organization. All four need to be evaluated, planned for, and tested every step of the way. Too many computer systems fall flat because they focus on technology and maybe a little process.

Utilize a risk-based approach regarding the impact of a computer system on product quality, patient and consumer safety, or related data integrity

Risk assessments allow for a detailed, analytical review of potential risks posed by a process or system. Not every computer system has the same expectations on its data. Health authorities recognize that and accept a risk-based approach, which is reflected across the various guidances and regulations, best practices (GAMP 5, for instance), and ISO standards (ISO 14971 is a great example).

Some of the benefits of taking this risk-based approach include:

  • Help to focus verification and validation efforts, allowing you to better concentrate resources on the higher-risk items
  • Determine which aspects of the system, and of the business process around the system, require risk mitigation controls to reduce risk related to patient safety, product quality, data integrity, or business risk
  • Build a better understanding of systems and processes from a quality and risk-based perspective
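As one illustration of how such an assessment can drive focus, here is a minimal FMEA-style scoring sketch; the scales, scores, and example functions are assumptions to be replaced by your own procedure:

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    function: str
    severity: int       # 1 (negligible) .. 5 (patient/product/data impact)
    probability: int    # 1 (rare) .. 5 (frequent)
    detectability: int  # 1 (always caught) .. 5 (invisible in use)

    @property
    def priority(self) -> int:
        # Classic risk priority number: severity x probability x detectability
        return self.severity * self.probability * self.detectability

items = [
    RiskItem("Electronic signature on batch release", 5, 2, 4),
    RiskItem("Report page formatting", 1, 3, 1),
]

# Spend verification effort on the highest-priority functions first.
for item in sorted(items, key=lambda i: i.priority, reverse=True):
    print(f"{item.priority:>3}  {item.function}")
```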

Don’t short the user requirements

A good user requirements process is critical. User requirements should include, among other things, the following (a structured sketch follows the list):

  • Technical Requirements: Should include things like capacity, performance, and hardware requirements.
  • System Functions: Should include things like calculations, logical security, audit trails, use of electronic signature.
  • Data: Should describe the data handling, definition of electronic records, required fields.
  • Environment: Should describe the physical conditions that the system will be required to operate in.
  • Interface: Should describe what other systems this system will interface with, and how.
  • Constraints: Should discuss compatibility, maximum allowable periods for downtime, and user skill levels.
  • Lifecycle Requirements: Include mandatory design methods or special testing requirements.
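A minimal sketch of one way to keep those categories testable and traceable: capture each requirement as a structured record that can later be linked to risk assessments and test cases. The fields and values below are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field

CATEGORIES = {"technical", "system function", "data", "environment",
              "interface", "constraint", "lifecycle"}

@dataclass
class UserRequirement:
    req_id: str
    category: str                  # one of CATEGORIES above
    statement: str                 # a single, testable "shall" statement
    risk_level: str                # feeds the (iterative) risk assessment
    test_refs: list[str] = field(default_factory=list)  # traceability to tests

    def __post_init__(self) -> None:
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

ur = UserRequirement(
    req_id="URS-042",
    category="system function",
    statement="The system shall maintain a computer-generated, time-stamped "
              "audit trail of all record creations, modifications, and deletions.",
    risk_level="high",
)
```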

Evaluate each of people, process, technology and organization.

These user requirements will be critical for performing a proper risk assessment. Said risk assessment is often iterative.

Build and test your system to mitigate risk

  • Eliminate risk through process or system redesign
  • Reduce risk by reducing the probability of a failure occurring (redundant design, a more reliable solution; see the sketch after this list)
  • Reduce risk by increasing the in-process detectability of a failure
  • Reduce risk by establishing downstream checks or error traps (e.g., fail-safe, or controlled fail state)
  • Increase the rigor of verification testing, which may reduce the assessed likelihood by providing new information that allows for a better assessment
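To put rough numbers on the probability-reduction bullet, a back-of-the-envelope sketch (failure rates are hypothetical, and the math assumes the redundant components fail independently):

```python
# Hypothetical annual failure probability of a single component.
p_single = 0.05

# Redundant design: both independent components must fail for the
# function to fail (independence is an assumption worth challenging).
p_redundant = p_single ** 2                  # 0.0025

# Add an in-process check that catches 90% of failures before impact
# (a detectability improvement); only the missed 10% remain.
p_residual = p_redundant * 0.10

print(f"single: {p_single}, redundant: {p_redundant}, with detection: {p_residual}")
```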

After performing verification and validation activities, return to your risk assessment.

Apply a lifecycle approach once live

  • Apply proper change management
  • Perform periodic reviews of the system. These should include: current range of functionality, access and training, process robustness (do the current operating procedures provide the desired outcome?), incident and deviation review, change history (including upgrades), performance, reliability, security, and a general review of the current verified/validated state. (A simple review-scheduling sketch follows this list.)
  • Return to the risk assessment on a periodic basis and refresh it based on new knowledge gained from the periodic review and other activities
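A simple sketch of what scheduling those periodic reviews might look like, with a hypothetical policy that scales review frequency with system risk:

```python
from datetime import date, timedelta

# Hypothetical policy: review interval scales with the system's risk level.
REVIEW_INTERVAL_DAYS = {"high": 365, "medium": 730, "low": 1095}

systems = [
    {"name": "LIMS", "risk": "high", "last_review": date(2017, 1, 15)},
    {"name": "Training tracker", "risk": "low", "last_review": date(2016, 6, 1)},
]

today = date.today()
for s in systems:
    due = s["last_review"] + timedelta(days=REVIEW_INTERVAL_DAYS[s["risk"]])
    if due <= today:
        print(f"{s['name']}: periodic review due since {due}")
```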

Do not separate any of this from your project management and development methodology

Too many times I’ve seen the hot new development lifecycle treat all of this as an afterthought to be done when the software is complete. That approach is expensive, and oh so frustrating.

 

Mylan gets a 32-page 483

The FDA’s April 483 for Mylan Pharmaceuticals has been at the forefront of a lot of conversations in the last week. Let’s be honest: the FDA posting a 32-page, 13-observation 483 report on any manufacturer would be news; on one as prominent as Mylan, doubly so. On the same day, the FDA also posted a 2016 483 and a 2017 warning letter against a Mylan facility in India.

The 483 is a hit parade of observations, starting with the first observation, failure of the quality unit, including a reference to lack of quality approval of change controls.

What everyone has been intensely focusing on is the strong emphasis on cleaning, with 11 pages dedicated to failures in cleaning validation.

Which, to be frank, is a big deal in a multi-product facility.

Read the 483, and when doing so evaluate your site’s cleaning program. Ask yourself some of these questions:

  • Are there appropriate cleaning procedures in place for all product-contact equipment and product-contact accessories?
  • Are there appropriate cleaning procedures in place for facility cleaning (dispensing, sampling room…)?
  • Do your procedures include the sequence of the cleaning activities? Are they sufficiently detailed?
  • Do the procedures address the different scenarios (cleaning between different batches of the same product, cleaning between product changes, holding time before and after cleaning…)?
  • Do the procedures address who is responsible for performing the cleaning?
  • Are the validation study, the acceptance criteria, the justification for when revalidation is required, and other key documentation approved by Quality? Do they include a clear status on the cleaning process?
  • Is the strategy used for the cleaning validation clearly established (matrix approach, dedicated equipment, worst-case scenario, grouping equipment, equipment train…)?
  • Are batches that come after the cleaning validation runs released only after completion of the cleaning validation?
  • Are the acceptance criteria (products, detergents, cleaning agents, micro…) scientifically established and followed? Do these acceptance criteria include a safety margin? (A worked MACO sketch follows this list.)
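On scientifically established acceptance criteria: one widely used, dose-based approach is a maximum allowable carryover (MACO) calculation. The sketch below uses hypothetical numbers; real limits must come from your toxicological assessment and validation master plan.

```python
# MACO (maximum allowable carryover), dose-based criterion.
# All numbers below are hypothetical placeholders.

tdd_prev_mg = 10.0        # smallest therapeutic daily dose, prior product (mg)
mbs_next_mg = 50_000_000  # minimum batch size of the next product (mg = 50 kg)
tdd_next_mg = 500.0       # largest daily dose of the next product (mg)
safety_factor = 1000      # a common choice for oral dosage forms

maco_mg = (tdd_prev_mg * mbs_next_mg) / (safety_factor * tdd_next_mg)
print(f"MACO = {maco_mg:.1f} mg total carryover into the next batch")

# Divide by the shared equipment surface area to derive a swab limit.
surface_cm2 = 100_000
print(f"Swab limit = {maco_mg / surface_cm2 * 1000:.2f} µg/cm²")
```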

Approval of cleaning validation is a key responsibility of the quality unit that involves some very specific requirements. These requirements should be built into the quality systems, including validation, deviation, and change management.