Computer Software Assurance Draft

The FDA published on 13-Sep-2022 the long-awaited draft guidance “Computer Software Assurance for Production and Quality System Software,” and you may, based on all the emails and postings, be wondering just how radical a change this is.

It’s not. This guidance is essentially one big “calm down, people” letter from the agency. They publish these sorts of guidances every now and then because we as an industry can sometimes learn the wrong lessons.

This guidance states:

  1. Determine intended use
  2. Perform a risk assessment
  3. Perform activities to the required level
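The three steps above can be sketched as a simple decision function. This is a hypothetical illustration only: the risk categories and activity names are my own, not terms from the draft guidance.

```python
# Hypothetical sketch of the three-step CSA logic. The categories and
# activity names are illustrative, not taken from the guidance itself.

def assurance_activities(direct_gxp_impact: bool, failure_risk: str) -> list[str]:
    """Pick assurance activities based on intended use and assessed risk."""
    if not direct_gxp_impact:
        # Systems without a direct quality impact can get lighter treatment.
        return ["vendor assessment", "unscripted ad-hoc testing"]
    if failure_risk == "high":
        # Direct impact plus high risk: full documented rigor.
        return ["scripted testing with documented evidence",
                "traceability to requirements"]
    # Direct impact but lower risk: lighter-weight methods are acceptable.
    return ["unscripted exploratory testing", "summary record of testing"]
```

The point is not the code but the shape of the reasoning: intended use gates the assessment, and risk scales the level of effort.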

I wrote about this approach in “Risk Based Data Integrity Assessment,” and it has existed in GAMP5 and other approaches for years.

So read the guidance, but don’t panic. You are either following it already or you just need to spend some time getting better at risk assessments and creating some matrix approaches.

Review of Audit Trails

One of the data integrity requirements that has changed in detail as the various guidances (FDA, MHRA, PIC/S) have gone through draft is the review of audit trails. This will probably also be one of the more controversial in certain corners, as some see it as going beyond what has traditionally been the focus of good documentation practices and computer system validation.

What the guidances say

Audit trail review is similar to assessing cross-outs on paper when reviewing data. Personnel responsible for record review under CGMP should review the audit trails that capture changes to data associated with the record as they review the rest of the record (e.g., §§ 211.22(a), 211.101(c) and (d), 211.103, 211.182, 211.186(a), 211.192, 211.194(a)(8), and 212.20(d)). For example, all production and control records, which includes audit trails, must be reviewed and approved by the quality unit (§ 211.192). The regulations provide flexibility to have some activities reviewed by a person directly supervising or checking information (e.g., § 211.188). FDA recommends a quality system approach to implementing oversight and review of CGMP records.

US FDA. “Who should review audit trails?” Data Integrity and Compliance With Drug CGMP: Questions and Answers, Guidance for Industry. Section 7, page 8

If the review frequency for the data is specified in CGMP regulations, adhere to that frequency for the audit trail review. For example, § 211.188(b) requires review after each significant step in manufacture, processing, packing, or holding, and § 211.22 requires data review before batch release. In these cases, you would apply the same review frequency for the audit trail. If the review frequency for the data is not specified in CGMP regulations, you should determine the review frequency for the audit trail using knowledge of your processes and risk assessment tools. The risk assessment should include evaluation of data criticality, control mechanisms, and impact on product quality. Your approach to audit trail review and the frequency with which you conduct it should ensure that CGMP requirements are met, appropriate controls are implemented, and the reliability of the review is proven.


US FDA. “How often should audit trails be reviewed?” Data Integrity and Compliance With Drug CGMP: Questions and Answers, Guidance for Industry. Section 8, page 8
1. Expectations:
  • Consideration should be given to data management and integrity requirements when purchasing and implementing computerised systems. Companies should select software that includes appropriate electronic audit trail functionality.
  • Companies should endeavour to purchase and upgrade older systems to implement software that includes electronic audit trail functionality.
  • It is acknowledged that some very simple systems lack appropriate audit trails; however, alternative arrangements to verify the veracity of data must be implemented, e.g. administrative procedures, secondary checks and controls. Additional guidance may be found under section 9.9 regarding Hybrid Systems.
  • Audit trail functionality should be verified during validation of the system to ensure that all changes and deletions of critical data associated with each manual activity are recorded and meet ALCOA+ principles.
  • Audit trail functionalities must be enabled and locked at all times and it must not be possible to deactivate the functionality. If it is possible for administrative users to deactivate the audit trail functionality, an automatic entry should be made in the audit trail indicating that the functionality has been deactivated.
  • Companies should implement procedures that outline their policy and processes for the review of audit trails in accordance with risk management principles. Critical audit trails related to each operation should be independently reviewed with all other records related to the operation and prior to the review of the completion of the operation, e.g. prior to batch release, so as to ensure that critical data and changes to it are acceptable. This review should be performed by the originating department, and where necessary verified by the quality unit, e.g. during self-inspection or investigative activities.

Potential risk of not meeting expectations / items to be checked:
  • Validation documentation should demonstrate that audit trails are functional, and that all activities, changes and other transactions within the systems are recorded, together with all metadata.
  • Verify that audit trails are regularly reviewed (in accordance with quality risk management principles) and that discrepancies are investigated.
  • If no electronic audit trail system exists, a paper-based record to demonstrate changes to data may be acceptable until a fully audit-trailed system (integrated system or independent audit software using a validated interface) becomes available. These hybrid systems are permitted, where they achieve equivalence to an integrated audit trail, such as described in Annex 11 of the PIC/S GMP Guide.
  • Failure to adequately review audit trails may allow manipulated or erroneous data to be inadvertently accepted by the Quality Unit and/or Authorised Person.
  • Clear details of which data are critical, and which changes and deletions must be recorded (audit trail), should be documented.

2. Expectations:
  • Where available, audit trail functionalities for electronic-based systems should be assessed and configured properly to capture any critical activities relating to the acquisition, deletion, overwriting of and changes to data for audit purposes.
  • Audit trails should be configured to record all manually initiated processes related to critical data.
  • The system should provide a secure, computer-generated, time-stamped audit trail to independently record the date and time of entries and actions that create, modify, or delete electronic records.
  • The audit trail should include the following parameters: who made the change; what was changed, incl. old and new values; when the change was made, incl. date and time; why the change was made (reason); and the name of any person authorising the change.
  • The audit trail should allow for reconstruction of the course of events relating to the creation, modification, or deletion of an electronic record. The system must be able to print and provide an electronic copy of the audit trail, and whether looked at in the system or in a copy, the audit trail should be available in a meaningful format.
  • If possible, the audit trail should retain the dynamic functionalities found in the computer system, e.g. search functionality and export to, e.g., Excel.

Potential risk of not meeting expectations / items to be checked:
  • Verify the format of audit trails to ensure that all critical and relevant information is captured.
  • The audit trail must include all previous values, and record changes must not obscure previously recorded information.
  • Audit trail entries should be recorded in true time and reflect the actual time of activities. Systems recording the same time for a number of sequential interactions, or which only make an entry in the audit trail once all interactions have been completed, may not be in compliance with expectations for data integrity, particularly where each discrete interaction or sequence is critical, e.g. for the electronic recording of the addition of 4 raw materials to a mixing vessel. If the order of addition is a CPP, then each addition should be recorded individually, with time stamps. If the order of addition is not a CPP, then the addition of all 4 materials could be recorded as a single time-stamped activity.

PIC/S. PI 041-1 “Good Practices for Data Management and Data Integrity in Regulated GMP/GDP Environments” (3rd draft), section 9.4 “Audit trail for computerised systems,” page 36
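The PIC/S point about true-time entries can be made concrete. A minimal sketch, with hypothetical field names, of recording each raw-material addition as its own time-stamped audit entry (the who/what/when/why parameters the draft calls for):

```python
from datetime import datetime, timezone

# Illustrative sketch only: if order of addition is a CPP, each addition
# gets its own audit-trail entry with a true timestamp. Field names and
# the record_addition helper are hypothetical, not from PIC/S.

def record_addition(audit_trail: list, user: str, material: str,
                    reason: str = "batch step") -> None:
    """Append one discrete, time-stamped entry per manual activity."""
    audit_trail.append({
        "who": user,
        "what": f"added {material}",
        "when": datetime.now(timezone.utc).isoformat(),
        "why": reason,
    })

trail = []
for material in ["material A", "material B", "material C", "material D"]:
    record_addition(trail, "operator1", material)
# Four entries, each reflecting the actual time of its activity -- not one
# entry written after all interactions have been completed.
```

The contrast with a non-compliant design is a system that buffers all four additions and writes a single entry at the end, losing the sequence the CPP depends on.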

Thoughts

It has long been the requirement that computer systems have audit trails and that these be convertible to a format that can be reviewed as appropriate. What these guidances are stating is:

  • There are key activities captured in the audit trail. These key activities are determined in a risk-based manner.
  • These key activities need to be reviewed when making decisions based on them (determine a frequency).
  • The audit trail needs to be able to show the reviewer the key activity.
  • These reviews need to be captured in the quality system (proceduralized, recorded).
  • This is part of the validated state of your system.

So, for example, my deviation system is evaluated and the key activity that needs to be reviewed is the decision to forward process. In this deviation workflow, quality makes that determination at several points. The audit trail review would thus look at who made the decision, when, and whether it met criteria. The frequency might be established as: at the point of disposition for any deviation still in an open state, and again upon closure.

What we are being asked to do is evaluate all our computer systems and figure out which parts of the audit trail need to be reviewed, and when.

Now here’s the problem: most audit trails are garbage. Maybe they are human readable by some vague definition of readable (or even of human). But they don’t have filters, search, or templates. So companies need to be evaluating their audit trails system by system (again using a risk-based approach) to see if they are up to the task. You then end up with one or more solutions:

  • Rebuild the audit trail to make it human readable and to provide filters and search criteria. For example, on a deviation record there is one view for “disposition” and another for “closure.”
  • Add reports (such as a set of Crystal Reports) to make it human readable and to provide filters and search criteria. You would probably end up with one report for “disposition” and another for “closure.”
  • Utilize an export function to Excel (or a similar program) and use Excel’s functions to filter and search. Remember to ensure you have a data verification process in place.
  • The best solution is to ensure the audit trail review is a step in your workflow and the review is captured as part of the audit trail. Ideally this is part of an exception reporting process driven by the system.
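The export-and-filter option above can be sketched in a few lines. This is an illustrative example only: the column names and the deviation records are invented to show the "one filtered view per review step" idea.

```python
import csv
import io

# Hypothetical audit-trail export (e.g. a CSV pulled from the system).
# Column names and records are illustrative, not from any real system.
export = """user,action,record,timestamp
qa1,disposition,DEV-001,2022-09-01T10:00
qa2,closure,DEV-001,2022-09-05T16:30
op1,edit,DEV-002,2022-09-02T09:15
"""

rows = list(csv.DictReader(io.StringIO(export)))

# One filtered "view" per review step, mirroring the report-per-step idea:
# a "disposition" view and a "closure" view of the same trail.
disposition_view = [r for r in rows if r["action"] == "disposition"]
closure_view = [r for r in rows if r["action"] == "closure"]
```

If the filtering lives in a spreadsheet instead, the same logic applies, and the data verification process mentioned above becomes the check that the filter did not silently drop or alter rows.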

What risk based questions should drive this?

  • Overall risk of the system
  • Capabilities of audit trail
  • Can the data be modified after entry? Can it be modified prior to approval?
  • Is the result qualitative or quantitative?
  • Are changes to data visible on the record itself?
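One way to make these questions operational is a simple scoring sketch. The weights and thresholds below are entirely hypothetical; the point is only to show how the answers could combine to drive review rigor.

```python
# Illustrative risk-scoring sketch: the questions map to inputs, and the
# score drives how intensively the audit trail gets reviewed. Weights
# and cutoffs are hypothetical, not from any guidance.

def audit_review_rigor(system_risk: int, trail_capable: bool,
                       modifiable_after_entry: bool,
                       changes_visible: bool) -> str:
    score = system_risk                    # e.g. 1 (low) to 3 (high)
    score += 0 if trail_capable else 1     # weak trails need compensating review
    score += 1 if modifiable_after_entry else 0
    score += 0 if changes_visible else 1   # invisible changes raise the stakes
    if score >= 4:
        return "review at each decision point"
    if score >= 2:
        return "periodic review"
    return "review by exception"
```

A high-risk system with a weak, editable, opaque trail lands at "review at each decision point," while a low-risk system with a capable trail can justify review by exception.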

Considerations when validating and auditing algorithms

The future is now. Industry 4.0 probably means you have algorithms in your process. For example, if you aren’t using algorithms to analyze deviations, you probably soon will be.

And with those algorithms comes a whole host of questions on how to validate them and how to ensure they work properly over time. The FDA has indicated that “we want to get an understanding of your general idea for model maintenance.” FDA also wants to know the “trigger” for updating the model, the criteria for recalibration, and the level of validation of the model.

Kate Crawford at Microsoft speaks about “data fundamentalism” – the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. It shouldn’t take much to realize why this trap can produce some very bad decision making. Our algorithms have biases, just as human beings have biases. They are dependent on the data models used to build and refine them.

Based on reported FDA thinking, and given where European regulators are in other areas, it is very clear we need to be able to explain and justify our algorithmic decisions. Machine learning is here now and will only grow more important.

Basic model of data science

Ask an Interesting Question

The first step is to be very clear on why there is a need for this system and what problem it is trying to solve. Alignment across all the stakeholders is key to guaranteeing that the entire team is working toward the same purpose. Here we start building a framework.

Get the Data

The solution will only be as good as what it learns from. As the common saying goes, “garbage in, garbage out”: the problem is not with the machine learning tool itself; it lies with how it has been trained and what data it is learning from.

Explore the Data

Look at the raw data. Look at a data summary. Visualize the data. Do it all again a different way. Notice things. Do it again. Probably get more data. Design experiments with the data.
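A minimal sketch of that look-summarize-repeat loop, using a hypothetical deviation cycle-time dataset (days from open to closure); the numbers are invented for illustration:

```python
import statistics

# Hypothetical dataset: days to close each of ten deviations.
cycle_times = [4, 7, 3, 30, 6, 5, 8, 41, 6, 7]

# First pass: a basic summary of the raw data.
summary = {
    "n": len(cycle_times),
    "mean": statistics.mean(cycle_times),
    "median": statistics.median(cycle_times),
    "max": max(cycle_times),
}

# A mean well above the median hints at outliers (here, the 30- and
# 41-day deviations) worth investigating -- which sends you back for
# more data, exactly the loop described above.
```

The second look (why are two deviations an order of magnitude slower?) is where "notice things, do it again" pays off.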

Model the Data

The only true way to validate a model is to observe, iterate, and audit. If we take a traditional CSV approach to machine learning, we are in for a lot of hurt. We need to take the framework we built and validate to it, ensuring there are mechanisms to observe against this framework and audit performance over time.
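The FDA questions quoted earlier (the “trigger” for updating the model, the criteria for recalibration) can be sketched as a monitoring check. The thresholds here are hypothetical, purely to show the shape of a documented recalibration trigger:

```python
# Sketch of "observe, iterate and audit": compare recent model accuracy
# against the accuracy established at validation, and flag for
# recalibration when it degrades. Both numbers are illustrative.

VALIDATED_ACCURACY = 0.90   # performance established during initial validation
DEGRADATION_LIMIT = 0.05    # hypothetical allowed drop before action

def needs_recalibration(recent_correct: int, recent_total: int) -> bool:
    """True when observed accuracy falls below the documented floor."""
    observed = recent_correct / recent_total
    return observed < VALIDATED_ACCURACY - DEGRADATION_LIMIT
```

The value of writing the trigger down like this is that the answer to “what is your criterion for recalibration?” becomes a documented, auditable number rather than a judgment call made after the fact.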

Computer system changes and the GAMP5 framework

Appropriate controls shall be exercised over computer or related systems to assure that changes in master production and control records or other records are instituted only by authorized personnel. Input to and output from the computer or related system of formulas or other records or data shall be checked for accuracy. The degree and frequency of input/output verification shall be based on the complexity and reliability of the computer or related system. A backup file of data entered into the computer or related system shall be maintained except where certain data, such as calculations performed in connection with laboratory analysis, are eliminated by computerization or other automated processes. In such instances a written record of the program shall be maintained along with appropriate validation data. Hard copy or alternative systems, such as duplicates, tapes, or microfilm, designed to assure that backup data are exact and complete and that it is secure from alteration, inadvertent erasures, or loss shall be maintained.

21 CFR 211.68(b)

Kris Kelly over at Advantu got me thinking about GAMP5 today. As a result I went to the FDA’s Inspection Observations page and was quickly reminded that in 2017 one of the top ten highest citations was against 211.68(b), with the largest frequency being “Appropriate controls are not exercised over computers or related systems to assure that changes in master production and control records or other records are instituted only by authorized personnel.”

Similar requirements are found throughout the regulations of all major markets (for example EU 5.25) and data integrity is a big piece of this pie.

So yes, GAMP5 is probably one of your best tools for computer system validation. But this is also an argument for having one change management system/one change control process.

When building your change management system, remember that your change is both a change to a validated system and a change to a process, and needs to go through the appropriate rigor on both ends. Companies continue to get in a lot of trouble on this, especially when you add in the impact of master data.

Make sure your IT organization is fully aligned. There’s a tendency at many companies (including mine) to build walls between an ITIL-oriented change process and process changes. Change management needs to be driven by a risk-based approach that finds the opportunities to tear down those walls. I’m spending a lot of my time finding ways to do this and, to be honest, I worry that there aren’t enough folks on the IT side of the fence willing to help tear it down.

So yes, GAMP5 is a great tool. Maybe one of the best frameworks we have available.


Computer verification and validation – or what do I actually find myself worrying about every day

xkcd “Voting Software” https://xkcd.com/2030/

This xkcd comic basically sums up my recent life. WFI system? Never seems to be a problem. Bioreactors? Work like clockwork. Cell growth? We’ve got that covered. The list goes on. And then we get to pure software systems, and I spend all my time and effort on them. I wish it were just my company, but let’s be frank: this stuff is harder than it should be, and don’t trust a single software company or consultant who wants to tell you otherwise.

I am both terrified and ecstatic as everything moves to the cloud. Terrified because these are the same people who can’t get stuff like time clocks right, ecstatic because maybe when we all have the exact same problem we will see some changes (misery loves company, this is why we all go to software conferences).

So, confessional moment done, let us turn to a few elements of a competent computer system validation (CSV) program.

Remember your system is more than software and hardware

Any system is made up of Process, Technology, People and Organization. All four need to be evaluated, planned for, and tested every step of the way. Too many computer systems fall flat because they focus on technology and maybe a little process.

Utilize a risk-based approach regarding a computer system’s impact on product quality, patient and consumer safety, or related data integrity.

Risk assessments allow for a detailed, analytical review of potential risks posed by a process or system. Not every computer system has the same expectations on its data. Health authorities recognize that, and accept a risk-based approach. This is reflected across the various guidances and regulations, best practices (GAMP 5, for instance), and ISO standards (ISO 14971 is a great example).

Some of the benefits of taking this risk based approach include:

  • Help to focus verification and validation efforts, allowing you to concentrate resources on the higher-risk items
  • Determine which aspects of the system, and/or the business process around the system, require risk mitigation controls to reduce risk related to patient safety, product quality, data integrity, or business risk
  • Build a better understanding of systems and processes from a quality and risk-based perspective

Don’t short the user requirements

A good user requirement process is critical. User requirements should include, among other things:

  • Technical Requirements: Should include things like capacity, performance, and hardware requirements.
  • System Functions: Should include things like calculations, logical security, audit trails, use of electronic signature.
  • Data: Should describe the data handling, definition of electronic records, required fields.
  • Environment: Should describe the physical conditions that the system will be required to operate in.
  • Interface: What will this system interface with, and how?
  • Constraints: discuss compatibility, maximum allowable periods for downtime, user skill levels.
  • Lifecycle Requirements: Include mandatory design methods or special testing requirements.
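The categories above can be captured in a simple traceable structure. A minimal sketch, assuming a hypothetical requirement format and IDs of my own invention, showing how each requirement carries its category and risk level into the (iterative) risk assessment:

```python
from dataclasses import dataclass

# Hypothetical structure for capturing user requirements by category so
# each one can be traced into the risk assessment and into test cases.
# The field names, IDs, and example statements are illustrative only.

@dataclass
class UserRequirement:
    req_id: str
    category: str      # Technical, System Function, Data, Environment, ...
    statement: str
    risk_level: str    # feeds the iterative risk assessment

requirements = [
    UserRequirement("UR-001", "System Function",
                    "All changes to records are audit trailed.", "high"),
    UserRequirement("UR-002", "Interface",
                    "Results are transferred to the LIMS nightly.", "medium"),
]

# The risk assessment can then slice requirements by risk to focus
# verification effort where it matters most.
high_risk = [r.req_id for r in requirements if r.risk_level == "high"]
```

However it is stored (spreadsheet, requirements tool, database), the essential property is the same: every requirement is individually identifiable, categorized, and traceable to a risk rating and to testing.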

Evaluate each of people, process, technology and organization.

This set of user requirements will be critical for performing a proper risk assessment. Said risk assessment is often iterative.

Build and test your system to mitigate risk

  • Eliminating risk through process or system redesign
  • Reduce risk by reducing the probability of a failure occurring (redundant design, more reliable solution)
  • Reduce risk by increasing the in-process detectability of a failure
  • Reduce risk by establishing downstream checks or error traps (e.g., fail-safe, or controlled fail state)
  • Increased rigor of verification testing may reduce the assessed likelihood by providing new information that allows for a better assessment
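A concrete instance of the error-trap bullet above: reject an out-of-range value at the point of entry rather than catching it downstream. The acceptance range and function name here are hypothetical, invented only to illustrate the pattern.

```python
# Sketch of an in-process error trap: refuse to record a value outside
# its acceptance range instead of detecting the failure after the fact.
# The pH range is illustrative, not a real specification.

PH_RANGE = (6.5, 7.5)  # hypothetical acceptance range

def record_ph(value: float) -> float:
    """Fail safe: raise rather than silently record an out-of-range value."""
    low, high = PH_RANGE
    if not (low <= value <= high):
        raise ValueError(f"pH {value} outside acceptance range {PH_RANGE}")
    return value
```

This is risk reduction by increased detectability: the failure mode (a bad value entering the record) is blocked at the moment it occurs, which is cheaper and safer than a downstream review finding it later.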

After performing verification and validation activities, return to your risk assessment.

Apply a lifecycle approach once live

  • Apply proper change management
  • Perform periodic reviews of the system. This should include: current range of functionality, access and training, process robustness (do the current operating procedures provide the desired outcome), incident and deviation review, change history (including upgrades), performance, reliability, security and a general review of the current verified/validated state.
  • Ensure the risk assessment is returned to. On a periodic basis return to it and refresh based on new knowledge gained from the periodic review and other activities.

Do not separate any of this from your project management and development methodology

Too many times I’ve seen the hot new development lifecycle treat all of this as an afterthought to be done when the software is complete. That approach is expensive, and oh so frustrating.