I find it interesting to read a different perspective. I tend to be a big fan of guidances (they always need work) as they help lay down how we can get better and improve. Having been on the front line of regulatory inspections probably more often than most lawyers, I recognize how guidances are treated differently than regulations, and how the agencies apply very long lead times before inspections begin to enforce this material. And frankly, the 483s and Warning Letters we are seeing come out of data integrity scare the beejeezus out of me. There is also a need for the FDA to ensure its thinking on these matters is aligned with our European and rest-of-world counterparts, especially in this day of mutual recognition agreements.
Regulatory and administrative law is definitely continually evolving. It is important to be aware of a variety of perspectives on the subject.
There must be document controls in place to assure product quality (see §§ 211.100, 211.160(a), 211.186, 212.20(d), and 212.60(g)). For example, bound paginated notebooks, stamped for official use by a document control group, provide good document control because they allow easy detection of unofficial notebooks as well as any gaps in notebook pages. If used, blank forms (e.g., electronic worksheets, laboratory notebooks, and MPCRs) should be controlled by the quality unit or by another document control method. As appropriate, numbered sets of blank forms may be issued and should be reconciled upon completion of all issued forms. Incomplete or erroneous forms should be kept as part of the permanent record along with written justification for their replacement (see, e.g., §§ 211.192, 211.194, 212.50(a), and 212.70(f)(1)(vi)). All data required to recreate a CGMP activity should be maintained as part of the complete record.
Question 6, “How should blank forms be controlled?”, page 7 of 13
The first sentence, “There must be document controls in place to assure product quality,” should be interpreted with a risk-based approach. All forms should always be published in a controlled manner, ideally from an electronic system that ensures the correct version is used and provides a time/date stamp of when the form is published. Some forms (based on risk) should be published in such a way that contemporaneity and originality are easier to prove. In other words, bind them.
A good rule of thumb for binding a printed form (which is now going to become a record) is as follows:
Is it one large form with individual pages contributing to the whole record that could be easily lost, misplaced or even intentionally altered?
Is it a form that provides chronological order to the same or similar pieces of information such as a logbook?
Is time of entry important?
Will this form live with a piece of equipment, an instrument, or a room for a period of time? Another way to phrase this: the form is not a once-and-done document that, upon completion, immediately moves along as a record in a review flow.
If you answer yes to any of these, then the default should be to bind it and control it through a central publishing function, traditionally called document control.
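The rule of thumb above can be sketched as a simple decision check. This is a hypothetical illustration, not part of any guidance; the criteria names are my own labels for the four questions.

```python
# Hypothetical sketch of the binding rule of thumb: if any criterion
# applies, the default is a bound form issued through document control.
from dataclasses import dataclass


@dataclass
class FormProfile:
    multi_page_whole: bool       # pages contribute to one record and could be lost or altered
    chronological: bool          # logbook-style, ordered entries of similar information
    time_of_entry_matters: bool  # contemporaneity must be provable
    lives_with_equipment: bool   # stays with equipment/instrument/room over time


def should_bind(form: FormProfile) -> bool:
    """Return True if the form should be bound and centrally issued."""
    return any([
        form.multi_page_whole,
        form.chronological,
        form.time_of_entry_matters,
        form.lives_with_equipment,
    ])


# Example: an instrument-use logbook answers yes to several questions
logbook = FormProfile(False, True, True, True)
print(should_bind(logbook))  # True
```

A one-off worksheet that is completed once and immediately enters a review flow would answer no to all four and could stay unbound.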
Potential risk of not meeting expectations/items to be checked
Distribution and Control, Item 2, page 17 of 52
Issue should be controlled by written procedures that include the following controls:
– Details of who issued the copies and when they were issued.
– Use of a secure stamp, or paper colour code not available in the working areas, or another appropriate system.
– Ensuring that only the current approved version is available for use.
– Allocating a unique identifier to each blank document issued and recording the issue of each document in a register.
– Numbering every distributed copy (e.g.: copy 2 of 2) and sequential numbering of issued pages in bound books. Where the re-issue of additional copies of the blank template is necessary, a controlled process regarding re-issue should be followed. All distributed copies should be maintained and a justification and approval for the need of an extra copy should be recorded, e.g.: “the original template record was damaged.”
– All issued records should be reconciled following use to ensure the accuracy and completeness of records.
Without the use of security measures, there is a risk that rewriting or falsification of data may occur after photocopying or scanning the template record (which gives the user another template copy to use). Obsolete versions can be used intentionally or in error. A filled record with an anomalous data entry could be replaced by a new rewritten template.
All unused forms should be accounted for, and either defaced and destroyed, or returned for secure filing.
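The issuance controls quoted above (unique identifiers, a register of who issued what and when, copy numbering, and reconciliation after use) can be sketched as a small register. This is a minimal illustration under my own assumptions, not an implementation of any particular system; the class and field names are hypothetical.

```python
# Hypothetical sketch of a blank-form issuance register: unique IDs,
# who/when details, "copy n of m" numbering, and reconciliation after use.
import datetime
import itertools


class IssuanceRegister:
    def __init__(self):
        self._seq = itertools.count(1)
        self.entries = {}  # form_id -> issue record

    def issue(self, template, version, issued_to, issued_by, copies=1):
        """Issue numbered copies of the current approved template version."""
        ids = []
        for n in range(1, copies + 1):
            form_id = f"{template}-{next(self._seq):05d}"
            self.entries[form_id] = {
                "template": template,
                "version": version,
                "copy": f"{n} of {copies}",
                "issued_to": issued_to,
                "issued_by": issued_by,
                "issued_at": datetime.datetime.now(datetime.timezone.utc),
                "reconciled": False,
            }
            ids.append(form_id)
        return ids

    def reconcile(self, form_id):
        """Mark an issued form as returned/accounted for after use."""
        self.entries[form_id]["reconciled"] = True

    def outstanding(self):
        """Forms issued but not yet reconciled -- the reconciliation gap."""
        return [fid for fid, e in self.entries.items() if not e["reconciled"]]


# Example: issue two numbered copies of a (hypothetical) batch record template
register = IssuanceRegister()
issued = register.issue("BR-1024", "v3", issued_to="QC Lab", issued_by="Doc Control", copies=2)
```

The `outstanding()` check is the point of the exercise: any form that was issued but never reconciled is exactly the gap the regulators are worried about.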
Did someone declare December Data Integrity month when I wasn’t looking? Though recent FDA announcements really mean that every month is data integrity month.
In the spirit of giving, the FDA published “Data Integrity and Compliance with Drug CGMP: Questions and Answers” on 13 Dec 2017. This guidance updates a draft version released in 2016 and has been revised to include additional information on the agency’s current thinking on best practices; it covers the design, operation, and monitoring of systems and controls to maintain data integrity.
The future is now. Industry 4.0 probably means you have algorithms in your process. For example, if you aren’t using algorithms to analyze deviations, you probably soon will be.
And with those algorithms come a whole host of questions on how to validate them and how to ensure they work properly over time. The FDA has indicated that “we want to get an understanding of your general idea for model maintenance.” FDA also wants to know the “trigger” for updating the model, the criteria for recalibration, and the level of validation of the model.
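What a maintenance “trigger” might look like in practice can be sketched simply: monitor a performance metric on recent data and flag the model for review when it drifts past a predefined criterion. The threshold, window size, and metric here are illustrative assumptions, not FDA requirements.

```python
# Hypothetical sketch of a model-maintenance trigger: compare rolling
# error on recent data against the baseline established at validation.
# tolerance and window are illustrative values, not regulatory criteria.
def needs_recalibration(recent_errors, baseline_error, tolerance=0.10, window=50):
    """Flag the model for review when rolling error drifts beyond tolerance."""
    if len(recent_errors) < window:
        return False  # not enough evidence yet to trigger a review
    rolling = sum(recent_errors[-window:]) / window
    return rolling > baseline_error * (1 + tolerance)


# Example: validated baseline error of 0.05; recent performance has degraded
recent = [0.07] * 50
print(needs_recalibration(recent, baseline_error=0.05))  # True
```

The point is that the trigger, the criterion, and the action taken when it fires are all defined and documented up front, which is exactly what an inspector will ask to see.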
Kate Crawford at Microsoft speaks about “data fundamentalism”: the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. It shouldn’t take much to realize why this trap can produce some very bad decision making. Our algorithms have biases, just as human beings have biases. They are dependent on the data models used to build and refine them.
Based on reported FDA thinking, and given where European regulators are in other areas, it is very clear we need to be able to explain and justify our algorithmic decisions. Machine learning is here now and will only grow more important.
Ask an Interesting Question
The first step is to be very clear on why there is a need for this system and what problem it is trying to solve. Alignment across all the stakeholders is key to ensuring the entire team is working toward the same purpose. Here we start building a framework.
Get the Data
The solution will only be as good as what it learns from. Per the common saying “garbage in, garbage out,” the problem is not with the machine learning tool itself; it lies in how the tool has been trained and what data it is learning from.
Explore the Data
Look at the raw data. Look at data summary. Visualize the data. Do it all again a different way. Notice things. Do it again. Probably get more data. Design experiments with the data.
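The explore loop can be sketched with nothing more than the standard library: summarize, then look again a different way and notice what stands out. The deviation cycle-time numbers below are invented for illustration.

```python
# A minimal sketch of the explore step: summarize, then look again
# from a different angle. Data values are illustrative, not real.
import statistics

deviation_cycle_days = [3, 5, 4, 30, 6, 5, 4, 7, 5, 6]

# First pass: summary statistics
mean = statistics.mean(deviation_cycle_days)
median = statistics.median(deviation_cycle_days)
stdev = statistics.stdev(deviation_cycle_days)
print(f"mean={mean:.1f} median={median} stdev={stdev:.1f}")

# Second pass, a different way: the gap between mean and median hints
# at a skewed tail, so go looking for the entries that cause it
outliers = [d for d in deviation_cycle_days if d > median + 2 * stdev]
print("possible outliers:", outliers)
```

Here the mean (7.5) sits well above the median (5), and the second look finds the 30-day deviation driving the skew, which is the kind of "notice things, do it again" finding that sends you back for more data.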
Model the Data
The only true way to validate a model is to observe, iterate, and audit. If we take a traditional computer system validation (CSV) model to machine learning, we are in for a lot of hurt. We need to take the framework we built and validate to it, ensuring there are mechanisms to observe against this framework and audit performance over time.
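The observe-and-audit mechanism can be sketched as simply as logging every prediction with its model version and inputs, so performance can later be audited against the validation framework. The function and field names here are hypothetical, and the stand-in model is deliberately trivial.

```python
# Hypothetical sketch of observe-and-audit: every prediction is logged
# with model version, inputs, and output so performance over time can
# be reviewed against the validation framework. Names are illustrative.
import datetime

audit_trail = []


def predict_and_log(model_version, inputs, predict_fn):
    """Run a prediction and record an auditable entry for it."""
    result = predict_fn(inputs)
    audit_trail.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": result,
    })
    return result


# Example with a trivial stand-in model (a temperature rule, not real logic)
out = predict_and_log("v1.2", {"temp": 21.5},
                      lambda x: "pass" if x["temp"] < 25 else "review")
print(out)               # pass
print(len(audit_trail))  # 1
```

With a trail like this in place, "audit performance over time" stops being a slogan: you can replay decisions, attribute each one to a model version, and feed the results back into the recalibration trigger.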