With data integrity on everyone’s mind the last few years, the role of the data steward is being discussed more and more. Putting aside my amusement at the proliferation of stewards and champions across our quality systems, the idea of data stewards is a good one.
A data steward is someone from the business who handles master data. It is not an IT role; a good data steward is truly invested in how the data is used, managed, and groomed. The data steward is responsible and accountable for how data enters the system and for ensuring it adds value to the process.
The job revolves around, but is not limited to, the following questions:
Why is this particular data important to the organization?
How long should the particular records (data) be stored or kept?
What measurements can improve the quality of that data and the analyses built on it?
Data stewards do this by providing:
Operational oversight, by overseeing the data life cycle: defining and implementing policies and procedures for the day-to-day operational and administrative management of systems and data, including the intake, storage, processing, and transmission of data to internal and external systems. They are accountable for defining and documenting data and terminology in a relevant glossary, including ensuring that each critical data element has a clear definition and is still in use.
Data quality, including evaluation and root cause analysis
Risk management, including retention, archival, and disposal requirements and ensuring compliance with internal policy and regulations.
With systems being made up of people, process, and technology, the line between data steward and system owner is pretty vague. When a technology is tied to a single system or process, it makes sense for the two roles to be held by the same person (or team); a document management system is a good example. However, most technology platforms span multiple systems or processes (for example, an ERP or quality management system), and it is critical for the data steward to look at the technology holistically. I think we are all familiar with the problems created when the same piece of data is treated differently between workflows in a technology platform.
As organizations evolve their data governance I think we will see the role of the data steward become more and more part of the standard quality toolbox, as the competencies are pretty similar.
On my.ASQ.org the following question was asked “The Device History Record is a form in fillable PDF format. Worker opens the PDF from a secure source within the local network. The only thing they can change is checkmark Pass/Fail, Yes/No and enter serial numbers in the allowed fields. Then after the assembly process is done for each procedure, the worker prints the DHR, signs and dates it by hand, to verify the accuracy of data entered. No re-printing or saving PDF’s is allowed.”
This comes up a lot. This is really a simple version of a hybrid situation, where both electronic and paper versions of the record exist.
Turning to the PIC/S draft guidance, we find on page 44 of 52: “Each element of the hybrid system should be qualified and controlled in accordance with the guidance relating to manual and computerised systems.”
Here would be my recommendation (and it’s one tried and tested).
The PDF form needs to be under the same document management system and controls as any other form, ideally the exact same system. This provides version control and change management for the form. It also lets users know they have the current version at all times.
Once it is printed, the paper version is the record. It carries a wet signature and is under all the same predicate record requirements. This record gets archived appropriately.
Where I have seen companies get messed up here is when the PDF exists in a separate, usually poorly controlled, system apart from the rest of your document management. Situations like this should really be evaluated from the document management perspective and not the computer systems life-cycle perspective. But it’s all data integrity.
One of the requirements for data integrity that has changed in detail as the various guidances (FDA, MHRA, PIC/S) have gone through draft has been review of audit trails. This will also probably be one of the more controversial in certain corners as it can be seen by some as going beyond what has traditionally been the focus of good document practices and computer system validation.
What the guidances say
Audit trail review is similar to assessing cross-outs on paper when reviewing data. Personnel responsible for record review under CGMP should review the audit trails that capture changes to data associated with the record as they review the rest of the record (e.g., §§ 211.22(a), 211.101(c) and (d), 211.103, 211.182, 211.186(a), 211.192, 211.194(a)(8), and 212.20(d)). For example, all production and control records, which includes audit trails, must be reviewed and approved by the quality unit (§ 211.192). The regulations provide flexibility to have some activities reviewed by a person directly supervising or checking information (e.g., § 211.188). FDA recommends a quality system approach to implementing oversight and review of CGMP records.
If the review frequency for the data is specified in CGMP regulations, adhere to that frequency for the audit trail review. For example, § 211.188(b) requires review after each significant step in manufacture, processing, packing, or holding, and § 211.22 requires data review before batch release. In these cases, you would apply the same review frequency for the audit trail. If the review frequency for the data is not specified in CGMP regulations, you should determine the review frequency for the audit trail using knowledge of your processes and risk assessment tools. The risk assessment should include evaluation of data criticality, control mechanisms, and impact on product quality. Your approach to audit trail review and the frequency with which you conduct it should ensure that CGMP requirements are met, appropriate controls are implemented, and the reliability of the review is proven.
Potential risk of not meeting expectations / items to be checked
Consideration should be given to data management and integrity requirements when purchasing and implementing computerised systems. Companies should select software that includes appropriate electronic audit trail functionality.

Companies should endeavour to purchase and upgrade older systems to implement software that includes electronic audit trail functionality.

It is acknowledged that some very simple systems lack appropriate audit trails; however, alternative arrangements to verify the veracity of data must be implemented, e.g. administrative procedures, secondary checks and controls. Additional guidance may be found under section 9.9 regarding Hybrid Systems.
Audit trail functionality should be verified during validation of the system to ensure that all changes and deletions of critical data associated with each manual activity are recorded and meet ALCOA+ principles.
Audit trail functionalities must be enabled and locked at all times and it must not be possible to deactivate the functionality. If it is possible for administrative users to deactivate the audit trail functionality, an automatic entry should be made in the audit trail indicating that the functionality has been deactivated.
Companies should implement procedures that outline their policy and processes for the review of audit trails in accordance with risk management principles. Critical audit trails related to each operation should be independently reviewed with all other records related to the operation and prior to the review of the completion of the operation, e.g. prior to batch release, so as to ensure that critical data and changes to it are acceptable. This review should be performed by the originating department, and where necessary verified by the quality unit, e.g. during self-inspection or investigative activities.
…should demonstrate that audit trails are functional, and that all activities, changes and other transactions within the systems are recorded, together with…
Verify that audit trails are regularly reviewed (in accordance with quality risk management principles) and that discrepancies are investigated.
If no electronic audit trail system exists, a paper based record to demonstrate changes to data may be acceptable until a fully audit trailed (integrated system or independent audit software using a validated interface) system becomes available. These hybrid systems are permitted, where they achieve equivalence to integrated audit trail, such as described in Annex 11 of the PIC/S GMP Guide.
Failure to adequately review audit trails may allow manipulated or erroneous data to be inadvertently accepted by the Quality Unit and/or Authorised Person.
Clear details of which data are critical, and which changes and deletions must be recorded (audit trail), should be documented.
Where available, audit trail functionalities for electronic-based systems should be assessed and configured properly to capture any critical activities relating to the acquisition, deletion, overwriting of and changes to data for audit purposes.
Audit trails should be configured to record all manually initiated processes related to critical…

The system should provide a secure, computer generated, time stamped audit trail to independently record the date and time of entries and actions that create, modify, or delete…
The audit trail should include the following parameters:
– Who made the change
– What was changed, incl. old and new values
– When the change was made, incl. date and time
– Why the change was made
– Name of any person authorising the change.

The audit trail should allow for reconstruction of the course of events relating to the creation, modification, or deletion of an electronic record.
The system must be able to print and provide an electronic copy of the audit trail, and whether looked at in the system or in a copy, the audit trail should be available in a…

If possible, the audit trail should retain the dynamic functionalities found in the computer system, e.g. search functionality and export to e.g. Excel.

…format of audit trails to ensure that all critical and relevant information…
The audit trail must include all previous values, and record changes must not obscure previously recorded information.
[Audit trail] entries should be recorded in true time and reflect the actual time of activities. Systems recording the same time for a number of sequential interactions, or which only make an entry in the audit trail once all interactions have been completed, may not be in compliance with expectations to data integrity, particularly where each discrete interaction or sequence is critical, e.g. for the electronic recording of addition of 4 raw materials to a mixing vessel. If the order of addition is a CPP, then each addition should be recorded individually, with time stamps. If the order of addition is not a CPP, then the addition of all 4 materials could be recorded as a single timestamped activity.
It has long been the requirement that computer systems have audit trails and that these be convertible to a format that can be reviewed as appropriate. What these guidances are stating is:
There are key activities captured in the audit trail. These key activities are determined in a risk-based manner.
These key activities need to be reviewed when making decisions based on them (determine a frequency)
The audit trail needs to be able to show the reviewer the key activity
These reviews need to be captured in the quality system (proceduralized, recorded)
This is part of the validated state of your system
So, for example, my deviation system is evaluated, and the key activity that needs to be reviewed is the decision to forward-process. In this deviation decision, quality makes the determination at several points of the workflow. The audit trail review would thus look at who made the decision, when, and whether that met criteria. The frequency might be established at the point of disposition for any deviation still in an open state, and upon closure.
What we are being asked is to evaluate all your computer systems and figure out what parts of the audit trail need to be reviewed when.
Now here’s the problem. Most audit trails are garbage. Maybe they are human readable by some vague definition of readable (or even human). But they don’t have filters, or search, or templates. So companies need to be (again, using a risk-based approach) evaluating their audit trails system by system to see if they are up to the task. You then end up with one or more solutions:
Rebuild the audit trail to make it human readable and give filters and search criteria. For example on a deviation record there is one view for “disposition” and another for “closure”
Add reports (such as a set of crystal reports) to make it human readable and give filters and search criteria. Probably end up with a report for “disposition” and another report for “closure.”
Utilize an export function to Excel (or a similar program) and use Excel’s functions to filter and search. Remember to ensure you have a data verification process in place.
The best solution is to ensure the audit trail is a step in your workflow and the review is captured as part of the audit trail. Ideally this is part of an exception reporting process driven by the system.
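The export-and-filter option can be sketched in a few lines. Assume the system dumps its audit trail to CSV (the column names and deviation IDs below are illustrative, not from any real product); the reviewer then filters down to just the key activities the risk assessment flagged, e.g. “disposition” and “closure”:

```python
import csv
from io import StringIO

# Hypothetical CSV export of a deviation system's audit trail.
EXPORT = """timestamp,user,record_id,action,old_value,new_value
2023-01-05T10:02:00,jdoe,DEV-101,disposition,open,forward-process
2023-01-05T10:07:00,asmith,DEV-101,comment,,reviewed attachments
2023-01-06T09:30:00,qa_lead,DEV-101,closure,open,closed
"""

def key_activities(export_csv, actions=("disposition", "closure")):
    """Filter the raw export down to the key activities flagged
    for review by the risk assessment."""
    reader = csv.DictReader(StringIO(export_csv))
    return [row for row in reader if row["action"] in actions]

# The reviewer sees only the decisions that matter, not every comment.
for row in key_activities(EXPORT):
    print(row["timestamp"], row["user"], row["action"], row["new_value"])
```

Note this is exactly where the data verification process mentioned above matters: the filter is only as trustworthy as the export it runs over.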
I find it interesting to read a different perspective. I tend to be a big fan of guidances (they always need work) as they help lay down how we can get better and improve. Being on the front line of regulatory inspections probably more than a group of lawyers, I recognize the differences in how guidances are treated compared to regulations, and how the agencies apply very long lead times on how inspections treat this material. And frankly, the 483s and Warning Letters we are seeing coming out of data integrity scare the beejeezus out of me. There is also a need for the FDA to ensure its thinking on matters is aligned with our European and rest-of-world counterparts, especially in this day of mutual recognition agreements.
Regulatory and administrative law is definitely continually evolving. It is important to be aware of a variety of perspectives on the subject.
There must be document controls in place to assure product quality (see §§ 211.100, 211.160(a),211.186, 212.20(d), and 212.60(g)). For example, bound paginated notebooks, stamped for official use by a document control group, provide good document control because they allow easy detection of unofficial notebooks as well as any gaps in notebook pages. If used, blank forms (e.g., electronic worksheets, laboratory notebooks, and MPCRs) should be controlled by the quality unit or by another document control method. As appropriate, numbered sets of blank forms may be issued and should be reconciled upon completion of all issued forms. Incomplete or erroneous forms should be kept as part of the permanent record along with written justification for their replacement (see, e.g., §§ 211.192, 211.194, 212.50(a), and 212.70(f)(1)(vi)). All data required to recreate a CGMP activity should be maintained as part of the complete record.
This is from question 6, “How should blank forms be controlled?”, on page 7 of 13.
The first sentence, “There must be document controls in place to assure product quality,” should be interpreted with a risk-based approach. All forms should always be published in a controlled manner, ideally from an electronic system that ensures the correct version is used and provides a time/date stamp of when the form is published. Some forms (based on risk) should be published in such a way that contemporaneity and originality are easier to prove. In other words, bind them.
A good rule of thumb for binding a printed form (which is now going to become a record) is as follows:
Is it one large form with individual pages contributing to the whole record that could be easily lost, misplaced or even intentionally altered?
Is it a form that provides chronological order to the same or similar pieces of information such as a logbook?
Is time of entry important?
Will this form live with a piece of equipment, an instrument, or a room for a period of time? Another way to phrase this: is the form not a once-and-done that, upon completion, moves along in a review flow as a record?
If you answer yes to any of these, then the default should be to bind it and control it through a central publishing function, traditionally called document control.
Potential risk of not meeting expectations/items to be checked
Distribution and Control, Item 2, page 17 of 52
Issue should be controlled by written procedures that include the following controls:
– Details of who issued the copies and when they were issued.
– Using of a secure stamp, or paper colour code not available in the working areas, or another appropriate system.
– Ensuring that only the current approved version is available for use.
– Allocating a unique identifier to each blank document issued and recording the issue of each document in a register.
– Numbering every distributed copy (e.g.: copy 2 of 2) and sequential numbering of issued pages in bound books. Where the re-issue of additional copies of the blank template is necessary, a controlled process regarding re-issue should be followed. All distributed copies should be maintained and a justification and approval for the need of an extra copy should be recorded, e.g.: “the original template record was damaged”.
– All issued records should be reconciled following use to ensure the accuracy and completeness of records.
Without the use of security measures, there is a risk that rewriting or falsification of data may be made after photocopying or scanning the template record (which gives the user another template copy to use). Obsolete versions can be used intentionally or by error. A filled record with an anomalous data entry could be replaced by a new rewritten template.
All unused forms should be accounted for, and either defaced and destroyed, or returned for secure filing.