Data Process Mapping

In a presentation on practical applications of data integrity for laboratories at the March 2019 MHRA Laboratories Symposium held in London, UK, MHRA Lead GCP and GLP Inspector Jason Wakelin-Smith highlighted the important role data process mapping plays in understanding these challenges and moving down the DI pathway.

He pointed out that understanding of processes and systems, which data maps facilitate, is a key theme in MHRA’s GxP data integrity guidance, finalized in March 2018. The guidance is intended to be broadly applicable across the regulated practices, excluding the medical device arena, which is regulated in Europe by third-party notified bodies.

IPQ: MHRA Inspectors are Advocating Data Mapping as a Key First Step on the Data Integrity Pilgrimage

Data process maps look at the entire data life cycle from creation through storage (covering the key components of create, modify, and delete) and include all operations with both paper and electronic records. Data maps are cross-functional diagrams (swim lanes) and have the following sections:

  • Prep/Input
  • Data Creation
  • Data Manipulation (including delete)
  • Data Use
  • Data Storage

Use standard symbols for paper records, computer data, and process steps.

For computer data denote (usually by color) the level of controls:

  • Fully aligned with Part 11 and Data Integrity guidances
  • Gaps in compliance but remediation plan in place (this includes places where paper is considered the “true copy”)
  • Not compliant, no remediation plan

Data operations are depicted using arrows. The following data operations are probably the most common and are recommended for consistency:

  • Data Entry – input of process data and metadata (e.g., lot ID, operator)
  • Data Store – archival location
  • Data Copy – transcription from another system or paper, transfer of data from one system to another, or printing (indicate if it is a manual process)
  • Data Edit – calculations, processing, reviews, unit changes (indicate if it is a manual process)
  • Data Move – movement of paper or electronic records

Data operation arrows should denote (again by color) the current controls in place:

  • Technical Controls – Validated automated process
  • Operational Controls – Manual process with review/verification/witness requirements
  • No Controls – Automated process that is not validated, or manual process with no review/verification/witness considerations

Example data map
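The same scheme can also be sketched as a simple data model before (or alongside) drawing the diagram, which makes gap reporting easy. Below is a minimal Python sketch under the conventions above; the lanes, node types, and control categories mirror the lists, while the specific nodes (sample worksheet, CDS, spreadsheet) are hypothetical examples, not a real workflow.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Control level for computer data (the "color" of a data node on the map)
class DataControl(Enum):
    ALIGNED = "fully aligned with Part 11 and DI guidance"
    GAP_WITH_PLAN = "gaps, remediation plan in place (incl. paper true copy)"
    NOT_COMPLIANT = "not compliant, no remediation plan"

# Control level for data operations (the "color" of an arrow on the map)
class OperationControl(Enum):
    TECHNICAL = "validated automated process"
    OPERATIONAL = "manual process with review/verification/witness"
    NONE = "no controls"

@dataclass
class Node:
    name: str
    lane: str     # Prep/Input, Data Creation, Data Manipulation, Data Use, Data Storage
    kind: str     # "paper record", "computer data", or "process step"
    control: Optional[DataControl] = None  # only meaningful for computer data

@dataclass
class Operation:
    source: str
    target: str
    kind: str     # Data Entry, Data Store, Data Copy, Data Edit, Data Move
    manual: bool
    control: OperationControl

# Hypothetical fragment of a lab workflow, purely illustrative
nodes = [
    Node("Sample worksheet", "Prep/Input", "paper record"),
    Node("CDS injection data", "Data Creation", "computer data", DataControl.ALIGNED),
    Node("Spreadsheet calculation", "Data Manipulation", "computer data", DataControl.GAP_WITH_PLAN),
    Node("Archived result", "Data Storage", "computer data", DataControl.ALIGNED),
]

operations = [
    Operation("Sample worksheet", "CDS injection data", "Data Entry",
              manual=True, control=OperationControl.OPERATIONAL),
    Operation("CDS injection data", "Spreadsheet calculation", "Data Copy",
              manual=True, control=OperationControl.NONE),
    Operation("Spreadsheet calculation", "Archived result", "Data Store",
              manual=False, control=OperationControl.TECHNICAL),
]

# Simple gap report: any operation without technical controls is a remediation candidate
for op in operations:
    if op.control is not OperationControl.TECHNICAL:
        print(f"Review: {op.kind} from '{op.source}' to '{op.target}' ({op.control.value})")
```

Running it prints the operations that still rely on operational or no controls, which is exactly the conversation a data map is meant to start.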

Top 5 Posts by Views in 2019 (first half)

With June almost over, here is a look at the five top posts by views for 2019. Not all of these were written in 2019, but I find it interesting to see what folks keep ending up at my blog to read.

  1. FDA signals – no such thing as a planned deviation: Since I wrote it, this post has been a constant source of hits, mostly driven by search engines. I always feel like I should do a follow-up, but I am not sure what to say beyond this: don’t do planned deviations; temporary changes belong in the change control system.
  2. Empathy and Feedback as part of Quality Culture: The continued popularity of this post since I wrote it in March has driven a lot of the things I am writing lately.
  3. Effective Change Management: Change management and change control are part of my core skill set and I’m gratified that this post gets a lot of hits. I wonder if I should build it into some sort of expanded master class, but I keep feeling I already have.
  4. Review of Audit Trails: Data Integrity is so critical these days. I should write more on the subject.
  5. Risk Management is about reducing uncertainty: This post really captures a lot of the stuff I am thinking about and driving action on at work.

Thinking back to my SWOT and the ACORN test I did at the end of 2018, I feel fairly good about the first six months. I certainly wish I had found more time to blog, but that seems doable going forward. And like most bloggers, I am still looking for ways to increase engagement with my posts and to spark conversations.

Falsification and error

At its heart, data integrity is largely about culture. There are technical requirements, but mostly we return to the same principles as quality culture and keep coming back to Deming. A great example of this is the use of the fraud triangle and the way we treat human error.

The fraud triangle was developed by Donald Cressey in the 1950s when investigating financial fraud and embezzlement. The principles Cressey identified are directly relevant to data integrity, and to quality culture as a whole.

Falsification Triangle

Incentive or Pressure
  • Exists when: Why commit falsification of data? Managerial pressure and financial gain are the two main drivers that push people to commit fraud. Setting unrealistic objectives, such as stretch goals, turnaround times, or key performance indicators that are divorced from reality, especially when these are linked to pay or advancement, will only encourage staff to falsify data to receive rewards. These goals, coupled with poor analytical instruments and methods, will only ensure that corners are cut to meet deadlines or targets.
  • To break it: Management must lead by example, not through communication or establishing data governance structures, but by ensuring the pressure to falsify data is removed. This means setting realistic expectations that are compatible with the organization’s capacity and process capability.

Rationalization or Incentive
  • Exists when: To commit fraud, people must either have an incentive or be able to rationalize that it is an acceptable practice within an organization or department.
  • To break it: Staff need to understand how their actions can impact the health of the patient. Ensure individuals know the importance of reliable and accurate data to the wellbeing of the patient as well as to the business health of the company.

Opportunity
  • Exists when: The opportunity to falsify data can come from encouragement by management as a means of keeping costs down, or from a combination of lax controls and poor oversight of activities that leaves staff able to commit fraud.
  • To break it: Implement a process that is technically controlled so there is little, if any, opportunity to commit falsification of data.

Mistakes are human nature – we all have fat-finger moments. This is why we build our processes and technologies to ensure we capture these errors and self-correct them. These errors should be tracked and trended, but only as a way to drive continuous improvement. It is important that your quality systems have the capability to evaluate mistakes up to and including fraud.

It helps to be able to classify issues and determine whether changes to governance, management systems, and behaviors are necessary.

Events should be classified based on how intentional they are.
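As a sketch of how that classification might be represented in a quality system, here is a minimal Python example; the three intent categories and the mapping to governance, management systems, and behaviors are illustrative assumptions, not a formal taxonomy.

```python
from enum import Enum

# An illustrative intent scale, from honest slip to deliberate falsification.
# These three categories are an assumption for the sketch, not a regulatory taxonomy.
class Intent(Enum):
    HONEST_ERROR = 1             # fat-finger moment, caught by review or technical controls
    PROCEDURE_NOT_FOLLOWED = 2   # shortcut or drift, but no intent to deceive
    FALSIFICATION = 3            # deliberate creation, alteration, or deletion of data

# Which levers an investigation should evaluate for each class of event
RESPONSE = {
    Intent.HONEST_ERROR: ["trend for continuous improvement", "strengthen technical controls"],
    Intent.PROCEDURE_NOT_FOLLOWED: ["management systems", "behaviors", "review of pressure and workload"],
    Intent.FALSIFICATION: ["governance", "management systems", "behaviors"],
}

def levers_to_evaluate(intent: Intent) -> list:
    """Return the improvement levers to evaluate for a given event classification."""
    return RESPONSE[intent]

print(levers_to_evaluate(Intent.PROCEDURE_NOT_FOLLOWED))
```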

Human error should be built into investigative systems. Yes, whenever possible we are looking for technical controls, but the human element exists and needs to be fully taken into consideration.

The best way to ensure data integrity is the best way to build a quality culture.

System Model

The role of a data steward

With data integrity on everyone’s mind over the last few years, the role of the data steward is being discussed more and more. Putting aside my amusement at the proliferation of stewards and champions across our quality systems, the idea of data stewards is a good one.

A data steward is someone from the business who handles master data. It is not an IT role; a good data steward is truly invested in how the data is used, managed, and groomed. The data steward is responsible and accountable for how data enters the system and for ensuring it adds value to the process.

The job revolves around, but is not limited to, the following questions:

  • Why is this particular data important to the organization?
  • How long should the particular records (data) be stored or kept?
  • What measurements are needed to improve the quality of that analysis?

Data stewards do this by providing:

  • Operational oversight, overseeing the life cycle by defining and implementing policies and procedures for the day-to-day operational and administrative management of systems and data, including the intake, storage, processing, and transmission of data to internal and external systems. Data stewards are accountable for defining and documenting data and terminology in a relevant glossary, which includes ensuring that each critical data element has a clear definition and is still in use (see the sketch after this list).
  • Data quality, including evaluation and root cause analysis
  • Risk management, including retention, archival, and disposal requirements and ensuring compliance with internal policy and regulations.
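As an illustration of what a steward-maintained glossary entry might look like, here is a minimal Python sketch; the field names, the example element, and the retention value are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class CriticalDataElement:
    """Glossary entry a data steward maintains for a critical data element.
    The field names are illustrative, not a standard."""
    name: str
    definition: str
    business_owner: str        # accountable business role, not IT
    source_system: str
    retention_years: int       # drives archival and disposal requirements
    in_use: bool = True        # stewards periodically confirm the element is still used
    quality_checks: list = field(default_factory=list)

lot_id = CriticalDataElement(
    name="Lot ID",
    definition="Unique identifier assigned to a manufactured lot at dispensing",
    business_owner="Manufacturing data steward",
    source_system="ERP",       # hypothetical source system
    retention_years=11,        # assumption; set per predicate rules and retention policy
    quality_checks=["format check", "uniqueness check", "cross-system reconciliation"],
)

# A periodic stewardship review might flag elements that are no longer in use
glossary = [lot_id]
stale = [e.name for e in glossary if not e.in_use]
print("Elements to retire or re-confirm:", stale or "none")
```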

With systems being made up of people, process, and technology, the line between data steward and system owner is pretty vague. When a technology is linked to a single system or process, it makes sense for them to be the same person (or team), for example with a document management system. However, most technology platforms span multiple systems or processes (for example, an ERP or a quality management system), and it is critical for the data steward to look at the technology holistically. I think we are all familiar with the problems that can be created when the same piece of data is treated differently between workflows in a technology platform.

As organizations evolve their data governance, I think we will see the role of the data steward become more and more a part of the standard quality toolbox, as the competencies are pretty similar.

PDF fillable forms

On my.ASQ.org the following question was asked: “The Device History Record is a form in fillable PDF format. Worker opens the PDF from a secure source within the local network. The only thing they can change is checkmark Pass/Fail, Yes/No and enter serial numbers in the allowed fields. Then after the assembly process is done for each procedure, the worker prints the DHR, signs and dates it by hand, to verify the accuracy of data entered. No re-printing or saving PDF’s is allowed.”

This comes up a lot. It is really a simple version of a hybrid situation, where both electronic and paper versions of the record exist.

Turning to the PIC/S draft guidance, we find on page 44 of 52: “Each element of the hybrid system should be qualified and controlled in accordance with the guidance relating to manual and computerised systems.”

Here is my recommendation (and it is one that has been tried and tested).

The PDF form needs to be under the same document management system and controls as any other form, ideally the exact same system. This provides version control and change management for the form. It also allows users to know they have the current version at all times.
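One way to make “users know they have the current version” verifiable is a simple checksum comparison against the approved release. Below is a minimal Python sketch; the form ID, file path, and register of approved versions are hypothetical, and in practice the document management system itself would provide this control.

```python
import hashlib
from pathlib import Path

# Hypothetical register of approved form versions. In practice this lives in the
# document management system; the hash below is a placeholder, not a real value.
APPROVED_FORMS = {
    "DHR-017": {"version": "3.0", "sha256": "<sha-256 recorded at approval>"},
}

def is_current_version(form_id: str, pdf_path: Path) -> bool:
    """Confirm the fillable PDF a worker opened matches the approved released version."""
    digest = hashlib.sha256(pdf_path.read_bytes()).hexdigest()
    approved = APPROVED_FORMS[form_id]
    if digest != approved["sha256"]:
        print(f"{form_id}: file does not match approved version {approved['version']}")
        return False
    return True

# Usage (form ID and path are illustrative):
# is_current_version("DHR-017", Path("//secure-share/forms/DHR-017_v3.pdf"))
```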

Once it is printed, the paper version is the record. It carries a wet signature and is subject to all the same predicate rule requirements. This record gets archived appropriately.

Where I have seen companies get messed up is when the PDF exists in a separate, and usually poorly controlled, system from the rest of the document management program. Situations like this should really be evaluated from the document management perspective and not the computer systems life cycle perspective. But it’s all data integrity.