ASQ Audit Conference – Day 2 Afternoon

“Risk: What is it? Prove it, show me” by Larry Litke

At this point I may be a glutton for sessions about risk. While I am fascinated by how people are poking at this beast, and sometimes dismayed by how far back our thinking is on the subject, it may simply be that at an audit conference many of the other topics are not really aligned to my interests.

The session started by covering a high-level definition of risk and then moved into ISO 9001:2015's risk-based thinking, mostly by reading from the standard.

It is good that succession planning is specifically discussed as part of risk-based thinking.

“Above all it is communication” is good advice for every change.

It is an important point that the evidence of risk-based thinking is the actual results and not a separate thing.

This presentation was strongest when it focused on business continuity as a form of risk-based thinking.

“Auditing the Quality System for Data Integrity” by Jeremiah Genest

My second presentation of the conference is here.

Overall Impressions

This year’s Audit Division conference was pretty small. I was in sessions with 10 people, and we didn’t fill a medium-sized ballroom. I’m told this was smaller than in past years, and I sincerely hope this will be a bigger conference next year, which is back in Orlando. My daughter will be thrilled, and I may be back just to meet that set of user requirements.

I think this conference could benefit from the rigor the LSS Conference and WCQI apply to presentation development. I was certainly guilty here myself. But way too many presentations were wall-to-wall text.

Risk Based Data Integrity Assessment

A quick overview. The risk-based approach uses three factors: data criticality, existing controls, and level of detection.

When assessing current controls, technical controls (properly implemented) are stronger than operational or organizational controls as they can eliminate the potential for data falsification or human error rather than simply reducing/detecting it. 

For criticality, it helps to build a table based on what the data is used for. For example:

For controls, use a table like the one below. Rank each column and then multiply the numbers together to get a final control ranking. For example, if a process has Esign (1), no access control (3), and paper archival (2), then the control ranking would be 6 (1 x 3 x 2).

Determine detectability using the table below: rank each column and then multiply the numbers together to get a final detectability ranking.

Another way to look at these scores:

Multiply the above to determine a risk ranking and move ahead with mitigations. Mitigations should drive risk as low as possible, though the following table can be used to help determine priority.

Risk Rating | Action | Mitigation
>25 | High Risk – Potential Impact to Patient Safety or Product Quality | Mandatory
12-25 | Moderate Risk – No Impact to Patient Safety or Product Quality but Potential Regulatory Risk | Recommended
<12 | Negligible DI Risk | Not Required
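To make the arithmetic concrete, here is a minimal Python sketch of how the scoring could be wired together. The function names, the criticality and detectability scores in the worked example, and the wording of the priority labels are my own assumptions for illustration; the cut-offs mirror the table above.

```python
# A minimal sketch of the multiplicative scoring described above.
# Column scores and labels are illustrative assumptions; use your site's
# actual criticality, control, and detectability tables.
from math import prod

def control_ranking(*column_scores: int) -> int:
    """Multiply the control column scores (e.g., e-signature, access control, archival)."""
    return prod(column_scores)

def detectability_ranking(*column_scores: int) -> int:
    """Multiply the detectability column scores."""
    return prod(column_scores)

def risk_rating(criticality: int, controls: int, detectability: int) -> int:
    """Overall rating = criticality x control ranking x detectability ranking."""
    return criticality * controls * detectability

def mitigation_priority(rating: int) -> str:
    """Map a rating onto the action/mitigation table above."""
    if rating > 25:
        return "High risk - potential impact to patient safety or product quality (mitigation mandatory)"
    if rating >= 12:
        return "Moderate risk - potential regulatory risk (mitigation recommended)"
    return "Negligible DI risk (mitigation not required)"

# Worked example from the text: Esign (1), no access control (3), paper archival (2) -> 6
controls = control_ranking(1, 3, 2)
# The criticality and detectability scores below are assumed values for illustration.
rating = risk_rating(criticality=2, controls=controls, detectability=2)  # 2 x 6 x 2 = 24
print(rating, "->", mitigation_priority(rating))
```

The point to notice is that the ranking is multiplicative, so a single weak factor raises the overall rating quickly.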

Where remediation actions are long-term, short-term risk-reducing actions shall be implemented to provide an acceptable level of governance until the long-term remediation actions are completed.

Relevant site procedures (e.g., change control, validation policy) should outline the scope of additional testing through the change management process.

Reassessment of the system may be completed following remediation activities, and it may also be done at any time during the remediation process to document the impact of the remediation actions.

Once final remediation is complete, a reassessment of the equipment/system should be completed to demonstrate that the remediation actions taken have reduced the risk rating. Think living risk assessment.

Data Process Mapping

In a presentation on practical applications of data integrity for laboratories at the March 2019 MHRA Laboratories Symposium held in London, UK, MHRA Lead GCP and GLP Inspector Jason Wakelin-Smith highlighted the important role data process mapping plays in understanding these challenges and moving down the DI pathway.

He pointed out that understanding of processes and systems, which data maps facilitate, is a key theme in MHRA’s GxP data integrity guidance, finalized in March 2018. The guidance is intended to be broadly applicable across the regulated practices, excluding the medical device arena, which is regulated in Europe by third-party notified bodies.

Source: IPQ, “MHRA Inspectors are Advocating Data Mapping as a Key First Step on the Data Integrity Pilgrimage”

Data process maps look at the entire data lifecycle from creation through storage (covering key components of create, modify, and delete) and include all operations with both paper and electronic records. Data maps are cross-functional diagrams (swim-lanes) with the following sections:

  • Prep/Input
  • Data Creation
  • Data Manipulation (including delete)
  • Data Use
  • Data Storage

Use standard symbols for paper records, computer data, and process steps.

For computer data, denote (usually by color) the level of controls:

  • Fully aligned with Part 11 and Data Integrity guidances
  • Gaps in compliance but remediation plan in place (this includes places where paper is considered the “true copy”)
  • Not compliant, no remediation plan

Data operations are depicted using arrows. The following data operations are probably the most common and are recommended for consistency:

  • Data Entry – input of process data and metadata (e.g., lot ID, operator)
  • Data Store – archival location
  • Data Copy – transcription from another system or paper, transfer of data from one system to another, printing (indicate if it is a manual process)
  • Data Edit – calculations, processing, reviews, unit changes (indicate if it is a manual process)
  • Data Move – movement of paper or electronic records

Data operation arrows should denote (again by color) the current controls in place:

  • Technical Controls – Validated Automated Process
  • Operational Controls – Manual Process with Review/Verified/Witness Requirements
  • No Controls – Automated process that is not validated or Manual process with no Review/Verified/Witness Considerations

Example data map
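If you want to keep the map in a reviewable, version-controlled form alongside the swim-lane diagram, here is a minimal sketch of how the records, operations, and control levels described above might be encoded in Python. The class names, the fields, and the chromatogram example are my own assumptions for illustration, not something from the presentation.

```python
# Minimal, illustrative encoding of a data process map (assumed structure).
from dataclasses import dataclass
from enum import Enum

class ControlLevel(Enum):
    TECHNICAL = "Validated automated process"
    OPERATIONAL = "Manual process with review/verification/witness"
    NONE = "No controls"

class Compliance(Enum):
    ALIGNED = "Fully aligned with Part 11 / DI guidance"
    GAP_WITH_PLAN = "Gaps in compliance, remediation plan in place"
    NOT_COMPLIANT = "Not compliant, no remediation plan"

@dataclass
class Record:
    name: str
    kind: str               # "paper" or "electronic"
    compliance: Compliance  # drives the color coding of the data symbol

@dataclass
class DataOperation:
    operation: str          # "entry", "store", "copy", "edit", "move"
    source: Record
    target: Record
    manual: bool            # indicate if it is a manual process
    control: ControlLevel   # drives the arrow color

# Hypothetical example: printing a chromatogram and treating the paper as the copy
chromatogram = Record("HPLC result", "electronic", Compliance.ALIGNED)
printout = Record("Printed chromatogram", "paper", Compliance.GAP_WITH_PLAN)

ops = [
    DataOperation("copy", chromatogram, printout, manual=True,
                  control=ControlLevel.OPERATIONAL),
]

for op in ops:
    print(f"{op.source.name} -> {op.target.name}: {op.operation} "
          f"({'manual' if op.manual else 'automated'}, {op.control.value})")
```

A structure like this can feed a diagram generator or simply serve as the checklist when walking the process during an audit.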

Top 5 Posts by Views in 2019 (first half)

With June almost over, here is a look at the five most-viewed posts of 2019 so far. Not all of these were written in 2019, but I find it interesting to see what folks keep ending up at my blog to read.

  1. FDA signals – no such thing as a planned deviation: Since I wrote it, this post has been a constant source of hits, mostly driven by search engines. I always feel like I should do a follow-up, but I'm not sure what to say beyond this: don't do planned deviations; temporary changes belong in the change control system.
  2. Empathy and Feedback as part of Quality Culture: The continued popularity of this post since I wrote it in March has driven a lot of the things I am writing lately.
  3. Effective Change Management: Change management and change control are part of my core skill set and I’m gratified that this post gets a lot of hits. I wonder if I should build it into some sort of expanded master class, but I keep feeling I already have.
  4. Review of Audit Trails: Data Integrity is so critical these days. I should write more on the subject.
  5. Risk Management is about reducing uncertainty: This post really captures a lot of the stuff I am thinking about and driving action on at work.

Thinking back to my SWOT, and the ACORN test I did at the end of 2018, I feel fairly good about the first six months. I certainly wish I had found time to blog more often, but that seems doable going forward. And like most bloggers, I am still looking for ways to increase engagement with my posts and to spark conversations.

Falsification and error

At the heart of it, data integrity is a lot about culture. There are technical requirements, but mostly we are returning to the same principles as quality culture, and we just keep coming back to Deming. A great example of this is the use of the fraud triangle and human error.

The fraud triangle was developed by Donald Cressey in the 1950s when investigating financial fraud and embezzlement. The principles Cressey identified are directly relevant to data integrity, and to quality culture as a whole.

Falsification Triangle

Incentive or Pressure
Exists when: Why commit falsification of data? Managerial pressure or financial gain are the two main drivers that push people to commit fraud. Setting unrealistic objectives such as stretch goals, turnaround times, or key performance indicators that are totally divorced from reality, especially when these are linked to pay or advancement, will only encourage staff to falsify data to receive rewards. These goals, coupled with poor analytical instruments and methods, will only ensure that corners are cut to meet deadlines or targets.
To break: Management must lead by example – not through communication or establishing data governance structures, but by ensuring the pressure to falsify data is removed. This means setting realistic expectations that are compatible with the organization’s capacity and process capability.

Rationalization or Incentive
Exists when: To commit fraud, people must either have an incentive or be able to rationalize that this is an acceptable practice within an organization or department.
To break: Staff need to understand how their actions can impact the health of the patient. Ensure individuals know the importance of reliable and accurate data to the wellbeing of the patient as well as the business health of the company.

Opportunity
Exists when: The opportunity to falsify data can come from encouragement by management as a means of keeping costs down, or from a combination of lax controls and poor oversight of activities, which allow staff to commit fraud.
To break: Implement a process that is technically controlled so there is little, if any, opportunity to commit falsification of data.

Mistakes are human nature; we all have fat-finger moments. This is why we build our processes and technologies to capture these errors and self-correct them. These errors should be tracked and trended, but only as a way to drive continuous improvement. It is important that your quality systems have the capability to evaluate mistakes, up to and including fraud.

It helps to be able to classify issues and determine whether changes to governance, management systems, and behaviors are necessary.

Events should be classified based on how intentional they are.

Human error should be built into investigative systems. Yes, whenever possible we are looking for technical controls, but the human exists and needs to be fully taken into consideration.

The best way to ensure data integrity is the best way to build a quality culture.

System Model