Human Performance and Data Integrity

Gilbert’s Behavior Engineering Model (BEM) presents a concise way to consider both the environmental and the individual influences on a person’s behavior. The model suggests that a person’s environment supports impact to one’s behavior through information, instrumentation, and motivation. Examples include feedback, tools, and financial incentives (respectively), to name a few. The model also suggests that an individual’s behavior is influenced by their knowledge, capacity, and motives. Examples include training/education, physical or emotional limitations, and what drives them (respectively), to name a few. Let’s look at some further examples to better understand the variability of individual behavioral influences to see how they may negatively impact data integrity.

Kip Wolf, “People: The Most Persistent Risk To Data Integrity”

Good article in Pharmaceutical Online last week. It cannot be stated enough, and it is good that folks like Kip keep saying it: to understand data integrity we need to understand behavior, what people do and say, and realize that behavior is a means to an end. It is very easy to focus on the behaviors themselves, the observable acts that can be seen and heard by management, auditors, and other stakeholders. What is more critical is to design systems that drive the behaviors we want, and to recognize that behavior and its causes are an extremely valuable signal for improvement efforts to anticipate, prevent, catch, or recover from errors.

That starts by realizing that error-provoking aspects of design, procedures, processes, and human nature exist throughout our organizations, and that people cannot perform better than the organization supporting them.

For each design consideration below there are associated human error considerations and controls to manage.

Design Consideration: Define the Scope of Work

·       Identify the critical steps
·       Consider the possible errors associated with each critical step and the likely consequences.
·       Ponder the "worst that could happen."
·       Consider the appropriate human performance tool(s) to use.
·       Identify other controls, contingencies, and relevant operating experience.

Human Error Considerations: When tasks are identified and prioritized, and resources are properly allocated (e.g., supervision, tools, equipment, work control, engineering support, training), human performance can flourish. These organizational factors create a unique array of job-site conditions – a good work environment – that sets people up for success. Human error increases when expectations are not set, tasks are not clearly identified, and resources are not available to carry out the job.

Manage Controls: The error precursors – conditions that provoke error – are reduced. This includes things such as:

·       Unexpected conditions
·       Workarounds
·       Departures from the routine
·       Unclear standards
·       Need to interpret requirements

Properly managing controls depends on the elimination of error precursors that challenge the integrity of controls and allow human error to become consequential.

Design Consideration: Apply Proactive Risk Management

When risk is properly analyzed we can take appropriate action to mitigate it. Include these criteria in risk assessments:

·       Adverse environmental conditions (e.g., impact of gowning, noise, temperature)
·       Unclear roles/responsibilities
·       Time pressures
·       High workload
·       Confusing displays or controls

Addressing risk through engineering and administrative controls is a cornerstone of a quality system.

Human Error Considerations: Strong administrative and cultural controls can withstand human error. Controls are weakened when conditions are present that provoke error.

Manage Controls: Eliminating error precursors in the workplace reduces the incidence of active errors.

Design Consideration: Perform Work

Utilize error reduction tools as part of all work. Examples include:

·       Self-checking
·       Questioning attitude
·       Stop when unsure
·       Effective communication
·       Procedure use and adherence
·       Peer-checking
·       Second-person verifications
·       Turnovers

Engineering controls can often take the place of some of these; for example, second-person verifications can be replaced by automation.

Human Error Considerations: Appropriate processes and tools are in place to ensure that the organizational processes and values adequately support performance.

Manage Controls: Because people err and make mistakes, it is all the more important that controls are implemented and properly maintained.

Design Consideration: Feedback and Improvement

Continuous improvement is critical. Topics should include:

·       Surprises or unexpected outcomes
·       Usability and quality of work documents
·       Knowledge and skill shortcomings
·       Minor errors during the activity
·       Unanticipated workplace conditions
·       Adequacy of tools and resources
·       Quality of work planning/scheduling
·       Adequacy of supervision

Errors during work are inevitable. If we strive to understand and address even inconsequential acts, we can strengthen controls and improve future performance.

Human Error Considerations: Vulnerabilities with controls can be found and corrected when management decides it is important enough to devote resources to the effort.

Manage Controls: The fundamental aim of oversight is to improve resilience to significant events triggered by active errors in the workplace, that is, to minimize the severity of events. Oversight controls provide opportunities to see what is happening, to identify specific vulnerabilities or performance gaps, to take action to address those vulnerabilities and performance gaps, and to verify that they have been resolved.

 

FDA 483 data

The FDA has posted the 2019 483 observations as an Excel file. The FDA has made these files available every year since 2006, and I find them to be one of my favorite tools for evaluating regulatory trends.

So, for example, looking at change-related 483 observations I see:

2019 vs 2018 483 comparison for short description including “change”

Or for data integrity issues:

2019 vs 2018 483 comparison for short description including “data”
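A rough sketch of how such a comparison can be pulled from the posted spreadsheets once they are downloaded (the file names and the "Short Description" column name below are assumptions; adjust them to match the actual files):

```python
# Sketch: count 483 observations whose short description contains a search term,
# comparing two years of the FDA's published observation spreadsheets.
# File names and the "Short Description" column are assumptions; check the
# actual downloads from FDA.gov before running.
import pandas as pd

def count_observations(path: str, term: str, column: str = "Short Description") -> int:
    """Return the number of rows whose short description mentions the term."""
    df = pd.read_excel(path)
    return int(df[column].astype(str).str.contains(term, case=False, na=False).sum())

for term in ("change", "data"):
    n_2018 = count_observations("483s_2018.xlsx", term)
    n_2019 = count_observations("483s_2019.xlsx", term)
    print(f"'{term}': 2018 = {n_2018}, 2019 = {n_2019}")
```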

Very useful resource that should be in the bookmarks for every pharmaceutical quality professional.

ASQ Audit Conference – Day 2 Afternoon

“Risk: What is it? Prove it, show me” by Larry Litke

At this point I may be a glutton for sessions about risk. I am fascinated by how people are poking at this beast, and sometimes dismayed by how far back our thinking is on the subject, but there may also be an element that, at an audit conference, a lot of the other topics are simply not aligned to my interests.

Litke started by covering a high-level definition of risk and then moved into ISO 9001:2015’s risk-based thinking at a high level, mostly by reading from the standard.

It is good that succession planning is specifically discussed as part of risk-based thinking.

“Above all it is communication” is good advice for every change.

It is an important point that the evidence of risk-based thinking is the actual results and not a separate thing.

This presentation’s strength was its focus on business continuity as a form of risk-based thinking.

“Auditing the Quality System for Data Integrity” by Jeremiah Genest

My second presentation of the conference is here.

Overall Impressions

This year’s Audit Division conference was pretty small. I was in sessions with 10 people and we didn’t fill a medium size ballroom. I’m told this was smaller than in past years and I sincerely hope this will be a bigger conference next year, which is back in Orlando. My daughter will be thrilled, and I may be back just to meet that set of user requirements.

I think this conference could benefit from the rigor the LSS Conference and WCQI apply to presentation development. I was certainly guilty here, but way too many presentations were wall-to-wall text.

Risk Based Data Integrity Assessment

A quick overview. The risk-based approach uses three factors: data criticality, existing controls, and level of detection.

When assessing current controls, technical controls (properly implemented) are stronger than operational or organizational controls as they can eliminate the potential for data falsification or human error rather than simply reducing/detecting it. 

For criticality, it helps to build a table based on what the data is used for. For example:

For controls, use a table like the one below. Rank each column and then multiply the numbers together to get a final control ranking.  For example, if a process has Esign (1), no access control (3), and paper archival (2) then the control ranking would be 6 (1 x 3 x 2). 

Determine detectability using the table below: rank each column and then multiply the numbers together to get a final detectability ranking.
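A minimal sketch of that multiplication step, restating the worked example above (the factor names are placeholders; the actual scores come from the control and detectability tables):

```python
# Sketch: a control (or detectability) ranking is the product of the individual
# column scores from the ranking table. The scores below restate the worked
# example: e-sign (1), no access control (3), paper archival (2) -> ranking of 6.
from math import prod

def ranking(scores: dict[str, int]) -> int:
    """Multiply the column scores together to get the final ranking."""
    return prod(scores.values())

control_scores = {"e-signature": 1, "access control": 3, "archival": 2}
print(ranking(control_scores))  # 6
```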

Another way to look at these scores:

Multiply the above to determine a risk ranking and move ahead with mitigations. Mitigations should drive risk as low as possible, though the following table can be used to help determine priority.

Risk Rating | Action | Mitigation
>25 | High Risk: Potential Impact to Patient Safety or Product Quality | Mandatory
12-25 | Moderate Risk: No Impact to Patient Safety or Product Quality but Potential Regulatory Risk | Recommended
<12 | Negligible DI Risk | Not Required
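A small sketch of how the overall ranking could be computed and bucketed against the table above (the control ranking of 6 comes from the worked example; the criticality and detectability scores are illustrative placeholders only):

```python
# Sketch: overall risk = criticality x control ranking x detectability ranking,
# bucketed per the action table above (>25 high, 12-25 moderate, <12 negligible).
def risk_rating(criticality: int, control: int, detectability: int) -> tuple[int, str, str]:
    score = criticality * control * detectability
    if score > 25:
        return score, "High Risk", "Mandatory"
    if score >= 12:
        return score, "Moderate Risk", "Recommended"
    return score, "Negligible DI Risk", "Not Required"

# Control ranking of 6 from the e-signature example; the other scores are
# illustrative placeholders, not values from the ranking tables.
print(risk_rating(criticality=3, control=6, detectability=2))  # (36, 'High Risk', 'Mandatory')
```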

In the case of long-term remediation actions, short-term risk-reducing actions shall be implemented to reduce risk and provide an acceptable level of governance until the long-term remediation actions are completed.

Relevant site procedures (e.g., change control, validation policy) should outline the scope of additional testing through the change management process.

Reassessment of the system may be completed following remediation activities, and may be done at any time during the remediation process to document the impact of the remediation actions.

Once final remediation is complete, a reassessment of the equipment/system should be completed to demonstrate that the risk rating has been reduced by the remediation actions taken. Think living risk assessment.

Data Process Mapping

In a presentation on practical applications of data integrity for laboratories at the March 2019 MHRA Laboratories Symposium held in London, UK, MHRA Lead GCP and GLP Inspector Jason Wakelin-Smith highlighted the important role data process mapping plays in understanding these challenges and moving down the DI pathway.

He pointed out that understanding of processes and systems, which data maps facilitate, is a key theme in MHRA’s GxP data integrity guidance, finalized in March of 2018. The guidance is intended to be broadly applicable across the regulated practices, but excluding the medical device arena, which is regulated in Europe by third-party notified bodies.

IPQ, “MHRA Inspectors are Advocating Data Mapping as a Key First Step on the Data Integrity Pilgrimage”

Data process maps look at the entire data life-cycle from creation through storage (covering key components of create, modify and delete) and include all operations with both paper and electronic records.   Data maps are cross-functional diagrams (swim-lanes) and have the following sections:

  • Prep/Input
  • Data Creation
  • Data Manipulation (include delete)
  • Data Use
  • Data Storage

Use a standard symbol for paper records, computer data, and process steps.

For computer data, denote (usually by color) the level of controls:

  • Fully aligned with Part 11 and Data Integrity guidances
  • Gaps in compliance but remediation plan in place (this includes places where paper is considered “true copy”)
  • Not compliant, no remediation plan

Data operations are depicted utilizing arrows.  The following data operations are probably most common, and are recommended for consistency:

  • Data Entry – input of process data and metadata (e.g., lot ID, operator)
  • Data Store – archival location
  • Data Copy – transcription from another system or paper, transfer of data from one system to another, printing (Indicate if it is a manual process).
  • Data Edit – calculations, processing, reviews, unit changes  (Indicate if it is a manual process)
  • Data Move – movement of paper or electronic records

Data operation arrows should denote (again by color) the current controls in place:

  • Technical Controls – Validated Automated Process
  • Operational Controls – Manual Process with Review/Verified/Witness Requirements
  • No Controls – Automated process that is not validated or Manual process with no Review/Verified/Witness Considerations
Example data map
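As a rough illustration of how such a map could be sketched programmatically, here is a minimal sketch assuming the graphviz Python package (and the Graphviz binaries it wraps); the lanes, steps, and systems named below are hypothetical examples, not a template:

```python
# Sketch: a cross-functional (swim-lane style) data map using graphviz clusters.
# Node colors denote the control level of computer data; edge colors denote the
# controls on each data operation, per the conventions described above.
# All lane, step, and system names are hypothetical examples.
from graphviz import Digraph

CONTROL_COLORS = {
    "technical": "green",      # validated automated process / fully aligned
    "operational": "orange",   # manual process with review/verification
    "none": "red",             # no controls / no remediation plan
}

g = Digraph("data_map", graph_attr={"rankdir": "LR"})

with g.subgraph(name="cluster_creation") as lane:
    lane.attr(label="Data Creation")
    lane.node("hplc", "HPLC acquisition (electronic)", shape="box", color=CONTROL_COLORS["technical"])

with g.subgraph(name="cluster_use") as lane:
    lane.attr(label="Data Use")
    lane.node("report", "Result entered in batch record (paper)", shape="note", color=CONTROL_COLORS["operational"])

with g.subgraph(name="cluster_storage") as lane:
    lane.attr(label="Data Storage")
    lane.node("archive", "Network archive", shape="cylinder", color=CONTROL_COLORS["technical"])

# Data operations as arrows, colored by the controls currently in place.
g.edge("hplc", "report", label="Data Copy (manual transcription)", color=CONTROL_COLORS["operational"])
g.edge("hplc", "archive", label="Data Store (automated)", color=CONTROL_COLORS["technical"])

g.render("example_data_map", format="png", cleanup=True)
```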