Human Performance and Data Integrity

Gilbert’s Behavior Engineering Model (BEM) presents a concise way to consider both the environmental and the individual influences on a person’s behavior. The model suggests that a person’s environment shapes behavior through information, instrumentation, and motivation; examples include feedback, tools, and financial incentives, respectively. It also suggests that an individual’s behavior is influenced by their knowledge, capacity, and motives; examples include training and education, physical or emotional limitations, and what drives them, respectively. Let’s look at some further examples of the variability of individual behavioral influences to see how they may negatively impact data integrity.

Kip Wolf, “People: The Most Persistent Risk To Data Integrity”

Good article in Pharmaceutical Online last week. It cannot be stated enough, and it is good that folks like Kip keep saying it: to understand data integrity we need to understand behavior, what people do and say, and realize that behavior is a means to an end. It is very easy to focus on the observable acts that can be seen and heard by management, auditors, and other stakeholders, but what is more critical is to design systems that drive the behaviors we want, and to recognize that behavior and its causes are extremely valuable as signals for improvement efforts to anticipate, prevent, catch, or recover from errors.

Error-provoking aspects of design, procedures, processes, and human nature exist throughout our organizations, and people cannot perform better than the organization supporting them.

For each phase of work there are design considerations, human error considerations, and controls to manage.

Define the Scope of Work

·       Identify the critical steps.

·       Consider the possible errors associated with each critical step and the likely consequences.

·       Ponder the "worst that could happen."

·       Consider the appropriate human performance tool(s) to use.

·       Identify other controls, contingencies, and relevant operating experience.

When tasks are identified and prioritized, and resources are properly allocated (e.g., supervision, tools, equipment, work control, engineering support, training), human performance can flourish.

 

These organizational factors create a unique array of job-site conditions – a good work environment – that sets people up for success. Human error increases when expectations are not set, tasks are not clearly identified, and resources are not available to carry out the job.

Error precursors – conditions that provoke error – are reduced. These include:

·       Unexpected conditions

·       Workarounds

·       Departures from the routine

·       Unclear standards

·       Need to interpret requirements

 

Properly managing controls depends on eliminating the error precursors that challenge the integrity of controls and allow human error to become consequential.

Apply Proactive Risk Management

When risk is properly analyzed, we can take appropriate action to mitigate it. Include criteria such as the following in risk assessments:

·       Adverse environmental conditions (e.g., impact of gowning, noise, temperature)

·       Unclear roles/responsibilities

·       Time pressures

·       High workload

·       Confusing displays or controls

Addressing risk through engineering and administrative controls is a cornerstone of a quality system.

 

Strong administrative and cultural controls can withstand human error. Controls are weakened when conditions are present that provoke error.

 

Eliminating error precursors in the workplace reduces the incidence of active errors.

Perform Work

 

Utilize error reduction tools as part of all work. Examples include:

·       Self-checking

·       Questioning attitude

·       Stop when unsure

·       Effective communication

·       Procedure use and adherence

·       Peer-checking

·       Second-person verifications

·       Turnovers

 

Engineering controls can often take the place of some of these; for example, second-person verification can be replaced by automation.

Appropriate processes and tools should be in place to ensure that the organization adequately supports performance.

Because people err and make mistakes, it is all the more important that controls are implemented and properly maintained.

Feedback and Improvement

 

Continuous improvement is critical. Topics should include:

·       Surprises or unexpected outcomes

·       Usability and quality of work documents

·       Knowledge and skill shortcomings

·       Minor errors during the activity

·       Unanticipated workplace conditions

·       Adequacy of tools and resources

·       Quality of work planning/scheduling

·       Adequacy of supervision

Errors during work are inevitable. If we strive to understand and address even inconsequential acts, we can strengthen controls and make future performance better.

Vulnerabilities with controls can be found and corrected when management decides it is important enough to devote resources to the effort.

 

The fundamental aim of oversight is to improve resilience to significant events triggered by active errors in the workplace—that is, to minimize the severity of events.

 

Oversight controls provide opportunities to see what is happening, to identify specific vulnerabilities or performance gaps, to take action to address those vulnerabilities and performance gaps, and to verify that they have been resolved.

 

FDA 483 Data

The FDA has posted the 2019 483 observations as an Excel file. The FDA has made these files available every year since 2006, and I find them to be one of my favorite tools for evaluating regulatory trends.

So, for example, looking at change-related 483s I see:

2019 vs 2018 483 comparison for short description including “change”

Or for data integrity issues:

2019 vs 2018 483 comparison for short description including “data”

A very useful resource that should be in the bookmarks of every pharmaceutical quality professional.
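The year-over-year comparison above can be sketched in plain Python. This is a minimal illustration, not the FDA's actual file layout: the sample rows and the "Year"/"Short Description" column names are assumptions; in practice you would load the real columns from the posted Excel file.

```python
# Sketch of a keyword filter over 483 observation records.
# The rows and column names below are hypothetical placeholders
# for data loaded from the FDA's published Excel file.
from collections import Counter

rows = [
    {"Year": 2018, "Short Description": "Data integrity controls lacking"},
    {"Year": 2019, "Short Description": "Electronic data not reviewed"},
    {"Year": 2019, "Short Description": "Change control procedures not followed"},
]

def count_by_year(rows, keyword):
    """Count observations whose short description contains the keyword."""
    counts = Counter()
    for row in rows:
        if keyword.lower() in row["Short Description"].lower():
            counts[row["Year"]] += 1
    return dict(counts)

print(count_by_year(rows, "data"))    # {2018: 1, 2019: 1}
print(count_by_year(rows, "change"))  # {2019: 1}
```

The same grouping works for any keyword of interest (e.g., "validation", "procedure"), which is how the "change" and "data" comparisons above were built up.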

ASQ Audit Conference – Day 2 Afternoon

“Risk: What is it? Prove it, show me” by Larry Litke

At this point I may be a glutton for sessions about risk. While I am fascinated by how people are poking at this beast, and sometimes dismayed by how far back our thinking is on the subject, there may also be an element that, at an audit conference, I find a lot of the other topics not really aligned to my interests.

Started by covering a high-level definition of risk and then moved into ISO 9001:2015’s risk-based thinking at a high level, mostly by reading from the standard.

It is good that succession planning is specifically discussed as part of risk-based thinking.

“Above all it is communication” is good advice for every change.

It is an important point that the evidence of risk-based thinking is the actual results and not a separate thing.

This presentation was strongest when it focused on business continuity as a form of risk-based thinking.

“Auditing the Quality System for Data Integrity” by Jeremiah Genest

My second presentation of the conference is here.

Overall Impressions

This year’s Audit Division conference was pretty small. I was in sessions with 10 people, and we didn’t fill a medium-sized ballroom. I’m told this was smaller than in past years, and I sincerely hope this will be a bigger conference next year, which is back in Orlando. My daughter will be thrilled, and I may be back just to meet that set of user requirements.

I think this conference could benefit from the rigor the LSS Conference and WCQI apply to presentation development. I was certainly guilty here, but way too many presentations were wall-to-wall text.

ASQ Audit Conference – Day 2 Morning

Jay Arthur “The Future of Quality”

Starts with our “heroes are gone” and “it is time to stand on our own two feet.”

Focuses on the time and effort to train people on lean and six sigma, and how many people do not actually do projects. The basic point is that we use the tools in old ways that are not nimble and aligned to today’s needs. The tools we use versus the tools we are taught.

Hacking lean six sigma is along a similar line to Art Smalley’s four problems.

Applying the spirit of hacking to quality.

Covers value stream mapping and spaghetti diagrams with a focus on “the delays in between.” Talks about why control charts are not more standard. The basic point is that people don’t spend enough time with the tools of quality – a point I have opinions on that will end up in another post.

Overcooked data versus raw data – summarized data has little or no nutritional value.

Brings this back to the issue being a lack of problem diagnosis, not problem solving. Comes back to a need for a few easy tools and not the long tail of six sigma.

This talk is very focused on LSS and the use of very specific tools, which seems like an odd choice at an Audit conference.

“Objectives and Process Measures: ISO 13485:2016 and ISO 9001:2015” by Nancy Pasquan

I appreciate it when the session manager (person who introduces the speaker and manages time) does a safety moment. Way to practice what we preach. Seriously, it should be a norm at all conferences.

Connects with the audience with a confession that the speaker is here to share her pain.

Objective – where we are going. Provides a flow chart of mission/vision (scope) -> establish process -> right direction? -> monitor and measure.

Objectives should challenge the organization. Should not be too easy. References SMART. Covers objectives in very standard way. “Remember the purpose is to focus the effort of the entire organization toward these goals.” Links process objectives to the overall company objectives.

Process measures are harder. Uses training as an example, which tells me adult learning practice is not as much a part of the QBOK way of thinking as I would like. Kirkpatrick is a pretty well-known model.

That process measures will not tell us if we have the right process is a pretty loaded concept. Being careful of what you measure is good advice.

“Auditing Current Trends in Cleaning Validation” by Cathelene Compton

One of the trends in 2019 FDA Warning letters has been cleaning. While not one of the four big ones, cleaning validation always seems relevant and I’m looking forward to this presentation.

Starting with the fact that 15% of all observations on 483 forms relate to cleaning validation and documentation.

Reviews the three stages from the 2011 FDA Process Validation Guidance and then delves into a deeper validation lifecycle flowchart.

Some highlights:

Stage 1 – choosing the right cleaning agent; different manufacturers of cleaning agents; long-term damage to equipment parts and cleaning agent compatibility. Vendor study for cleaning agent; concentration levels; challenge the cleaning process with different concentrations.

Delves more into cleaning acceptance limits and the importance of calculating them in multiple ways. Stresses the importance of involving a toxicologist. Stresses the use of Permitted Daily Exposure and how it can be difficult to get the F-factors.
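The limit calculations mentioned above can be illustrated with a short sketch of the general PDE and carryover formulas (PDE = NOAEL × body weight ÷ product of the F-factors; carryover limit = PDE × next-product batch size ÷ maximum daily dose). All numbers below are made up for illustration; real values come from toxicology data and a qualified toxicologist’s assessment.

```python
# Illustrative sketch of PDE-based acceptance limit math.
# NOAEL, body weight, F-factors, batch size, and daily dose
# are placeholder values, not recommendations.

def pde(noael_mg_per_kg_day, body_weight_kg, f_factors):
    """Permitted Daily Exposure: NOAEL scaled by body weight,
    divided by the product of the adjustment (F) factors."""
    product = 1.0
    for f in f_factors:
        product *= f
    return noael_mg_per_kg_day * body_weight_kg / product

def maco(pde_mg, min_batch_size_mg, max_daily_dose_mg):
    """Maximum allowable carryover into the next product."""
    return pde_mg * min_batch_size_mg / max_daily_dose_mg

limit = pde(5.0, 50.0, [5, 10, 1, 1, 1])     # 5 mg/kg/day NOAEL, 50 kg adult
carryover = maco(limit, 2_000_000.0, 500.0)  # 2 kg batch, 500 mg daily dose
print(limit)      # 5.0 mg/day
print(carryover)  # 20000.0 mg
```

Calculating the limit several ways (PDE-based, dose-based, 10 ppm) and taking the most conservative result is the point the speaker is stressing.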

Ensure that analytical methods meet ICH Q2(R1). Recovery studies on materials of construction. For the cleaning agent, look for a target marker and check whether other components in the laboratory also use that marker. A common pitfall is a glassware washer that is not validated.

Trends around recovery factors; for example, recoveries for stainless steel should be 90%.

Discusses matrix rationales from the Mylan 483, stressing the need to ensure all toxicity levels are determined and pharmacological potency is accounted for.

Stage 2 – all studies should include visual inspection, micro, and analytical testing. Materials of construction and surface area calculations, and swabs on hard-to-clean or water hold-up locations. Chromatography must be assessed for extraneous peaks.

Verification vs. validation – validation is always preferred.

Training – qualify the individuals who swab. Qualify visual inspectors.

Should see campaign studies, clean hold studies and dirty equipment hold studies.

Stage 3 – continued verification is so critical, and it is where folks fall flat. Perform every 6 months, and no more than a year for manual cleaning. CIP should be under a periodic review of mechanical aspects, which means requalification can be 2-3 years out.

Risk Based Data Integrity Assessment

A quick overview: the risk-based approach will utilize three factors – data criticality, existing controls, and level of detection.

When assessing current controls, technical controls (properly implemented) are stronger than operational or organizational controls as they can eliminate the potential for data falsification or human error rather than simply reducing/detecting it. 

For criticality, it helps to build a table based on what the data is used for. For example:

For controls, use a table like the one below. Rank each column and then multiply the numbers together to get a final control ranking. For example, if a process has e-sign (1), no access control (3), and paper archival (2), then the control ranking would be 6 (1 x 3 x 2).

Determine detectability using the table below: rank each column and then multiply the numbers together to get a final detectability ranking.
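The multiplication scheme for these rankings can be sketched as follows. The control scores mirror the worked example in the text (e-sign = 1, no access control = 3, paper archival = 2); the two-column detectability scores are hypothetical.

```python
# Sketch of the per-factor ranking: rank each column of the table,
# then multiply the column rankings into a single factor score.
from math import prod

def ranking(column_scores):
    """Multiply per-column rankings into one factor ranking."""
    return prod(column_scores)

control = ranking([1, 3, 2])     # e-sign, no access control, paper archival
detectability = ranking([2, 2])  # hypothetical two-column detectability table
print(control)        # 6, matching the worked example in the text
print(detectability)  # 4
```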

Another way to look at these scores:

Multiply the above to determine a risk ranking and move ahead with mitigations. Mitigations should drive risk as low as possible, though the following table can be used to help determine priority.

Risk Rating | Action | Mitigation
>25 | High Risk – Potential Impact to Patient Safety or Product Quality | Mandatory
12-25 | Moderate Risk – No Impact to Patient Safety or Product Quality but Potential Regulatory Risk | Recommended
<12 | Negligible DI Risk | Not Required
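Putting the three factors together, here is a minimal sketch of the overall risk ranking and the action bands from the table above (the example scores are hypothetical):

```python
# Sketch of the overall data integrity risk ranking:
# multiply criticality, control, and detectability, then
# classify against the rating bands from the table above.

def risk_rating(criticality, control, detectability):
    """Multiply the three factor rankings into an overall rating."""
    return criticality * control * detectability

def classify(rating):
    """Map a rating to the action and mitigation bands."""
    if rating > 25:
        return "High Risk", "Mandatory"
    if rating >= 12:
        return "Moderate Risk", "Recommended"
    return "Negligible DI Risk", "Not Required"

rating = risk_rating(3, 6, 2)  # hypothetical scores -> 36
print(classify(rating))        # ('High Risk', 'Mandatory')
```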

In the case of long-term risk remediation actions, risk reducing short-term actions shall be implemented to reduce risk and provide an acceptable level of governance until the long-term remediation actions are completed.

Relevant site procedures (e.g., change control, validation policy) should outline the scope of additional testing through the change management process.

Reassessment of the system may be completed following the completion of remediation activities. The reassessment may be done at any time during the remediation process to document the impact of the remediation actions.

Once final remediation is complete, a reassessment of the equipment/system should be completed to demonstrate that the risk rating has been mitigated by the remediation actions taken. Think living risk assessment.