Layering metrics

We have these quality systems with lots of levers and interrelated components. And yet we select one or two metrics and realize that even if we meet them, we aren’t really measuring the right stuff, nor are we driving continuous improvement.

One solution is to create layered metrics, which basically means drilling down into your process and identifying the metrics at each step.

There are lots of ways to do this. An easy way to start is to use the 5-why process, a tool most folks are comfortable with.

So, for example, CAPA. It is pretty much agreed that CAPAs should be completed in a timely manner, which makes timely closure a top-level goal. Unfortunately, in this hypothetical example, we are suffering a less-than-100% closure rate (or whatever level is appropriate in your organization based on its maturity).

Why 1: Why was CAPA closure not 100%?
Because CAPA tasks were not closed on time.

Success factor needed for this step: CAPA tasks to be closed by due date.

Metric for this step: CAPA closure task success rate
Why 2: Why were CAPA tasks not closed on time?
Because individuals did not have appropriate time to complete CAPA tasks.

Metric for this step: Planned versus Actual time commitment
Why 3: Why did individuals not have appropriate time to complete CAPA tasks?
Because CAPA task due dates are guessed at.

Metric for this step: CAPA task adherence to target dates based on activity (e.g. if it takes 14 days to revise a document and another 14 days to train, the average document revision task should be 28 days)
Why 4: Why are CAPA task due dates guessed at?
Because appropriate project planning is not completed.

Metric for this step: Adherence to Process Confirmation
Why 5: Why is appropriate project planning not completed?
Because CAPAs are always determined on the last day the deviation is due.

Metric: Adherence to Root Cause Analysis process

I might report on the top-level CAPA closure rate and one or two of these, and keep the others in my process owner toolkit. Maybe we jump right to the last one as what we report on. It depends on what needs to be influenced in my organization, and it will change over time.
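To make the layering concrete, here is a minimal sketch (in Python) of how the top-level closure metric and one of the drill-down metrics could be computed from the same task data. All field names, dates, and durations below are hypothetical, not from any particular CAPA system.

```python
from datetime import date

# Hypothetical CAPA task records: due date, closure date,
# planned effort (days) and actual effort (days).
tasks = [
    {"due": date(2019, 3, 1),  "closed": date(2019, 2, 27), "planned": 28, "actual": 26},
    {"due": date(2019, 3, 1),  "closed": date(2019, 3, 10), "planned": 28, "actual": 37},
    {"due": date(2019, 4, 15), "closed": date(2019, 4, 15), "planned": 14, "actual": 14},
]

# Top-level metric (Why 1): CAPA closure task success rate,
# i.e. the fraction of tasks closed on or before their due date.
on_time = sum(t["closed"] <= t["due"] for t in tasks)
closure_rate = on_time / len(tasks)

# Drill-down metric (Why 2): planned versus actual time commitment,
# expressed here as the average overrun in days.
avg_overrun = sum(t["actual"] - t["planned"] for t in tasks) / len(tasks)

print(f"Task closure success rate: {closure_rate:.0%}")   # 2 of 3 on time
print(f"Average overrun vs. plan: {avg_overrun:.1f} days")
```

Because both metrics come from the same records, a dip in the top-level rate can be traced directly to the layer beneath it.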

It helps to compare this output against Donella Meadows’ 12 system leverage points.

Donella Meadows’ 12 System Leverage Points

These metrics go from 3 “goals of the system,” with completing CAPA tasks effectively and on time, to 4 “self-organize” and 5 “rules of the system.” The set also has nice feedback loops based on the process confirmations. I’d view them as potentially pretty successful. Of course, we would test these, tinker, and basically experiment until we find the right set of metrics that improves our top-level goal.

Coherence and Quality

Sonja Blignaut of More Beyond wrote a good post on coherence, “All that jazz … making coherence coherent,” where she states at the end: “In order to remain competitive and thrive in the new world of work, we need to focus our organisation design, leadership and strategic efforts on the complex contexts and create the conditions for coherence.”

Ms. Blignaut defines coherence mainly through analogy and metaphor, so I strongly recommend reading the original post.

In my post “Forget the technology, Quality 4.0 is all about thinking” I spelled out some principles of system design.

Balance: The system creates value for the multiple stakeholders. While the ideal is to develop a design that maximizes the value for all the key stakeholders, the designer often has to compromise and balance the needs of the various stakeholders.

Congruence: The degree to which the system components are aligned and consistent with each other and the other organizational systems, culture, plans, processes, information, resource decisions, and actions.

Convenience: The system is designed to be as convenient as possible for the participants to implement (a.k.a. user friendly). The system includes specific processes, procedures, and controls only when necessary.

Coordination: System components are interconnected and harmonized with the other (internal and external) components, systems, plans, processes, information, and resource decisions toward common action or effort. This is beyond congruence and is achieved when the individual components of a system operate as a fully interconnected unit.

Elegance: Complexity vs. benefit – the system includes only as much complexity as is necessary to meet the stakeholders’ needs. In other words, keep the design as simple as possible while delivering the desired benefits. It often requires looking at the system in new ways.

Human: Participants in the system are able to find joy, purpose and meaning in their work.

Learning: Knowledge management, with opportunities for reflection and learning (learning loops), is designed into the system. Reflection and learning are built into the system at key points to encourage single- and double-loop learning from experience, to improve future implementation, and to systematically evaluate the design of the system itself.

Sustainability: The system effectively meets the near- and long-term needs of the current stakeholders without compromising the ability of future generations of stakeholders to meet their own needs.

I used the term congruence to summarize the point Ms. Blignaut is reaching with alignment and coherence. I love her mapping of these against the Cynefin framework; it makes a great deal of sense to see alignment for the obvious domain and the need for coherence arising from complexity.

So what might driving for coherence look like? Well, if we start with coherence being the long-range order (the jazz analogy), we are building systems that build order through their function – they learn and are sustainable.

To apply this in the framework of ICH Q10 or the US FDA’s “Guidance for Industry: Quality Systems Approach to Pharmaceutical CGMP Regulations,” one way to drive for coherence is to use similar building blocks across our systems: risk management, data integrity, and knowledge management are all examples of that.

Materials Receipt Controls

Significantly, your firm failed to perform identification testing for all incoming glycerin lots to verify identity and determine whether diethylene glycol (DEG) or ethylene glycol (EG) was present. Because you did not test each lot and container of glycerin using the USP identification test that detects these hazardous impurities, you failed to ensure the acceptability of component lots used in drug product manufacture. DEG contamination in pharmaceutical products has caused lethal poisoning incidents in humans worldwide.

 FDA Warning Letter of 02-Nov-2018 to Product Packaging West, Inc.

First of all, ouch. This brings to mind an old investigation that drew a lot of attention a few years back. It involved a tanker truck and a hurricane, but still, lots of memories.

This Warning Letter brings to mind questions about receipt of materials. So here are some top level thoughts.

Choosing tests should be a risk-based approach, evaluating what the material is, what it is used for, the supplier qualification level, and the history of test results. A critical raw material with custom chemistry from a supplier that has had issues is a different matter than an off-the-shelf component that hasn’t had a problem in 10 years. But there should always be some basic identity testing, especially if it is listed in a pharmacopeia. This should be done through a formal process, with periodic review.

Have a process in place for the delivery of material to ensure that each container or grouping of containers of material is examined visually for correct labeling (including correlation between the name used by the supplier and the in-house name/code, if these are different), container damage, broken seals, and evidence of tampering or contamination. A good incoming receipt inspection includes:

  • Each lot within a shipment of material or components is assigned a distinctive code and a unique internal number so the material or component can be traced through manufacturing and distribution
  • A check to confirm the origin of materials from approved manufacturers and approved distributors
  • A visual examination of each shipping container for appropriate labelling, signs of damage, or contamination
  • Use of a predefined checklist for the inspection

Incoming material should be quarantined prior to approval for use. I recommend a separate quarantine area for incoming versus material segregated for investigations or issues.

Supplier qualification deserves a post of its own.

Change Management of multi-site implementations

A colleague asks in response to my post Group change controls:

… deploying a Learning + documentation system … all around the world [as a global deployment] … do we initiate a GLOBAL CC or does each site create a local CC.

The answer is usually, in my experience, both.

Change management is about process, organization, technology, and people. Any change control needs to capture the actions necessary to successfully implement the change.

So at implementation I would do two sets of changes: a global change control to capture all the global-level changes and to implement the new (hopefully) harmonized system, and then a local change control at each site to capture all the site impact.

Process
  • Global: Introduce the new global process. Update all global standards, procedures, etc.
  • Local: How will local procedures change? How will local system interactions change? Clean up all the local procedures to ensure they point to the new global procedures and are harmonized as necessary.

Technology
  • Global: Computer system validation. Global interfaces. Global migration strategy.
  • Local: Local interfaces (if any) and configurations. Are local technologies being replaced? Plan for decommissioning. Local migration (tactical).

People
  • Global: What do people do on the global level? How will people interact within the system in the future? Global training.
  • Local: What will be different for people at each individual site? Localized training.

Organization
  • Global: Will there be new organizational structures in place? Is this system being run out of a global group? How will communication be run? System governance and change management.
  • Local: Site organization changes. How will different organizations and sub-organizations adopt, adapt, and work with the system?

If you just have a global change control you are at real risk of missing a ton of local uniqueness and leaving in place a bunch of old ways of thinking and doing things.

If you just do local change controls you will be at risk of not seeing the big picture and getting the full benefits of harmonization. You also will probably have way too many change controls that regurgitate the same content, and then are at risk of divergence – a compliance nightmare.

This structure allows you to better capture the diversity of perspectives at the sites. A global change control tends to be dominated by the folks who own the system (all your document and training folks in this example), while a site change will hopefully include other functions, such as engineering and operations. Trust me, they will have all sorts of impact.

This structure also allows you to have rolling implementations. The global change control implements when the technology is validated and the core processes are effective. Each site can then implement based on its site deliverables. This is useful when deploying a document management system and you have a lot of migration.

Multisite changes

As part of the deployment, make sure to think through matters of governance, especially change management. Once deployed, it is easy to imagine many changes just needing a central change control. But be sure to have thought through the criteria that will require site change controls – such as impact on other interrelated systems, site validation, or different implementation dates.

I’ve done a lot of changes and a lot of deployments of systems. This structure has always worked well. I’ve never done just a global change control and been happy with the final results; they always leave too many unchanged elements behind that come back to haunt you. In the last year I’ve done two major changes to great success with this model, and seen one where the decision not to use this model has left us with lots of little messes to clean up.

As a final comment, keep the questions coming, and I would love to hear other folks’ perspectives on these matters. I’m perpetually learning and I know there are lots of permutations to explore.

Questions to ask when contemplating data integrity

Here are a set of questions that should be evaluated in any data integrity risk assessment/evaluation.

  1. Do you have a list of all GxP activities performed in your organization?
  2. Do you know which GxP activities involve intensive data handling tasks?
  3. Do you know the automation status of each GxP activity?
  4. Have you identified a list of GxP records that will be created by each GxP activity?
  5. Have you determined the format in which the official GxP records will be maintained?
  6. Have you determined if a signature is required for each GxP record?
  7. Do you have controls to ensure that observed, measured or processed GxP data is accurate?
  8. Do you have controls to ensure that GxP data is maintained in full without being omitted, discarded or deleted?
  9. Do you have controls to ensure that naming, measurement units, and value limits are defined and applied consistently during GxP data handling?
  10. Do you have controls to ensure that GxP data is recorded at the same time as the observation/measurement is made or shortly thereafter?
  11. Do you have controls to ensure that GxP data is recorded in a clear and human readable form?
  12. Do you have controls to ensure that data values represent the first recording of the GxP data or an exact copy of the original data?
  13. Do you have SOP(s) addressing management of GxP documents and records and good documentation practices?
  14. Do you have SOP(s) addressing the escalation of quality events that also cover data integrity breaches?
  15. Do you have SOP(s) addressing self-inspections/audits with provisions for data integrity?
  16. Do you have SOP(s) addressing management of third parties with provisions for the protection of data integrity?
  17. Do you have SOP(s) for Computerized Systems Compliance?
  18. Do you have SOP(s) for training and does it include training on data integrity for employees handling GxP data?
  19. For GxP activities that generate data essential for product quality, product supply or patient safety, do you have controls to prevent or minimize:
    • Process execution errors due to human inability, negligence or inadequate procedures?
    • Non-compliance due to unethical practices such as falsification?
  20. Do you have controls to ensure that only authorized employees are granted access to GxP data based on the requirements of their job role?
  21. Do you have controls to ensure that only the GxP activity owner or delegate can grant access to the GxP data?
  22. Do you have controls to eliminate or reduce audiovisual distractions for GxP activities with intensive data handling tasks?
  23. Do you assess the design and configuration of your computerized GxP activity to minimize manual interventions where possible?
  24. Do you have controls for review of audit trail data at relevant points in the process to support important GxP actions or decisions?
  25. Do you have controls, supervision or decision support aids to help employees who perform error-prone data handling activities?
  26. Do you have controls to ensure business continuity if a GxP record essential for product quality, product supply, or patient safety is not available – both when there is a temporary interruption to GxP activity and during a disaster scenario?
  27. Do you have a process for ensuring that data integrity requirements are included in the design and configuration of GxP facilities where data handling activities take place?
  28. Have you assessed the compliance status of computerized systems used to automate GxP activities?
  29. Do you have controls to prevent data capture and data handling errors during GxP data creation?
  30. Do you have controls to ensure the accuracy of date and time applied to GxP data, records and documents?
  31. Do you have controls to ensure that changes to GxP data are traceable to who did what, when and if relevant why during the lifecycle of the GxP data?
  32. Do you have controls to ensure that – when required – legally binding signatures can be applied to GxP records and their integrity is ensured during the retention period of the GxP record?
  33. Do you have controls to ensure that GxP computerized systems managing GxP data can:
    • Allow access only to employees with proper authorization?
    • Identify each authorized employee uniquely?
  34. Do you have controls to ensure that GxP data can be protected against accidental or willful harm?
  35. Do you have controls to keep GxP data in a human readable form for the duration of the retention period?
  36. Do you have controls to ensure that the process for offline retention and retrievals is fit for its intended purpose?
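One way to work through a list like this is to treat it as a scored gap assessment that feeds a remediation plan. Here is a minimal sketch of that idea; the answers, question labels, and scoring scheme are entirely hypothetical.

```python
# Hypothetical gap assessment: each question from the checklist is
# answered "yes", "partial", or "no".
answers = {
    "Q1 GxP activity inventory":  "yes",
    "Q13 GDP/record-management SOPs": "partial",
    "Q24 Audit trail review":     "no",
}

# Simple scoring scheme (an assumption, not a regulatory standard).
SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

# Overall coverage and the list of open gaps to remediate.
score = sum(SCORES[a] for a in answers.values()) / len(answers)
gaps = [question for question, answer in answers.items() if answer != "yes"]

print(f"Coverage: {score:.0%}")
print("Gaps:", gaps)
```

A structure like this makes the assessment repeatable from period to period, so you can show that the gap list is shrinking over time.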