ASQ Audit Conference – Day 2 Morning

Jay Arthur “The Future of Quality”

Starts with “our heroes are gone” and “it is time to stand on our own two feet.”

Focuses on the time and effort spent training people on Lean and Six Sigma, and how many of them never actually do projects. Basic point is that we use the tools in old ways that are not nimble or aligned to today’s needs. The tools we use versus the tools we are taught.

Hacking Lean Six Sigma is along a similar line to Art Smalley’s four types of problems.

Applying the spirit of hacking to quality.

Covers value stream mapping and spaghetti diagrams with a focus on “the delays in between.” Talks about why control charts are not more standard. Basic point is that people don’t spend enough time with the tools of quality. A point I have opinions on that will end up in another post.

Overcooked data versus raw data – summarized data has little or no nutritional value.

Brings this back to the issue being a lack of problem diagnosis, not problem solving. Comes back to a need for a few easy tools rather than the long tail of Six Sigma.

This talk is very focused on LSS and the use of very specific tools, which seems like an odd choice at an Audit conference.

“Objectives and Process Measures: ISO 13485:2016 and ISO 9001:2015” by Nancy Pasquan

I appreciate it when the session manager (person who introduces the speaker and manages time) does a safety moment. Way to practice what we preach. Seriously, it should be a norm at all conferences.

Connects with the audience with a confession that the speaker is here to share her pain.

Objective – where we are going. Provides a flow chart of mission/vision (scope) -> establish process -> right direction? -> monitor and measure.

Objectives should challenge the organization and should not be too easy. References SMART. Covers objectives in a very standard way. “Remember the purpose is to focus the effort of the entire organization toward these goals.” Links process objectives to the overall company objectives.

Process measures are harder. Uses training as an example, which tells me adult learning practice is not as much a part of the QBOK way of thinking as I would like. Kirkpatrick is a pretty well-known model.

“Process measures will not tell us if we have the right process” is a pretty loaded concept. Being careful of what you measure is good advice.

“Auditing Current Trends in Cleaning Validation” by Cathelene Compton

One of the trends in 2019 FDA Warning letters has been cleaning. While not one of the four big ones, cleaning validation always seems relevant and I’m looking forward to this presentation.

Starting with the fact that 15% of all observations on 483 forms relate to cleaning validation and documentation.

Reviews the three stages from the 2011 FDA Process Validation Guidance and then delves into a deeper validation lifecycle flowchart.

Some highlights:

Stage 1 – choosing the right cleaning agent; different manufacturers of cleaning agents; long-term damage to equipment parts and cleaning agent compatibility. Vendor study for cleaning agent; concentration levels; challenge the cleaning process with different concentrations.

Delves more into cleaning acceptance limits and the importance of calculating them in multiple ways. Stresses the importance of involving a toxicologist. Stresses the use of Permitted Daily Exposure and how it can be difficult to get the F-factors.
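To make the limit discussion concrete, here is a rough sketch of the kind of health-based carryover calculation being described, using the standard PDE formula with its F-factors. Every number and parameter below is a hypothetical illustration, not her example or a guidance value.

```python
# Hedged sketch of a PDE-based carryover limit; all numbers are hypothetical.

def pde_mg_per_day(noel_mg_per_kg, body_weight_kg=50,
                   f1=5, f2=10, f3=1, f4=1, f5=1):
    """Permitted Daily Exposure from a NO(A)EL and the adjustment
    ("F") factors the speaker notes can be hard to source."""
    return (noel_mg_per_kg * body_weight_kg) / (f1 * f2 * f3 * f4 * f5)

def maco_mg(pde_prev_mg_per_day, min_batch_size_mg_next, max_daily_dose_mg_next):
    """Maximum Allowable Carryover of the previous product into the next one."""
    return pde_prev_mg_per_day * min_batch_size_mg_next / max_daily_dose_mg_next

# Illustrative inputs only
pde = pde_mg_per_day(noel_mg_per_kg=5.0)            # 5 mg/kg/day NOEL
limit = maco_mg(pde,
                min_batch_size_mg_next=50_000_000,  # 50 kg batch, in mg
                max_daily_dose_mg_next=500)         # 500 mg/day dose
print(f"PDE = {pde:.1f} mg/day, MACO = {limit:,.0f} mg")
```

One common reading of the “calculate in multiple ways” point is to also run the older 1/1000-of-dose and 10 ppm criteria and take the most conservative result.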

Ensure that analytical methods meet ICH Q2(R1). Recovery studies on materials of construction. For the cleaning agent, look for a target marker and check whether other components in the laboratory also use this marker. A pitfall is the glassware washer not being validated.

Trends around recovery factors, for example recoveries for stainless steel should be 90%.

Discusses matrix rationales from the Mylan 483, stressing the need to ensure all toxicity levels are determined and pharmacological potency is accounted for.

Stage 2 – all studies should include visual inspection, micro, and analytical testing. Materials of construction and surface area calculations, and swabs on hard-to-clean or water hold-up locations. Chromatography must be assessed for extraneous peaks.
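As a companion to the limit sketch above, this is one way the surface area and recovery pieces might roll up into a per-swab acceptance limit; the areas, swab size, and recovery value are hypothetical illustrations, not figures from the talk.

```python
# Hedged sketch: converting an overall carryover limit into a per-swab limit.
# All figures are hypothetical.

def swab_limit_ug(maco_mg, total_shared_area_cm2,
                  swab_area_cm2=25, recovery=0.9):
    """Per-swab acceptance limit, tightened by the recovery factor so that
    uncorrected swab results can be compared to it directly."""
    limit_per_cm2_mg = maco_mg / total_shared_area_cm2
    return limit_per_cm2_mg * swab_area_cm2 * recovery * 1000  # mg -> µg

# e.g. a 100 mg MACO spread over 250,000 cm2 of shared product-contact surface
print(f"{swab_limit_ug(100, 250_000):.1f} µg per 25 cm2 swab")
```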

Verification vs. validation – validation always preferred.

Training – qualify the individuals who swab. Qualify visual inspectors.

Should see campaign studies, clean hold studies and dirty equipment hold studies.

Stage 3 – continued verification is so critical, and is where folks fall flat. Do it every 6 months, and no more than a year, for manual cleaning. CIP should be under a periodic review of mechanical aspects, which means requalification can be 2-3 years out.

Layering metrics

We have these quality systems with lots of levers and interrelated components. And yet we select one or two metrics and realize that even if we meet them, we aren’t really measuring the right stuff, nor are we driving continuous improvement.

One solution is to create layered metrics, which basically means drilling down your process and identifying the metrics at each step.

Lots of ways to do this. An easy way to start is to use the 5-why process, a tool most folks are comfortable with.

So for example, CAPA. It is pretty much agreed upon that CAPAs should be completed in a timely manner, which makes this a top-level goal. Unfortunately, in this hypothetical example, we are falling short of a 100% closure goal (or whatever level is appropriate in your organization based on maturity).

Why 1: Why was CAPA closure not 100%?
Because CAPA tasks were not closed on time.

Success factor needed for this step: CAPA tasks to be closed by due date.

Metric for this step: CAPA closure task success rate

Why 2: Why were CAPA tasks not closed on time?
Because individuals did not have appropriate time to complete CAPA tasks.

Metric for this step: Planned versus actual time commitment

Why 3: Why did individuals not have appropriate time to complete CAPA tasks?
Because CAPA task due dates are guessed at.

Metric for this step: CAPA task adherence to target dates based on activity (e.g. it takes 14 days to revise a document and another 14 days to train, so the average document revision task should be 28 days)

Why 4: Why are CAPA task due dates guessed at?
Because appropriate project planning is not completed.

Metric for this step: Adherence to process confirmation

Why 5: Why is appropriate project planning not completed?
Because CAPAs are always determined on the last day the deviation is due.

Metric for this step: Adherence to root cause analysis process

I might only report on the top CAPA closure rate and 1 or 2 of these, and keep the others in my process owner toolkit. Maybe we jump right to the last one as what we report on. It depends on what needs to be influenced in my organization, and it will change over time.
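If it helps to see the layers as data, here is a minimal sketch of how the top-level closure rate and one of the drill-down metrics might be pulled from CAPA task records. The field names and numbers are hypothetical, not any particular QMS.

```python
# Hedged sketch of layered CAPA metrics; records and fields are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CapaTask:
    due: date
    closed: Optional[date]   # None = still open
    planned_days: int        # effort estimated at planning
    actual_days: int         # effort actually spent

tasks = [
    CapaTask(date(2019, 10, 1), date(2019, 9, 30), planned_days=28, actual_days=26),
    CapaTask(date(2019, 10, 1), date(2019, 10, 15), planned_days=14, actual_days=40),
    CapaTask(date(2019, 11, 1), None, planned_days=28, actual_days=35),
]

# Top-level metric (Why 1): CAPA tasks closed by their due date
on_time = sum(t.closed is not None and t.closed <= t.due for t in tasks) / len(tasks)

# Layer metric (Why 2): planned versus actual time commitment
effort_ratio = sum(t.actual_days for t in tasks) / sum(t.planned_days for t in tasks)

print(f"On-time task closure: {on_time:.0%}; actual/planned effort: {effort_ratio:.2f}")
```

The lower layers (date realism by activity, process confirmation, root cause timing) would be further fields on the same records, which is part of why keeping them in the process owner toolkit rather than on the top-level report is attractive.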

It helps to compare this output against the 12 system leverage points.

Donella Meadows 12 System Leverage Points

These metrics go from 3 “goals of the system” (completing CAPA tasks effectively and on time) to 4 “self-organize” and 5 “rules of the system.” The set also has nice feedback loops based on the process confirmations. I’d view them as potentially pretty successful. Of course, we would test these and tinker and basically experiment until we find the right set of metrics that improves our top-level goal.

Measures of success for changes

A colleague asks:

Is it a compliance risk to extend timelines on a change control?

I want to take a step back to an important fundamental of change management to answer this question. All changes are done to realize strategic purposes; a good change management system is all about accelerating change. From the big transformations to the emergency changes that keep product being made, each and every change has a strategic goal.

changing business environment

From this alignment to the strategy, each change has success metrics. Success metrics include economic, quality, technical, and organizational (among others), and they drive the how and the when of our change.

For example, a change driven by a CAPA to prevent recurrence will potentially have a different timeline than a change tied to a strategic goal to leverage a new way of working. But both have timelines driven by needs ranging from the strategic to the tactical, usually filtered through a risk-based prioritization tool.

And sometimes these change. The compliance aspect is not so much “did you extend” as “did you know what was happening with the change control in enough time to influence it in such a way as to assure meeting the how.”

The KPIs and other measures built into your system should monitor and ensure your changes reach the intended benefits.

manage for success

To return to the original question. Unlike deviations/nonconformances, where there is a specific requirement to complete them in a timely way, and CAPAs, where the root cause needs to be dealt with as soon as possible, change controls have their own internal timeline based on the drivers (which may be a CAPA). Extensions are not bad when looked at in a specific, one-by-one change control approach. Instead, they are indicative of larger troubles in the system and should be dealt with holistically to ensure you get the maximum benefit from your changes in the best possible time.

Quality Metrics – Not Dead yet

Pink Sheet has an update this week on the FDA’s Quality Metrics initiative – US FDA Quality Metrics Initiative Continues Moving Forward … Quietly.

This is behind a paywall so may not be viewable by all.

The major takeaways were:

  1. The initiative is still happening
  2. The FDA wants to remind companies why they are doing this in the first place
  3. They are starting a pilot real “soon” now

These metrics have been a hard sell within Pharma. I’ll be curious what steps the FDA will be taking to rebrand the effort.