Learning Culture

Over at the Harvard Business Review there is a great article, “4 Ways to Create a Learning Culture on Your Team.” A learning culture is a quality culture, and enabling a learning culture should be a key element of a robust knowledge management system.

Frankly, this is an attribute that I think needs to be better reflected in the QBoK, as it is a core trait of a successful quality leader. And supporting learning is a core element of any professional society.

ALCOA or ALCOA+

My colleague Michelle Eldridge recently shared this video from learnaboutgmp on the differences between ALCOA and ALCOA+. It’s cute, it’s to the point, and it makes a nice primer.

As I’ve mentioned before, the MHRA in its data integrity guidance did take a dig at ALCOA+:

The guidance refers to the acronym ALCOA rather than ‘ALCOA +’. ALCOA being Attributable, Legible, Contemporaneous, Original, and Accurate and the ‘+’ referring to Complete, Consistent, Enduring, and Available. ALCOA was historically regarded as defining the attributes of data quality that are suitable for regulatory purposes. The ‘+’ has been subsequently added to emphasise the requirements. There is no difference in expectations regardless of which acronym is used since data governance measures should ensure that data is complete, consistent, enduring and available throughout the data lifecycle.

Two things should be drawn from this:

  1. Data Integrity is a set of best practices that are still developing, so make sure you are pushing that development and not ignoring it. Much better to be pushing the boundaries of the “c” in cGMP than to end up being surprised.
  2. I actually agree with the MHRA. Complete, consistent, enduring, and available are really just subsets of the others. But, as they also say, the acronym means little; just make sure you are doing it.

Data Integrity, it’s the new quality culture.

Master and Transactional Data Management

Mylan’s 483 observation states that changes were being made to a LIMS system outside of the site’s change control process.

This should obviously be read in light of data integrity requirements. And it looks like in this case there was no way to produce a list of changes, which is a big audit trail no-no.

It’s also an area where I’ve seen a lot of folks make missteps, and frankly I’m not sure I’ve always gotten it right.

There is a real tendency to look at the use of our enterprise systems and want all actions and approvals to happen within the system. This makes sense: we want to reduce our touch points. But there are some important items to consider before moving ahead with that approach.

Change control is about assessing, handling, and releasing the change, most importantly in light of validated state and regulatory impact. It serves disposition. As such, it is a good thing to streamline our changes into one system, to ensure every change gets assessed equally, gets the right level of handling it needs, and has a proper release.

Allowing a computer system to balkanize your changes, in the end, doesn’t really simplify anything. And in this day of master data management, of heavily aligned systems that talk to each other, being nimble requires us to know with a high degree of certainty that when we apply a change we are applying it thoroughly.

The day of separated computer systems is long over. It is important that our change management system takes that into account and offers single-stop shopping.

Mylan gets a 32 page 483

The FDA’s April 483 for Mylan Pharmaceuticals has been at the forefront of a lot of conversations in the last week. Let’s be honest, the FDA posts a 32 page, 13 observation 483 report on any manufacturer and it will be news. One as prominent as Mylan, doubly so. On the same day, the FDA also posted a 2016 483 and a 2017 warning letter against a Mylan facility in India.

The 483 is a hit parade of observations, starting with the first observation, failure of the quality unit, which includes a reference to lack of quality approval of change controls.

What everyone has been intensely focusing on is the strong emphasis on cleaning, with 11 pages dedicated to failures in cleaning validation.

Which, to be frank, is a big deal in a multi-product facility.

Read the 483, and when doing so evaluate your site’s cleaning program. Ask yourself some of these questions:

  • Are there appropriate cleaning procedures in place for all product-contact equipment and product-contact accessories?
  • Are there appropriate cleaning procedures in place for facility cleaning (dispensing, sampling room…)?
  • Do your procedures include the sequence of the cleaning activities? Are they sufficiently detailed?
  • Do the procedures address the different scenarios (cleaning between different batches of the same product, cleaning between product changes, holding time before and after cleaning…)?
  • Do the procedures address who is responsible for performing the cleaning?
  • Are the validation study, the acceptance criteria, the revalidation justification, and the key documentation approved by Quality? Do they include a clear status on the cleaning process?
  • Is the strategy used for the cleaning validation clearly established? (matrix approach, dedicated equipment, worst case scenario, grouping equipment, equipment train…)
  • Are batches manufactured after the cleaning validation run released only after completion of the cleaning validation?
  • Are the acceptance criteria (products, detergents, cleaning agents, micro…) scientifically established and followed? Do these acceptance criteria include a safety margin?

Approval of cleaning validation is a key responsibility of the quality unit that involves some very specific requirements. These requirements should be built into the quality systems, including validation, deviation, and change management.

Likelihood of occurrence in risk estimation

People use imprecise words to describe the chance of events all the time — “It’s likely to rain,” or “There’s a real possibility they’ll launch before us,” or “It’s doubtful the nurses will strike.” Not only are such probabilistic terms subjective, but they also can have widely different interpretations. One person’s “pretty likely” is another’s “far from certain.” Our research shows just how broad these gaps in understanding can be and the types of problems that can flow from these differences in interpretation.

“If You Say Something Is ‘Likely,’ How Likely Do People Think It Is?” by Andrew Mauboussin and Michael J. Mauboussin

Risk estimation is based on two components:

  • The probability of the occurrence of harm
  • The consequences of that harm

A third element, detectability of the harm, is used in many tools.
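To make that concrete, here is a minimal sketch of how the components might be combined into a single score, in the style of an FMEA risk priority number. The 1-5 scales and the example values are illustrative assumptions, not a prescribed standard.

```python
def risk_priority_number(occurrence: int, severity: int, detectability: int) -> int:
    """Multiply the three risk components; a higher number means higher risk."""
    for name, value in (("occurrence", occurrence),
                        ("severity", severity),
                        ("detectability", detectability)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be on the assumed 1-5 scale, got {value}")
    return occurrence * severity * detectability

# Example: occurs occasionally (3), major consequences (4), hard to
# detect (4) -> a score of 48 out of a possible 125.
print(risk_priority_number(occurrence=3, severity=4, detectability=4))
```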

Oftentimes we simplify probability of occurrence into likelihood. The article quoted above is a good, simple primer on why we should be careful with that. It offers three recommendations that I want to talk about. Go read the article and then come back.

I. Use probabilities instead of words to avoid misinterpretation

Avoid the simplified qualitative probability levels, such as “likely to happen”, “frequent”, “can happen, but not frequently”, “rare”, “remote”, and “unlikely to happen.” Instead, determine numeric probability levels. Even if you are relying heavily on expert opinion to drive probabilities, give ranges of numbers such as “<10% of the time”, “20-60% of the time”, and “greater than 60% of the time.”

It helps to have several sets of scales.
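As a minimal sketch of what one such scale might look like, the snippet below captures a probability scale as numeric bands rather than words. The band boundaries and level names are assumptions adapted from the illustrative ranges above, not a standard.

```python
# Assumed, illustrative probability scale: each level is a numeric band.
PROBABILITY_SCALE = [
    ("low",    0.00, 0.10),   # fewer than 10% of opportunities
    ("medium", 0.10, 0.60),   # 10-60% of opportunities
    ("high",   0.60, 1.01),   # more than 60% of opportunities
]

def classify_probability(p: float) -> str:
    """Return the scale level whose numeric band contains probability p."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    for level, lower, upper in PROBABILITY_SCALE:
        if lower <= p < upper:
            return level
    return PROBABILITY_SCALE[-1][0]

# Example: an event estimated at 35% of opportunities falls in the medium band.
print(classify_probability(0.35))
```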

The article has an awesome graph that really shows why we should avoid words.

[Figure from the article: the range of probabilities people assign to common likelihood words]

II. Use structured approaches to set probabilities

Ideally, pressure test these using a Delphi approach or something similar, like paired comparisons or absolute probability judgments. Using historical data and expert opinion, spend the time to make sure your probabilities actually capture reality.
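As a hedged sketch of how one round of a structured approach might be summarized, the snippet below aggregates a panel’s probability estimates with the median and flags wide disagreement as a cue for another round of discussion. The convergence threshold and the example estimates are assumptions for illustration.

```python
from statistics import median, quantiles

def aggregate_expert_estimates(estimates: list[float]) -> dict:
    """Summarize one Delphi-style round of probability estimates (each 0-1)."""
    q1, _, q3 = quantiles(estimates, n=4)
    return {
        "median": median(estimates),
        "spread": q3 - q1,                 # interquartile range across experts
        "converged": (q3 - q1) <= 0.2,     # assumed convergence threshold
    }

# Example: five experts estimate the probability of occurrence; the wide
# spread suggests more discussion before fixing the value in the assessment.
print(aggregate_expert_estimates([0.05, 0.10, 0.15, 0.10, 0.40]))
```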

Be aware that when using historical data, if there is a very low frequency of occurrence historically, then any estimate of probability will be uncertain. In these cases it’s important to use predictive techniques and simulations. Monte Carlo, anyone?
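Here is a minimal Monte Carlo sketch of that idea, assuming a sparse historical record (a handful of failures across a modest number of batches, with all counts made up for illustration). It samples a plausible failure rate from a Beta posterior and estimates the chance of seeing at least one occurrence in a planned campaign, rather than committing to a single overconfident point estimate.

```python
import random

observed_failures = 2       # assumed historical count
observed_batches = 150      # assumed historical count
planned_batches = 60        # assumed size of next year's campaign
trials = 20_000             # number of simulated campaigns

campaigns_with_a_failure = 0
for _ in range(trials):
    # Sample a plausible failure rate from a Beta posterior -- a common
    # choice when the observed count of events is very low.
    rate = random.betavariate(observed_failures + 1,
                              observed_batches - observed_failures + 1)
    # Simulate one campaign of batches at that rate.
    failures = sum(random.random() < rate for _ in range(planned_batches))
    if failures >= 1:
        campaigns_with_a_failure += 1

print(f"Estimated probability of at least one failure next year: "
      f"{campaigns_with_a_failure / trials:.2f}")
```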

III. Seek feedback to improve your forecasting

Risk management is a lifecycle approach, and you need to be applying good knowledge management to that lifecycle. Have a mechanism to learn from the risk assessments you conduct, and feed that back into your scales. These scales should never be a once-and-done exercise.

In Conclusion

Risk management is not new. It’s been around long enough that many companies have the elements in place. What we need to be doing is driving to consistency. Drive out the vague and build best practices that will give the best results. When it comes to likelihood there is a wide body of research on the subject, and we should be drawing from it as we work to improve our risk management.

Move beyond setting your scales at the beginning of a risk assessment. Scales should exist as a living library that is drawn upon for specific risk evaluations. This will help ensure that all participants in the risk assessment have a working vocabulary of the criteria, and it will keep us honest and prevent any intentional or unintentional manipulation of the criteria based on an expected outcome.
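Purely as an illustrative sketch, with assumed field names and an assumed example entry, such a living library might look like versioned, quality-approved scale records that assessments pull from rather than redefine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProbabilityScale:
    name: str
    version: int
    bands: tuple       # (level, lower bound, upper bound) triples
    approved_by: str   # quality approval keeps the library controlled

# Hypothetical library keyed by (scale name, version).
SCALE_LIBRARY = {
    ("batch-failure", 2): ProbabilityScale(
        name="batch-failure",
        version=2,
        bands=(("low", 0.0, 0.1), ("medium", 0.1, 0.6), ("high", 0.6, 1.0)),
        approved_by="Quality Unit",
    ),
}

def get_scale(name: str, version: int) -> ProbabilityScale:
    """Pull an approved, versioned scale instead of defining criteria ad hoc."""
    return SCALE_LIBRARY[(name, version)]

# Every assessment references the same controlled definition.
print(get_scale("batch-failure", 2).bands)
```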
