Data, and all that jazz

As we all try to figure out just exactly what Industry 4.0 and Quality 4.0 mean, it is not an exaggeration to say “data is your most valuable asset.” Yet we all struggle to actually get a benefit from this data, and data integrity is an area of intense regulatory concern.

To truly have value, our data needs to be properly defined, relevant to the tasks at hand, structured so that it is easy to find and understand, and of high enough quality that it can be trusted. Without that we just have noise.

Apply principles of good master data management and data integrity. Ensure systems are appropriately built and maintained.

Understand why data matters, how to pick the right metrics, and how to ask the right questions of data. Understanding correlation vs. causation, so you can decide when to act on an analysis and when not to, is critical.

In the 2013 article “Keep Up with Your Quants,” Thomas Davenport lists six questions that should be asked to evaluate conclusions obtained from data:

1. What was the source of your data?

2. How well do the sample data represent the population?

3. Does your data distribution include outliers? How did they affect the results?

4. What assumptions are behind your analysis? Might certain conditions render your assumptions and your model invalid?

5. Why did you decide on that particular analytical approach? What alternatives did you consider?

6. How likely is it that the independent variables are actually causing the changes in the dependent variable? Might other analyses establish causality more clearly?

Framing data, being able to ask the right questions, is critical to being able to use that data and make decisions. In the past it was adequate for a quality professional to have familiarity with a few basic tools. Today it is critical to understand basic statistics. As Nate Silver advises in an interview with HBR: “The best training is almost always going to be hands-on training,” he says. “Getting your hands dirty with the data set is, I think, far and away better than spending too much time doing reading and so forth.”

Understanding data is a key ability and is necessary to thrive. It is time to truly contemplate the data ecosystem as a system and stop treating it as a specialized area of the organization.

Contamination Control, Risk Management and Change Control

Microbiologists won’t be sequestered in the laboratory, running samples and conducting environmental testing, once the revisions proposed for Annex 1 of the EU and Pharmaceutical Inspection Cooperation Scheme (PIC/S) GMP guides take effect, Annex 1 rapporteur Andrew Hopkins said Oct. 15.

They will have a broader role that includes conducting risk assessments to ensure that sterile products are made as contamination-free as possible, said Hopkins, who is an inspector for the UK Medicines and Healthcare products Regulatory Agency.

Pink Sheet, “EU GMP Annex 1 Would Give Microbiologists A Greater Role In Sterility Assurance, Rapporteur Says”

Contamination control is a fairly wide term used to mean “getting microbiologists out of the lab” and involving them in risk management and compliance. Our organization splits that function off from the QC Microbiology organization, but there are many models for making it work.

Risk management is a major part of the new Annex 1, and what it drives at is good risk assessments, with good risk mitigations, that involve the microbiologists.

Living risk assessments are really what is meant by a contamination control strategy: one that considers product and process knowledge and skills in pharmaceutical product manufacturing and GMP/cGMP compliance under the auspices of a Pharmaceutical Quality System (Q10), together with the initiatives of Quality by Design (Q8) and Quality Risk Management (Q9).

From this strategy comes:

  • Targeted, risk-based measures of contamination avoidance
  • Key performance indicators to assess status of contamination control
  • A defined strategy for deviation management (investigations) and CAPA
  • A risk-based environmental monitoring program

When it comes to change management, one of the easiest places to go wrong is forgetting to bring the microbiologists into changes. Based on your strategy you can determine which changes require their assessment, and include that in the tool utilized to determine SMEs, for example:

Department: Contamination Control

Required if the change meets any of the following criteria:

  • The change impacts environment integrity, conditions, or monitoring, including:
      • Changes to a controlled room or area that impact integrity
      • Changes in sampling methodology
      • Construction activities
      • Changes in personnel or material flow
  • The change will result in or modify exposure of product to the environment
  • The change can impact microbiological control within a process stream, raw material, or process equipment
  • The changes are to water systems
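
To make that concrete, here is a minimal sketch of how such a criteria screen might be encoded. The flag names and function are hypothetical illustrations, not our actual tool.

```python
# Minimal sketch of a change-control SME screen; the criterion flags are
# hypothetical, not from any particular change management system.

CONTAMINATION_CONTROL_CRITERIA = {
    "impacts_room_or_area_integrity",
    "changes_sampling_methodology",
    "construction_activities",
    "changes_personnel_or_material_flow",
    "modifies_product_exposure_to_environment",
    "impacts_microbiological_control",
    "changes_water_systems",
}

def requires_contamination_control_sme(change_flags: set[str]) -> bool:
    """True if any contamination control criterion applies, pulling the
    microbiologist in as an SME on the change."""
    return bool(CONTAMINATION_CONTROL_CRITERIA & change_flags)

# Example: a construction change in a controlled area triggers review.
print(requires_contamination_control_sme({"construction_activities"}))  # True
print(requires_contamination_control_sme({"sop_formatting_update"}))    # False
```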

Release control process SIPOC

Google does not share search terms for privacy reasons, so it is rare that a search term stands out on my blog.

So here you go.

SIPOC for disposition

This is purposefully a high-level process.

Quality Review of Records: Batch Record, Packaging Record, and the like.

Lot Assessment: Evaluation of deviations, change controls, and test results, but also of other inputs such as critical utilities and environmental monitoring review. Ideally a holistic view.

Lot Disposition: Decision that the product meets all requirements of the GMPs and the marketing authorization.

Some important regulatory requirements:

  • United States: 21 CFR 211.22(a); 211.22(d)
  • EU: 1.4(xv); 1.9(vii); EU Annex 16
  • World Health Organization: Annex 3-GMP 1.2(g); Annex 3-GMP 9.11, 9.13, 9.15


One of the drivers for digital transformation, and a concept at the root of the ICH quality guidelines, is the idea of release by exception. Our systems will be tight enough, our design space robust enough, that most products are automatically released and sent into the market.
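
As a sketch of the concept only, release by exception might look something like the following. The checks and field names are hypothetical, and in practice the final disposition still belongs to a qualified person.

```python
# Sketch of release-by-exception logic. The checks and field names are
# hypothetical; a real implementation lives in your MES/QMS.

from dataclasses import dataclass

@dataclass
class LotRecord:
    lot_id: str
    open_deviations: int = 0
    pending_change_controls: int = 0
    all_results_within_spec: bool = True
    em_review_clean: bool = True          # environmental monitoring review
    utilities_review_clean: bool = True   # critical utilities review

def disposition(lot: LotRecord) -> str:
    """Auto-release only when nothing requires human judgment; any
    exception routes the lot to a reviewer for lot assessment."""
    exceptions = []
    if lot.open_deviations:
        exceptions.append("open deviations")
    if lot.pending_change_controls:
        exceptions.append("pending change controls")
    if not lot.all_results_within_spec:
        exceptions.append("out-of-specification results")
    if not (lot.em_review_clean and lot.utilities_review_clean):
        exceptions.append("EM/utilities findings")
    if exceptions:
        return "route to reviewer: " + ", ".join(exceptions)
    return "auto-release"

print(disposition(LotRecord("A1234")))                     # auto-release
print(disposition(LotRecord("A1235", open_deviations=2)))  # route to reviewer
```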


Forms, forms, everywhere

Unless you work in the factory of the future, the chances are you have forms; if you are like me, over 1,100 of them. So what is a form, and how does it fit into our document management system?

Merriam-Webster Dictionary defines form (amongst other things) as “a printed or typed document with blank spaces for insertion of required or requested information.”

We use forms to specify what information needs to be captured, and usually to record when and by whom. Forms have the following advantages in our document management system:

  • The user has to write less
  • The user is told or reminded what information has to be supplied
  • There is uniformity
  • Information is collected in writing and so can be reexamined later. Forms almost always have a signature field to allow someone to take responsibility

It is useful to note here that electronic systems do basically the same thing.

Returning to our three major types of documents:

  • Functional Documents provide instructions so people can perform tasks and make decisions safely, effectively, compliantly, and consistently. This usually includes things like procedures, process instructions, protocols, methods, and specifications. Many of these need some sort of training decision. Functional documents should involve a process to ensure they are up to date, especially in relation to current practices and relevant standards (periodic review).
  • Records provide evidence that actions were taken and decisions were made in keeping with procedures. This includes batch manufacturing records, logbooks and laboratory data sheets and notebooks. Records are a popular target for electronic alternatives.
  • Reports provide specific information on a particular topic in a formal, standardized way. Reports may include data summaries, findings, and actions to be taken.

A form is a functional document that, once printed and with data entered onto it, becomes a record. That record then needs to be managed and carries all sorts of good documentation and data integrity concerns, including traceability and retention (archiving).

It is helpful here to also differentiate between a template and a form. A template is a form that is specifically used to build another document, such as an SOP template or a protocol template. Usually the template gives you a document that then goes through its own lifecycle.
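
If it helps to see that distinction as a data model, here is a small sketch of the form-to-record transition. The classes, fields, and retention period are hypothetical illustrations, not any particular document management system.

```python
# Illustrative model of the form -> record relationship. The classes and
# retention period are hypothetical; real systems are far richer.

from dataclasses import dataclass
from datetime import date

@dataclass
class Form:                       # a functional document
    form_id: str
    version: str
    fields: list[str]             # the blank spaces the user must fill in

@dataclass
class Record:                     # evidence that actions were taken
    source_form: Form
    entries: dict[str, str]       # the data entered onto the printed form
    signed_by: str
    signed_on: date
    retention_years: int = 10     # hypothetical retention period

def execute_form(form: Form, entries: dict[str, str], signer: str) -> Record:
    """Printing a form and entering data onto it yields a record, which
    then needs to be retained and kept traceable."""
    missing = [f for f in form.fields if f not in entries]
    if missing:
        raise ValueError(f"incomplete record, missing: {missing}")
    return Record(form, entries, signed_by=signer, signed_on=date.today())
```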

What is in a title

Recently I’ve seen a few inspection observations that call out the title of a quality record (e.g. deviation, CAPA, change control).

The title might seem the most basic part of a quality system record, a simple task, but it should receive some serious thought. It is an inspector’s first interaction with the record, and it serves as a historical flag that generations of readers will use to become familiar with it. And everyone falls prey to “judging a book by its cover”: this cognitive bias makes readers considerably susceptible to allowing the title to function as the sole factor influencing their decision of whether to read or skip a record. A bad title could shape an inspection or keep an important historical record from being evaluated in the future. We can do better.

A good quality systems record title:

  • Condenses the record’s content in a few words
  • Differentiates the record from other records of the same subject area

Some general tips:

  1. Keep it simple and brief: The primary function of a title is to provide a precise summary of the record’s content, so keep the title brief and clear. Use active verbs instead of complex noun-based phrases, and avoid unnecessary details. A good record title is typically around 10 to 12 words long; a lengthy title may seem unfocused and take the reader’s attention away from an important point.

     Avoid: Wrong label issued

     Better: Sample ABCD was issued label 1234 instead of label X4572

  2. Use appropriate descriptive words: A record title should contain the key words used in the record and should define the nature of the quality systems event. Think about the terms people would use to search for your record and include them in your title.

     Avoid: No LIMS label for batch ABDC

     Better: Batch ABDC was missing label Y457 as required by procedure LAB-123

  3. Avoid abbreviations and jargon: Well-known abbreviations can be used in the title. However, lesser-known or specialized abbreviations and jargon that would not be immediately familiar to the readers should be left out.

It sometimes surprises folks how simple things can have ripple effects. But they do, so plan accordingly and ensure your users are trained on writing a good title. Trust me; it will make things easier in the long run.

ALCOA or ALCOA+

My colleague Michelle Eldridge recently shared this video from learnaboutgmp on the differences between ALCOA and ALCOA+. It’s cute, it’s to the point, and it makes a nice primer.

As I’ve mentioned before, the MHRA in its data integrity guidance did take a dig at ALCOA+:

The guidance refers to the acronym ALCOA rather than ‘ALCOA+’. ALCOA being Attributable, Legible, Contemporaneous, Original, and Accurate and the ‘+’ referring to Complete, Consistent, Enduring, and Available. ALCOA was historically regarded as defining the attributes of data quality that are suitable for regulatory purposes. The ‘+’ has been subsequently added to emphasise the requirements. There is no difference in expectations regardless of which acronym is used since data governance measures should ensure that data is complete, consistent, enduring and available throughout the data lifecycle.

Two things should be drawn from this:

  1. Data integrity is a set of best practices that are still developing, so make sure you are pushing that development and not ignoring it. Much better to be pushing the boundaries of the “c” (as in cGMP) than to end up being surprised.
  2. I actually agree with the MHRA: complete, consistent, enduring, and available are really just subsets of the others. But, as they also say, the acronym means little; just make sure you are doing it.

Data Integrity, it’s the new quality culture.

Likelihood of occurrence in risk estimation

People use imprecise words to describe the chance of events all the time — “It’s likely to rain,” or “There’s a real possibility they’ll launch before us,” or “It’s doubtful the nurses will strike.” Not only are such probabilistic terms subjective, but they also can have widely different interpretations. One person’s “pretty likely” is another’s “far from certain.” Our research shows just how broad these gaps in understanding can be and the types of problems that can flow from these differences in interpretation.

“If You Say Something Is “Likely,” How Likely Do People Think It Is?” by Andrew Mauboussin and Michael J. Mauboussin

Risk estimation is based on two components:

  • The probability of the occurrence of harm
  • The consequences of that harm

Many tools add a third element: the detectability of the harm.
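
In tool form this is the familiar risk-priority-number arithmetic. The sketch below assumes generic FMEA-style 1-to-5 scales, which are illustrative rather than a recommendation.

```python
# Generic FMEA-style risk estimation. The 1-5 scales are illustrative;
# define your own levels and cutoffs in your risk management procedure.

def risk_score(probability: int, severity: int, detectability: int = 1) -> int:
    """Risk priority number: probability x severity, optionally weighted
    by detectability (1 = always detected ... 5 = never detected)."""
    for name, value in [("probability", probability),
                        ("severity", severity),
                        ("detectability", detectability)]:
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be on the 1-5 scale")
    return probability * severity * detectability

print(risk_score(4, 5))      # 20: probable and serious
print(risk_score(2, 3, 4))   # 24: rare, moderate harm, but hard to detect
```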

Oftentimes we simplify the probability of occurrence into likelihood. The article quoted above is a good, simple primer on why we should be careful with that. It offers three recommendations that I want to talk about. Go read the article and then come back.

I. Use probabilities instead of words to avoid misinterpretation

Avoid simplified qualitative probability levels such as “likely to happen”, “frequent”, “can happen, but not frequently”, “rare”, “remote”, and “unlikely to happen.” Instead, determine quantitative probability levels. Even if you are heavily using expert opinion to drive probabilities, give ranges of numbers such as “<10% of the time”, “20-60% of the time”, and “greater than 60% of the time.”

It helps to have several sets of scales.
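
One way to maintain several scales is as an explicit library of numeric bands that each assessment selects from. The scale names and cutoffs in this sketch are made up for illustration; calibrate your own.

```python
# A small library of quantitative probability scales. The bands are
# examples only; calibrate your own from data and expert elicitation.

PROBABILITY_SCALES = {
    "coarse": [                      # e.g. "<10% of the time", etc.
        ("low",    0.00, 0.10),
        ("medium", 0.10, 0.60),
        ("high",   0.60, 1.00),
    ],
    "fine": [
        ("remote",     0.00, 0.01),
        ("unlikely",   0.01, 0.10),
        ("occasional", 0.10, 0.40),
        ("probable",   0.40, 0.75),
        ("frequent",   0.75, 1.00),
    ],
}

def classify(p: float, scale: str = "coarse") -> str:
    """Map an estimated probability onto a named level of a chosen scale."""
    if not 0.0 <= p <= 1.0:
        raise ValueError(f"probability {p} is outside [0, 1]")
    for name, _low, high in PROBABILITY_SCALES[scale]:
        if p < high:
            return name
    return PROBABILITY_SCALES[scale][-1][0]  # p == 1.0 lands in the top band

print(classify(0.05))           # low
print(classify(0.30, "fine"))   # occasional
```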

The article has an awesome graph that really shows why we should avoid words.


II. Use structured approaches to set probabilities

Ideally, pressure-test these using a Delphi approach or something similar, like paired comparisons or absolute probability judgments. Using historical data and expert opinion, spend the time to make sure your probabilities actually capture the realities.

Be aware that when using historical data, if there is a very low frequency of occurrence historically, then any estimate of probability will be uncertain. In these cases it’s important to use predictive techniques and simulations. Monte Carlo, anyone?
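
As a flavor of what that can look like, here is a toy Monte Carlo sketch. The event model and all the rates are invented for illustration; a real study would elicit those ranges from data and experts.

```python
# Toy Monte Carlo estimate of occurrence probability when historical
# events are too rare to trust a single frequency. All rates here are
# made up for illustration.

import random

def undetected_contamination(contamination_rate: float,
                             detection_rate: float) -> bool:
    """One simulated batch: does contamination occur AND escape detection?"""
    return (random.random() < contamination_rate
            and random.random() > detection_rate)

def estimate_occurrence(n: int = 100_000) -> float:
    """Draw the uncertain inputs from ranges (expert-elicited in practice)
    and count how often the harm occurs across simulated batches."""
    events = 0
    for _ in range(n):
        contamination_rate = random.uniform(0.001, 0.010)  # uncertain input
        detection_rate = random.uniform(0.90, 0.99)        # uncertain input
        if undetected_contamination(contamination_rate, detection_rate):
            events += 1
    return events / n

print(f"Estimated probability of occurrence: {estimate_occurrence():.5f}")
```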

III. Seek feedback to improve your forecasting

Risk management is a lifecycle approach, and you need to be applying good knowledge management to that lifecycle. Have a mechanism to learn from the risk assessments you conduct, and feed that back into your scales. These scales should never be a once-and-done.

In Conclusion

Risk management is not new. It’s been around long enough that many companies have the elements in place. What we need to be doing is driving to consistency. Drive out the vague and build best practices that will give the best results. When it comes to likelihood there is a wide body of research on the subject, and we should be drawing from it as we work to improve our risk management.

Move beyond setting your scales at the beginning of a risk assessment. Scales should exist as a living library that is drawn upon for specific risk evaluations. This will help ensure that all participants in the risk assessment have a working vocabulary of the criteria, and it will keep us honest and prevent any intentional or unintentional manipulation of the criteria based on an expected outcome.
