Release control process SIPOC

Google does not share search terms for privacy reasons, so it is rare that a search term stands out on my blog.

So here you go.

SIPOC for disposition

This is purposefully a high-level process.

Quality Review of Records: Batch Record, Packaging Record, and the like

Lot Assessment: Evaluation of deviations, change controls, and test results, but also of other inputs such as Critical Utilities and Environmental Monitoring Review. Ideally this is a holistic view.

Lot Disposition: Decision that the product meets all requirements of the GMPs and the marketing authorization.

Some important regulatory requirements:

  • United States: 21 CFR 211.22(a); 211.22(d)
  • EU: 1.4(xv); 1.9(vii); EU Annex 16
  • World Health Organization: Annex 3-GMP 1.2(g); Annex 3-GMP 9.11, 9.13, 9.15


One of the drivers for digital transformation, and a concept at the root of the ICH quality guidelines, is the idea of release by exception. Our systems will be tight enough, and our design space robust enough, that most products are automatically released and sent to market.
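
To make release by exception concrete, here is a minimal sketch of the decision logic, assuming a simplified lot object; the field names and rules are my own illustration, not any particular system's model.

```python
from dataclasses import dataclass

@dataclass
class Lot:
    """Hypothetical snapshot of the disposition inputs for one lot."""
    lot_id: str
    records_reviewed: bool = False   # batch/packaging record review complete
    open_deviations: int = 0         # unresolved deviation records
    open_change_controls: int = 0    # change controls not yet closed
    oos_results: int = 0             # out-of-specification test results
    em_excursions: int = 0           # environmental monitoring excursions

def disposition(lot: Lot) -> str:
    """Release by exception: auto-release only when every input is clean;
    anything else is routed to the quality unit for manual disposition."""
    exceptions = []
    if not lot.records_reviewed:
        exceptions.append("record review incomplete")
    if lot.open_deviations:
        exceptions.append(f"{lot.open_deviations} open deviation(s)")
    if lot.open_change_controls:
        exceptions.append(f"{lot.open_change_controls} open change control(s)")
    if lot.oos_results:
        exceptions.append(f"{lot.oos_results} OOS result(s)")
    if lot.em_excursions:
        exceptions.append(f"{lot.em_excursions} EM excursion(s)")

    if not exceptions:
        return f"{lot.lot_id}: auto-release"
    return f"{lot.lot_id}: hold for manual disposition ({'; '.join(exceptions)})"

print(disposition(Lot("LOT-001", records_reviewed=True)))
print(disposition(Lot("LOT-002", records_reviewed=True, open_deviations=1)))
```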


Forms, forms, everywhere

Unless you work in the factory of the future, the chances are you have forms — if you are like me, over 1,100 of them. So what is a form, and how does it fit into our document management system?

Merriam-Webster Dictionary defines form (amongst other things) as “a printed or typed document with blank spaces for insertion of required or requested information.”

We use forms to specify what information needs to be captured, and usually to record when and by whom it was captured. Forms have the following advantages in our document management system:

  • The user has to write less
  • The user is told or reminded what information has to be supplied
  • There is uniformity
  • Information is collected in writing and so can be reexamined later. Forms almost always have a signature field to allow someone to take responsibility

It is useful to note here that electronic systems do basically the same thing.

Returning to our three major types of documents:

  • Functional Documents provide instructions so people can perform tasks and make decisions safely, effectively, compliantly, and consistently. This usually includes things like procedures, process instructions, protocols, methods, and specifications. Many of these need some sort of training decision. Functional documents should go through a process to ensure they are kept up to date, especially in relation to current practices and relevant standards (periodic review).
  • Records provide evidence that actions were taken and decisions were made in keeping with procedures. This includes batch manufacturing records, logbooks and laboratory data sheets and notebooks. Records are a popular target for electronic alternatives.
  • Reports provide specific information on a particular topic in a formal, standardized way. Reports may include data summaries, findings, and actions to be taken.

A form is a functional document that, once printed and filled in with data, becomes a record. That record then needs to be managed and carries all sorts of good documentation and data integrity concerns, including traceability and retention (archiving).

It is helpful here to also differentiate between a template and a form. A template is a form that is specifically used to build another document — an SOP template or a protocol template for example. Usually the template gives you a document that then goes through its own lifecycle.
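
If it helps to see the distinction in data terms, here is a minimal sketch of how a document system might model it; the class names, fields, and the execute() helper are hypothetical, not any particular system's design.

```python
from dataclasses import dataclass
from enum import Enum, auto

class DocType(Enum):
    FUNCTIONAL = auto()   # procedures, methods, specifications, forms, templates
    RECORD = auto()       # evidence that actions were taken and decisions made
    REPORT = auto()       # formal, standardized summaries of data and findings

@dataclass
class Form:
    """A form is a functional document under document control."""
    form_id: str
    version: str
    doc_type: DocType = DocType.FUNCTIONAL

@dataclass
class Record:
    """An executed form becomes a record, with its own traceability
    and retention requirements."""
    source_form_id: str
    source_form_version: str
    entered_by: str
    retention_years: int   # assumption: retention is tracked per record

def execute(form: Form, entered_by: str, retention_years: int = 10) -> Record:
    """Hypothetical helper: turn an issued form into a controlled record."""
    return Record(form.form_id, form.version, entered_by, retention_years)

log_sheet = Form("FRM-0042", "3.0")
entry = execute(log_sheet, entered_by="J. Smith")
print(entry)
```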

What is in a title

Recently I've seen a few inspection observations that call out the title of a quality record (e.g., deviation, CAPA, change control).

The title might seem the most basic part of a quality system record – a simple task – but it deserves some serious thought. The title is often an inspector's first interaction with the record, and it serves as the historical flag that generations of readers will use to decide whether the record is relevant to them. Everyone falls prey to "judging a book by its cover," and this cognitive bias makes readers susceptible to letting the title function as the sole factor in their decision to read or skip a record. A bad title could shape an inspection or keep an important historical record from ever being evaluated in the future. We can do better.

A good quality systems record title:

  • Condenses the record's content into a few words
  • Differentiates the record from other records in the same subject area

Some general tips:

  1. Keep it simple and brief: The primary function of a title is to provide a precise summary of the record's content, so keep the title brief and clear. Use active verbs instead of complex noun-based phrases, and avoid unnecessary details. A good title for a record is typically around 10 to 12 words long; a lengthy title may seem unfocused and take the reader's attention away from the important point.

    Avoid: Wrong label issued

    Better: Sample ABCD was issued label 1234 instead of label X4572

  2. Use appropriate descriptive words: A record title should contain key words used in the record and should define the nature of the quality systems event. Think about the terms people would use to search for your record and include them in your title.

    Avoid: No LIMS label for batch ABDC

    Better: Batch ABDC was missing label Y457 as required by procedure LAB-123

  3. Avoid abbreviations and jargon: Well-known abbreviations can be used in the title, but lesser-known or site-specific abbreviations and jargon that would not be immediately familiar to readers should be left out.

It sometimes surprises folks how simple things can have ripple effects. But they do, so plan accordingly and ensure your users are trained on writing a good title. Trust me; it will make things easier in the long run.

ALCOA or ALCOA+

My colleague Michelle Eldridge recently shared this video from learnaboutgmp on the differences between ALCOA and ALCOA+. It's cute, it's to the point, and it makes a nice primer.

As I've mentioned before, the MHRA in its data integrity guidance did take a dig at ALCOA+:

The guidance refers to the acronym ALCOA rather than ‘ALCOA +’. ALCOA being Attributable, Legible, Contemporaneous, Original, and Accurate and the ‘+’ referring to Complete, Consistent, Enduring, and Available. ALCOA was historically regarded as defining the attributes of data quality that are suitable for regulatory purposes. The ‘+’ has been subsequently added to emphasise the requirements. There is no difference in expectations regardless of which acronym is used since data governance measures should ensure that data is complete, consistent, enduring and available throughout the data lifecycle.

Two things should be drawn from this:

  1. Data Integrity is a set of best practices that are still developing, so make sure you are pushing that development and not ignoring it. Much better to be pushing the boundaries of the "c" in cGMP than to end up being surprised.
  2. I actually agree with the MHRA. Complete, consistent, enduring, and available are really just subsets of the others. But, as they also say, the acronym means little; just make sure you are doing it.

Data Integrity, it’s the new quality culture.

Likelihood of occurrence in risk estimation

People use imprecise words to describe the chance of events all the time — “It’s likely to rain,” or “There’s a real possibility they’ll launch before us,” or “It’s doubtful the nurses will strike.” Not only are such probabilistic terms subjective, but they also can have widely different interpretations. One person’s “pretty likely” is another’s “far from certain.” Our research shows just how broad these gaps in understanding can be and the types of problems that can flow from these differences in interpretation.

“If You Say Something Is ‘Likely,’ How Likely Do People Think It Is?” by Andrew Mauboussin and Michael J. Mauboussin

Risk estimation is based on two components:

  • The probability of the occurrence of harm
  • The consequences of that harm

A third element, detectability of the harm, is used in many tools.
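
As a worked illustration (my own, not from any particular standard), here is how a simple FMEA-style tool might combine these components into a single score; the 1-to-5 scales are an assumption for the sketch.

```python
# Hypothetical FMEA-style risk estimation: probability of occurrence,
# consequence (severity), and optional detectability, each scored 1-5.
def risk_priority_number(occurrence: int, severity: int, detectability: int = 1) -> int:
    """RPN = occurrence x severity x detectability (higher = worse risk).
    With detectability left at 1, this reduces to the two-component estimate."""
    for score in (occurrence, severity, detectability):
        if not 1 <= score <= 5:
            raise ValueError("scores are assumed to be on a 1-5 scale")
    return occurrence * severity * detectability

# Two-component estimate (probability x consequence)
print(risk_priority_number(occurrence=4, severity=3))                    # 12
# Three-component estimate, adding detectability of the harm
print(risk_priority_number(occurrence=4, severity=3, detectability=2))   # 24
```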

Oftentimes we simplify the probability of occurrence into "likelihood." The article quoted above is a good, simple primer on why we should be careful with that. It offers three recommendations that I want to talk about. Go read the article and then come back.

I. Use probabilities instead of words to avoid misinterpretation

Avoid the simplified qualitative probability levels, such as "likely to happen", "frequent", "can happen, but not frequently", "rare", "remote", and "unlikely to happen." Instead, determine quantitative probability levels. Even if you are relying heavily on expert opinion to drive probabilities, give ranges of numbers such as "<10% of the time", "20-60% of the time", and "greater than 60% of the time."

It helps to have several sets of scales.
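
As a sketch of what a small library of scales could look like, see below; the coarse ranges are the ones quoted above, while the finer five-level scale and its level names are illustrative assumptions, not a standard.

```python
# Each scale maps a level name to a probability range (lower, upper) as fractions.
# "coarse" uses the ranges quoted above; "fine" is a hypothetical alternative
# for processes with richer data.
PROBABILITY_SCALES = {
    "coarse": {
        "low":    (0.00, 0.10),   # "<10% of the time"
        "medium": (0.20, 0.60),   # "20-60% of the time"
        "high":   (0.60, 1.00),   # "greater than 60% of the time"
    },
    "fine": {
        "remote":     (0.00, 0.01),
        "unlikely":   (0.01, 0.10),
        "occasional": (0.10, 0.30),
        "probable":   (0.30, 0.60),
        "frequent":   (0.60, 1.00),
    },
}

def classify(p: float, scale: str = "coarse") -> str:
    """Return the level whose range contains probability p (0-1)."""
    for level, (low, high) in PROBABILITY_SCALES[scale].items():
        if low <= p <= high:
            return level
    return "undefined"  # e.g. 0.15 falls in the gap of the coarse scale

print(classify(0.05))            # low
print(classify(0.45, "fine"))    # probable
```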

The article has an awesome graph that really shows why we should avoid relying on words alone.


II. Use structured approaches to set probabilities

Ideally, pressure-test these using a Delphi approach or something similar, such as paired comparisons or absolute probability judgments. Using historical data and expert opinion, spend the time to make sure your probabilities actually capture reality.
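
Here is a minimal sketch of pressure-testing estimates across Delphi-style rounds; the expert values and the median/interquartile-range summary are illustrative assumptions, not a prescribed method.

```python
from statistics import median, quantiles

# Hypothetical expert probability estimates for one failure mode, collected
# over two Delphi rounds (experts revise after seeing the group's view).
round_1 = [0.05, 0.30, 0.10, 0.02, 0.20]
round_2 = [0.08, 0.15, 0.10, 0.05, 0.12]   # estimates converge after feedback

def summarize(estimates):
    """Group estimate = median; interquartile range = a simple convergence signal."""
    q1, _, q3 = quantiles(estimates, n=4)
    return {"median": median(estimates), "iqr": round(q3 - q1, 3)}

print("Round 1:", summarize(round_1))   # wider spread
print("Round 2:", summarize(round_2))   # tighter spread after discussion
```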

Be aware when using historical data that if the frequency of occurrence has been very low, any estimate of probability will be highly uncertain. In these cases it's important to use predictive techniques and simulations. Monte Carlo, anyone?
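
For example, here is a minimal Monte Carlo sketch (my illustration, with an assumed history of 2 occurrences in 50 lots and a simple Beta posterior) showing how wide the uncertainty really is when the historical count is low.

```python
import random

# Hypothetical history: 2 occurrences observed in 50 lots.
occurrences, lots = 2, 50

# Monte Carlo draw from the Beta(occurrences + 1, lots - occurrences + 1)
# posterior (uniform prior) to see how uncertain the "true" probability is.
random.seed(1)
samples = [random.betavariate(occurrences + 1, lots - occurrences + 1)
           for _ in range(100_000)]
samples.sort()

point_estimate = occurrences / lots
lower = samples[int(0.025 * len(samples))]
upper = samples[int(0.975 * len(samples))]

print(f"Naive point estimate: {point_estimate:.3f}")
print(f"95% credible interval: {lower:.3f} to {upper:.3f}")
# With so few occurrences, the credible interval is far wider than
# the single 4% point estimate alone suggests.
```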

III. Seek feedback to improve your forecasting

Risk management is a lifecycle approach, and you need to apply good knowledge management to that lifecycle. Have a mechanism to learn from the risk assessments you conduct, and feed that back into your scales. These scales should never be a once-and-done exercise.

In Conclusion

Risk Management is not new. It has been around long enough that many companies have the elements in place. What we need to be doing is driving toward consistency: drive out the vague and build best practices that will give the best results. When it comes to likelihood, there is a wide body of research on the subject, and we should be drawing from it as we work to improve our risk management.

Move beyond setting your scales at the beginning of each risk assessment. Scales should exist as a living library that is drawn upon for specific risk evaluations. This will help ensure that all participants in the risk assessment have a working vocabulary of the criteria, and it will keep us honest by preventing any intentional or unintentional manipulation of the criteria based on an expected outcome.
