Likelihood of occurrence in risk estimation

People use imprecise words to describe the chance of events all the time — “It’s likely to rain,” or “There’s a real possibility they’ll launch before us,” or “It’s doubtful the nurses will strike.” Not only are such probabilistic terms subjective, but they also can have widely different interpretations. One person’s “pretty likely” is another’s “far from certain.” Our research shows just how broad these gaps in understanding can be and the types of problems that can flow from these differences in interpretation.

“If You Say Something Is ‘Likely,’ How Likely Do People Think It Is?” by Andrew Mauboussin and Michael J. Mauboussin

Risk estimation is based on two components:

  • The probability of the occurrence of harm
  • The consequences of that harm

Many tools add a third element: the detectability of the harm.

Oftentimes we simplify the probability of occurrence into likelihood. The article quoted above is a good, simple primer on why we should be careful with that. It offers three recommendations that I want to talk about. Go read the article and then come back.

I. Use probabilities instead of words to avoid misinterpretation

Avoid simplified qualitative probability levels such as “likely to happen”, “frequent”, “can happen, but not frequently”, “rare”, “remote”, and “unlikely to happen.” Instead, determine numeric probability levels. Even if you are relying heavily on expert opinion to drive probabilities, give ranges of numbers such as “<10% of the time”, “20-60% of the time”, and “greater than 60% of the time.”

It helps to have several sets of scales.
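As a quick sketch of what a numeric scale can look like in practice, here is a hypothetical Python mapping. The level names and cut-offs below are illustrative only, not a standard; pick bands that fit your own risk question.

```python
# Hypothetical example: replace vague likelihood words with explicit
# numeric ranges so every assessor reads the scale the same way.
# Level names and cut-offs are illustrative, not a standard.

PROBABILITY_SCALE = {
    "remote":   (0.00, 0.10),   # "<10% of the time"
    "possible": (0.10, 0.60),   # "10-60% of the time"
    "frequent": (0.60, 1.00),   # ">60% of the time"
}

def level_for(probability: float) -> str:
    """Map a numeric probability estimate onto the agreed scale."""
    for level, (low, high) in PROBABILITY_SCALE.items():
        if low <= probability < high:
            return level
    return "frequent"  # a probability of exactly 1.0 falls in the top band

print(level_for(0.05))   # remote
print(level_for(0.35))   # possible
print(level_for(0.80))   # frequent
```

Because the bands are explicit and non-overlapping, two assessors given the same estimate always land on the same level, which is exactly what the vague words fail to guarantee.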

The article has an excellent graph that makes plain why we should avoid words.

[Figure: survey results from the Mauboussin article showing the wide range of probabilities people assign to common likelihood words]

II. Use structured approaches to set probabilities

Ideally, pressure-test these probabilities using a Delphi approach or something similar, such as paired comparisons or absolute probability judgments. Using historical data and expert opinion, spend the time to make sure your probabilities actually capture reality.

Be aware, when using historical data, that if the frequency of occurrence is historically very low, any estimate of probability will be uncertain. In these cases it's important to use predictive techniques and simulations. Monte Carlo, anyone?
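As a toy illustration of the Monte Carlo point: instead of estimating a rare harm directly from a handful of historical events, you can simulate it from assumed component rates. The rates and scenario below are made up for the example.

```python
import random

# Illustrative sketch: when historical occurrences are rare, a point
# estimate of probability is shaky. A Monte Carlo simulation lets you
# propagate assumed component rates instead of relying on a handful of
# observed events. The rates here are invented for the example.

def simulate_harm(p_trigger=0.02, p_control_fails=0.10,
                  trials=100_000, seed=42):
    """Estimate P(harm) = P(trigger occurs AND the control fails)."""
    rng = random.Random(seed)  # fixed seed for a repeatable estimate
    harms = sum(
        1
        for _ in range(trials)
        if rng.random() < p_trigger and rng.random() < p_control_fails
    )
    return harms / trials

estimate = simulate_harm()
print(f"Estimated probability of harm: {estimate:.4f}")
# close to 0.02 * 0.10 = 0.002 by construction
```

With 100,000 trials the simulated estimate sits tightly around the analytic value; the same machinery then lets you vary the assumed rates to see how sensitive the harm probability is to each input.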

III. Seek feedback to improve your forecasting

Risk management is a lifecycle approach, and you need to apply good knowledge management across that lifecycle. Have a mechanism to learn from the risk assessments you conduct, and feed that learning back into your scales. These scales should never be once-and-done.

In Conclusion

Risk management is not new. It's been around long enough that many companies have the elements in place. What we need to be doing is driving toward consistency. Drive out the vague and build best practices that will give the best results. When it comes to likelihood, there is a wide body of research on the subject, and we should be drawing from it as we work to improve our risk management.

Move beyond setting your scales at the beginning of a risk assessment. Scales should exist as a living library that is drawn upon for specific risk evaluations. This will help ensure that all participants in the risk assessment have a working vocabulary of the criteria, and it will keep us honest, preventing any intentional or unintentional manipulation of the criteria based on an expected outcome.


FDA Repays Industry by Rushing Risky Drugs to Market — ProPublica

As pharma companies underwrite three-fourths of the FDA’s budget for scientific reviews, the agency is increasingly fast-tracking expensive drugs with…
— Read on www.propublica.org/article/fda-repays-industry-by-rushing-risky-drugs-to-market

This is worth reading. I remember when I first started, it was easier to get European approvals before US ones, and I have been surprised by the switch over the last few years.

I also watch all these companies struggle with QbD and wonder if these two trends go hand in hand.

No answers from me, but I do recommend reading this article.

Risk Filtering – A popular tool that is easy to abuse

An article titled “ICE Modified Its ‘Risk Assessment’ Software So It Automatically Recommends Detention” is probably guaranteed to reach me, for a myriad of reasons.

I believe strongly in professional codes of conduct, and the need to speak out. In this case, I am thinking of two charges:

  1. Hold paramount the safety, health, and welfare of individuals, the public, and the environment.
  2. Avoid conduct that unjustly harms or threatens the reputation of the Society, its members, or the Quality profession.

Reading this article, and doing some digging, tells me that the tools of quality that I hold dear have been abused and I believe it is appropriate to call that out.

Now, a caveat: risk assessment and risk management come in a few flavors. I'll be honest that I once made the mistake of getting into a discussion with a risk management expert from a bank and realized we had very different ideas of risk management. But supposedly we're all aligned (sort of) to ISO Guide 73:2009, “Risk management. Vocabulary,” and as such I'll try to stick pretty close to those shared commonalities. I also assume that ISO Guide 73:2009 is a shared point between me and whoever designed the ICE risk assessment software.

Risk assessment is one phase in risk management, and I’ll focus on that here. Risk assessment is about identifying risk scenarios. What we do is:

  1. Establish the context and environment that could present a risk
  2. Identify the hazards and consider the harms those hazards could present
  3. Analyze the risks, including an assessment of the various contributing factors
  4. Evaluate and prioritize the risks in terms of further action required
  5. Identify the range of options available to tackle the risks and decide how to implement risk management strategies.

A look at the decision-making described in the Reuters article leads me to believe that what ICE is using meets these criteria, and we can call it a risk assessment (why it is in quotes in the Motherboard article mystifies me).

There are a lot of risk assessment tools out there. It is important to know that risk assessment is not perfect, and as a result we are constantly developing better tools and refining the ones we have.

My guess is we are seeing a computerized use of the risk ranking and filtering tool here. Very popular, and something I’ve spent a great deal of time developing. This tool involves breaking a basic risk question down into as many components as needed to capture factors involved in the risk. These factors are then combined into a relative risk score for ranking. Filters are weighting factors used to scale the risks to objectives.

And that is where this tool can often go wrong. It appears ICE under the Trump administration has determined its objective is to jail everyone. By adjusting the filters, the tool easily drives to that conclusion. And this is a problem. Here we see a quality tool being used to excuse inhumane policy choices. It is not the ICE agents separating families and jailing people over a misdemeanor; it is the tool. And if that doesn't strike at the heart of the banality-of-evil concept, I'm not sure what does.
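To make the filter problem concrete, here is a minimal, hypothetical sketch of a ranking-and-filtering score. The factor names, scores, and weights are mine for illustration, not ICE's. Notice that the facts of the case never change; only the weights do, yet the recommendation flips.

```python
# Minimal sketch of risk ranking and filtering (my reading of the tool;
# all factor names and weights below are hypothetical). Each risk
# question is broken into scored factors, combined into a relative
# score, and "filters" are the weights applied to each factor.

def risk_score(factors: dict, weights: dict) -> float:
    """Weighted sum of factor scores (1 = low, 5 = high)."""
    return sum(factors[name] * weights.get(name, 1.0) for name in factors)

# One unchanging set of facts about a case
case = {"flight_risk": 1, "criminal_history": 1, "community_ties": 5}

# Two different filters over the same facts
balanced = {"flight_risk": 1.0, "criminal_history": 1.0, "community_ties": -1.0}
skewed   = {"flight_risk": 5.0, "criminal_history": 5.0, "community_ties": 0.0}

print(risk_score(case, balanced))  # -3.0: scores low, recommend release
print(risk_score(case, skewed))    # 10.0: scores "high", recommend detention
```

This is why validating the filters, not just the factors, matters: the weights encode the objective, and an objective of "detain everyone" can be laundered through an otherwise ordinary-looking scoring model.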

I could go deeper into the tool, how I would have built it, and the ways you validate its effectiveness. That would all probably make an excellent follow-up someday. But the reason I'm writing this post is primarily that I read this article and it dawned on me that someone very similar to me in skill set probably created this tool. Someone I may have sat across the table from at a professional conference, who has read the same articles, and who probably has the same qualitative-vs.-quantitative debates. This is a great example of when it's necessary to speak up and criticize a tool of my profession being used for evil. I will probably never talk to the team who developed this tool, but we all see instances of companies around us being asked to build similar applications, using the tools of our profession, that will be used for the wrong results. And we owe it to our code of ethics to refuse.


Questions to ask when contemplating data integrity

Here are a set of questions that should be evaluated in any data integrity risk assessment/evaluation.

  1. Do you have a list of all GxP activities performed in your organization?
  2. Do you know which GxP activities involve intensive data handling tasks?
  3. Do you know the automation status of each GxP activity?
  4. Have you identified a list of GxP records that will be created by each GxP activity?
  5. Have you determined the format in which the official GxP records will be maintained?
  6. Have you determined if a signature is required for each GxP record?
  7. Do you have controls to ensure that observed, measured or processed GxP data is accurate?
  8. Do you have controls to ensure that GxP data is maintained in full without being omitted, discarded or deleted?
  9. Do you have controls to ensure that naming, measurement units, and value limits are defined and applied consistently during GxP data handling?
  10. Do you have controls to ensure that GxP data is recorded at the same time as the observation/measurement is made or shortly thereafter?
  11. Do you have controls to ensure that GxP data is recorded in a clear and human readable form?
  12. Do you have controls to ensure that data values represent the first recording of the GxP data or an exact copy of the original data?
  13. Do you have SOP(s) addressing management of GxP documents and records and good documentation practices?
  14. Do you have SOP(s) addressing the escalation of quality events that also cover data integrity breaches?
  15. Do you have SOP(s) addressing self-inspections/audits with provisions for data integrity?
  16. Do you have SOP(s) addressing management of third parties with provisions for the protection of data integrity?
  17. Do you have SOP(s) for Computerized Systems Compliance?
  18. Do you have SOP(s) for training and does it include training on data integrity for employees handling GxP data?
  19. For GxP activities that generate data essential for product quality, product supply or patient safety, do you have controls to prevent or minimize:
    • Process execution errors due to human inability, negligence or inadequate procedures?
    • Non-compliance due to unethical practices such as falsification?
  20. Do you have controls to ensure that only authorized employees are granted access to GxP data based on the requirements of their job role?
  21. Do you have controls to ensure that only the GxP activity owner or delegate can grant access to the GxP data?
  22. Do you have controls to eliminate or reduce audiovisual distractions for GxP activities with intensive data handling tasks?
  23. Do you assess the design and configuration of your computerized GxP activity to minimize manual interventions where possible?
  24. Do you have controls for review of audit trail data at relevant points in the process to support important GxP actions or decisions?
  25. Do you have controls, supervision or decision support aids to help employees who perform error-prone data handling activities?
  26. Do you have controls to ensure business continuity if a GxP record essential for product quality, product supply, or patient safety is not available? Both for when there is a temporary interruption to GxP activity or during a disaster scenario?
  27. Do you have a process for ensuring that data integrity requirements are included in the design and configuration of GxP facilities where data handling activities take place?
  28. Have you assessed the compliance status of computerized systems used to automate GxP activities?
  29. Do you have controls to prevent data capture and data handling errors during GxP data creation?
  30. Do you have controls to ensure the accuracy of date and time applied to GxP data, records and documents?
  31. Do you have controls to ensure that changes to GxP data are traceable to who did what, when and if relevant why during the lifecycle of the GxP data?
  32. Do you have controls to ensure that – when required – legally binding signatures can be applied to GxP records and their integrity is ensured during the retention period of the GxP record?
  33. Do you have controls to ensure that GxP computerized systems managing GxP data can:
    • Allow access only to employees with proper authorization?
    • Identify each authorized employee uniquely?
  34. Do you have controls to ensure that GxP data can be protected against accidental or willful harm?
  35. Do you have controls to keep GxP data in a human readable form for the duration of the retention period?
  36. Do you have controls to ensure that the process for offline retention and retrievals is fit for its intended purpose?
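One way to make a questionnaire like this actionable is to hold it as data rather than a static document, so gaps can be listed and trended across assessments. A minimal, hypothetical sketch (question texts abbreviated; the numbers match the list above):

```python
# Hypothetical sketch: keeping the questionnaire as data makes gaps
# explicit and trendable. Question texts are abbreviated; numbering
# matches the data integrity list above.

CHECKLIST = {
    1:  "Inventory of all GxP activities",
    7:  "Controls ensuring accuracy of observed/measured GxP data",
    24: "Audit trail review at relevant points in the process",
}

# Answers from one assessment (True = control in place)
answers = {1: True, 7: True, 24: False}

gaps = sorted(q for q, in_place in answers.items() if not in_place)
for q in gaps:
    print(f"Gap on question {q}: {CHECKLIST[q]}")
```

The same structure extends naturally to scoring gaps by risk or comparing answers across sites and assessment cycles.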

Changes become effective

Change effective, implementation, routine use… these are all terms that swirl around change control, and they can mean several different things depending on your organization. So what is truly important to track?

[Process map: regulatory and change]

Taking a look at the above process map, I want to focus on three major points, what I like to call the three implementations:

  1. When the change is in use
  2. When the change is regulatory approved
  3. When product is sent to a market

The sequence of these dates will depend on the regulatory impact.

| | Tell and Do | Do and Tell | Do and Report |
|---|---|---|---|
| Change in use | After regulatory approval, when the change is introduced to the ‘floor’ | When the change is introduced to the ‘floor’ | When the change is introduced to the ‘floor’ |
| Regulatory approval | Upon approval | After use, before sending to market | Per reporting frequency (annual, within 6 months, within 1 year) |
| Sent to market | After regulatory approval and change in use | After regulatory approval and change in use | After change in use |

I’m using ‘floor’ very loosely here. “Change in use” is that point where everything you do is made, tested and/or released under the change. Perhaps it’s a batch record change. Everything that came before is clearly not under the change. Everything that came after clearly is.

You can have the same change fit into all three areas, and your change control system needs to be robust enough to manage this. This is where tracking regulatory approval per country/market is critical, and tracking when the product was first sent.
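Here is a sketch of the tracking this implies. The class names and strategy labels are hypothetical, but the logic mirrors the three paths above: a "do and report" market can receive product once the change is in use, while the other two paths also require that market's regulatory approval.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

# Hypothetical sketch: one change can follow a different regulatory
# strategy in each market, so approval and ship status are tracked
# per market, alongside the single "change in use" date.

@dataclass
class MarketStatus:
    strategy: str                        # "tell-and-do", "do-and-tell", or "do-and-report"
    approved: Optional[str] = None       # regulatory approval date, if any
    first_shipped: Optional[str] = None  # first ship date to this market, if any

@dataclass
class Change:
    change_id: str
    in_use: Optional[str] = None         # date the change hit the 'floor'
    markets: Dict[str, MarketStatus] = field(default_factory=dict)

    def can_ship(self, market: str) -> bool:
        """'Do and report' needs only the change in use; the other two
        strategies also need regulatory approval for that market."""
        status = self.markets[market]
        if status.strategy == "do-and-report":
            return self.in_use is not None
        return self.in_use is not None and status.approved is not None

chg = Change("CC-1024", in_use="2019-03-01", markets={
    "US": MarketStatus("tell-and-do", approved="2019-02-15"),
    "JP": MarketStatus("do-and-report"),
})
print(chg.can_ship("US"))  # True: approved and in use
print(chg.can_ship("JP"))  # True: report per the filing frequency
```

A real change control system would layer dates, evidence, and workflow on top of this, but even this skeleton shows why per-market approval tracking and first-ship dates are the critical data points.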

A complicated change can easily look like this (oversimplification).

[Diagram: building actions]

Is this 1, 2, or 3 processes? More? It depends on many factors; the critical part is building the connections and making sure your change control system both receives inputs and provides outputs. Depending on your company, the data map can get rather complicated.