Quality, the Pharmaceutical Industry, and Public Trust

By and large, the last few decades have been solid ones for the pharmaceutical industry. The public has largely trusted that what we make is safe, efficacious, and, in general, a quality product. That is, after all, what the regulations are designed to ensure. And we, as quality professionals, together with our peers, have demonstrated a real passion for the patient.

But as the recent St. Louis verdict against Johnson & Johnson shows, the industry has a growing problem: the public increasingly perceives pharmaceutical corporations as placing profits over safety.

As the New York Times says:

Johnson & Johnson and its peers were once lauded as a collective of hero-innovators and credited with bringing an avalanche of lifesaving, world-changing technology from lab bench to patient bedside. Today they are more readily associated with rampant price gouging, the worst drug overdose epidemic in modern history and a steady beat of cases similar to the talc-cancer one, in which profitable products caused real harm.

We also have an FDA that caters to industry, a revolving door between the FDA and industry, and a growing tide of pseudo-science that continues to erode trust (vaccine deniers hold fairly high positions in the current government).

As quality professionals, it is our responsibility to “Hold paramount the safety, health, and welfare of individuals, the public, and the environment.” Whether it’s in our companies, our professional associations, or our connections with the community, we need to strive to be the voice of science, of quality, and of integrity.

Risk Filtering – A popular tool that is easy to abuse

An article titled “ICE Modified Its ‘Risk Assessment’ Software So It Automatically Recommends Detention” is practically guaranteed to reach me, for any number of reasons.

I believe strongly in professional codes of conduct, and the need to speak out. In this case, I am thinking of two charges:

  1. Hold paramount the safety, health, and welfare of individuals, the public, and the environment.
  2. Avoid conduct that unjustly harms or threatens the reputation of the Society, its members, or the Quality profession.

Reading this article, and doing some digging, tells me that the tools of quality that I hold dear have been abused and I believe it is appropriate to call that out.

Now, a caveat: risk assessment and risk management come in many flavors. I’ll be honest that I once made the mistake of getting into a discussion with a risk management expert from a bank, only to realize we had very different ideas of risk management. But supposedly we’re all aligned (sort of) with ISO Guide 73:2009, “Risk management — Vocabulary,” and so I’ll try to stick pretty close to those shared commonalities. I also assume that ISO Guide 73:2009 is a shared point of reference between me and whoever designed the ICE risk assessment software.

Risk assessment is one phase of risk management, and it is the phase I’ll focus on here. Risk assessment is about identifying risk scenarios. What we do is:

  1. Establish the context and environment that could present a risk
  2. Identify the hazards and consider the harms those hazards could present
  3. Analyze the risks, including an assessment of the various contributing factors
  4. Evaluate and prioritize the risks in terms of further action required
  5. Identify the range of options available to tackle the risks and decide how to implement risk management strategies.

A look at the decision-making described in the Reuters article leads me to believe that what ICE is using meets these criteria, and we can call it a risk assessment (why the term is in quotes in the Motherboard article mystifies me).

There are a lot of risk assessment tools out there. It is important to know that risk assessment is not perfect, and as a result, we are constantly developing better tools and refining the ones we have.

My guess is that we are seeing a computerized application of the risk ranking and filtering tool here. It is very popular, and something I’ve spent a great deal of time developing. The tool breaks a basic risk question down into as many components as needed to capture the factors involved in the risk. Those factors are then combined into a relative risk score used for ranking. Filters are weighting factors used to scale the risks to objectives.
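To make the mechanics concrete, here is a minimal sketch of risk ranking and filtering. The scenario names, factor names, scales, and weights are all hypothetical illustrations, not any real model: each scenario is scored on a few factors, a weighted sum produces a relative risk score, and the scores are used to rank the scenarios.

```python
# Minimal risk ranking and filtering sketch. All names, scales, and
# weights here are hypothetical examples, not a real assessment model.

def risk_score(factors, weights):
    """Combine per-factor scores (e.g., 1-5 scales) into a single
    relative risk score using filter weights."""
    return sum(weights[name] * score for name, score in factors.items())

# Hypothetical risk scenarios, each scored 1-5 on three factors.
scenarios = {
    "supplier change":   {"severity": 2, "occurrence": 4, "detectability": 3},
    "sterility failure": {"severity": 5, "occurrence": 1, "detectability": 2},
    "label mix-up":      {"severity": 3, "occurrence": 2, "detectability": 4},
}

# Filters: weighting factors that scale each component to the objective.
# Here severity is weighted most heavily.
weights = {"severity": 3.0, "occurrence": 1.0, "detectability": 1.0}

# Rank scenarios from highest to lowest relative risk score.
ranked = sorted(scenarios.items(),
                key=lambda kv: risk_score(kv[1], weights),
                reverse=True)
for name, factors in ranked:
    print(f"{name}: {risk_score(factors, weights):.1f}")
```

With these illustrative weights, the sterility failure ranks highest (score 18) despite being the least likely, because severity dominates the filter. The key design point is that the ranking is only as defensible as the weights chosen.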

And that is where this tool can often go wrong. It appears ICE under the Trump administration has determined that its objective is to jail everyone. By adjusting the filters, the tool easily drives to that conclusion. And this is a problem. Here we see a quality tool being used to excuse inhumane policy choices. It is not the ICE agents separating families and jailing people over a misdemeanor, it is the tool. And if that doesn’t strike at the heart of the banality of evil concept, I’m not sure what does.
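A toy illustration of the point, with entirely made-up factor names, weights, and threshold (this is not ICE’s actual model): the same case data yields opposite recommendations depending purely on the filter weights chosen. Zero out the mitigating factor and inflate the rest, and every case crosses the threshold.

```python
# Toy illustration of filter-weight abuse. Factors, weights, and the
# threshold are hypothetical; this is not any real detention model.

def recommend(case, weights, threshold=10.0):
    """Weighted-sum score against a fixed decision threshold."""
    score = sum(weights[k] * v for k, v in case.items())
    return "detain" if score >= threshold else "release"

# One hypothetical case, scored on three factors.
case = {"criminal_history": 1, "flight_risk": 2, "community_ties": 1}

# A balanced filter treats community ties as a mitigating (negative) factor.
balanced = {"criminal_history": 2.0, "flight_risk": 2.0, "community_ties": -1.0}
# A skewed filter ignores mitigation and inflates the other weights.
skewed = {"criminal_history": 10.0, "flight_risk": 10.0, "community_ties": 0.0}

print(recommend(case, balanced))  # score 5.0  -> "release"
print(recommend(case, skewed))    # score 30.0 -> "detain"
```

Nothing about the case changed; only the filters did. That is why validating the weights against the stated objective, not just the arithmetic, is part of validating the tool.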

I could go deeper into the tool, how I would have built it, and the ways you validate its effectiveness. That would all probably make an excellent follow-up someday. But the reason I’m writing this post is that, reading this article, it dawned on me that someone very similar to me in skill set probably created this tool. Someone I may have sat across the table from at a professional conference, who has read the same articles and probably has the same qualitative-vs.-quantitative debates. This is a great example of when it’s necessary to speak up and criticize a tool of my profession being used for evil. I will probably never talk to the team who developed this tool, but we all see companies around us being asked to build similar applications, using the tools of our profession, for the wrong ends. And we owe it to our code of ethics to refuse.