FDA Repays Industry by Rushing Risky Drugs to Market — ProPublica

As pharma companies underwrite three-fourths of the FDA’s budget for scientific reviews, the agency is increasingly fast-tracking expensive drugs with…
— Read on www.propublica.org/article/fda-repays-industry-by-rushing-risky-drugs-to-market

This is worth reading. I remember when I first started, it was easier to get European approval before US approval, and I have been surprised by the switch over the last few years.

I also watch all these companies struggle with QbD and wonder if these two trends go hand in hand.

No answers from me, but I do recommend reading this article.

Risk Filtering – A popular tool that is easy to abuse

An article titled “ICE Modified Its ‘Risk Assessment’ Software So It Automatically Recommends Detention” is practically guaranteed to reach me, for a myriad of reasons.

I believe strongly in professional codes of conduct, and the need to speak out. In this case, I am thinking of two charges:

  1. Hold paramount the safety, health, and welfare of individuals, the public, and the environment.
  2. Avoid conduct that unjustly harms or threatens the reputation of the Society, its members, or the Quality profession.

Reading this article, and doing some digging, tells me that the tools of quality that I hold dear have been abused and I believe it is appropriate to call that out.

Now, a caveat: risk assessment and risk management come in several flavors, and I’ll be honest that I once made the mistake of getting into a discussion with a risk management expert from a bank and realizing we had very different ideas of risk management. But supposedly we’re all aligned (sort of) to ISO Guide 73:2009, “Risk management – Vocabulary,” and as such I’ll try to stick close to those shared commonalities. I also assume that ISO Guide 73:2009 is a shared point between me and whoever designed the ICE risk assessment software.

Risk assessment is one phase in risk management, and I’ll focus on that here. Risk assessment is about identifying risk scenarios. What we do is:

  1. Establish the context and environment that could present a risk
  2. Identify the hazards and consider the harms these hazards could present
  3. Analyze the risks, including an assessment of the various contributing factors
  4. Evaluate and prioritize the risks in terms of further action required
  5. Identify the range of options available to tackle the risks and decide how to implement risk management strategies.

A look at the decision making described in the Reuters article leads me to believe that what ICE is using meets these criteria, and we can call it a risk assessment (why the Motherboard article puts the term in quotes mystifies me).

There are a lot of risk assessment tools out there. It is important to know that risk assessment is not perfect, and as a result we are constantly developing better tools and refining the ones we have.

My guess is we are seeing a computerized use of the risk ranking and filtering tool here. It is very popular, and something I’ve spent a great deal of time developing. This tool involves breaking a basic risk question down into as many components as needed to capture the factors involved in the risk. These factors are then combined into a relative risk score for ranking. Filters are weighting factors used to scale the risks to objectives.
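As a toy illustration of how the ranking and filtering math works, here is a minimal sketch in Python. All factor names, scales, and weights are hypothetical assumptions for illustration; this is not the ICE implementation, just the general shape of the tool.

```python
# A minimal, hypothetical sketch of risk ranking and filtering.
# Factor names, scales, and weights are illustrative assumptions,
# not any real system's design.

def risk_score(factors, filters):
    """Combine component factor scores into one relative risk score.

    factors: dict mapping factor name -> score (e.g. on a 1-5 scale)
    filters: dict mapping factor name -> weight that scales the factor
             toward the assessment's objectives
    """
    return sum(score * filters.get(name, 1.0) for name, score in factors.items())

def rank(scenarios, filters):
    """Order risk scenarios from highest to lowest relative score."""
    return sorted(scenarios, key=lambda s: risk_score(s["factors"], filters),
                  reverse=True)

scenarios = [
    {"name": "Scenario A", "factors": {"severity": 4, "likelihood": 2}},
    {"name": "Scenario B", "factors": {"severity": 2, "likelihood": 5}},
]

# With neutral filters, B (score 7) outranks A (score 6)...
neutral = {"severity": 1.0, "likelihood": 1.0}
# ...but over-weighting severity flips the ranking to A (14) over B (11).
# Tuning the filters is exactly how the tool can be driven toward a
# predetermined conclusion.
skewed = {"severity": 3.0, "likelihood": 1.0}
```

The point of the sketch: the same scenarios with the same factor scores produce opposite rankings depending only on the filters, which is why the filters deserve at least as much scrutiny as the scores themselves.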

And that is where this tool can often go wrong. It appears ICE under the Trump administration has determined its objective is to jail everyone. By adjusting the filters, the tool easily drives to that conclusion. And this is a problem. Here we see a quality tool being used to excuse inhumane policy choices. It is not the ICE agents separating families and jailing people over a misdemeanor, it is the tool. And if that doesn’t strike at the heart of the banality of evil concept, I’m not sure what does.

I could go deeper into the tool, how I would have built it, and the ways you validate its effectiveness. That would all probably make an excellent follow-up someday. But the reason I’m writing this post is that, reading this article, it dawned on me that someone very similar to me in skill set probably created this tool. Someone I may have sat across the table from at a professional conference, who has read the same articles and probably has the same qualitative vs. quantitative debates. And this is a great example of when it’s necessary to speak up and criticize a tool of my profession being used for evil. I will probably never talk to the team who developed this tool, but we all see instances of companies around us being asked to build similar applications, using the tools of our profession, that will be used for the wrong results. And we owe it to our code of ethics to refuse.


Questions to ask when contemplating data integrity

Here are a set of questions that should be evaluated in any data integrity risk assessment/evaluation.

  1. Do you have a list of all GxP activities performed in your organization?
  2. Do you know which GxP activities involve intensive data handling tasks?
  3. Do you know the automation status of each GxP activity?
  4. Have you identified a list of GxP records that will be created by each GxP activity?
  5. Have you determined the format in which the official GxP records will be maintained?
  6. Have you determined if a signature is required for each GxP record?
  7. Do you have controls to ensure that observed, measured or processed GxP data is accurate?
  8. Do you have controls to ensure that GxP data is maintained in full without being omitted, discarded or deleted?
  9. Do you have controls to ensure that naming, measurement units, and value limits are defined and applied consistently during GxP data handling?
  10. Do you have controls to ensure that GxP data is recorded at the same time as the observation/measurement is made or shortly thereafter?
  11. Do you have controls to ensure that GxP data is recorded in a clear and human readable form?
  12. Do you have controls to ensure that data values represent the first recording of the GxP data or an exact copy of an original data?
  13. Do you have SOP(s) addressing management of GxP documents and records and good documentation practices?
  14. Do you have SOP(s) addressing the escalation of quality events that also cover data integrity breaches?
  15. Do you have SOP(s) addressing self-inspections/audits with provisions for data integrity?
  16. Do you have SOP(s) addressing management of third parties with provisions for the protection of data integrity?
  17. Do you have SOP(s) for Computerized Systems Compliance?
  18. Do you have SOP(s) for training and does it include training on data integrity for employees handling GxP data?
  19. For GxP activities that generate data essential for product quality, product supply or patient safety, do you have controls to prevent or minimize:
    • Process execution errors due to human inability, negligence or inadequate procedures?
    •  Non-compliance due to unethical practices such as falsification?
  20. Do you have controls to ensure that only authorized employees are granted access to GxP data based on the requirements of their job role?
  21. Do you have controls to ensure that only the GxP activity owner or delegate can grant access to the GxP data?
  22. Do you have controls to eliminate or reduce audiovisual distractions for GxP activities with intensive data handling tasks?
  23. Do you assess the design and configuration of your computerized GxP activity to minimize manual interventions where possible?
  24. Do you have controls for review of audit trail data at relevant points in the process to support important GxP actions or decisions?
  25. Do you have controls, supervision or decision support aids to help employees who perform error-prone data handling activities?
  26. Do you have controls to ensure business continuity if a GxP record essential for product quality, product supply, or patient safety is not available? Both for when there is a temporary interruption to GxP activity or during a disaster scenario?
  27. Do you have a process for ensuring that data integrity requirements are included in the design and configuration of GxP facilities where data handling activities take place?
  28. Have you assessed the compliance status of computerized systems used to automate GxP activities?
  29. Do you have controls to prevent data capture and data handling errors during GxP data creation?
  30. Do you have controls to ensure the accuracy of date and time applied to GxP data, records and documents?
  31. Do you have controls to ensure that changes to GxP data are traceable to who did what, when and if relevant why during the lifecycle of the GxP data?
  32. Do you have controls to ensure that – when required – legally binding signatures can be applied to GxP records and that their integrity is ensured during the retention period of the GxP record?
  33. Do you have controls to ensure that GxP computerized systems managing GxP data can:
    • Allow access only to employees with proper authorization?
    • Identify each authorized employee uniquely?
  34. Do you have controls to ensure that GxP data can be protected against accidental or willful harm?
  35. Do you have controls to keep GxP data in a human readable form for the duration of the retention period?
  36. Do you have controls to ensure that the process for offline retention and retrievals is fit for its intended purpose?

Changes become effective

“Change effective,” “implementation,” “routine use”…these are all terms that swirl around change control, and they can mean several different things depending on your organization. So what is truly important to track?

[Image: regulatory and change process map]

Taking a look at the above process map I want to focus on three major points, what I like to call the three implementations:

  1. When the change is in use
  2. When the change is regulatory approved
  3. When product is sent to a market

The sequence of these dates will depend on the regulatory impact.

|                     | Tell and Do | Do and Tell | Do and Report |
|---------------------|-------------|-------------|---------------|
| Change in use       | After regulatory approval, when change is introduced to the ‘floor’ | When change is introduced to the ‘floor’ | When change is introduced to the ‘floor’ |
| Regulatory approval | Upon approval | After use, before sending to market | Upon reporting frequency (annual, within 6 months, within 1 year) |
| Sent to market      | After regulatory approval and change in use | After regulatory approval and change in use | After change in use |

I’m using ‘floor’ very loosely here. “Change in use” is that point where everything you do is made, tested and/or released under the change. Perhaps it’s a batch record change. Everything that came before is clearly not under the change. Everything that came after clearly is.

You can have the same change fit into all three areas, and your change control system needs to be robust enough to manage this. This is where tracking regulatory approval per country/market is critical, and tracking when the product was first sent.
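To make the tracking concrete, here is a hypothetical sketch of a change control record that tracks the three implementation points per market. The class names, reporting categories, and dates are illustrative assumptions, not taken from any particular change control system.

```python
# Hypothetical sketch of tracking the "three implementations" per market.
# Category names and fields are illustrative, not any real system's schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class MarketStatus:
    category: str                               # "tell-and-do", "do-and-tell", "do-and-report"
    regulatory_approval: Optional[date] = None  # per-country/market approval date
    first_shipment: Optional[date] = None       # when product first went to this market

@dataclass
class ChangeControl:
    change_id: str
    in_use_date: Optional[date] = None          # when the change hit the 'floor'
    markets: dict = field(default_factory=dict)  # market name -> MarketStatus

    def can_ship(self, market: str, on: date) -> bool:
        """Product may ship to a market only once the sequence for its
        reporting category is satisfied."""
        m = self.markets[market]
        # Every category requires the change to be in use before shipment.
        if self.in_use_date is None or on < self.in_use_date:
            return False
        # Tell-and-do and do-and-tell also require regulatory approval first.
        if m.category in ("tell-and-do", "do-and-tell"):
            return m.regulatory_approval is not None and on >= m.regulatory_approval
        # Do-and-report: ship after change in use; report per the filing cycle.
        return True

cc = ChangeControl("CC-001", in_use_date=date(2019, 3, 1))
cc.markets["US"] = MarketStatus("do-and-report")
cc.markets["EU"] = MarketStatus("tell-and-do", regulatory_approval=date(2019, 6, 1))
```

Even in this toy form, the same change carries different gating rules per market, which is the robustness the change control system has to provide.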

A complicated change can easily look like this (an oversimplification):

[Image: building actions]

Is this 1, 2, or 3 processes? More? That depends on many factors; the critical part is building the connections and making sure your change control system both receives inputs and provides outputs. Depending on your company, the data map can get rather complicated.

29 questions to ask about your change management/change control system

While these questions are very pharma/biotech specific in places, they should serve as a thought process for your own system checkup.

  1. Is there a written SOP covering the change control program that has been approved by the Quality Unit?
  2. Do procedures in place describe the actions to be taken if a change is proposed to a starting material, product component, process equipment, process environment (or site), method of production or testing or any other change that may affect product quality or reproducibility/robustness of the process?
  3. Does the SOP ensure that all GMP changes are reviewed and approved by the Quality Unit?
  4. If changes are classified as “major” or “minor,” do procedures clearly define the differences?
  5. Does your change management system include criteria for determining if changes are justified?
  6. Are proposed changes evaluated by expert teams (e.g. HSE, Regulatory, Quality…)?
  7. Is there a process for cancelling a change request prior to implementation? Is a rationale for cancellation included?
  8. Does your site’s change control procedure clearly describe the process to close a change request (after all regulatory approvals, etc.)?
  9. Are any delays explained and documented?
  10. Is there a written requirement that change controls implemented during normal or routine maintenance activities be documented in the formal change control program?
  11. Is your change management system linked to other quality systems such as CAPA, validation, training?
  12. Does your change management system include criteria for determining if changes will require qualification/requalification, validation/revalidation and stability studies?
  13. Are “like for like” changes (changes where there is a direct replacement of a component with another that is exactly the same) clearly defined in all aspects (including material of construction, dimensions, functionality, etc.)? Are they adequately documented and commissioned to provide traceability and history?
  14. Is there an allowance for emergency and temporary changes under described conditions in the procedures?
  15. Are the proposed changes evaluated relative to the marketing authorization and/or current product and process understanding?
  16. Does your change management system include criteria to evaluate whether changes affect a regulatory filing?
  17. Are appropriate regulatory experts involved? Does the regulatory affairs function evaluate and approve all changes that impact regulatory files?
  18. Are changes submitted/implemented in accordance with the regulatory requirements?
  19. Is there a defined system for the formalization, roles, and responsibilities for change control follow-up?
  20. Is the effective date of the change (completion date) recorded and when appropriate the first batch manufactured recorded?
  21. Is there a periodic check of the implementation of Change controls?
  22. Following the implementation, is there an evaluation of the change undertaken to confirm the change objectives were achieved and that there was no adverse impact on product quality?
  23. Is all documentation that provides evidence of change, and documentation of requirements, controlled and retained according to procedure?
  24. When necessary, are personnel trained before the implementation of the change?
  25. Are change controls defined with adequate target dates?
  26. If the change control goes beyond the target date, is there a new date attributed, evaluated and documented by Quality Assurance?
  27. Are there routine evaluations of the Change controls and trends (number, Change controls closure, trends as defined)?
  28. Are changes closed by their due date?
  29. Are the Change controls and follow-up formalized in a report and/or periodic meetings?

These sorts of questions form a nice way to periodically check up on your system’s performance and ensure you are moving in the right direction.