WCQI Day 3 – morning

I didn’t make it to the keynote. I had a work conference call, so I will never learn the quality secrets of Anheuser-Busch.

“A Fresh Approach to Risk Assessment & FMEA: It’s all about severity” by Beverly Daniels.

After yesterday’s Quality 4.0 session I was not going to miss this, as the presenter has a blunt, to-the-point attitude that promised to be interesting and fun to watch.

She has a very R&R-driven mindset, which is a little far from my own but one I find fascinating. Her approach is to get rid of probability and detection on an FMEA. How does she do that?

  • Create a function diagram and process maps as applicable
  • Create an input:output matrix
  • List functions
  • List failure modes: how a failure presents itself
  • List the effects of the failure modes
  • Determine severity of the failure modes at the local level and system level
  • Develop V&V, mitigation and control plans for all high severity failures.

Which means she’s simply not using the risk assessment as a consolidation of decisions (hopefully some other form of matrix serves that purpose) and always uses testing data for occurrence.
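
To make the severity-only idea concrete, here is a minimal sketch in Python. The failure modes, the 1–10 severity scale, and the threshold are all hypothetical; this is my illustration of the approach, not the presenter’s tooling.

```python
from dataclasses import dataclass

# Hypothetical severity scale: 1 (negligible) to 10 (catastrophic).
HIGH_SEVERITY_THRESHOLD = 7

@dataclass
class FailureMode:
    function: str         # the function that can fail
    mode: str             # how the failure presents itself
    effect: str           # effect of the failure mode
    local_severity: int   # severity at the local level
    system_severity: int  # severity at the system level

    @property
    def severity(self) -> int:
        # Rank on the worst of the local and system views.
        return max(self.local_severity, self.system_severity)

fmea = [
    FailureMode("Fill vial", "Underfill", "Sub-potent dose to patient", 6, 9),
    FailureMode("Label vial", "Wrong label applied", "Misidentified product", 5, 10),
    FailureMode("Cap vial", "Loose cap", "Container closure breach", 4, 6),
]

# No RPN, no occurrence, no detection: every high-severity failure
# mode gets a V&V, mitigation, and control plan.
for fm in sorted(fmea, key=lambda f: f.severity, reverse=True):
    if fm.severity >= HIGH_SEVERITY_THRESHOLD:
        print(f"PLAN REQUIRED: {fm.function} / {fm.mode} (severity {fm.severity})")
```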

The speaker hammered on the problem of static FMEAs. I’m a big fan of living risk assessments, and I think that is an approach that needs more attention.

Some interesting ideas on probability and testing here, but buried under some strong rhetoric. Luckily she posted a longer write-up which I’ll need to consider more.

“Using Decision Analysis to Improve, Make or Break Decisions” by Kurt Stuke

Someday I’ll write up more on why I find long credential-porn intros annoying. My favorite intro is “Jeremiah Genest works for Sanofi and has 20 years of experience in quality.” Post my damn CV if you want, but seriously, my words, my presentation, and my references should speak for themselves.

I like the flip sessions; prepping beforehand is always good. The conference needs to do a better job letting people know about the prep work. The amount of confusion in this session was telling. The app does not even link to the prep work; the only notice is an email.

Here are Kurt’s resources: https://www.kurtstuke.com/OER/WCQI/

There is no 100% tool. I’m glad he stressed that at the beginning, as we sometimes forget to say so in the profession.

“Whim leads to an advocacy approach, which means data loses its voice.”

He used Kepner-Tregoe (KT) as the framework for decision analysis, talking about the “must haves” and “nice-to-haves.” Maybe it’s because of the proprietary nature of KT, but I find their methodology is something folks are either really familiar with or completely surprised by.

So this is again basic stuff. I’m not sure if that reflects the sessions I’m choosing or just where I am in my journey. At my table I was the only one really familiar with these tools.

Good presenter. Love the workshop approach. It was great watching and participating with my table-mates and seeing lightbulbs go off. However, this was a basic workshop, not an intermediate one.

Risk Management is about reducing uncertainty

Risk management is all about eliminating surprise. So to truly start to understand our risks, we need to understand uncertainty; we need to understand the unknowns. Borrowing from Andreas Schamanek’s Taxonomies of the unknown, let’s explore a few taxonomies of what is not known.

Ignorance Map

I’m pretty sure Ann Kerwin first gave us the “known unknowns” and the “unknown knowns” that people still find a source of amusement in connection with former defense secretary Rumsfeld.

|         | Known                            | Unknown                               |
|---------|----------------------------------|---------------------------------------|
| Known   | Known knowns                     | Known unknowns (conscious ignorance)  |
| Unknown | Unknown knowns (tacit knowledge) | Unknown unknowns (meta-ignorance)     |

Understanding uncertainty involves knowledge management, which is why a rigorous knowledge management program is a prerequisite for an effective quality management system.

Risk management is then a way of teasing out the unknowns and allowing us to take action:

  1. Risk assessments most easily focus on the ignorance that we are aware of, the ‘known unknowns’.
  2. Risk assessments can also serve as a tool of teasing out the ‘unknown knowns’. This is why participation of subject matter experts is so critical. Through the formal methodology of the risk assessment we expose and explore tacit knowledge.
  3. The third kind of ignorance is what we do not know we do not know, the ‘unknown unknowns’. We generally become aware of unknown unknowns in two ways: hindsight (deviations) and purposefully expanding our horizons. This expansion includes diversity and also good experimentation. It is the hardest, but perhaps most valuable, part of risk management.

Taxonomy of Ignorance

Different Kinds of Unknowns, Source: Smithson (1989, p. 9); also in Bammer et al. (2008, p. 294).

Smithson distinguishes between passive and active ignorance. Passive ignorance involves areas that we are ignorant of, whereas active ignorance refers to areas we ignore. He uses the term ‘error’ for the unknowns encompassed by passive ignorance and ‘irrelevance’ for active ignorance.

Taboo is fascinating because it gets to the heart of our cultural blindness, those parts of our organization that are closed to scrutiny.

Smithson can help us understand why risk assessments are both a qualitative and a quantitative endeavor. While dealing with the unknown is the bread and butter of statistics, only a small part of the terrain of uncertainty is covered. Under Smithson’s typology, statistics primarily operates in the area of incompleteness, across probability and some kinds of vagueness. In terms of its considerations of sampling bias, statistics also has some overlap with inaccuracy. But, as the typology shows, there is much more to unknowns than the areas statistics deals with. This is another reason that subject matter experts and different ways of thinking are a must.

Ensuring wide and appropriate expert participation gives additional perspectives on unknowns. There are also synergies in finding unrecognized similarities between disciplines and stakeholders in the unknowns they deal with, and there may be great benefit from combining forces. It is important to use these concerns to enrich thinking about unknowns, rather than ruling them out as irrelevant.

Sources of Surprise

Risk management is all about managing surprise. It helps to break surprise down into three types: risk, uncertainty, and ignorance.

  • Risk: The condition in which the event, process, or outcome and the probability that each will occur are known.
    • Issue: In reality, complete knowledge of the probabilities and the range of potential outcomes or consequences is not usually available and is sometimes unattainable.
  • Uncertainty: The condition in which the event, process, or outcome is known (factually or hypothetically) but the probabilities that it will occur are not known.
    • Issue: The probabilities assigned, if any, are subjective, and ways to establish reliability for different subjective probability estimates are debatable.
  • Ignorance: The condition in which the event, process, or outcome is not known or expected.
    • Issue: How can we anticipate the unknown, improve the chances of anticipating, and, therefore, improve the chances of reducing vulnerability?

Effective use of the methodology ideally moves us from ignorance, through uncertainty, to risk.


| Ignorance | Description | Methods of Mitigation |
|---|---|---|
| Closed ignorance | Information is available, but SMEs are unwilling or unable to consider that some outcomes are unknown to them. | Self-audit process, regular third-party audits, and an open and transparent system with global participation |
| Open ignorance | Information is available and SMEs are willing to recognize and consider that some outcomes are unknown. The four rows below are forms of open ignorance. | |
| Personal | Surprise occurs because an individual SME lacks knowledge or awareness of the available information. | Effective teams explore multiple perspectives by including a diverse set of individuals and data sources for data gathering and analysis. Transparency in process. |
| Communal | Surprise occurs because a group of SMEs has only similar viewpoints represented or may be less willing to consider views outside the community. | Diversity of viewpoints and use of tools to overcome group-think and “tribal” knowledge |
| Novelty | Surprise occurs because the SMEs are unable to anticipate and prepare for external shocks or internal changes in preferences, technologies, and institutions. | Simulating impacts and gaming alternative outcomes of various potentials under different conditions (Blue Team/Red Team exercises) |
| Complexity | Surprise occurs when inadequate forecasting tools are used to analyze the available data, resulting in inter-relationships, hidden dependencies, feedback loops, and other negative factors that lead to inadequate or incomplete understanding of the data. | Systems thinking (track changes and interrelationships of various systems to discover potential macro-effect force changes); the 12 levers |


Risk management is all about understanding surprise and working to reduce uncertainty and ignorance so that we can reduce, eliminate, and sometimes accept risk. As a methodology it is effective at avoiding surrender and denial. With innovation we can even contemplate exploiting risk. As organizations mature, it is important to understand these concepts and utilize them.

References

  • Gigerenzer, G. and Garcia-Retamero, R. (2017) “Cassandra’s Regret: The Psychology of Not Wanting to Know”, Psychological Review, 124(2): 179–196.
  • House, R. J., Hanges, P. J., Javidan, M., Dorfman, P. and Gupta, V., eds. (2004) Culture, Leadership, and Organizations: The GLOBE Study of 62 Societies, Thousand Oaks, CA: Sage Publications.
  • Kerwin, A. (1993) “None Too Solid: Medical Ignorance”, Knowledge, 15(2): 166–185.
  • Smithson, M. (1989) Ignorance and Uncertainty: Emerging Paradigms, New York: Springer-Verlag.
  • Smithson, M. (1993) “Ignorance and Science”, Knowledge: Creation, Diffusion, Utilization, 15(2): 133–156.

Risk Management of Raw Materials

This paper discusses background information related to RM regulatory requirements and industry challenges, and then highlights key principles to consider in setting up a risk-based RM management approach and control strategy. This paper then provides an example of how to translate those key principles into a detailed RM risk assessment methodology, and how to apply this methodology to specific raw materials. To better illustrate the diversity and nuance in applying a corresponding RM control strategy, a number of case studies with raw materials typically utilized in the manufacture of biological medicinal products have been included as well as discussion on phase-based mitigations.

European Biopharmaceutical Enterprises (2018) “Management and Control of Raw Materials Used in the Manufacture of Biological Medicinal Products and ATMPs”

Good foundation document for how to build a risk management program for managing raw materials.

Review of Audit Trails

One of the data integrity requirements that has changed in detail as the various guidances (FDA, MHRA, PIC/S) have gone through drafts is the review of audit trails. This will also probably be one of the more controversial requirements in certain corners, as it can be seen by some as going beyond what has traditionally been the focus of good documentation practices and computer system validation.

What the guidances say

Audit trail review is similar to assessing cross-outs on paper when reviewing data. Personnel responsible for record review under CGMP should review the audit trails that capture changes to data associated with the record as they review the rest of the record (e.g., §§ 211.22(a), 211.101(c) and (d), 211.103, 211.182, 211.186(a), 211.192, 211.194(a)(8), and 212.20(d)). For example, all production and control records, which includes audit trails, must be reviewed and approved by the quality unit (§ 211.192). The regulations provide flexibility to have some activities reviewed by a person directly supervising or checking information (e.g., § 211.188). FDA recommends a quality system approach to implementing oversight and review of CGMP records.

US FDA. “Who should review audit trails?”  Data Integrity and Compliance With Drug CGMP Questions and Answers Guidance for Industry. Section 7, page 8

If the review frequency for the data is specified in CGMP regulations, adhere to that frequency for the audit trail review. For example, § 211.188(b) requires review after each significant step in manufacture, processing, packing, or holding, and § 211.22 requires data review before batch release. In these cases, you would apply the same review frequency for the audit trail. If the review frequency for the data is not specified in CGMP regulations, you should determine the review frequency for the audit trail using knowledge of your processes and risk assessment tools. The risk assessment should include evaluation of data criticality, control mechanisms, and impact on product quality. Your approach to audit trail review and the frequency with which you conduct it should ensure that CGMP requirements are met, appropriate controls are implemented, and the reliability of the review is proven.


US FDA. “How often should audit trails be reviewed?”  Data Integrity and Compliance With Drug CGMP Questions and Answers Guidance for Industry. Section 8, page 8
Item 1

Expectations:
  • Consideration should be given to data management and integrity requirements when purchasing and implementing computerised systems. Companies should select software that includes appropriate electronic audit trail functionality.
  • Companies should endeavour to purchase and upgrade older systems to implement software that includes electronic audit trail functionality.
  • It is acknowledged that some very simple systems lack appropriate audit trails; however, alternative arrangements to verify the veracity of data must be implemented, e.g. administrative procedures, secondary checks and controls. Additional guidance may be found under section 9.9 regarding hybrid systems.
  • Audit trail functionality should be verified during validation of the system to ensure that all changes and deletions of critical data associated with each manual activity are recorded and meet ALCOA+ principles.
  • Audit trail functionalities must be enabled and locked at all times and it must not be possible to deactivate the functionality. If it is possible for administrative users to deactivate the audit trail functionality, an automatic entry should be made in the audit trail indicating that the functionality has been deactivated.
  • Companies should implement procedures that outline their policy and processes for the review of audit trails in accordance with risk management principles. Critical audit trails related to each operation should be independently reviewed with all other records related to the operation and prior to the review of the completion of the operation, e.g. prior to batch release, so as to ensure that critical data and changes to it are acceptable. This review should be performed by the originating department, and where necessary verified by the quality unit, e.g. during self-inspection or investigative activities.
  • Validation documentation should demonstrate that audit trails are functional, and that all activities, changes and other transactions within the systems are recorded, together with all metadata.
  • Verify that audit trails are regularly reviewed (in accordance with quality risk management principles) and that discrepancies are investigated.
  • If no electronic audit trail system exists, a paper-based record to demonstrate changes to data may be acceptable until a fully audit-trailed system (integrated system or independent audit software using a validated interface) becomes available. These hybrid systems are permitted where they achieve equivalence to an integrated audit trail, such as described in Annex 11 of the PIC/S GMP Guide.

Potential risk of not meeting expectations / items to be checked:
  • Failure to adequately review audit trails may allow manipulated or erroneous data to be inadvertently accepted by the Quality Unit and/or Authorised Person.
  • Clear details of which data are critical, and which changes and deletions must be recorded (audit trail), should be documented.

Item 2

Expectations:
  • Where available, audit trail functionalities for electronic-based systems should be assessed and configured properly to capture any critical activities relating to the acquisition, deletion, overwriting of, and changes to data for audit purposes.
  • Audit trails should be configured to record all manually initiated processes related to critical data.
  • The system should provide a secure, computer-generated, time-stamped audit trail to independently record the date and time of entries and actions that create, modify, or delete electronic records.
  • The audit trail should include the following parameters: who made the change; what was changed, incl. old and new values; when the change was made, incl. date and time; why the change was made (reason); and the name of any person authorising the change.
  • The audit trail should allow for reconstruction of the course of events relating to the creation, modification, or deletion of an electronic record. The system must be able to print and provide an electronic copy of the audit trail, and whether viewed in the system or in a copy, the audit trail should be available in a meaningful format.
  • If possible, the audit trail should retain the dynamic functionalities found in the computer system, e.g. search functionality and export to, e.g., Excel.

Potential risk of not meeting expectations / items to be checked:
  • Verify the format of audit trails to ensure that all critical and relevant information is captured.
  • The audit trail must include all previous values, and record changes must not obscure previously recorded information.
  • Audit trail entries should be recorded in true time and reflect the actual time of activities. Systems recording the same time for a number of sequential interactions, or which only make an entry in the audit trail once all interactions have been completed, may not be in compliance with data integrity expectations, particularly where each discrete interaction or sequence is critical, e.g. the electronic recording of the addition of 4 raw materials to a mixing vessel. If the order of addition is a CPP, then each addition should be recorded individually, with time stamps. If the order of addition is not a CPP, then the addition of all 4 materials could be recorded as a single time-stamped activity.

PIC/S. PI 041-1 “Good Practices for Data Management and Data Integrity in regulated GMP/GDP Environments” (3rd draft), section 9.4 “Audit trail for computerised systems”, page 36

Thoughts

It has long been the requirement that computer systems have audit trails and that these be convertible to a format that can be reviewed as appropriate. What these guidances are stating is:

  • There are key activities captured in the audit trail. These key activities are determined in a risk-based manner.
  • These key activities need to be reviewed when making decisions based on them (determine a frequency).
  • The audit trail needs to be able to show the reviewer the key activity.
  • These reviews need to be captured in the quality system (proceduralized, recorded).
  • This is part of the validated state of your system.

So, for example, my deviation system is evaluated and the key activity that needs to be reviewed is the decision to forward-process. Quality makes that determination at several points in the workflow. The audit trail review would thus be looking at who made the decision, when, and whether it met the criteria. The frequency might be established at the point of disposition for any deviation still in an open state, and again upon closure.

What we are being asked to do is evaluate all of our computer systems and figure out which parts of the audit trail need to be reviewed, and when.

Now here’s the problem. Most audit trails are garbage. Maybe they are human readable by some vague definition of readable (or even human). But they don’t have filters, search, or templates. So companies need to evaluate their audit trails system by system (again, using a risk-based approach) to see if they are up to the task. You then end up with one or more of these solutions:

  • Rebuild the audit trail to make it human readable and give it filters and search criteria. For example, on a deviation record there is one view for “disposition” and another for “closure.”
  • Add reports (such as a set of Crystal Reports) to make it human readable and provide filters and search criteria. You probably end up with a report for “disposition” and another report for “closure.”
  • Utilize an export function to Excel (or a similar program) and use its functions to filter and search; see the sketch after this list. Remember to ensure you have a data verification process in place.
  • The best solution is to ensure the audit trail review is a step in your workflow and the review is captured as part of the audit trail. Ideally this is part of an exception reporting process driven by the system.
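
To illustrate the export-and-filter option, here is a minimal sketch using pandas. The file name, column names, and the “disposition” filter are hypothetical assumptions about what a system might export; a real implementation still needs the data verification process noted above.

```python
import pandas as pd

# Hypothetical export: the columns are assumptions, not a real system's schema.
trail = pd.read_csv("deviation_audit_trail_export.csv",
                    parse_dates=["timestamp"])

# One view per decision point, e.g. everything touching disposition fields.
disposition = trail[trail["field"].str.startswith("disposition")]

# Focus the reviewer on who changed what, when, and why.
review_view = (disposition
               .sort_values("timestamp")
               [["timestamp", "user", "field", "old_value", "new_value", "reason"]])

print(review_view.to_string(index=False))

# Flag entries with no reason recorded -- candidates for investigation.
missing_reason = review_view[review_view["reason"].isna()]
if not missing_reason.empty:
    print(f"{len(missing_reason)} change(s) lack a documented reason")
```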

What risk-based questions should drive this?

  • What is the overall risk of the system?
  • What are the capabilities of the audit trail?
  • Can the data be modified after entry? Can it be modified prior to approval?
  • Is the result qualitative or quantitative?
  • Are changes to data visible on the record itself?
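
Those questions lend themselves to a first-pass screen. Below is a hedged sketch; the weights and thresholds are invented for illustration and would need to be justified within your own quality system.

```python
# Hypothetical first-pass screen for how rigorous audit trail review
# should be. Weights and cutoffs are illustrative, not a validated model.
def audit_trail_review_priority(system_risk: int,             # 1 (low) to 3 (high)
                                capable_trail: bool,          # filters, search, views?
                                modifiable_after_entry: bool,
                                modifiable_before_approval: bool,
                                quantitative_result: bool,
                                changes_visible_on_record: bool) -> str:
    score = system_risk
    score += 0 if capable_trail else 2
    score += 2 if modifiable_after_entry else 0
    score += 1 if modifiable_before_approval else 0
    score += 1 if quantitative_result else 0
    score += 0 if changes_visible_on_record else 1

    if score >= 6:
        return "review per record, before the decision"
    if score >= 3:
        return "periodic review at a defined frequency"
    return "review on exception / during self-inspection"

# A high-risk system with a weak, freely editable trail lands at the top tier.
print(audit_trail_review_priority(3, False, True, True, True, False))
```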

Data Integrity and Control of Forms

In “Data Integrity and Compliance With Drug CGMP Questions and Answers Guidance for Industry” the FDA states the following about control of blank forms:

There must be document controls in place to assure product quality (see §§ 211.100, 211.160(a),211.186, 212.20(d), and 212.60(g)). For example, bound paginated notebooks, stamped for official use by a document control group, provide good document control because they allow easy detection of unofficial notebooks as well as any gaps in notebook pages. If used, blank forms (e.g., electronic worksheets, laboratory notebooks, and MPCRs) should be controlled by the quality unit or by another document control method. As appropriate, numbered sets of blank forms may be issued and should be reconciled upon completion of all issued forms. Incomplete or erroneous forms should be kept as part of the permanent record along with written justification for their replacement (see, e.g., §§ 211.192, 211.194, 212.50(a), and 212.70(f)(1)(vi)). All data required to recreate a CGMP activity should be maintained as part of the complete record.

US FDA. “How should blank forms be controlled?” Data Integrity and Compliance With Drug CGMP Questions and Answers Guidance for Industry. Section 6, page 7

The first sentence, “There must be document controls in place to assure product quality,” should be interpreted in a risk-based way. All forms should always be published in a controlled manner, ideally from an electronic system that ensures the correct version is used and provides a time/date stamp of when the form is published. Some forms (based on risk) should be published in such a way that contemporaneity and originality are easier to prove. In other words, bind them.

A good rule of thumb for binding a printed form (which is now going to become a record) is as follows:

  1. Is it one large form with individual pages contributing to the whole record that could be easily lost, misplaced or even intentionally altered? 
  2. Is it a form that provides chronological order to the same or similar pieces of information such as a logbook?
  3. Is time of entry important?
  4. Will this form live with a piece of equipment, an instrument, or a room for a period of time? Another way to phrase this: is the form not a once-and-done that, upon completion, moves along in a review flow as a record?

If you answer yes to any of these, then the default should be to bind it and control it through a central publishing function, traditionally called document control.
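
Encoded as a checklist, the rule of thumb is just an any-of-four test. A minimal sketch, with hypothetical parameter names:

```python
# The four yes/no questions from the rule of thumb above.
def should_bind(multi_page_whole_record: bool,
                chronological_entries: bool,       # e.g. a logbook
                time_of_entry_important: bool,
                lives_with_equipment_or_room: bool) -> bool:
    """Default to binding (and central issuance) if any answer is yes."""
    return any([multi_page_whole_record,
                chronological_entries,
                time_of_entry_important,
                lives_with_equipment_or_room])

# A logbook that stays with an instrument: bind it.
print(should_bind(False, True, False, True))  # True
```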

The PIC/S draft on data integrity has more to say here:

Distribution and Control (Item 2, page 17 of 52)

Expectation:
Issue should be controlled by written procedures that include the following controls:
  • Details of who issued the copies and when they were issued.
  • Use of a secure stamp, or a paper colour code not available in the working areas, or another appropriate system.
  • Ensuring that only the current approved version is available for use.
  • Allocating a unique identifier to each blank document issued and recording the issue of each document in a register.
  • Numbering every distributed copy (e.g. copy 2 of 2) and sequential numbering of issued pages in bound books.
  • Where the re-issue of additional copies of the blank template is necessary, following a controlled process regarding re-issue. All distributed copies should be maintained, and a justification and approval for the need of an extra copy should be recorded, e.g. “the original template record was damaged.”
  • Reconciling all issued records following use to ensure the accuracy and completeness of records.

Potential risk of not meeting expectations / items to be checked:
Without the use of security measures, there is a risk that rewriting or falsification of data may be made after photocopying or scanning the template record (which gives the user another template copy to use). An obsolete version can be used intentionally or by error. A filled record with an anomalous data entry could be replaced by a new rewritten template. All unused forms should be accounted for, and either defaced and destroyed, or returned for secure filing.
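
To make those issuance controls concrete, here is a minimal sketch of a register implementing unique identifiers, copy numbering, and reconciliation. The class, fields, and ID format are hypothetical illustrations, not a validated design.

```python
import datetime
import itertools

# Hypothetical register for issuing controlled blank forms, covering the
# PIC/S-style controls above: unique identifier per issued document,
# issuer details, copy numbering, and reconciliation after use.
class FormRegister:
    def __init__(self):
        self._seq = itertools.count(1)
        self.entries = {}

    def issue(self, template: str, version: str, issued_by: str,
              copies: int = 1) -> list[str]:
        issued = []
        for copy_no in range(1, copies + 1):
            form_id = f"{template}-{version}-{next(self._seq):05d}"
            self.entries[form_id] = {
                "issued_by": issued_by,
                "issued_on": datetime.date.today(),
                "copy": f"{copy_no} of {copies}",
                "reconciled": False,
            }
            issued.append(form_id)
        return issued

    def reconcile(self, form_id: str) -> None:
        self.entries[form_id]["reconciled"] = True

    def outstanding(self) -> list[str]:
        # Forms issued but never returned: each needs follow-up.
        return [fid for fid, e in self.entries.items() if not e["reconciled"]]

reg = FormRegister()
ids = reg.issue("BATCH-RECORD-F001", "v3", issued_by="doc control", copies=2)
reg.reconcile(ids[0])
print(reg.outstanding())  # the unreturned copy must be accounted for
```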