Risk assessment is a pillar of the quality system because it gives us the ability to anticipate problems in a consistent manner. It is built on a few fundamental concepts.
When I teach an introductory risk management class, I usually open with an icebreaker: "What is the riskiest activity you can think of doing?" Inevitably you will get some version of skydiving, swimming with sharks, or jumping off bridges. This activity is great because it starts the conversation around likelihood and severity. At heart, the question brings out the concept of risk important activities and the nature of controls.
The things people think of, such as skydiving, are great examples of activities that are surrounded by other activities that control risk. The activity itself is based on reducing risk as low as possible and then proceeding along the safest possible pathway. These risk important activities are the mechanisms performed just before a critical step that:
Ensure the appropriate transfer of information and skill
Ensure the appropriate number of actions to reduce risk
Influence the presence or effectiveness of barriers
Influence the ability to maintain positive control over hazards
Risk important activities are a concept central to safety thinking and are at the heart of many human error reduction tools and practices. They are all about thinking through the right set of controls, building them into the procedure, and successfully executing them before reaching the critical step of no return. Checklists are a great example of this mindset at work, but there are many ways of applying it.
Hospitals use a great thought process, the "Five Rights of Safe Medication Practices": 1) right patient, 2) right drug, 3) right dose, 4) right route, and 5) right time. Next time you are getting medication in the doctor's office or hospital, evaluate just what your caregiver is doing and how it fits into that process. Those are risk important activities in action.
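The same mindset can be sketched in code: a gate that refuses to let the critical step proceed until every risk important check has been confirmed. This is a hypothetical illustration of the pattern, not a real clinical system; all names are invented.

```python
# Hypothetical sketch of a "Five Rights" gate before the critical step of
# administering medication. Every name here is illustrative.

FIVE_RIGHTS = ("patient", "drug", "dose", "route", "time")

def may_administer(confirmations: dict) -> bool:
    """Return True only if every one of the five rights was confirmed.

    `confirmations` maps each right to True/False; a missing entry counts
    as unconfirmed, so the critical step is blocked by default.
    """
    return all(confirmations.get(right, False) for right in FIVE_RIGHTS)

# The risk important activity happens *before* the point of no return:
checks = {"patient": True, "drug": True, "dose": True, "route": True, "time": True}
assert may_administer(checks)

# Any single unconfirmed right blocks the critical step.
checks["dose"] = False
assert not may_administer(checks)
```

The design point is the default: an unverified check blocks the irreversible action, rather than the action proceeding unless someone objects.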
Assessing controls during risk assessment
Risk is affected by the overall effectiveness of any controls that are in place.
The key aspects of controls are:
the mechanism by which the controls are intended to modify risk
whether the controls are in place, are capable of operating as intended, and are achieving the expected results
whether there are shortcomings in the design of controls or the way they are applied
whether there are gaps in controls
whether controls function independently, or if they need to function collectively to be effective
whether there are factors, conditions, vulnerabilities or circumstances that can reduce or eliminate control effectiveness including common cause failures
A risk can have more than one control and controls can affect more than one risk.
We always want to distinguish between controls that change likelihood, consequences, or both, and controls that change how the burden of risk is shared between stakeholders.
Any assumptions made during risk analysis about the actual effect and reliability of controls should be validated where possible, with a particular emphasis on individual or combinations of controls that are assumed to have a substantial modifying effect. This should take into account information gained through routine monitoring and review of controls.
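One way to make the many-to-many relationship between risks and controls concrete is a small data model that also captures whether a control is in place and operating as intended. This is a hypothetical sketch under those assumptions, not a prescribed assessment tool.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    in_place: bool              # is the control actually implemented?
    operating_as_intended: bool # is it achieving the expected results?
    modifies: str               # "likelihood", "consequence", or "both"

    def effective(self) -> bool:
        # A control only modifies risk if it exists and works as designed.
        return self.in_place and self.operating_as_intended

@dataclass
class Risk:
    name: str
    controls: list = field(default_factory=list)  # shared Control objects: many-to-many

    def unmitigated(self) -> bool:
        # A gap in controls: nothing effective stands between us and the risk.
        return not any(c.effective() for c in self.controls)

# One control can affect more than one risk (illustrative names):
double_check = Control("second-person verification", True, True, "likelihood")
wrong_dose = Risk("wrong dose", [double_check])
wrong_patient = Risk("wrong patient", [double_check])
assert not wrong_dose.unmitigated() and not wrong_patient.unmitigated()
```

A model like this also makes common cause failure visible: if `double_check` fails, every risk sharing it loses its mitigation at once.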
Risk Important Activities, Critical Steps and Process
Critical steps are the way we meet our critical-to-quality requirements; they are the activities that ensure our product or service meets the needs of the organization.
These critical steps are points of no return, where the work-product is transformed into something else. Risk important activities are what we do to remove the danger of executing that critical step.
Beyond that critical step lies rejection or rework. When I am cooking, there is a lot of prep work, which can include a mixture of critical steps from which there is no return. If I break the egg wrong and get eggshells in my batter, some degree of rework is necessary. The same is true for all our processes.
The risk-based approach to the process is to understand the critical steps and the controls that mitigate them.
We are thinking through the following:
Critical Step: The action that triggers irreversibility. Think in terms of critical-to-quality attributes.
Output: The desired result (positive) or the possible difficulty (negative)
Preconditions: Technical conditions that must exist before the critical step
Resources: What is needed for the critical step to be completed
Local factors: Things that could influence the critical step. When human beings are involved, this is usually what can influence the performer’s thinking and actions before and during the critical step
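The five elements above can be captured as a simple worksheet record, one per critical step. This is a hypothetical structure I am sketching to show the shape of the analysis, using the cooking example from earlier; it is not a mandated format.

```python
from dataclasses import dataclass, field

@dataclass
class CriticalStepAnalysis:
    """One record per critical step, mirroring the elements above."""
    critical_step: str                          # the action that triggers irreversibility
    output: str                                 # desired result or possible difficulty
    preconditions: list = field(default_factory=list)  # technical conditions beforehand
    resources: list = field(default_factory=list)      # what the step needs to complete
    local_factors: list = field(default_factory=list)  # influences on the performer

    def ready(self) -> bool:
        # A crude gate: don't execute until preconditions and resources are identified.
        return bool(self.preconditions) and bool(self.resources)

baking = CriticalStepAnalysis(
    critical_step="crack the egg into the batter",
    output="egg incorporated with no shell fragments",
    preconditions=["bowl staged", "clean work surface"],
    resources=["fresh egg", "separate cup for cracking first"],
    local_factors=["rushing", "distraction"],
)
assert baking.ready()
```

Cracking into a separate cup first is itself a risk important activity: it moves the point of no return earlier, to a place where rework is cheap.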
Good risk management requires a mindset that includes the following attributes:
Expect to be surprised: Our processes are usually underspecified and there is a lot of hidden knowledge. Risk management serves to interrogate the unknowns.
Possess a chronic sense of unease: There is no such thing as perfect processes, procedures, training, design, planning. Past performance is not a guarantee of future success.
Bend, not break: Everything is dynamic, especially risk. Quality comes from adaptability.
One cannot control risk, or even successfully identify it, unless a system is able to flexibly monitor both its own performance (what happens inside the system's boundary) and what happens in the environment (outside the system's boundary). Monitoring improves the ability to cope with possible risks.
When performing the risk assessment, challenge existing monitoring and ensure that the right indicators are in place. But remember, monitoring itself is a low-effectiveness control.
Ensure that there are leading indicators, which can be used as valid precursors for changes and events that are about to happen.
For each monitoring control, ask yourself the following:
Definition
How have the indicators been defined? (By analysis, by tradition, by industry consensus, by the regulator, by international standards, etc.)
Relevance
When was the list created? How often is it revised? On which basis is it revised? Who is responsible for maintaining the list?
Type
How many of the indicators are of the 'leading' type and how many are 'lagging'? Do indicators refer to single or aggregated measurements?
Validity
How is the validity of an indicator established (regardless of whether it is leading or lagging)? Do indicators refer to an articulated process model, or just to ‘common sense’?
Delay
For lagging indicators, how long is the typical lag? Is it acceptable?
Measurement type
What is the nature of the measurements? Qualitative or quantitative? (If quantitative, what kind of scaling is used?)
Measurement frequency
How often are the measurements made? (Continuously, regularly, every now and then?)
Analysis
What is the delay between measurement and analysis/interpretation? How many of the measurements are directly meaningful and how many require analysis of some kind? How are the results communicated and used?
Stability
Are the measured effects transient or permanent?
Organization Support
Is there a regular inspection scheme or schedule? Is it properly resourced? Where does this measurement fit into the management review?
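The "Type" question above lends itself to a quick tally during a monitoring review: how leading-heavy or lagging-heavy is the indicator set? A minimal sketch, with invented indicator names for illustration:

```python
# Hypothetical indicator inventory for a monitoring review; names are illustrative.
indicators = [
    {"name": "overdue CAPA actions",   "type": "lagging"},
    {"name": "near-miss reports",      "type": "leading"},
    {"name": "batch rejection rate",   "type": "lagging"},
    {"name": "training currency",      "type": "leading"},
]

leading = sum(1 for i in indicators if i["type"] == "leading")
lagging = sum(1 for i in indicators if i["type"] == "lagging")
assert (leading, lagging) == (2, 2)

# A review might flag a dashboard dominated by lagging indicators,
# since those only tell you about changes after the fact.
needs_more_leading = leading < lagging
assert not needs_more_leading
```

The same inventory could carry fields for validity basis, measurement frequency, and lag time to answer the rest of the questions systematically.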
The FDA has released the 2021 483 data. With my mind being mostly preoccupied with bioresearch monitoring inspection preparation, let’s look at that data, focusing on the top 10.
| CFR Reference | # 483s 2021 | # 483s 2020 | # 483s 2019 |
|---|---|---|---|
| **21 CFR 312.60** | 90 | 58 | 127 |
| FD-1572, protocol compliance | 84 | 54 | 119 |
| Informed consent | 6 | 4 | 8 |
| **21 CFR 312.62(b)** | 48 | 30 | 60 |
| Case history records inadequate | 48 | 30 | 60 |
| **21 CFR 312.62(a)** | 13 | 11 | 17 |
| Accountability records | 12 | 11 | 16 |
| Unused drug disposition (investigator) | 1 | N/A | 1 |
| **21 CFR 50.27(a)** | 9 | 3 | 7 |
| Consent form not approved/signed/dated | 7 | 2 | 6 |
| Copy of consent form not provided | 2 | 1 | 1 |
| **21 CFR 312.64(b)** | 9 | 6 | 7 |
| Safety reports | 9 | 6 | 7 |
| **21 CFR 312.66** | 8 | 7 | 19 |
| Initial and continuing review | 6 | 2 | 6 |
| Unanticipated problems | 2 | 4 | 6 |
| **21 CFR 312.20(a)** | 5 | 1 | 3 |
| Failure to submit an IND | 5 | 1 | 3 |
| **21 CFR 58.130(a)** | 4 | 2 | 3 |
| Conduct: in accordance with protocol | 4 | 2 | 3 |
| **21 CFR 312.50** | 3 | 7 | 16 |
| General responsibilities of sponsors | 3 | 4 | 14 |
| **21 CFR 50.20** | 3 | 5 | 8 |
| Consent not obtained, exceptions do not apply | 3 | 1 | 4 |
Comparison of 2021 Top 10 BIMO 483 categories with 2020 and 2019 data
Based on a comparison of the number of inspections per year, I am not sure we can really say there was much COVID impact in the data. COVID may have influenced observations, but all it really seemed to do is exacerbate already existing problems.
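The year-over-year comparison can be reproduced with a quick tally of the top-level CFR rows from the table above. The numbers are transcribed from this post's table; treat them as the post's data, not an official FDA aggregate.

```python
# Top-level CFR rows from the table: (2021, 2020, 2019) 483 observation counts.
counts = {
    "312.60":    (90, 58, 127),
    "312.62(b)": (48, 30, 60),
    "312.62(a)": (13, 11, 17),
    "50.27(a)":  (9, 3, 7),
    "312.64(b)": (9, 6, 7),
    "312.66":    (8, 7, 19),
    "312.20(a)": (5, 1, 3),
    "58.130(a)": (4, 2, 3),
    "312.50":    (3, 7, 16),
    "50.20":     (3, 5, 8),
}

# Sum each year's column across the top-10 categories.
totals = [sum(row[i] for row in counts.values()) for i in range(3)]
assert totals == [192, 130, 267]  # 2021 rebounded from 2020 but stayed below 2019
```

The pattern across all three years is the same categories in roughly the same order, which is what suggests persistent problems rather than a COVID-driven shift.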
Key lesson in the data? GCP sites are struggling with accountability in documentation and decision making.
Quality professionals are often defined by our technical knowledge, and with that can come a genuine and intense love of and interest in the work. In the pharmaceutical/med-device world I work in, this is defined by both a knowledge of the science and of the regulations (and that stuff in between: regulatory science).
The challenge is that as we progress, we start defining ourselves by our role as representing the highest level of technical expertise, which means senior Quality (as in the department) jobs are defined in terms of service to our function: patient safety and product quality (safety, efficacy, and quality). This can lead to seeing people as the "means" to that end, and inevitably to prioritizing that outcome over people.
Do not get me wrong, results matter, and I am a firm proponent of product quality and patient safety. But this approach is reductionist and does not serve to drive fear out of the organization. How can people be safe if they are considered a means to produce value? We need to shift so that we realize we can only get to quality by focusing on our people.
A month back on LinkedIn I complained about a professional society pushing the idea of a document-free quality management system. This has got to be one of my favorite pet peeves that come from Industry 4.0 proponents, and it demonstrates a fundamental failure to understand core concepts. And frankly one of the reasons why many Industry/Quality/Pharma 4.0 initiatives truly fail to deliver. Unfortunately, I didn’t follow through with my idea of proposing a session to that conference, so instead here are my thoughts.
Fundamentally, documents are the lifeblood of an organization. But paper is not. This is where folks get confused. But fundamentally, this confusion is also limiting us.
Let’s go back to basics, which I covered in my 2018 post on document management.
When talking about documents, we really should talk about function and not just by name or type. This allows us to think more broadly about our documents and how they function as the lifeblood.
There are three types of documents:
Functional documents provide instructions so people can perform tasks and make decisions safely, effectively, compliantly, and consistently. This usually includes things like procedures, process instructions, protocols, methods, and specifications. Many of these need some sort of training component. Functional documents should involve a process to ensure they are up-to-date, especially in relation to current practices and relevant standards (periodic review).
Records provide evidence that actions were taken, and decisions were made in keeping with procedures. This includes batch manufacturing records, logbooks and laboratory data sheets and notebooks. Records are a popular target for electronic alternatives.
Reports provide specific information on a particular topic in a formal, standardized way. Reports may include data summaries, findings, and actions to be taken.
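The three functions above can be sketched as a small model, including the periodic-review obligation that applies only to functional documents. This is a hypothetical illustration; the two-year review cycle is an assumed example, not a requirement from any standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class DocFunction(Enum):
    FUNCTIONAL = "functional"   # instructions: procedures, methods, specifications
    RECORD = "record"           # evidence: batch records, logbooks, data sheets
    REPORT = "report"           # formal summaries: findings, actions to be taken

@dataclass
class Document:
    title: str
    function: DocFunction
    last_reviewed: date

    def review_due(self, today: date, cycle_days: int = 730) -> bool:
        # Periodic review applies to functional documents (two-year cycle
        # assumed here for illustration); records and reports are evidence
        # of what happened and are not revised on a cycle.
        if self.function is not DocFunction.FUNCTIONAL:
            return False
        return today - self.last_reviewed > timedelta(days=cycle_days)

sop = Document("Deviation handling SOP", DocFunction.FUNCTIONAL, date(2019, 6, 1))
assert sop.review_due(date(2022, 1, 1))      # more than two years since review

log = Document("Equipment logbook", DocFunction.RECORD, date(2019, 6, 1))
assert not log.review_due(date(2022, 1, 1))  # records don't get periodic review
```

Classifying by function rather than by name is the point: whether a "work instruction" or a "method," what matters is that it instructs, and therefore needs review.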
The beating heart of our quality system brings us from functional to record to reports in a cycle of continuous improvement.
Functional documents are how we realize requirements, that is, the needs and expectations of our organization. There are multiple ways to serve up functional documents, the big three being paper, paper-on-glass, and some sort of execution system. That last, an execution system, unites function with record, which is a big chunk of its promise.
The maturity path is to go from mostly paper execution, to paper-on-glass, to end-to-end integration and execution, driving up reliability and driving out error. But at the heart, we still have functional documents, records, and reports. The paper goes, but the document remains.
So how is this failing us?
Any process is a way to realize a set of requirements. Those requirements come from external (regulations, standards, etc.) and internal (efficiency, business needs) sources. We then meet those requirements through People, Procedure, Principles, and Technology. These are interlinked and strive to deliver efficiency, effectiveness, and excellence.
So this failure to understand documents means we think we can solve everything through a single technology application: an eQMS will solve problems in quality events, a LIMS for the lab, an MES for manufacturing. Each of these is a lever for change but alone cannot drive the results we want.
Because of the limitations of this thought process we get systems designed for yesterday’s problems, instead of thinking through towards tomorrow.
We get documentation systems that treat functional documents pretty much the same way we thought of them 30 years ago: as discrete things. These discrete things then interact, across a gap, with our electronic systems. There is little traceability, which complicates change control and makes it difficult to train experts. The funny thing is, we have the pieces, but because of the limitations of our technology we aren't leveraging them.
The v-model approach should be leveraged in a risk-based manner to the design of our full system, and not just our technical aspects.
System feasibility matches policy and governance; user requirements allow us to trace which elements are people, procedure, principles, and/or technology. Everything then stems from there.
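That traceability idea can be sketched as a simple mapping from user requirements to the element(s) that realize them; an untraced requirement is exactly the kind of gap a v-model review should surface. The requirement IDs and titles below are invented for illustration.

```python
# Hypothetical traceability sketch: each user requirement maps to the
# element(s) that realize it. Requirement names are illustrative.

ELEMENTS = {"people", "procedure", "principles", "technology"}

trace = {
    "UR-001 record review signatures": {"technology", "procedure"},
    "UR-002 analyst qualification":    {"people", "procedure"},
    "UR-003 data integrity policy":    {"principles"},
    "UR-004 audit trail review":       set(),   # not yet traced: a design gap
}

# Every trace must point at recognized elements...
assert all(elems <= ELEMENTS for elems in trace.values())

# ...and the review surfaces requirements nothing realizes yet.
gaps = [ur for ur, elems in trace.items() if not elems]
assert gaps == ["UR-004 audit trail review"]
```

Because one requirement can map to several elements, this also shows why a single technology application can never close every gap on its own.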