Documents and the Heart of the Quality System

A month back on LinkedIn I complained about a professional society pushing the idea of a document-free quality management system. This has got to be one of my favorite pet peeves coming from Industry 4.0 proponents, and it demonstrates a fundamental failure to understand core concepts. Frankly, it is one of the reasons why many Industry/Quality/Pharma 4.0 initiatives fail to deliver. Unfortunately, I didn’t follow through on my idea of proposing a session to that conference, so instead here are my thoughts.

Fundamentally, documents are the lifeblood of an organization, but paper is not. This is where folks get confused, and this confusion is also limiting us.

Let’s go back to basics, which I covered in my 2018 post on document management.

When talking about documents, we really should talk about function, not just name or type. This allows us to think more broadly about our documents and how they function as the lifeblood.

There are three types of documents:

  • Functional Documents provide instructions so people can perform tasks and make decisions safely, effectively, compliantly, and consistently. This usually includes things like procedures, process instructions, protocols, methods, and specifications. Many of these need some sort of training decision. Functional documents should involve a process to ensure they are up-to-date, especially in relation to current practices and relevant standards (periodic review).
  • Records provide evidence that actions were taken and decisions were made in keeping with procedures. This includes batch manufacturing records, logbooks, and laboratory data sheets and notebooks. Records are a popular target for electronic alternatives.
  • Reports provide specific information on a particular topic in a formal, standardized way. Reports may include data summaries, findings, and actions to be taken.

The beating heart of our quality system takes us from functional documents to records to reports in a cycle of continuous improvement.

Functional documents are how we realize requirements, that is, the needs and expectations of our organization. There are multiple ways to serve up functional documents, the big three being paper, paper-on-glass, and some sort of execution system. That last option unites function with record, which is a big chunk of the promise of an execution system.

The maturation mindset is to go from mostly paper execution, to paper-on-glass, to end-to-end integration and execution, in order to drive up reliability and drive out error. But at the heart we still have functional documents, records, and reports. Paper goes away, but the document remains.

So how is this failing us?

Any process is a way to realize a set of requirements. Those requirements come from external (regulations, standards, etc.) and internal (efficiency, business needs) sources. We then meet those requirements through People, Procedure, Principles, and Technology. These are interlinked and strive to deliver efficiency, effectiveness, and excellence.

This failure to understand documents means we think we can solve everything through a single technology application: an eQMS will solve problems in quality events, a LIMS for the lab, an MES for manufacturing. Each of these is a lever for change, but alone none can drive the results we want.

Because of the limitations of this thought process we get systems designed for yesterday’s problems instead of built for tomorrow’s.

We get documentation systems that think of functional documents pretty much the same way we thought of them 30 years ago, as discrete things. These discrete things then interact with our electronic systems across a gap. There is little traceability, which complicates change control and makes it difficult to train experts. The funny thing is, we have the pieces, but because of the limitations of our technology we aren’t leveraging them.

The V-model approach should be applied in a risk-based manner to the design of our full system, not just its technical aspects.

System feasibility maps to policy and governance; user requirements allow us to trace which elements are people, procedure, principles, and/or technology. Everything then stems from there.

Phase Appropriate GMPs

Throughout the regulations and guidances you will find something like this: “As with other aspects of the development program, documentation may be ‘less rigorous’ in early phases, but ‘they would still need to be adequate in order to allow for traceability of the manufacturing process.'”

Agencies like the FDA have consistently stated that phase 1 is less rigorous but that starting in phase 2 you are fully GMP. These regulations are meant to ensure basic safety and documentation standards are met in the manufacture and testing of phase 1 clinical trial material and to encourage the design of quality into the process. It is expected that enhanced process controls and GMP standards will be employed as the material transitions into later clinical stages.

With the speed of development, and the fact that early-phase material can support commercialization, this phased-in approach is an important balancing act in advanced therapeutics like cell and gene therapy. It is crucial that manufacturers of phase 1 clinical trial material assess potential risks associated with their manufacturing process, facilities, equipment, methods, materials, etc., and the associated impact of these risks on the safety and quality of the material. All significant risks should then be mitigated, and appropriate controls implemented, to reduce potential adverse impact on the patients and the data generated.

We need to recognize the difference between the elements of a strong quality system and what is needed for GMPs. Folks often confuse the two and have difficulty maturing quickly. The stuff in the orange? That’s system, and it is not GMP-dependent.

Some GMP elements, such as clean room controls or starting material controls, should be robust from the beginning. Others, such as cleaning validation, are developed as you move through the phases.

Root Cause Analysis Deficiencies

An appropriate level of root cause analysis should be applied during the investigation of deviations, suspected product defects and other problems. This can be determined using Quality Risk Management principles. In cases where the true root cause(s) of the issue cannot be determined, consideration should be given to identifying the most likely root cause(s) and to addressing those. Where human error is suspected or identified as the cause, this should be justified having taken care to ensure that process, procedural or system based errors or problems have not been overlooked, if present.

Appropriate corrective actions and/or preventative actions (CAPAs) should be identified and taken in response to investigations. The effectiveness of such actions should be monitored and assessed, in line with Quality Risk Management principles.

EU Guidelines for Good Manufacturing Practice for Medicinal Products for Human and Veterinary Use, Chapter 1 Pharmaceutical Quality System, 1.4(xiv)

The MHRA cited 210 companies in 2019 for failure to conduct good root cause analysis and develop appropriate CAPAs. Six of those citations were critical and 100 were major.

My guess is that if I had asked those 210 companies in 2018 how their root cause analysis and CAPAs were doing, 85% would have said “great!” We tend to overestimate our capabilities on the fundamentals (which root cause analysis and CAPA are) and fail to continuously invest in improvement.

Of course, without good benchmarking, it’s really easy to say “good enough” and not be. There can be a tendency to say “Well, we’ve never had a problem here, so we’re good,” when in reality the problem has just never been seen in an inspection or has never gone critical.

The FDA has fairly similar observations around root cause analysis, as does anyone who shares their metrics in any way. Bad root cause analysis and bad CAPAs are pretty widespread.

This comes up a lot because the quality and quantity of CAPAs are considered key indicators of an organization’s health. CAPAs demonstrate that issues are acknowledged, tracked, and remediated in an effective manner to eliminate or reduce the risk of recurrence. The timeliness and robustness of these processes and records indicate whether an organization demonstrates effective planning and has sufficient resources to manage, resolve, and correct past issues and prevent future ones.

A good CAPA system covers problem identification (which can be, and usually is, a few different processes), root cause analysis, corrective and preventive actions, CAPA effectiveness, metrics, and governance. It is a house of cards: come up short on one element and the whole structure will fall down around you, often when you least need it to.

We can’t freeze our systems with superglue. If we are not continually improving then we are going backwards. No steady state when it comes to quality.

Quality Management as a Program

Quality System Management should be viewed and governed as a program.

Program management is commonly defined as “a group of projects that contribute to a common, higher order objective.” The projects in a program are related, and the intended benefits would not be realized if the projects were managed independently.

Program management includes the practices and processes of strategic alignment, benefits management, stakeholder management, governance, and lifecycle management. Program governance creates the control framework for delivering the program’s change objectives and for making benefit delivery visible to the organization.

There are different styles of program management, and what I am focusing on here is what is sometimes called “heartbeat” program management, which aims to achieve evolutionary improvement of existing systems and processes or organizational change. This program type creates value by reconciling contradictory views and demands for change from various organizational actors in order to enhance existing systems and practices while sustaining operations.

Heartbeat program management is all about awareness of the contexts of the program and requires knowledge of strategy, competition, trends in the industry, and differences in management practices between the business units of the company. A good heartbeat program manager is highly concerned about their program’s long-term effects and implications for the company’s business.

Magic triangle of a program manager

Programs exist to create value by improving the management of projects and to create benefits through better organization of projects. The fundamental goals of program management are:

  • Efficiency and effectiveness goals: Aspects of management that a proficient project manager should address and that benefit from coordination.
  • Business focus goals: The external alignment of projects with the requirements, goals, drivers, and culture of the wider organization. These goals are associated with defining an appropriate direction for the constituent projects within a program as well as for the program as a whole.
Efficiency and effectiveness goals:

  • Improved co-ordination: Assist in the identification and definition of project inter-dependencies, thereby reducing the incidence of work backlogs, rework, and delays.
  • Improved dependency management: Reduce the amount of re-engineering required due to inadequate management of the interfaces between projects.
  • More effective resource utilization: Improve the effectiveness and efficiency of the allocation of shared resources; assist in providing justification for specialist resources that deliver an overall improvement to program delivery and/or business operations.
  • More effective knowledge transfer: Provide a means to identify and improve upon transferable lessons; facilitate organizational learning.
  • Greater senior management ‘visibility’: Enable senior management to better monitor, direct, and control the implementation process.

Business focus goals:

  • More coherent communication: Improve communication of overall goals and direction both internally and externally to the program; target management attention clearly on the realization of benefits that are defined and understood at the outset and achieved through the lifetime of the program and beyond; assist in keeping personal agendas in check.
  • Improved project definition: Ensure that project definition is more systematic and objective, thereby reducing the prevalence of projects with a high risk of failure or obsolescence; enable the unbundling of activities in a strategic project-set into specific projects; enable the bundling of related projects together to create greater leverage or achieve economies of scale.
  • Better alignment with business drivers, goals, and strategy: Improve the linkage between the strategic direction of organizations and the management activities required to achieve these strategic objectives; provide an enabling framework for the realization of strategic change and the ongoing alignment of strategy and projects in response to a changing business environment (via project addition/culling, etc.).

The Attributes of a Good Heartbeat Program Manager are the Attributes of a Good Quality Leader

As quality leaders we are often ambassadors, ensuring that the quality program keeps progressing despite the conflicting requirements of the various stakeholders. We need to actively influence the quality-related decisions of all stakeholders, including people holding superior positions. Having a well-developed personal network within the organization is particularly helpful.

It is critical to always be communicating about the quality program in a visionary way, to be seen as passionate ambassadors. Playing this role requires constant attention to the differing expectations of stakeholders and to the various ways to influence them for the benefit of the quality system. Always be striving to build quality, to advance quality.

As advocates for quality, a core competency is the ability to stand up and defend, or argue for, the quality program and its team members. Being able to challenge others, including our superiors, in a productive way is critical.

A key focus of the quality program should be engagement: a conscious and sustained drive to secure buy-in from key stakeholders (including senior management) and to win over the hearts and minds of those responsible for execution, so that changes feel less painful and less inflicted. As quality leaders our aim should always be to engender a climate of comprehension, inclusion, and trust, and to draw upon expertise globally to create fit-for-purpose processes and systems.

Effective quality leaders need to be “heavyweight” organizational players.

Core Competencies of the Heartbeat Manager

  • Contextual awareness
  • Scenario planning
  • Political skills
  • Courage
  • Networking

A note on program life

Many standard approaches perceive programs as having a finite life. This is constraining given that the strategies themselves, especially as applied to quality, have long lifetimes. I believe that program management has much to learn from quality management, and there is a lot of value in seeing an indefinite time horizon as beneficial.

Quality management is an evolutionary approach, and the use of program management methodologies within it should be approached in the same light.

Layering metrics

We have these quality systems with lots of levers and interrelated components. And yet we select one or two metrics and realize that even if we meet them, we aren’t really measuring the right stuff, nor are we driving continuous improvement.

One solution is to create layered metrics, which basically means drilling down into your process and identifying the metrics at each step.

Lots of ways to do this. An easy way to start is to use the 5-why process, a tool most folks are comfortable with.

So, for example, CAPA. It is pretty much agreed that CAPAs should be completed in a timely manner, which makes timely closure a top-level goal. Unfortunately, in this hypothetical example, we are falling short of a 100% closure rate (or whatever level is appropriate in your organization based on maturity).

Why 1: Why was CAPA closure not 100%?
Because CAPA tasks were not closed on time.
Success factor needed for this step: CAPA tasks to be closed by due date.
Metric for this step: CAPA closure task success rate.

Why 2: Why were CAPA tasks not closed on time?
Because individuals did not have appropriate time to complete CAPA tasks.
Metric for this step: Planned versus actual time commitment.

Why 3: Why did individuals not have appropriate time to complete CAPA tasks?
Because CAPA task due dates are guessed at.
Metric for this step: CAPA task adherence to target dates based on activity (e.g. if it takes 14 days to revise a document and another 14 days to train, the average document revision task should be 28 days).

Why 4: Why are CAPA task due dates guessed at?
Because appropriate project planning is not completed.
Metric for this step: Adherence to process confirmation.

Why 5: Why is appropriate project planning not completed?
Because CAPAs are always determined on the last day the deviation is due.
Metric for this step: Adherence to the root cause analysis process.
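To make the layering concrete, here is a minimal sketch in Python of how a couple of these metrics could be computed from exported CAPA task records. The CapaTask fields, the activity names, and the 14-day durations are hypothetical stand-ins for whatever your eQMS actually tracks; the point is simply that each why in the drill-down maps to something you can calculate and trend.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical activity durations used to sanity-check target dates (the Why 3 metric),
# e.g. revising a document (14 days) plus training on it (14 days) = 28 days.
ACTIVITY_DAYS = {"document_revision": 14, "training": 14}

@dataclass
class CapaTask:
    """A single CAPA task as it might be exported from an eQMS (fields are illustrative)."""
    capa_id: str
    activities: list[str]   # e.g. ["document_revision", "training"]
    opened: date
    due: date
    closed: date | None     # None while the task is still open

def closure_task_success_rate(tasks: list[CapaTask]) -> float:
    """Why 1 metric: fraction of tasks closed on or before their due date."""
    on_time = sum(1 for t in tasks if t.closed is not None and t.closed <= t.due)
    return on_time / len(tasks)

def target_date_adherence(tasks: list[CapaTask]) -> float:
    """Why 3 metric: fraction of tasks whose due date allows at least the
    expected duration of the activities it contains (i.e. not guessed at)."""
    adequate = sum(
        1 for t in tasks
        if (t.due - t.opened).days >= sum(ACTIVITY_DAYS[a] for a in t.activities)
    )
    return adequate / len(tasks)

# Toy data to show the shape of the output.
tasks = [
    CapaTask("CAPA-001", ["document_revision", "training"],
             date(2024, 1, 2), date(2024, 2, 1), date(2024, 1, 30)),
    CapaTask("CAPA-002", ["document_revision"],
             date(2024, 1, 10), date(2024, 1, 15), None),
]
print(f"CAPA task closure success rate: {closure_task_success_rate(tasks):.0%}")
print(f"Target date adherence:          {target_date_adherence(tasks):.0%}")
```

In this toy data CAPA-001 has a realistic window and closes on time while CAPA-002 has a guessed-at due date and is still open, so both metrics come out at 50%; in practice you would trend each layer per period and per process owner rather than looking at a single snapshot.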

I might report on the top-level CAPA closure rate and one or two of these, and keep the others in my process owner toolkit. Maybe we jump right to the last one as what we report on. It depends on what needs to be influenced in my organization, and it will change over time.

It helps to compare this output against the 12 system leverage points.

Donella Meadows 12 System Leverage Points

These metrics run from leverage point 3, “goals of the system” (completing CAPA tasks effectively and on time), to point 4, “self-organize,” and point 5, “rules of the system.” They also have nice feedback loops based on the process confirmations. I’d view them as potentially pretty successful. Of course, we would test these, tinker, and basically experiment until we find the right set of metrics that improves our top-level goal.