29 questions to ask about your change management/change control system

While these questions are very pharma/biotech specific in places, they should serve as a thought-starter for a checkup of your own system.

  1. Is there a written SOP covering the change control program that has been approved by the Quality Unit?
  2. Do procedures in place describe the actions to be taken if a change is proposed to a starting material, product component, process equipment, process environment (or site), method of production or testing, or any other change that may affect product quality or the reproducibility/robustness of the process?
  3. Does the SOP ensure that all GMP changes are reviewed and approved by the Quality Unit?
  4. If changes are classified as “major” or “minor,” do procedures clearly define the differences?
  5. Does your change management system include criteria for determining if changes are justified?
  6. Are proposed changes evaluated by expert teams (e.g. HSE, Regulatory, Quality…)?
  7. Is there a process for cancelling a change request prior to implementation? Is a rationale for cancellation included?
  8. Does your site change control procedure clearly describe the process for closing a change request (after all regulatory approvals…)?
  9. Are any delays explained and documented?
  10. Is there a written requirement that change controls implemented during normal or routine maintenance activities be documented in the formal change control program?
  11. Is your change management system linked to other quality systems such as CAPA, validation, training?
  12. Does your change management system include criteria for determining if changes will require qualification/requalification, validation/revalidation and stability studies?
  13. Are “like for like” changes (changes where there is a direct replacement of a component with another that is exactly the same) clearly defined in all aspects (including material of construction, dimensions, functionality…)? Are they adequately documented and commissioned to provide traceability and history?
  14. Is there an allowance for emergency and temporary changes under described conditions in the procedures?
  15. Are the proposed changes evaluated relative to the marketing authorization and/or current product and process understanding?
  16. Does your change management system include criteria to evaluate whether changes affect a regulatory filing?
  17. Are appropriate regulatory experts involved? Does the regulatory affairs function evaluate and approve all changes that impact regulatory files?
  18. Are changes submitted/implemented in accordance with the regulatory requirements?
  19. Is there a defined system for the formalization, roles, and responsibilities for change control follow-up?
  20. Is the effective date of the change (completion date) recorded and, when appropriate, the first batch manufactured?
  21. Is there a periodic check of the implementation of change controls?
  22. Following the implementation, is there an evaluation of the change undertaken to confirm the change objectives were achieved and that there was no adverse impact on product quality?
  23. Is all documentation that provides evidence of change, and documentation of requirements, controlled and retained according to procedure?
  24. When necessary, are personnel trained before the implementation of the change?
  25. Are change controls defined with adequate target dates?
  26. If a change control goes beyond its target date, is a new date assigned, evaluated, and documented by Quality Assurance?
  27. Are there routine evaluations of change controls and their trends (number open, closure rates, trends as defined)?
  28. Are changes closed by their due date?
  29. Are change controls and their follow-up formalized in a report and/or periodic meetings?

Questions like these offer a good way to periodically check your system's performance and ensure you are moving in the right direction.
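As an illustration, the checklist above can be run as a simple scored self-assessment. This is a minimal sketch, assuming a hypothetical yes/partial/no scoring scheme; the abbreviated question texts and the weights are illustrative, not part of any standard.

```python
# A minimal sketch of a change-control self-assessment, assuming a simple
# yes/partial/no scoring scheme (hypothetical -- adapt weights to your QMS).

QUESTIONS = [
    "Is there an approved SOP covering the change control program?",
    "Are major vs. minor changes clearly defined?",
    "Is the system linked to CAPA, validation, and training?",
]

SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def assessment_score(answers):
    """Return the percentage of full marks across all answered questions."""
    if not answers:
        return 0.0
    total = sum(SCORES[a] for a in answers.values())
    return round(100 * total / len(answers), 1)

answers = dict(zip(QUESTIONS, ["yes", "partial", "no"]))
print(assessment_score(answers))  # 50.0
```

Tracking this score across periodic reviews gives you a crude but honest trend line for question 27.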

Data Integrity Thoughts

At the MHRA Blog, a GDP Inspector has posted some thoughts on Data Integrity. As always, it is valuable to read what an agency, or in this case a representative of an agency, is thinking.

The post starts with a very good point that I think needs to be continually reiterated: data integrity is not new; it is an evolution of existing best practices.

Data Integrity

It is good to see a focus on data integrity from this perspective. Too often we see a focus on the GCP and GMP side, so bringing distribution into the discussion should remind everyone that:

  • Data Integrity oversight and governance is inclusive of:
    • All aspects of the product lifecycle
    • All aspects of the GxP regulated data lifecycle, which begins at the time of creation to the point of use and extends throughout its storage (retention), archival, retrieval, and eventual disposal.
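The lifecycle quoted above can be treated as a small data structure for gap analysis. A minimal sketch, assuming illustrative stage names drawn from the quote and a hypothetical `governance` mapping of stage to documented control:

```python
# A sketch of a data-lifecycle gap check. Stage names follow the lifecycle
# described above; the governance mapping is hypothetical.
from enum import Enum

class Stage(Enum):
    CREATION = 1
    USE = 2
    STORAGE = 3
    ARCHIVAL = 4
    RETRIEVAL = 5
    DISPOSAL = 6

def ungoverned_stages(governance):
    """Return lifecycle stages with no documented control in place."""
    return [s for s in Stage if not governance.get(s)]

controls = {Stage.CREATION: "audit trail", Stage.STORAGE: "access control"}
print([s.name for s in ungoverned_stages(controls)])
# ['USE', 'ARCHIVAL', 'RETRIEVAL', 'DISPOSAL']
```

The point of the exercise is simply that every stage, not just creation and storage, needs a named control.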

Posts like this should also remind folks that data integrity is still an evolving topic, and we should expect more guidance from the agencies on it in the near future. Make sure you are keeping data integrity front of mind at your sites and have a process in place to evaluate and improve.

I recommend starting at the beginning: analyze the health of your current program and do a SWOT analysis.

[Image: data integrity SWOT]

Value of the ASQ

If I were to ask a hundred of my peers “How did you get into quality?”, I would probably hear 100 different stories (with, of course, some commonalities). And yet quality is definitely a distinct body of expertise and practice.

When I try to describe my job, I often find myself breaking down what I do into categories (I’m a project manager, a trainer, a problem solver, a risk manager, a facilitator, a puzzle solver, a detective, etc.). Some of these are professional paths in their own right; others not so much.

It is for this reason that I am a huge fan of the ASQ’s Quality Body of Knowledge, as it does a good job of uniting what we do. Sure, it’s not perfect, but it is an excellent framework for building an understanding of just what a quality professional can bring to the table, as well as a great development path.

One of the many things I love about this is the ability to learn from folks no matter what their industry. This cross-pollination is vital to innovation. And having the QBOK there gives a framework for common discussions.

Layered on top of the QBOK is a technical-knowledge bolt-on. For example, in my case: pharmaceuticals (strong) and medical devices (average).

I believe the ASQ certification board gets it wrong by calling these specific technical certifications “Leadership.” There is nothing leadership-centric about earning the CPGP, for example.

I think we would do better to break these certifications into QBOK core (e.g., Quality Improvement Associate, Quality Process Analyst, Manager of Quality), specific skills (e.g., Six Sigma, HACCP, Quality Auditor, reliability and calibration), and industry-specific (e.g., CPGP, Biomedical Auditor).

As the ASQ goes through its current transformation, I hope the leadership and members remember the strength of the QBOK, work to enshrine it in everything the organization does, and continue to refine it. This is the value of my ASQ membership.

Training assessment as part of change management

One of the key parts of any change (process improvement, project, etc.) is preparing people to actually do the work effectively. Every change needs training.

Building valid and reliable training at the right level for the change is critical. Training is valid when it is tied to the requirements of the job – the objectives – and when it includes evaluations that are linked to the skills and knowledge stated in the objectives. Reliability means that the training clearly differentiates between those who can perform the task and those who cannot.

A lot of changes default to read-and-understand training. Quite bluntly, this is the bane of valid and reliable training, with roughly zero value; it would be removed from our toolkit if I had my way.

There are a lot of training models, but I hold there is no single or best method. The most effective and efficient combination of methods should be chosen depending on the training material to be covered and the specific needs of the target group.

For my purposes I’ll draw from Edgar Dale’s Cone of Experience, which incorporates several theories related to instructional design and learning processes. Dale theorized that how a learner retains information is based on what they “do” as opposed to what is “heard,” “read,” or “observed.” This is often called experiential or action learning.

[Image: Edgar Dale’s Cone of Experience]

Based on this understanding we can break the training types down. For example:

  • Structured discussions are Verbal with some Visual elements, and live within the Abstract
  • Computer-based trainings are mostly Iconic, with a few Concrete elements
  • Instructor-led trainings lean heavily on the Concrete
  • On-the-job training is all about the Concrete

Once we have agreed upon our training methods and understand what makes each of them good training, we can then determine which attributes of a change lead to the best training outcome. Some example criteria include:

  • Is a change in knowledge or skills needed to execute the procedure?
  • Is the process or change complex? Are there multiple changes?
  • How critical is the process, and what is the risk of a performance error? How difficult are errors to detect?
  • What is the identified audience (e.g., location, size, department, single site vs. multiple sites)?
  • Is the goal to change workers’ conditioned behavior?

This sort of questioning gets us to risk-based thinking. We are determining where our training will deliver the biggest impact.
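These criteria lend themselves to a simple risk-based rule of thumb: the more of them that apply, the more experiential (Concrete, in Dale's terms) the training should be. A hedged sketch, with illustrative criteria wording and purely illustrative thresholds:

```python
# A sketch of risk-based training selection: count how many change criteria
# apply and map that to a training method. Thresholds are illustrative only.

CRITERIA = [
    "new knowledge or skills required",
    "complex or multiple changes",
    "critical process / hard-to-detect errors",
    "large or multi-site audience",
    "conditioned behavior must change",
]

def recommend_training(applicable):
    """Map the count of applicable risk criteria to a training method."""
    n = len(applicable)
    if n >= 4:
        return "on-the-job training with qualification"
    if n >= 2:
        return "instructor-led training"
    if n == 1:
        return "computer-based training"
    return "structured discussion"

print(recommend_training(CRITERIA[:3]))  # instructor-led training
```

In practice the mapping should be debated and owned by your training and quality functions, not hard-coded; the value is in forcing the risk conversation.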

Building training is a different set of skills. I keep threatening a training peer with doing a podcast episode (probably more than one) on the subject (do I really want to do podcasts?).

The last thing I want to leave you with: build training evaluations into this. Kirkpatrick’s model is a favorite; Level 4 (Results) evaluations, which tell us how effective our training was over time, make a darn good effectiveness review. I strongly recommend building that into a change management process.
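A Level 4 (Results) check can be as simple as comparing a process metric before and after training. A minimal sketch, assuming a hypothetical deviation-rate metric and a purely illustrative 20% improvement threshold:

```python
# A sketch of a Kirkpatrick Level 4 (Results) effectiveness check comparing
# an error/deviation rate before and after training. The 20% improvement
# threshold is illustrative, not a standard.

def training_effective(rate_before, rate_after, min_improvement=0.20):
    """True if the rate dropped by at least min_improvement (fractional)."""
    if rate_before <= 0:
        return rate_after <= 0  # nothing to improve; pass only if still zero
    return (rate_before - rate_after) / rate_before >= min_improvement

print(training_effective(0.10, 0.06))  # True: a 40% reduction
```

Fold a check like this into the change's effectiveness review at a defined interval after implementation, so the training evaluation and the change evaluation close together.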

Gamestorming

Gamestorming: A Playbook for Innovators, Rulebreakers, and Changemakers by Dave Gray, Sunni Brown, and James Macanufo

Like The Quality Toolbox, this is a book chock-full of usefulness. It offers a fun approach to collaborative activities that gets everyone participating in creative, design-oriented work. From planning meetings to generating ideas, understanding customers, creating prototypes, or making better decisions, Gamestorming is a way for groups to “work better together.”

Divided into Opening, Exploring and Closing sections, the structure of the book will be familiar to anyone with a facilitation background. I am constantly dipping into this book for activities for team meetings, project kickoffs, development meetings, lessons learned and a whole lot of other meetings.

This book delves into the use of visual thinking to increase effectiveness and, I find, dramatically shorten the time a group needs to solve a problem. It proposes that visual thinking can help teams by:

  • Using a simple, shared visual language to increase understanding and information retention;
  • Applying improvisational discovery to keep participants engaged;
  • Mapping the big picture, solving problems and innovating as a team;
  • Creating visual meeting artifacts to drive decisions forward.

What is especially cool is that there is a great webpage dedicated to these games that I hope you will find as useful as I do. It is full of exercises, activities and advice.