When Your Deviation/CAPA Program Runs Smoothly, Expect a Period of Increased Deviations

One reason to invest in a CAPA program is that you will see fewer deviations over time as you fix issues. That is true, but it takes time. Yes, you’ve dealt with your backlog, improved your investigations, integrated risk management, built problem-solving into your processes, and are truly driving preventive actions. And yet your deviations remain high. What is going on?

It’s because you are getting good at finding problems and are still working your way through the bolus of accumulated issues. Here’s what is happening:

  1. Improved Detection and Reporting: As a CAPA program matures, it enhances an organization’s ability to detect and report deviations. Employees become more adept at identifying and documenting deviations due to better training and awareness, leading to a temporary increase in reported deviations.
  2. Thorough Root Cause Analysis: A well-functioning CAPA program emphasizes thorough root cause analysis. This process often uncovers previously unnoticed issues and identifies additional deviations that need to be addressed.
  3. Increased Scrutiny and Compliance: As the CAPA program gains momentum, management usually scrutinizes it more, which can lead to the discovery of more deviations. Organizations become more vigilant in maintaining compliance, resulting in more deviations being reported and documented.
  4. Systematic Process Improvements: The CAPA process often leads to systemic improvements in processes and procedures. As these improvements are implemented, any deviations from the new standards are more likely to be identified and recorded, contributing to an initial rise in deviation reports.
  5. Cultural Shift Towards Quality: A successful CAPA program fosters a culture of quality and continuous improvement. Employees may feel more empowered and responsible for reporting deviations, increasing the number of deviations captured.

Expect these changes and build your metrics program around them. Avoid introducing a metric like a reduction in deviations in the first year; such a metric will drive bad behavior. Instead, focus on metrics that demonstrate the success of the changes and, over time, introduce metrics that show the overall benefits.
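To make that concrete, here is a minimal sketch (the numbers are invented for illustration) of why a raw deviation count misleads in the first year while a recurrence rate tells the real story:

```python
# Toy numbers, invented for illustration: as detection and reporting
# improve, the raw count rises even while root-cause recurrence falls.
years = {
    "Year 1": {"deviations": 120, "recurring": 48},
    "Year 2": {"deviations": 150, "recurring": 30},  # more reported, fewer repeats
}

for year, d in years.items():
    rate = d["recurring"] / d["deviations"]
    print(f"{year}: {d['deviations']} deviations, {rate:.0%} recurring")

# A count-based target would punish Year 2; a recurrence-based one rewards it.
```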

System Boundary

A system boundary for equipment or utilities refers to the demarcation points that define the extent of a system’s components and the scope of its operations. This boundary is crucial for managing, validating, maintaining, and securing the system.

    • For utilities, the last valve before the system being supplied can be used as the boundary, which can also serve as a Lock Out Tag Out (LOTO) point.
    • Physical connections like tri-clamp connections or flanges can define the boundary for packaged systems or skids.
    • Filters can serve as the boundary between critical and non-critical systems, such as air or HVAC systems.

    Defining system boundaries is crucial during the design of equipment and systems. It helps identify where the equipment starts and stops and where the breakpoints are situated. This ensures a smooth transition and handover during the commissioning process.

    1. Early Definition: Define system boundaries as early as possible in the system’s development life cycle to reduce costs and ensure effective security controls are implemented from the start.
    2. Stakeholder Involvement: Relevant stakeholders, such as system engineers, utility providers, and maintenance teams, should be involved in defining system boundaries to ensure alignment and a clear understanding of responsibilities.
    3. Documentation and Traceability: To ensure consistency and traceability, document and maintain system boundaries in relevant diagrams (e.g., P&IDs, system architecture diagrams) and commissioning/qualification protocols.
    4. Periodic Review: Regularly review and update system boundaries as the system evolves or the environment changes, using change management and configuration management processes to ensure consistency and completeness.
    5. Enterprise-level Coordination: At an enterprise level, coordinate and align system boundaries across all major systems to identify gaps and overlaps and to ensure seamless coverage of security responsibilities.
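To make items 3 and 4 concrete, here is a minimal sketch of how a system boundary might be recorded so it stays traceable and reviewable. The structure and every name in it (BoundaryPoint, SystemBoundary, the tags and IDs) are hypothetical, not a standard:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date

@dataclass
class BoundaryPoint:
    tag: str          # e.g., the last valve before the supplied system
    kind: str         # "valve (LOTO)", "tri-clamp", "flange", "filter"
    drawing_ref: str  # the P&ID or architecture diagram that shows it

@dataclass
class SystemBoundary:
    system_id: str
    owner: str
    points: list[BoundaryPoint] = field(default_factory=list)
    last_reviewed: date | None = None  # drives the periodic review

# Hypothetical example: a water loop bounded at its last supply valve.
wfi_loop = SystemBoundary(
    system_id="UTIL-WFI-01",
    owner="Utilities Engineering",
    points=[BoundaryPoint("V-1042", "valve (LOTO)", "P&ID-WFI-003")],
    last_reviewed=date(2024, 9, 1),
)
print(wfi_loop)
```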

    Applying Systems Thinking

    Systems thinking and modeling techniques are essential for managing and improving complex systems. These approaches help understand the interconnected nature of systems, identify key variables, and make informed decisions to enhance performance, reliability, and sustainability. Here’s how these methodologies can be applied:

    Holistic Approach

      • Systems thinking involves viewing the system as an integrated whole rather than isolated components. This approach acknowledges that the system has qualities that the sum of individual parts cannot explain.
      • When developing frameworks, models, and best practices for systems, consider the interactions between people, processes, technology, and the environment.

      Key Elements:

      • Interconnectedness: Recognize that all parts of the utility system are interconnected. Changes in one part can affect other parts, sometimes in unexpected ways.
      • Feedback Loops: Identify feedback loops where outputs from one part of the system influence other parts. These can be reinforcing or balancing loops that affect system behavior over time.
      • Time Consideration: Understand that effects rarely ripple through a complex system instantaneously. Consider how changes will affect the system over time.
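A toy simulation (mine, not a standard model) shows the difference between the two loop types and why effects need time to play out:

```python
# Minimal stock-and-flow sketch: each step, the stock changes in
# proportion to its current level. A positive gain reinforces; a
# negative gain balances. The gains and step counts are arbitrary.
def simulate(gain: float, steps: int, start: float = 1.0) -> list[float]:
    stock, history = start, []
    for _ in range(steps):
        stock += gain * stock
        history.append(round(stock, 2))
    return history

print("Reinforcing loop:", simulate(gain=0.2, steps=6))   # grows each step
print("Balancing loop:  ", simulate(gain=-0.2, steps=6))  # decays toward equilibrium
```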

      Process Architecture

      Building a good process requires clear ownership and a deliberate plan. There is a fair amount of work that goes into it, which can be broken down as follows:

         
| Category | Sub-category | Basic theme |
| --- | --- | --- |
| Planning | Process Measurement | Identify, design, and implement balanced process metrics and measurement |
| | | Implement process metrics and measurement reporting mechanisms |
| | | Identify and implement KPIs (and KRIs) aligned to the process |
| | | Evaluate cycle times and identify potential wastes |
| | | Align the level and recognition of people involved in the process with the process design |
| | Customer Experience | Design the process with customer interaction trigger mechanisms |
| | | Design the process in line with customer expectations |
| | | Identify customer process performance expectations |
| | | Design customer entry points and define transaction types |
| | Process Change | Identify incremental and re-engineering process enhancement opportunities with staff involvement |
| | | Design the process with minimal process hand-offs |
| | | Create and execute process improvement plans |
| | | Identify process automation opportunities |
| | | Pilot the process design to ensure it meets performance objectives |
| | Governance | Design an efficient process with governance and internal control considerations |
| | Capacity | Conduct demand and capacity planning activities |
| | Staff Training | Develop and conduct staff training initiatives in line with customer, process, product, and systems expectations |
| | | Develop a skills matrix and staff capability requirements in line with the process design |
| | Technology | Define technology enablers |
| | Alignment | Align process objectives with organizational goals |
| | Change Management | Engage impacted stakeholders on process changes |
| Control | Process Measurement | Monitor process performance |
| | | Report on process and staff performance using visual management tools |
| | | Continuously obtain customer satisfaction with, and expectations of, the process |
| | | Actively manage process exceptions |
| | | Monitor staff performance metrics |
| | Process Change | Identify process improvement opportunities on a continuous basis |
| | | Focused process hand-off management and tracking |
| | Capacity | Demand and capacity planning and monitoring |
| Governance | Process Change | Process maintenance and continuous updates |
| | | Define and conform to process documentation standards |
| | Change Management | Process communication and awareness |
| | Staff Training | Utilize process documentation knowledge to facilitate staff training |

Like any activity, it helps to document it. The table above is the template I use.

Documents and the Heart of the Quality System

A month back on LinkedIn I complained about a professional society pushing the idea of a document-free quality management system. This is one of my biggest pet peeves with Industry 4.0 proponents, and it demonstrates a fundamental failure to understand core concepts. Frankly, it is one of the reasons why many Industry/Quality/Pharma 4.0 initiatives fail to deliver. Unfortunately, I didn’t follow through on my idea of proposing a session to that society’s conference, so instead here are my thoughts.

Fundamentally, documents are the lifeblood of an organization, but paper is not. This is where folks get confused, and this confusion is limiting us.

Let’s go back to basics, which I covered in my 2018 post on document management.

When talking about documents, we really should talk about function, not just name or type. This allows us to think more broadly about our documents and how they function as that lifeblood.

There are three types of documents:

• Functional Documents provide instructions so people can perform tasks and make decisions safely, effectively, compliantly, and consistently. This usually includes procedures, process instructions, protocols, methods, and specifications. Many of these require some sort of training decision. Functional documents should go through a process that ensures they stay up-to-date, especially in relation to current practices and relevant standards (periodic review).
• Records provide evidence that actions were taken and decisions were made in keeping with procedures. This includes batch manufacturing records, logbooks, and laboratory data sheets and notebooks. Records are a popular target for electronic alternatives.
• Reports provide specific information on a particular topic in a formal, standardized way. Reports may include data summaries, findings, and actions to be taken.

The beating heart of our quality system takes us from functional document to record to report in a cycle of continuous improvement.

Functional documents are how we realize requirements, that is, the needs and expectations of our organization. There are multiple ways to serve up functional documents, the big three being paper, paper-on-glass, and some sort of execution system. That last option unites function with record, which is a big chunk of the promise of an execution system.

The maturation path is to go from mostly paper execution, to paper-on-glass, to end-to-end integration and execution, driving up reliability and driving out error. But at the heart we still have functional documents, records, and reports. The paper goes away, but the document remains.
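As a rough sketch of how an execution system unites function with record (the procedure, fields, and names here are invented for illustration), executing the functional document makes the record fall out as a by-product:

```python
from datetime import datetime

# Functional document: instructions so people can perform the task.
procedure = {
    "doc_id": "SOP-014",
    "steps": ["Verify line clearance", "Charge raw material", "Record lot number"],
}

def execute(procedure: dict, operator: str) -> dict:
    """Executing the procedure emits the record as a by-product."""
    return {
        "doc_id": procedure["doc_id"],
        "operator": operator,
        "completed": [
            (step, datetime.now().isoformat(timespec="seconds"))
            for step in procedure["steps"]
        ],
    }

record = execute(procedure, operator="jsmith")

# Report: a formal, standardized summary built from the record.
print(f"{record['doc_id']}: {len(record['completed'])} steps completed by {record['operator']}")
```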

So how is this failing us?

Any process is a way to realize a set of requirements. Those requirements come from external sources (regulations, standards, etc.) and internal sources (efficiency, business needs). We then meet those requirements through People, Procedure, Principles, and Technology. They are interlinked and strive to deliver efficiency, effectiveness, and excellence.

This failure to understand documents means we think we can solve everything through a single technology application: an eQMS will solve problems in quality events, a LIMS in the lab, an MES in manufacturing. Each of these is a lever for change, but alone none can drive the results we want.

Because of the limitations of this thinking, we get systems designed for yesterday’s problems instead of tomorrow’s.

We get documentation systems that treat functional documents pretty much the same way we treated them 30 years ago: as discrete things. These discrete things then interact with our electronic systems across a gap. There is little traceability, which complicates change control and makes it difficult to train experts. The funny thing is, we have the pieces; because of the limitations of our technology, we just aren’t leveraging them.

The V-model approach should be applied, in a risk-based manner, to the design of our full system, not just its technical aspects.

System feasibility maps to policy and governance; user requirements let us trace which elements are realized through people, procedure, principles, and/or technology. Everything then stems from there.
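As one possible sketch of that traceability (the requirement texts are invented), each user requirement is tagged with the elements that realize it, so changes can be traced in both directions:

```python
# Map each user requirement to the elements that realize it.
requirements = {
    "UR-01 Deviations are triaged within one business day": ["People", "Procedure"],
    "UR-02 A root cause taxonomy is applied to every investigation": ["Procedure", "Technology"],
    "UR-03 CAPA effectiveness checks are scheduled automatically": ["Technology"],
}

# Reverse view: which requirements each element carries.
by_element: dict[str, list[str]] = {}
for req, elements in requirements.items():
    for element in elements:
        by_element.setdefault(element, []).append(req)

for element, reqs in sorted(by_element.items()):
    print(f"{element}: {len(reqs)} requirement(s)")
```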

      Measuring Training Effectiveness for Organizational Performance

      When designing training we want to make sure four things happen:

      • Training is used correctly as a solution to a performance problem
      • Training has the right content, objectives, and methods
      • Trainees are sent to training for which they have the basic skills, prerequisite knowledge, and confidence needed to learn
      • Training delivers the expected learning

Training is a useful lever in organizational change and improvement. We want to make sure the training drives organizational metrics. And like everything, you need to be able to measure it to improve it.

The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training). Other methods are introduced periodically, but the Kirkpatrick model endures because of its simplicity. Created by Donald Kirkpatrick, the model consists of four levels, each designed to measure a specific element of the training. In use for over 50 years and refined through application by learning and development professionals around the world, it is the most recognized method of evaluating the effectiveness of training programs. It became popular because it breaks a complex subject into manageable levels and accommodates any style of training, both informal and formal.

      Level 1: Reaction

Kirkpatrick’s first level measures the learners’ reaction to the training. A Level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed the time spent and found it valuable. Level 1 evaluations, colloquially called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation concentrates on three elements: the course content, the physical environment, and the instructor’s presentation and skills.

      Level 2: Learning

      Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is through the use of a pre- and posttest. Pre- and posttests are key to ascertaining whether the participants learned anything in the learning event. Identical pre- and posttests are essential because the difference between the pre- and posttest scores indicates the amount of learning that took place. Without a pretest, one does not know if the trainees knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.
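A minimal sketch of the arithmetic (the scores are invented): with identical pre- and posttests, the score delta stands in for the learning that took place:

```python
pre_scores = {"alice": 55, "bala": 70, "chen": 60}
post_scores = {"alice": 85, "bala": 90, "chen": 80}

# Per-trainee gain is only meaningful because the tests are identical.
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
avg_gain = sum(gains.values()) / len(gains)

print(f"Per-trainee gains: {gains}")
print(f"Average gain: {avg_gain:.1f} points")  # evidence of Level 2 learning
```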

      Level 3: Behavior

      Level 3 measures whether the learning is transferred into practice in the workplace.

      Level 4: Results

Level 4 measures the effect on the business environment: did we meet our objectives?

Level 1: Reaction. Reaction evaluation is how the delegates felt, and their personal reactions to the training or learning experience.
Characteristics:
▪ Did the trainee consider the training relevant?
▪ Did they like the venue, equipment, timing, domestics, etc.?
▪ Did the trainees like and enjoy the training?
▪ Was it a good use of their time?
▪ Level of participation
▪ Ease and comfort of the experience
Examples:
▪ Feedback forms based on subjective personal reaction to the training experience
▪ Verbal reactions, which can be analyzed
▪ Post-training surveys or questionnaires
▪ Online evaluation or grading by delegates
▪ Subsequent verbal or written reports given by delegates to managers back at their jobs
▪ Typically “happy sheets”

Level 2: Learning. Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience.
Characteristics:
▪ Did the trainees learn what was intended to be taught?
▪ Did the trainees experience what was intended for them to experience?
▪ What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?
Examples:
▪ Typically assessments or tests before and after the training
▪ Interviews or observation can be used before and after, although this is time-consuming and can be inconsistent
▪ Methods of assessment need to be closely related to the aims of the learning
▪ Reliable, clear scoring and measurements need to be established
▪ Hard-copy, electronic, online, or interview-style assessments are all possible

Level 3: Behavior. Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior; this can be measured immediately after and several months after the training, depending on the situation.
Characteristics:
▪ Did the trainees put their learning into effect when back on the job?
▪ Were the relevant skills and knowledge used?
▪ Was there noticeable and measurable change in the activity and performance of the trainees when back in their roles?
▪ Would the trainee be able to transfer their learning to another person? Is the trainee aware of their change in behavior, knowledge, and skill level?
▪ Was the change in behavior and new level of knowledge sustained?
Examples:
▪ Observation and interviews over time are required to assess change, the relevance of the change, and the sustainability of the change
▪ Assessments need to be designed to reduce the subjective judgment of the observer
▪ 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgment about change after training, and this can be analyzed for groups of respondents and trainees
▪ Online and electronic assessments are more difficult to incorporate; assessments tend to be more successful when integrated within existing management and coaching protocols

Level 4: Results. Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee; it is the acid test.
Characteristics:
▪ Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, and retention
Examples:
▪ The challenge is to identify which results relate to the trainee’s input and influence, and how. It is therefore important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured
▪ This process overlays normal good management practice; it simply needs linking to the training input
▪ For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training

4 Levels of Training Effectiveness

      Example in Practice – CAPA

When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA program, we have two key aims, against which we can apply measures.

| Behavior | Measure |
| --- | --- |
| Investigate to find root cause | % recurring issues |
| Implement actions to eliminate root cause | Preventive-to-corrective action ratio |

To support each of these top-level measures, we define a set of behavior indicators, such as cycle time, right-first-time, etc. A review rubric is implemented to drive and assess these indicators.
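Here is a rough sketch of how the two top-level measures could be computed from CAPA records (the fields and values are invented for illustration):

```python
from collections import Counter

capas = [
    {"id": "CAPA-01", "type": "corrective", "root_cause": "RC-seal"},
    {"id": "CAPA-02", "type": "preventive", "root_cause": "RC-training"},
    {"id": "CAPA-03", "type": "corrective", "root_cause": "RC-seal"},
    {"id": "CAPA-04", "type": "preventive", "root_cause": "RC-labeling"},
]

# % recurring issues: share of CAPAs whose root cause appears more than once.
counts = Counter(c["root_cause"] for c in capas)
recurring = sum(1 for c in capas if counts[c["root_cause"]] > 1)
print(f"Recurring issues: {recurring / len(capas):.0%}")

# Preventive-to-corrective ratio: should rise as the program matures.
preventive = sum(1 for c in capas if c["type"] == "preventive")
corrective = sum(1 for c in capas if c["type"] == "corrective")
print(f"Preventive:corrective = {preventive}:{corrective}")
```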

      Our four levels to measure training effectiveness will now look like this:

| Level | Measure |
| --- | --- |
| Level 1: Reaction | Personal action plan and a happy sheet |
| Level 2: Learning | Completion of the rubric on a sample event |
| Level 3: Behavior | Continued performance and improvement against the rubric and the key review behavior indicators |
| Level 4: Results | Improvement in the % of recurring issues and an increase in the preventive-to-corrective action ratio |

      This is all about measuring the effectiveness of the transfer of behaviors.

Strong signals of transfer expectations in the organization versus signals that weaken transfer expectations:

• Strong: Training participants are required to attend follow-up sessions and other transfer interventions.
  What it indicates: Individuals and teams are committed to the change and to obtaining the intended benefits.
  Weak: Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization.
  What it indicates: The key factor for a trainee is attendance, not behavior change.

• Strong: The training description specifies transfer goals (e.g., “Trainee increases CAPA success by driving down recurrence of root cause”).
  What it indicates: The organization has a clear vision and expectation of what the training should accomplish.
  Weak: The training description only roughly outlines training goals (e.g., “Trainee improves their root cause analysis skills”).
  What it indicates: The organization has only a vague idea of what the training should accomplish.

• Strong: Supervisors take time to support transfer (e.g., through pre- and post-training meetings), and transfer support is part of regular agendas.
  What it indicates: Transfer is considered important in the organization and is supported by supervisors and managers, all the way to the top.
  Weak: Supervisors do not invest in transfer support; it is not part of the supervisor role.
  What it indicates: Transfer is not considered very important in the organization. Managers have more important things to do.

• Strong: Each training ends with careful planning of individual transfer intentions.
  What it indicates: Defining transfer intentions is a central component of the training.
  Weak: Transfer planning at the end of the training does not take place, or happens only sporadically.
  What it indicates: Defining transfer intentions is not (or not an essential) part of the training.

Good training, and thus good and consistent transfer, builds that into the process. It is why I am such a fan of using a rubric to drive consistent performance.