X-Matrix for Strategic Execution

Quality needs to be managed as a program, and as such it must walk a delicate line: setting long-term goals, short-term goals, and improvement priorities while interacting with a suite of portfolios, programs, and KPIs. As quality professionals navigate increasingly complex regulatory landscapes, technological disruptions, and evolving customer expectations, the need for structured approaches to quality planning has never been greater.

At the heart of this activity, I use an X-Matrix, a powerful tool at the intersection of strategic planning and quality management. The X-Matrix provides a comprehensive framework that clarifies the chaos, visually representing how long-term quality objectives cascade into actionable initiatives with clear ownership and metrics – connecting the dots between aspiration and execution in a single, coherent framework.

Understanding the X-Matrix: Structure and Purpose

The X-Matrix is a strategic planning tool from Hoshin Kanri methodology that brings together multiple dimensions of organizational strategy onto a single page. Named for its distinctive X-shaped pattern of relationships, this tool enables us to visualize connections between long-term breakthroughs, annual objectives, improvement priorities, and measurable targets – all while clarifying ownership and resource allocation.

The X-Matrix is structured around four key quadrants that create its distinctive shape:

  1. South Quadrant (3-5 Year Breakthrough Objectives): These are the foundational, long-term quality goals that align with organizational vision and regulatory expectations. In quality contexts, these might include achieving specific quality maturity levels, establishing new quality paradigms, or fundamentally transforming quality systems.
  2. West Quadrant (Annual Objectives): These represent the quality priorities for the coming year that contribute directly to the longer-term breakthroughs. These objectives are specific enough to be actionable within a one-year timeframe.
  3. North Quadrant (Improvement Priorities): These are the specific initiatives, projects, and process improvements that will be undertaken to achieve the annual objectives. Each improvement priority should have clear ownership and resource allocation.
  4. East Quadrant (Targets/Metrics): These are the measurable indicators that will be used to track progress toward both annual objectives and breakthrough goals. In quality planning, these often include process capability indices, deviation rates, right-first-time metrics, and other key performance indicators.

The power of the X-Matrix lies in the correlation points where these quadrants intersect. These intersections show how initiatives support objectives and how objectives align with long-term goals. They create a clear line of sight from strategic quality vision to daily operations and improvement activities.
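The quadrant structure and correlation points described above can be sketched as a small data structure. This is a minimal illustration only: the `XMatrix` class, its field names, and the example entries are my own assumptions, not part of any standard Hoshin Kanri notation.

```python
from dataclasses import dataclass, field

@dataclass
class XMatrix:
    breakthroughs: list[str] = field(default_factory=list)      # south: 3-5 year goals
    annual_objectives: list[str] = field(default_factory=list)  # west: this year's goals
    priorities: list[str] = field(default_factory=list)         # north: initiatives
    metrics: list[str] = field(default_factory=list)            # east: targets/measures
    # correlation points: (supporting item, supported item) pairs
    correlations: set[tuple[str, str]] = field(default_factory=set)

    def link(self, supporting: str, supported: str) -> None:
        """Record a correlation point between two quadrant entries."""
        self.correlations.add((supporting, supported))

    def supports(self, supporting: str, supported: str) -> bool:
        return (supporting, supported) in self.correlations

# Illustrative entries drawn from the examples later in this post
xm = XMatrix(
    breakthroughs=["Predictive quality monitoring"],
    annual_objectives=["Implement SPC for critical processes"],
    priorities=["Deploy SPC training and tools"],
    metrics=["Cpk for critical processes"],
)
xm.link("Implement SPC for critical processes", "Predictive quality monitoring")
xm.link("Deploy SPC training and tools", "Implement SPC for critical processes")
```

Traversing the `correlations` set gives the "line of sight" the text describes: each improvement priority can be traced through an annual objective up to a breakthrough goal.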

Why the X-Matrix Excels for Quality Planning

Traditional quality planning approaches often suffer from disconnection between strategic objectives and tactical activities. Quality initiatives may be undertaken in isolation, with limited understanding of how they contribute to broader organizational goals. The X-Matrix addresses this fragmentation through its integrated approach to planning.

The X-Matrix provides visibility into the interdependencies within your quality system. By mapping the relationships between long-term quality objectives, annual goals, improvement priorities, and key metrics, quality leaders can identify potential resource conflicts, capability gaps, and opportunities for synergy.

Developing an X-Matrix necessitates cross-functional input and alignment to ensure that quality objectives are not isolated but integrated with operations, regulatory, supply chain, and other critical functions. The development of an X-Matrix encourages the back-and-forth dialogue necessary to develop realistic, aligned goals.

Perhaps most importantly for quality organizations, the X-Matrix provides the structure and rigor to ensure quality planning is not left to chance. As the FDA and other regulatory bodies increasingly emphasize Quality Management Maturity (QMM) as a framework for evaluating pharmaceutical operations, the disciplined approach embodied in the X-Matrix becomes a competitive advantage. The matrix systematically considers resource constraints, capability requirements, and performance measures – all essential components of mature quality systems.

Mapping Modern Quality Challenges to the X-Matrix

The quality landscape is evolving rapidly, with several key challenges that must be addressed in any comprehensive quality planning effort. The X-Matrix provides an ideal framework for addressing these challenges systematically. Building on the post “The Challenges Ahead for Quality,” we can start to build out an X-Matrix.

Advanced Analytics and Digital Transformation

As data sources multiply and processing capabilities expand, quality organizations face increased expectations for data-driven insights and decision-making. An effective X-Matrix for quality planning could include:

3-5 Year Breakthrough: Establish a predictive quality monitoring system that leverages advanced analytics to identify potential quality issues before they manifest.

Annual Objectives: Implement data visualization tools for key quality metrics; establish data governance framework for GxP data; develop predictive models for critical quality attributes.

Improvement Priorities: Create cross-functional data science capability; implement automated data capture for batch records; develop real-time dashboards for process parameters.

Metrics: Percentage of quality decisions made with data-driven insights; predictive model accuracy; reduction in quality investigation cycle time through analytics.

Operational Stability in Complex Supply Networks

As pharmaceutical manufacturing becomes increasingly globalized with complex supplier networks, operational stability emerges as a critical challenge. Operational stability represents the state where manufacturing and quality processes exhibit consistent, predictable performance over time with minimal unexpected variation. The X-Matrix can address this through:

3-5 Year Breakthrough: Achieve Level 4 (Proactive) operational stability across all manufacturing sites, networks and key suppliers.

Annual Objectives: Implement statistical process control for critical processes; establish supplier quality alignment program; develop operational stability metrics and monitoring system.

Improvement Priorities: Deploy SPC training and tools; conduct operational stability risk assessments; implement regular supplier quality reviews; establish cross-functional stability team.

Metrics: Process capability indices (Cp, Cpk); right-first-time batch rates; deviation frequency and severity patterns; supplier quality performance.
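The capability indices named above can be computed directly from the standard definitions. This is a sketch only: the sample data is invented for illustration, and in practice sigma should come from a process shown to be in statistical control.

```python
import statistics

def cp_cpk(samples: list[float], lsl: float, usl: float) -> tuple[float, float]:
    """Cp  = (USL - LSL) / (6 * sigma): potential capability, ignores centering.
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma): penalizes off-center processes."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Invented measurements against hypothetical spec limits of 9.4-10.6
data = [10.1, 10.2, 9.9, 10.0, 10.3, 10.1, 10.2, 10.0]
cp, cpk = cp_cpk(data, lsl=9.4, usl=10.6)
print(round(cp, 2), round(cpk, 2))  # 1.53 1.27
```

Because this process runs slightly above the midpoint of the spec, Cpk falls below Cp; a gap between the two is itself a useful stability signal.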

Using the X-Matrix to Address Validation Challenges

Validation presents unique challenges in modern pharmaceutical operations, particularly as data systems become more complex and interconnected. Handling complex data types and relationships can be time-consuming and difficult, while managing validation rules across large datasets becomes increasingly costly and challenging. The X-Matrix offers a structured approach to addressing these validation challenges:

3-5 Year Breakthrough: Establish a risk-based, continuous validation paradigm that accommodates rapidly evolving systems while maintaining compliance.

Annual Objectives: Implement risk-based validation approach for all GxP systems; establish automated testing capabilities for critical applications; develop validation strategy for AI/ML applications.

Improvement Priorities: Train validation team on risk-based approaches; implement validation tool for automated test execution; develop validation templates for different system types; establish validation center of excellence.

Metrics: Validation cycle time reduction; percentage of validation activities conducted via automated testing; validation resource efficiency; validation effectiveness (post-implementation defects).

This X-Matrix approach to validation challenges ensures that validation activities are not merely compliance exercises but strategic initiatives that support broader quality objectives. By connecting validation priorities to annual objectives and long-term breakthroughs, organizations can justify the necessary investments and resources while maintaining a clear focus on business value.

Connecting X-Matrix Planning to Quality Maturity Models

The FDA’s Quality Management Maturity (QMM) model provides a framework for assessing an organization’s progression from reactive quality management to optimized, continuous improvement. This model aligns perfectly with the X-Matrix planning approach, as both emphasize systematic progression toward excellence.

The X-Matrix can be structured to support advancement through quality maturity levels by targeting specific capabilities associated with each level:

| Maturity Level | X-Matrix Breakthrough Objective | Annual Objectives | Improvement Priorities |
| --- | --- | --- | --- |
| Reactive (Level 1) | Move from reactive to controlled quality operations | Establish baseline quality metrics; implement basic SOPs; define critical quality attributes | Process mapping; basic training program; deviation management system |
| Controlled (Level 2) | Transition from controlled to predictive quality systems | Implement statistical monitoring; establish proactive quality planning; develop quality risk management | SPC implementation; risk assessment training; preventive maintenance program |
| Predictive (Level 3) | Advance from predictive to proactive quality operations | Establish leading indicators; implement knowledge management; develop cross-functional quality ownership | Predictive analytics capability; knowledge database; quality circles |
| Proactive (Level 4) | Progress from proactive to innovative quality systems | Implement continuous verification; establish quality innovation program; develop supplier quality maturity | Continuous process verification; innovation workshops; supplier development program |
| Innovative (Level 5) | Maintain and leverage innovative quality capabilities | Establish industry-leading practices; develop quality thought leadership; implement next-generation quality approaches | Quality research initiatives; external benchmarking; technology innovation pilots |

This alignment between the X-Matrix and quality maturity models offers several advantages. First, it provides a clear roadmap for progression through maturity levels. Second, it helps organizations prioritize initiatives based on their current maturity level and desired trajectory. Finally, it creates a framework for measuring and communicating progress toward maturity goals.

Implementation Best Practices for Quality X-Matrix Planning

Implementing an X-Matrix approach to quality planning requires careful consideration of several key factors.

1. Start With Clear Strategic Quality Imperatives

The foundation of any effective X-Matrix is a clear set of strategic quality imperatives that align with broader organizational goals. These imperatives should be derived from:

  • Regulatory expectations and trends
  • Customer quality requirements
  • Competitive quality positioning
  • Organizational quality vision

These imperatives form the basis for the 3-5 year breakthrough objectives in the X-Matrix. Without this clarity, the remaining elements of the matrix will lack focus and alignment.

2. Leverage Cross-Functional Input

Quality does not exist in isolation; it intersects with every aspect of the organization. Effective X-Matrix planning requires input from operations, regulatory affairs, supply chain, R&D, and other functions. This cross-functional perspective ensures that quality objectives are realistic, supported by appropriate capabilities, and aligned with broader organizational priorities.

The catchball process from Hoshin Kanri provides an excellent framework for this cross-functional dialogue, allowing for iterative refinement of objectives, priorities, and metrics based on input from various stakeholders.

3. Focus on Critical Few Priorities

The power of the X-Matrix lies in its ability to focus organizational attention on the most critical priorities. Resist the temptation to include too many initiatives, objectives, or metrics. Instead, identify the vital few that will drive meaningful progress toward quality maturity and operational excellence.

This focus is particularly important in regulated environments where resource constraints are common and compliance demands can easily overwhelm improvement initiatives. A well-designed X-Matrix helps quality leaders maintain strategic focus amid the daily demands of compliance activities.

4. Establish Clear Ownership and Resource Allocation

The X-Matrix should clearly identify who is responsible for each improvement priority and what resources they will have available. This clarity is essential for execution and accountability. Without explicit ownership and resource allocation, even the most well-conceived quality initiatives may fail to deliver results.

The structure of the X-Matrix facilitates this clarity by explicitly mapping resources to initiatives and objectives. This mapping helps identify potential resource conflicts early and ensures that critical initiatives have the support they need.

Balancing Structure with Adaptability in Quality Planning

A potential criticism of highly structured planning approaches like the X-Matrix is that they may constrain adaptability and innovation. However, a well-designed X-Matrix actually enhances adaptability by providing a clear framework for evaluating and integrating new priorities. The structure of the matrix makes it apparent when new initiatives align with strategic objectives and when they represent potential distractions. This clarity helps quality leaders make informed decisions about where to focus limited resources when disruptions occur.

The key lies in building what might be called “bounded flexibility”—freedom to innovate within well-understood boundaries. By thoroughly understanding which process parameters truly impact critical quality attributes, organizations can focus stability efforts where they matter most while allowing flexibility elsewhere. The X-Matrix supports this balanced approach by clearly delineating strategic imperatives (where stability is essential) from tactical initiatives (where adaptation may be necessary).

Change management systems represent another critical mechanism for balancing stability with innovation. Well-designed change management ensures that innovations are implemented in a controlled manner that preserves operational stability. The X-Matrix can incorporate change management as a specific improvement priority, ensuring that the organization’s ability to adapt is explicitly addressed in quality planning.

The X-Matrix as the Engine of Quality Excellence

The X-Matrix represents a powerful approach to quality planning that addresses the complex challenges facing modern quality organizations. By providing a structured framework for aligning long-term quality objectives with annual goals, specific initiatives, and measurable targets, the X-Matrix helps quality leaders navigate complexity while maintaining strategic focus.

As regulatory bodies evolve toward Quality Management Maturity models, the systematic approach embodied in the X-Matrix will become increasingly valuable. Organizations that establish and maintain strong operational stability through structured planning will find themselves well-positioned for both compliance and competition in an increasingly demanding pharmaceutical landscape.

The journey toward quality excellence is not merely technical but cultural and organizational. It requires systematic approaches, appropriate metrics, and balanced objectives that recognize quality not as an end in itself but as a means to deliver value to patients, practitioners, and the business. The X-Matrix provides the framework needed to navigate this journey successfully, translating quality vision into tangible results that advance both organizational performance and patient outcomes.

By adopting the X-Matrix approach to quality planning, organizations can ensure that their quality initiatives are not isolated efforts but components of a coherent strategy that addresses current challenges while building the foundation for future excellence. In a world of increasing complexity and rising expectations, this structured yet flexible approach to quality planning may well be the difference between merely complying and truly excelling.

Quality, Decision Making and Putting the Human First

Quality stands in a position, sometimes uniquely in an organization, of engaging with stakeholders to understand what objectives and unique positions the organization needs to assume, and the choices it is making in order to achieve those objectives and positions.

The effectiveness of the team in making good decisions by picking the right choices depends on its ability to analyze a problem and generate alternatives. As I discussed in my post “Design Lifecycle within PDCA – Planning,” experimentation plays a critical part in the decision-making process. When designing the solution we always consider:

  • Always include a “do nothing” option: Not every decision or problem demands an action. Sometimes, the best way is to do nothing.
  • How do you know what you think you know? This should be a question everyone is comfortable asking. It allows people to check assumptions and to question claims that, while convenient, are not based on any kind of data, firsthand knowledge, or research.
  • Ask tough questions: Be direct and honest. Push hard to get to the core of what the options look like.
  • Have a dissenting option. It is critical to include unpopular but reasonable options. Make sure to include opinions or choices you personally don’t like, but for which good arguments can be made. This keeps you honest and gives anyone who sees the pros/cons list a chance to convince you to make a better decision than the one you might have arrived at on your own.
  • Consider hybrid choices. Sometimes it’s possible to take an attribute of one choice and add it to another. Like exploratory design, there are always interesting combinations in decision making. This can explode the number of choices, which can slow things down and create more complexity than you need. Watch for the zone of indifference (options that are not perceived as making any difference or adding any value) and don’t waste time in it.
  • Include all relevant perspectives. Consider if this decision impacts more than just the area the problem is identified in. How does it impact other processes? Systems?

A struggle every organization has is how to think through problems in a truly innovative way. Installing new processes into an old bureaucracy will only replace one form of control with another. We need to rethink the very matter of control and what it looks like within an organization. It is not about change management; on its own, change management will just shift the patterns of the past. To truly transform we need a new way of thinking.

One of my favorite books on just how to do this is Humanocracy: Creating Organizations as Amazing as the People Inside Them by Gary Hamel and Michele Zanini. In this book, the authors advocate that business must become fundamentally human-first. The idea of human ability, and how to cultivate and unleash it, is an underlying premise of the book.

Visualized by Rose Fastus

it’s possible to capture the benefits of bureaucracy—control, consistency, and coordination—while avoiding the penalties—inflexibility, mediocrity, and apathy.

Gary Hamel and Michele Zanini, Humanocracy, p. 15

The above quote really encapsulates the heart of this book, and why I think it is such a pivotal read for my peers. The book takes the core question of a bureaucracy, “How do we get human beings to better serve the organization?”, and inverts it. The issue at the heart of humanocracy becomes: “What sort of organization elicits and merits the best that human beings can give?” It seems a simple swap, but the implications are profound.

Bureaucracy versus Humanocracy. Source: Gary Hamel and Michele Zanini, Humanocracy, p. 48

I would hope you, like me, see the promise of many of the central tenets of Quality Management, not least Deming’s 8th point. The very real tendency of quality to devolve to pointless bureaucracy is something we should always be looking to combat.

Humanocracy’s central point is that by truly putting the employee first in our organizations, we drive a human-centered organization that powers and thrives on innovation. Humanocracy is particularly relevant as organizations seek to be more resilient, agile, adaptive, innovative, and customer-centric. Leaders pursuing such goals seek to install systems like agile, DevOps, and flexible teams. They will fail, because people are not processes. Resiliency, agility, and efficiency are not new programming codes for people. These goals require more than new rules or a corporate initiative. Agility, resilience, and the like are behaviors, attitudes, and ways of thinking that can only work when you change the deep systems and assumptions within an organization. This book discusses those deeper changes.

Humanocracy lays out seven tips for success in experimentation. I find they align nicely with Kotter’s 8 change accelerators.

| Humanocracy’s Tip | Kotter’s Accelerator |
| --- | --- |
| Keep it Simple | Generate (and celebrate) short-term wins |
| Use Volunteers | Enlist a volunteer army |
| Make it Fun | Sustain acceleration |
| Start in your own backyard | Form a change vision and strategic initiatives |
| Run the new parallel with the old | Enable action by removing barriers |
| Refine and Retest | Sustain acceleration |
| Stay loyal to the problem | Create a sense of urgency around a big opportunity |

Comparison to Kotter’s Eight Accelerators for Change

Measuring Training Effectiveness for Organizational Performance

When designing training we want to make sure four things happen:

  • Training is used correctly as a solution to a performance problem
  • Training has the right content, objectives, and methods
  • Trainees are sent to training for which they have the basic skills, prerequisite skills, or confidence needed to learn
  • Training delivers the expected learning

Training is a useful lever in organization change and improvement. We want to make sure the training drives organization metrics. And like everything, you need to be able to measure it to improve.

The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training), and while other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, this model has been in use for over 50 years, evolving through application by learning and development professionals around the world, and is the most recognized method of evaluating the effectiveness of training programs. The model has stood the test of time and became popular due to its ability to break down a complex subject into manageable levels. It accommodates any style of training, both informal and formal.

Level 1: Reaction

Kirkpatrick’s first level measures the learners’ reaction to the training. A level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed the time spent and found it valuable. Level 1 evaluations, colloquially called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation will concentrate on three elements: course content, the physical environment, and the instructor’s presentation skills.

Level 2: Learning

Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is through the use of a pre- and posttest. Pre- and posttests are key to ascertaining whether the participants learned anything in the learning event. Identical pre- and posttests are essential because the difference between the pre- and posttest scores indicates the amount of learning that took place. Without a pretest, one does not know if the trainees knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.
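One common way to summarize identical pre- and posttest scores is the normalized gain: the fraction of the possible improvement a trainee actually achieved. This goes beyond the text above, so treat it as one illustrative option rather than the prescribed method; the scores below are invented.

```python
def learning_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalized gain: (post - pre) / (max_score - pre)."""
    if pre >= max_score:
        return 0.0  # no headroom left to improve
    return (post - pre) / (max_score - pre)

# Three trainees' (pretest, posttest) scores on the same 100-point test
scores = [(40, 85), (60, 90), (55, 80)]
gains = [learning_gain(pre, post) for pre, post in scores]
print([round(g, 2) for g in gains])  # [0.75, 0.75, 0.56]
```

Normalizing by the pretest score matters because a trainee starting at 60 has less room to improve than one starting at 40; raw score differences alone would understate their learning.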

Level 3: Behavior

Level 3 measures whether the learning is transferred into practice in the workplace.

Level 4: Results

Level 4 measures the effect on the business environment: do we meet our objectives?

| Evaluation Level | Characteristics | Examples |
| --- | --- | --- |
| Level 1: Reaction | How the delegates felt, and their personal reactions to the training or learning experience: Did the trainees consider the training relevant? Did they like the venue, equipment, timing, domestics, etc.? Did they like and enjoy the training? Was it a good use of their time? Level of participation; ease and comfort of the experience | Feedback forms based on subjective personal reaction to the training experience; verbal reactions, which can be analyzed; post-training surveys or questionnaires; online evaluation or grading by delegates; subsequent verbal or written reports given by delegates to managers back at their jobs; typically “happy sheets” |
| Level 2: Learning | The measurement of the increase in knowledge or intellectual capability from before to after the learning experience: Did the trainees learn what was intended to be taught? Did they experience what was intended for them to experience? What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended? | Typically assessments or tests before and after the training; interview or observation can also be used, although it is time-consuming and can be inconsistent; methods of assessment need to be closely related to the aims of the learning; reliable, clear scoring and measurements need to be established; hard-copy, electronic, online, or interview-style assessments are all possible |
| Level 3: Behavior | The extent to which the trainees applied the learning and changed their behavior, which can be measured immediately and several months after the training, depending on the situation: Did the trainees put their learning into effect when back on the job? Were the relevant skills and knowledge used? Was there a noticeable and measurable change in the activity and performance of the trainees when back in their roles? Would the trainee be able to transfer their learning to another person? Is the trainee aware of their change in behavior, knowledge, or skill level? Was the change in behavior and new level of knowledge sustained? | Observation and interview over time are required to assess change, relevance of change, and sustainability of change; assessments need to be designed to reduce the subjective judgment of the observer; 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgment as to change after training, and this can be analyzed for groups of respondents and trainees; online and electronic assessments are more difficult to incorporate – assessments tend to be more successful when integrated within existing management and coaching protocols |
| Level 4: Results | The effect on the business or environment resulting from the improved performance of the trainee – it is the acid test. The challenge is to identify which results relate to the trainee’s input and influence, and how; it is therefore important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured | Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance – for instance, numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, and retention; this process overlays normal good management practice – it simply needs linking to the training input; for senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training |

4 Levels of Training Effectiveness

Example in Practice – CAPA

When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA program, we have two key aims, against which we can apply measures.

| Behavior | Measure |
| --- | --- |
| Investigate to find root cause | % recurring issues |
| Implement actions to eliminate root cause | Preventive to corrective action ratio |

To support each of these top-level measures we define a set of behavior indicators, such as cycle time and right-first-time. To support these, a review rubric is implemented.

Our four levels to measure training effectiveness will now look like this:

| Level | Measure |
| --- | --- |
| Level 1: Reaction | Personal action plan and a happy sheet |
| Level 2: Learning | Completion of the rubric on a sample event |
| Level 3: Behavior | Continued performance and improvement against the rubric and the key review behavior indicators |
| Level 4: Results | Improvements in % of recurring issues and an increase in preventive to corrective actions |
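The two Level 4 measures above can be computed from CAPA event records. This is a sketch under stated assumptions: the `CapaRecord` shape and the "root cause seen before" recurrence rule are illustrative choices, not a prescribed data model.

```python
from dataclasses import dataclass

@dataclass
class CapaRecord:
    root_cause: str
    action_type: str  # "corrective" or "preventive"

def pct_recurring(records: list[CapaRecord]) -> float:
    """% of CAPAs whose root cause already appeared in an earlier record."""
    seen: set[str] = set()
    recurring = 0
    for r in records:
        if r.root_cause in seen:
            recurring += 1
        seen.add(r.root_cause)
    return 100.0 * recurring / len(records) if records else 0.0

def preventive_to_corrective(records: list[CapaRecord]) -> float:
    """Ratio of preventive to corrective actions; higher suggests a more proactive program."""
    corrective = sum(1 for r in records if r.action_type == "corrective")
    preventive = sum(1 for r in records if r.action_type == "preventive")
    return preventive / corrective if corrective else float("inf")

# Invented example events
records = [
    CapaRecord("mislabeled raw material", "corrective"),
    CapaRecord("operator training gap", "preventive"),
    CapaRecord("mislabeled raw material", "corrective"),  # a recurrence
    CapaRecord("calibration drift", "preventive"),
]
print(pct_recurring(records))             # 25.0
print(preventive_to_corrective(records))  # 1.0
```

Trending these two numbers over time is what ties the training program back to Level 4 results: effective root-cause training should push % recurring down and the preventive-to-corrective ratio up.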

This is all about measuring the effectiveness of the transfer of behaviors.

| Strong Signals of Transfer Expectations in the Organization | Signals that Weaken Transfer Expectations in the Organization |
| --- | --- |
| Training participants are required to attend follow-up sessions and other transfer interventions. What it indicates: individuals and teams are committed to the change and to obtaining the intended benefits. | Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization. What it indicates: the key factor for a trainee is attendance, not behavior change. |
| The training description specifies transfer goals (e.g. “Trainee increases CAPA success by driving down recurrence of root cause”). What it indicates: the organization has a clear vision and expectation of what the training should accomplish. | The training description roughly outlines training goals (e.g. “Trainee improves their root cause analysis skills”). What it indicates: the organization has only a vague idea of what the training should accomplish. |
| Supervisors take time to support transfer (e.g. through pre- and post-training meetings), and transfer support is part of regular agendas. What it indicates: transfer is considered important in the organization and supported by supervisors and managers, all the way to the top. | Supervisors do not invest in transfer support, and transfer support is not part of the supervisor role. What it indicates: transfer is not considered very important in the organization; managers have more important things to do. |
| Each training ends with careful planning of individual transfer intentions. What it indicates: defining transfer intentions is a central component of the training. | Transfer planning at the end of the training does not take place, or takes place only sporadically. What it indicates: defining transfer intentions is not an essential part of the training. |

Good training, and thus good and consistent transfer, builds that into the process. It is why I am such a fan of utilizing a rubric to drive consistent performance.

Site Training Needs

Institute training on the job.

Principle 6, W. Edwards Deming

(a) Each person engaged in the manufacture, processing, packing, or holding of a drug product shall have education, training, and experience, or any combination thereof, to enable that person to perform the assigned functions. Training shall be in the particular operations that the employee performs and in current good manufacturing practice (including the current good manufacturing practice regulations in this chapter and written procedures required by these regulations) as they relate to the employee’s functions. Training in current good manufacturing practice shall be conducted by qualified individuals on a continuing basis and with sufficient frequency to assure that employees remain familiar with CGMP requirements applicable to them.

(b) Each person responsible for supervising the manufacture, processing, packing, or holding of a drug product shall have the education, training, and experience, or any combination thereof, to perform assigned functions in such a manner as to provide assurance that the drug product has the safety, identity, strength, quality, and purity that it purports or is represented to possess.

(c) There shall be an adequate number of qualified personnel to perform and supervise the manufacture, processing, packing, or holding of each drug product.

US FDA 21 CFR 211.25

All parts of the Pharmaceutical Quality system should be adequately resourced with competent personnel, and suitable and sufficient premises, equipment and facilities.

EU EMA/INS/GMP/735037/2014, 2.1

The organization shall determine and provide the resources needed for the establishment,
implementation, maintenance and continual improvement of the quality management system. The organization shall consider:

a) the capabilities of, and constraints on, existing internal resources;
b) what needs to be obtained from external providers.

ISO 9001:2015 requirement 7.1.1

It is critical to have enough people with the appropriate level of training to execute their tasks.

It is fairly easy to define the individual training plan, stemming from the job description and the process training requirements. In the aggregate we get the ability to track overdue training, and a forward look at what training is coming due. Quite frankly, these are lagging indicators: they show success at completing assigned training but give no insight into the central question – do we have enough qualified individuals to do the work?

To make this proactive, we start with the resource plan: what operations need to happen in a given time frame, and what resources are needed? We then compare that to the training requirements for those operations.

We can then evaluate current training status and retention levels and determine how many instructors we will need to ensure adequate training.

We perform a gap assessment to determine what new training needs exist.

We then take a forward look at what new improvements are planned and ensure appropriate training is forecasted.

Now we have a good picture of what an “adequate number” is. We can now set a leading KPI to ensure that training is truly proactive.
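As a minimal sketch of the comparison step above (all operation names, SOP numbers, and headcounts are hypothetical, purely for illustration), the gap between the resource plan and currently qualified staff might be tallied like this:

```python
# Hypothetical sketch: compare a resource plan against qualified headcount
# to surface training gaps before they bite. All data are illustrative.

# Training requirement per operation (operation -> required qualification)
operation_requirements = {
    "granulation": "SOP-101",
    "tablet_press": "SOP-205",
    "packaging": "SOP-310",
}

# Resource plan: qualified people each operation needs in the period
resource_plan = {"granulation": 4, "tablet_press": 6, "packaging": 3}

# Current training status: qualification -> people currently qualified
qualified_headcount = {"SOP-101": 5, "SOP-205": 4, "SOP-310": 3}

def training_gaps(plan, requirements, qualified):
    """Return operations where qualified staff fall short of the plan."""
    gaps = {}
    for operation, needed in plan.items():
        qualification = requirements[operation]
        available = qualified.get(qualification, 0)
        if available < needed:
            gaps[operation] = needed - available
    return gaps

print(training_gaps(resource_plan, operation_requirements, qualified_headcount))
# tablet_press is short 2 qualified people
```

A leading KPI then falls out naturally: track the gap (or the qualified-coverage ratio) per operation over time, rather than only the percentage of assigned training completed.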

ASQ Audit Conference – Day 2 Morning

Jay Arthur “The Future of Quality”

Starts with "our heroes are gone" and "it is time to stand on our own two feet."

Focuses on the time and effort to train people on lean and six sigma, and how many people do not actually do projects. Basic point is that we use the tools in old ways which are not nimble and aligned to today’s needs. The tools we use versus the tools we are taught.

Hacking lean six sigma is along a similar line to Art Smalley’s four problems.

Applying the spirit of hacking to quality.

Covers value stream mapping and spaghetti diagrams with a focus on "the delays in between." Talks about how control charts are not more standard. Basic point is people don't spend enough time with the tools of quality. A point I have opinions on that will end up in another post.

Overcooked data versus raw data – summarized data has little or no nutritional value.

Brings this back to the issue of lack of problem diagnosis and not problem solving. Comes back to a need for a few easy tools and not the long-tail of six sigma.

This talk is very focused on LSS and the use of very specific tools, which seems like an odd choice at an Audit conference.

“Objectives and Process Measures: ISO 13485:2016 and ISO 9001:2015” by Nancy Pasquan

I appreciate it when the session manager (person who introduces the speaker and manages time) does a safety moment. Way to practice what we preach. Seriously, it should be a norm at all conferences.

Connects with the audience with a confession that the speaker is here to share her pain.

Objective – where we are going. Provides a flow chart of mission/vision (scope) -> establish process -> right direction? -> monitor and measure.

Objectives should challenge the organization. Should not be too easy. References SMART. Covers objectives in very standard way. “Remember the purpose is to focus the effort of the entire organization toward these goals.” Links process objectives to the overall company objectives.

Process measures are harder. Uses training as an example, which tells me adult learning practice is not as much a part of the QBOK way of thinking as I would like. Kirkpatrick is a pretty well-known model.

The idea that process measures will not tell us if we have the right process is a pretty loaded concept. Being careful of what you measure is good advice.

“Auditing Current Trends in Cleaning Validation” by Cathelene Compton

One of the trends in 2019 FDA Warning letters has been cleaning. While not one of the four big ones, cleaning validation always seems relevant and I’m looking forward to this presentation.

Starting with the fact that 15% of all observations on 483 forms relate to cleaning validation and documentation.

Reviews the three stages from the 2011 FDA Process Validation Guidance and then delves into a deeper validation lifecycle flowchart.

Some highlights:

Stage 1 – choosing the right cleaning agent; different manufacturers of cleaning agents; long-term damage to equipment parts and cleaning agent compatibility. Vendor study for cleaning agent; concentration levels; challenge the cleaning process with different concentrations.

Delves more into cleaning acceptance limits and the importance of calculating them in multiple ways. Stresses the importance of involving a toxicologist. Stresses the use of Permitted Daily Exposure (PDE) and how it can be difficult to get the F-factors.
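One common health-based way to calculate an acceptance limit is the Maximum Allowable Carryover (MACO) formula built on the PDE of the previously manufactured product, as described in the EMA guideline on health-based exposure limits. The numbers below are purely illustrative, not from the presentation:

```python
def maco_pde(pde_mg, min_batch_size_mg, max_daily_dose_mg):
    """Maximum Allowable Carryover (mg) into the next product.

    pde_mg:            PDE of the previous product (mg/day)
    min_batch_size_mg: minimum batch size of the next product (mg)
    max_daily_dose_mg: maximum daily dose of the next product (mg/day)
    """
    return pde_mg * min_batch_size_mg / max_daily_dose_mg

# Illustrative numbers only: PDE of product A = 0.5 mg/day,
# next product's minimum batch = 50 kg, its max daily dose = 1000 mg.
limit = maco_pde(0.5, 50_000_000, 1_000)
print(limit)  # 25000.0 mg allowed across the whole next batch
```

Calculating the limit this way, and cross-checking against other approaches (e.g. a dose-based or 10 ppm criterion) and taking the most stringent result, is why the speaker's point about involving a toxicologist matters: the PDE itself comes from a toxicological assessment.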

Ensure that analytical methods meet ICH Q2(R1). Perform recovery studies on materials of construction. For the cleaning agent, look for a target marker, and check whether other components in the laboratory also use this marker. A common pitfall is a glassware washer that is not validated.

Trends around recovery factors; for example, recoveries for stainless steel should be 90%.

Discusses matrix rationales from the Mylan 483, stressing the need to ensure all toxicity levels are determined and pharmacological potency is accounted for.

Stage 2 – all studies should include visual inspection, micro, and analytical testing. Materials of construction and surface area calculations, and swabs on hard-to-clean or water hold-up locations. Chromatography must be assessed for extraneous peaks.

Verification vs. validation – validation always preferred.

Training – qualify the individuals who swab. Qualify visual inspectors.

Should see campaign studies, clean hold studies and dirty equipment hold studies.

Stage 3 – continued monitoring is so critical, and it is where folks fall flat. Do it every 6 months, and no more than a year apart, for manual cleaning. CIP should be under a periodic review of mechanical aspects, which means requalification can be 2-3 years out.