Implementing a Quality Ambassador Program

Quality ambassadors can influence their peers to prioritize quality, thereby strengthening the culture of quality in the organization. Quality leaders can use this guide to develop a quality ambassador program by identifying, training, and engaging ambassadors.

Utilizing Kotter’s eight accelerators for change, we can implement a Quality Ambassador program like this:

  • Create a strong sense of urgency around a big opportunity: Demonstrate the organizational value of Ambassadors by performing a needs analysis to assess the current state of employee engagement with quality.
  • Build and evolve a guiding coalition: Bring together key stakeholders from across the organization who will provide input into the program's design and support its implementation.
  • Form a change vision and strategic initiatives: Identify the key objectives for implementing a Quality Ambassador program and outline the lines of effort required to successfully design and pilot it.
  • Enlist a volunteer army: Reach out to and engage informal leaders at all levels of the organization. Find your current informal Ambassadors and draw them in.
  • Enable action by removing barriers: Be vigilant for factors that impede progress. Work with your Ambassadors and senior leaders to give teams the freedom and support to succeed.
  • Generate and celebrate short-term wins: Pilot the program. Create success stories by looking at the successful outcomes of teams that have Quality Ambassadors and by listening to team members and their customers for evidence that the quality culture is improving. Your goal is to create an environment where teams that do not have Quality Ambassadors ask how they can participate.
  • Sustain acceleration: Scale the impact of your program by implementing it more broadly within the organization.

Define the Key Responsibilities of Quality Ambassadors

  
  • What activities should Quality Ambassadors focus on? Example: reinforce key quality messages with co-workers; drive participation in quality improvement projects; provide input to improve the culture of quality; provide input to improve and maintain data integrity.
  • What will Quality Ambassadors need from their managers? Example: approval to participate, renewed annually.
  • What will Quality Ambassadors receive from the Quality team? Example: training on ways to improve employee engagement with quality; training on data integrity; support for any questions or objections that arise.
  • What are Quality Ambassadors' unique responsibilities? Example: acting as the point of contact for all quality-related queries; reporting feedback from their teams to Quality leadership; conveying to employees the personal impact of quality on their effectiveness; mitigating employee objections about pursuing quality improvement projects; tackling obstacles to rolling out quality initiatives.
  • What responsibilities do Quality Ambassadors share with other employees? Example: constantly prioritizing quality in their day-to-day work.
  • What is the expected time commitment? Example: 8-10 hours/month, plus 6 hours of training at launch.

Metrics to Measure Success

  • Active participation levels (direct impact of the Ambassadors' work: high; Ambassadors should be held directly responsible for these metrics): percentage of organizational units adopting the culture of quality program; number of nominations for quality recognition programs; quality observations identified during Gemba walks; participation in, and effectiveness of, problem-solving and root-cause processes; number of ongoing quality improvement projects; percentage of employees receiving quality training.
  • Culture of quality assessments (direct impact: medium; the Quality Ambassador program is one factor for improvement): culture of quality surveys; culture of quality maturity assessments.
  • Overall quality performance (direct impact: low; the Quality Ambassador program is one factor for improvement): key KPIs associated with quality; audit scores; cost of poor quality.
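
To make the active participation metrics concrete, here is a minimal Python sketch of how they could be rolled up from per-unit data. The UnitSnapshot fields and the roll-up choices are illustrative assumptions, not a prescribed data model.

```python
from dataclasses import dataclass

@dataclass
class UnitSnapshot:
    """One organizational unit's reporting-period numbers (illustrative fields)."""
    adopted_program: bool
    employees: int
    employees_trained: int
    improvement_projects: int

def participation_summary(units):
    """Roll unit snapshots up into the active-participation metrics above."""
    total_employees = sum(u.employees for u in units)
    return {
        "pct_units_adopting": 100.0 * sum(u.adopted_program for u in units) / len(units),
        "pct_employees_trained": 100.0 * sum(u.employees_trained for u in units) / total_employees,
        "active_improvement_projects": sum(u.improvement_projects for u in units),
    }

units = [
    UnitSnapshot(adopted_program=True, employees=120, employees_trained=95, improvement_projects=4),
    UnitSnapshot(adopted_program=False, employees=60, employees_trained=12, improvement_projects=0),
]
print(participation_summary(units))
```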

Process Architecture

Building a good process requires clear ownership and a deliberate plan. There is a fair amount of work that goes into it, which can be broken down as follows:

   
Planning:

  • Process Measurement: identify, design, and implement balanced process metrics and measurement; implement process metrics and measurement reporting mechanisms; identify and implement KPIs (and KRIs) aligned to the process; evaluate cycle times and identify potential wastes; align the level and recognition of people involved in the process with the process design.
  • Customer Experience: design the process with customer interaction trigger mechanisms; design the process in line with customer expectations; identify customer process performance expectations; design customer entry points and define transaction types.
  • Process Change: identify incremental and re-engineering process enhancement opportunities with staff involvement; design the process with minimal hand-offs; create and execute process improvement plans; identify process automation opportunities; pilot the process design to ensure it meets performance objectives.
  • Governance: design an efficient process with governance and internal control considerations.
  • Capacity: conduct demand and capacity planning activities.
  • Staff Training: develop and conduct staff training initiatives in line with customer, process, product, and systems expectations; develop a skills matrix and staff capability requirements in line with the process design.
  • Technology: define technology enablers.
  • Alignment: align process objectives with organizational goals.
  • Change Management: engage impacted stakeholders on process changes.

Control:

  • Process Measurement: monitor process performance; report on process and staff performance using visual management tools; continuously gather customer satisfaction and expectations of the process; actively manage process exceptions; monitor staff performance metrics.
  • Process Change: identify process improvement opportunities on a continuous basis; focus on process hand-off management and tracking; maintain and continuously update the process.
  • Capacity: demand and capacity planning and monitoring.
  • Governance: define and conform to process documentation standards.
  • Change Management: process communication and awareness.
  • Staff Training: utilize process documentation knowledge to facilitate staff training.
Like any activity, it helps to document it. I use a template like this.

Using the Outcome Identification Loop

The Outcome Identification Loop asks four questions around a given outcome which can be very valuable in understanding a proposed design, event, or risk.

The four questions are:

  1. Who else might this affect? (Stakeholder Question)
  2. What else might affect them? (Stakeholder Impact Question)
  3. What else might affect this? (System/Analysis Design Question)
  4. What else might this affect? (Consequence Question)

Answering these questions surfaces outcomes and relationships that further define the central question, and the results can be used to shape problem-solving, risk mitigation, and process improvement.

Questions 1 ("Who else might this affect?") and 2 ("What else might affect them?") are paired questions from stakeholder identification and analysis techniques.

Question 3 “What else might affect this?” relates to system analysis and design and can be fed by, and lead to, the chains of outcomes elicited using analysis methods, such as process modelling and root cause analysis.

Question 4 “What else might this affect?” considers uncertainty and risk.

These four questions are iterative. Use them near the beginning to define the problem, and again at the end to tie the entire piece of work together.
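
As a rough illustration, the loop can be run as a simple worksheet: pose each question about the outcome, record the answers, and feed new items back into another pass. In this Python sketch, only the four question labels come from the post; the outcome/answers structure is an assumption for illustration.

```python
# The four questions of the Outcome Identification Loop.
QUESTIONS = {
    "stakeholder": "Who else might this affect?",
    "stakeholder_impact": "What else might affect them?",
    "system_design": "What else might affect this?",
    "consequence": "What else might this affect?",
}

def run_loop(outcome: str, answers: dict[str, list[str]]) -> list[str]:
    """Walk the four questions for one outcome and flatten the answers;
    new findings can seed the next pass around the loop."""
    findings = []
    for key, question in QUESTIONS.items():
        for answer in answers.get(key, []):
            findings.append(f"{outcome} | {question} -> {answer}")
    return findings

# Example: one pass around a proposed design change.
notes = run_loop(
    "Switch to electronic batch records",
    {
        "stakeholder": ["QA reviewers", "shop-floor operators"],
        "system_design": ["upstream LIMS data quality"],
        "consequence": ["review cycle time", "training load"],
    },
)
print("\n".join(notes))
```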

Managing Events Systematically

Being good at problem-solving is critical to success in an organization. I’ve written quite a bit on problem-solving, but here I want to tackle the amount of effort we should apply.

Not all problems should be treated the same, and problems exist at different levels. Losing sight of these two aspects contributes to some poor problem-solving practices.

It helps to look at problems systematically across our organization. The iceberg analogy is a pretty popular way to break this down, focusing on Events, Patterns, Underlying Structure, and Mental Model.

Iceberg analogy

Events

Events start with the observation or discovery of a situation that is different in some way. What is being observed is a symptom and we want to quickly identify the problem and then determine the effort needed to address it.

This is where Art Smalley’s Four Types of Problems comes in handy to help us take a risk-based approach to determining our level of effort.

Type 1 problems, troubleshooting, let us fix issues where there is a clear understanding of the problem and a clear pathway. Have a flat tire? Fix it. Have a document error? Fix it using good documentation practices.

It is valuable to work your way through common troubleshooting scenarios and ensure the appropriate linkages between the different processes, so that problem-solving takes a system-wide approach.

Corrective maintenance is a great example of troubleshooting, as it involves restoring an asset to its original state. It includes documentation, a return to service, and analysis of data. That analysis identifies problems that require going deeper into problem-solving. It should also have appropriate tie-ins to evaluate when the impact of an asset breaking leads to other problems (for example, impact to product), which can also require additional problem-solving.

It can be helpful for the organization to build decision trees that help folks decide whether a given problem stays as troubleshooting or whether it also requires going to type 2, "gap from standard" — something like the sketch below.
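
Here is a minimal Python sketch of what such a decision tree might look like. The branch criteria (product impact, recurrence, a clear cause, return to standard) are illustrative assumptions; your own tree should encode your quality system's actual escalation rules.

```python
def triage_event(clear_cause: bool, meets_standard_after_fix: bool,
                 product_impact: bool, recurring: bool) -> str:
    """Hypothetical triage: does an event stay as troubleshooting (Type 1)
    or also require a gap-from-standard investigation (Type 2)?"""
    if product_impact or recurring:
        # Potential impact to core requirements, or a pattern: go deeper.
        return "Type 2: open a deviation and perform root cause analysis"
    if clear_cause and meets_standard_after_fix:
        return "Type 1: troubleshoot, document, and return to service"
    return "Type 2: gap from standard; escalate for deeper problem-solving"

# A one-off issue with a clear cause, fully restored to standard, stays Type 1.
print(triage_event(clear_cause=True, meets_standard_after_fix=True,
                   product_impact=False, recurring=False))
```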

Type 2 problems, gap from standard, mean that the actual result does not meet the expected result, and there is a potential of not meeting the core requirements (objectives) of the process, product, or service. This is where we start deeper problem-solving, including root cause analysis.

Please note that troubleshooting is often done within a type 2 problem; we call that a correction. If the bioreactor cannot maintain temperature during a run, that is a type 2 problem, but I am certainly going to apply troubleshooting immediately as well.

Take documentation errors. There is a practice in place, part of good documentation practices, for troubleshooting around documents (how to correct, how to record a comment, etc.). By working through the various ways documentation can go wrong, and sorting out which ones are solved through troubleshooting alone and do not rise to type 2 problems, we can avoid creating a lot of noise in our system.

Core to the quality system is trending: looking for possible signals that require additional effort. Trending can help determine where problems lie and can also drive up the level of effort necessary. A simple sketch of such a signal check follows.
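
As a toy illustration, a trending rule can be as simple as flagging a point that sits well above its recent history. The two-sigma rule and six-point window below are assumptions for the sketch, not recommended control limits.

```python
from statistics import mean, stdev

def trend_signal(counts: list[int], window: int = 6) -> bool:
    """Flag a possible signal when the latest point sits more than two
    standard deviations above the mean of the preceding window.
    (Illustrative rule only; pick limits that fit your own data.)"""
    if len(counts) <= window:
        return False
    history = counts[-window - 1:-1]
    sigma = stdev(history)
    return sigma > 0 and counts[-1] > mean(history) + 2 * sigma

# Monthly documentation-error counts; the jump to 14 warrants a deeper look.
print(trend_signal([4, 5, 3, 6, 4, 5, 14]))
```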

Underlying Structure

Root Cause Analysis is about finding the underlying structure of the problem; it is what defines the work applied to a type 2 problem.

Not all problems require the same amount of effort, and type 2 problems really sit on a scale based on consequences, which can help drive the level of effort. This scale should be based on the impact on the organization's ability to meet its quality objectives, the requirements behind the product or service.

For example, in the pharma world there are three major criteria:

  •  safety, rights, or well-being of patients (including subjects and participants, human and non-human)
  • data integrity (includes confidence in the results, outcome, or decision dependent on the data)
  • ability to meet regulatory requirements (which stem from but can be a lot broader than the first two)

These three criteria can be sliced and diced a lot of ways, but serve our example well.

To these three criteria we add a scale of possible harm to derive our criticality; an example can look like this:

  • Critical: The event has resulted in, or is clearly likely to result in, any one of the following outcomes: significant harm to the safety, rights, or well-being of subjects or participants (human or non-human) or patients; compromised data integrity to the extent that confidence in the results, outcome, or decision dependent on the data is significantly impacted; or regulatory action against the company.
  • Major: The event(s), were they to persist over time or become more serious, could potentially, though not imminently, result in any one of the following outcomes: harm to the safety, rights, or well-being of subjects or participants (human or non-human) or patients; or compromised data integrity to the extent that confidence in the results, outcome, or decision dependent on the data is significantly impacted.
  • Minor: An isolated or recurring triggering event that does not otherwise meet the definitions of Critical or Major quality impacts.
Example of Classification of Events in a Pharmaceutical Quality System
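
A classification scheme like this can be encoded directly, which also helps keep triage consistent. This Python sketch assumes a simple "actual/potential/none" rating for each criterion; the mapping mirrors the example table above but is not a validated rule set.

```python
def classify_event(patient_harm: str, data_integrity: str, regulatory: str) -> str:
    """Map the three criteria to a classification, mirroring the table above.
    Each argument is "actual", "potential", or "none" (an assumed encoding)."""
    impacts = (patient_harm, data_integrity, regulatory)
    if "actual" in impacts:
        return "Critical"  # harm has occurred or is clearly likely
    if "potential" in impacts:
        return "Major"     # could result in harm if it persists or worsens
    return "Minor"         # does not meet the Critical or Major definitions

# A temperature excursion with potential data-integrity impact classifies as Major.
print(classify_event(patient_harm="none", data_integrity="potential", regulatory="none"))
```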

This classification will drive the level of effort in the investigation, as well as whether the CAPA addresses the underlying structures alone or also goes after the mental models, and thus drives culture change.

Mental Model

Here is where we address building a quality culture. In CAPA lingo this is usually more a preventive action than a corrective action. In the simplest of terms, corrective actions address the underlying structures of the problem in the process or asset where the event happened. Preventive actions deal with underlying structures in other (usually related) processes or assets, or get to the mindsets that allowed those underlying structures to exist in the first place.

Solving Problems Systematically

By applying this system perspective to our problem-solving, realizing that not everything needs a complete rebuild of the foundation, and looking holistically across our systems, we can ensure that we apply the level of effort needed to truly build the house of quality.

Q9 (r1) Risk Management Draft

Q9 (r1) starts with all the same sections on scope and purpose. There are slight differences in ordering in the scope section, mainly because of the new sections below, but nothing substantially different.

4.1 Responsibilities

This is the first major change: added paragraphs on subjectivity, which basically admit that it exists and that everyone should be aware of it. It is also something that should be addressed in the quality system: "All participants involved with quality risk management activities should acknowledge, anticipate, and address the potential for subjectivity."

Aligned with that requirement is a third bullet for decision-makers: “assure that subjectivity in quality risk management activities is controlled and minimised, to facilitate scientifically robust risk-based decision making.”

Solid additions, if a bit high-level. Recognizing the impact of subjectivity, a topic of some interest on this blog, is critical to truly developing good risk management.

Expect to start getting questions on how you acknowledge, anticipate and address subjectivity. It will take a few years for this to work its way through the various inspectorates after approval, but it will. There are various ways to crack this, but it will require both training and tools to make it happen. It also reinforces the need for well-trained facilitators.

5.1 Formality in Quality Risk Management

"The degree of rigor and formality of quality risk management should reflect available knowledge and be commensurate with the complexity and/or criticality of the issue to be addressed."

That statement in Q9 has long been a nugget of debate, so it is good to see section 5.1 added to give guidance on how to implement it, utilizing three axes:

  • Uncertainty: This draft of Q9 utilizes a fairly simple definition of uncertainty and needs to be better aligned to ISO 31000. This is where I will definitely submit comments. Taking a straight knowledge management approach and defining uncertainty solely as a lack of knowledge misses the other elements of uncertainty that are important.
  • Importance: This was probably the critical determination folks applied to formality in the past.
  • Complexity: Not much is said on complexity, which is worrisome because this is a tough one to truly analyze. It requires systems thinking, and a lot of folks confuse complicated and complex.

This section is important; the industry needs it, as too many companies have primitive risk management approaches because they shoehorn everything into a one-size-fits-all level of formality and thus either go overboard or do not go far enough. But as written, this draft of Q9 is a boon to consultants.

We then get guidance on just how much effort should go into higher versus lower formality, which boils down to: higher formality is more stand-alone, while lower formality happens within another aspect of the quality system.
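
One way to operationalize the three axes is a simple scoring model. The 1-3 scales and tier thresholds in this Python sketch are invented for illustration; the draft guidance does not prescribe any scoring.

```python
def formality_level(uncertainty: int, importance: int, complexity: int) -> str:
    """Combine the three Q9(r1) axes (each scored 1-3 here, an assumed scale)
    into a formality tier. Thresholds are illustrative, not from the guidance."""
    score = uncertainty + importance + complexity
    if score >= 7:
        return "higher formality: stand-alone risk assessment with a trained facilitator"
    if score >= 5:
        return "moderate formality: documented assessment within the owning process"
    return "lower formality: rule-based handling inside another quality system element"

# High importance with modest uncertainty and low complexity lands mid-tier.
print(formality_level(uncertainty=2, importance=3, complexity=1))
```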

5.2 Risk-based Decision Making

Another new section, definitely designed to align with ISO 9001:2015 thinking. Based on the level of formality, we are given three types, with the first two covering separate risk management activities and the third being rule-based within procedures.

6. INTEGRATION OF QUALITY RISK MANAGEMENT INTO INDUSTRY AND REGULATORY OPERATIONS

Section 6 gets new subsections: "The role of Quality Risk Management in addressing Product Availability Risks," "Manufacturing Process Variation and State of Control (internal and external)," "Manufacturing Facilities," and "Oversight of Outsourced Activities and Suppliers." These new subsections expand on what used to be solely a list of bullet points and provide points to consider in their topic areas. They are also good places to make sure risk management is built in, if it is not already.

Overall Thoughts

The ICH members did exactly what they told us they were going to do, and pretty much nothing else. I do not think they dealt with the issues deeply and definitively enough, and they have added a whole lot of ambiguity into the guidance, which is better than being silent on the topic, but I'm hoping for a lot more.

Subjectivity, uncertainty, and formality are critical topics. Hopefully your risk management program is already taking these into account.

I’m hoping we will also see a quick revision of the PIC/S “Assessment of Quality Risk Management Implementation” to align to these concepts.