Implementing a Quality Ambassador Program

Quality ambassadors can influence their peers to prioritize quality, thereby strengthening the culture of quality in the organization. Quality leaders can use this guide to develop a quality ambassador program by identifying, training, and engaging ambassadors.

Utilizing Kotter’s eight accelerators for change, we can implement a Quality Ambassador program like this:

• Create a strong sense of urgency around a big opportunity: Demonstrate the organizational value of Ambassadors by performing a needs analysis to assess the current state of employee engagement with quality.
• Build and evolve a guiding coalition: Bring together key stakeholders from across the organization who will provide input into the program’s design and support its implementation.
• Form a change vision and strategic initiatives: Identify the key objectives for implementing a Quality Ambassador program and outline the lines of effort required to successfully design and pilot it.
• Enlist a volunteer army: Reach out and engage informal leaders at all levels of the organization. Find your current informal Ambassadors and draw them in.
• Enable action by removing barriers: Be vigilant for factors that impede progress. Work with your Ambassadors and senior leaders to give teams the freedom and support to succeed.
• Generate and celebrate short-term wins: Pilot the program. Create success stories by looking at the successful outcomes of teams that have Quality Ambassadors and by listening to team members and their customers for evidence that quality culture is improving. Your goal is to create an environment where teams that do not have Quality Ambassadors are asking how they can participate.
• Sustain acceleration: Scale the impact of your program by implementing it more broadly within the organization.

Define the Key Responsibilities of Quality Ambassadors

  
• What activities should Quality Ambassadors focus on? Example: reinforce key quality messages with co-workers; drive participation in quality improvement projects; provide input to improve the culture of quality; provide input to improve and maintain data integrity.
• What will Quality Ambassadors need from their managers? Example: approval to participate, renewed annually.
• What will Quality Ambassadors receive from the Quality team? Example: training on ways to improve employee engagement with quality; training on data integrity; support for any questions or objections that arise.
• What are Quality Ambassadors’ unique responsibilities? Example: acting as the point of contact for all quality-related queries; reporting feedback from their teams to Quality leadership; conveying to employees the personal impact of quality on their effectiveness; mitigating employee objections about pursuing quality improvement projects; tackling obstacles to rolling out quality initiatives.
• What responsibilities do Quality Ambassadors share with other employees? Example: consistently prioritizing quality in their day-to-day work.
• What is the expected time commitment? Example: 8-10 hours per month, plus 6 hours of training at launch.

Metrics to Measure Success

Each type of metric differs in how directly the Ambassadors’ work affects it, and in how it should be used:

• Active Participation Levels (direct impact of the Ambassadors’ work: high). Metrics: percentage of organizational units adopting the culture of quality program; number of nominations for quality recognition programs; quality observations identified during Gemba walks; participation in and effectiveness of problem-solving or root-cause processes; number of ongoing quality improvement projects; percentage of employees receiving quality training. Recommendation: hold Ambassadors directly responsible for these metrics.
• Culture of Quality Assessments (direct impact: medium). Metrics: culture of quality surveys; culture of quality maturity assessments. Recommendation: treat the Quality Ambassador program as one factor for improvement.
• Overall Quality Performance (direct impact: low). Metrics: key KPIs associated with quality; audit scores; cost of poor quality. Recommendation: treat the Quality Ambassador program as one factor for improvement.
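
If you want to compute the participation metrics consistently, a small script is often enough. Here is a minimal Python sketch; the record structure and field names are illustrative assumptions, not part of any particular quality system.

```python
from dataclasses import dataclass

@dataclass
class OrgUnit:
    name: str
    adopted_program: bool       # unit has adopted the culture of quality program
    employees: int
    employees_trained: int      # employees who completed quality training

def participation_metrics(units):
    """Compute two of the active-participation metrics as percentages."""
    units_adopting = sum(1 for u in units if u.adopted_program)
    employees = sum(u.employees for u in units)
    trained = sum(u.employees_trained for u in units)
    return {
        "units_adopting_pct": 100.0 * units_adopting / len(units),
        "employees_trained_pct": 100.0 * trained / employees,
    }

# Example usage with made-up data:
units = [
    OrgUnit("Manufacturing", True, 120, 96),
    OrgUnit("QC Lab", True, 40, 40),
    OrgUnit("Warehouse", False, 25, 5),
]
print(participation_metrics(units))  # roughly 66.7% adopting, 76.2% trained
```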

Process Architecture

Building a good process requires clear ownership and a deliberate plan. There is a fair amount of work that goes into it, which can be broken down into two categories, Planning and Control, each with sub-categories and their basic themes:

Planning

• Process Measurement: identify, design, and implement balanced process metrics and measurement; implement process metrics and measurement reporting mechanisms; identify and implement KPIs (and KRIs) aligned to the process; evaluate cycle times and identify potential wastes; align the level and recognition of the people involved with the process design.
• Customer Experience: design the process with customer interaction trigger mechanisms; design the process in line with customer expectations; identify customer process performance expectations; design customer entry points and define transaction types.
• Process Change: identify incremental and re-engineering process enhancement opportunities with staff involvement; design the process with minimal hand-offs; create and execute process improvement plans; identify process automation opportunities; pilot the process design to ensure it meets performance objectives.
• Governance: design an efficient process with governance and internal control considerations.
• Capacity: conduct demand and capacity planning activities.
• Staff Training: develop and conduct staff training initiatives in line with customer, process, product, and systems expectations; develop a skills matrix and staff capability requirements in line with the process design.
• Technology: define technology enablers.
• Alignment: align process objectives with organizational goals.
• Change Management: engage impacted stakeholders on process changes.

Control

• Process Measurement: monitor process performance; report on process and staff performance using visual management tools; continuously obtain customer satisfaction and expectations of the process; actively manage process exceptions; monitor staff performance metrics.
• Process Change: identify process improvement opportunities on a continuous basis; manage and track process hand-offs; maintain and continuously update the process.
• Capacity: plan and monitor demand and capacity.
• Governance: define and conform to process documentation standards.
• Change Management: process communication and awareness.
• Staff Training: utilize process documentation knowledge to facilitate staff training.
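
To make the process measurement themes above concrete, here is a minimal Python sketch of a cycle-time report; the step names, durations, and target are illustrative assumptions.

```python
from statistics import mean

def cycle_time_report(step_durations, target_hours):
    """Summarize average cycle time per process step and flag waste-review candidates.

    step_durations maps a step name to a list of observed durations (hours).
    Steps whose average exceeds the target get flagged; waiting, hand-offs,
    and rework often hide in those steps.
    """
    for step, durations in step_durations.items():
        avg = mean(durations)
        flag = "review for waste" if avg > target_hours else "ok"
        print(f"{step:<15} avg {avg:5.1f} h  {flag}")

# Example usage with made-up observations:
cycle_time_report(
    {
        "Draft record": [2.0, 2.5, 1.5],
        "Peer review": [8.0, 12.0, 10.0],   # long waits between hand-offs
        "QA approval": [4.0, 3.5, 5.0],
    },
    target_hours=6.0,
)
```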

Like any activity, it helps to document this work; I use a template for it.

Using the Outcome Identification Loop

The Outcome Identification Loop asks four questions about a given outcome, which can be very valuable in understanding a proposed design, event, or risk.

The four questions are:

1. Who else might this affect? (the Stakeholder Question)
2. What else might affect them? (the Stakeholder Impact Question)
3. What else might affect this? (the System Analysis/Design Question)
4. What else might this affect? (the Consequence Question)

Answering these questions surfaces outcomes and relationships that further define the central question, and those findings can be used to shape problem-solving, risk mitigation, and process improvement.

Questions 1 (“Who else might this affect?”) and 2 (“What else might affect them?”) are paired questions drawn from stakeholder identification and analysis techniques.

Question 3 “What else might affect this?” relates to system analysis and design and can be fed by, and lead to, the chains of outcomes elicited using analysis methods, such as process modelling and root cause analysis.

Question 4 “What else might this affect?” considers uncertainty and risk.

These four questions can be applied iteratively. Use them near the beginning to define the problem and again at the end to tie the entire work together.
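
One way to apply the loop repeatably is to capture the four questions in a small structure and feed new findings back through it. This is a sketch under assumed names only; the `answer` callable stands in for however your team actually gathers responses (a workshop, interviews, and so on).

```python
def run_outcome_loop(outcome, answer):
    """Ask the four Outcome Identification Loop questions about one outcome.

    `answer(outcome, question)` returns a list of findings. Anything returned
    under "consequences" can itself be fed back through the loop, which is
    what makes the technique iterative.
    """
    questions = {
        "stakeholders": "Who else might this affect?",
        "stakeholder_influences": "What else might affect them?",
        "design_influences": "What else might affect this?",
        "consequences": "What else might this affect?",
    }
    return {key: answer(outcome, q) for key, q in questions.items()}

# Example with a stub answer function standing in for a facilitated session:
findings = run_outcome_loop(
    "Move batch records to an electronic system",
    lambda outcome, q: [f"(workshop answers to: {q!r})"],
)
print(findings["consequences"])
```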

Managing Events Systematically

Being good at problem-solving is critical to success in an organization. I’ve written quite a bit on problem-solving, but here I want to tackle the amount of effort we should apply.

Not all problems should be treated the same, and problems exist at different levels. Overlooking these two aspects can lead to some poor problem-solving practices.

It helps to look at problems systematically across our organization. The iceberg analogy is a popular way to break this down, focusing on Events, Patterns, Underlying Structure, and Mental Model.


Events

Events start with the observation or discovery of a situation that is different in some way. What is being observed is a symptom and we want to quickly identify the problem and then determine the effort needed to address it.

This is where Art Smalley’s Four Types of Problems comes in handy to help us take a risk-based approach to determining our level of effort.

Type 1 problems, troubleshooting, let us solve issues where we have a clear understanding of the problem and a clear pathway to a fix. Have a flat tire? Fix it. Have a documentation error? Fix it using good documentation practices.

It is valuable to work your way through common troubleshooting scenarios and establish the appropriate linkages between the different processes, so that problem-solving takes a system-wide approach.

Corrective maintenance is a great example of troubleshooting, as it involves restoring an asset to its original state. It includes documentation, a return to service, and analysis of data. From that analysis, problems are identified that require going deeper into problem-solving. There should also be appropriate tie-ins to evaluate when the impact of an asset breaking leads to other problems (for example, impact to product), which can likewise require additional problem-solving.

It can be helpful for the organization to build decision trees that help people decide whether a given problem stays as troubleshooting or also requires escalation to type 2, “gap from standard.”
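
As an illustration only, a first cut of such a decision tree might look like the sketch below; the questions, their order, and the attribute names are hypothetical and would come from your own procedures.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    clear_fix_exists: bool      # known issue with a known remedy
    recurring: bool             # has this happened repeatedly?
    requirement_at_risk: bool   # could a core requirement be missed?

def triage(p: Problem) -> str:
    """Route a problem: type 1 (troubleshooting) vs. type 2 (gap from standard)."""
    if p.requirement_at_risk or p.recurring:
        # Deeper problem-solving; immediate troubleshooting still happens as a correction.
        return "type 2: gap from standard"
    if p.clear_fix_exists:
        return "type 1: troubleshoot and document"
    return "unclear: escalate for a human decision"

print(triage(Problem(clear_fix_exists=True, recurring=False, requirement_at_risk=False)))
# -> type 1: troubleshoot and document
```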

Type 2 problems, gap from standard, mean that the actual result does not meet the expected result, and there is a potential of not meeting the core requirements (objectives) of the process, product, or service. This is where we start deeper problem-solving, including root cause analysis.

Please note that troubleshooting is often done within a type 2 problem; we call that a correction. If the bioreactor cannot maintain temperature during a run, that is a type 2 problem, but I am certainly going to apply troubleshooting immediately as well.

Take documentation errors. There is a practice in place, part of good documentation practices, for troubleshooting around documents (how to correct an entry, how to record a comment, and so on). By working through the various ways documentation can go wrong and determining which ones are resolved through troubleshooting alone, without becoming type 2 problems, we avoid creating a lot of noise in our system.

Core to the quality system is trending: looking for possible signals that require additional effort. Trending can help determine where problems lie and can also drive up the level of effort necessary.
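
Trending can start simply. The sketch below flags event categories occurring unusually often in a recent window compared to their historical baseline; the window sizes, threshold factor, and data shape are illustrative assumptions.

```python
from collections import Counter

def trend_signals(events, recent_days=30, baseline_days=180, factor=2.0):
    """Flag event categories whose recent daily rate exceeds `factor` times baseline.

    `events` is a list of (category, age_in_days) tuples. Categories that are
    brand new (no baseline history) are flagged as well.
    """
    recent = Counter(c for c, age in events if age <= recent_days)
    baseline = Counter(c for c, age in events if recent_days < age <= baseline_days)
    signals = []
    for category, count in recent.items():
        recent_rate = count / recent_days
        base_rate = baseline[category] / (baseline_days - recent_days)
        if base_rate == 0 or recent_rate > factor * base_rate:
            signals.append(category)
    return signals

# Example usage with made-up events:
events = [("label error", 3), ("label error", 10), ("label error", 25),
          ("temp excursion", 90), ("label error", 150)]
print(trend_signals(events))  # -> ['label error']
```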

Underlying Structure

Root cause analysis, the deeper work applied to a type 2 problem, is about finding the underlying structure of the problem.

Not all problems require the same amount of effort; type 2 problems sit on a scale of consequences that helps drive the level of effort. That scale should be based on the impact to the organization’s ability to meet its quality objectives, the requirements behind the product or service.

For example, in the pharma world there are three major criteria:

  • safety, rights, or well-being of patients (including subjects and participants, human and non-human)
  • data integrity (including confidence in the results, outcomes, or decisions dependent on the data)
  • ability to meet regulatory requirements (which stem from, but can be much broader than, the first two)

These three criteria can be sliced and diced in a lot of ways, but they serve our example well.

To these three criteria we add a scale of possible harm to derive our criticality; an example can look like this:

• Critical: The event has resulted in, or is clearly likely to result in, any one of the following outcomes: significant harm to the safety, rights, or well-being of subjects or participants (human or non-human) or patients; compromised data integrity to the extent that confidence in the results, outcome, or decision dependent on the data is significantly impacted; or regulatory action against the company.
• Major: The event(s), were they to persist over time or become more serious, could potentially, though not imminently, result in any one of the following outcomes: harm to the safety, rights, or well-being of subjects or participants (human or non-human) or patients; or compromised data integrity to the extent that confidence in the results, outcome, or decision dependent on the data is significantly impacted.
• Minor: An isolated or recurring triggering event that does not otherwise meet the definitions of Critical or Major quality impacts.
Example of Classification of Events in a Pharmaceutical Quality System
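
Rules like these can be encoded so that triage is applied consistently. The sketch below is a hypothetical encoding of the table above; the keyword names paraphrase its criteria, and real classification still requires controlled procedures and judgment.

```python
def classify_event(*, actual_or_imminent_harm, actual_or_imminent_integrity_loss,
                   regulatory_action, potential_harm, potential_integrity_loss):
    """Map the harm criteria to a Critical/Major/Minor classification.

    "Actual or imminent" outcomes map to Critical; "potential though not
    imminent" outcomes map to Major; everything else is Minor.
    """
    if actual_or_imminent_harm or actual_or_imminent_integrity_loss or regulatory_action:
        return "Critical"
    if potential_harm or potential_integrity_loss:
        return "Major"
    return "Minor"

# Example usage:
print(classify_event(
    actual_or_imminent_harm=False,
    actual_or_imminent_integrity_loss=False,
    regulatory_action=False,
    potential_harm=True,
    potential_integrity_loss=False,
))  # -> Major
```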

This classification drives the level of effort for the investigation, and it also determines whether the CAPA addresses the underlying structures alone or goes further, addressing the mental models and thus driving culture change.

Mental Model

Here is where we address building a quality culture. In CAPA lingo this is usually more a preventive action than a corrective action. In the simplest of terms, corrective actions address the underlying structures of the problem in the process or asset where the event happened. Preventive actions deal with underlying structures in other (usually related) processes or assets, or get to the mindsets that allowed those underlying structures to exist in the first place.

Solving Problems Systematically

By applying this system perspective to our problem-solving, by realizing that not everything needs a complete rebuild of the foundation, and by looking holistically across our systems, we can ensure that we apply the right level of effort and truly build the house of quality.