Quality, Decision Making and Putting the Human First

Quality stands in a position, sometimes unique in an organization, of engaging with stakeholders to understand what objectives and positions the organization needs to assume, and the choices being made in order to achieve those objectives and positions.

The effectiveness of a team in making good decisions depends on its ability to analyze a problem and generate alternatives. As I discussed in my post “Design Lifecycle within PDCA – Planning”, experimentation plays a critical part in the decision-making process. When designing the solution we always consider:

  • Always include a “do nothing” option: Not every decision or problem demands action. Sometimes the best course is to do nothing.
  • How do you know what you think you know? This should be a question everyone is comfortable asking. It allows people to check assumptions and to question claims that, while convenient, are not based on any kind of data, firsthand knowledge, or research.
  • Ask tough questions. Be direct and honest. Push hard to get to the core of what the options look like.
  • Have a dissenting option. It is critical to include unpopular but reasonable options. Make sure to include opinions or choices you personally don’t like, but for which good arguments can be made. This keeps you honest and gives anyone who sees the pros/cons list a chance to convince you to make a better decision than the one you might have arrived at on your own.
  • Consider hybrid choices. Sometimes it’s possible to take an attribute of one choice and add it to another. Like exploratory design, there are always interesting combinations in decision making. This can explode the number of choices, which can slow things down and create more complexity than you need. Watch for the zone of indifference (options that are not perceived as making any difference or adding any value) and don’t waste time in it.
  • Include all relevant perspectives. Consider if this decision impacts more than just the area the problem is identified in. How does it impact other processes? Systems?

A struggle every organization has is how to think through problems in a truly innovative way. Installing new processes into an old bureaucracy will only replace one form of control with another. We need to rethink the very matter of control and what it looks like within an organization. It is not about change management; on its own, change management will just shift the patterns of the past. To truly transform we need a new way of thinking.

One of my favorite books on just how to do this is Humanocracy: Creating Organizations as Amazing as the People Inside Them by Gary Hamel and Michele Zanini. In this book, the authors argue that business must fundamentally put humans first. The idea of human ability, and how to cultivate and unleash it, is an underlying premise of the book.

Visualized by Rose Fastus

it’s possible to capture the benefits of bureaucracy—control, consistency, and coordination—while avoiding the penalties—inflexibility, mediocrity, and apathy.

Gary Hamel and Michele Zanini, Humanocracy, p. 15

The above quote really encapsulates the heart of this book, and why I think it is such a pivotal read for my peers. The book takes the core question of bureaucracy, “How do we get human beings to better serve the organization?”, and turns it around. The question at the heart of humanocracy becomes: “What sort of organization elicits and merits the best that human beings can give?” It seems a simple swap, but the implications are profound.

Bureaucracy versus Humanocracy. Source: Gary Hamel and Michele Zanini, Humanocracy, p. 48

I would hope you, like me, see the promise of many of the central tenets of Quality Management, not least Deming’s 8th point. The very real tendency of quality to devolve into pointless bureaucracy is something we should always be looking to combat.

Humanocracy’s central point is that by truly putting employees first we build human-centered organizations that thrive on innovation. This is particularly relevant as organizations seek to be more resilient, agile, adaptive, innovative, and customer-centric. Leaders pursuing such goals seek to install systems like agile, DevOps, and flexible teams. They will fail, because people are not processes. Resiliency, agility, and efficiency are not new programming code for people. These goals require more than new rules or a corporate initiative. Agility, resilience, and the rest are behaviors, attitudes, and ways of thinking that can only take hold when you change the deep systems and assumptions within an organization. This book discusses those deeper changes.

Humanocracy lays out seven tips for success in experimentation. I find they align nicely with Kotter’s 8 change accelerators.

Humanocracy’s Tip | Kotter’s Accelerator
Keep it Simple | Generate (and celebrate) short-term wins
Use Volunteers | Enlist a volunteer army
Make it Fun | Sustain acceleration
Start in your own backyard | Form a change vision and strategic initiatives
Run the new parallel with the old | Enable action by removing barriers
Refine and Retest | Sustain acceleration
Stay loyal to the problem | Create a sense of urgency around a big opportunity
Comparison to Kotter’s Eight Accelerators for Change

Measuring Training Effectiveness for Organizational Performance

When designing training we want to make sure four things happen:

  • Training is used correctly as a solution to a performance problem
  • Training has the right content, objectives and methods
  • Trainees are sent to training for which they have the basic skills, prerequisite knowledge, or confidence needed to learn
  • Training delivers the expected learning

Training is a useful lever in organizational change and improvement. We want to make sure the training drives organizational metrics. And like everything, you need to be able to measure it to improve it.

The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training), and while other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, the model has been in use for over 50 years, evolving through application by learning and development professionals around the world, and it remains the most recognized method of evaluating the effectiveness of training programs. It has stood the test of time and became popular due to its ability to break a complex subject down into manageable levels, and it accommodates any style of training, both informal and formal.

Level 1: Reaction

Kirkpatrick’s first level measures the learners’ reaction to the training. A Level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed the time spent and found it valuable. Level 1 evaluations, euphemistically called “smile sheets”, should delve deeper than merely whether people liked the course. A good course evaluation will concentrate on three elements: course content, the physical environment, and the instructor’s presentation and skills.

Level 2: Learning

Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is through the use of a pre- and posttest. Pre- and posttests are key to ascertaining whether the participants learned anything in the learning event. Identical pre- and posttests are essential because the difference between the pre- and posttest scores indicates the amount of learning that took place. Without a pretest, one does not know if the trainees knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.
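As a quick illustration of how identical pre- and posttest results can be summarized, here is a minimal sketch (my own, not part of the Kirkpatrick model; the scores, field names, and the normalized-gain calculation are assumptions for the example):

```python
# Minimal sketch: summarizing identical pre-/post-test scores per trainee.
# Scores and field names are invented for illustration only.

def learning_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalized gain: how much of the available headroom was actually gained."""
    headroom = max_score - pre
    return (post - pre) / headroom if headroom > 0 else 0.0

trainees = [
    {"name": "A", "pre": 55, "post": 85},
    {"name": "B", "pre": 70, "post": 75},
    {"name": "C", "pre": 90, "post": 95},
]

for t in trainees:
    gain = learning_gain(t["pre"], t["post"])
    print(f"{t['name']}: raw gain {t['post'] - t['pre']} points, normalized gain {gain:.0%}")
```

Without the pretest column, none of these gains could be attributed to the session itself.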

Level 3: Behavior

Level 3 measures whether the learning is transferred into practice in the workplace.

Level 4: Results

Level 4 measures the effect on the business environment. Did we meet our objectives?

Level 1: Reaction

Characteristics: Reaction evaluation is how the delegates felt, and their personal reactions to the training or learning experience, for example:

  • Did the trainee consider the training relevant?
  • Did they like the venue, equipment, timing, domestics, etc.?
  • Did the trainees like and enjoy the training?
  • Was it a good use of their time?
  • Level of participation
  • Ease and comfort of experience

Examples:

  • Feedback forms based on subjective personal reaction to the training experience
  • Verbal reactions, which can be analyzed
  • Post-training surveys or questionnaires
  • Online evaluation or grading by delegates
  • Subsequent verbal or written reports given by delegates to managers back at their jobs
  • Typically “happy sheets”

Level 2: Learning

Characteristics: Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience:

  • Did the trainees learn what was intended to be taught?
  • Did the trainees experience what was intended for them to experience?
  • What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?

Examples:

  • Typically assessments or tests before and after the training
  • Interviews or observation can be used before and after, although this is time-consuming and can be inconsistent
  • Methods of assessment need to be closely related to the aims of the learning
  • Reliable, clear scoring and measurements need to be established
  • Hard-copy, electronic, online or interview-style assessments are all possible

Level 3: Behavior

Characteristics: Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior, and this can be immediately and several months after the training, depending on the situation:

  • Did the trainees put their learning into effect when back on the job?
  • Were the relevant skills and knowledge used?
  • Was there a noticeable and measurable change in the activity and performance of the trainees when back in their roles?
  • Would the trainee be able to transfer their learning to another person? Is the trainee aware of their change in behavior, knowledge, and skill level?
  • Was the change in behavior and new level of knowledge sustained?

Examples:

  • Observation and interview over time are required to assess change, the relevance of change, and the sustainability of change
  • Assessments need to be designed to reduce the subjective judgment of the observer
  • 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgment as to change after training, and this can be analyzed for groups of respondents and trainees
  • Online and electronic assessments are more difficult to incorporate – assessments tend to be more successful when integrated within existing management and coaching protocols

Level 4: Results

Characteristics: Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee – it is the acid test.

Measures would typically be business or organizational key performance indicators, such as: volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, retention, etc.

The challenge is to identify which of these relate to the trainee’s input and influence, and how. Therefore it is important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured.

  • This process overlays normal good management practice – it simply needs linking to the training input
  • For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training
4 Levels of Training Effectiveness

Example in Practice – CAPA

When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA program, we have two key aims, which we can apply measures against.

Behavior | Measure
Investigate to find the root cause | % recurring issues
Implement actions to eliminate the root cause | Preventive-to-corrective action ratio

To support each of these top-level measures we define a set of behavior indicators, such as cycle time, right the first time, etc. To support these, a review rubric is implemented.
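To make those two top-level measures concrete, here is a minimal sketch of how they might be computed from a CAPA log. The record structure and field names are assumptions for illustration, not a prescribed schema, and a real implementation would also bound recurrence by a time window.

```python
# Illustrative sketch: "% recurring issues" and the preventive-to-corrective
# action ratio from a hypothetical CAPA log. Field names are assumptions.
from collections import Counter

capas = [
    {"id": "CAPA-001", "root_cause_id": "RC-17", "action_type": "corrective"},
    {"id": "CAPA-002", "root_cause_id": "RC-22", "action_type": "preventive"},
    {"id": "CAPA-003", "root_cause_id": "RC-17", "action_type": "corrective"},  # repeat root cause
    {"id": "CAPA-004", "root_cause_id": "RC-31", "action_type": "preventive"},
]

# A CAPA counts as recurring if its root cause shows up more than once in the log.
counts = Counter(c["root_cause_id"] for c in capas)
pct_recurring = sum(1 for c in capas if counts[c["root_cause_id"]] > 1) / len(capas)

preventive = sum(1 for c in capas if c["action_type"] == "preventive")
corrective = sum(1 for c in capas if c["action_type"] == "corrective")
ratio = preventive / corrective if corrective else float("inf")

print(f"% recurring issues: {pct_recurring:.0%}")        # 50%
print(f"Preventive-to-corrective ratio: {ratio:.2f}")    # 1.00
```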

Our four levels to measure training effectiveness will now look like this:

Level | Measure
Level 1: Reaction | Personal action plan and a happy sheet
Level 2: Learning | Completion of the rubric on a sample event
Level 3: Behavior | Continued performance and improvement against the rubric and the key review behavior indicators
Level 4: Results | Improvement in the % of recurring issues and an increase in the preventive-to-corrective action ratio

This is all about measuring the effectiveness of the transfer of behaviors.

Strong signals of transfer expectations in the organization versus signals that weaken transfer expectations:

  • Strong signal: Training participants are required to attend follow-up sessions and other transfer interventions. What it indicates: individuals and teams are committed to the change and to obtaining the intended benefits.
    Weak signal: Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization. What it indicates: the key factor for a trainee is attendance, not behavior change.

  • Strong signal: The training description specifies transfer goals (e.g. “Trainee increases CAPA success by driving down recurrence of root cause”). What it indicates: the organization has a clear vision and expectation of what the training should accomplish.
    Weak signal: The training description only roughly outlines training goals (e.g. “Trainee improves their root cause analysis skills”). What it indicates: the organization has only a vague idea of what the training should accomplish.

  • Strong signal: Supervisors take time to support transfer (e.g. through pre- and post-training meetings), and transfer support is part of regular agendas. What it indicates: transfer is considered important in the organization and is supported by supervisors and managers, all the way to the top.
    Weak signal: Supervisors do not invest in transfer support, and transfer support is not part of the supervisor role. What it indicates: transfer is not considered very important in the organization; managers have more important things to do.

  • Strong signal: Each training ends with careful planning of individual transfer intentions. What it indicates: defining transfer intentions is a central component of the training.
    Weak signal: Transfer planning at the end of the training does not take place, or happens only sporadically. What it indicates: defining transfer intentions is not (or not an essential) part of the training.

Good training, and thus good and consistent transfer, builds that into the process. It is why I am such a fan of utilizing a rubric to drive consistent performance.

Site Training Needs

Institute training on the job.

Principle 6, W. Edwards Deming

(a) Each person engaged in the manufacture, processing, packing, or holding of a drug product shall have education, training, and experience, or any combination thereof, to enable that person to perform the assigned functions. Training shall be in the particular operations that the employee performs and in current good manufacturing practice (including the current good manufacturing practice regulations in this chapter and written procedures required by these regulations) as they relate to the employee’s functions. Training in current good manufacturing practice shall be conducted by qualified individuals on a continuing basis and with sufficient frequency to assure that employees remain familiar with CGMP requirements applicable to them.

(b) Each person responsible for supervising the manufacture, processing, packing, or holding of a drug product shall have the education, training, and experience, or any combination thereof, to perform assigned functions in such a manner as to provide assurance that the drug product has the safety, identity, strength, quality, and purity that it purports or is represented to possess.

(c) There shall be an adequate number of qualified personnel to perform and supervise the manufacture, processing, packing, or holding of each drug product.

US FDA 21 CFR 211.25

All parts of the Pharmaceutical Quality system should be adequately resourced with competent personnel, and suitable and sufficient premises, equipment and facilities.

EU EMA/INS/GMP/735037/201 2.1

The organization shall determine and provide the resources needed for the establishment, implementation, maintenance and continual improvement of the quality management system. The organization shall consider:

a) the capabilities of, and constraints on, existing internal resources;
b) what needs to be obtained from external providers.

ISO 9001:2015 requirement 7.1.1

It is critical to have enough people with the appropriate level of training to execute their tasks.

It is fairly easy to define the individual training plan, stemming from the job description and the process training requirements. In the aggregate we get the ability to track overdue training, and a forward look at what training is coming due. Quite frankly, these are lagging indicators that show success at completing assigned training but give no insight into the central question: do we have enough qualified individuals to do the work?

To make this proactive, we start with the resource plan: what operations need to happen in a given time frame, and what resources do they need? We then compare that to the training requirements for those operations.

We can then evaluate current training status and retention levels and determine how many instructors we will need to ensure adequate training.

We perform a gap assessment to determine what new training needs exist.

We then take a forward look at what new improvements are planned and ensure appropriate training is forecasted.

Now we have a good picture of what an “adequate number” is. We can now set a leading KPI to ensure that training is truly proactive.
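As a rough sketch of what that leading indicator could look like in practice (the operations, headcounts, and coverage figures below are invented for illustration, not a prescribed model):

```python
# Illustrative sketch: compare planned operations against currently qualified
# headcount to flag training gaps before they hit the schedule.
# All numbers are invented for the example.

demand = {"granulation": 6, "compression": 4, "packaging": 8}      # qualified people needed
qualified = {"granulation": 5, "compression": 4, "packaging": 6}   # currently qualified (training current)

for op, need in demand.items():
    have = qualified.get(op, 0)
    gap = max(0, need - have)
    print(f"{op}: need {need}, qualified {have}, gap {gap}, coverage {have / need:.0%}")

# Leading KPI: share of planned operations fully covered by qualified personnel.
fully_covered = sum(1 for op, need in demand.items() if qualified.get(op, 0) >= need)
print(f"Operations fully covered: {fully_covered / len(demand):.0%}")
```

Tracked period over period, a coverage figure like this warns of a qualification shortfall before the work is scheduled, rather than after training goes overdue.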

ASQ Audit Conference – Day 2 Morning

Jay Arthur “The Future of Quality”

Starts with “our heroes are gone” and “it is time to stand on our own two feet.”

Focuses on the time and effort to train people on lean and six sigma, and how many people do not actually do projects. Basic point is that we use the tools in old ways which are not nimble and aligned to today’s needs. The tools we use versus the tools we are taught.

Hacking lean six sigma is along a similar line to Art Smalley’s four problems.

Applying the spirit of hacking to quality.

Covers value stream mapping and spaghetti diagrams with a focus on “the delays in between.” Talks about why control charts are not more standard. Basic point is people don’t spend enough time with the tools of quality. A point I have opinions on that will end up in another post.

Overcooked data versus raw data – summarized data has little or no nutritional value.

Brings this back to the issue of a lack of problem diagnosis, rather than problem solving. Comes back to a need for a few easy tools and not the long tail of Six Sigma.

This talk is very focused on LSS and the use of very specific tools, which seems like an odd choice at an Audit conference.

“Objectives and Process Measures: ISO 13485:2016 and ISO 9001:2015” by Nancy Pasquan

I appreciate it when the session manager (person who introduces the speaker and manages time) does a safety moment. Way to practice what we preach. Seriously, it should be a norm at all conferences.

Connects with the audience with a confession that the speaker is here to share her pain.

Objective – where we are going. Provides a flow chart of mission/vision (scope) -> establish process -> right direction? -> monitor and measure.

Objectives should challenge the organization. Should not be too easy. References SMART. Covers objectives in very standard way. “Remember the purpose is to focus the effort of the entire organization toward these goals.” Links process objectives to the overall company objectives.

Process measures are harder. Uses training as an example. Which tells me adult learning practice is not as much a part of the QBOK way of thinking as I would like; Kirkpatrick is a pretty well-known model.

“Process measures will not tell us if we have the right process” is a pretty loaded concept. Being careful of what you measure is good advice.

“Auditing Current Trends in Cleaning Validation” by Cathelene Compton

One of the trends in 2019 FDA Warning letters has been cleaning. While not one of the four big ones, cleaning validation always seems relevant and I’m looking forward to this presentation.

Starts with the fact that 15% of all observations on 483 forms relate to cleaning validation and documentation.

Reviews the three stages from the 2011 FDA Process Validation Guidance and then delves into a deeper validation lifecycle flowchart.

Some highlights:

Stage 1 – choosing the right cleaning agent; different manufacturers of cleaning agents; long-term damage to equipment parts and cleaning agent compatibility. Vendor study for cleaning agent; concentration levels; challenge the cleaning process with different concentrations.

Delves more into cleaning acceptance limits and the importance of calculating them in multiple ways. Stresses the importance of involving a toxicologist. Stresses the use of the Permitted Daily Exposure (PDE) and how it can be difficult to get the F-factors.
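For context on what a health-based acceptance limit calculation can look like, here is my own illustration (not from the presentation) of a commonly used maximum allowable carryover (MACO) approach based on the PDE; all figures are invented, and any real limit needs a toxicologist-derived PDE and a documented, defensible rationale.

```python
# Illustrative sketch of a PDE-based maximum allowable carryover (MACO) and a
# per-swab acceptance limit. Values are invented for the example.

def maco_mg(pde_prev_mg_per_day, min_batch_next_mg, max_daily_dose_next_mg):
    """MACO = PDE of previous product x minimum batch size of next product
    / maximum daily dose of next product."""
    return pde_prev_mg_per_day * min_batch_next_mg / max_daily_dose_next_mg

maco = maco_mg(pde_prev_mg_per_day=0.5,        # from the toxicological assessment
               min_batch_next_mg=50_000_000,   # 50 kg minimum batch of the next product
               max_daily_dose_next_mg=2_000)   # 2 g/day maximum daily dose of the next product

shared_surface_cm2 = 250_000   # total shared product-contact surface area
swab_area_cm2 = 25             # surface area sampled per swab

per_swab_limit_mg = maco * swab_area_cm2 / shared_surface_cm2
print(f"MACO: {maco:,.0f} mg; per-swab acceptance limit: {per_swab_limit_mg:.2f} mg/swab")
```

A recovery factor (such as the 90% for stainless steel mentioned below) would then be applied before comparing analytical swab results against that limit.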

Ensure that analytical methods meet ICH Q2(R1). Recovery studies on materials of construction. For the cleaning agent, look for a target marker, and check if other components in the laboratory also use this marker. A pitfall is a glassware washer that has not been validated.

Trends around recovery factors, for example recoveries for stainless steel should be 90%.

Discusses matrix rationales from the Mylan 483, stressing the need to ensure all toxicity levels are determined and pharmacological potency is accounted for.

Stage 2 – all studies should include visual inspection, micro, and analytical testing. Materials of construction and surface area calculations, and swabs on hard-to-clean or water hold-up locations. Chromatography must be assessed for extraneous peaks.

Verification vs. validation – validation is always preferred.

Training – qualify the individuals who swab. Qualify visual inspectors.

Should see campaign studies, clean hold studies and dirty equipment hold studies.

Stage 3 – continued verification is so critical, and it is where folks fall flat. Do it every 6 months, and no more than a year for manual cleaning. CIP should be under a periodic review of mechanical aspects, which means requalification can be 2-3 years out.

ASQ Audit Conference – Day 1 Morning

Day 1 of the 2019 Audit Conference.

Grace Duffy is the keynote speaker. I’ve known Grace for years and consider her a mentor and I’m always happy to hear her speak. Grace has been building on a theme around her Modular Kaizen approach and the use of the OODA Loop, and this presentation built nicely on what she presented at the Lean Six Sigma Conference in Phoenix, at WCQI and in other places.

Audits as a form of sustainability is an important point to stress, and hopefully this will be a central theme throughout the conference.

The intended purpose is to build a systems view in preparation for an effective audit and to use the OODA loop for both evolutionary and revolutionary change approaches.

John Boyd’s OODA loop

Grace starts with a brief overview of system and process, then moves from vision to strategy to daily work, and how that forms a Möbius strip of macro, meso, micro and individual. She talks a little about the difference between Deming’s and Juran’s approaches and does a little what-if thinking about how Lean would have developed if Juran had gone to Japan instead of Deming.

Breaks down OODA (Observe, Orient, Decide, Act) as “Where am I and where is the organization?” and then feeds that into decision making. Stresses how Orient covers culture and the need to understand it. Her link to Lean is a little tenuous in my mind.

She then discusses Tom Pearson’s knowledge management model with: Local Action; Management Action; Exploratory Analysis; Knowledge Building; Complex Systems; Knowledge Management; Scientific Creativity. Unites all of this with systems thinking and psychology. “We’re going to share shamelessly because that’s how we learn.” “If we can’t have fun with this stuff it’s no good.”

Uniting the two, she describes the knowledge management model as part of Orient.

Puts revolutionary and evolutionary change in light of Juran’s breakthrough improvement versus continuous improvement. From here she covers modular kaizen, starting with incremental change versus process redesign. From there she breaks it down into a DMAIC model and goes into how much she loves the Measure phase. She discusses how the human brain is better at connections, which is a good reinforcement of the OODA model.

Breaks down a culture model of culture/beliefs, visions/goals, and activities/plans-and-actions influenced by external events, and how evolutionary improvements stem from compatibility with those. OODA is the tool to help determine that compatibility.

Discusses briefly how standardization fits into systems and pushes a look at it from a stability perspective.

Goes back to the culture model but now adds idea generation and quality test with decisions off of it that lead to revolutionary improvements. Links back to OODA.

Then quickly covers DMAIC versus DMADV and how that is another way of thinking about these concepts.

Covers Gino Wickman’s concept of visionary and integrator from Traction.

Ties OODA back to effective auditing: focus on patterns and not just numbers, grasp the bigger picture, be adaptive.

This is a big, sprawling topic for a keynote and at times it felt like a firehose. Keynotes often benefit from a lot more laser focus; OODA alone would have been enough. My head is reeling, and I am comfortable with this material. Grace is an amazing, passionate educator and she finds this material exciting. I hope most of the audience picked that up in this big-gulp approach. This systems approach, building on culture and strategy, is critical.

OODA as an audit tool is relevant, and it is a tool I think we should be teaching better. Might be a good tool to do for TWEF as it ties into the team/workplace excellence approach. OODA and situational awareness are really united in my mind and that deserves a separate post.

Concurrent Sessions

After the keynote there are the breakout sessions. As always, I end up having too many options and must make some decisions. Can never complain about having too many options during a conference.

First Impressions: The Myth of the Objective & Impartial Audit

First session is “First Impressions: The Myth of the Objective & Impartial Audit” by William Taraszewski. I met Bill back at the 2018 World Conference on Quality and Improvement.

Bill starts by discussing subjectivity and first impressions, and how they shape audits from the very start.

Covers the science of first impressions, pointing to research on bias, how negative behavior weighs more than positive, and how this can be contextual. Draws from Amy Cuddy’s work and lays a good foundation of trust and competence and their importance in work and life in general.

Brings this back to ISO 19011:2018 “Guidelines for auditing management systems” and clause 7.2 on determining auditor competence, placing personal behavior over knowledge and skills.

Brings up video auditing: the impressions generated from video versus in person are pretty similar, but the magnitude of bad impressions is greater and the magnitude of positive impressions is lower. That was an interesting point and I will need to follow up on that research.

Moves to discussing impartiality in context of ISO 19011:2018, pointing out the halo and horn effects.

Discusses prejudice versus experience as an auditor and covers confirmation bias and how selective exposure and selective perception fit into our psychology, with the need to be careful since the negative outweighs the positive.

Moves into objective evidence and how it fits into an audit.

Provides top tips for good auditor first impressions with body language and eye contact. Most important, how to check your attitude.

This was a good session on the fundamentals, reinforcing some basics and going back to the research. Quality as a profession really needs to understand how objectivity and impartiality are virtually impossible and how we can overcome bias.

Auditing Risk Management

Barry Craner presented on “Are you ready for an audit of your risk management system?”

Starts with how risk management is here to stay and how it is in most industries. The presenter is focused on medical devices but the concepts are very general.

“As far as possible” as a concept is discussed, along with residual risk. Covers this at a high level.

Covers at a high level the standard risk management process (risk identification, risk analysis, risk control, risk monitoring, risk reporting), asking the question: is the RM system acceptable? Can you describe and defend it?

Provides an example of a risk management file sequence that matches the concept of living risk assessments. This is a flow that goes from Preliminary Hazard Analysis to Fault Tree Analysis (FTA) to FMEA. With the focus on medical devices, talks about design and process versions of both the FTA and the FMEA. This is all from the question “Can you describe and defend your risk management program?”

In laying out the risk management program, focuses in on personnel qualification as pivotal. Discusses answering the question “Are these ready for audit?” When discussing the plan, asks the questions “Is your risk management plan documented and reasonable, ready to audit, and an SOP followed by your company?”

When discussing risk impact, breaks it down to “Is the risk acceptable or not?” Goes on to discuss how important it is to defend the scoring rubric, asking the question “Well defined, can we defend it?”

Goes back and discusses some basic concepts of hazard and harm. Asks the questions “Did you do this hazard assessment with enough thoroughness? Were the right hazards identified?” Recommends building an example hazards table. This is good advice. From there, answer the questions “Do your hazard analyses yield reasonable, useful information? Do you use it?”

Provides a nice example of how to build a mitigation plan out of a fault tree analysis.

The discussion on FMEAs faltered on detection; it probably could have gone into controls a lot deeper here.

With both the FTA and FMEA, discussed how the results need to be defensible.

Risk management review, with the right metrics, is discussed at a high level. This could easily be a session on its own.

Asks the question “Were there actionable tasks? Progress on these tasks?”

It is time to stop having such general overviews at conferences, especially at a conference that is not targeted to junior personnel.