Hierarchy is not inevitable

I’m on the record as believing that Quality as a process is an inherently progressive one, and that when we stray from those progressive roots we become exactly what we strive to avoid. One only has to look at the history of Six Sigma, TQM, and even Lean to see that.

I’m a big proponent of Humanocracy for that very reason.

One cannot read much of business writing without coming across the great leader (or even worse great man) hypothesis, which serves to naturalize power and existing forms of authority. One cannot even escape the continued hagiography of Jack Welch, even though he’s been discredited in many ways for his toxic legacy.

We cannot drive out fear unless we unmask power by revealing its contradictions, hypocrisies, and reliance on violence and coercion. The way we work is a result of human decisions, and thus capable of being remade.

We all have a long way to go here. I, for example, catch myself all the time speaking of leadership in hierarchical ways. One of the things I am currently working on is exorcising the term ‘leadership team’ from my vocabulary. It doesn’t serve any real purpose, and it fundamentally frames leadership as a hierarchical entity.

Another thing I am working on is tackling the thorn of positional authority: the idea that the higher your rank in the organization, the more decision-making authority you have. Which is absurd. In every organization I’ve been in, people hold positions of authority covering areas in which they do not have the education, experience, or training to make decisions. This is why we need clear decision matrices, empowered process owners, and democratic leadership throughout the organization.

The stick is broken, regulatory agencies are toothless

Admit it, we’ve all been through GxP training that utilizes the stick. I’m assuming many of you have designed it. It might have looked like this:

Perhaps you went over the 150-plus years of regulatory history, discussing Elixir Sulfanilamide, thalidomide, and a dozen other noteworthy cases that shaped the modern regulatory environment.

Or perhaps you just like to show a slide with recent headlines on it.

Let’s put aside all the excellent research about the power of positive messaging etc. Valid stuff but not the point I’m trying to make.

The point I want to make in this post is that the regulatory stick has long been broken. Companies suffer at most a slap on the wrist: fines amounting to weeks or months of profit. Real repercussions are absent.

The Sackler family walks away with billions, McKinsey gets a slap on the wrist, and other companies are all shielded from the consequences of their deliberate actions in fueling the opioid epidemic.

J&J avoids all real accountability for knowingly causing cancer.

The list goes on.

Frankly, I think this is really bad for our industry. If the price of being caught is pennies on the dollar earned, it has become merely a cost of doing business.

This erodes trust in the safety of our drug supply. And if the last year hasn’t brought home the importance of that trust, you must have been hiding under a rock.

We need more perp walks. We need a real system of deterrence, with arrests and punishments that match the crimes. We can’t even count on the one form of deterrence left, liability lawsuits, because companies are playing shenanigans with bankruptcy laws.

We talk about how quality culture starts at the top. But as we see again and again, the top only cares about profit.

That makes me fundamentally worry about the safety of our drugs and medical devices. And if I, someone with dear friends working at large and small pharma companies, worry, I can understand why people start to hold suspicions.

Engaging for Quality

When building a quality organization, we are striving to do three things: get employees (and executives) to feel the need for quality in their bones; get them to understand what quality is and why it is important; and build the process, procedure, and tools to make quality happen. Practitioners in change management often call this heart, head, and hands.

Engage the heart, head and hands to build a quality culture

In our efforts we strive to answer five major themes of questions about why building a culture of quality is critical.

  • Why: Why do we need quality? Why is it important? What are the regulatory expectations? What happens if we do nothing?
  • What: What results are expected for our patients? Our organization? Our people? What does our destination look and feel like?
  • How: How will we get there? What’s our plan and process? What new behaviors do we each need to demonstrate?
  • You: What do you need to fulfill your role in quality? What do we need from you?
  • Me: What do I commit to as a leader? What will I do to make change a reality? How will I support my team?

Five Themes of Change

The great part of this is that the principles of building a quality culture are the same mindsets we want embedded in our culture. By demonstrating them, we build and strengthen the culture, and will reap the dividends.

Be Preventative: What actions can be taken to prevent undesirable or unintended consequences with employees and other stakeholders? We do this by:

  • Involving end-users in the design process
  • Conducting risk assessments and lessons learned to predict possible failures
  • Ensuring the reason for change is holistic and accounts for all internal and external obligations
  • Determining metrics as soon as possible
  • Focusing on how the organization is responding to ongoing change
  • Thinking through how roles need to change and what employees need to be accountable for

Be Proactive: What actions can be taken to successfully meet objectives?

Be Responsive: What evidence-based techniques can be used to respond to issues, including resistance?

This is all about leveraging the 8 change accelerators and effectively developing strategies for change.

Quality, Decision Making and Putting the Human First

Quality stands in a position, sometimes unique within an organization, of engaging with stakeholders to understand the objectives and positions the organization needs to assume, and the choices being made to achieve those objectives and positions.

The effectiveness of the team in making good decisions by picking the right choices depends on its ability to analyze a problem and generate alternatives. As I discussed in my post “Design Lifecycle within PDCA – Planning”, experimentation plays a critical part in the decision-making process. When designing the solution we always consider:

  • Always include a “do nothing” option: Not every decision or problem demands an action. Sometimes, the best way is to do nothing.
  • How do you know what you think you know? This should be a question everyone is comfortable asking. It allows people to check assumptions and to question claims that, while convenient, are not based on any kind of data, firsthand knowledge, or research.
  • Ask tough questions. Be direct and honest. Push hard to get to the core of what the options look like.
  • Have a dissenting option. It is critical to include unpopular but reasonable options. Make sure to include opinions or choices you personally don’t like, but for which good arguments can be made. This keeps you honest and gives anyone who sees the pros/cons list a chance to convince you to make a better decision than the one you might have arrived at on your own.
  • Consider hybrid choices. Sometimes it’s possible to take an attribute of one choice and add it to another. Like exploratory design, there are always interesting combinations in decision making. This can explode the number of choices, which can slow things down and create more complexity than you need. Watch for the zone of indifference (options that are not perceived as making any difference or adding any value) and don’t waste time in it.
  • Include all relevant perspectives. Consider if this decision impacts more than just the area the problem is identified in. How does it impact other processes? Systems?
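To make the weighing of options concrete, here is a minimal sketch of a decision matrix that scores alternatives against weighted criteria. The criteria, weights, and option names are invented for illustration; a real matrix would use whatever dimensions your stakeholders agree on. Note that it includes the “do nothing” and hybrid options discussed above.

```python
# Illustrative sketch: a weighted decision matrix.
# Criteria, weights, and scores below are hypothetical examples.

def score_options(options, weights):
    """Return options ranked by weighted score (highest first)."""
    ranked = []
    for name, scores in options.items():
        total = sum(weights[criterion] * value for criterion, value in scores.items())
        ranked.append((name, total))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Relative importance of each criterion (sums to 1.0 here, though it need not).
weights = {"patient_impact": 0.5, "cost": 0.2, "speed": 0.3}

# Each option scored 0.0-1.0 on each criterion (higher is better).
options = {
    "do nothing": {"patient_impact": 0.1, "cost": 1.0, "speed": 1.0},
    "full redesign": {"patient_impact": 0.9, "cost": 0.2, "speed": 0.3},
    "hybrid: patch now, redesign later": {"patient_impact": 0.7, "cost": 0.5, "speed": 0.7},
}

for name, total in score_options(options, weights):
    print(f"{name}: {total:.2f}")
```

The numbers matter less than the conversation they force: making weights explicit surfaces exactly the assumptions a dissenting voice can challenge.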

A struggle every organization has is how to think through problems in a truly innovative way. Installing new processes into an old bureaucracy will only replace one form of control with another. We need to rethink the very nature of control and what it looks like within an organization. It is not just about change management; on its own, change management will simply shift the patterns of the past. To truly transform we need a new way of thinking.

One of my favorite books on just how to do this is Humanocracy: Creating Organizations as Amazing as the People Inside Them by Gary Hamel and Michele Zanini. In this book, the authors advocate that business must become fundamentally more human. The idea of human ability, and how to cultivate and unleash it, is the underlying premise of the book.

Visualized by Rose Fastus

it’s possible to capture the benefits of bureaucracy—control, consistency, and coordination—while avoiding the penalties—inflexibility, mediocrity, and apathy.

Gary Hamel and Michele Zanini, Humanocracy, p. 15

The above quote really encapsulates the heart of this book, and why I think it is such a pivotal read for my peers. The core question of a bureaucracy is: “How do we get human beings to better serve the organization?” The question at the heart of humanocracy becomes: “What sort of organization elicits and merits the best that human beings can give?” It seems a simple swap, but the implications are profound.

Bureaucracy versus Humanocracy. Source: Gary Hamel and Michele Zanini, Humanocracy, p. 48

I would hope you, like me, see the promise of many of the central tenets of Quality Management, not least Deming’s 8th point. The very real tendency of quality to devolve to pointless bureaucracy is something we should always be looking to combat.

Humanocracy’s central point is that by truly putting the employee first in our organizations we drive a human-centered organization that powers and thrives on innovation. Humanocracy is particularly relevant as organizations seek to be more resilient, agile, adaptive, innovative, and customer-centric. Leaders pursuing such goals seek to install systems like agile, DevOps, and flexible teams. They will fail, because people are not processes. Resiliency, agility, and efficiency are not new programming codes for people. These goals require more than new rules or a corporate initiative. Agility, resilience, and the rest are behaviors, attitudes, and ways of thinking that can only take hold when you change the deep ‘systems and assumptions’ within an organization. This book discusses those deeper changes.

Humanocracy lays out seven tips for success in experimentation. I find they align nicely with Kotter’s 8 change accelerators.

  • Keep it Simple → Generate (and celebrate) short-term wins
  • Use Volunteers → Enlist a volunteer army
  • Make it Fun → Sustain acceleration
  • Start in your own backyard → Form a change vision and strategic initiatives
  • Run the new parallel with the old → Enable action by removing barriers
  • Refine and Retest → Sustain acceleration
  • Stay loyal to the problem → Create a sense of urgency around a big opportunity

Comparison to Kotter’s Eight Accelerators for Change

Measuring Training Effectiveness for Organizational Performance

When designing training we want to make sure four things happen:

  • Training is used correctly, as a solution to a performance problem
  • Training has the right content, objectives, and methods
  • Trainees are sent to training for which they have the basic skills, prerequisite skills, and confidence needed to learn
  • Training delivers the expected learning

Training is a useful lever in organization change and improvement. We want to make sure the training drives organization metrics. And like everything, you need to be able to measure it to improve.

The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training), and while other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, the model has been in use for over 50 years, evolving over multiple decades through application by learning and development professionals around the world. It is the most recognized method of evaluating the effectiveness of training programs. It has stood the test of time, and became popular due to its ability to break a complex subject down into manageable levels. It accommodates any style of training, both informal and formal.

Level 1: Reaction

Kirkpatrick’s first level measures the learners’ reaction to the training. A Level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed the time spent and found it valuable. Level 1 evaluations, colloquially called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation will concentrate on three elements: course content, the physical environment, and the instructor’s presentation skills.

Level 2: Learning

Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is through the use of a pre- and posttest. Pre- and posttests are key to ascertaining whether the participants learned anything in the learning event. Identical pre- and posttests are essential because the difference between the pre- and posttest scores indicates the amount of learning that took place. Without a pretest, one does not know if the trainees knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.
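As a sketch of how identical pre- and posttests turn into a learning measure, here is an illustrative calculation. The scores are invented, and the normalized-gain formula (fraction of possible improvement achieved) is borrowed from Hake’s work on test analysis rather than anything specific to Kirkpatrick:

```python
# Illustrative sketch: measuring Level 2 learning from pre/post test scores.
# Scores below are hypothetical percentages for five trainees.

def average_gain(pre, post):
    """Mean raw gain (post minus pre) across trainees, in percentage points."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

def normalized_gain(pre, post):
    """Mean normalized gain: improvement as a fraction of possible improvement."""
    gains = [(b - a) / (100 - a) for a, b in zip(pre, post) if a < 100]
    return sum(gains) / len(gains)

pre_scores = [40, 55, 60, 35, 50]
post_scores = [75, 80, 85, 70, 90]

print(f"average gain: {average_gain(pre_scores, post_scores):.1f} points")
print(f"normalized gain: {normalized_gain(pre_scores, post_scores):.2f}")
```

The normalized gain is useful precisely because the tests are identical: it separates what was learned in the session from what trainees already knew walking in.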

Level 3: Behavior

Level 3 measures whether the learning is transferred into practice in the workplace.

Level 4: Results

Measures the effect on the business environment. Do we meet objectives?

Level 1: Reaction
Reaction evaluation is how the delegates felt about, and personally reacted to, the training or learning experience.
Characteristics:
  • Did the trainees consider the training relevant?
  • Did they like the venue, equipment, timing, domestics, etc.?
  • Did the trainees like and enjoy the training?
  • Was it a good use of their time?
  • Level of participation
  • Ease and comfort of experience
Examples:
  • Feedback forms based on subjective personal reaction to the training experience
  • Verbal reactions, which can be analyzed
  • Post-training surveys or questionnaires
  • Online evaluation or grading by delegates
  • Subsequent verbal or written reports given by delegates to managers back at their jobs
  • Typically ‘happy sheets’

Level 2: Learning
Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience.
Characteristics:
  • Did the trainees learn what was intended to be taught?
  • Did the trainees experience what was intended for them to experience?
  • What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?
Examples:
  • Typically assessments or tests before and after the training
  • Interviews or observation can be used before and after, although this is time-consuming and can be inconsistent
  • Methods of assessment need to be closely related to the aims of the learning
  • Reliable, clear scoring and measurements need to be established
  • Hard-copy, electronic, online, or interview-style assessments are all possible

Level 3: Behavior
Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior; this can be measured immediately and several months after the training, depending on the situation.
Characteristics:
  • Did the trainees put their learning into effect when back on the job?
  • Were the relevant skills and knowledge used?
  • Was there a noticeable and measurable change in the activity and performance of the trainees when back in their roles?
  • Would the trainees be able to transfer their learning to another person? Are they aware of their change in behavior, knowledge, or skill level?
  • Was the change in behavior and new level of knowledge sustained?
Examples:
  • Observation and interviews over time are required to assess change, relevance of change, and sustainability of change
  • Assessments need to be designed to reduce the subjective judgment of the observer
  • 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgment as to change after training, and this can be analyzed for groups of respondents and trainees
  • Online and electronic assessments are more difficult to incorporate; assessments tend to be more successful when integrated within existing management and coaching protocols

Level 4: Results
Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee. It is the acid test.
Characteristics:
  • Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, and retention
  • The challenge is to identify which results relate to the trainee’s input and influence, and how. Therefore it is important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured
Examples:
  • This process overlays normal good management practice; it simply needs linking to the training input
  • For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training

4 Levels of Training Effectiveness

Example in Practice – CAPA

When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA program, we have key aims, which we can apply measures against.

  • Investigate to find root cause → % recurring issues
  • Implement actions to eliminate root cause → Preventive to corrective action ratio
To support each of these top level measures we define a set of behavior indicators, such as cycle time, right the first time, etc. To support these, a review rubric is implemented.

Our four levels to measure training effectiveness will now look like this:

  • Level 1: Reaction → Personal action plan and a happy sheet
  • Level 2: Learning → Completion of the rubric on a sample event
  • Level 3: Behavior → Continued performance and improvement against the rubric and the key review behavior indicators
  • Level 4: Results → Improvement in the % of recurring issues and an increase in the preventive to corrective action ratio
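As an illustration of how the two Level 4 result measures might be computed, here is a sketch with invented field names and sample records; a real system would pull these from your quality event database:

```python
# Illustrative sketch: computing CAPA result measures.
# Field names and sample records below are hypothetical.

def percent_recurring(events):
    """% of events whose root cause was already seen in an earlier event."""
    seen, recurring = set(), 0
    for event in events:
        if event["root_cause"] in seen:
            recurring += 1
        seen.add(event["root_cause"])
    return 100.0 * recurring / len(events)

def preventive_to_corrective_ratio(actions):
    """Ratio of preventive to corrective actions; rising values suggest a more proactive program."""
    preventive = sum(1 for a in actions if a == "preventive")
    corrective = sum(1 for a in actions if a == "corrective")
    return preventive / corrective

events = [
    {"id": 1, "root_cause": "inadequate procedure"},
    {"id": 2, "root_cause": "training gap"},
    {"id": 3, "root_cause": "inadequate procedure"},  # recurrence
    {"id": 4, "root_cause": "equipment drift"},
]
actions = ["corrective", "preventive", "preventive", "corrective", "preventive"]

print(f"recurring issues: {percent_recurring(events):.0f}%")
print(f"preventive:corrective = {preventive_to_corrective_ratio(actions):.2f}")
```

The hard part in practice is not the arithmetic but the root-cause taxonomy: recurrence can only be counted if causes are coded consistently, which is exactly what a review rubric enforces.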

This is all about measuring the effectiveness of the transfer of behaviors.

Strong signal: Training participants are required to attend follow-up sessions and other transfer interventions.
What it indicates: Individuals and teams are committed to the change and to obtaining the intended benefits.
Weak signal: Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization.
What it indicates: The key factor for a trainee is attendance, not behavior change.

Strong signal: The training description specifies transfer goals (e.g. “Trainee increases CAPA success by driving down recurrence of root cause”).
What it indicates: The organization has a clear vision and expectation of what the training should accomplish.
Weak signal: The training description only roughly outlines training goals (e.g. “Trainee improves their root cause analysis skills”).
What it indicates: The organization has only a vague idea of what the training should accomplish.

Strong signal: Supervisors take time to support transfer (e.g. through pre- and post-training meetings). Transfer support is part of regular agendas.
What it indicates: Transfer is considered important in the organization and supported by supervisors and managers, all the way to the top.
Weak signal: Supervisors do not invest in transfer support. Transfer support is not part of the supervisor role.
What it indicates: Transfer is not considered very important in the organization. Managers have more important things to do.

Strong signal: Each training ends with careful planning of individual transfer intentions.
What it indicates: Defining transfer intentions is a central component of the training.
Weak signal: Transfer planning at the end of the training does not take place, or happens only sporadically.
What it indicates: Defining transfer intentions is not (or not an essential) part of the training.

Good training, and thus good and consistent transfer, builds that into the process. It is why I am such a fan of using a rubric to drive consistent performance.
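As a closing illustration, here is a sketch of how a review rubric can be reduced to a score. The criteria, point scale, and passing threshold are my own invented examples, not a standard; the point is that scoring each element explicitly makes reviewer judgments comparable across events and reviewers.

```python
# Illustrative sketch: a simple review rubric for CAPA investigations.
# Criteria, point values, and the passing threshold are hypothetical.

RUBRIC = {  # criterion -> maximum points
    "problem statement is specific and data-backed": 3,
    "root cause is supported by evidence, not opinion": 3,
    "actions address the root cause, not just symptoms": 3,
    "effectiveness check is defined and measurable": 3,
}

def score_review(ratings, passing_fraction=0.75):
    """Sum reviewer ratings against the rubric; pass if the fraction achieved meets the threshold."""
    total = sum(ratings[criterion] for criterion in RUBRIC)
    maximum = sum(RUBRIC.values())
    return total, (total / maximum) >= passing_fraction

# A reviewer's ratings for one sample investigation.
ratings = {
    "problem statement is specific and data-backed": 3,
    "root cause is supported by evidence, not opinion": 2,
    "actions address the root cause, not just symptoms": 3,
    "effectiveness check is defined and measurable": 1,
}

total, passed = score_review(ratings)
print(f"score {total}/12, {'pass' if passed else 'needs rework'}")
```

Tracked over time, these scores become exactly the Level 3 behavior indicator described above: continued performance and improvement against the rubric.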