Harvard Business Review – Whitewashing business executives is their core business

In the latest edition of “Executive utilizes Harvard Business Review to whitewash their activities” we have Hubert Joly, CEO of Best Buy, who informs us that we should all embrace:

  • Making meaningful purpose a genuine priority of business operations
  • The “human magic” of empowered and self-directed employees
  • Admitting you don’t have all the answers as a sign of strong leadership

Let’s see how Best Buy puts those practices in place.

It is hard to take the editors of HBR seriously on what a good company culture looks like while they whitewash corporate leaders with this sort of track record.

Best Buy probably paid a lot of money for the reputation bump just before Christmas.

Treating All Investigations the Same

Stephanie Gaulding, a colleague in the ASQ, recently wrote an excellent post for Redica on “How to Avoid Three Common Deviation Investigation Pitfalls”, a subject near and dear to my heart.

The three pitfalls Stephanie gives are:

  1. Not getting to root cause
  2. Inadequate scoping
  3. Treating investigations the same

All three are right on the nose, and I’ve posted a bunch on these topics. Definitely go and read the post.

What I want to delve deeper into is Stephanie’s point that “Deviation systems should also be built to triage events into risk-based categories with sufficient time allocated to each category to drive risk-based investigations and focus the most time and effort on the highest risk and most complex events.”

That is an accurate breakdown, and exactly what regulators are asking for. However, I think the implementation of risk-based categories can sometimes lead to confusion, so it is worth spending some time unpacking the concept.
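
To make that concrete, here is a minimal sketch of what such a triage rule could look like. It is purely illustrative: the category names, the criteria, and the implied investigation effort are my assumptions, not anything prescribed by Stephanie or by the regulations.

```python
# Illustrative triage of deviation events into risk-based categories.
# Categories, criteria, and effort levels are assumptions for this example;
# your own procedure should define the real ones.

from dataclasses import dataclass


@dataclass
class DeviationEvent:
    patient_impact_possible: bool   # could the event affect patient safety?
    data_integrity_impacted: bool   # is confidence in the data compromised?
    recurring: bool                 # has this symptom been seen before?


def triage(event: DeviationEvent) -> str:
    """Assign a risk-based category that drives how deep the investigation goes."""
    if event.patient_impact_possible or event.data_integrity_impacted:
        return "critical"   # full root cause analysis, cross-functional team
    if event.recurring:
        return "major"      # structured investigation plus trending review
    return "minor"          # streamlined record, reviewed through trending


print(triage(DeviationEvent(True, False, False)))   # -> critical
```

The point of the sketch is that the most time and effort flows to the highest-risk, most complex events, while low-risk events get a proportionate, lighter-weight treatment.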

Risk is the effect of uncertainty on objectives. Risk is often described in terms of risk sources, potential events, their consequences, and their likelihoods (which is where the familiar likelihood × severity framing comes from).

But there are many types of uncertainty. IEC 31010, “Risk management – Risk assessment techniques”, lists the following examples:

  • uncertainty as to the truth of assumptions, including presumptions about how people or systems might behave
  • variability in the parameters on which a decision is to be based
  • uncertainty in the validity or accuracy of models which have been established to make predictions about the future
  • events (including changes in circumstances or conditions) whose occurrence, character or consequences are uncertain
  • uncertainty associated with disruptive events
  • the uncertain outcomes of systemic issues, such as shortages of competent staff, that can have wide-ranging impacts which cannot be clearly defined
  • lack of knowledge which arises when uncertainty is recognized but not fully understood
  • unpredictability
  • uncertainty arising from the limitations of the human mind, for example in understanding complex data, predicting situations with long-term consequences or making bias-free judgments.

Most of these are, at best, only obliquely relevant to risk-categorizing deviations.

So it is important to first build the risk categories on consequences. At the end of the day, these are the consequences that matter in the pharmaceutical/medical device world:

  • harm to the safety, rights, or well-being of patients, subjects or participants (human or non-human)
  • compromised data integrity so that confidence in the results, outcome, or decision dependent on the data is impacted

These are some pretty hefty areas and really hard for the average user to get their mind around. This is why building good requirements and understanding how systems work are so critical. Building breadcrumbs into our procedures to let folks know which deviations fall into which category is a good practice.

There is nothing wrong with recognizing that different areas have different decision trees. Harm to safety in GMP can mean something different than harm to safety in a GLP study.
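
As a sketch of what those breadcrumbs might look like in practice, here is a simple per-area lookup that maps common deviation types to the consequence category they threaten. The deviation types and the mappings are invented for illustration; your procedures would carry the real ones.

```python
# Illustrative "breadcrumbs": a per-area lookup from deviation type to the
# consequence category it threatens. Types and mappings are invented here.

CONSEQUENCE_MAP = {
    "GMP": {
        "sterility failure": "patient safety",
        "audit trail disabled on chromatography system": "data integrity",
        "label mix-up": "patient safety",
    },
    "GLP": {
        "missed dosing observation": "subject welfare",
        "uncontrolled edit to raw data": "data integrity",
    },
}


def consequence_category(area: str, deviation_type: str) -> str:
    """Look up the consequence category; anything unmapped needs assessment."""
    return CONSEQUENCE_MAP.get(area, {}).get(deviation_type, "needs assessment")


print(consequence_category("GMP", "label mix-up"))        # -> patient safety
print(consequence_category("GLP", "unlisted deviation"))  # -> needs assessment
```

Different areas get different tables, which is exactly the point: the same word, safety, resolves to different decision trees in GMP than in a GLP study.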

The second place I’ve seen this go wrong has to do with likelihood, and folks confusing the symptom with the problem and the problem with the cause.

bridge with a gap

All deviations start with a situation that differs in some way from the expected result. Deviations begin with the symptom and, through analysis, end up with a root cause. So when building your decision tree, ensure it looks at symptoms and how the symptom is observed. That is surprisingly hard to do, which is why a lot of deviation criticality scales tend to focus only on severity.

4 major types of symptoms
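
Here is a small, purely illustrative sketch of what rating on the symptom as observed can look like. At the moment the deviation is raised only the symptom is known, not the cause, so the initial criticality has to come from what is visible. The observation fields and the scoring rules below are assumptions for the example.

```python
# Illustrative initial criticality assignment based only on what is observed
# when the deviation is raised; the root cause is not yet known.
# Observation fields and scoring rules are assumptions for this example.

def initial_criticality(observed_in: str,
                        disposition_decision_pending: bool,
                        found_by_routine_control: bool) -> str:
    """Rate the symptom as observed, not the cause we suspect."""
    if observed_in == "released product":
        return "critical"   # symptom surfaced after the product left our control
    if disposition_decision_pending:
        return "major"      # symptom could affect a pending batch disposition
    if not found_by_routine_control:
        return "major"      # the routine controls that should catch it did not
    return "minor"


print(initial_criticality("in-process check", True, True))   # -> major
```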

SIPOC for Data Governance

The SIPOC is a great tool for understanding data and fits nicely into a larger data process mapping initiative.

By understanding where the data comes from (Supplier), what it is used for (Customer), and what is done to the data on its trip from supplier to customer (Process), you can:

  • Understand the requirements that the customer has for the data
  • Understand the rules governing how the data is provided
  • Determine the gap between what is required and what is provided
  • Track the root cause of data failures – both of type and of quality
  • Create requirements for modifying the processes that move the data

The SIPOC can be applied at many levels of detail. At a high level, for example, batch data is used to determine supply. At a detailed level, a rule for calculating a data element can result in an unexpected number because of a condition that was not anticipated.

SIPOC for manufacturing data utilizing the MES (high level)
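
As a rough sketch of how a SIPOC entry for a single data element might be captured, here is one possible structure. The field names and example values are my own assumptions, not taken from any particular system.

```python
# Minimal sketch of a SIPOC record for one data element.
# Field names and example values are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class SIPOC:
    supplier: str                       # where the data comes from
    inputs: list[str]                   # the data elements supplied
    process: str                        # what is done to the data in transit
    outputs: list[str]                  # what the process produces
    customer: str                       # who consumes the data, and for what
    customer_requirements: list[str] = field(default_factory=list)


batch_yield = SIPOC(
    supplier="MES batch record",
    inputs=["dispensed quantities", "actual yield"],
    process="yield reconciliation calculation",
    outputs=["percent yield"],
    customer="supply planning",
    customer_requirements=[
        "available within 24 hours of batch close",
        "traceable to source weighments",
    ],
)

# Comparing customer_requirements against what the process actually delivers
# is where the gap between required and provided becomes visible.
print(batch_yield.customer_requirements)
```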

Hierarchy is not inevitable

I’m on the record as believing that Quality as a process is an inherently progressive one, and that when we stray from those progressive roots we become exactly what we strive to avoid. One only has to look at the history of Six Sigma, TQM, and even Lean to see that.

I’m a big proponent of Humanocracy for that very reason.

One cannot read much business writing without coming across the great leader (or, even worse, great man) hypothesis, which serves to naturalize power and existing forms of authority. One cannot even escape the continued hagiography of Jack Welch, even though he has been discredited in many ways for his toxic legacy.

We cannot drive out fear unless we unmask power by revealing its contradictions, hypocrisies, and reliance on violence and coercion. The way we work is a result of human decisions, and thus capable of being remade.

We all have a long way to go here. I, for example, catch myself all the time speaking of leadership in hierarchical ways. One of the things I am currently working on is exorcising the term ‘leadership team’ from my vocabulary. It doesn’t serve any real purpose, and it fundamentally frames leadership as a hierarchical entity.

Another thing I am working on is tackling the thorny issue of positional authority, the idea that the higher your rank in the organization, the more decision-making authority you have. Which is absurd. In every organization I’ve been in, people have positions of authority that cover areas they do not have the education, experience, and training to make decisions in. This is why we need clear decision matrices, empowered process owners, and democratic leadership throughout the organization.

Success/Failure Space, or Why We Can Sometimes Seem Pessimistic

When evaluating a system we can look at it in two ways: we can identify the ways it can fail, or the various ways it can succeed.

Success/Failure Space

These are really just two sides of the same coin, with identifiable points in success space coinciding with analogous points in failure space. “Maximum anticipated success” in success space coincides with “minimum anticipated failure” in failure space.

Like everything, how we frame the question helps us find answers. Certain questions require us to think in terms of failure space, others in success. There are advantages in both, but in risk management, the failure space is incredibly valuable.

It is generally easier to attain concurrence on what constitutes failure than it is to agree on what constitutes success. We may desire a house with great windows, high ceilings, and a nice yard. However, the one we buy can have a termite-infested foundation, bad electrical work, and a roof full of leaks. Whether the house is great is a matter of opinion, but we certainly know it is a failure based on the high repair bills we are going to accrue.

Success tends to be associated with the efficiency of a system, the amount of output, the degree of usefulness. These characteristics are describable by continuous variables, which are not easily modeled in terms of the simple discrete events, such as “water is not hot”, that characterize the failure space. Failure, in particular complete failure, is generally easy to define, whereas the event “success” may be more difficult to tie down.

Theoretically, the number of ways in which a system can fail and the number of ways in which a system can succeed are both infinite. From a practical standpoint, however, there are generally more ways to succeed than to fail, so the population of the failure space is smaller than the population of the success space. This is why risk management focuses on the failure space.

The failure space maps really well to nominal scales for severity, which can be helpful as you build your own scales for risk assessments.

For example, let’s look at a morning commute.

Example of the failure space for a morning commute
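
As an illustrative sketch of that mapping, here is a nominal severity scale applied to a handful of commute failure modes. The labels and the assignments are invented for the example; they are not a standard scale.

```python
# Illustrative nominal severity scale applied to failure modes of a
# morning commute. Labels and assignments are invented for this example.

SEVERITY_SCALE = ["negligible", "marginal", "serious", "critical"]

COMMUTE_FAILURES = {
    "hit every red light": "negligible",          # a few minutes late
    "miss the train": "marginal",                 # late to the first meeting
    "car breaks down on the highway": "serious",  # lose half the workday
    "collision with injury": "critical",          # harm to people
}

# Every failure mode must land somewhere on the nominal scale.
assert set(COMMUTE_FAILURES.values()) <= set(SEVERITY_SCALE)

for failure_mode, severity in COMMUTE_FAILURES.items():
    print(f"{failure_mode}: {severity}")
```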