Defining Values, with Speak Up as an Example

Which espoused values and desired behaviors will best enable an organization to live its quality purpose? There has been a lot of writing and thought on this, and for this post I am going to start with ISO 10018:2020, “Quality management — Guidance for people engagement,” and develop an example of a value to build in your organization.

ISO 10018:2020 identifies six areas:

  • Context of the organization and quality culture
  • Leadership
  • Planning and Strategy
  • Knowledge and Awareness
  • Competence
  • Improvement

This list is pretty well aligned to other models, including the Malcolm Baldrige Excellence Framework (NIST), EFQM Excellence Model, SIQ Model for Performance Excellence, and such tools as the PDA Culture of Quality Assessment.

A concept found in ISO 10018:2020 (and virtually everywhere else) is the handling of errors, mistakes, everyday problems and ‘niggles’, near misses, critical incidents, and failures: ensuring they are reported and recorded honestly and transparently, and that time is taken to discuss them openly and candidly. They are viewed as opportunities for learning how to prevent their recurrence by improving systems, and also as potential protection against larger, more consequential failures or errors. The team takes the time and effort to engage in ‘second order’ problem-solving. ‘First order’ problem-solving is the quick fixing of issues as they appear, so that they stop disrupting normal workflow. ‘Second order’ problem-solving involves identifying the root causes of problems and taking action to address those causes rather than their signs and symptoms. The team takes ownership of mistakes instead of blaming, accusing, or scapegoating individual team members. It proactively seeks to identify errors and problems it may have missed in its processes or outputs by seeking feedback and asking for help from external stakeholders, e.g. colleagues in other teams and customers, and by engaging in frequent experimentation and testing.

We can tackle this in two ways. The first is to define all the points above as a value. The second would be to look at themes for this and the other aspects of robust quality culture and come up with a set of standard values, for example:

  • Accountable
  • Ownership
  • Action Orientated
  • Speak up

Don’t be afraid to take a couple of approaches to get values that really sing in your organization.

Values can be easily written in the following format:

  1. Value: A one or two-word title for each value
  2. Definition: A two or three sentence description that clearly states what this value means in your organization
  3. Desired Behaviors: “I statement” behaviors that simply state activities. The behaviors we choose reinforce the values’ definitions by describing exactly how you want members of the organization to interact.
    • Is this an observable behavior? Can we assess someone’s demonstration of this behavior by watching and/or listening to their interactions? By seeing results?
    • Is this behavior measurable? Can we reliably “score” this behavior? Can we rank how an individual models or demonstrates this behavior?

For the rest of this post, I am going to focus on how you would write a value statement for Speak Up.

First, ask two questions:

  • Specific to your organization’s work environment, how would you define “Speak Up”?
  • What phrases or sentences describe what you mean by “Speak Up”?

Then broaden by considering how fellow leaders and team members would act to demonstrate “Speak Up”, as you defined it.

  • How would leaders and team members act so that, when you observe them, you would see a demonstration of Speaking Up? Note three or four behaviors that would clearly demonstrate your definition.

Next, answer these questions exclusively from your team members’ perspective:

  • How would employees define Speaking Up?
  • How would their definition differ from yours? Why?
  • What behaviors would employees feel they must model to demonstrate Speaking Up properly?
  • How would their modeled behaviors differ from yours? Why?

This process allows us to create common alignment based on a shared purpose.

By going through this process we may end up with a Value that looks like this:

  1. Value: Speak Up
  2. Definition: Problems are reported and recorded honestly and transparently. Employees are not afraid to speak up, identify quality issues, or challenge the status quo for improved quality; they believe management will act on their suggestions. 
  3. Desired Behaviors:
    • I hold myself accountable for raising problems and issues to my team promptly.
    • I attack processes and problems, not people.
    • I work to anticipate and fend off the possibility of failures occurring.
    • I approach admissions of errors and lack of knowledge/skill with support.

Avoiding Logical Pitfalls

When documenting a root cause analysis, a risk assessment, or any of the myriad other technical reports, we are making a logical argument. In this post, I want to examine six common pitfalls to avoid in your writing.

Claiming to follow logically: Non Sequiturs and Genetic Fallacies

Non-sequiturs and genetic fallacies involve statements that are offered in a way that suggests they follow logically one from the other, when in fact no such link exists.

Non-sequiturs (meaning ‘that which does not follow’) often happen when we make causal connections without justification. Genetic fallacies occur when we draw conclusions about something by tracing its origins back, even though no necessary link can be made between the present situation and the claimed original one.

This is a very common mistake and usually stems from poor causal thinking. The best way to address it in an organization is to keep building discipline in thought processes and to document the connections and why things are connected.

Making Assumptions: Begging the Question

Begging the question, that is, assuming the very point at issue, happens a lot in investigations. One of the best ways to avoid it is to ensure a proper problem statement.

Restricting the Options to Two: ‘Black and White’ Thinking

In black and white thinking or the false dichotomy, the arguer gives only two options when other alternatives are possible.

Being Unclear: Equivocation and Ambiguity

Equivocation and ambiguity come in three forms:

  • Lexical: Refers to individual words
  • Referential: Occurs when the context is unclear
  • Syntactical: Results from grammatical confusion

Just think of all the various meanings of validation and you can understand this problem.

Thinking Wishfully

Good problem-solving practices will drive down the tendency to assume conclusions, but wishful thinking probably exists in every organization.

Detecting the Whiff of Red Herrings

Human error is the biggest red herring of them all.

Six logical fallacies

Managing Events Systematically

Being good at problem-solving is critical to success in an organization. I’ve written quite a bit on problem-solving, but here I want to tackle the amount of effort we should apply.

Not all problems should be treated the same, and there are also levels of problems. Together, these two aspects can contribute to some poor problem-solving practices.

It helps to look at problems systematically across our organization. The iceberg analogy is a pretty popular way to break this down, focusing on Events, Patterns, Underlying Structure, and Mental Models.

Iceberg analogy

Events

Events start with the observation or discovery of a situation that is different in some way. What is being observed is a symptom, and we want to quickly identify the problem and then determine the effort needed to address it.

This is where Art Smalley’s Four Types of Problems comes in handy to help us take a risk-based approach to determining our level of effort.

Type 1 problems, Troubleshooting, let us solve problems with a clear understanding of the issue and a clear pathway. Have a flat tire? Fix it. Have a document error? Fix it using good documentation practices.

It is valuable to work your way through common troubleshooting and ensure the appropriate linkages between the different processes, so that problem-solving takes a system-wide approach.

Corrective maintenance is a great example of troubleshooting, as it involves restoring the original state of an asset. It includes documentation, a return to service, and analysis of data. From that analysis, problems are identified which require going deeper into problem-solving. It should have appropriate tie-ins to evaluate when the impact of an asset breaking leads to other problems (for example, impact to product), which can also require additional problem-solving.

It can be helpful for the organization to build decision trees that help folks decide if a given problem stays as troubleshooting or if it also requires escalation to type 2, “gap from standard.”
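As a minimal sketch, such a decision tree can be encoded as a few explicit questions. The questions and field names below are illustrative assumptions, not a standard; a real tree would reflect your own processes:

```python
def triage(problem: dict) -> str:
    """Decide whether a problem stays as troubleshooting (type 1)
    or escalates to gap-from-standard (type 2).

    The keys below are hypothetical examples of questions an
    organization might encode in its own decision tree."""
    # Any potential impact on core requirements escalates immediately.
    if problem.get("impacts_core_requirements"):
        return "type 2: gap from standard"
    # A known issue with a documented fix stays as troubleshooting.
    if problem.get("known_cause") and problem.get("documented_fix"):
        return "type 1: troubleshooting"
    # Unknown cause: we cannot rule out a gap from standard.
    return "type 2: gap from standard"

# A flat tire with a known fix stays as troubleshooting:
print(triage({"known_cause": True, "documented_fix": True}))
```

The point is less the code than the discipline: writing the questions down forces agreement on what escalates and what does not.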

Type 2 problems, gap from standard, mean that the actual result does not meet the expected result and there is a potential of not meeting the core requirements (objectives) of the process, product, or service. This is where we start deeper problem-solving, including root cause analysis.

Please note that troubleshooting is often done within a type 2 problem; we call that a correction. If the bioreactor cannot maintain temperature during a run, that is a type 2 problem, but I am certainly going to apply troubleshooting immediately as well.

Take documentation errors. There is a practice in place, part of good documentation practices, for addressing troubleshooting around documents (how to correct, how to record a comment, etc.). By working through the various ways documentation can go wrong and identifying which ones are solved through troubleshooting alone, without a type 2 problem, we can remove a lot of noise from our system.

Patterns

Core to the quality system is trending: looking for possible signals that require additional effort. Trending can help determine where problems lie and can also drive up the level of effort necessary.
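A trend rule can be as simple as a recurrence check. The sketch below flags any deviation type that recurs beyond an alert limit in a period; the limit and the event encoding are illustrative assumptions, not a statistical control limit:

```python
from collections import Counter

def trend_signal(events, limit_per_period=3):
    """Flag (period, deviation_type) pairs that recur at or above an
    alert limit. `events` is a list of (period, deviation_type) tuples.
    The default limit of 3 is an arbitrary illustrative threshold."""
    counts = Counter(events)
    return [(period, kind, n)
            for (period, kind), n in counts.items()
            if n >= limit_per_period]

events = ([("2023-Q1", "label error")] * 4
          + [("2023-Q1", "temp excursion")])
print(trend_signal(events))  # [('2023-Q1', 'label error', 4)]
```

Real trending programs use statistical rules rather than a fixed count, but even a crude recurrence check turns isolated events into a signal that demands deeper problem-solving.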

Underlying Structure

Root cause analysis is about finding the underlying structure of the problem, and it defines the work applied to a type 2 problem.

Not all problems require the same amount of effort, and type 2 problems really sit on a scale based on consequences that can help drive the level of effort. This should be based on the impact to the organization’s ability to meet the quality objectives, the requirements behind the product or service.

For example, in the pharma world there are three major criteria:

  • safety, rights, or well-being of patients (including subjects and participants, human and non-human)
  • data integrity (includes confidence in the results, outcome, or decision dependent on the data)
  • ability to meet regulatory requirements (which stem from but can be a lot broader than the first two)

These three criteria can be sliced and diced a lot of ways, but serve our example well.

To these three criteria we add a scale of possible harm to derive our criticality; an example can look like this:

  • Critical: The event has resulted in, or is clearly likely to result in, any one of the following outcomes: significant harm to the safety, rights, or well-being of subjects or participants (human or non-human), or patients; compromised data integrity to the extent that confidence in the results, outcome, or decision dependent on the data is significantly impacted; or regulatory action against the company.
  • Major: The event(s), were they to persist over time or become more serious, could potentially, though not imminently, result in any one of the following outcomes: harm to the safety, rights, or well-being of subjects or participants (human or non-human), or patients; or compromised data integrity to the extent that confidence in the results, outcome, or decision dependent on the data is significantly impacted.
  • Minor: An isolated or recurring triggering event that does not otherwise meet the definitions of Critical or Major quality impacts.

Example of Classification of Events in a Pharmaceutical Quality System
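The classification logic above can be sketched as a small function. The harm-level encoding (`"actual"`, `"potential"`, `"none"`) is a hypothetical simplification of the table, not a regulatory definition:

```python
def classify_event(patient_harm: str,
                   data_integrity_harm: str,
                   regulatory_action: bool) -> str:
    """Classify an event as Critical, Major, or Minor.

    Harm levels (an illustrative encoding of the example table):
      'actual'    - has resulted in, or clearly will result in, harm
      'potential' - could result in harm if the event persists
      'none'      - no plausible pathway to harm
    """
    # Actual harm to patients/data, or regulatory action: Critical.
    if regulatory_action or "actual" in (patient_harm, data_integrity_harm):
        return "Critical"
    # Potential, non-imminent harm: Major.
    if "potential" in (patient_harm, data_integrity_harm):
        return "Major"
    # Everything else falls through to Minor.
    return "Minor"

print(classify_event("none", "potential", False))  # Major
```

Encoding the scale this way makes the triage discussion concrete: every event must land in exactly one bucket, and disagreements surface as disagreements about the harm level, not about wording.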

This level of classification will drive the level of effort in the investigation, as well as whether the CAPA addresses underlying structures alone or goes on to address the mental models, thus driving culture change.

Mental Model

Here is where we address building a quality culture. In CAPA lingo this is usually more a preventive action than a corrective action. In the simplest of terms, corrective actions address the underlying structures of the problem in the process or asset where the event happened. Preventive actions deal with underlying structures in other (usually related) processes or assets, or get to the mindsets that allowed the underlying structures to exist in the first place.

Solving Problems Systematically

By applying this system perspective to our problem solving, by realizing that not everything needs a complete rebuild of the foundation, by looking holistically across our systems, we can ensure that we are driving a level of effort to truly build the house of quality.

Is-Is Not Matrix

The Is-Is Not matrix is a great tool for problem-solving that I usually recommend to help frame the problem. It is based on the 5W2H methodology and then asks what is different between the problem and what has been going right.

What
    • Is: What specific objects have the deviation? What is the specific deviation?
    • Is Not: What similar object(s) could reasonably have the deviation, but do not? What other deviations could reasonably be observed, but are not?
Where
    • Is: Where is the object when the deviation is observed (geographically)? Where is the deviation on the object?
    • Is Not: Where else could the object be when the deviation is observed, but is not? Where else could the deviation be located on the object, but is not?
When
    • Is: When was the deviation first observed (in clock and calendar time)? When since that time has the deviation been observed? Any pattern? When, in the object’s history or life cycle, was the deviation first observed?
    • Is Not: When else could the deviation have been observed, but was not? When since that time could the deviation have been observed, but was not? When else, in the object’s history or life cycle, could the deviation have first been observed, but was not?
Extent
    • Is: How many objects have the deviation? What is the size of a single deviation? How many deviations are on each object? What is the trend (in the object, in the number of occurrences of the deviation, in the size of the deviation)?
    • Is Not: How many objects could have the deviation, but do not? What other size could the deviation be, but is not? How many deviations could there be on each object, but are not? What could the trend be, but is not (in the object, in the number of occurrences of the deviation, in the size of the deviation)?
Who
    • Is: Who is involved (avoid blame; stick to roles, shifts, etc.)? To whom, by whom, or near whom does this occur?
    • Is Not: Who is not involved? Is there a trend of a specific role, shift, or another distinguishing factor?

Is-Is Not Matrix

Here is a template for use.
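If you track investigations in software, the worksheet can be represented as a plain data structure. The field names and example entries below are illustrative assumptions, not a prescribed schema:

```python
# Dimensions follow the Is-Is Not matrix described above.
DIMENSIONS = ["What", "Where", "When", "Extent", "Who"]

def blank_is_is_not() -> dict:
    """Return an empty Is / Is-Not worksheet keyed by dimension.
    Each dimension gets its own fresh 'is' and 'is_not' lists."""
    return {dim: {"is": [], "is_not": []} for dim in DIMENSIONS}

# Hypothetical example entries for a particulate deviation:
matrix = blank_is_is_not()
matrix["What"]["is"].append("Lot 42 vials show particulates")
matrix["What"]["is_not"].append("Lot 41, same line, shows none")
```

The contrast between the two columns is where the analytical value lives: each `is_not` entry narrows the space of plausible causes.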

Treating All Investigations the Same

Stephanie Gaulding, a colleague in the ASQ, recently wrote an excellent post for Redica on “How to Avoid Three Common Deviation Investigation Pitfalls,” a subject near and dear to my heart.

The three pitfalls Stephanie gives are:

  1. Not getting to root cause
  2. Inadequate scoping
  3. Treating investigations the same

All three are right on the nose, and I’ve posted a bunch on the topics. Definitely go and read the post.

What I want to delve deeper into is Stephanie’s point that “Deviation systems should also be built to triage events into risk-based categories with sufficient time allocated to each category to drive risk-based investigations and focus the most time and effort on the highest risk and most complex events.”

That is an accurate breakdown, and exactly what regulators are asking for. However, I think the implementation of risk-based categories can sometimes lead to confusion, so it is worth spending some time unpacking the concept.

Risk is the possible effect of uncertainty. It is often described in terms of risk sources, potential events, their consequences, and their likelihoods (which is where we get likelihood × severity from).

But there are many types of uncertainty. IEC 31010, “Risk management — Risk assessment techniques,” lists the following examples:

  • uncertainty as to the truth of assumptions, including presumptions about how people or systems might behave
  • variability in the parameters on which a decision is to be based
  • uncertainty in the validity or accuracy of models which have been established to make predictions about the future
  • events (including changes in circumstances or conditions) whose occurrence, character or consequences are uncertain
  • uncertainty associated with disruptive events
  • the uncertain outcomes of systemic issues, such as shortages of competent staff, that can have wide-ranging impacts which cannot be clearly defined
  • lack of knowledge which arises when uncertainty is recognized but not fully understood
  • unpredictability
  • uncertainty arising from the limitations of the human mind, for example in understanding complex data, predicting situations with long-term consequences or making bias-free judgments.

Most of these are, at best, only obliquely relevant to risk-categorizing deviations.

So it is important to first build the risk categories on consequences. At the end of the day, these are the consequences that matter in the pharmaceutical/medical device world:

  • harm to the safety, rights, or well-being of patients, subjects or participants (human or non-human)
  • compromised data integrity so that confidence in the results, outcome, or decision dependent on the data is impacted

These are some pretty hefty areas, and really hard for the average user to get their mind around. This is why building good requirements and understanding how systems work are so critical. Building breadcrumbs into our procedures to let folks know which deviations are in which category is a good practice.

There is nothing wrong with recognizing that different areas have different decision trees. Harm to safety in GMP can mean different things than safety in a GLP study.

The second place I’ve seen this go wrong has to do with likelihood, and with folks confusing the symptom with the problem, and the problem with the cause.


All deviations start with a situation that is different in some way from expected results. Deviations begin with the symptom and, through analysis, end up at a root cause. So when building your decision tree, ensure it looks at symptoms and how the symptom is observed. That is surprisingly hard to do, which is why a lot of deviation criticality scales tend to focus only on severity.
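A symptom-keyed triage table makes the distinction concrete: the rules match on what was observed and where it was observed, never on an assumed cause. The symptoms, contexts, and categories below are hypothetical examples:

```python
# Each rule: (observed symptom, where it was observed, category).
# These are illustrative entries, not a regulatory standard.
TRIAGE_RULES = [
    ("out-of-specification result", "release testing", "Critical"),
    ("out-of-specification result", "in-process check", "Major"),
    ("documentation error", "batch record review", "Minor"),
]

def triage_deviation(symptom: str, observed_in: str) -> str:
    """Assign a criticality category from the symptom as observed.
    Unmatched symptoms default to a conservative category pending
    review, rather than silently dropping to Minor."""
    for rule_symptom, rule_context, category in TRIAGE_RULES:
        if symptom == rule_symptom and observed_in == rule_context:
            return category
    return "Major"

print(triage_deviation("out-of-specification result", "release testing"))
```

Notice that the same symptom lands in different categories depending on where it is observed; that context-sensitivity is exactly what severity-only scales miss.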

4 major types of symptoms