Navigating Metrics in Quality Management: Leading vs. Lagging Indicators, KPIs, KRIs, KBIs, and Their Role in OKRs

Understanding how to measure success and risk is critical for organizations aiming to achieve strategic objectives. As we develop Quality Plans and Metric Plans, it is important to explore the nuances of leading and lagging metrics, define Key Performance Indicators (KPIs), Key Behavioral Indicators (KBIs), and Key Risk Indicators (KRIs), and explain how these concepts intersect with Objectives and Key Results (OKRs).

Leading vs. Lagging Metrics: A Foundation

Leading metrics predict future outcomes by measuring activities that drive results. They are proactive, forward-looking, and enable real-time adjustments. For example, tracking employee training completion rates (leading) can predict fewer operational errors.

Lagging metrics reflect historical performance, confirming whether quality objectives were achieved. They are reactive and often tied to outcomes like batch rejection rates or the number of product recalls. For example, in a pharmaceutical quality system, lagging metrics might include the annual number of regulatory observations, the percentage of batches released on time, or the rate of customer complaints related to product quality. These metrics provide a retrospective view of the quality system’s effectiveness, allowing organizations to assess their performance against predetermined quality goals and industry standards. They offer limited opportunities for mid-course corrections.

The interplay between leading and lagging metrics ensures organizations balance anticipation of future performance with accountability for past results.

Defining KPIs, KRIs, and KBIs

Key Performance Indicators (KPIs)

KPIs measure progress toward Quality System goals. They are outcome-focused and often tied to strategic objectives.

  • Leading KPI Example: Process Capability Index (Cpk) – This measures how well a process can produce output within specification limits. A higher Cpk could indicate fewer products requiring disposition (a calculation sketch follows this list).
  • Lagging KPI Example: Cost of Poor Quality (COPQ) – The total cost associated with products that don’t meet quality standards, including testing and disposition costs.
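
To make the Cpk example concrete, here is a minimal sketch of the standard calculation, Cpk = min((USL − mean)/3σ, (mean − LSL)/3σ); the specification limits and measurements are invented for illustration:

```python
import statistics

def cpk(samples, lsl, usl):
    """Process Capability Index: the tighter of the two one-sided capability ratios."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# Hypothetical fill-weight measurements (g) against a 95-105 g specification
measurements = [99.2, 100.1, 100.8, 99.7, 100.4, 99.9, 100.6, 100.2]
print(f"Cpk = {cpk(measurements, lsl=95.0, usl=105.0):.2f}")
# A Cpk around 1.33 or higher is commonly read as a capable process.
```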

Key Risk Indicators (KRIs)

KRIs monitor risks that could derail objectives. They act as early warning systems for potential threats. Leading KRIs should trigger risk assessments and/or pre-defined corrections when thresholds are breached (a minimal sketch of such a threshold check follows the examples below).

  • Leading KRI Example: Unresolved CAPAs (Corrective and Preventive Actions) – Tracks open corrective actions for past deviations. A rising number signals unresolved systemic issues that could lead to recurrence.
  • Lagging KRI Example: Repeat Deviation Frequency – Tracks recurring deviations of the same type. Highlights ineffective CAPAs or systemic weaknesses.
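
As a simple illustration of the threshold behavior described above, this sketch checks an open-CAPA count against a pre-defined limit; the threshold value and the data structure are assumptions for the example, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class KriReading:
    name: str
    value: int
    threshold: int

    def breached(self) -> bool:
        # A leading KRI breaches when the observed value reaches its pre-defined limit
        return self.value >= self.threshold

# Hypothetical weekly reading of a leading KRI: open CAPAs
open_capas = KriReading(name="Open CAPAs", value=14, threshold=10)

if open_capas.breached():
    print(f"{open_capas.name} = {open_capas.value} (limit {open_capas.threshold}): "
          f"trigger risk assessment / pre-defined correction")
else:
    print(f"{open_capas.name} within limit")
```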

Key Behavioral Indicators (KBIs)

KBIs track employee actions and cultural alignment. They link behaviors to Quality System outcomes.

  • Leading KBI Example: Frequency of safety protocol adherence (predicts fewer workplace accidents).
  • Lagging KBI Example: Employee turnover rate (reflects past cultural challenges).

Applying Leading and Lagging Metrics to KPIs, KRIs, and KBIs

Each metric type can be mapped to leading or lagging dimensions:

  • KPIs: Leading KPIs drive action while lagging KPIs validate results
  • KRIs: Leading KRIs identify emerging risks while lagging KRIs analyze past incidents
  • KBIs: Leading KBIs encourage desired behaviors while lagging KBIs assess outcomes

Oversight Framework for the Validated State

Below is an example of applying this framework to the FUSE(P) program.

| Category | Metric Type | FDA-Aligned Example | Purpose | Data Source |
|---|---|---|---|---|
| KPI | Leading | % completion of Stage 3 CPV protocols | Proactively ensures continued process verification aligns with validation master plans | Validation tracking systems |
| KPI | Lagging | Annual audit findings related to validation drift | Confirms adherence to regulator’s “state of control” requirements | Internal/regulatory audit reports |
| KRI | Leading | Open CAPAs linked to FUSE(P) validation gaps | Identifies unresolved systemic risks affecting process robustness | Quality management system (QMS) |
| KRI | Lagging | Repeat deviations in validated batches | Reflects failure to address root causes post-validation | Deviation management systems |
| KBI | Leading | Cross-functional review of process monitoring trends | Encourages proactive behavior to maintain the validated state | Meeting minutes, action logs |
| KBI | Lagging | Reduction in human errors during requalification | Validates effectiveness of training/behavioral controls | Training records, deviation reports |

This framework operationalizes a focus on data-driven, science-based programs while closing gaps cited in recent Warning Letters.


Goals vs. OKRs: Alignment with Metrics

Goals are broad, aspirational targets (e.g., “Improve product quality”). OKRs (Objectives and Key Results) break goals into actionable, measurable components:

  • Objective: Reduce manufacturing defects.
  • Key Results:
    • Decrease batch rejection rate from 5% to 2% (lagging KPI).
    • Train 100% of production staff on updated protocols by Q2 (leading KPI).
    • Reduce repeat deviations by 30% (lagging KRI).

KPIs, KRIs, and KBIs operationalize OKRs by quantifying progress and risks. For instance, a leading KRI like “number of open CAPAs” (Corrective and Preventive Actions) informs whether the OKR to reduce defects is on track.
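
One way to picture how KPIs and KRIs quantify key results is a small tracking structure; the class, field names, and figures below are illustrative assumptions rather than any standard schema:

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    description: str
    metric_type: str   # e.g. "leading KPI", "lagging KPI", "lagging KRI"
    baseline: float
    target: float
    current: float

    def progress(self) -> float:
        """Fraction of the distance from baseline to target covered so far."""
        span = self.target - self.baseline
        return 0.0 if span == 0 else (self.current - self.baseline) / span

objective = "Reduce manufacturing defects"
key_results = [
    KeyResult("Batch rejection rate (%)", "lagging KPI", baseline=5.0, target=2.0, current=3.5),
    KeyResult("Production staff trained on updated protocols (%)", "leading KPI", baseline=0, target=100, current=60),
    KeyResult("Reduction in repeat deviations (%)", "lagging KRI", baseline=0, target=30, current=10),
]

print(objective)
for kr in key_results:
    print(f"  {kr.description} [{kr.metric_type}]: {kr.progress():.0%} of the way to target")
```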


More Pharmaceutical Quality System Examples

Leading Metrics

  • KPI: Percentage of staff completing GMP training (predicts adherence to quality standards).
  • KRI: Number of unresolved deviations in the CAPA system (predicts compliance risks).
  • KBI: Daily equipment calibration checks (predicts fewer production errors).

Lagging Metrics

  • KPI: Batch rejection rate due to contamination (confirms quality failures).
  • KRI: Regulatory audit findings (reflects past non-compliance).
  • KBI: Employee turnover in quality assurance roles (indicates cultural or procedural issues).

| Metric Type | Purpose | Leading Example | Lagging Example |
|---|---|---|---|
| KPI | Measure performance outcomes | Training completion rate | Quarterly profit margin |
| KRI | Monitor risks | Open CAPAs | Regulatory violations |
| KBI | Track employee behaviors | Safety protocol adherence frequency | Employee turnover rate |

Building Effective Metrics

  1. Align with Strategy: Ensure metrics tie to Quality System goals. For OKRs, select KPIs/KRIs that directly map to key results.
  2. Balance Leading and Lagging: Use leading indicators to drive proactive adjustments and lagging indicators to validate outcomes.
  3. Pharmaceutical Focus: In quality systems, prioritize metrics like right-first-time rate (leading KPI) and repeat deviation rate (lagging KRI) to balance prevention and accountability (a calculation sketch follows this list).
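
As a rough sketch of how two such metrics might be computed from batch and deviation records, assuming an invented record layout and numbers:

```python
# Hypothetical batch records: (batch_id, released_right_first_time)
batches = [("B-101", True), ("B-102", True), ("B-103", False), ("B-104", True)]

# Hypothetical deviation records: (deviation_id, root_cause_category)
deviations = [("D-1", "labeling"), ("D-2", "mixing"), ("D-3", "labeling"), ("D-4", "labeling")]

# Leading KPI: right-first-time rate
rft_rate = sum(1 for _, rft in batches if rft) / len(batches)

# Lagging KRI: share of deviations whose root-cause category has occurred before
seen, repeats = set(), 0
for _, category in deviations:
    if category in seen:
        repeats += 1
    seen.add(category)
repeat_rate = repeats / len(deviations)

print(f"Right-first-time rate: {rft_rate:.0%}")     # 75%
print(f"Repeat deviation rate: {repeat_rate:.0%}")  # 50%
```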

By integrating KPIs, KRIs, and KBIs into OKRs, organizations create a feedback loop that connects daily actions to long-term success while mitigating risks. This approach transforms abstract goals into measurable, actionable pathways—a critical advantage in regulated industries like pharmaceuticals.

Understanding these distinctions empowers teams to not only track performance but also shape it proactively, ensuring alignment with both immediate priorities and strategic vision.

Types of Work, an Explainer

The concepts of work-as-imagined, work-as-prescribed, work-as-done, work-as-disclosed, and work-as-reported have been discussed and developed primarily within the field of human factors and ergonomics. These concepts have been elaborated by various experts, including Steven Shorrock, who has written extensively on the topic and whose work I cannot recommend enough.

  • Work-as-Imagined: This concept refers to how people think work should be done or imagine it is done. It is often used by policymakers, regulators, and managers who design work processes without direct involvement in the actual work.
  • Work-as-Prescribed: This involves the formalization of work through rules, procedures, and guidelines. It is how work is officially supposed to be done, often documented in organizational standards.
  • Work-as-Done: This represents the reality of how work is actually performed in practice, including the adaptations and adjustments made by workers to meet real-world demands.
  • Work-as-Disclosed: Also known as work-as-reported or work-as-explained, this is how people describe or report their work, which may differ from both work-as-prescribed and work-as-done due to various factors, including safety and organizational culture.
  • Work-as-Reported: This term is often used interchangeably with work-as-disclosed and refers to the accounts of work provided by workers, which may be influenced by what they believe should be communicated to others.
  • Work-as-Measured: The quantifiable aspects of work that are tracked and assessed, often focusing on performance metrics and outcomes.

| Aspect | Definition | Purpose | Characteristics |
|---|---|---|---|
| Work-as-Done | Actual activities performed in the workplace. | Achieve objectives in real-world conditions, adapting as necessary. | Adaptive, context-dependent, often involves improvisation. |
| Work-as-Imagined | How work is thought to be done, based on assumptions and expectations. | Conceptual understanding and planning of work. | Based on assumptions, may not align with reality. |
| Work-as-Instructed | Direct instructions given to workers on task performance. | Ensure tasks are performed correctly and efficiently. | Clear, direct, and often specific to tasks. |
| Work-as-Prescribed | Formalized work according to rules, policies, and procedures. | Standardize and control work for compliance and safety. | Detailed, formal, assumed to be the correct way to work. |
| Work-as-Reported | Description of work as shared verbally or in writing. | Communicate work processes and outcomes. | May not fully reflect reality, influenced by audience and context. |
| Work-as-Measured | Quantitative assessment of work performance. | Evaluate work efficiency and effectiveness. | Objective, based on metrics and data. |

| Aspect | Work-as-Measured | Work-as-Judged |
|---|---|---|
| Definition | Quantification or classification of aspects of work. | Evaluation or assessment of work based on criteria or standards. |
| Purpose | To assess, understand, and evaluate work performance using metrics and data. | To form opinions or make decisions about work quality or effectiveness. |
| Characteristics | Objective and subjective measures, often numerical; can lack stability and validity. | Subjective, influenced by personal biases, experiences, and expectations. |
| Agency | Conducted by supervisors, managers, or specialists in various fields. | Performed by individuals or groups with authority to evaluate work performance. |
| Granularity | Can range from coarse (e.g., overall productivity) to fine (e.g., specific actions). | Typically broader, considering overall performance rather than specific details. |
| Influence | Affected by technological, social, and regulatory contexts. | Affected by preconceived notions and potential biases. |

Further Reading

Culture of Quality Initiatives

At the heart of culture is a set of behaviors and beliefs that indicate what is important to the organization and drive all decision-making. Culture, and weaknesses within it, is at the root of many problems, and improving quality culture is an essential part of continuous improvement.

Culture is often the true reason for the behavior of people within an organization, yet it can be deeply unconscious and not rationally recognized by most members. These ideas are so integrated that they can be difficult to confront or debate, and thus difficult to change.

How we Build Quality

A critical part of improving culture is being able to measure the current situation. A great place to start is a survey-based approach to gather input from employees on the current culture of quality. Some of the topic areas can include:

Some of the feedback methods to utilize once you have a baseline can include:

| Feedback Method | When to use |
|---|---|
| Focus Groups | You want detailed feedback on a number of issues AND employees are generally willing to speak on the record |
| Short, targeted surveys | You have a number of close-ended findings to test AND your organization is not suffering survey fatigue |
| Informal conversations | You want to gain context on a few data points AND you have a trusted circle |

As you build improvements, you will introduce better metrics of success.

Once you have a good set of findings, select 2-3 key ones and design experiments.

Pitfalls and Keys to Success for Experiments in Quality Culture
(Figure: Experiment for Success)

Measuring Training Effectiveness for Organizational Performance

When designing training, we want to make sure four things happen:

  • Training is used correctly as a solution to a performance problem
  • Training has the right content, objectives, and methods
  • Trainees are sent to training for which they have the basic skills, prerequisite skills, or confidence needed to learn
  • Training delivers the expected learning

Training is a useful lever in organizational change and improvement. We want to make sure the training drives organizational metrics. And like everything else, you need to be able to measure it to improve it.

The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training), and while other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, this model has been in use for over 50 years, evolving through application by learning and development professionals around the world. It is the most recognized method of evaluating the effectiveness of training programs. The model has stood the test of time and became popular due to its ability to break down a complex subject into manageable levels. It accommodates any style of training, both informal and formal.

Level 1: Reaction

Kirkpatrick’s first level measures the learners’ reaction to the training. A Level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed the time spent and found it valuable. Level 1 evaluations, euphemistically called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation will concentrate on three elements: course content, the physical environment, and the instructor’s presentation skills.

Level 2: Learning

Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is through the use of a pre- and posttest. Pre- and posttests are key to ascertaining whether the participants learned anything in the learning event. Identical pre- and posttests are essential because the difference between the pre- and posttest scores indicates the amount of learning that took place. Without a pretest, one does not know if the trainees knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.
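
A minimal sketch, assuming hypothetical score data, of using identical pre- and posttests to estimate how much learning took place:

```python
# Hypothetical identical pre- and posttest scores (out of 100) per trainee
pre_scores  = {"trainee_a": 55, "trainee_b": 70, "trainee_c": 60}
post_scores = {"trainee_a": 85, "trainee_b": 90, "trainee_c": 75}

for trainee, pre in pre_scores.items():
    post = post_scores[trainee]
    gain = post - pre
    # Normalized gain: share of the available headroom that was actually learned
    normalized = gain / (100 - pre) if pre < 100 else 0.0
    print(f"{trainee}: pre {pre}, post {post}, gain {gain} ({normalized:.0%} of possible gain)")
```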

Level 3: Behavior

Level 3 measures whether the learning is transferred into practice in the workplace.

Level 4: Results

Measures the effect on the business environment. Do we meet objectives?

4 Levels of Training Effectiveness

Level 1: Reaction

Characteristics: Reaction evaluation is how the delegates felt, and their personal reactions to the training or learning experience, for example:

  • Did the trainee consider the training relevant?
  • Did they like the venue, equipment, timing, domestics, etc.?
  • Did the trainees like and enjoy the training?
  • Was it a good use of their time?
  • Level of participation
  • Ease and comfort of experience

Examples:

  • Feedback forms based on subjective personal reaction to the training experience
  • Verbal reaction which can be analyzed
  • Post-training surveys or questionnaires
  • Online evaluation or grading by delegates
  • Subsequent verbal or written reports given by delegates to managers back at their jobs
  • Typically ‘happy sheets’

Level 2: Learning

Characteristics: Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience:

  • Did the trainees learn what was intended to be taught?
  • Did the trainees experience what was intended for them to experience?
  • What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?

Examples:

  • Typically assessments or tests before and after the training
  • Interview or observation can be used before and after, although this is time-consuming and can be inconsistent
  • Methods of assessment need to be closely related to the aims of the learning
  • Reliable, clear scoring and measurements need to be established
  • Hard-copy, electronic, online, or interview-style assessments are all possible

Level 3: Behavior

Characteristics: Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior; this can be measured immediately and several months after the training, depending on the situation:

  • Did the trainees put their learning into effect when back on the job?
  • Were the relevant skills and knowledge used?
  • Was there noticeable and measurable change in the activity and performance of the trainees when back in their roles?
  • Would the trainee be able to transfer their learning to another person? Is the trainee aware of their change in behavior, knowledge, and skill level?
  • Was the change in behavior and new level of knowledge sustained?

Examples:

  • Observation and interview over time are required to assess change, relevance of change, and sustainability of change
  • Assessments need to be designed to reduce subjective judgment of the observer
  • 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgment as to change after training, and this can be analyzed for groups of respondents and trainees
  • Online and electronic assessments are more difficult to incorporate – assessments tend to be more successful when integrated within existing management and coaching protocols

Level 4: Results

Characteristics: Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee – it is the acid test:

  • Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, retention, etc.
  • The challenge is to identify which measures relate to the trainee’s input and influence, and how. Therefore it is important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured.

Examples:

  • This process overlays normal good management practice – it simply needs linking to the training input
  • For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training

Example in Practice – CAPA

When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA program, we have two key aims, against which we can apply measures (a calculation sketch follows the table).

| Behavior | Measure |
|---|---|
| Investigate to find root cause | % recurring issues |
| Implement actions to eliminate root cause | Preventive to corrective action ratio |
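
A rough sketch of how these two top-level measures might be computed from a CAPA log; the record format and values are assumptions for illustration:

```python
# Hypothetical CAPA log entries: (capa_id, action_type, root_cause, recurred)
capa_log = [
    ("CAPA-01", "corrective", "labeling error",   True),
    ("CAPA-02", "preventive", "mixing variance",  False),
    ("CAPA-03", "corrective", "labeling error",   True),
    ("CAPA-04", "corrective", "filter integrity", False),
    ("CAPA-05", "preventive", "training gap",     False),
]

# % recurring issues: share of CAPAs whose root cause came back
recurring = sum(1 for _, _, _, recurred in capa_log if recurred)
pct_recurring = recurring / len(capa_log)

# Preventive-to-corrective ratio: how much of the program is proactive
preventive = sum(1 for _, action, _, _ in capa_log if action == "preventive")
corrective = sum(1 for _, action, _, _ in capa_log if action == "corrective")
ratio = preventive / corrective if corrective else float("inf")

print(f"% recurring issues: {pct_recurring:.0%}")       # 40%
print(f"Preventive-to-corrective ratio: {ratio:.2f}")   # 0.67
```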

To support each of these top-level measures we define a set of behavior indicators, such as cycle time, right the first time, etc. To support these, a review rubric is implemented.

Our four levels to measure training effectiveness will now look like this:

| Level | Measure |
|---|---|
| Level 1: Reaction | Personal action plan and a happy sheet |
| Level 2: Learning | Completion of the Rubric on a sample event |
| Level 3: Behavior | Continued performance and improvement against the Rubric and the key review behavior indicators |
| Level 4: Results | Improvements in % of recurring issues and an increase in preventive to corrective actions |

This is all about measuring the effectiveness of the transfer of behaviors.

| Strong Signals of Transfer Expectations in the Organization | Signals that Weaken Transfer Expectations in the Organization |
|---|---|
| Training participants are required to attend follow-up sessions and other transfer interventions. What it indicates: individuals and teams are committed to the change and to obtaining the intended benefits. | Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization. What it indicates: the key factor for a trainee is attendance, not behavior change. |
| The training description specifies transfer goals (e.g. “Trainee increases CAPA success by driving down recurrence of root cause”). What it indicates: the organization has a clear vision and expectation of what the training should accomplish. | The training description roughly outlines training goals (e.g. “Trainee improves their root cause analysis skills”). What it indicates: the organization only has a vague idea of what the training should accomplish. |
| Supervisors take time to support transfer (e.g. through pre- and post-training meetings). Transfer support is part of regular agendas. What it indicates: transfer is considered important in the organization and supported by supervisors and managers, all the way to the top. | Supervisors do not invest in transfer support. Transfer support is not part of the supervisor role. What it indicates: transfer is not considered very important in the organization. Managers have more important things to do. |
| Each training ends with careful planning of individual transfer intentions. What it indicates: defining transfer intentions is a central component of the training. | Transfer planning at the end of the training does not take place, or happens only sporadically. What it indicates: defining transfer intentions is not (or not an essential) part of the training. |

Good training, and thus good and consistent transfer, builds that into the process. It is why I am such a fan of utilizing a Rubric to drive consistent performance.

Management Review – a Structured Analysis of Reality

What is Management Review?

ISO9001:2015 states “Top management shall review the organization’s quality management system, at planned intervals, to ensure its continuing suitability, adequacy, effectiveness and alignment with the strategic direction of the organization.”

Management review takes inputs of system performance and converts them into outputs that drive improvement.

Just about every standard and guidance aligns with the ISO9001:2015 structure.

The Use of PowerPoint in Management Review

Everyone makes fun of PowerPoint, and yet it is still with us. As a mechanism for formal communication it is the go-to form, and I do not believe that will change anytime soon.

One of the best pieces of research on PowerPoint and management review is Kaplan’s examination of PowerPoint slides used in a manufacturing firm. Kaplan found that generating slides was “embedded in the discursive practices of strategic knowledge production” and made up “part of the epistemic machinery that undergirds the knowledge production culture.” Further, “the affordances of PowerPoint,” Kaplan pointed out, “enabled the difficult task of collaborating to negotiate meaning in an uncertain environment, creating spaces for discussion, making recombinations possible, [and] allowing for adjustments as ideas evolved”. She concluded that PowerPoint slide decks should be regarded not as merely effective or ineffective reports but rather as an essential part of strategic decision making.

Kaplan’s findings are not isolated; there is a broad wealth of relevant research in the fields of genre and composition studies, as well as research on material objects, that draws similar conclusions. PowerPoint, as a method of formal communication, can be effective.

Management Review as Formal Communication

Management review is a formal communication, and by understanding how these formal communications participate in the fixed and emergent conditions of knowledge work as prescribed, being-composed, and materialized-texts-in-use, we can understand how to better structure our knowledge sharing.

Management review mediates between Work-As-Imagined and Work-As-Done.

As-Prescribed

The quality management reviews have “fixity” and bring a reliable structure to the knowledge-work process by specifying what needs to become known and by when, forming a step-by-step learning process.

As-Being-Composed

Quality management always starts with a plan for activities, but in the process of providing analysis through management review, the organization learns much more about the topic, discovers new ideas, and uncovers inconsistencies in its thinking that cause it to step back, refine, and sometimes radically change the plan. By engaging in the writing of these presentations we make tacit knowledge explicit.

A successful management review imagines the audience who needs the information, asks questions, raises objections, and brings to the presentation a body of experience and a perspective that differs from that of the party line. Management review should be a process of dialogue that draws inferences and constructs relationships between ideas, applies logic to build complex arguments, reformulates ideas, reflects on what is already known, and comes to understand the material in a new way.

As-Materialized

Management review is a textually mediated conversation that enables knowledge integration within and across groups in, and outside of, the organization. The records of management review are focal points around which users can discuss what they have learned, discover diverse understandings, and depersonalize debate. Management review records drive the process of incorporating the different domain-specific knowledge of various decision makers and experts into some form of systemic group knowledge, and of applying that knowledge to decision making and action.

Sources

  • Alvesson, M. (2004). Knowledge work and knowledge-intensive firms. Oxford University Press.
  • Bazerman, C. (2003). What is not institutionally visible does not count: The problem of making activity assessable, accountable, and plannable. In C. Bazerman & D. Russell (Eds.), Writing selves/writing societies: Research from activity perspectives (pp. 428–482). WAC Clearinghouse.
  • Edmondson, A. C. (2012). Teaming: How organizations learn, innovate, and compete in the knowledge economy. Jossey-Bass.
  • Kaplan, S. (2011). Strategy and PowerPoint: An inquiry into the epistemic culture and machinery of strategy making. Organization Science, 22(2), 320–346.
  • Levitin, D. J. (2014). The organized mind: Thinking straight in the age of information overload. Penguin.
  • Mengis, J. (2007). Integrating knowledge through communication: The case of experts and decision makers. In Proceedings of the 2007 International Conference on Organizational Knowledge, Learning, and Capabilities (pp. 699–720). OLKC. Retrieved from https://warwick.ac.uk/fac/soc/wbs/conf/olkc/archive/olkc2/papers/mengis.pdf