Navigating Metrics in Quality Management: Leading vs. Lagging Indicators, KPIs, KRIs, KBIs, and Their Role in OKRs

Understanding how to measure success and risk is critical for organizations aiming to achieve strategic objectives. As we develop Quality Plans and Metrics Plans, it is important to explore the nuances of leading and lagging metrics; define Key Performance Indicators (KPIs), Key Behavioral Indicators (KBIs), and Key Risk Indicators (KRIs); and explain how these concepts intersect with Objectives and Key Results (OKRs).

Leading vs. Lagging Metrics: A Foundation

Leading metrics predict future outcomes by measuring activities that drive results. They are proactive, forward-looking, and enable real-time adjustments. For example, tracking employee training completion rates (leading) can predict fewer operational errors.

Lagging metrics reflect historical performance, confirming whether quality objectives were achieved. They are reactive and often tied to outcomes like batch rejection rates or the number of product recalls. For example, in a pharmaceutical quality system, lagging metrics might include the annual number of regulatory observations, the percentage of batches released on time, or the rate of customer complaints related to product quality. These metrics provide a retrospective view of the quality system’s effectiveness, allowing organizations to assess their performance against predetermined quality goals and industry standards. They offer limited opportunities for mid-course corrections.

The interplay between leading and lagging metrics ensures organizations balance anticipation of future performance with accountability for past results.

Defining KPIs, KRIs, and KBIs

Key Performance Indicators (KPIs)

KPIs measure progress toward Quality System goals. They are outcome-focused and often tied to strategic objectives.

  • Leading KPI Example: Process Capability Index (Cpk) – This measures how well a process can produce output within specification limits. A higher Cpk could indicate fewer products requiring disposition.
  • Lagging KPI Example: Cost of Poor Quality (COPQ) – The total cost associated with products that don’t meet quality standards, including testing and disposition costs.
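A quick way to see how Cpk works: it is the distance from the process mean to the nearer specification limit, expressed in units of three standard deviations. A minimal sketch in Python, where the fill-volume readings and specification limits are purely illustrative:

```python
import statistics

def cpk(samples, lsl, usl):
    """Process capability index: minimum distance from the process mean
    to either specification limit, in units of three standard deviations."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# Illustrative fill-volume readings (mL) against 95-105 mL spec limits
readings = [99.8, 100.2, 100.1, 99.9, 100.3, 99.7, 100.0, 100.1]
print(round(cpk(readings, lsl=95.0, usl=105.0), 2))
```

A Cpk of 1.33 or higher is a common benchmark for a capable process; tracking this as a leading KPI lets you act before out-of-specification product is actually made.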

Key Risk Indicators (KRIs)

KRIs monitor risks that could derail objectives. They act as early warning systems for potential threats. Leading KRIs should trigger risk assessments and/or pre-defined corrections when thresholds are breached.

  • Leading KRI Example: Unresolved CAPAs (Corrective and Preventive Actions) – Tracks open corrective actions for past deviations. A rising number signals unresolved systemic issues that could lead to recurrence.
  • Lagging KRI Example: Repeat Deviation Frequency – Tracks recurring deviations of the same type. Highlights ineffective CAPAs or systemic weaknesses.
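The threshold-trigger logic described above can be sketched very simply; the KRI names and limits here are hypothetical placeholders for whatever your metrics plan actually defines:

```python
# Hypothetical thresholds; real limits come from your metrics plan.
KRI_THRESHOLDS = {
    "open_capas": 10,          # leading: unresolved corrective actions
    "repeat_deviations": 3,    # lagging: recurrences of the same type
}

def breached_kris(current_values, thresholds=KRI_THRESHOLDS):
    """Return the KRIs whose current value meets or exceeds its threshold,
    i.e. the ones that should trigger a risk assessment or pre-defined
    correction."""
    return [name for name, value in current_values.items()
            if value >= thresholds.get(name, float("inf"))]

print(breached_kris({"open_capas": 12, "repeat_deviations": 2}))
# → ['open_capas']
```

The point of the sketch is the design choice: thresholds live in one reviewable place, and a breach produces an explicit, auditable trigger rather than relying on someone noticing a chart.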

Key Behavioral Indicators (KBIs)

KBIs track employee actions and cultural alignment. They link behaviors to Quality System outcomes.

  • Leading KBI Example: Frequency of safety protocol adherence (predicts fewer workplace accidents).
  • Lagging KBI Example: Employee turnover rate (reflects past cultural challenges).

Applying Leading and Lagging Metrics to KPIs, KRIs, and KBIs

Each metric type can be mapped to leading or lagging dimensions:

  • KPIs: Leading KPIs drive action, while lagging KPIs validate results.
  • KRIs: Leading KRIs identify emerging risks, while lagging KRIs analyze past incidents.
  • KBIs: Leading KBIs encourage desired behaviors, while lagging KBIs assess outcomes.

Oversight Framework for the Validated State

An example of applying this framework to the FUSE(P) program:

| Category | Metric Type | FDA-Aligned Example | Purpose | Data Source |
| --- | --- | --- | --- | --- |
| KPI | Leading | % completion of Stage 3 CPV protocols | Proactively ensures continued process verification aligns with validation master plans | Validation tracking systems |
| KPI | Lagging | Annual audit findings related to validation drift | Confirms adherence to regulator’s “state of control” requirements | Internal/regulatory audit reports |
| KRI | Leading | Open CAPAs linked to FUSE(P) validation gaps | Identifies unresolved systemic risks affecting process robustness | Quality management systems (QMS) |
| KRI | Lagging | Repeat deviations in validated batches | Reflects failure to address root causes post-validation | Deviation management systems |
| KBI | Leading | Cross-functional review of process monitoring trends | Encourages proactive behavior to maintain the validated state | Meeting minutes, action logs |
| KBI | Lagging | Reduction in human errors during requalification | Validates effectiveness of training/behavioral controls | Training records, deviation reports |

This framework operationalizes a focus on data-driven, science-based programs while closing gaps cited in recent Warning Letters.


Goals vs. OKRs: Alignment with Metrics

Goals are broad, aspirational targets (e.g., “Improve product quality”). OKRs (Objectives and Key Results) break goals into actionable, measurable components:

  • Objective: Reduce manufacturing defects.
  • Key Results:
    • Decrease batch rejection rate from 5% to 2% (lagging KPI).
    • Train 100% of production staff on updated protocols by Q2 (leading KPI).
    • Reduce repeat deviations by 30% (lagging KRI).

KPIs, KRIs, and KBIs operationalize OKRs by quantifying progress and risks. For instance, a leading KRI like “number of open CAPAs” (Corrective and Preventive Actions) informs whether the OKR to reduce defects is on track.


More Pharmaceutical Quality System Examples

Leading Metrics

  • KPI: Percentage of staff completing GMP training (predicts adherence to quality standards).
  • KRI: Number of unresolved deviations in the CAPA system (predicts compliance risks).
  • KBI: Daily equipment calibration checks (predicts fewer production errors).

Lagging Metrics

  • KPI: Batch rejection rate due to contamination (confirms quality failures).
  • KRI: Regulatory audit findings (reflects past non-compliance).
  • KBI: Employee turnover in quality assurance roles (indicates cultural or procedural issues).

| Metric Type | Purpose | Leading Example | Lagging Example |
| --- | --- | --- | --- |
| KPI | Measure performance outcomes | Training completion rate | Quarterly profit margin |
| KRI | Monitor risks | Open CAPAs | Regulatory violations |
| KBI | Track employee behaviors | Safety protocol adherence frequency | Employee turnover rate |

Building Effective Metrics

  1. Align with Strategy: Ensure metrics tie to Quality System goals. For OKRs, select KPIs/KRIs that directly map to key results.
  2. Balance Leading and Lagging: Use leading indicators to drive proactive adjustments and lagging indicators to validate outcomes.
  3. Pharmaceutical Focus: In quality systems, prioritize metrics like right-first-time rate (leading KPI) and repeat deviation rate (lagging KRI) to balance prevention and accountability.

By integrating KPIs, KRIs, and KBIs into OKRs, organizations create a feedback loop that connects daily actions to long-term success while mitigating risks. This approach transforms abstract goals into measurable, actionable pathways—a critical advantage in regulated industries like pharmaceuticals.

Understanding these distinctions empowers teams to not only track performance but also shape it proactively, ensuring alignment with both immediate priorities and strategic vision.

Metrics Plan for Facility, Utility, System and Equipment

As October rolls around I am focusing on three things: finalizing a budget; organization design and talent management; and a 2025 metrics plan. Expect those three things to be the focus of a lot of my blog posts in October.

Go and read my post on Metrics plans. As with many aspects of a quality management system, we don’t spend nearly enough time planning for metrics.

So over the next month I’m going to develop the strategy for a metrics plan to ensure the optimal performance, safety, and compliance of our biotech manufacturing facility, with a focus on:

  1. Facility and utility systems efficiency
  2. Equipment reliability and performance
  3. Effective commissioning, qualification, and validation processes
  4. Robust quality risk management
  5. Stringent contamination control measures

Following the recommended structure of a metrics plan, here is the plan:

Rationale and Desired Outcomes

Implementing this metrics plan will enable us to:

  • Improve overall facility performance and product quality
  • Reduce downtime and maintenance costs
  • Ensure regulatory compliance
  • Minimize contamination risks
  • Optimize resource allocation

Metrics Framework

Our metrics framework will be based on the following key areas:

  1. Facility and Utility Systems
  2. Equipment Performance
  3. Commissioning, Qualification, and Validation (CQV)
  4. Quality Risk Management (QRM)
  5. Contamination Control

Success Criteria

Success will be measured by:

  • Reduction in facility downtime
  • Improved equipment reliability
  • Faster CQV processes
  • Decreased number of quality incidents
  • Reduced contamination events

Implementation Plan

Steps, Timelines & Milestones

  1. Develop detailed metrics for each key area (Month 1)
  2. Implement data collection systems (Month 2)
  3. Train personnel on metrics collection and analysis (Month 3)
  4. Begin data collection and initial analysis (Month 4)
  5. Review and refine metrics (Month 9)
  6. Full implementation and ongoing analysis (Month 12 onwards)

This plan gets me ready to evaluate these metrics as part of governance in January of next year.

In October I will break down some metrics, explain them, provide the rationale, and demonstrate how to collect them. I’ll be striving to break these metrics into key performance indicators (KPIs), key behavior indicators (KBIs), and key risk indicators (KRIs).

The Failure Space of Clinical Trials – Protocol Deviations and Events

Let us turn our failure space model, and levels of problems, to deviations in a clinical trial. This is one of those areas that regulation and tribal practice have complicated, perhaps needlessly. It is also complicated by the different players: clinical sites, the sponsor, and, these days, usually a number of Contract Research Organizations (CROs).

What is a Protocol Deviation?

Protocol deviation is any change, divergence, or departure from the study design or procedures defined in the approved protocol.

Protocol deviations may include unplanned instances of protocol noncompliance. For example, situations in which the clinical investigator failed to perform tests or examinations as required by the protocol or failures on the part of subjects to complete scheduled visits as required by the protocol, would be considered protocol deviations.

In the case of deviations which are planned exceptions to the protocol such deviations should be reviewed and approved by the IRB, the sponsor, and by the FDA for medical devices, prior to implementation, unless the change is necessary to eliminate apparent immediate hazards to the human subjects (21 CFR 312.66), or to protect the life or physical well-being of the subject (21 CFR 812.150(a)(4)).

The FDA, July 2020. Compliance Program Guidance Manual for Clinical Investigator Inspections (7348.811).

In assessing protocol deviations/violations, the FDA instructs field staff to determine whether changes to the protocol were: (1) documented by an amendment, dated, and maintained with the protocol; (2) reported to the sponsor (when initiated by the clinical investigator); and (3) approved by the IRB and FDA (if applicable) before implementation (except when necessary to eliminate apparent immediate hazard(s) to human subjects).

| Regulation/Guidance | States |
| --- | --- |
| ICH E6(R2) Section 4.5.1 | “trial should be conducted in compliance with the protocol agreed to by the sponsor and, if required by the regulatory authorities…” |
| ICH E6(R2) Section 4.5.2 | The investigator should not implement any deviation from, or changes of, the protocol without agreement by the sponsor and prior review and documented approval/favorable opinion from the IRB/IEC of an amendment, except where necessary to eliminate an immediate hazard(s) to trial subjects, or when the change(s) involves only logistical or administrative aspects of the trial (e.g., change in monitor(s), change of telephone number(s)). |
| ICH E6(R2) Section 4.5.3 | The investigator, or person designated by the investigator, should document and explain any deviation from the approved protocol. |
| ICH E6(R2) Section 4.5.4 | The investigator may implement a deviation from, or a change in, the protocol to eliminate an immediate hazard(s) to trial subjects without prior IRB/IEC approval/favorable opinion. |
| ICH E3, Section 9.6 | The sponsor should describe the quality management approach implemented in the trial and summarize important deviations from the predefined quality tolerance limits and remedial actions taken in the clinical study report. |
| 21 CFR 312.53(vi)(a) | Investigators selected “Will conduct the study(ies) in accordance with the relevant, current protocol(s) and will only make changes in a protocol after notifying the sponsor, except when necessary to protect the safety, the rights, or welfare of subjects.” |
| 21 CFR 56.108(a) | IRB shall “…ensur[e] that changes in approved research…may not be initiated without IRB review and approval except where necessary to eliminate apparent immediate hazards to the human subjects.” |
| 21 CFR 56.108(b) | “IRB shall…follow written procedures for ensuring prompt reporting to the IRB, appropriate institutional officials, and the Food and Drug Administration of…any unanticipated problems involving risks to human subjects or others…[or] any instance of serious or continuing noncompliance with these regulations or the requirements or determinations of the IRB.” |
| 45 CFR 46.103(b)(5) | Assurances applicable to federally supported or conducted research shall at a minimum include…written procedures for ensuring prompt reporting to the IRB…[of] any unanticipated problems involving risks to subjects or others or any serious or continuing noncompliance with this policy or the requirements or determinations of the IRB. |
| FDA Form 1572 (Section 9) | Lists the commitments the investigator undertakes in signing the 1572, wherein the clinical investigator agrees “to conduct the study(ies) in accordance with the relevant, current protocol(s) and will only make changes in a protocol after notifying the sponsor, except when necessary to protect the safety, the rights, or welfare of subjects… [and] not to make any changes in the research without IRB approval, except where necessary to eliminate apparent immediate hazards to the human subjects.” |

A few key regulations and guidances (not meant to be a comprehensive list).

How Protocol Deviations are Implemented

Many companies tend to have a failure scale built into their process, differentiating between protocol deviations and violations based on severity. Others use a minor, major, and even critical scale to denote differences in severity. The axis here for severity is the degree to which the event affects the subject’s rights, safety, or welfare, and/or the integrity of the resultant data (i.e., the sponsor’s ability to use the data in support of the drug).

Other companies divide into protocol deviations and violations:

  • Protocol Deviation: A protocol deviation occurs when, without significant consequences, the activities on a study diverge from the IRB-approved protocol, e.g., missing a visit window because the subject is traveling. Not as serious as a protocol violation.
  • Protocol Violation: A divergence from the protocol that materially (a) reduces the quality or completeness of the data, (b) makes the ICF inaccurate, or (c) impacts a subject’s safety, rights or welfare. Examples of protocol violations may include: inadequate or delinquent informed consent; inclusion/exclusion criteria not met; unreported SAEs; improper breaking of the blind; use of prohibited medication; incorrect or missing tests; mishandled samples; multiple visits missed or outside permissible windows; materially inadequate record-keeping; intentional deviation from protocol, GCP or regulations by study personnel; and subject repeated noncompliance with study requirements.
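The deviation/violation split above can be sketched as rule-based triage. The trigger categories below are illustrative only; real classification requires quality and medical judgment against the specific protocol:

```python
# Illustrative trigger categories; a real list comes from your SOP and protocol.
VIOLATION_TRIGGERS = {
    "inclusion/exclusion not met",
    "unreported SAE",
    "blind broken improperly",
    "prohibited medication",
    "consent inadequate",
}

def classify_event(category, affects_safety_rights_welfare, affects_data_integrity):
    """Classify per the deviation/violation split above: material impact on
    the subject or on the data makes it a violation; otherwise a deviation."""
    if (category in VIOLATION_TRIGGERS
            or affects_safety_rights_welfare
            or affects_data_integrity):
        return "protocol violation"
    return "protocol deviation"

print(classify_event("visit outside window", False, False))
# → protocol deviation
```

Even a sketch like this makes the key point: the classification criteria should be written down and applied mechanically where possible, so two reviewers reach the same answer.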

This is probably a place where nomenclature can get in the way rather than provide benefit. The EMA says much the same in “ICH guideline E3 – questions and answers (R1)”.

Principles of Events in Clinical Practice

  1. Severity of the event is based on the degree to which it affects the subject’s rights, safety, or welfare, and/or the integrity of the resultant data.
  2. Events (problems, deviations, etc.) will happen at all levels of a clinical practice (Sponsor, CRO, Site, etc.).
  3. Events happen beyond the Protocol. These need to be managed appropriately as well.
  4. The event needs to be categorized, evaluated, and trended by the sponsor.

Severity of the Event

Starting in the study planning stage, ICH E6(R2) GCP requires sponsors to identify risks to critical study processes and study data and to evaluate these risks based on likelihood, detectability and impact on subject safety and data integrity.

Sponsors then establish key quality indicators (KQIs) and quality tolerance thresholds. A KQI is really just a key risk indicator and should be treated similarly.

Study events that exceed the risk threshold should trigger an evaluation to determine if action is needed. In this way, sponsors can proactively manage risk and address protocol noncompliance.
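One common way to operationalize likelihood/detectability/impact scoring is an FMEA-style multiplicative score checked against a tolerance threshold. ICH E6(R2) names the three factors but does not prescribe this arithmetic, so treat the 1-5 scales, the example risks, and the tolerance value below as illustrative assumptions:

```python
def risk_score(likelihood, impact, detectability):
    """Multiplicative risk score on 1-5 scales; by FMEA convention a higher
    detectability score means the failure is HARDER to detect."""
    for factor in (likelihood, impact, detectability):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be scored 1-5")
    return likelihood * impact * detectability

# Hypothetical study risks: (likelihood, impact, detectability)
risks = {
    "missed PK sampling window": (3, 4, 2),
    "informed-consent version lapse": (2, 5, 4),
}
TOLERANCE = 30  # illustrative quality tolerance threshold

for name, factors in risks.items():
    score = risk_score(*factors)
    flag = "EVALUATE" if score > TOLERANCE else "monitor"
    print(f"{name}: {score} ({flag})")
```

Scores above the tolerance trigger the evaluation described above; scores below it are simply monitored at the next review.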

The best practice here is to have a living risk assessment for each study. Evaluate across studies to understand your overall organizational risk, and look for opportunities for wide-scale mitigations. Feed these up into your risk register.

Event Classification for Clinical Protocols and GCPs

Where the Event happens

Deviations in the clinical space are a great example of the management of supplier events, and at the end of the day there is little difference between supplier event management in GMP, GLP, or GCP. The individual requirements might differ, but the principles and the process are the same.

Each entity in the trial organization should have its own deviation system in which it investigates deviations, performs root cause investigation, and enacts CAPAs.

This is where it starts to get tricky. First, not all sites have the infrastructure to do this well. Second, the nature of reporting, usually through the Electronic Data Capture (EDC) system, can lead to balkanization at the site. Sites need strong compliance programs that compile deviation details into a single sitewide system, allowing the site to trend deviations across studies in addition to following sponsor reporting requirements.

Unfortunately, too many sites rely on the sponsor’s program. Sponsors need to evaluate the strength of this program during site selection and through auditing.

Events Happen

Consistent Event Reporting is Critical

Deviations should be raised against all processes, procedures, and plans, not just the protocol.

Categorizing deviations is usually a pain point and an area where more consistency needs to be driven. I recommend first having a good standard set of categorizations. The industry would benefit from adopting a standard, and I think Norman Goldfarb’s proposal is still the best.

Once you have categories, and understand how they relate to your KQIs and other aspects, you need to make sure they are applied consistently. The key mechanisms for this are:

  1. Training
  2. Monitoring (in all its funny permutations)
  3. Periodic evaluations and Trending

Deviations should be trended, at a minimum, in several ways:

  1. Per site per study
  2. Per site all activities
  3. All sites per study
  4. All sites all activities
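The four trending cuts above amount to counting the same deviation log at different levels of aggregation. A minimal sketch, with an invented log of (site, study, category) records:

```python
from collections import Counter

# Hypothetical deviation log: (site, study, category)
log = [
    ("Site-01", "STUDY-A", "missed visit"),
    ("Site-01", "STUDY-A", "missed visit"),
    ("Site-01", "STUDY-B", "sample mishandled"),
    ("Site-02", "STUDY-A", "missed visit"),
]

per_site_per_study = Counter((site, study) for site, study, _ in log)
per_site_all = Counter(site for site, _, _ in log)
all_sites_per_study = Counter(study for _, study, _ in log)
total = len(log)  # all sites, all activities

print(per_site_per_study[("Site-01", "STUDY-A")])  # → 2
print(per_site_all["Site-01"])                     # → 3
print(all_sites_per_study["STUDY-A"])              # → 3
print(total)                                       # → 4
```

The same log answers all four questions; the analysis that follows the counting is what turns a trend into a CAPA.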

And remember, trending doesn’t count if you do not analyze the problems and take appropriate CAPAs.

This will allow trends to be identified and appropriate corrective and preventive actions taken to systematically improve.