Measuring Training Effectiveness for Organizational Performance

When designing training, we want to make sure four things happen:

  • Training is used correctly as a solution to a performance problem
  • Training has the right content, objectives, and methods
  • Trainees are sent to training with the basic skills, prerequisite knowledge, and confidence needed to learn
  • Training delivers the expected learning

Training is a useful lever in organizational change and improvement. We want to make sure the training drives organizational metrics. And as with everything else, you need to be able to measure it in order to improve it.

The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training). While other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, the model has been in use for over 50 years, evolving through application by learning and development professionals around the world, and it is the most recognized method of evaluating the effectiveness of training programs. It has stood the test of time and became popular because it breaks a complex subject into manageable levels, and it accommodates any style of training, both informal and formal.

Level 1: Reaction

Kirkpatrick’s first level measures the learners’ reaction to the training. A Level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed the time spent and found it valuable. Level 1 evaluations, euphemistically called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation will concentrate on three elements: course content, the physical environment, and the instructor’s presentation and skills.
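As a minimal sketch, here is one way to summarize smile-sheet results by those three elements rather than as a single satisfaction score (the 1-to-5 rating scale and field names are my assumptions, not a prescribed format):

```python
from statistics import mean

# Hypothetical smile-sheet responses: each rating is 1 (poor) to 5 (excellent).
# The three elements mirror the ones named above.
responses = [
    {"content": 4, "environment": 3, "instructor": 5},
    {"content": 5, "environment": 4, "instructor": 4},
    {"content": 3, "environment": 4, "instructor": 4},
]

# Average each element separately so a great instructor cannot mask weak content.
for element in ("content", "environment", "instructor"):
    scores = [r[element] for r in responses]
    print(f"{element}: mean={mean(scores):.2f}, n={len(scores)}")
```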

Level 2: Learning

Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is with a pre- and posttest. Identical pre- and posttests are essential: the difference between the pre- and posttest scores indicates the amount of learning that took place. Without a pretest, one does not know whether the trainees knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.
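A minimal sketch of that pre/posttest comparison, assuming identical tests scored out of 100; the normalized gain (improvement as a fraction of the available headroom) is one common way to compare trainees who start from different baselines:

```python
def learning_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalized gain: how much of the available headroom was learned.

    pre and post are scores on the *identical* pre- and posttest.
    Returns 0.0 when there was no room to improve.
    """
    headroom = max_score - pre
    if headroom <= 0:
        return 0.0
    return (post - pre) / headroom

# Hypothetical trainees: (pretest, posttest)
scores = [(40, 85), (70, 90), (90, 92)]
for pre, post in scores:
    print(f"pre={pre}, post={post}, normalized gain={learning_gain(pre, post):.2f}")
```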

Level 3: Behavior

Level 3 measures whether the learning is transferred into practice in the workplace.

Level 4: Results

Level 4 measures the effect of the training on the business. Did we meet our objectives?

4 Levels of Training Effectiveness

Level 1: Reaction
Characteristics: Reaction evaluation is how the delegates felt, and their personal reactions to the training or learning experience, for example:
▪ Did the trainees consider the training relevant?
▪ Did they like the venue, equipment, timing, domestics, etc.?
▪ Did the trainees like and enjoy the training?
▪ Was it a good use of their time?
▪ Level of participation
▪ Ease and comfort of experience
Examples:
▪ Feedback forms based on subjective personal reaction to the training experience
▪ Verbal reaction which can be analyzed
▪ Post-training surveys or questionnaires
▪ Online evaluation or grading by delegates
▪ Subsequent verbal or written reports given by delegates to managers back at their jobs
▪ Typically “happy sheets”

Level 2: Learning
Characteristics: Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience:
▪ Did the trainees learn what was intended to be taught?
▪ Did the trainees experience what was intended for them to experience?
▪ What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?
Examples:
▪ Interview or observation can be used before and after, although this is time-consuming and can be inconsistent
▪ Typically assessments or tests before and after the training
▪ Methods of assessment need to be closely related to the aims of the learning
▪ Reliable, clear scoring and measurements need to be established
▪ Hard-copy, electronic, online, or interview-style assessments are all possible

Level 3: Behavior
Characteristics: Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior, measured either immediately or several months after the training, depending on the situation:
▪ Did the trainees put their learning into effect when back on the job?
▪ Were the relevant skills and knowledge used?
▪ Was there noticeable and measurable change in the activity and performance of the trainees when back in their roles?
▪ Would the trainees be able to transfer their learning to another person? Are they aware of their change in behavior, knowledge, and skill level?
▪ Was the change in behavior and new level of knowledge sustained?
Examples:
▪ Observation and interview over time are required to assess change, the relevance of change, and the sustainability of change
▪ Assessments need to be designed to reduce the subjective judgment of the observer
▪ 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgment as to change after training, and this can be analyzed for groups of respondents and trainees
▪ Online and electronic assessments are more difficult to incorporate; assessments tend to be more successful when integrated within existing management and coaching protocols

Level 4: Results
Characteristics: Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee – it is the acid test.
▪ Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, and retention
▪ The challenge is to identify which measures relate to the trainee’s input and influence, and how. It is therefore important to identify and agree on accountability and relevance with the trainee at the start of the training, so they understand what is to be measured
Examples:
▪ This process overlays normal good management practice – it simply needs linking to the training input
▪ For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training

Example in Practice – CAPA

When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA program, we have key aims which we can apply measures against.

Behavior – Measure
▪ Investigate to find root cause – % recurring issues
▪ Implement actions to eliminate root cause – Preventive-to-corrective action ratio

To support each of these top-level measures we define a set of behavior indicators, such as cycle time, right first time, etc. To support these, a review rubric is implemented.
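As a minimal sketch, the two top-level measures could be computed from CAPA records like this (the record fields and flags are hypothetical, not from any particular eQMS):

```python
from dataclasses import dataclass

@dataclass
class CapaRecord:
    """Hypothetical CAPA record; field names are illustrative only."""
    capa_id: str
    is_recurrence: bool   # has this root cause been seen before?
    action_type: str      # "preventive" or "corrective"

def percent_recurring(records: list[CapaRecord]) -> float:
    """% recurring issues: share of CAPAs whose root cause already occurred."""
    if not records:
        return 0.0
    return 100.0 * sum(r.is_recurrence for r in records) / len(records)

def preventive_to_corrective(records: list[CapaRecord]) -> float:
    """Preventive-to-corrective action ratio; higher means more proactive."""
    preventive = sum(1 for r in records if r.action_type == "preventive")
    corrective = sum(1 for r in records if r.action_type == "corrective")
    return preventive / corrective if corrective else float("inf")

records = [
    CapaRecord("CAPA-001", is_recurrence=False, action_type="corrective"),
    CapaRecord("CAPA-002", is_recurrence=True, action_type="corrective"),
    CapaRecord("CAPA-003", is_recurrence=False, action_type="preventive"),
]
print(f"% recurring issues: {percent_recurring(records):.1f}")        # 33.3
print(f"Preventive:corrective: {preventive_to_corrective(records):.2f}")  # 0.50
```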

Our four levels for measuring training effectiveness now look like this:

Level – Measure
▪ Level 1: Reaction – Personal action plan and a happy sheet
▪ Level 2: Learning – Completion of the rubric on a sample event
▪ Level 3: Behavior – Continued performance and improvement against the rubric and the key review behavior indicators
▪ Level 4: Results – Improvements in the % of recurring issues and an increase in preventive-to-corrective actions

This is all about measuring the effectiveness of the transfer of behaviors.

Strong Signals of Transfer Expectations in the Organization vs. Signals that Weaken Transfer Expectations in the Organization

Strong signal: Training participants are required to attend follow-up sessions and other transfer interventions.
What it indicates: Individuals and teams are committed to the change and to obtaining the intended benefits.
Weakening signal: Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization.
What it indicates: The key factor for a trainee is attendance, not behavior change.

Strong signal: The training description specifies transfer goals (e.g., “Trainee increases CAPA success by driving down recurrence of root cause”).
What it indicates: The organization has a clear vision and expectation of what the training should accomplish.
Weakening signal: The training description only roughly outlines training goals (e.g., “Trainee improves their root cause analysis skills”).
What it indicates: The organization has only a vague idea of what the training should accomplish.

Strong signal: Supervisors take time to support transfer (e.g., through pre- and post-training meetings). Transfer support is part of regular agendas.
What it indicates: Transfer is considered important in the organization and supported by supervisors and managers, all the way to the top.
Weakening signal: Supervisors do not invest in transfer support. Transfer support is not part of the supervisor role.
What it indicates: Transfer is not considered very important in the organization. Managers have more important things to do.

Strong signal: Each training ends with careful planning of individual transfer intentions.
What it indicates: Defining transfer intentions is a central component of the training.
Weakening signal: Transfer planning at the end of the training does not take place, or takes place only sporadically.
What it indicates: Defining transfer intentions is not an essential part of the training, or not part of it at all.

Good training, and thus good and consistent transfer, builds that into the process. It is why I am such a fan of using a rubric to drive consistent performance.

Process Owners

Process owners are a fundamental and visible part of building a process-oriented organization and are crucial to striving for an effective organization. As the champion of a process, they take overall responsibility for process performance and coordinate all the interfaces in cross-functional processes.

Being a process owner should be a critical part of a person’s job, so they can shepherd the evolution of processes, keep the organization always moving forward, and prevent reversion to less effective processes.

The Process Owner’s Role

The process owner plays a fundamental role in managing the interfaces between key processes, with the objective of preventing horizontal silos. They have overall responsibility for the performance of the end-to-end process, utilizing metrics to track, measure, and monitor its status and to drive continuous improvement initiatives. Process owners ensure that staff are adequately trained and allocated to processes. As this may result in conflicts between process owners, teams, and functional management, it is critical that process owners exist within a wider community of practice with appropriate governance and senior leadership support.

Process owners are accountable for designing processes; day-to-day management of processes; and fostering process related learning.

Process owners must ensure that process staff are trained to have both organizational knowledge and process knowledge. To assist in staff training, processes, standards and procedures should be documented, maintained, and reviewed regularly.

Process owners should be supported by the right infrastructure. You cannot be an SME on an end-to-end process, provide governance, and drive improvement while also being expected to be a world-class technical writer, training developer, and technology implementer. The process owner leads and sets the direction for those activities.

The process owner sits in a central role as we build culture and drive for maturity.

The difference between complex and complicated

We often think that complicated and complex are on a continuum, that complex is just a magnitude above complicated, or that they are synonyms. They are actually different, and one cannot address complex systems in the same way as complicated ones. Many improvement efforts fail by not seeing the difference; they throw resources at projects that are bound for failure because they are looking at the system the wrong way.

Complicated problems originate from causes that can be individually distinguished; they can be addressed piece by piece; for each input to the system there is a proportionate output; the relevant systems can be controlled; and the problems they present admit permanent solutions.

Complex problems result from networks of multiple interacting causes that cannot be individually distinguished and must be addressed as entire systems. In complex systems the same starting conditions can produce different outcomes, depending on the interactions of the elements in the system. They cannot be addressed in a piecemeal way; small inputs may result in disproportionate effects; the problems they present cannot be solved once and forever but must be systematically managed, and typically any intervention spawns new problems as a result of dealing with the old ones; and the relevant systems cannot be controlled – the best one can do is influence them, or learn to “dance with them,” as Donella Meadows said.

Let’s break down some of the ways these look and act differently by examining key terminology.

Causality – the relationship between the thing that happens and the thing that causes it

Complicated: Linear cause-and-effect pathways allow us to identify individual causes for observed effects.
Complex: Because we are dealing with patterns arising from networks of multiple interacting (and interconnected) causes, there are no clearly distinguishable cause-and-effect pathways.

This challenges the usefulness of root cause analysis. Most common root cause analysis methodologies are based on cause-and-effect.

Linearity – the relationship between elements of a process and the output

Complicated: Every input has a proportionate output.
Complex: Outputs are not proportional or linearly related to inputs; small changes in one part of the system can cause sudden and unexpected outputs in other parts of the system, or even system-wide reorganization.

Think about how many major changes, breakthroughs, and transformations fail.
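To make the nonlinearity contrast concrete, here is a minimal sketch using the logistic map, a standard toy model of complex behavior (my example, not from the discussion above): two starting points that differ by a fraction of a percent end up in completely different places, while a linear process keeps the difference proportionate.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map; r = 4.0 is the fully chaotic regime."""
    return r * x * (1 - x)

def run(x: float, steps: int = 30) -> float:
    """Iterate the map from starting condition x."""
    for _ in range(steps):
        x = logistic(x)
    return x

# Complex: two starting points differing by 0.1% diverge completely.
a, b = 0.4000, 0.4004
print(run(a), run(b))  # wildly different after 30 steps

# Complicated: a linear process (2% growth per step) keeps the
# difference proportionate to the inputs.
ya, yb = a, b
for _ in range(30):
    ya, yb = 1.02 * ya, 1.02 * yb
print(ya, yb)  # still differ by about 0.1%
```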

Reducibility – breaking down the problem

Complicated: We can decompose the system into its structural parts and fully understand the functional relationships between these parts in a piecemeal way.
Complex: The structural parts of the system are multi-functional; the same function can be performed by different structural parts. These parts are also richly inter-related, i.e., they change one another in unexpected ways as they interact. We can therefore never fully understand these inter-relationships.

This is the challenge for our problem solving methodologies, which mostly assume that a problem can be broken down into its constituent parts. Complex problems present as emergent patterns resulting from dynamic interactions between multiple non-linearly connected parts.  In these systems, we’re rarely able to distinguish the real problem, and even small and well-intentioned interventions may result in disproportionate and unintended consequences.

Constraint – how a system’s interactions with its environment are limited

Complicated: One structure, one function, because the system’s environment is delimited, i.e., governing constraints are in place that allow the system to interact only with selected or approved types of systems. Functions can be delimited either by closing the system (no interaction) or by closing its environment (limited or constrained interactions). Complicated systems can as a result be fully known, and they are mappable.

Complex: Complex systems are open systems, to the extent that it is often difficult to determine where one system ends and another starts. Complex systems are also nested: they are part of larger-scale complex systems, e.g., an organization within an industry within an economy. It is therefore impossible to separate the system from its context.

This makes modeling an issue of replicating the system; it cannot be reduced. We cannot transform complex systems into complicated ones by spending more time and resources on collecting more data or developing better maps.

Some ideas for moving forward

Once you understand that you are dealing with a complex system instead of a complicated process, you can start looking for ways to deal with it. These are areas where we as quality professionals need to increase our capabilities:

  • Methodologies and best practices to decouple parts of a larger system so they are not so interdependent and build in redundancy to reduce the chance of large-scale failures.
  • Use storytelling and counterfactuals. Stories can give great insight because the storyteller’s reflections are not limited by available data.
  • Ensure our decision making captures different analytical perspectives.
  • Understand our levers for influencing the system.

Falsification and error

At its heart, data integrity is largely about culture. There are technical requirements, but mostly we are returning to the same principles as quality culture, and we keep coming back to Deming. A great example of this is the use of the fraud triangle and human error.

The fraud triangle was developed by Donald Cressey in the 1950s when investigating financial fraud and embezzlement. The principles Cressey identified are directly relevant to data integrity, and to quality culture as a whole.

Falsification Triangle

Element: Incentive or Pressure
Exists when: There is a reason to commit falsification of data. Managerial pressure and financial gain are the two main drivers that push people to commit fraud. Setting unrealistic objectives, such as stretch goals, turnaround times, or key performance indicators that are totally divorced from reality, especially when these are linked to pay or advancement, will only encourage staff to falsify data to receive rewards. Such goals, coupled with poor analytical instruments and methods, will only ensure that corners are cut to meet deadlines or targets.
To break: Management must lead by example – not merely through communication or establishing data governance structures, but by ensuring the pressure to falsify data is removed. This means setting realistic expectations that are compatible with the organization’s capacity and process capability.

Element: Rationalization
Exists when: People can rationalize that falsification is an acceptable practice within an organization or department.
To break: Staff need to understand how their actions can impact the health of the patient. Ensure individuals know the importance of reliable and accurate data to the wellbeing of the patient as well as to the business health of the company.

Element: Opportunity
Exists when: The opportunity to falsify data arises through encouragement by management as a means of keeping costs down, or through a combination of lax controls and poor oversight of activities, which allows staff to commit fraud.
To break: Implement a process that is technically controlled so there is little, if any, opportunity to commit falsification of data.

Mistakes are human nature – we all have fat-finger moments. This is why we build our processes and technologies to capture these errors and self-correct them. These errors should be tracked and trended, but only as a way to drive continuous improvement. It is important that your quality systems have the capability to evaluate mistakes up to and including fraud.

It helps to be able to classify issues and determine whether changes to governance, management systems, and behaviors are necessary.

Events should be classified based on how intentional they are.
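One way to express such a classification, a minimal sketch drawing on just-culture style scales (the category names and responses are my assumptions, not a prescribed standard):

```python
from enum import Enum

class DataEventIntent(Enum):
    """Rough scale of intent, from innocent slip to deliberate fraud."""
    HUMAN_ERROR = 1        # unintentional slip or lapse (the "fat finger")
    AT_RISK_BEHAVIOR = 2   # shortcut taken without seeing the risk
    RECKLESS_BEHAVIOR = 3  # conscious disregard of a known control
    FALSIFICATION = 4      # deliberate creation or alteration of records

def response_for(event: DataEventIntent) -> str:
    """The governance response scales with intent, not just with impact."""
    return {
        DataEventIntent.HUMAN_ERROR: "fix process/technical controls; track and trend",
        DataEventIntent.AT_RISK_BEHAVIOR: "coach; remove the incentive to cut the corner",
        DataEventIntent.RECKLESS_BEHAVIOR: "escalate; reinforce controls and accountability",
        DataEventIntent.FALSIFICATION: "investigate as fraud; assess product and patient impact",
    }[event]

print(response_for(DataEventIntent.AT_RISK_BEHAVIOR))
```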

Human error should be built into investigative systems. Yes, whenever possible we look for technical controls, but the human element exists and needs to be fully taken into consideration.

The best way to ensure data integrity is the best way to build a quality culture.

System Model