Measuring Training Effectiveness for Organizational Performance

When designing training, we want to make sure four things happen:

  • Training is used correctly as a solution to a performance problem
  • Training has the right content, objectives, and methods
  • Trainees are sent to training for which they have the basic skills, prerequisite skills, or confidence needed to learn
  • Training delivers the expected learning

Training is a useful lever in organizational change and improvement. We want to make sure the training drives organizational metrics. And, like everything else, you need to be able to measure it in order to improve it.

The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training), and while other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, this model has been in use for over 50 years, evolving over multiple decades through application by learning and development professionals around the world. It is the most recognized method of evaluating the effectiveness of training programs. The model has stood the test of time and became popular due to its ability to break a complex subject down into manageable levels. It takes into account any style of training, both informal and formal.

Level 1: Reaction

Kirkpatrick’s first level measures the learners’ reaction to the training. A Level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed the time spent and found it valuable. Level 1 evaluations, euphemistically called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation will concentrate on three elements: course content, the physical environment, and the instructor’s presentation/skills.

Level 2: Learning

Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is through the use of a pre- and posttest. Identical pre- and posttests are essential because the difference between the pre- and posttest scores indicates the amount of learning that took place. Without a pretest, one does not know whether the trainees knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.
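
As a minimal sketch of scoring such an evaluation, assuming a 100-point test given identically before and after (the trainees and scores below are hypothetical), the normalized gain expresses how much of each trainee’s available headroom was actually learned:

```python
# Minimal sketch of a Level 2 (learning) evaluation using identical
# pre- and posttests on a 100-point scale. All scores are hypothetical.

def learning_gain(pre: float, post: float) -> float:
    """Normalized gain: fraction of the available headroom that was learned."""
    if pre >= 100.0:
        return 0.0  # nothing left to learn on this instrument
    return (post - pre) / (100.0 - pre)

trainees = {"A": (55.0, 90.0), "B": (80.0, 85.0), "C": (60.0, 58.0)}

for name, (pre, post) in trainees.items():
    gain = learning_gain(pre, post)
    print(f"Trainee {name}: pre={pre:.0f} post={post:.0f} "
          f"normalized gain={gain:+.0%}")
```

A negative gain (trainee C above) is a flag worth investigating: either the session did not work for that person, or the test itself is unreliable.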

Level 3: Behavior

Level 3 measures whether the learning is transferred into practice in the workplace.

Level 4: Results

Level 4 measures the effect of the training on the business. Did we meet our objectives?

Level 1: Reaction

Characteristics: Reaction evaluation is how the delegates felt, and their personal reactions to the training or learning experience:
▪ Did the trainees consider the training relevant?
▪ Did they like the venue, equipment, timing, domestics, etc.?
▪ Did the trainees like and enjoy the training?
▪ Was it a good use of their time?
▪ Level of participation
▪ Ease and comfort of the experience

Examples:
▪ Feedback forms based on subjective personal reaction to the training experience
▪ Verbal reaction which can be analyzed
▪ Post-training surveys or questionnaires
▪ Online evaluation or grading by delegates
▪ Subsequent verbal or written reports given by delegates to managers back at their jobs
▪ Typically “happy sheets”

Level 2: Learning

Characteristics: Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience:
▪ Did the trainees learn what was intended to be taught?
▪ Did the trainees experience what was intended for them to experience?
▪ What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?

Examples:
▪ Typically assessments or tests before and after the training
▪ Interview or observation can be used before and after, although this is time-consuming and can be inconsistent
▪ Methods of assessment need to be closely related to the aims of the learning
▪ Reliable, clear scoring and measurements need to be established
▪ Hard-copy, electronic, online, or interview-style assessments are all possible

Level 3: Behavior

Characteristics: Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior, measured immediately and again several months after the training, depending on the situation:
▪ Did the trainees put their learning into effect when back on the job?
▪ Were the relevant skills and knowledge used?
▪ Was there noticeable and measurable change in the activity and performance of the trainees when back in their roles?
▪ Would the trainees be able to transfer their learning to another person? Are the trainees aware of their change in behavior, knowledge, and skill level?
▪ Was the change in behavior and new level of knowledge sustained?

Examples:
▪ Observation and interview over time are required to assess change, the relevance of change, and the sustainability of change
▪ Assessments need to be designed to reduce the subjective judgment of the observer
▪ 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgment as to change after training, and this can be analyzed for groups of respondents and trainees
▪ Online and electronic assessments are more difficult to incorporate; assessments tend to be more successful when integrated within existing management and coaching protocols

Level 4: Results

Characteristics: Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee; it is the acid test. The challenge is to identify which measures relate to the trainee’s input and influence, and how. It is therefore important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured.

Examples:
▪ Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, retention, etc.
▪ This process overlays normal good management practice; it simply needs linking to the training input
▪ For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training
4 Levels of Training Effectiveness

Example in Practice – CAPA

When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA program, we have a set of key aims, which we can apply measures against.

Behavior | Measure
--- | ---
Investigate to find the root cause | % recurring issues
Implement actions to eliminate the root cause | Preventive-to-corrective action ratio

To support each of these top-level measures we define a set of behavior indicators, such as cycle time, right-first-time, etc. To support these, a review rubric is implemented.
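
To make the measures concrete, here is a minimal sketch of how the two top-level CAPA measures could be computed; the record structure, field names, and sample data are hypothetical, not a real CAPA system API:

```python
# Minimal sketch of the two top-level CAPA measures from the table above.
# The record structure and sample data are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class CapaRecord:
    root_cause: str
    action_type: str  # "preventive" or "corrective"

capas = [
    CapaRecord("seal pressure drift", "corrective"),
    CapaRecord("seal pressure drift", "corrective"),  # a recurrence
    CapaRecord("label mix-up", "preventive"),
    CapaRecord("training gap", "preventive"),
]

# % recurring issues: share of records whose root cause has been seen before
seen: set[str] = set()
recurring = 0
for capa in capas:
    if capa.root_cause in seen:
        recurring += 1
    seen.add(capa.root_cause)
print(f"Recurring issues: {recurring / len(capas):.0%}")

# Preventive-to-corrective action ratio: higher is better over time
preventive = sum(c.action_type == "preventive" for c in capas)
corrective = sum(c.action_type == "corrective" for c in capas)
print(f"Preventive:corrective = {preventive}:{corrective}")
```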

Our four levels to measure training effectiveness will now look like this:

Level | Measure
--- | ---
Level 1: Reaction | Personal action plan and a happy sheet
Level 2: Learning | Completion of the rubric on a sample event
Level 3: Behavior | Continued performance and improvement against the rubric and the key review behavior indicators
Level 4: Results | Improvements in the % of recurring issues and an increase in preventive-to-corrective actions

This is all about measuring the effectiveness of the transfer of behaviors.

Strong Signals of Transfer Expectations in the Organization vs. Signals that Weaken Transfer Expectations in the Organization:

Strong signal: Training participants are required to attend follow-up sessions and other transfer interventions.
What it indicates: Individuals and teams are committed to the change and to obtaining the intended benefits.

Weakening signal: Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization.
What it indicates: The key factor for a trainee is attendance, not behavior change.

Strong signal: The training description specifies transfer goals (e.g., “Trainee increases CAPA success by driving down recurrence of root cause”).
What it indicates: The organization has a clear vision and expectation of what the training should accomplish.

Weakening signal: The training description only roughly outlines training goals (e.g., “Trainee improves their root cause analysis skills”).
What it indicates: The organization has only a vague idea of what the training should accomplish.

Strong signal: Supervisors take time to support transfer (e.g., through pre- and post-training meetings). Transfer support is part of regular agendas.
What it indicates: Transfer is considered important in the organization and is supported by supervisors and managers, all the way to the top.

Weakening signal: Supervisors do not invest in transfer support. Transfer support is not part of the supervisor role.
What it indicates: Transfer is not considered very important in the organization. Managers have more important things to do.

Strong signal: Each training ends with careful planning of individual transfer intentions.
What it indicates: Defining transfer intentions is a central component of the training.

Weakening signal: Transfer planning at the end of the training does not take place, or takes place only sporadically.
What it indicates: Defining transfer intentions is not part of the training, or not an essential part.

Good training, and thus good and consistent transfer, builds that into the process. It is why I am such a fan of using a rubric to drive consistent performance.

Site Training Needs

Institute training on the job.

Principle 6, W. Edwards Deming

(a) Each person engaged in the manufacture, processing, packing, or holding of a drug product shall have education, training, and experience, or any combination thereof, to enable that person to perform the assigned functions. Training shall be in the particular operations that the employee performs and in current good manufacturing practice (including the current good manufacturing practice regulations in this chapter and written procedures required by these regulations) as they relate to the employee’s functions. Training in current good manufacturing practice shall be conducted by qualified individuals on a continuing basis and with sufficient frequency to assure that employees remain familiar with CGMP requirements applicable to them.

(b) Each person responsible for supervising the manufacture, processing, packing, or holding of a drug product shall have the education, training, and experience, or any combination thereof, to perform assigned functions in such a manner as to provide assurance that the drug product has the safety, identity, strength, quality, and purity that it purports or is represented to possess.

(c) There shall be an adequate number of qualified personnel to perform and supervise the manufacture, processing, packing, or holding of each drug product.

US FDA 21 CFR 211.25

All parts of the Pharmaceutical Quality system should be adequately resourced with competent personnel, and suitable and sufficient premises, equipment and facilities.

EU EMA/INS/GMP/735037/2014, 2.1

The organization shall determine and provide the resources needed for the establishment, implementation, maintenance and continual improvement of the quality management system. The organization shall consider:

a) the capabilities of, and constraints on, existing internal resources;
b) what needs to be obtained from external providers.

ISO 9001:2015 requirement 7.1.1

It is critical to have enough people with the appropriate level of training to execute their tasks.

It is fairly easy to define the individual training plan, stemming from the job description and the process training requirements. In the aggregate we get the ability to track overdue training, and a forward look at what training is coming due. Quite frankly, these are lagging indicators: they show success at completing assigned training but give no insight into the central question: do we have enough qualified individuals to do the work?

To make this proactive, we start with the resource plan: what operations need to happen in a given time frame, and what resources are needed? We then compare that to the training requirements for those operations.

We can then evaluate current training status and retention levels and determine how many instructors we will need to ensure adequate training.

We perform a gap assessment to determine what new training needs exist.

We then take a forward look at what new improvements are planned and ensure appropriate training is forecasted.

Now we have a good picture of what an “adequate number” is. We can now set a leading KPI to ensure that training is truly proactive.
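
A minimal sketch of such a leading KPI follows, assuming we can list the planned operations with their required headcount and the people currently qualified for each; all operation names and numbers are hypothetical:

```python
# Minimal sketch of a leading "adequate number" KPI: compare the headcount
# each planned operation requires against the people currently qualified
# for it. Operation names, rosters, and numbers are hypothetical.

required = {"granulation": 4, "compression": 3, "packaging": 6}

qualified = {
    "granulation": ["ana", "ben", "chris"],
    "compression": ["dana", "eli", "fay"],
    "packaging": ["gus", "hana", "ian", "jo"],
}

for operation, needed in required.items():
    have = len(qualified.get(operation, []))
    gap = needed - have
    status = "OK" if gap <= 0 else f"GAP: train {gap} more"
    print(f"{operation:12s} need={needed} qualified={have} -> {status}")
```

Tracked ahead of the production schedule, the gap column is the leading indicator: it tells us to start training before the operation is short-staffed, rather than reporting completion rates after the fact.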

Experts think differently

Research on expertise has identified the following differences between expert performers and beginners:

  • Experts have larger and more integrative knowledge units, and their representations of information are more functional and abstract than those of novices, whose knowledge base is more fragmentary. For example, a beginning piano player reads sheet music note by note, whereas a concert pianist is able to see the whole row or even several rows of music notation at the same time.
  • When solving problems, experts may spend more time on the initial problem evaluation and planning than novices. This enables them to form a holistic and in-depth understanding of the task and usually to reach a solution more swiftly than beginners.
  • Basic functions related to tasks or the job are automated in experts, whereas beginners need to pay attention to these functions. For instance, in a driving school, a young driver focuses his or her attention on controlling devices and pedals, while an experienced driver performs basic strokes automatically. For this reason, an expert driver can observe and anticipate traffic situations better than a beginning driver.
  • Experts outperform novices in their metacognitive and reflective thinking. In other words, they make sharp observations of their own ways of thinking, acting, and working, especially in non-routine situations when automated activities are challenged. Beginners’ knowledge is mainly explicit and they are dependent on learned rules. In addition to explicit knowledge, experts have tacit or implicit knowledge that accumulates with experience. This kind of knowledge makes it possible to make fast decisions on the basis of what is often called intuition.
  • In situations where something has gone wrong, or when experts face totally new problems but are not required to make fast decisions, they critically reflect on their actions. Unlike beginners, experienced professionals focus their thinking not only on details but rather on the totality consisting of the details.
  • Experts’ thinking is more holistic than the thinking of novices. It seems that the quality of thinking is associated with the quality and amount of knowledge. With a fragmentary knowledge base, a novice in any field may remain on lower levels of thinking: things are seen as black and white, without any nuances. In contrast, more experienced colleagues with a more organized and holistic knowledge base can access more material for their thinking, and thus may begin to explore different perspectives on matters and develop more relativistic views concerning certain problems. At the highest levels of thinking, an individual is able to reconcile different perspectives, either by forming a synthesis or by integrating different approaches or views.
Level | Performance
--- | ---
Beginner | Follows simple directions
Novice | Performs using memory of facts and simple rules
Competent | Makes simple judgments for typical tasks; may need help with complex or unusual tasks; may lack speed and flexibility
Proficient | Performance guided by deeper experience; able to figure out the most critical aspects of a situation; sees nuances missed by less-skilled performers; flexible performance
Expert | Performance guided by extensive practice and easily retrievable knowledge and skills; notices nuances, connections, and patterns; intuitive understanding based on extensive practice; able to solve difficult problems, learn quickly, and find needed resources
Levels of Performance

Sources

  • Clark, R. 2003. Building Expertise: Cognitive Methods for Training and Performance Improvement, 2nd ed. Silver Spring, MD: International Society for Performance Improvement.
  • Ericsson, K.A., and R. Pool. 2016. Peak: Secrets from the New Science of Expertise. Boston: Houghton Mifflin Harcourt.
  • Kallio, E., ed. 2020. Development of Adult Thinking: Interdisciplinary Perspectives on Cognitive Development and Adult Learning. Taylor & Francis Group.


Know the Knows

When developing training programs and cultural initiatives, it is useful to break down what we really want people to know. I find it useful to think in terms of the following:

  • know-how: the technical skills to do the work
  • know-what: the ability to perform functional problem-solving, to adapt the process and innovate
  • know-who: networking and interpersonal skills, with social/emotional intelligence, for empathy and social network capacities
  • know-where: institutional and system knowledge of how the work fits into a larger ecosystem
  • know-who/how: strategic and leadership skills, for political ‘nous’ in setting agendas, managing institutions, and mobilizing resources
  • know-why: creation of meaning, significance, identity, and morality, with practical intuition for creative arts, sports, and everyday social exchange

To build all six elements requires a learning culture and a recognition that knowledge and awareness do not start and end at initial training on a process. We need to build the mechanisms to:

  • Communicate in a way that continually facilitates the assimilation of knowledge
  • Incorporate ongoing uses of tools such as coaching and mentoring in our processes and systems
  • Motivate the ongoing enhancement of learning
  • Nurture the development and retention of knowledge

We are striving to build competence: to grow and apply the knowledge and abilities of our workers to solve problems and innovate.

Training, Development, Knowledge Management, Problem-Solving: these form a continuum, but too often we balkanize responsibility for them in our organizations when what we need is an ecosystem approach.

Level of Training

I want to talk about levels of training. I am not going to go into an Instructional Design model/framework, but will stay focused on the purpose of training in the quality system. I am also going to try to discuss training in terms that will make sense to folks who mostly dwell in a verification/validation mindset. So, all my professional learning developer friends, please be gentle.

Categories of Training

There are three levels of training (with lots of subdivisions) that can be viewed as a risk-based approach:

Awareness Training

This can barely be considered training. Awareness training conveys the subject matter to an audience with the goal of making the audience aware of the content of the communication. It is either informational or actionable. At best, it is just a “tell” activity.

Read-and-understand fits in this bucket.

Facilitated Training

Facilitated training strives to improve workplace proficiency and is hopefully based on real adult learning principles. There are many delivery modalities, usually broken into the two big buckets of eLearning and classroom delivery. It always has an assessment component to ensure the training had the desired impact. Usually a “tell, show” model with limited “do”.

Employee Qualification

On-the-job, hands-on training that confirms the individual can do the work by independently performing the tasks while being monitored and assessed by the trainer. It usually follows a “tell, show, do, follow-up” model.

The Level of Training is Risk Based

The level of training should be driven by the criticality of the process/procedure/task. I recommend several questions to drive this (a scoring sketch follows the list):

  • What is the complexity of the knowledge or skills needed to execute the changed process?
  • How complicated/complex is the process/procedure/task?
  • What is the criticality of the process and the risk of a performance error? How difficult is it to detect errors?
  • What is the identified audience (e.g., location, size, department, single site vs. multiple sites)?
  • Is the goal to change workers’ conditioned behavior?
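
Here is a minimal sketch of how these questions could be turned into a repeatable selection rule; the scoring weights and cut-offs are hypothetical illustrations, not a validated risk model:

```python
# Minimal sketch of a risk-based training-level selector. The 1-3 factor
# ratings, weights, and cut-offs are hypothetical and would need to be
# calibrated against your own quality system.

def training_level(complexity: int, criticality: int,
                   detectability: int, behavior_change: bool) -> str:
    """Each factor is rated 1 (low risk) to 3 (high risk)."""
    score = complexity + criticality + detectability
    if behavior_change:
        score += 2  # conditioned behavior is hard to change by telling alone
    if score >= 8:
        return "Employee Qualification (tell, show, do, follow-up)"
    if score >= 5:
        return "Facilitated Training (tell, show)"
    return "Awareness Training (tell)"

print(training_level(complexity=3, criticality=3, detectability=2,
                     behavior_change=True))
print(training_level(complexity=1, criticality=1, detectability=1,
                     behavior_change=False))
```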

The Personnel Qualification Model

Qualification means fitness for some purpose, shown by meeting necessary conditions or qualifying criteria. This applies as much to our people as it does to our equipment, and we can break this down with the three phases of IQ/OQ/PQ:

  • Personnel IQ provides objective evidence that the trainee has the requisite education and experience for the process/procedure/task.
  • Personnel OQ proves that the trainee can function appropriately in the training situation (event) and that performance is within the control limits set by the process/procedure/task. It proves that the trainee can perform the task correctly and independently.
  • Personnel PQ demonstrates acceptable performance under representative operational conditions. The trainee’s performance consistently produces results that meet the standards set by the process/procedure/task.

Once the process of employee qualification is successfully completed, the employee is qualified and stays so unless and until they become disqualified or the process/procedure/task changes significantly enough to require requalification.

Disqualification and requalification

There should be a process for disqualification, whether from extended absences, job changes, or a detrimental trend in performance such as serious or repeated deviations.
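
As a minimal sketch, the whole lifecycle (IQ through PQ, disqualification, and requalification) can be modeled as a simple state machine; the states, events, and transitions below are hypothetical simplifications of what a real quality system procedure would define:

```python
# Minimal sketch of a personnel qualification lifecycle as a state machine:
# IQ -> OQ -> PQ -> qualified, with disqualification and requalification.
# States, event names, and transitions are hypothetical illustrations.

from enum import Enum, auto

class Status(Enum):
    UNQUALIFIED = auto()
    IQ_COMPLETE = auto()   # education/experience verified
    OQ_COMPLETE = auto()   # performed correctly in the training situation
    QUALIFIED = auto()     # PQ: consistent results under operational conditions
    DISQUALIFIED = auto()

TRANSITIONS = {
    (Status.UNQUALIFIED, "pass_iq"): Status.IQ_COMPLETE,
    (Status.IQ_COMPLETE, "pass_oq"): Status.OQ_COMPLETE,
    (Status.OQ_COMPLETE, "pass_pq"): Status.QUALIFIED,
    (Status.QUALIFIED, "extended_absence"): Status.DISQUALIFIED,
    (Status.QUALIFIED, "deviation_trend"): Status.DISQUALIFIED,
    # a change big enough to require requalification also removes the status
    (Status.QUALIFIED, "significant_process_change"): Status.DISQUALIFIED,
    (Status.DISQUALIFIED, "pass_requalification"): Status.QUALIFIED,
}

def apply(status: Status, event: str) -> Status:
    """Return the new status; unknown events leave the status unchanged."""
    return TRANSITIONS.get((status, event), status)

s = Status.UNQUALIFIED
for event in ["pass_iq", "pass_oq", "pass_pq",
              "deviation_trend", "pass_requalification"]:
    s = apply(s, event)
    print(f"{event:24s} -> {s.name}")
```

Making the transitions explicit like this is what keeps disqualification from being an informal judgment call: every path out of, and back into, qualified status is defined in advance.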