Measuring Training Effectiveness for Organizational Performance

When designing training, we want to make sure four things happen:

  • Training is used correctly as a solution to a performance problem
  • Training has the right content, objectives, and methods
  • Trainees are sent to training for which they have the basic skills, prerequisite skills, and confidence needed to learn
  • Training delivers the expected learning

Training is a useful lever for organizational change and improvement, so we want to make sure the training drives organizational metrics. And, as with everything, you need to be able to measure it in order to improve it.

The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training), and while other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, it has been in use for over 50 years, evolving through application by learning and development professionals around the world, and it is the most recognized method of evaluating the effectiveness of training programs. The model has stood the test of time and became popular because it breaks a complex subject into manageable levels, and it accommodates any style of training, both informal and formal.

Level 1: Reaction

Kirkpatrick’s first level measures the learners’ reaction to the training. A Level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed the time spent and found it valuable. Level 1 evaluations, euphemistically called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation will concentrate on three elements: the course content, the physical environment, and the instructor’s presentation and skills.

Level 2: Learning

Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is through a pre- and posttest. Identical pre- and posttests are essential because the difference between the pre- and posttest scores indicates the amount of learning that took place. Without a pretest, one does not know whether the trainees knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.
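
As a minimal sketch of how that pre/post difference can be turned into a number (the trainee names and scores below are illustrative placeholders, not data from any real course), the gain per trainee and for the cohort might be computed like this:

```python
# Minimal sketch: quantifying Level 2 learning from identical pre- and posttests.
# Trainee names and scores are illustrative placeholders, not real course data.

scores = {
    "trainee_a": {"pre": 55, "post": 85},
    "trainee_b": {"pre": 70, "post": 90},
    "trainee_c": {"pre": 40, "post": 75},
}

for trainee, s in scores.items():
    gain = s["post"] - s["pre"]  # raw learning gain on the identical test
    # Normalized gain: how much of the possible improvement was actually achieved
    normalized = (gain / (100 - s["pre"])) if s["pre"] < 100 else 0.0
    print(f"{trainee}: gain = {gain} points, normalized gain = {normalized:.0%}")

avg_gain = sum(s["post"] - s["pre"] for s in scores.values()) / len(scores)
print(f"average cohort gain: {avg_gain:.1f} points")
```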

Level 3: Behavior

Level 3 measures whether the learning is transferred into practice in the workplace.

Level 4: Results

Level 4 measures the effect of the training on the business or environment: did we meet our objectives?

4 Levels of Training Effectiveness

Level 1: Reaction
Reaction evaluation is how the delegates felt, and their personal reactions to the training or learning experience.
Characteristics:
▪ Did the trainees consider the training relevant?
▪ Did they like the venue, equipment, timing, domestics, etc.?
▪ Did the trainees like and enjoy the training?
▪ Was it a good use of their time?
▪ Level of participation
▪ Ease and comfort of experience
Examples:
▪ Feedback forms based on subjective personal reaction to the training experience
▪ Verbal reactions, which can be analyzed
▪ Post-training surveys or questionnaires
▪ Online evaluation or grading by delegates
▪ Subsequent verbal or written reports given by delegates to managers back at their jobs
▪ Typically “happy sheets”

Level 2: Learning
Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience.
Characteristics:
▪ Did the trainees learn what was intended to be taught?
▪ Did the trainees experience what was intended for them to experience?
▪ What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?
Examples:
▪ Interview or observation can be used before and after, although this is time-consuming and can be inconsistent
▪ Typically assessments or tests before and after the training
▪ Methods of assessment need to be closely related to the aims of the learning
▪ Reliable, clear scoring and measurements need to be established
▪ Hard-copy, electronic, online, or interview-style assessments are all possible

Level 3: Behavior
Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior; this can be measured immediately and several months after the training, depending on the situation.
Characteristics:
▪ Did the trainees put their learning into effect when back on the job?
▪ Were the relevant skills and knowledge used?
▪ Was there noticeable and measurable change in the activity and performance of the trainees when back in their roles?
▪ Would the trainees be able to transfer their learning to another person? Are the trainees aware of their change in behavior, knowledge, or skill level?
▪ Was the change in behavior and new level of knowledge sustained?
Examples:
▪ Observation and interview over time are required to assess change, relevance of change, and sustainability of change
▪ Assessments need to be designed to reduce the subjective judgment of the observer
▪ 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgment about change after training, and this can be analyzed for groups of respondents and trainees
▪ Online and electronic assessments are more difficult to incorporate; assessments tend to be more successful when integrated within existing management and coaching protocols

Level 4: Results
Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee. It is the acid test.
Characteristics:
▪ Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, retention, etc.
Examples:
▪ The challenge is to identify which measures relate to the trainee’s input and influence, and how. It is therefore important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured
▪ This process overlays normal good management practice; it simply needs linking to the training input
▪ For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training

Example in Practice – CAPA

When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA program, we have a set of key aims against which we can apply measures.

Behavior | Measure
Investigate to find root cause | % recurring issues
Implement actions to eliminate root cause | Preventive-to-corrective action ratio

To support each of these top-level measures, we define a set of behavior indicators, such as cycle time and right-first-time. To support those indicators, a review rubric is implemented.

Our four levels to measure training effectiveness will now look like this:

Level | Measure
Level 1: Reaction | Personal action plan and a happy sheet
Level 2: Learning | Completion of the rubric on a sample event
Level 3: Behavior | Continued performance and improvement against the rubric and the key review behavior indicators
Level 4: Results | A reduction in the % of recurring issues and an increase in the preventive-to-corrective action ratio

This is all about measuring the effectiveness of the transfer of behaviors.
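
As a rough sketch of how those Level 4 result measures could be computed from CAPA records (the record layout and field names here are assumptions for illustration, not any particular system's schema):

```python
# Sketch: computing the two CAPA result measures from a list of CAPA records.
# The record layout (fields "type" and "recurrence_of") is a hypothetical schema.

capas = [
    {"id": "CAPA-001", "type": "corrective", "recurrence_of": None},
    {"id": "CAPA-002", "type": "corrective", "recurrence_of": "CAPA-001"},
    {"id": "CAPA-003", "type": "preventive", "recurrence_of": None},
    {"id": "CAPA-004", "type": "preventive", "recurrence_of": None},
]

# Measure 1: % recurring issues (a CAPA opened for a root cause seen before)
recurring = sum(1 for c in capas if c["recurrence_of"] is not None)
pct_recurring = 100 * recurring / len(capas)

# Measure 2: preventive-to-corrective action ratio
preventive = sum(1 for c in capas if c["type"] == "preventive")
corrective = sum(1 for c in capas if c["type"] == "corrective")
ratio = preventive / corrective if corrective else float("inf")

print(f"% recurring issues: {pct_recurring:.1f}%")
print(f"preventive-to-corrective ratio: {ratio:.2f}")
```

Trended over time, these two numbers give the Level 4 view of whether the training actually moved the CAPA program.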

Strong Signals of Transfer Expectations in the Organization vs. Signals that Weaken Transfer Expectations in the Organization

Strong signal: Training participants are required to attend follow-up sessions and other transfer interventions.
What it indicates: Individuals and teams are committed to the change and to obtaining the intended benefits.

Weak signal: Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization.
What it indicates: The key factor for a trainee is attendance, not behavior change.

Strong signal: The training description specifies transfer goals (e.g., “Trainee increases CAPA success by driving down recurrence of root cause”).
What it indicates: The organization has a clear vision and expectation of what the training should accomplish.

Weak signal: The training description only roughly outlines training goals (e.g., “Trainee improves their root cause analysis skills”).
What it indicates: The organization has only a vague idea of what the training should accomplish.

Strong signal: Supervisors take time to support transfer (e.g., through pre- and post-training meetings). Transfer support is part of regular agendas.
What it indicates: Transfer is considered important in the organization and is supported by supervisors and managers, all the way to the top.

Weak signal: Supervisors do not invest in transfer support. Transfer support is not part of the supervisor role.
What it indicates: Transfer is not considered very important in the organization. Managers have more important things to do.

Strong signal: Each training ends with careful planning of individual transfer intentions.
What it indicates: Defining transfer intentions is a central component of the training.

Weak signal: Transfer planning at the end of the training does not take place, or takes place only sporadically.
What it indicates: Defining transfer intentions is not an essential part of the training, or not part of it at all.

Good training, and thus good and consistent transfer, builds these signals into the process. It is why I am such a fan of using a rubric to drive consistent performance.

Team Effectiveness

With much of the work in organizations accomplished through teams, it is important to determine the factors that lead to effective as well as ineffective team processes and to better specify how, why, and when they contribute. Whether the team is brought together for a specific project and then disbands, or is a fairly permanent part of the organization, similar principles are at work.

Input-Process-Output model

The input-process-output model of teams is a great place to start. While simplistic, it offers a good model of what makes teams work and is applicable to different types of teams.

Input factors are the organizational context, team composition, and task design that influence the team. Process factors mediate between the inputs and the desired outputs.

  • Leadership: The leadership style(s) (participative, facilitative, transformational, directive, etc.) of the team leader influences the team toward the achievement of goals.
  • Management support refers to the help or effort provided by senior management to assist the project team, including managerial involvement and resource support.
  • Rewards are the recompense that the organization gives in return for good work.
  • Knowledge/skills are the knowledge, experience and capability of team members to process, interpret, manipulate and use information.
  • Team diversity includes functional diversity as well as overall diversity.
  • Goal clarity is the degree to which the goals of the project are well defined and the importance of the goals to the organization is clearly communicated to all team members.
  • Cooperation is the measure of how well team members work with each other and with other groups.
  • Communication is the exchange of knowledge and information related to tasks with the team (internal) or between team members and external stakeholders (external).
  • Learning activities are the process by which a team takes action, obtains feedback, and makes changes to improve. Under this fits the PDCA lifecycle, including Lean, Six Sigma, and similar problem-solving methodologies.
  • Cohesion is the spirit of togetherness and support for other team members that helps team members quickly resolve conflicts without residual hard feelings, also referred to as team trust, team spirit, team member support or team member involvement.
  • Effort includes the amount of time that team members devote to the project.
  • Commitment refers to the condition where team members are bound emotionally or intellectually to the project and to each other during the team process.

Process factors are usually the focus of team excellence frameworks, such as those from the ASQ or the PMI.

Outputs, or outcomes, are the consequences of the team’s actions or activities:

  • Effectiveness is the extent to which a project achieves the performance expectations of key project stakeholders. Expectations usually differ across projects and stakeholders; thus, various measures have been used to evaluate effectiveness, usually quality, functionality, or reliability. Effectiveness can be meeting customer/user requirements, meeting project goals, or some other related set of measures.
  • Efficiency is the ability of the project team to meet its budget and schedule goals and to utilize resources within constraints. Measures include adherence to budget, adherence to schedule, resource utilization within constraints, etc.
  • Innovation is the creative accomplishment of teams in generating new ideas, methods, approaches, inventions, or applications and the degree to which the project outputs were novel.

Under this model we can find various levers to improve our outcomes and enhance the culture of our teams.

Lessons Learned and Change Management

One of the hallmarks of a quality culture is learning from our past experiences, to eliminate repeat mistakes and to reproduce success. The more times you do an activity, the more you learn, and the better you get (within limits for simple activities).  Knowledge management is an enabler of quality systems, in part, to focus on learning and thus accelerate learning across the organization as a whole, and not just one person or a team.

This is where the “lessons learned” process comes in. There are a lot of definitions of lessons learned out there, but the definition I keep returning to is that a lesson learned is a change in personal or organizational behavior that results from learning from experience. Ideally, this is a permanent, institutionalized change, and this is often where our quality systems can really drive continuous improvement.

Lessons learned flow: activity → lessons identified → updated processes

Part of Knowledge Management

The lessons learned process is an application of knowledge management.

Identifying lessons covers generating, assessing, and sharing the knowledge.

Updating processes (and documents) covers contextualizing, applying, and updating that knowledge.

Lessons Learned in the Context of Knowledge Management

Identify Lessons Learned

Identifying lessons needs to be done regularly, and the closer to the actual change management and control activities the better. The formality of this exercise depends on the scale of the change. There are a few major forms:

  • After action reviews: Held daily (or on another regular cycle) for high-intensity learning. Tends to be very focused on the questions of the day.
  • Retrospectives: Held at specific points, for example project gates or change control status changes. Tends to have a specific focus on a single project.
  • Consistency discussions: Held periodically among a community of practice, such as quality reviewers or multiple site process owners. This form looks holistically at all changes over a period of time (weekly, monthly, quarterly). Very effective when linked to a set of leading and lagging indicators.
  • Incidents and events: Deviations happen. Make sure you learn the lessons and implement solutions.

The chosen formality should be based on the level of change. A healthy organization will be utilizing all of these.

Level of Change | Form of Lessons Learned
Transactional | Consistency discussion; after action review (when things go wrong)
Organizational | Retrospective; after action review (weekly, daily as needed)
Transformational | Retrospective; after action review (daily)

Successful lessons learned:

  • Are based on solid performance data: facts and the analysis of facts.
  • Look at positive and negative experiences.
  • Refer back to the change management process, the objectives of the change, and other success criteria.
  • Separate experience from opinion as much as possible. A lesson arises from actual experience and is an objective reflection on the results.
  • Generate distinct lessons from which others can learn and take action. A good action avoids generalities.

In practice there are a lot of similarities between the techniques used to facilitate a good lessons learned session and a root cause analysis. Start with a good core of questions, beginning with the what:

  • What were some of the key issues?
  • What were the success factors?
  • What worked well?
  • What did not work well?
  • What were the challenges and pitfalls?
  • What would you approach differently if you ever did this again?

From these what questions, we can continue to narrow in on the learnings by asking why and how questions. Ask open questions, and utilize all the techniques of root cause analysis here.

Then, once you are at (or close to) a defined issue for the learning (a root cause), ask a future-tense question to make it actionable, such as:

  • What would your advice be for someone doing this in the future?
  • What would you do next time?

Press for specifics. If it is not actionable, it is not really a learning.

Update the Process

Learning implies memory, and an organization’s memories usually require procedures, job aids and other tools to be updated and created. In short, lessons should evolve your process. This is often the responsibility of the change management process owner. You need to make sure the lesson actually takes hold.

Differences between effectiveness reviews and lessons learned

There are three things to answer in every change:

  1. Was the change effective? Did it meet the intended purposes?
  2. Did the change have any unexpected effects?
  3. What can we learn from this change for the next change?

Effectiveness reviews answer questions 1 and 2 (following a risk-based approach), while lessons learned answers question 3. Lessons learned contribute to the health of the system and drive continuous improvement in how we make changes.

Citations

  • Lesson learned management model for solving incidents. (2017). 2017 12th Iberian Conference on Information Systems and Technologies (CISTI), 1.
  • Fowlin, J., & Cennamo, K. (2017). Approaching Knowledge Management Through the Lens of the Knowledge Life Cycle: A Case Study Investigation. TechTrends: Linking Research & Practice to Improve Learning, 61(1), 55–64.
  • Michell, V., & McKenzie, J. (2017). Lessons learned: Structuring knowledge codification and abstraction to provide meaningful information for learning. VINE: The Journal of Information & Knowledge Management Systems, 47(3), 411–428.
  • Milton, N. J. (2010). The Lessons Learned Handbook: Practical Approaches to Learning from Experience. Burlington: Chandos Publishing.
  • Carlile, P. R. (2004). Transferring, Translating, and Transforming: An Integrative Framework for Managing Knowledge across Boundaries. Organization Science, (5), 555.
  • Secchi, P. (Ed.) (1999). Proceedings of Alerts and Lessons Learned: An Effective Way to Prevent Failures and Problems. Technical Report WPP-167. Noordwijk, The Netherlands: ESTEC.

Effective Organizations — Think Different

I recently had a bit of a wake-up call via Twitter. I asked the following question: “What’s the one thing /above all/ that makes for an effective organisation?” My thanks to all those who took the time to reply with their viewpoint. The wake-up call for me was the variety of these responses. All […]

via Effectiveness — Think Different

Great thought-piece over on “Think Different” on effectiveness, with a nice tie-in to Donella Meadows’s “Twelve Leverage Points to Intervene in a System.”

In quality management systems, it is critical to look at effectiveness. If you do not measure, you do not know if the system is working the ways you expect and desire.

We often discuss lagging (output measurement) and leading (predictive) indicators, and this is a good way to start, but if we apply systems thinking and use Meadows’s twelve leverage points, we can see that most metrics tend to sit around leverage points 7 through 12, with the more effective levers being the least utilized.

I think there is a lot of value in finding metrics within these levers.

So, for example, here are a few indicators of the effectiveness of lever 4, “The Power to Add, Change, Evolve, or Self-Organize System Structure”:

Lagging | Leading
Effective CAPAs to the system | Number of changes initiated, by level of organization and scale of change
Deviation reduction |
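
As a minimal sketch of how that leading indicator could be tallied from a change log (the log fields and category values here are assumptions for illustration, not a real system's schema):

```python
# Sketch: tallying the leading indicator "number of changes initiated by level of
# organization and scale of change" from a change log. Field names and category
# values are hypothetical placeholders.
from collections import Counter

change_log = [
    {"initiated_by": "operator",   "scale": "transactional"},
    {"initiated_by": "operator",   "scale": "transactional"},
    {"initiated_by": "supervisor", "scale": "organizational"},
    {"initiated_by": "site_lead",  "scale": "transformational"},
]

tally = Counter((c["initiated_by"], c["scale"]) for c in change_log)
for (who, scale), count in sorted(tally.items()):
    print(f"{who:>12} | {scale:<16} | {count}")
```

A widening spread of who initiates changes, and at what scale, can then be read as a leading signal that this lever is actually being exercised.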