Does training in your organization seem like death by PowerPoint? Is learning viewed as something an expert dumps in the lap of the learner? That is not what learning is – lectures and one-way delivery produce very little learning.
For deeper learning to occur, invest in professionally facilitated experiences that help staff form mental models they will remember. Get people thinking before and after the training so the mental model stays fresh in the mind.
Culture of Cutting Time
Resist the desire to deliver training in shorter and shorter chunks. The demands of the workplace are increasingly complex and stressful, so any time out of the office is a real cost. The paradox is that by shortening the training we deny it the time structured learning requires, sabotaging the investment – when the program could be substantially improved by adding the time needed for the learning to be consolidated.
We know that learning takes place when people have fun, stress is low, and the environment encourages discovery. Make training cheerful and open rather than dull and quiet. Encourage lots of informal learning opportunities. Give more control to the learner to shape their experience. Have fun!
When designing training we want to make sure four things happen:
Training is used correctly as a solution to a performance problem
Training has the right content, objectives and methods
Trainees are sent to training for which they have the basic skills, prerequisite skills, and confidence needed to learn
Training delivers the expected learning
Training is a useful lever in organizational change and improvement. We want the training to drive organizational metrics, and like everything else, you need to be able to measure it to improve it.
The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training). While other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, it has been in use for over 50 years, evolving through application by learning and development professionals around the world, and it is the most recognized method of evaluating the effectiveness of training programs. It has stood the test of time because it breaks a complex subject into manageable levels and accommodates any style of training, informal or formal.
Level 1: Reaction
Kirkpatrick’s first level measures the learners’ reaction to the training. A level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed and valued the time spent. Level 1 evaluations, colloquially called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation concentrates on three elements: course content, the physical environment, and the instructor’s presentation and skills.
Level 2: Learning
Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is a pre- and posttest. Identical pre- and posttests are essential: the difference between the scores indicates the amount of learning that took place. Without a pretest, one does not know whether the trainees knew the material before the session; unless the questions are the same, one cannot be certain the trainees learned the material in the session.
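The pre/post comparison can be sketched as a simple calculation. A minimal sketch; the scores and the function name are hypothetical, not from any particular assessment tool:

```python
def learning_gain(pre_scores, post_scores):
    """Average per-trainee improvement on identical pre- and posttests."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Hypothetical percent-correct scores for five trainees.
pre = [40, 55, 35, 60, 50]
post = [75, 80, 70, 85, 78]

# Because the tests are identical, the difference is attributable
# to the learning event rather than to question difficulty.
print(learning_gain(pre, post))  # average gain in percentage points
```

A consistently small gain suggests either that the trainees already knew the material (high pretest scores) or that the session did not teach it.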
Level 3: Behavior
Level 3 measures whether the learning is transferred into practice in the workplace.
Level 4: Results
Measures the effect on the business environment. Did we meet our objectives?
Level 1: Reaction
Reaction evaluation is how the delegates felt about the training or learning experience, and their personal reactions to it, for example: ▪ Did the trainees consider the training relevant? ▪ Did they like the venue, equipment, timing, domestics, etc.? ▪ Did the trainees like and enjoy the training? ▪ Was it a good use of their time? ▪ Level of participation ▪ Ease and comfort of the experience
Typical level 1 tools and methods: ▪ Feedback forms based on subjective personal reaction to the training experience ▪ Verbal reactions, which can be analyzed ▪ Post-training surveys or questionnaires ▪ Online evaluation or grading by delegates ▪ Subsequent verbal or written reports given by delegates to managers back at their jobs ▪ Typically ‘happy sheets’
Level 2: Learning
Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience: ▪ Did the trainees learn what was intended to be taught? ▪ Did the trainees experience what was intended for them to experience? ▪ What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?
Typical level 2 tools and methods: ▪ Interviews or observation can be used before and after, although this is time-consuming and can be inconsistent ▪ Typically, assessments or tests before and after the training ▪ Methods of assessment need to be closely related to the aims of the learning ▪ Reliable, clear scoring and measurements need to be established ▪ Hard-copy, electronic, online or interview-style assessments are all possible
Level 3: Behavior
Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior; this can be measured immediately after the training or several months later, depending on the situation: ▪ Did the trainees put their learning into effect when back on the job? ▪ Were the relevant skills and knowledge used? ▪ Was there a noticeable and measurable change in the activity and performance of the trainees when back in their roles? ▪ Would the trainees be able to transfer their learning to another person? ▪ Are the trainees aware of their change in behavior, knowledge, or skill level? ▪ Was the change in behavior and new level of knowledge sustained?
Typical level 3 tools and methods: ▪ Observation and interviews over time are required to assess change, the relevance of change, and the sustainability of change ▪ Assessments need to be designed to reduce the subjective judgment of the observer ▪ 360-degree feedback is a useful method and need not be used before training, because respondents can judge the change after training, and this can be analyzed across groups of respondents and trainees ▪ Online and electronic assessments are more difficult to incorporate – assessments tend to be more successful when integrated within existing management and coaching protocols
Level 4: Results
Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee – it is the acid test.
Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, retention, etc.
The challenge is to identify which of these measures relate to the trainee’s input and influence, and how. It is therefore important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured ▪ This process overlays normal good management practice – it simply needs linking to the training input ▪ For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training
4 Levels of Training Effectiveness
Example in Practice – CAPA
When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA program, we have two key aims, which we can apply measures against.
(a) Each person engaged in the manufacture, processing, packing, or holding of a drug product shall have education, training, and experience, or any combination thereof, to enable that person to perform the assigned functions. Training shall be in the particular operations that the employee performs and in current good manufacturing practice (including the current good manufacturing practice regulations in this chapter and written procedures required by these regulations) as they relate to the employee’s functions. Training in current good manufacturing practice shall be conducted by qualified individuals on a continuing basis and with sufficient frequency to assure that employees remain familiar with CGMP requirements applicable to them.
(b) Each person responsible for supervising the manufacture, processing, packing, or holding of a drug product shall have the education, training, and experience, or any combination thereof, to perform assigned functions in such a manner as to provide assurance that the drug product has the safety, identity, strength, quality, and purity that it purports or is represented to possess.
(c) There shall be an adequate number of qualified personnel to perform and supervise the manufacture, processing, packing, or holding of each drug product.
US FDA 21 CFR 211.25
All parts of the Pharmaceutical Quality system should be adequately resourced with competent personnel, and suitable and sufficient premises, equipment and facilities.
EU EMA/INS/GMP/735037/2014, 2.1
The organization shall determine and provide the resources needed for the establishment, implementation, maintenance and continual improvement of the quality management system. The organization shall consider:
a) the capabilities of, and constraints on, existing internal resources; b) what needs to be obtained from external providers.
ISO 9001:2015 requirement 7.1.1
It is critical to have enough people with the appropriate level of training to execute their tasks.
It is fairly easy to define the individual training plan, stemming from the job description and the process training requirements. In the aggregate we get the ability to track overdue training, and a forward look at what training is coming due. Frankly, though, these are lagging indicators: they show success at completing assigned training but give no insight into the central question – do we have enough qualified individuals to do the work?
To make this proactive, we start with the resource plan: what operations need to happen in a given time frame, and what resources do they need? We then compare that to the training requirements for those operations.
We can then evaluate current training status and retention levels and determine how many instructors we will need to ensure adequate training.
We perform a gap assessment to determine what new training needs exist.
We then take a forward look at what new improvements are planned and ensure appropriate training is forecasted.
Now we have a good picture of what an “adequate number” is. We can now set a leading KPI to ensure that training is truly proactive.
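The steps above can be sketched as a simple coverage calculation: compare forecast demand for qualified people per operation against the currently qualified headcount. A minimal sketch; every operation name and number below is hypothetical, not taken from the post:

```python
# Leading "adequate number" KPI sketch.
# required: people needed per operation next quarter (from the resource plan).
# qualified: currently trained and qualified headcount per operation.
required = {
    "deviation_review": 6,
    "capa_ownership": 4,
    "change_control": 5,
}
qualified = {
    "deviation_review": 5,
    "capa_ownership": 4,
    "change_control": 3,
}

# Gap assessment: operations where demand exceeds qualified supply.
gaps = {op: required[op] - qualified.get(op, 0) for op in required}
shortfalls = {op: g for op, g in gaps.items() if g > 0}

# Leading KPI: fraction of planned demand covered by qualified staff.
coverage = sum(
    min(qualified.get(op, 0), n) for op, n in required.items()
) / sum(required.values())

print(shortfalls)         # operations needing more qualified people
print(f"{coverage:.0%}")  # qualified coverage versus the resource plan
```

Tracking `coverage` against the forward resource plan, rather than counting completed assignments, is what makes the indicator leading rather than lagging.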
As discussed in the post “CVs and JDs and Training Plans”, the training plan takes the job description and determines what training a given individual requires. It does this by looking at the roles on the job description and cross-referencing them with the training requirements for each role established by the process owner.
The functional manager is responsible for determining for any given job which roles within a process an individual has.
The process owner, for each process, then sets the training requirements for each role.
Take, for example, a job description that has these three job responsibilities:
Lead inspection readiness activities and provide support during regulatory site inspections
Participate in the vendor management process including the creation and review of Quality Agreements with suppliers
Write, review and manage approval of deviations, change controls and CAPAs
Those three bullets contain a ton of job requirements that translate to roles in processes.
Regulatory Site Inspections – Participant
Quality Agreements – Author, Reviewer, Approver
Deviations, Change Controls and CAPAs – Author, Reviewer, Approver
Roles and Processes from an Example Job Description
The functional manager, when writing the job description, should understand the exact roles in the processes. A good practice is to have a role catalog as part of the process framework. For example, “participant” may not be the right role name, and a more specific role should be used.
The process owner for each of those processes has (usually with help from the training unit) determined the right training for each role. These usually take the form of curricula made up of individual items.
It’s useful to think of these curricula as building blocks. For example, quality agreements, deviation/CAPA, and change control all require a technical writing curriculum. The training unit can maintain one technical writing curriculum and add it to specific roles as appropriate.
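The building-block idea can be sketched as data: shared curricula defined once, then composed per process and role. A minimal sketch; the curriculum names, item codes, and role assignments are all hypothetical:

```python
# Shared building-block curricula, each a list of training items.
# All names and course codes below are invented for illustration.
BLOCKS = {
    "gmp_basics": ["GMP-100 Intro to CGMP"],
    "technical_writing": ["TW-101 Technical Writing Basics"],
    "capa_process": ["CAPA-200 CAPA Procedure", "CAPA-210 Root Cause Analysis"],
}

# Each (process, role) pair composes the blocks it needs.
ROLE_CURRICULA = {
    ("CAPA", "Author"): ["gmp_basics", "technical_writing", "capa_process"],
    ("CAPA", "Approver"): ["gmp_basics", "capa_process"],
}

def training_plan(process, role):
    """Expand a role's building blocks into the flat list of training items."""
    items = []
    for block in ROLE_CURRICULA[(process, role)]:
        items.extend(BLOCKS[block])
    return items

print(training_plan("CAPA", "Author"))
```

Updating the `technical_writing` block then propagates to every role that composes it, which is the point of treating curricula as building blocks rather than copying items into each role.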
The training plan is a key record, on which everything else hinges.
In an ideal world this would be automated. But in my experience it is manual: the job description is not functionality in the Learning Management System, and building the training plan involves a degree of translation. This is an excellent opportunity for anyone who reads my blog and works for an LMS company to wow me.
Changes to the job description drive changes to the training plan. As an individual’s work changes, so too do the processes and roles they interact with. The individual is responsible here, and the functional manager is accountable.
Changes to the processes drive changes to the training plan. The process owner is accountable here.