One of the key parts of any change (process improvement, project, etc.) is preparing people to actually do the work effectively. Every change needs training.
Building valid and reliable training at the right level for the change is critical. Training is valid when it is tied to the requirements of the job – the objectives – and when it includes evaluations linked to the skills and knowledge stated in the objectives. Reliability means the training clearly differentiates between those who can perform the task and those who cannot.
A lot of changes default to read-and-understand training. Quite bluntly, it is the bane of valid and reliable training, with roughly zero value, and would be removed from our toolkit if I had my way.
There are a lot of training models, but I hold that there is no single best method. The most effective and efficient combination of methods should be chosen depending on the training material to be covered and the specific needs of the target group.
For my purposes I’ll draw from Edgar Dale’s Cone of Experience, which incorporates several theories related to instructional design and learning processes. Dale theorized that what a learner retains is based on what they “do” as opposed to what they “hear,” “read,” or “observe.” This is often called experiential or action learning.
Based on this understanding we can break the training types down. For example:
- Structured discussions are Verbal with some Visual, and live within the Abstract
- Computer Based Trainings are mostly Iconic, with a little Concrete
- Instructor Led Trainings lean heavily on the Concrete
- On-the-job training is all about the Concrete
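The mapping above can be sketched in code. This is a minimal illustration, and the category weights below are my own assumptions for the sake of the example, not values from Dale's original work:

```python
# Illustrative mapping of training methods to Cone of Experience
# categories. Weights are assumptions, not from Dale's original work.
TRAINING_METHODS = {
    "structured discussion":   {"Abstract": 0.7, "Iconic": 0.3, "Concrete": 0.0},
    "computer based training": {"Abstract": 0.2, "Iconic": 0.6, "Concrete": 0.2},
    "instructor led training": {"Abstract": 0.2, "Iconic": 0.2, "Concrete": 0.6},
    "on the job training":     {"Abstract": 0.0, "Iconic": 0.1, "Concrete": 0.9},
}

def dominant_category(method: str) -> str:
    """Return the Cone of Experience category a method leans on most."""
    weights = TRAINING_METHODS[method]
    return max(weights, key=weights.get)
```

The point of the sketch is simply that each method sits mostly in one region of the cone, with on-the-job training landing squarely in the Concrete.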
Once we have our agreed-upon training methods and understand what makes each of them good training, we can then determine what criteria of a change lead to the best training outcome. Some example criteria include:
- Is a change in knowledge or skills needed to execute the procedure?
- Is the process or change complex? Are there multiple changes?
- How critical is the process, and what is the risk of a performance error? How difficult are errors to detect?
- What is the identified audience (e.g., location, size, department, single site vs. multiple sites)?
- Is the goal to change workers' conditioned behavior?
This sort of questioning gets us to risk-based thinking. We are determining where our training delivers the biggest bang.
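One way to picture the risk-based thinking is a simple scoring sketch over the criteria questions. The thresholds and recommendations below are hypothetical, chosen only to illustrate the idea that higher-risk changes warrant more experiential (Concrete) training:

```python
# Hypothetical risk-based training selection. The criteria mirror the
# questions above; thresholds and recommendations are illustrative
# assumptions, not a validated model.
from dataclasses import dataclass

@dataclass
class ChangeProfile:
    new_knowledge_or_skill: bool  # new skills needed to execute the procedure?
    complex_or_multiple: bool     # complex process, or multiple changes?
    high_criticality: bool        # critical process / errors hard to detect?
    behavior_change: bool         # are we changing conditioned behavior?

def recommend_training(profile: ChangeProfile) -> str:
    """Map a change's risk profile to a training approach (illustrative)."""
    score = sum([profile.new_knowledge_or_skill,
                 profile.complex_or_multiple,
                 profile.high_criticality,
                 profile.behavior_change])
    if score >= 3:
        return "on the job training"     # highest risk: learn by doing
    if score == 2:
        return "instructor led training"
    if score == 1:
        return "computer based training"
    return "structured discussion"       # low risk: awareness is enough
```

A real change management process would weigh these criteria with far more nuance, but the shape of the decision is the same: the bigger the risk, the further down the cone toward the Concrete the training should go.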
Building training is a different set of skills. I keep threatening a training peer with doing a podcast episode (probably more than one) on the subject (do I really want to do podcasts?).
The last thing I want to leave you with is this: build training evaluations into the change. Kirkpatrick's model is a favorite – a Level 4 (Results) evaluation, which tells us how effective our training was over time, makes a darn good effectiveness review. I strongly recommend building that into a change management process.
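A Level 4 (Results) check can be as simple as comparing a job-level metric before and after training. This is a hypothetical sketch; the metric (error counts) and the target reduction are my own assumptions for illustration:

```python
# Hypothetical Kirkpatrick Level 4 (Results) check for an effectiveness
# review: did a job-level metric improve enough after training?
# Metric choice and target threshold are illustrative assumptions.
def effectiveness_review(errors_before: int, errors_after: int,
                         target_reduction: float = 0.5) -> bool:
    """True if post-training error counts dropped by the target fraction."""
    if errors_before == 0:
        return errors_after == 0  # nothing to reduce; hold the line
    reduction = (errors_before - errors_after) / errors_before
    return reduction >= target_reduction
```

Whatever metric you pick, the key is that it measures results on the job over time, not how learners felt about the course.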