Previously I’ve talked about defining the values and behavior associated with quality culture. Once you’ve established these behaviors, a key way to make them happen is through microfeedback, a skill each quality professional, supervisor, and leader in your organization should be trained on.
We are all familiar with the traditional feedback loop: you receive feedback, reflect on it, make a plan, and then take action. This means feedback is given after a series of actions have taken place, and it addresses a few key observations for future improvements. When actions and sequences are complicated and interdependent, that kind of feedback can fail to provide useful insights to improve performance. Microfeedback can potentially be leveraged to prevent critical mistakes and mitigate risks, which makes it a great way to build culture and drive performance.
Microfeedback is a specific and just-in-time dose of information or insight that can reduce gaps between desired behavioral goals and reality. Think of it as a microscope used to evaluate an individual’s comprehension and behavior and prescribe micro-interventions to adjust performance and prevent mistakes.
Microfeedback, provided during the activity being observed, is a fundamental aspect of the Gemba walk. These small tweaks provide timely insights and easy-to-accomplish learning objectives, giving individuals the clarity and motivation to modify their performance.
Where and when the microfeedback happens is key:
1. Task-based microfeedback focuses corrective or suggestive insights on the content of a task. For higher impact, focus microfeedback on correct actions rather than on incorrect performance. For example, “Report this issue as an incident…”
2. Process-based microfeedback focuses on the learning process and works best to foster critical thinking in a complex environment. For example, “This issue can be further processed based on the decision tree strategies we talked about earlier.”
3. Self-regulation-based microfeedback gives suggestive or directive insights that help individuals better manage and regulate their own learning. For example, “Pause once you have completed the task and ask yourself a set of questions following the 5W2H formula.”
For microfeedback to be truly successful, it needs to sit in the context of a training program where clear behavioral goals have been set. This training program should include a specific track for managers that allows them to provide microfeedback to close the gap between where the learner is and where the learner aims to be. This training will provide specific cues or reinforcement toward a well-understood task and focus on the levels of task, process, or self-regulation.
During change management, provide positive microfeedback on correct, rather than incorrect, performance. This can be very valuable as you think about the sustainability of the change.
Leveraged successfully by well-trained observers and peers, microfeedback provides incremental and timely adjustments to drive behavior.
Organize resources so they are easy to understand. Reduce cognitive load by breaking information down into small, digestible chunks and arranging them into patterns that make sense to the individual. Always start by giving an overview so individuals know how all the smaller chunks fit together.
Use visuals. The brain has an incredible ability to remember visual images so you must exploit that as you look for ways to reinforce key learning points. Create tools that are primarily visual rather than word-based. Use images in place of text (or at least minimize the text). Use videos and animations to help people understand key concepts.
We can drive a lot of effectiveness into our processes by structuring information to make complex documents more transparent and accessible to their users. Visual cues can provide an ‘attention hierarchy’, making sure that what is most important is not overlooked. People tend to find more usable what they find beautiful, and a wall of text simply looks scary, cumbersome, and off-putting for most people. I am a strong advocate of beauty in system design, and I would love to see Quality departments better known for their aesthetic principles and for tying all our documents into good cognitive principles.
Cognitive Load Theory
Cognitive load theory (CLT) can help us understand why people struggle so much to read and understand complex documents such as contracts and procedures. Developed by John Sweller, initially while studying problem-solving, CLT postulates that learning happens best when information is presented in a way that takes human cognitive structures into consideration. Limited working memory capacity is one of the characteristic aspects of human cognition; thus, comprehension and learning can be facilitated by presenting information in ways that minimize working memory load.
Adapted from Atkinson, R.C. and Shiffrin, R.M. (1968). ‘Human memory: A Proposed System and its Control Processes’. In Spence, K.W. and Spence, J.T. The psychology of learning and motivation, (Volume 2). New York: Academic Press. pp. 89–195
Structure and Display
Information structure (how the content is ordered and organized) and information display (how it is visually presented) play a key role in supporting comprehension and performance. A meaningful information structure helps readers preserve continuity, allowing the formation of a useful and easy-to-process mental model. Visual information display facilitates mental model creation by representing information structures and relationships more explicitly, so readers do not have to use cognitive resources to develop a mental model from scratch.
Leveraging in your process/procedure documents
Much of what is considered necessary SOP structure is not based on how people need to find and utilize information. Many of the parts of a document taken for granted (e.g. reference documents, definitions) are relics from paper-based systems. It is past time to reinvent the procedure.
Does training in your organization seem like death by PowerPoint? Is learning viewed as something an expert dumps in the lap of the learner? That’s not what learning is – lectures and one-way delivery result in very little learning.
For deeper meaning to occur, invest in professionally facilitated experiences that enable staff to form mental models they remember. Get people thinking before and after the training to ensure that the mental model stays fresh in the mind.
Culture of Cutting Time
Avoid the desire to cut training into shorter and shorter chunks. The demands of the workplace are increasingly complex and stressful, so any time out of the office is a serious cost. The paradox is that by shortening the training we don’t allow time for structured learning, which sabotages the investment – when the training program could be substantially improved by adding the time needed for the learning to be consolidated.
We know that learning takes place when people have fun, stress is low, and the environment encourages discovery. Make training cheerful and open rather than dull and quiet. Encourage lots of informal learning opportunities. Give more control to the learner to shape their experience. Have fun!
As part of his model for Proxies for Work-as-Done, Steven Shorrock covers Work-as-Instructed. I think the entire series is salient to the work of building a quality organization, so please spend the time to read the entire series. You’ll definitely see inspiration in many of the themes I’ve been discussing.
When designing training we want to make sure four things happen:
Training is used correctly as a solution to a performance problem
Training has the right content, objectives, and methods
Trainees are sent to training for which they do have the basic skills, prerequisite skills, or confidence needed to learn
Training delivers the expected learning
Training is a useful lever in organization change and improvement. We want to make sure the training drives organization metrics. And like everything, you need to be able to measure it to improve.
The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training), and while other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, this model has been in use for over 50 years, evolving through application by learning and development professionals around the world, and it is the most recognized method of evaluating the effectiveness of training programs. The model has stood the test of time and became popular due to its ability to break a complex subject down into manageable levels. It accommodates any style of training, both informal and formal.
Level 1: Reaction
Kirkpatrick’s first level measures the learners’ reaction to the training. A level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed and valued the time spent. Level 1 evaluations, euphemistically called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation will concentrate on three elements: the course content, the physical environment, and the instructor’s presentation skills.
Level 2: Learning
Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is through the use of a pre- and posttest. Pre- and posttests are key to ascertaining whether the participants learned anything in the learning event. Identical pre- and posttests are essential because the difference between the pre- and posttest scores indicates the amount of learning that took place. Without a pretest, one does not know if the trainees knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.
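The scoring logic behind identical pre- and posttests can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation; the trainee names and scores are made up for the example.

```python
# Minimal sketch: estimating Level 2 learning from identical pre- and
# posttests. The difference between the two scores for each trainee
# indicates the amount of learning that took place.

def learning_gain(pre_scores: dict, post_scores: dict) -> dict:
    """Return per-trainee score change (posttest minus pretest)."""
    return {name: post_scores[name] - pre_scores[name]
            for name in pre_scores if name in post_scores}

# Illustrative data (percent-correct on the same test, before and after)
pre = {"trainee_a": 55, "trainee_b": 70, "trainee_c": 65}
post = {"trainee_a": 85, "trainee_b": 75, "trainee_c": 90}

gains = learning_gain(pre, post)
average_gain = sum(gains.values()) / len(gains)
```

A near-zero average gain suggests either that the trainees already knew the material (high pretest scores) or that the session taught little; the pretest is what lets you tell those two cases apart.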
Level 3: Behavior
Level 3 measures whether the learning is transferred into practice in the workplace.
Level 4: Results
Measures the effect on the business environment. Do we meet objectives?
For each evaluation level, the characteristics evaluated and examples of measurement methods are summarized below.
Level 1: Reaction
Characteristics: Reaction evaluation is how the delegates felt, and their personal reactions to the training or learning experience:
▪ Did the trainees consider the training relevant?
▪ Did they like the venue, equipment, timing, domestics, etc.?
▪ Did the trainees like and enjoy the training?
▪ Was it a good use of their time?
▪ What was the level of participation?
▪ How easy and comfortable was the experience?
Examples:
▪ Feedback forms based on subjective personal reaction to the training experience
▪ Verbal reactions, which can be analyzed
▪ Post-training surveys or questionnaires
▪ Online evaluation or grading by delegates
▪ Subsequent verbal or written reports given by delegates to managers back at their jobs
▪ Typically ‘happy sheets’
Level 2: Learning
Characteristics: Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience:
▪ Did the trainees learn what was intended to be taught?
▪ Did the trainees experience what was intended for them to experience?
▪ What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?
Examples:
▪ Typically assessments or tests before and after the training
▪ Interviews or observation can be used before and after, although this is time-consuming and can be inconsistent
▪ Methods of assessment need to be closely related to the aims of the learning
▪ Reliable, clear scoring and measurements need to be established
▪ Hard-copy, electronic, online, or interview-style assessments are all possible
Level 3: Behavior
Characteristics: Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior, measured either immediately or several months after the training, depending on the situation:
▪ Did the trainees put their learning into effect when back on the job?
▪ Were the relevant skills and knowledge used?
▪ Was there a noticeable and measurable change in the activity and performance of the trainees when back in their roles?
▪ Would the trainees be able to transfer their learning to another person? Are they aware of their change in behavior, knowledge, and skill level?
▪ Was the change in behavior and new level of knowledge sustained?
Examples:
▪ Observation and interviews over time are required to assess change, the relevance of change, and the sustainability of change
▪ Assessments need to be designed to reduce the subjective judgment of the observer
▪ 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgment as to change after training, and this can be analyzed for groups of respondents and trainees
▪ Online and electronic assessments are more difficult to incorporate; assessments tend to be more successful when integrated within existing management and coaching protocols
Level 4: Results
Characteristics: Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee – it is the acid test. Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance; for instance, numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, and retention.
Examples:
▪ The challenge is to identify which results relate to the trainee’s input and influence, and how. It is therefore important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured
▪ This process overlays normal good management practice – it simply needs linking to the training input
▪ For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training
4 Levels of Training Effectiveness
Example in Practice – CAPA
When building a training program, start with the intended behaviors that will drive results. In evaluating our CAPA program, we have two key aims against which we can apply measures.
To support each of these top-level measures we define a set of behavior indicators, such as cycle time, right the first time, etc. To support these, a review rubric is implemented.
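A review rubric like the one described can be expressed as weighted criteria scored per event. The criteria names, weights, and scale below are assumptions made up for the sketch, not a prescribed rubric.

```python
# Hypothetical sketch of scoring a CAPA review rubric. The criteria,
# weights, and 0-5 scale are illustrative assumptions only.

RUBRIC_WEIGHTS = {
    "problem_statement": 0.25,     # clarity and specificity of the problem
    "root_cause_depth": 0.35,      # depth of the root cause analysis
    "action_effectiveness": 0.40,  # strength of corrective/preventive actions
}

def rubric_score(scores: dict) -> float:
    """Weighted rubric score (0-5 scale) for one reviewed event."""
    return sum(RUBRIC_WEIGHTS[criterion] * scores[criterion]
               for criterion in RUBRIC_WEIGHTS)

# Illustrative scoring of a single reviewed CAPA event
event = {"problem_statement": 4, "root_cause_depth": 3, "action_effectiveness": 5}
score = rubric_score(event)
```

Scoring the same rubric consistently across reviewers and events is what makes the Level 2 and Level 3 measures below comparable over time.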
Our four levels to measure training effectiveness will now look like this:
Level 1 (Reaction): Personal action plan and a happy sheet
Level 2 (Learning): Completion of the rubric on a sample event
Level 3 (Behavior): Continued performance and improvement against the rubric and the key review behavior indicators
Level 4 (Results): Improvement in the percentage of recurring issues and an increase in the ratio of preventive to corrective actions
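The Level 4 measures named above can be computed directly from CAPA records. The sketch below is one hypothetical way to do so; the record field names (`recurrence`, `action_type`) are assumptions for illustration, not a required data model.

```python
# Hypothetical sketch of the two Level 4 CAPA measures: percentage of
# recurring issues, and the preventive-to-corrective action ratio.
# Field names are illustrative assumptions only.

def recurring_issue_pct(capas: list) -> float:
    """Percentage of CAPAs flagged as recurrences of a known root cause."""
    recurring = sum(1 for c in capas if c["recurrence"])
    return 100.0 * recurring / len(capas)

def preventive_to_corrective_ratio(capas: list) -> float:
    """Ratio of preventive actions to corrective actions across all CAPAs."""
    preventive = sum(1 for c in capas if c["action_type"] == "preventive")
    corrective = sum(1 for c in capas if c["action_type"] == "corrective")
    return preventive / corrective

# Illustrative records
records = [
    {"recurrence": True,  "action_type": "corrective"},
    {"recurrence": False, "action_type": "preventive"},
    {"recurrence": False, "action_type": "corrective"},
    {"recurrence": False, "action_type": "preventive"},
]
```

Trending these two numbers quarter over quarter, rather than reading them once, is what ties the training back to organizational results.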
This is all about measuring the effectiveness of the transfer of behaviors.
Strong Signals of Transfer Expectations in the Organization versus Signals that Weaken Transfer Expectations:

1. Strong: Training participants are required to attend follow-up sessions and other transfer interventions. What it indicates: Individuals and teams are committed to the change and to obtaining the intended benefits.
Weak: Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization. What it indicates: The key factor for a trainee is attendance, not behavior change.

2. Strong: The training description specifies transfer goals (e.g. “Trainee increases CAPA success by driving down recurrence of root cause”). What it indicates: The organization has a clear vision of and expectation for what the training should accomplish.
Weak: The training description only roughly outlines training goals (e.g. “Trainee improves their root cause analysis skills”). What it indicates: The organization has only a vague idea of what the training should accomplish.

3. Strong: Supervisors take time to support transfer (e.g. through pre- and post-training meetings). Transfer support is part of regular agendas. What it indicates: Transfer is considered important in the organization and is supported by supervisors and managers, all the way to the top.
Weak: Supervisors do not invest in transfer support. Transfer support is not part of the supervisor role. What it indicates: Transfer is not considered very important in the organization. Managers have more important things to do.

4. Strong: Each training ends with careful planning of individual transfer intentions. What it indicates: Defining transfer intentions is a central component of the training.
Weak: Transfer planning at the end of the training does not take place, or takes place only sporadically. What it indicates: Defining transfer intentions is not an essential part of the training.
Good training, and thus good and consistent transfer, builds that into the process. It is why I am such a fan of using a rubric to drive consistent performance.