Utilizing Rubrics

A rubric is a tool used primarily in educational settings to evaluate and assess student performance. It provides a clear set of criteria and standards that describe varying levels of quality for a specific assignment or task. Rubrics are designed to ensure consistency and objectivity in grading and feedback, making them a valuable resource for both teachers and students.

Rubrics are also useful for assessing competencies and skills within organizations, providing a structured way to evaluate strengths and weaknesses. This makes them well suited to knowledge-based activities, where they help gauge whether training is appropriate and work is executed correctly. They help demonstrate that an outcome is a good one.

Key Features of a Rubric

  • Criteria: Rubrics list specific criteria that are important for the assignment. These criteria outline what is expected from the intended work, such as clarity, organization, and mechanics in a writing assignment.
  • Performance Levels: Rubrics define different levels of achievement for each criterion, often using descriptive language (e.g., excellent, good, needs improvement) or numerical scores (e.g., 4, 3, 2, 1).
  • Feedback and Guidance: Rubrics provide detailed feedback, helping individuals understand their strengths and areas for improvement. This feedback can guide them in revising their work to meet learning objectives more effectively.

Types of Rubrics

  • Analytic Rubrics: These break down the assignment into several components, each with its own set of criteria and performance levels. This type provides detailed feedback on specific areas of the work.
  • Holistic Rubrics: These assess the work as a whole rather than individual components. They provide a single overall score based on the general quality of the work.
  • Single-Point Rubrics: These focus on a single level of performance for each criterion, highlighting areas that meet expectations and those that need improvement.

An example from a deviation rubric
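To make the analytic type concrete, here is a minimal sketch, in Python, of an analytic rubric as a data structure: each criterion gets its own descriptors and its own score, which are then rolled up into a total. The criterion names come from the writing example above and the level names from the performance-levels bullet; the descriptors, function names, and scoring scheme are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

# Performance levels mapped to numeric scores (4..1, as in the article).
LEVELS = {"excellent": 4, "good": 3, "fair": 2, "needs improvement": 1}

@dataclass
class Criterion:
    name: str
    descriptors: dict  # level name -> what that level looks like

def score_analytic(rubric, ratings):
    """Analytic scoring: each criterion is scored separately, then totaled."""
    per_criterion = {c.name: LEVELS[ratings[c.name]] for c in rubric}
    return {"per_criterion": per_criterion, "total": sum(per_criterion.values())}

# A hypothetical writing-assignment rubric (descriptors mostly elided).
writing_rubric = [
    Criterion("clarity", {
        "excellent": "Ideas are precise and unambiguous",
        "needs improvement": "Main point is hard to follow",
    }),
    Criterion("organization", {}),
    Criterion("mechanics", {}),
]

result = score_analytic(
    writing_rubric,
    {"clarity": "excellent", "organization": "good", "mechanics": "good"},
)
print(result)  # {'per_criterion': {'clarity': 4, 'organization': 3, 'mechanics': 3}, 'total': 10}
```

A holistic rubric, by contrast, would collapse this into a single overall score, and a single-point rubric would keep only the "meets expectations" descriptor for each criterion.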

Benefits of Using Rubrics

  • Clarity and Consistency: Rubrics help clarify expectations, ensuring individuals understand what is required to succeed. They also promote consistency across activities and evaluators.
  • Self-Assessment: Rubrics encourage individuals to reflect on their own work and understand the standards they need to meet. This can lead to improved learning outcomes as individuals become more aware of their progress and areas needing improvement.

I love rubrics. They are valuable throughout the quality system: for on-the-job training, for record writing and review, for re-qualifications. By creating rubrics you define what good looks like, providing a structured and objective framework that improves clarity, consistency, and specificity in evaluations. They hold both the writer and the reviewer accountable.

Train-the-Trainer

A firm requirement throughout the GxP regulations and the various ISO standards is that individuals are appropriately trained and qualified to do their work.

Inevitably, "appropriately trained and qualified" comes down to the trainer conducting the training. How do we train our trainers to ensure that individuals acquire all of the skills, knowledge, and insight they need to perform their roles well?

Inadequate training is a consistent finding across the GxPs, so you have to ask: are we training our trainers to an appropriate level to make the training effective? Equally important, you are spending a lot of time and money training people, so you want that training to be effective and worth the resources spent.

There are really two options for trainers: (1) trainers who become qualified to teach a course, and (2) SMEs who are qualified to be trainers. In either case, you need a qualification mechanism to ensure your trainers can train. I'll be honest: for technical material I prefer SMEs trained to be trainers, as the experience is usually a whole lot better for the trainee.

This training focuses on delivering informal and formal learning solutions in a manner that is both engaging and effective. Trainers should be able to:

  • Manage the learning environment.
  • Prepare for training delivery.
  • Convey objectives.
  • Align learning solutions with course objectives and learner needs.
  • Establish credibility as an instructor.
  • Create a positive learning climate.
  • Deliver various learning methodologies.
  • Facilitate learning.
  • Encourage participation and build learner motivation.
  • Deliver constructive feedback.
  • Ensure learning outcomes.
  • Evaluate solutions.

This qualification path will prove its value. Through it we can ensure that our trainings meet their four objectives and that participants can demonstrate:

  1. Awareness: Participant says, “I’ve heard that!”
  2. Understanding: Participant recognizes the subject matter and then explains it.
  3. Practice: Participant actually uses the learning on the job.
  4. Mastery: Participant can use the acquired knowledge to teach others.
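The four objectives above form a progression. A small sketch, in Python, of how you might represent it; the level names come from the list above, while the cumulative ordering (mastery implies practice, and so on down) is my assumption, consistent with the article's framing:

```python
from enum import IntEnum

class TrainingObjective(IntEnum):
    AWARENESS = 1      # "I've heard that!"
    UNDERSTANDING = 2  # can recognize and explain the subject matter
    PRACTICE = 3       # actually uses the learning on the job
    MASTERY = 4        # can use the acquired knowledge to teach others

def demonstrates(participant: TrainingObjective, required: TrainingObjective) -> bool:
    """Treat the levels as cumulative: a higher level implies the ones below it."""
    return participant >= required

print(demonstrates(TrainingObjective.MASTERY, TrainingObjective.PRACTICE))   # True
print(demonstrates(TrainingObjective.AWARENESS, TrainingObjective.PRACTICE)) # False
```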

Being a trainer is critical for true subject matter expertise and process ownership.

And as an aside, notice I didn't include instructional design; this is where your training team can really add value!

Designing Level 2 Training Effectiveness Assessments

In the Kirkpatrick model, a level 2 assessment measures how much individuals learned. It asks: did the learners actually learn what we wanted them to learn? Did we actually advance knowledge?

For many of us, the old go-to is the multiple-choice quiz.

If we actually want to assess a learner’s ability to do something or think critically about a topic, a multiple-choice quiz isn’t going to work. This isn’t to say that a multiple-choice quiz can’t be challenging, but the focus of a multiple-choice quiz is on the learner’s understanding of the content, not on the learner’s knowledge of how to apply the content to a variety of different contexts.

Say we are designing a root cause analysis course. By the end of the course, your learners should be able to understand some core principles of root cause analysis so that they can perform better investigations, find root causes and determine appropriate CAPAs. While there may be some inherently wrong approaches to root cause analysis that could be assessed in a multiple-choice quiz, a skilled investigator will likely not be dealing with obvious “right” and “wrong” ways to identify causes. Most investigations require complex interactions with people. As such, there may be multiple decisions an investigator needs to make and, within the scope of a course, it could be really hard to identify what skills a budding investigator needs to develop through multiple-choice quizzes alone.

So, what kinds of assessments could you use beyond multiple-choice quizzes, and when should you use them? There's a lot of complexity to these choices, which ultimately need to align what you want people in the course to learn with how you think they can best demonstrate evidence of that learning.

Assessment Types

Multiple-Choice Quiz or Exam
  • When to use it: To assess a learner's understanding of a concept, definition, or specific process. Could also be used to assess responses or reactions to a scenario-based question if there are clear "right" or "wrong" responses.
  • Example: Understanding of core concepts of root cause analysis. Simple branching choices, for example, which tool to use when.

Open-Ended Questions
  • When to use it: To assess a learner's ability to interpret and apply a new idea. Could also be used to assess a learner's ability to describe an approach to a process or problem.
  • Example: Demonstrate knowledge of root cause analysis techniques through various practice exercises.

Long-Form Written Assignment
  • When to use it: To assess a learner's ability to make an argument, analyze a text or current event, or use outside evidence to inform a particular claim. Could also be used to assess a learner's understanding of how to produce a piece of writing specific to a particular field or discipline (for example, a lab report in a lab sciences context or a policy memo in a public policy context).
  • Example: Write up an analysis and investigation report from an example.

Project
  • When to use it: To assess a learner's ability to make a new product and apply skills learned to build an independent work. Could also be used to assess a learner's understanding of how to create a field-specific artifact.
  • Example: Conduct a root cause analysis from an exercise; on-the-job training.

Portfolio
  • When to use it: To assess a learner's ability to grow, revise, and create a body of work over a particular period of time.
  • Example: Review of investigations on a periodic basis.

A lot of learning experiences will implement a combination of these types of assessments in a course, and it's likely that at different phases of your course and for different purposes, you will need to select more than one assessment or evaluation method.

Remember that an assessment serves two additional purposes: it helps the learners recognize where they are in the course so that they understand their progress, and it helps you, as the facilitator, see what challenges and triumphs the learners are experiencing throughout the course.