Why Knowledge Checks are Measuring the Wrong Thing

When I taught middle school math, tests were used to assess knowledge comprehension and some application, with word problems and a few complex questions requiring logic proofs. Results were captured as a score: an indication of how well you answered the questions. Whether you call it a quiz, a knowledge check, or any other name, it is still assessing some form of knowledge comprehension.

Life sciences companies are required by law to conduct annual regulations training (GXP refreshers) so that employees remain current in the operations being performed. Most refresher sessions are designed and delivered as content-heavy presentations to a passive learner. Listening to a speaker, or reading slide after slide and clicking the Next button, is not active engagement. It’s boring, mind-numbing and, in some cases, downright painful for learners. As soon as the training is over, it’s very easy to move past the experience, which means most of the content is soon forgotten. Regulatory agencies are no strangers to this style of compliance training and delivery. But when departures from SOPs and other investigations reveal a lack of GXP knowledge, they will question the effectiveness of the GXP training program.

So why are we still using tests?

In our quest for effective compliance training, we have borrowed the idea of testing someone’s knowledge as a measure of effectiveness. This implies that a corporate classroom mirrors an educational classroom and that testing means the same thing: a measure of knowledge comprehension. However, professors, colleges, universities and academic institutions are not held to the same results standard. In the life sciences, knowledge checks and “the quiz” are heavily used in two very common situations: the Annual GXP Refresher and the Read & Understand approach for SOPs.

Typical Knowledge Check

But what is the knowledge assessment measuring? Is it mapped to the course objectives, or are the questions so general that they can be answered correctly without having to attend the sessions? Or worse yet, are the questions being recycled from year to year and event to event? What does it mean for the employee to pass the knowledge check or score 80% or better? When does s/he learn of the results? In most sessions, there is no time left to debrief the answers, which is a lost opportunity to turn feedback into a learning activity. How do employees know whether they are leaving the session with the “correct information”?

The other common practice is to include a five-question multiple-choice quiz as the knowledge check for SOPs that are “Read & Understood,” especially for revisions. What does it mean if employees get all five questions right? That they will not make a mistake? That the Read & Understand method of SOP training is effective? The search function in most e-doc systems is really good at finding the answers; passing the quiz doesn’t mean employees read the entire procedure and retained the information correctly. What does it mean for the organization if human errors and deviations from procedures are still occurring? Does it really mean “the training” is ineffective?

Conditions must be the same for evaluation as they are for performance and training

What should we be measuring?

The conditions under which employees are expected to perform need to be the same conditions under which we “test” them.  So, it makes sense to train them under those same conditions as well.  What do you want/need your employees (learners) to do after the instruction is finished? What do you want them to remember and use from the instruction in the heat of their work moments?  Both the design and assessment need to mirror these expectations. 

Ask yourself: when in their day-to-day activities will employees need to use this GMP concept? Or where in the employees’ workflow will this procedure change need to be applied? Isn’t this what we are training them for? Your knowledge checks need to ensure that employees have the knowledge, confidence and capability to perform as trained. It’s time to rethink what knowledge checks are supposed to do for you. And that means developing objectives that guide the instruction and form the basis of the assessment.

Objectives drive the design and the assessment

Instructional Design 101 begins with well-developed objective statements for the course, event, or program. These statements, aka objectives, determine the content, and they also drive the assessment. For example, classroom sessions typically use a quiz or knowledge check that asks questions about the content. For learners to be successful, the course must include that content, whether it is delivered in class or as pre-work. But what are the assessments really measuring? How much of the content learners remember, and maybe how much of it they anticipate applying when they return to work?

When the desired outcome is compliance back on the job, knowledge checks need to shift to become more application-oriented (performance-based): “what if” situations and real scenarios that require an employee to analyze the situation, propose a workaround, and then evaluate whether the problem-solving idea meets the stated course objectives.

Performance objectives drive a higher level of course design

Training effectiveness is really an evaluation of whether we achieved the desired outcome. So I ask you: what is the desired outcome for your GXP training and SOP training, to gain knowledge (new content), to use the content correctly back in the workplace, or both? The objectives need to reflect the desired outcomes in order to determine the effectiveness of training, not just knowledge retention.

What is the desired outcome?

When you begin with the end in mind, namely the desired performance outcomes, the objective statements truly describe what the learners are expected to accomplish. While the content may be the same or very similar, determining whether employees are able to execute what they learned post-training requires more thought about the accuracy of the assessment. It must be developed from the performance objectives in order to be a valid “instrument.” The learner must perform (do something observable) so that it is evident s/he can carry out the task under real workplace conditions.

Accuracy of the assessment tools

The tools associated with the 4 Levels of Evaluation can be effective when used for the right type of assessment. For example, Level 1 (Reaction) surveys are very helpful for formative assessments (see below: Formative vs. Summative Distinction). Level 2 (Learning) knowledge assessments are effective in measuring retention and minimum comprehension, and they go hand in hand with learning-based objectives. But when the desired outcomes are actually performance-based, Level 3 (Behavior) checklists can be developed to evaluate skills demonstrations and samples of finished work products. Note: Level 4 (Results) is business-impact focused.

Difference between formative and summative assessments

Trainers are left out of the loop

Today’s trainers haven’t always developed an instructional design skill set. They do the best they can with the resources they are given, including reading books and scouring the Internet. For the most part, their training courses are decent and the assessments reflect passing scores. But when leadership asks Level 4 (Results) impact questions, it becomes evident that trainers are left out of the business analysis loop and are therefore missing the business performance expectations. This is where the gap exists. Instead, trainers build courses based on knowledge/content and develop learning objectives that describe what learners should learn. They create assessments to determine whether attendees have learned the content, but this does not automatically confirm that learners can apply the content back on the job, in various situations, under authentic conditions.

Levels of objectives, who knew?

Many training managers have become familiar with the 4 Levels of Evaluation over the course of their time in training, but fewer are acquainted with Bloom’s Taxonomy of Objectives. Yes, objectives have levels of increasing complexity, resulting in higher levels of performance. Revised in 2001, the levels were renamed to better describe what is required of the learner to be successful in meeting the objective.

Bloom’s Revised Taxonomy of Objectives

Take note: remembering and understanding are the lowest levels of cognitive demand, while applying and analyzing are mid-range. Evaluating and creating are at the highest levels. If your end in mind is knowledge gained ONLY, continue to use the lower-level objectives. If, however, your desired outcome is to improve performance or apply a compliant workaround in the heat of a GMP moment, your objectives need to shift to a higher level of reasoning in order for the training design to be effective and to meet the stated performance outcomes. Fortunately, much has been written about writing effective objective statements, and resources are available to help today’s trainers.

Did we succeed as intended? Was the training effective in achieving the desired outcomes?

To ensure learner success on the chosen assessment, the training activities must also be aligned with the level of the objectives. This requires the design of the training event to shift from passive lecture to active engagement that prepares learners to transfer what they experienced in the event back to their workplace. The training design also needs to include practice sessions where making mistakes is an additional learning opportunity and where learners are taught how to recognize that a deviation is occurring. Michael Allen refers to this as “building an authentic performance environment.”

Our training outcomes need to be both knowledge gained and performance based. The agency now expects us to document that our learners have the knowledge AND can apply it successfully in order to follow SOPs and comply with the regulations. Thus, trainers and subject matter experts will need to upgrade their instructional design skills if we really want training to succeed as intended. Are you willing to step up and do what it takes to ensure training is truly effective? – VB

Allen, M. Design Better Design Backward. Training Industry Quarterly, Content Development Special Issue, 2017, p. 17.

(c) HPIS Consulting, Inc.