Why Knowledge Checks are Measuring the Wrong Thing

When I taught middle school math, tests were used to assess knowledge comprehension and some application, with word problems and a few complex questions requiring logic proofs.  Results were captured as a score: an indication of how well you answered the questions.  Whether you call it a quiz, a knowledge check, or any other name, it is still assessing some form of knowledge comprehension.

Life sciences companies are required by law to conduct annual regulations training (GXP Refreshers) to remain current in the operations being performed.  Most refresher sessions, as designed and delivered, are heavy with content pushed at a passive learner.  Listening to a speaker, or reading slide after slide and clicking the next button, is not active engagement.  It's boring, mind-numbing, and, in some cases, downright painful for learners.  As soon as the training is over, it's very easy to move past the experience, which means most of the content is soon forgotten.  Regulatory agencies are no strangers to this style of compliance training and delivery.  But when departures from SOPs and other investigations reveal a lack of GXP knowledge, they will question the effectiveness of the GXP training program.

So why are we still using tests?

In our quest for effective compliance training, we have borrowed the idea of testing someone's knowledge as a measure of effectiveness.  This implies that a corporate classroom mirrors an educational classroom and that testing means the same thing: a measure of knowledge comprehension.  However, professors, colleges, universities, and academic institutions are not held to the same results standard.  In the Life Sciences, two very common situations rely heavily on knowledge checks and "the quiz": the Annual GXP Refresher and the Read & Understand approach for SOPs.

Typical Knowledge Check

But what is the knowledge assessment measuring?  Is it mapped to the course objectives, or are the questions so general that they can be answered correctly without attending the sessions?  Or worse yet, are the questions being recycled from year to year and event to event?  What does it mean for the employee to pass the knowledge check or score 80% or better? When does s/he learn of the results? In most sessions, there is no time left to debrief the answers. This is a lost opportunity to leverage feedback into a learning activity.  How do employees know if they are leaving the session with the "correct information"?

The other common practice is to include a five-question multiple-choice quiz as the knowledge check for SOPs that are "Read & Understood," especially for revisions.  What does it mean if employees get all five questions right?  That they will not make a mistake?  That the R & U method of SOP training is effective?  The search function in most e-doc systems is really good at finding the answers.  A passing score doesn't mean that employees read the entire procedure and retained the information correctly.  What does it mean for the organization if human errors and deviations from procedures are still occurring?  Does it really mean "the training" is ineffective?

Conditions must be the same for evaluation as they are for performance and training

What should we be measuring?

The conditions under which employees are expected to perform need to be the same conditions under which we “test” them.  So, it makes sense to train them under those same conditions as well.  What do you want/need your employees (learners) to do after the instruction is finished? What do you want them to remember and use from the instruction in the heat of their work moments?  Both the design and assessment need to mirror these expectations. 

Ask yourself: when in their day-to-day activities will employees need to use this GMP concept?  Or where in the employees' workflow will this procedure change need to be applied?  Isn't this what we are training them for?  Your knowledge checks need to ensure that employees have the knowledge, confidence, and capability to perform as trained.  It's time to re-think what knowledge checks are supposed to do for you. And that means developing objectives that guide the instruction and form the basis of the assessment.

Objectives drive the design and the assessment

Instructional Design 101 begins with well-developed objective statements for the course, event, or program.  These statements, aka objectives, determine the content, and they also drive the assessment.  For example, classroom sessions typically use a quiz or knowledge check that asks questions about the content.  For learners to be successful, the course must include that content, whether delivered in class or as pre-work. But what are the assessments really measuring?  How much of the content learners remember, and maybe how much of it they anticipate applying when they return to work?

When the desired outcome is compliance back on the job, knowledge checks need to shift toward application-oriented (performance-based) items such as "what if" situations and real scenarios that require an employee to analyze the situation, propose a workaround, and then evaluate whether the problem-solving idea meets the stated course objectives.

Performance objectives drive a higher level of course design

Training effectiveness is really an evaluation of whether we achieved the desired outcome.  So, I ask you, what is the desired outcome for your GXP training and SOP training: to gain knowledge (new content) and/or to use the content correctly back in the workplace? The objectives need to reflect the desired outcomes in order to determine the effectiveness of training, not just knowledge retention.

What is the desired outcome?

When you begin with the end in mind, namely the desired performance outcomes, the objective statements truly describe what the learners are expected to accomplish.  While the content may be the same or very similar, determining whether employees are able to execute what they learned post-training requires more thought about the accuracy of the assessment.  It must be developed from the performance objectives in order to be a valid "instrument."  The learner must perform (do something observable) so that it is evident s/he can carry out the task under real workplace conditions.

Accuracy of the assessment tools

The tools associated with the 4 Levels of Evaluation can be effective when used for the right type of assessment.  For example, Level 1 (Reaction) surveys are very helpful for formative assessments (see below: Formative vs. Summative Distinction).  Level 2 (Learning) knowledge assessments are effective at measuring retention and minimal comprehension and go hand in hand with learning-based objectives.  But when the desired outcomes are actually performance-based, Level 3 (Behavior) checklists can be developed for skills demonstrations and samples of finished work products. Note: Level 4 is Results and is business impact-focused.

Difference between formative and summative assessments

Trainers are left out of the loop

Today's trainers haven't always developed the instructional design skill set.  They do the best they can with the resources they're given, including reading books and scouring the Internet.  For the most part, their training courses are decent and the assessments reflect passing scores.  But when it comes to Level 4 (Results) impact questions from leadership, it becomes evident that trainers are left out of the business analysis loop and are therefore missing the business performance expectations.  This is where the gap exists.  Instead, trainers build courses based on knowledge/content and develop learning objectives that determine what learners should learn.  They create assessments to determine whether attendees have learned the content, but this does not automatically confirm that learners can apply the content back on the job in various situations under authentic conditions.

Levels of objectives, who knew?

Many training managers have become familiar with the 4 Levels of Evaluation over the course of their time in training, but fewer are acquainted with Bloom's Taxonomy of Objectives.  Yes, objectives have levels of increasing complexity, resulting in higher levels of performance.  Revised in 2001, the levels were renamed to better describe what is required of the learner to be successful in meeting the objective.

Bloom’s Revised Taxonomy of Objectives

Take note: remembering and understanding are the lowest levels of cognitive complexity, while applying and analyzing are mid-range.  Evaluating and creating are at the highest levels.  If your end in mind is knowledge gained ONLY, continue to use the lower-level objectives.  If, however, your desired outcome is to improve performance or apply a compliant workaround in the heat of a GMP moment, your objectives need to shift to a higher level of reasoning in order for the training design to be effective and meet the stated performance outcomes. Fortunately, much has been written about writing effective objective statements, and resources are available to help today's trainers.

Did we succeed as intended? Was the training effective in achieving the desired outcomes?

To ensure learner success via the chosen assessment, the training activities must also be aligned with the level of the objectives.  This requires the design of the training event to shift from passive lecture to active engagement intended to prepare learners to transfer what they experienced in the event back to their workspace.   The training design also needs to include practice sessions where making mistakes is an additional learning opportunity and where learners are taught how to recognize that a deviation is occurring.  Michael Allen refers to this as "building an authentic performance environment."

Our training outcomes need to be both knowledge-based and performance-based.  Now, the agency expects us to document that our learners have the knowledge AND can apply it successfully in order to follow SOPs and comply with the regulations. Thus, trainers and subject matter experts will need to upgrade their instructional design skills if you really want training to succeed as intended.  Are you willing to step up and do what it takes to ensure training is truly effective? – VB

Allen, M. Design Better Design Backward. Training Industry Quarterly, Content Development Special Issue, 2017, p. 17.


Are all your SMEs Qualified Trainers?

I got a phone call from my Lead SME's boss one morning. "How many more sessions do you need?" I asked him. I had already delivered four back-to-back workshops with class sizes of 25-30 SMEs, which was beyond optimal. So I asked him why; I needed to find out what was driving the surge in identified Qualified Trainers (QTs). I learned that a retrospective qualification needed to take place in order to close out an inspection observation. The total number of SMEs needing "proper paperwork" was well over 700. Since the redesigned training system was now in effect, these undocumented SMEs-as-Trainers would have to follow the new procedure. Or would they? Our discussion shifted to what type of training these SMEs would be delivering.

I then shared a related story with him. Several years prior, I got entangled in a "CAPA crisis" that involved QTs. No sooner had we launched the QT program and put the new procedure into effect than the CAPA quality system temporarily shut down shipping over a weekend. Upon my return to the site, I was summoned to an emergency meeting right from the security gate. Amazingly, a new practice/rule that only a Qualified Trainer could conduct any training had evolved from the SOP's actual statement that "only OJT QTs can deliver OJT and perform Qualification Events"! This was clearly a case of misunderstood scope.

Does every SME need to be qualified as a Trainer?

Trainers Grid for determining QTs' Scope of Training

In the Life Sciences arena, there are 5 recurring situations that require training: Self, Corrective Actions, Classroom (ILT), Structured OJT, and Qualification Events (Final Performance Demos).

Self can be achieved by the individual reading the procedure and signing the training record. This is also known as Read & Understand (R & U) for SOPs. I personally don't think of it as training; it is reading. Yet, in some situations, reading is all that is required to gather the SOP information.  If, on the other hand, you need to execute the steps of the SOP and complete required forms, then additional training with the SOP Author or a QT is the appropriate next level of training.

Deviations/Corrective Actions stem from a Corrective Action Preventive Action (CAPA) incident. At a minimum, an SME or the SOP Author is needed to ensure the credibility of the content. These types of training sessions have become known as Corrective Actions "Awareness" Training, and more and more SMEs are now being required to deliver this training in a classroom setting.  They need to be qualified to deliver classroom sessions, especially if the event is related to a significant CAPA or regulatory inspection observation.

Classroom (Instructor-Led Training) is preferred for knowledge-based content affecting a wide range of employees. The skillset needed is facilitation: managing the classroom and delivering content as designed by the instructional designer. Think of GMP Refresher sessions in the Training Room.

Years ago, it was a lot clearer to distinguish between classroom trainers and SMEs as OJT Trainers.  OJT was delivered one-on-one by "following Joe/Jane" around.  Classroom Trainers delivered their content to a classroom of many learners using slides, flipcharts, and handouts.  They were usually full-time, dedicated training staff.  Instructor-led training requires grounding in learning theory and design, along with practice in what used to be referred to as platform skills.  Today, it is more commonly known as "Running a Classroom" or "Basic Facilitation Skills."

Many of today's OJT QTs are also being asked to deliver "Group Training" sessions on content found within their SOPs.  While the target audience may be the same set of peers, the scope, objectives, and tools used to deliver instructor-led training are vastly different from those of the OJT train-the-trainer course.

Group Training vs. OJT
Are your QTs becoming dual-purposed?

Structured OJT is On-the-Job Training delivered by a Qualified OJT SME using the approved OJT Methodology. OJT QTs attend the Qualified Trainers Workshop, which focuses on the OJT Steps Model, hands-on practice with equipment and complex SOPs, and the challenges of life as a Trainer.  Should every seasoned employee become a QT based on seniority and subject matter expertise? No, not necessarily.  Some SMEs don't want to share their knowledge and therefore may not make effective OJT Trainers.  Establishing a set of nominating criteria provides an objective rationale for the additional interpersonal qualities that help define a more well-rounded SME.

Qualification Events (the Final Performance Demonstration) are formally documented observations of learners performing the procedure/task at hand in front of a Qualified OJT SME using an approved SOJT Checklist or rubric.  It is these events that set a Technical SME apart from a Qualified Trainer.  The QT workshop includes a dedicated lesson on what to look for during Q-Events and what the QT signature means for the integrity of the Employee Qualification Program.

Can having too many QTs be a problem?

It can be, when there is no one left to train and no one to deliver the OJT steps to. While many of you may be wishing for this situation, it can eventually happen if staffing levels are adequate, shifts are normalized, and SOP revisions are managed via R & U only within the LMS.   How do you keep your QTs engaged and fresh if there are no opportunities for OJT sessions? I have some ideas for you to explore.

Re-examine the practice of online R & U only for SOP revisions.  I bet some of those revisions were significant enough for a face-to-face discussion (aka Group Training), and there is probably at least one SOP revision in the past year that should have required a demonstration of the task for optimum transfer of learning back on the job.  Just because all employees are now qualified doesn't mean the program sits in hiatus waiting for new hires to join the company.

When you have too many QTs who may be underutilized, it is also an appropriate time to administer the Trainer Mojo Assessment.  Based on the QTs' scores, it might be time to thank the low-scoring QTs for a job well done.  You may be pleasantly surprised by who is ready to walk away from the training role.  Or you may have a cadre of QTs who legitimately need more training, and hence the need for some new modules is now justified.  Many of your excess SMEs were identified long before criteria were put into place or the SOP was established.  If the Trainer Mojo Assessment doesn't prompt any discussion, perhaps it's time to "re-nominate" them using the criteria within the SOP and offer a refresher series on the QT Workshop content. Or arrange for developmental assignments that expand their subject matter expertise or advance their training repertoire toward classroom facilitation.

What is exciting for me is that many OJT QTs are stepping up and volunteering to attend the SMEs as Classroom Facilitators workshop as part of expanding their QT toolkit.  Many of them want to learn more about teaching peers and working with adults.  A few have now been promoted to full-time trainer roles in L&D/QA departments.  Which of your OJT QTs are ready to step up and move into the classroom?  It's time to find out and be part of the current trend.  -VB


(c) HPIS Consulting, Inc.