When I left the manufacturing shop floor and moved into training, full-time trainers presented in the classroom using a host of techniques and tools, relying on their platform skills to deliver content. Subject matter experts (or the most senior person) conducted technical training on the shop floor in front of a piece of equipment, at a laboratory station or a workbench.
When I taught middle school math, tests were used to assess knowledge comprehension and some application with word problems and a few complex questions requiring logic proofs. Results were captured as a score, an indication of how well you answered the questions. Whether you call it a quiz, a knowledge check or any other name, it is still assessing some form of knowledge comprehension.
Life sciences companies are required by law to conduct annual regulations training (GXP Refreshers) so that employees remain current in the operations they perform. As designed and delivered, most refresher training sessions are heavy with content pushed at a passive learner. Listening to a speaker or reading slide after slide and clicking the next button is not active engagement. It’s boring, mind-numbing and, in some cases, downright painful for learners. As soon as the training is over, it’s very easy to move past the experience, which means most of the content is soon forgotten. Regulatory agencies are no stranger to this style of compliance training and delivery. But when departures from SOPs and other investigations reveal a lack of GXP knowledge, they will question the effectiveness of the GXP training program.
So why are we still using tests?
In our quest for effective compliance training, we have borrowed the idea of testing someone’s knowledge as a measure of effectiveness. This implies that a corporate classroom mirrors an educational classroom and that testing means the same thing – a measure of knowledge comprehension. However, professors, colleges, universities and academic institutions are not held to the same results standard. In the Life Sciences, two very common situations occur where knowledge checks and “the quiz” are heavily used: the Annual GXP Refresher and the Read & Understand approach for SOPs.
But what is the knowledge assessment measuring? Is it mapped to the course objectives, or are the questions so general that they can be answered correctly without attending the sessions? Or worse yet, are the questions being recycled from year to year and event to event? What does it mean for the employee to pass the knowledge check or score 80% or better? When does s/he learn of the results? In most sessions, there is no time left to debrief the answers, a lost opportunity to leverage feedback into a learning activity. How do employees know if they are leaving the session with the “correct information”?
The other common practice is to include a five-question multiple-choice quiz as the knowledge check for SOPs that are “Read & Understood,” especially for revisions. What does it mean if employees get all five questions right? That they will not make a mistake? That the R & U method of SOP training is effective? The search function in most e-doc systems is really good at finding the answers, so a passing score doesn’t mean employees read the entire procedure and retained the information correctly. What does it mean for the organization if human errors and deviations from procedures are still occurring? Does it really mean “the training” is ineffective?
What should we be measuring?
The conditions under which employees are expected to perform need to be the same conditions under which we “test” them. So, it makes sense to train them under those same conditions as well. What do you want/need your employees (learners) to do after the instruction is finished? What do you want them to remember and use from the instruction in the heat of their work moments? Both the design and assessment need to mirror these expectations.
Ask yourself: when in their day-to-day activities will employees need to use this GMP concept? Or, where in the employees’ workflow will this procedure change need to be applied? Isn’t this what we are training them for? Your knowledge checks need to ensure that employees have the knowledge, confidence and capability to perform as trained. It’s time to re-think what knowledge checks are supposed to do for you. And that means developing objectives that guide the instruction and form the basis of the assessment.
Objectives drive the design and the assessment
Instructional Design 101 begins with well-developed objective statements for the course, event, or program. These statements, aka objectives, determine the content and drive the assessment. For example, a quiz or knowledge check that asks questions about the content is typically used for classroom sessions. In order for learners to be successful, the course must include that content, whether delivered in class or as pre-work. But what are the assessments really measuring? How much of the content learners remember, and perhaps how much of it they anticipate applying when they return to work.
When the desired outcome is compliance back on the job, knowledge checks need to shift toward application-oriented (performance-based) items such as “what if” situations and real scenarios that require an employee to analyze the situation, propose a workaround and then evaluate whether the problem-solving idea meets the stated course objectives.
Performance objectives drive a higher level of course design
Training effectiveness is really an evaluation of whether we achieved the desired outcome. So, I ask you, what is the desired outcome for your GXP training and SOP training: to gain knowledge (new content) and/or to use the content correctly back in the workplace? The objectives need to reflect the desired outcomes in order to determine the effectiveness of training, not just knowledge retention.
When you begin with the end in mind, namely the desired performance outcomes, the objective statements truly describe what the learners are expected to accomplish. While the content may be the same or very similar, determining whether employees are able to execute what they learned post-training requires more thought about the accuracy of the assessment. It must be developed from the performance objectives in order to be a valid “instrument.” The learner must perform (do something observable) so that it is evident s/he can carry out the task under real workplace conditions.
Accuracy of the assessment tools
The tools associated with the 4 Levels of Evaluation can be effective when used for the right type of assessment. For example, Level 1 (Reaction) surveys are very helpful for formative assessments (see below – Formative vs. Summative Distinction). Level 2 (Learning) knowledge assessments are effective in measuring retention and minimum comprehension and go hand in hand with learning-based objectives. But when the desired outcomes are actually performance-based, Level 3 (Behavior) checklists can be developed for skills demonstrations and for reviewing samples of finished work products. Note: Level 4 is Results and is business impact-focused.
Trainers are left out of the loop
Today’s trainers haven’t always developed an instructional design skill set. They do the best they can with the resources given, including reading books and scouring the Internet. For the most part, their training courses are decent and the assessments reflect passing scores. But when leadership asks Level 4 (Results) impact questions, it becomes evident that trainers are left out of the business analysis loop and are therefore missing the business performance expectations. This is where the gap exists. Instead, trainers build courses based on knowledge/content and develop learning objectives that determine what learners should learn. They create assessments to determine whether attendees have learned the content, but this does not automatically confirm learners can apply the content back on the job in various situations under authentic conditions.
Levels of objectives, who knew?
Many training managers have become familiar with the 4 Levels of Evaluation over the course of their time in training, but fewer are acquainted with Bloom’s Taxonomy of Objectives. Yes, objectives have levels of increasing complexity resulting in higher levels of performance. Revised in 2001, the levels were renamed to better describe what is required of the learner to be successful in meeting the objective.
Take note: remembering and understanding are the lowest levels of cognitive complexity, while applying and analyzing are mid-range. Evaluating and creating are at the highest levels. If your end in mind is knowledge gained ONLY, continue to use the lower-level objectives. If, however, your desired outcome is to improve performance or apply a compliant workaround in the heat of a GMP moment, your objectives need to shift to a higher level of reasoning in order for the training design to be effective and meet the stated performance outcomes. Fortunately, much has been written about writing effective objective statements, and resources are available to help today’s trainers.
Did we succeed as intended? Was the training effective in achieving the desired outcomes?
To ensure learner success via the chosen assessment, the training activities must also be aligned with the level of the objectives. This requires the design of the training event to shift from passive lecture to active engagement intended to prepare learners to transfer what they experienced in the event back to their workspace. The training design also needs to include practice sessions where making mistakes is an additional learning opportunity and where learners are taught how to recognize that a deviation is occurring. Michael Allen refers to this as “building an authentic performance environment.”
Our training outcomes need to be both knowledge-gained and performance-based. Now, the agency is expecting us to document that our learners have the knowledge AND can apply it successfully in order to follow SOPs and comply with the regulations. Thus, trainers and subject matter experts will need to upgrade their instructional design skills if training is really going to succeed as intended. Are you willing to step up and do what it takes to ensure training is truly effective? – VB
Allen, M. “Design Better Design Backward,” Training Industry Quarterly, Content Development Special Issue, 2017, p. 17.
When you think about evaluating training, what comes to mind? It’s usually a “smile sheet” or feedback survey about the course, the instructor and what you found useful. As a presenter/instructor, I find the results from these surveys very helpful, so thank you for completing them. I can make changes to the course objectives, modify content or tweak activities based on the comments. I can even pay attention to my platform skills where noted. But does this information help us evaluate whether the course was successful?
At one end of “The Learner Participation Continuum” is the lecture, which is one-way communication and requires very little participation. At the other end, we have experiential learning and now immersive learning environments with the introduction of 3D graphics, virtual simulations, and augmented reality.
Many QA/HR Training Managers have the responsibility for providing a train-the-trainer course for their designated trainers. While some companies send their folks to public workshop offerings, many choose to keep the program in-house. And then an interesting phenomenon occurs: the course content grows with an exciting and overwhelming list of learning objectives.
The supervisors of the SMEs struggle with the loss of productivity for the two- to three-day duration and quickly develop a “one and done” mindset. Given only one opening to “train” newly identified SMEs as Trainers, the instructional designer gets a single opportunity to teach them how to be trainers. So s/he tends to add “a lot of really cool stuff” to the course in the genuine spirit of sharing, all justifiable in the eyes of the designer. However, there is no hope of breaking this adversarial cycle if the Training Manager doesn’t know how to cut content.
I used to deliver a two-day (16-hour) workshop for OJT Trainers. I included all my favorite topics. Yes, the workshop was long. Yes, I loved teaching these concepts. I honestly believed that knowing these “extra” learning theory concepts would make my OJT Trainers better trainers. Yes, I was in love with my own content. And then one day, that all changed.
Do they really need to know Maslow’s Hierarchy of Needs?
During a rapid design session I was leading, I was questioned on the need to know Maslow’s Hierarchy of Needs. As I began to deliver my auto-explanation, I stopped mid-sentence. I had an epiphany. My challenger was right. Before I continued with my response, I feverishly racked my brain thinking about the training Standard Operating Procedures (SOPs) we revised and the forms we created, and reminded myself of the overall goal of the OJT Program. I was searching for that one moment during an OJT session when Maslow was really needed. When would an OJT Qualified Trainer use this information back on the job, if ever, I asked myself?
“It belongs in the Intermediate Qualified Trainers Workshop,” I said out loud. In that moment, that one-question exercise was like a laser beam cutting out all the nice-to-know content. I eventually removed up to 50% of the content from the workshop.
Oh, but what content do we keep?
Begin with the overall goal of the training program: a defendable and reproducible methodology for OJT. The process is captured in the redesigned SOPs and does not need to be repeated in the workshop. See “Have you flipped your OJT TTT Classroom yet?”
Seek agreement with key stakeholders on what the OJT QTs are expected to do after the workshop is completed. If these responsibilities are not strategic or high priority, then the course will not add any business value. Participation remains simply a means to check the compliance box. Capture these expectations as performance objectives.
Once there is agreement on the stated performance objectives, align the content to match them. Yes, there is still ample room in the course for learning theory, but it is tailored to need-to-know topics only.
In essence, the learning objectives become evident. When challenged to add certain topics, the instructional designer now refers to the performance objectives and weighs the consequences of not including the content in the workshop against the objectives and the business goal for the overall program.
What is the value of the written assessment?
With the growing demand for training effectiveness, the addition of a written test was supposed to demonstrate a commitment to compliance expectations around effectiveness and evaluation. To meet this client need, I put on my former teacher hat and created a 10-question, open-book written assessment. It required additional classroom time to administer, and hence more content was cut to fit the workshop duration.
My second epiphany occurred during the same rapid design project, albeit a few weeks later. What is the purpose of a classroom written assessment when, back on the job, the OJT QTs are expected to deliver (perform) OJT, not just know it from memory? The true measure of effectiveness for the workshop is whether they can deliver OJT according to the methodology, not whether they retained 100% of the course content! So I removed the knowledge test and created a qualification activity for the OJT QTs to demonstrate their retained knowledge in a simulated demonstration using their newly redesigned OJT checklist. Now the OJT QT Workshop is value-added and management keeps asking for another round of the workshop to be scheduled. -VB
“I teach GMP Basics and conduct Annual GMP Refreshers several times a year and preach to audiences that you must follow the procedure, otherwise it’s a deviation. And in less than two weeks, I am expected to teach a process that is changing daily! Yet on the other hand, how could I teach a work instruction that is known to be broken, is being re-designed, and is not yet finalized?”
When Rapid Design for e-Learning found its way into my vocabulary, I loved it and all its derivatives, like rapid prototyping. And soon, I started seeing Agile this and Agile that. It seemed that Agile was everywhere I looked. When Michael Allen published his book, Leaving ADDIE for SAM, I was intrigued and participated in an ATD (formerly known as ASTD) sponsored webinar. It made a lot of sense to me and “I bought into the concept.” Or so I thought …