Employee Qualification is the Ultimate Level 3 Training Evaluation

On the Job Training is as old as the original apprentice-style forms of learning and ranges from the very informal (“follow Joe around”) to structured OJT that is formally documented and includes a qualification event observed by a Qualified Trainer.  While OJT simply means on-the-job training, the steps for OJT can vary from trainer to trainer and from company to company unless the methodology is captured in an approved written procedure.

Multiple Performance Demonstrations Occur

One of the first demonstrations comes from the trainer himself/herself.  S/he shows the learner how to perform the technique, task, or process.  The learner observes and asks questions.  Then the roles reverse, and the learner performs a mimicked rendition of what s/he observed.  The trainer provides feedback and sometimes asks questions intended to assess the knowledge gained as well.

Is one demonstration enough to determine that OJT is done?  Sometimes it is.  When the task is simple, one time is all that most learners need.  When the task or process is complicated, it will take more than one demonstration to get the SOP steps right.  The nature of the SOP or the complexity of the task at hand determines this.

But how do I proceduralize that, you ask?  It starts by not arbitrarily picking the magic number 3. I have engaged in countless discussions regarding the exhaustive list of exceptions to forcing the rule of practicing 3 times.  And some QTs will argue for more than 3 sessions, especially when the procedure is infrequently performed.  It’s not strictly about the number of times. We recognize that multiple sessions become practice sessions when the learner is still demonstrating the procedure under the supervision of his/her trainer.  But documenting the number of demonstrations and/or practice sessions is still a challenge for the Life Sciences industry.

At what point is the learner qualified to perform independently? As an industry, there is no standard number of times.  There are no standard learners either; they range from “quick studies” to typical to slow learners.  The caveat is monitoring both the quick study and the slow learner.  In the QT workshops, this topic is explored using scenarios, with tips and techniques shared during the debriefings.  Qualified Trainers know what is typical, and they are empowered to evaluate the outcome of the learner’s demonstration(s).  Is the procedure being performed according to the SOP, or is the learner still a bit hesitant about the next step? Is s/he relying on the QT for assurance that the step is right?  While the steps may be performed correctly, we are also assessing the confidence of both the QT and the learner.

How many times is enough? Until both the learner and the QT are confident that s/he is not going to have an operator-error deviation a week after going solo.  The QT is ultimately the one who has to assess progress and determine either that “with a few more sessions, my learner will get this” or that s/he may never get it and it’s time to have a discussion with the manager.

BTW, what does “Qualified Employee” mean?

Being SOP Qualified is the demonstrated ability of an employee to accurately perform a task or SOP independently of his/her OJT Qualified Trainer, with the consistency to meet acceptable quality standards. It satisfies 21 CFR § 211.25(c): “there shall be an adequate number of qualified employees to perform”.

Don’t be tempted to take the Performance Demo short-cut!

The end goal of OJT and the Qualification Event is for the employee to perform independently of his/her QT.  In order to be “released to task”, a final performance demonstration is scheduled, observed, and documented by an OJT Qualified Trainer. But don’t be fooled into taking the performance demo short-cut!  The last step in the training portion of OJT is a performance demonstration to show the OJT-QT that the employee can perform the steps AND perform at the same level of proficiency as his/her peer group. If s/he can’t perform at this level, then the learner is not ready to “go solo”.

S/he may need more encouragement to build up confidence, correction of paperwork documentation errors, and time to become proficient with speed while maintaining accuracy.  That’s what practice sessions are for: time to master confidence with the steps and increase speed.  When his/her performance is on par with “business as usual” performance levels, the employee is ready to perform the final demonstration, aka the Qualification Event.  While the “readiness indicator” may not be documented, the Q-Event must be formally captured and assessed, with the outcome documented and communicated to both the learner and his/her supervision.  It is a separate event from the OJT demonstrations.

Final Performance Demo = Qualification Event

During the final performance demonstration, the QT observes the learner’s performance.  When feedback is provided, it is evaluative, and the rating result is formally documented.  Granted, when someone is watching us, we tend to follow the rules.  With enough repeated practice sessions, learners tend to perform procedures as “business as usual”.  It’s how they learn the ebb and flow from their peers.  This is the optimum moment to determine if s/he is truly ready to perform without coaching or supervision from his/her QT.  If the QT has to interrupt to correct a misstep or remind the employee that a step is out of sequence, the event is terminated and documented as “requires more review”.

More training practice is then scheduled until readiness is once again achieved.  This also means the learner cannot sign for his/her work without the trainer’s co-signature or initials.  Do not misinterpret this as signing for the verification entry, aka the second check.  In this situation, the Qualified Trainer cannot be both the co-signer and the second-check verifier/reviewer.  You will need three sets of initials to properly document the supervision of a learner requiring more practice.  Otherwise you violate data integrity rules around independent verification.
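The three-signature rule above can be sketched as a simple check. This is an illustrative sketch only, not part of any specific LMS or batch-record system; the role names and initials are assumptions for the example.

```python
# Illustrative sketch: enforce independent signatures for a learner still
# under supervision. Role names and initials are hypothetical examples.

def validate_supervised_entry(learner: str, co_signer: str, verifier: str) -> None:
    """Raise ValueError unless all three sets of initials are independent."""
    if learner == co_signer:
        raise ValueError("The learner cannot co-sign his/her own supervised work.")
    if co_signer == verifier:
        raise ValueError("The co-signing Qualified Trainer cannot also perform "
                         "the second-check verification.")
    if learner == verifier:
        raise ValueError("The learner cannot verify his/her own work.")

# Three distinct sets of initials: passes without raising.
validate_supervised_entry(learner="JS", co_signer="QT", verifier="MR")
```

If the trainer’s initials appear as both co-signer and second-check verifier, the check fails, mirroring the independent-verification rule described above.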

Qualification events are not intended to be a rushed, get-’er-done, one-and-done paperwork exercise.  Sufficient time to reach proficiency and expected department productivity levels is required to ensure knowledge has been retained and the skill can be accurately repeated.  OJT demonstrations are not to be misused as the Q-Event.  This distinction is critical to a successful qualification event and to confidence that the SOP will be performed consistently tomorrow, next week, etc., without creating a deviation a day or a week after declaring the learner qualified.

It happens when QTs are urged to “get ’em done” by impatient or overly anxious supervisors consumed with productivity rather than quality metrics.  With the qualification event being so recent, the QT will most certainly be interviewed as part of the investigation.  The checklist will also be examined.  This tool is supposed to help the QT be as objective as possible and consistently evaluate performance as demonstrated.  But typically, the checklist used to qualify individuals shows all Yeses; otherwise, the learner wouldn’t have qualified status.  And that, of course, depends on how well the critical steps and behaviors are captured in the OJT Checklist.  Yes, he was able to demonstrate the step, critical task, and/or behavior, but what we don’t know is how well.  Are we to assume that No means “No, not at all” and Yes means performed “Well”, or “As Expected”, or “Adequate”, or maybe, in this case, “Sort of”?  The comments column would have been the ideal place to record observations and enter comments.

Validating Your SOP Effectiveness

Meeting FDA expectations for qualified employees is paramount.  But the “100% Trained on Curricula Requirements” printouts aren’t winning favor with FDA.   In the March 2015 article, “Moving Beyond Read & Understand SOP Training”, I asserted that the current 100% trained reports and SOP quizzes would not be enough to satisfy the performance challenge for training effectiveness.  “Are your employees qualified? How do you know?” has become the training effectiveness question asked at every inspection.  The use of “100% completed” reports is a metric for completeness only, a commonly used data point from the LMS.  It does not address the transfer of learning into performance back on the job.  Neither does the 5-question multiple-choice “SOP Quiz”. The true measure of effective OJT is an observed performance demonstration of the SOP, aka the qualification event.

Employee Qualification is the ultimate Level 3 Training Effectiveness Strategy

Level 3 Behavior Change → Transfer of Training/Learning

The focus of Employee Qualification is the employee’s ability to apply knowledge and skill learned during OJT back on the job or in the workplace setting.  I call this Transfer of Training.  Others in the training industry refer to it as Level 3 – Behavior Change.  Actual performance is the ultimate assessment of learning transfer.  If an employee performs the job task correctly during the final performance demonstration (Q-Event), his/her performance meets the expectation for a successful “OJT Required SOP”.

Yet, according to the 2009 ASTD research study “The Value of Evaluation”, only 54.6% of respondents indicated that their organization conducts Level 3 evaluations.   The top technique used is follow-up surveys with participants (31%), while on-the-job observation was fourth (23.9%).

If on-the-job assessment is the “ultimate” measure of transfer, then why isn’t it being used more frequently? Post-training assessments are time- and labor-intensive.   But for organizations that have to meet compliance requirements (46.9% of survey respondents), documenting training effectiveness is now on the FDA’s radar.

Not all SOPs require a Qualification Event

SOPs generally fall into two categories: FYI-type and OJT Required.  The more complex an SOP is, the more likely errors will occur.  Observing “critical to quality” steps is a key focus during the final performance demonstration.  However, a 1-1 documentation path for every OJT-related SOP may not be needed.  Instead, multiple SOPs covering similar processes can be batched into a “module”, with documentation supporting their similarity.  Where these SOPs differ, the Q-Event would also require observation of the unique CTQ differences.

Two Types of SOPs: Only Critical Task-Based SOPs Require OJT and Qualification Events

An active Employee Qualification Program also verifies that the training content, in this case the SOP, accurately describes how to execute the steps for the task at hand.  If the SOP is not correct, or the qualifying documentation (checklist) is too confusing, a cause analysis needs to be conducted. Successful qualification events also validate that the OJT methodology is effective and that Qualified OJT Trainers are consistently delivering OJT sessions for “OJT Required SOPs”.

What does “Qualified Employee” mean for a company?

Qualified Employee status is not only a compliance imperative but a business driver as well. A qualified workforce means a team of well-trained employees who know how to execute their tasks accurately and with compliance in mind, and who own and document their work properly.  When anyone in the organization can emphatically answer, “Yes, my employees are qualified, and yes, I have the OJT checklists to back that up”, then the Employee Qualification Program is not only working but is also effective at producing approved products or devices fit for use. The bonus is a renewed level of confidence in the ability of employees to deliver on performance outcomes for the organization.

*The Value of Evaluation: Making Training Evaluations More Effective. ASTD Research Study, ASTD, 2009.

What happens when the performance demonstration becomes more of a "this is how I do it discussion" instead of an actual demonstration? Read the Impact Story - I've Fired My Vendor - to learn more.

Who is Vivian Bringslimark?

(c) HPIS Consulting, Inc.

Is Documenting Your OJT Methodology Worth It?

The short answer is yes! 

Years ago, during the Qualified Trainers (QT) workshop, I would ask QTs the following two questions:

  • What is OJT?
  • How do you deliver OJT?

Invariably, all would answer the first question the same way: on-the-job training.  Then I would ask the attendees to form groups to discuss the second question among their peers.  I purposely mixed the groups so that there was equal representation of manufacturing trainers and QC analytical laboratory trainers, and a fascinating exchange occurred between the attendees.  During the debriefing activity, we learned that there was a lot of variability in how the trainers were conducting OJT.  How can that be when they all answered the first question so consistently? “Well”, one of them said, “we don’t have a procedure on it, so I just go about it the way I think it should be done.  I’ve had success so far, so I keep on doing what I’ve done in the past.”

Declaring your OJT Model

In order to get consistent OJT, you need to define your OJT steps, and you need to ensure that only approved content (i.e., SOJT checklists) is used to deliver OJT; not the SME’s secret sauce. I’m not proposing a cookie-cutter approach in which QTs all become the same.  Rather, I am advocating a clear distinction between each step/stage/phase so that both the learner and the QT know exactly where they are in the OJT process, what is expected of them in that step, and why the OJT step is needed.  This is no longer just “go follow Joe or Jane around”; this is structuring the OJT sessions.  Defining your OJT steps in a methodology ensures that all QTs consistently deliver their 1-1 sessions and document the progression through the steps.

I’m less focused on what you call these steps or how many there are.   I am looking to see how these steps move a new hire along the journey to becoming qualified prior to being released to task.  For me, this is what makes OJT really structured.  And that model needs to be captured in a standard operating procedure or embedded in a training procedure so that all employees are informed and aware of how they will receive their OJT.   

The Assess Step

What is your purpose for this step?  Is it to evaluate the learners’ knowledge and skill via a performance demonstration, the effectiveness of the OJT sessions, or to determine qualified status?  The answer matters, because the QT will be providing feedback that impacts very different outcomes.

Be clear on the purpose of the performance demonstration

The visual on the right indicates the main difference between how the QT’s feedback is used during a performance demonstration for OJT vs. feedback for the Qualification Event. In both cases, however, the learner is asked to perform the procedure the same way.  Is it clear to the learner what the performance demonstration means?  Does your methodology articulate the difference between a practice demo, a readiness demo, a performance demo, and/or a Qualification Event demonstration?

Documenting OJT sessions presents a challenge for many trainers and document control staff.  What is considered an OJT session?  My favorite lament is, “Do you know what that will do to our database, not to mention the amount of paperwork that would create!” A workaround for all these questions and concerns is to capture at least one session along the progression of each OJT step as per the OJT Model, thus documenting adherence to the procedure.

For example, the first OJT step may be READ.  It means read the SOP first, if not already completed. We are pretty good at documenting R&U for SOPs.  The next step may be DISCUSS/INTRODUCE the SOP.  Capture when that discussion occurred if it’s different from Step 1 (READ).  The “trainer demonstrates” portion can also be documented.  Where it gets tricky is when we ask the learner to demonstrate and practice.  Are we required to capture every OJT session, or just one?

Why not capture the last instance, when it is confirmed the learner is ready to qualify?  If we keep it simple and document that our learners have experienced at least one instance of each step/stage, then we are complying with our OJT methodology and minimally documenting their OJT progression.   It is important to describe how to document the OJT progression in the SOP.  Don’t leave that up to the QTs to figure out.  It is in our documentation that we also need to be consistent.
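This “at least one documented instance per step” approach can be sketched as follows. The step names follow the READ/DISCUSS example above; the record fields and remaining step names are illustrative assumptions, not a prescribed model.

```python
# Sketch: document at least one dated instance of each step in the OJT model.
# Step names beyond READ and DISCUSS are hypothetical; adapt to your own SOP.

from datetime import date

OJT_STEPS = ["READ", "DISCUSS", "TRAINER_DEMO", "LEARNER_PRACTICE", "READY_TO_QUALIFY"]

def record_step(progression: dict, step: str, when: date, qt_initials: str) -> None:
    """Capture one documented instance of an OJT step; later entries overwrite."""
    if step not in OJT_STEPS:
        raise ValueError(f"Unknown OJT step: {step}")
    progression[step] = {"date": when, "qt": qt_initials}

def ojt_complete(progression: dict) -> bool:
    """True once every step in the model has at least one documented instance."""
    return all(step in progression for step in OJT_STEPS)

learner_record: dict = {}
record_step(learner_record, "READ", date(2024, 3, 1), "QT")
# ...remaining steps are recorded as they occur...
```

Documenting the last instance of each step, rather than every practice session, keeps the paperwork minimal while still demonstrating adherence to the declared OJT model.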

How do you know if someone is qualified to perform this SOP?

Ideally, the answer would be because:

1.) we have a structured OJT process that includes a task specific OJT Checklist and

2.) we can look up the date of completion (Qualification Event) in our LMS history. 

And that, of course, depends on how well the critical steps and behaviors are captured in the SOJT checklist. The checklist is a tool to help the QT be as objective as possible and consistently evaluate performance as demonstrated. In the article, “A Better Way to Measure Soft Skills”, author Judith Hale explains the difference between a checklist and a rubric.

            “Checklists only record whether a behavior occurred, though, and not the quality of the behavior.  Rubrics, on the other hand, measure how well the learner executed the behavior.”  p. 62.

Yes, but was it tolerable, adequate or exemplary?

What I typically see are checklists with varying levels of tasks, steps, and/or behaviors, with columns for Yes, No, and Comments.  What I don’t see is a column to mark how well the learner performed!  Is it enough to mark Yes or No for each item, since most Q-Events are “Pass” or “needs more practice”?  Maybe.

But consider the following situation.  A human-error deviation has occurred, and the LMS indicates the technician has earned qualified status.  The document used to qualify this individual shows all Yeses.  Yes, s/he was able to demonstrate the step, critical task, and/or behavior, but what we don’t know is how well.  Are we to assume that No means “No, not at all” and Yes means performed “Well”, or “As Expected”, or “Adequate”, or maybe, in this case, “Sort of”?

An additional column describing what it means to score low, medium, or high (or, in our situation, Poor, Adequate, As Expected, and even Exemplary) could provide much-needed insight for the root cause analysis and investigation that will follow this deviation.  It provides a level of detail about the performance that goes beyond Yes, No, or a comment.  In most checklists I’ve reviewed, the comments column is hardly ever used.
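A minimal sketch of such a checklist row, using the rating labels above with an illustrative numeric mapping (the values and field names are assumptions for the example, not a prescribed scale):

```python
# Sketch: a checklist row that records both whether the behavior occurred
# (the traditional Yes/No) and how well (the rubric column). The numeric
# values assigned to each label are an illustrative assumption.

RUBRIC = {"Poor": 1, "Adequate": 2, "As Expected": 3, "Exemplary": 4}

def score_item(performed: bool, rating: str = "", comment: str = "") -> dict:
    """One checklist row: did the behavior occur, and if so, how well?"""
    if performed and rating not in RUBRIC:
        raise ValueError(f"Rating must be one of {sorted(RUBRIC)}")
    return {
        "performed": performed,                        # the traditional Yes/No
        "rating": RUBRIC[rating] if performed else 0,  # the missing "how well"
        "comment": comment,
    }

row = score_item(True, "Adequate", "Hesitated before the final step")
```

A row marked Yes/Adequate with a comment gives the investigator far more to work with than an unadorned Yes.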

What does the QT Signature Mean?

The signature on the document used to qualify an employee attests that the performance of the task, technique, or procedure either matched the criteria defined in the tool (Y/N) or, if a qualitative rubric is used, matched them to a documented degree.

  • It does not mean that said employee completed all his curricula requirements.
  • It does not mean that said employee explained how to execute the procedure without performing it.
  • It does not mean the QT is responsible for the future performance of said employee.

In fact, it means just the opposite.  It documents that on this date, said employee was capable of performing the procedure as expected and that from this date forward, said employee owns his/her own work including deviations.  The employee is no longer being supervised by the QT for this SOP.  Without this understanding and agreement, the integrity of the whole program is put into question, not just the effectiveness of SOJT.  Be sure to explain this in the QT workshop and in the Robust Training System SOPs.  – VB

Hale, J., “A Better Way to Measure Soft Skills”, TD, ATD, August 2018, pp. 61-64.


Is your OJT Methodology being documented consistently?


What’s Your Training Effectiveness Strategy? It needs to be more than a survey or knowledge checks

When every training event is delivered using the same method, it’s easy to standardize the evaluation approach and the tool. Just answer these three questions:

  • What did they learn?
  • Did it transfer back to the job?
  • Was the training effective?

In this day and age of personalized learning and engaging experiences, one-size-fits-all training may be efficient for an organizational rollout but not the most effective for organizational impact or even behavior change. The standard knowledge check can indicate how much they remembered. It might be able to predict what will be used back on the job. But evaluate how effective the training was? That’s asking a lot from a 10-question multiple-choice/true-false “quiz”. Given the complexity of the task or the significance of the improvement for the organization, such as addressing a consent decree or closing a warning letter, it would seem that allocating budget for proper training evaluation techniques would not be challenged.

Do you have a procedure for that?

Perhaps the sticking point is explaining to regulators how decisions are made and using what criteria. Naturally, documentation is expected, and this also requires defining the process in a written procedure. It can be done. It means being in tune with training curricula, aware of the types of training content being delivered, and recognizing the implications of the evaluation results. And, of course, following the execution plan as described in the SOP.   Three central components frame a Training Effectiveness Strategy: Focus, Timing, and Tools.

TRAINING EFFECTIVENESS STRATEGY: Focus on Purpose

Our tendency is to look at the scope (the what) first. I ask that you pause long enough to consider your audience, identify your stakeholders, and determine who wants to know what. This analysis shapes the span and level of your evaluation policy. For example, C-Suite stakeholders ask very different questions about training effectiveness than participants do.

The all-purpose standard evaluation tool weakens the results and disappoints most stakeholders. While it can provide interesting statistics, the real question is what “they” will do with the results. What are stakeholders prepared to do except cut the training budget or stop sending employees to training? Identify what will be useful to whom by creating a stakeholder matrix.
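A stakeholder matrix can be as simple as a table mapping each audience to the question it cares about and the evidence that answers it. The entries below are illustrative assumptions, not a prescribed set:

```python
# Sketch: a minimal stakeholder matrix. Rows and wording are hypothetical
# examples of the kind of entries such a matrix might contain.

stakeholder_matrix = [
    {"stakeholder": "C-Suite",       "wants_to_know": "Did business impact improve?",
     "evidence": "KPIs / balanced scorecard"},
    {"stakeholder": "QA / Auditors", "wants_to_know": "Are employees qualified?",
     "evidence": "Documented qualification events"},
    {"stakeholder": "Supervisors",   "wants_to_know": "Can the learner perform solo?",
     "evidence": "Observed performance demonstration"},
    {"stakeholder": "Participants",  "wants_to_know": "Was the course worthwhile?",
     "evidence": "Level 1 feedback survey"},
]

def evidence_for(stakeholder: str) -> str:
    """Look up what a given stakeholder accepts as proof of effectiveness."""
    for row in stakeholder_matrix:
        if row["stakeholder"] == stakeholder:
            return row["evidence"]
    raise KeyError(stakeholder)
```

Choosing the evaluation tool then becomes a lookup against the matrix rather than a one-size-fits-all survey.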

Will your scope also include the training program (aka Training Quality System) especially if it is not included in the Internal Audit Quality System? Is the quality system designed efficiently to process feedback and make the necessary changes that result from the evaluation results? Assessing how efficiently the function performs is another opportunity to improve the workflow by reducing redundancies thus increasing form completion speed and humanizing the overall user experience. What is not in scope? Is it clearly articulated?

TRAINING EFFECTIVENESS STRATEGY: Timing is of course, everything

Your strategy needs to include when to administer your evaluation studies. With course feedback surveys, we are used to administering them immediately afterward; otherwise, the return rate drops significantly. For knowledge checks, we also “test” at the end of the session. Logistically it’s easier to administer because participants are still in the event, and we also increase the likelihood of higher “retention” scores.

But when does it make more sense to conduct the evaluation? Again, it depends on what the purpose is.

  • Will you be comparing before and after results? Then baseline data needs to be collected before the event begins, e.g., the current set of Key Performance Indicators or performance metrics.
  • How much time do the learners need to become proficient enough for the evaluation to be accurate? E.g., immediately after, 3 months, or, realistically, 6 months after?
  • When are metrics calculated and reported? Quarterly?
  • When will they be expected to perform back on the job?

Measuring Training Transfer: 3, 6 and maybe 9 months later

We can observe whether a behavior occurs and record the number of people who are demonstrating the new set of expected behaviors on the job. We can evaluate the quality of a work product (such as a completed form or executed batch record) by recording the number of people whose work product satisfies the appropriate standard or target criteria. We can record the frequency with which the target audience promotes the preferred behaviors in dialogue with peers and supervisors and in their observed actions.

It is possible to do this; however, the time, people, and budget needed to design the tools and capture the incidents are at the core of management support for a more rigorous training effectiveness strategy. How important is it to the organization to determine if your training efforts are effectively transferring back to the job? How critical is it to mitigate the barriers that get in the way when the evaluation results show that performance improved only marginally? It is cheaper to criticize the training event(s) than to address the real root cause(s). See Training Does Not Stand Alone (Transfer Failure section).

TRAINING EFFECTIVENESS STRATEGY: Right tool for the right evaluation type

How will success be defined for each “training” event or category of training content? Are you using tools/techniques that meet your stakeholders’ expectations for training effectiveness? If performance improvement is the business goal, how are you going to measure it? What are the performance goals that “training” is supposed to support? Seek confirmation on what will be accepted as proof of learning, evidence of transfer to the workplace, and identification of leading indicators of organizational improvement. These become the criteria by which the evaluation has value for your stakeholders. Ideally, the choice of tool should be decided after the performance analysis is discussed and before content development begins.

Performance Analysis first; then possibly a training needs analysis

Starting with a performance analysis recognizes that performance occurs within organizational systems. The analysis provides a 3-tiered picture of what’s encouraging or blocking performance for the worker, the work tasks, and/or the workplace, and what must be in place at these same three levels in order to achieve sustained improvement. The “solutions” are tailored to the situation based on the collected data, not on an assumption that training is needed. Otherwise, you have a fragment of the solution with high expectations for solving “the problem”, relying on the evaluation tool to provide effective “training” results. Only when the cause analysis reveals a true lack of knowledge will training be effective.

Why aren’t more Performance Analyses being conducted?

For starters, most managers want the quick fix of training because it’s a highly visible activity that everyone is familiar and comfortable with. The second possibility lies in the inherent nature of performance improvement work. Very often the recommended solution resides outside of the initiating department and requires the cooperation of others.   Would a request to fix someone else’s system go over well where you work? A third and most probable reason is that it takes time, resources, and a performance consulting skill set to identify the behaviors, decisions, and “outputs” that are expected as a result of the solution. How important will it be for you to determine training effectiveness for strategic corrective actions?

You need an execution plan

Given the variety of training events and levels of strategic importance within your organization, one standard evaluation tool may no longer be suitable. Does every training event need to be evaluated at the same level of rigor? Generally speaking, the more strategic the focus, the more tedious and time-consuming the data collection will be. Again, review your purpose and scope for the evaluation. Refer to your stakeholder matrix and determine which evaluation tool(s) are better suited to meet their expectations.

For example, completing an after-training survey for every event is laudable; however, executive leadership values this data the least. According to Jack and Patricia Phillips (2010), they most want to see business impact. Tools like balanced scorecards can be customized to capture and report on key performance indicators and meaningful metrics. Develop your plan wisely, generate a representative sample initially, and seek stakeholder agreement to conduct the evaluation study.

Life after the evaluation: What are you doing with the data collected?

Did performance improve? How will the evaluation results change future behavior and/or influence design decisions? Or perhaps the results will be used for budget justification, support for additional programs, or even a corporate case study? Evaluation comes at the end, but in reality it is continuous throughout. Training effectiveness means evaluating the effectiveness of your training: your process, your content, and your training quality system. It’s a continuous, cyclical process that doesn’t end when the training is over. – VB

 

Jack J. Phillips and Patricia P. Phillips, “How Executives View Learning Metrics”, CLO, December 2010.

Recommended Reading:

Jean-Simon Leclerc and Odette Mercier, “How to Make Training Evaluation a Useful Tool for Improving L&D”, Training Industry Quarterly, May-June 2017.

 

Did we succeed as intended? Was the training effective?

When you think about evaluating training, what comes to mind? It’s usually a “smile sheet”/ feedback survey about the course, the instructor and what you found useful. As a presenter/instructor, I find the results from these surveys very helpful, so thank you for completing them. I can make changes to the course objectives, modify content or tweak activities based on the comments. I can even pay attention to my platform skills where noted. But does this information help us evaluate if the course was successful?

Formative vs. Summative Distinction

Formative assessments provide data about the course design. Think form-ative; form-at of the course. The big question to address is whether the course as designed met the objectives. For example, the type of feedback I receive from surveys gives me comments and suggestions about the course.

Summative assessments are less about the course design and more about the results and impact. Think summative; think summary. It’s more focused on the learner, not the instructional design. But when the performance expectations are not met or the “test” scores are marginal, the focus shifts back to the course, instructor/trainer, and instructional designer with the intent of finding out what happened and what went wrong. When root cause analysis fails to find the cause, it’s time to look a little deeper at the objectives.

Objectives drive the design and the assessment

Instructional Design 101 begins with well-developed objective statements for the course, event, or program. These statements, aka objectives, determine the content, and they also drive the assessment. For example, a written test or knowledge check that asks questions about the content is typically used for classroom sessions. In order for learners to be successful, the course must include the content, whether delivered in class or as pre-work. But what are the assessments really measuring? How much of the content learners remember, and perhaps how much they can apply when they return to work.

Training effectiveness, on the other hand, is really an evaluation of whether we achieved the desired outcome. So I ask you: what is the desired outcome for your training, to gain knowledge (new content) or to use the content correctly back in the workplace? The objectives need to reflect the desired outcome in order to determine the effectiveness of training.

What is your desired outcome from training?

Levels of objectives, who knew?

Many training professionals have become familiar with Kirkpatrick’s 4 Levels of Evaluation over the course of their careers, but fewer are acquainted with Bloom’s Taxonomy of Objectives. Yes, objectives have levels of increasing complexity resulting in higher levels of performance. Revised in 2001, the levels were renamed to better describe what’s required of the learner to be successful in meeting the objective. Take note: remembering and understanding carry the lowest cognitive load, applying and analyzing are mid-range, and evaluating and creating are at the highest levels.

If your end in mind is knowledge gained ONLY, continue to use the lower-level objectives. If, however, your desired outcome is to improve performance or apply a compliant workaround in the heat of a GMP moment, your objectives need to shift to a higher level of reasoning in order for the training design to be effective and meet performance expectations. They need to become more performance based. Fortunately, much has been written about writing effective objective statements, and resources are available to help today’s trainers.

Accuracy of the assessment tools

The tools associated with the 4 levels of evaluation can be effective when used for the right type of assessment. For example, Level 1 (Reaction) surveys are very helpful for formative assessments. Level 2 (Learning) assessments are effective in measuring retention and minimum comprehension and go hand in hand with learning-based objectives. But when the desired outcomes are actually performance based, Level 2 knowledge checks need to shift up to become more application oriented, such as “what if” situations and scenarios requiring analysis, evaluation, and even problem solving. Or shift altogether to Level 3 (Behavior) and develop a new level of assessments, such as demonstrations and samples of finished work products.

Trainers are left out of the loop

But today’s trainers haven’t always developed the instructional design skill set. They do the best they can with the resources given, including reading books and scouring the Internet. For the most part, their training courses are decent and the assessments reflect passing scores. But when it comes to Level 4 (Results) impact questions from leadership, it becomes evident that trainers are left out of the business analysis loop and are therefore missing the performance expectations. This is where the gap exists. Trainers instead build courses based on knowledge/content and develop learning objectives that determine what learners should learn. They create assessments to determine whether attendees have learned the content; but this does not automatically confirm that learners can apply the content back on the job in various situations under authentic conditions.

Performance objectives drive a higher level of course design

When you begin with the end in mind, namely the desired performance outcomes, the objective statements truly describe what the learners are expected to accomplish. While the content may be the same or very similar, how we determine whether employees are able to execute post training requires more thought about the accuracy of the assessment. It must be developed from the performance objectives in order for it to be a valid “instrument.” The learner must perform (do something observable) so that it is evident s/he can carry out the task under real workplace conditions.

To ensure learner success with the assessment, the training activities must also be aligned with the level of the objectives. This requires the design of the training event to shift from passive lecture to active engagement intended to prepare learners to transfer what they experienced in the event back to their workplace. This includes making mistakes and learning how to recognize when a deviation is occurring. Michael Allen refers to this as “building an authentic performance environment”. Thus, trainers and subject matter experts will need to upgrade their instructional design skills if they really want training to succeed as intended. Are you willing to step up and do what it takes to ensure training is truly effective? – VB

 

Allen, M. Design Better Design Backward, Training Industry Quarterly, Content Development Special Issue, 2017, p. 17.

Why Knowledge Checks are Measuring the Wrong Thing

When I taught middle school math, tests were used to assess knowledge comprehension and some application, with word problems and a few complex questions requiring logic proofs. Results were captured via a score: a metric, if you will, of how well you answered the questions, and very appropriate in academia.

In our quest for training evaluation metrics, we have borrowed the idea of testing someone’s knowledge as a measure of effectiveness. This implies that a corporate classroom mirrors an educational classroom and that testing means the same thing: a measure of knowledge comprehension. However, professors, colleges, universities, and academic institutions are not held to the same results-oriented standard. In the business world, results need to be performance oriented, not just knowledge gained.

So why are we still using tests?

Call it a quiz, a knowledge check, or any other name; it is still assessing some form of knowledge comprehension. In training effectiveness parlance, it is also known as a Level 2 evaluation. Having the knowledge is no guarantee that it will be used correctly back on the job. Two very common situations occur in the life science arena where “the quiz” and knowledge checks are heavily used: the Annual GMP Refresher and the Read & Understand approach for SOPs.

Life sciences companies are required by law to conduct annual regulations training (GMP Refreshers) so as to remain current. To address the training effectiveness challenge, a quiz / questionnaire / knowledge assessment (KA) is added to the event. But what is the KA measuring? Is it mapped to the course/session objectives, or are the questions so general that they can be answered correctly without having to attend the sessions? Or worse yet, are the questions being recycled from year to year and event to event? What does it mean for the employee to pass the knowledge check or score 80% or better? When does s/he learn of the results? In most sessions, there is no time left to debrief the answers. This is a lost opportunity to leverage feedback into a learning activity. How do employees know if they are leaving the session with the “correct information”?

The other common practice is to include five multiple choice questions as a knowledge check for Read & Understand (R & U) SOPs, especially for revisions. What does it mean if employees get all 5 questions right? That they will not make a mistake? That the R & U method of SOP training is effective? The search function in most e-doc systems is really good at finding the answers. It doesn’t necessarily mean that employees read the entire procedure and retained the information correctly. What does it mean for the organization if human errors and deviations from procedures are still occurring? Does it really mean the training is ineffective?

What should we be measuring?

The conditions under which employees are expected to perform need to be the same conditions under which we “test” them. So it makes sense to train them under those same conditions as well. What do you want/need your employees (learners) to do after the instruction is finished? What do you want them to remember and use from the instruction in the heat of their work moments? Both the design and the assessment need to mirror these expectations. And that means developing objectives that guide the instruction and form the basis of the assessment.

So ask yourself: when in their day-to-day activities will employees need to use this GMP concept? Or where in the employees’ workflow will this procedure change need to be applied? Isn’t this what we are training them for? Your knowledge checks need to ensure that employees have the knowledge, confidence, and capability to perform as trained. It’s time to re-think what knowledge checks are supposed to do for you. – VB


(c) HPIS Consulting, Inc.