Is Documenting Your OJT Methodology Worth It?

The short answer is yes!  In this blog post, I intend to share how I came to this answer and to make the case for you to say yes as well.  I will also explore the challenges of documenting OJT, as promised in a previous blog post.

Years ago, during the Qualified Trainers (QT) workshop, I would ask QTs the following two questions:

  • What is OJT?
  • How do you deliver OJT?

Invariably, all would answer the first question the same way: on the job training.  Then I would ask the attendees to form groups to discuss the second question with their peers.  I purposely mixed the groups so that there was equal representation of manufacturing trainers and QC analytical laboratory trainers, and a fascinating exchange occurred between the attendees.  During the debriefing activity, we learned that there was a lot of variability in how the trainers were conducting OJT.  How can that be when they all answered the first question so consistently? “Well,” one of them said, “we don’t have a procedure on it, so I just go about it the way I think it should be done.  I’ve had success so far, so I keep on doing what I’ve done in the past.”

How many ways are there to train on this procedure?

In the blog post, “When SMEs have too much secret sauce”, I share the story of a Director of Operations who had to find out from an FDA Investigator that his SMEs were teaching techniques for a critical process procedure that 1) were not written down or approved (aka their secret sauce) and 2) were not at all consistent with each other.  That led to an FD-483 observation, a high-visibility corrective action project with global impact, and a phone call to HPISC.

To get consistent OJT, you need to define the process and approve the content QTs will use to deliver OJT.  I’m not proposing a cookie-cutter approach that makes all QTs the same.  I am advocating a clear distinction between each step / stage / phase so that both the learner and the QT know exactly where they are in the process, what is expected of them in that step and why it is needed.  This is no longer just “go follow Joe or Jane around,” which is how traditional OJT happened in the past.

Declaring your OJT Model

I’m less focused on what you call these steps or how many there are.  I am looking to see how these steps move a new hire along the journey to becoming qualified prior to being released to task.  For me, this is what makes OJT truly structured.  And that model needs to be captured in a standard operating procedure or embedded in a training procedure so that all employees know how they will receive their OJT.  The last step has to include the final evaluation of performance, not to be confused with demonstrating proficiency as in practice sessions.

How many times does a learner have to practice an SOP before s/he is ready for the qualification event (Q-Event)? 

The nature of the SOP or the complexity of the task at hand determines this.  But how do I proceduralize that, you ask?  It starts by not arbitrarily picking the magic number 3.  I have engaged in countless discussions regarding the exhaustive list of exceptions to forcing the rule of 3 practice sessions.  And some QTs will argue for more than 3 sessions, especially when the procedure is performed infrequently.  It’s not about the number of times, folks.

Documenting OJT sessions presents a challenge for many trainers and document control staff.  Are we required to capture every OJT session or just one?  What is considered an OJT session?  My favorite lament is: “Do you know what that will do to our database, not to mention the amount of paperwork that would create!”  A workaround to all these questions and concerns is to capture at least one session along the progression of each OJT step per the OJT Model, thus documenting adherence to the procedure.  For example, the first step is to Read SOP 123456.  As mentioned in other HPISC blogs and white papers, we are pretty good at this already.  Then the next step is to discuss / introduce the SOP, so capture when that discussion occurred if it’s different from Step 1 READ.  The “trainer demonstrates” portion can also be captured.  Where it gets tricky is when we ask the learner to demonstrate and practice.  Why not capture the last instance, when it is confirmed the learner is ready to qualify?  If we keep it simple and document that our learners have experienced each step / stage, then we are complying with our OJT methodology and minimally documenting their OJT progression.
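
To make “one record per step” concrete, here is a minimal sketch in Python of what such progression records could look like.  The step names and record fields are hypothetical illustrations, not a prescribed schema; use whatever your OJT Model SOP declares.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical steps, in the order the OJT Model SOP declares them.
    OJT_STEPS = ["READ", "DISCUSS", "TRAINER DEMONSTRATES", "LEARNER PRACTICES", "Q-EVENT"]

    @dataclass
    class OJTRecord:
        learner: str
        sop: str            # e.g., "SOP 123456"
        step: str           # one of OJT_STEPS
        session_date: date
        qt: str             # the Qualified Trainer who documented the session

    def progression_complete(records: list) -> bool:
        """True when at least one session is documented for every step of the model."""
        documented = {r.step for r in records}
        return all(step in documented for step in OJT_STEPS)

The point is not the tooling: one documented session per step is enough to show adherence to the model without recording every practice run.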

Is one qualification session enough to pass?

At some point in these documentation discussions, we have to let the QT evaluate the outcome of the learner’s demonstration(s).  Does the performance meet “business as usual” expectations?  If it does, the learner is ready to qualify in order to perform independently.  If not, feedback is provided and the learner undergoes more practice.  How many times is enough?  Until both the learner and the QT are confident that s/he is not going to have an operator-error deviation a week after going solo.  The QT is ultimately the one who has to assess progress and determine either that “with a few more sessions” the learner will get this, or that s/he may never get it and it’s time to have a discussion with the supervisor.

How do you know if someone is qualified to perform a task?

Ideally, the answer would be because we can look it up in our LMS history.  And that of course depends on how well the critical steps and behaviors are captured in the documentation tool. The tool is supposed to help the QT be as objective as possible and consistently evaluate performance as demonstrated. In the article, “A Better Way to Measure Soft Skills”, author Judith Hale explains the difference between a checklist and a rubric.

            “Checklists only record whether a behavior occurred, though, and not the quality of the behavior.  Rubrics, on the other hand, measure how well the learner executed the behavior.”  p. 62.

Yes, but was it tolerable, adequate or exemplary?

What I typically see are checklists with varying levels of tasks, steps and/or behaviors and a column for Yes, No and Comments.  What I don’t see is a column to mark how well the learner performed!  Is it enough to mark Yes or No for each item, since most Q-Events are Pass or “needs more practice”?  Maybe.  But consider the following situation.  A human error deviation has occurred and the LMS indicates the technician has earned qualified status.  The document used to qualify this individual shows all Yeses.  Yes, s/he was able to demonstrate the step, critical task, and/or behavior, but what we don’t know is how well.  Are we to assume that No means “No, not at all” and that Yes means performed “Well”, “As Expected”, “Adequate”, or maybe in this case “Sort of”?

An additional column describing what it means to score low, medium or high (or in our situation: Poor, Adequate, As Expected, and even Exemplary) could provide much-needed insight for the root cause analysis and investigation that will follow this deviation.  It provides a level of detail about the performance that goes beyond Yes, No, or Comment.  In most checklists I’ve reviewed, the comments column is hardly ever used.
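
For illustration, here is a minimal sketch of the difference in Python.  The scale labels come from the discussion above; the checklist items are hypothetical.

    # A checklist records only whether the behavior occurred.
    checklist = {"Verifies line clearance": "Yes", "Labels samples": "Yes"}

    # A rubric records how well the behavior was executed.
    SCALE = ["Poor", "Adequate", "As Expected", "Exemplary"]
    rubric = {
        "Verifies line clearance": "As Expected",
        "Labels samples": "Adequate",  # occurred, but below business-as-usual quality
    }

    # A deviation investigation can now ask more than "was it a Yes?"
    weak_spots = [item for item, score in rubric.items()
                  if SCALE.index(score) < SCALE.index("As Expected")]
    print(weak_spots)  # ['Labels samples']

The checklist above would have shown two Yeses and told the investigator nothing; the rubric points straight at the item worth a closer look.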

In future posts, I will blog about what the QT signature means.  Until then, is documenting your OJT methodology worth it?  What is your answer? – VB  

Hale, J. “A Better Way to Measure Soft Skills,” TD, ATD, August 2018, pp. 61-64.

Curricula Creation and Maintenance are NOT a “one and done event”!

Over the last few weeks, I’ve been blogging about structured on the job training (SOJT) with refreshed insights from my Life Sciences consulting projects.  Allow me a sidebar blog about the need for keeping curricula up to date.  I realize the work is tedious, even painful at times.  That’s why donuts show up for meetings scheduled in the morning, pizza bribes if it’s lunchtime, and quite possibly even cookies for a late afternoon discussion.  So I get it when folks don’t want to look at their curricula again or even have a conversation about them.

Once a year curricula upkeep?

It’s like having a ficus hedge on your property.  If you keep it trimmed, pruning is easier than hacking off the major overgrowth that’s gone awry a year later.  And yet, I continue to get pushback when I recommend quarterly curricula updates.  Even semi-annual intervals are met with disdain.  In the end we settle for once a year and I cringe on the inside.  Why?  Because a once-a-year review can be like starting all over again.

Don’t all databases know the difference between new and revised SOPs?

Consider for a moment the number of revisions your procedures go through in a year.  If your learning management system (LMS) is mature enough to manage revisions with a click to revise and auto-update all affected curricula, then once a year may be the right time span for your company. 

Others in our industry don’t have that functionality within their training database.  For these administrators, each revision means manually creating a new entry in the “course catalog” and deactivating/retiring the previous version; some may be able to perform batch uploads with a confirmation activity post submission.  Then comes the manual search for all curricula so that the old SOP number can be removed and replaced with the next revision.  Followed by a manual notification to all employees assigned to either that SOP or to the curricula, depending on how the database is configured.  I’m exhausted just thinking about this workload.

Over the course of a year, how many corrective actions have resulted in major SOP revisions that require a new OJT session and quite possibly a new qualification event?  What impact do all these changes have on the accuracy of your curricula?  Can your administrator click the revision button for these as well?  And then there’s the periodic review of SOPs, which in most companies happens every two years.  What is the impact of SOPs that are deleted as a result of the review?  Can your LMS / training database search for affected curricula and automatically remove these SOPs as well?
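
To picture what the “click to revise” functionality is doing behind the button, here is a minimal sketch; the curricula names, SOP numbers and revision scheme are all hypothetical.

    # Hypothetical curricula that list SOP requirements by number and revision.
    curricula = {
        "MFG-Operator": ["SOP 123456 rev 3", "SOP 222222 rev 1"],
        "QC-Analyst":   ["SOP 123456 rev 3"],
    }

    def revise_sop(sop: str, old_rev: int, new_rev: int) -> list:
        """Swap the retired revision in every affected curriculum; return who was touched."""
        old_entry, new_entry = f"{sop} rev {old_rev}", f"{sop} rev {new_rev}"
        touched = []
        for name, reqs in curricula.items():
            if old_entry in reqs:
                reqs[reqs.index(old_entry)] = new_entry
                touched.append(name)  # assignees of these curricula need notification
        return touched

    print(revise_sop("SOP 123456", 3, 4))  # ['MFG-Operator', 'QC-Analyst']

Everything this little function does automatically is what an administrator without that functionality does by hand, per revision, per curriculum.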

The Real Purpose for Curricula

Let’s not lose sight of why we have curricula in the first place.  So that folks are trained in the “particular operations that the employee performs” (21CFR§211.25).  And “each manufacturer shall establish procedures for identifying training needs and ensure that all personnel are trained to adequately perform their assigned responsibilities” (21CFR§820.25).  Today’s LMSes reconcile training completion against curricula requirements.  So I’m grateful that this task is now automated.  But it depends on the level of functionality of the database in use.  Imagine having to manually reconcile each individual in your company against their curricula requirements.  There are not enough hours in a normal workday for one person to keep this up to date!  And yet in some organizations, this is the only way they know who is trained.  Their database is woefully limited in functionality.
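
The reconciliation itself is conceptually simple, which is exactly why it belongs in the database rather than on one person’s desk.  A minimal sketch, with hypothetical names and requirements:

    # Hypothetical reconciliation of requirements against completed training.
    requirements = {"jdoe": {"SOP 123456 rev 4", "SOP 222222 rev 1"}}
    completions  = {"jdoe": {"SOP 123456 rev 4"}}

    def training_gap(employee: str) -> set:
        """Requirements with no matching completion: the employee's open training."""
        return requirements.get(employee, set()) - completions.get(employee, set())

    print(training_gap("jdoe"))  # {'SOP 222222 rev 1'}

One set difference per employee; multiply that by headcount and revision churn and you see why doing it manually consumes the workday.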

The quality system regulation for training is quite clear regarding a procedure for identifying training needs.  To meet that expectation, industry practice is to have a process for creating curricula and maintaining the accuracy and completeness of curricula requirements.  Yes, it feels like a lot of paperwork.  §820.25 also states “Training shall be documented”.   For me, it’s not just the completion of the Read & Understood for SOPs.  It includes the OJT process, the qualification event AND the ownership for curricula creation and maintenance. (Stay tuned for a future blog on documenting OJT.)

Whose responsibility is it, anyway?

Who owns curricula in your company?  Who has the responsibility to ensure that curricula are accurate and up to date?  What does your procedure include?  Interestingly enough, the companies I have seen get cited with training observations often have outdated and inaccurate curricula!  Their curricula documentation frequently shows reviews overdue by 2-3 years, reviews not performed since original creation, and in some places no specialized curricula at all!  “They were set up wrong.”  “The system doesn’t allow us to differentiate enough.”  “Oh, we were in the process of redoing them, but then the project was put on the back burner.”  Are you waiting to be cited by an agency investigator during a biennial GMP inspection or Pre-Approval Inspection?

The longer we wait to conduct a curricula review, the bigger the training gap becomes.  And that can snowball into missing training requirements, which leads to employees performing duties without being trained and qualified.  Next thing you know, you have a bunch of Training CAPA notifications sitting in your inbox.  Not to mention an FD-483 and quite possibly a warning letter.  How sophisticated is your training database?  Will once a year result in a “light trim” of curricula requirements or a “hack job” of removing outdated requirements and inaccurate revision numbers?  Will you be rebuilding curricula all over again?  Better bring on the donuts and coffee!  -VB

How many procedures does it take to describe a training program?

From Traditional OJT to SOJT: What else makes OJT structured?

SOJT needs a clearly defined scope.  If you don’t articulate the boundaries, you can end up hearing statements like “only a Qualified Trainer (QT) can deliver deviation related training sessions” and wonder how in the world this got so out of hand.  It comes back to how well you define the scope of their responsibilities.  More specifically, are there different categories of QTs, or is it strictly a manufacturing program/initiative?  For example, do you have qualified trainers who deliver OJT (on the job training) and SMEs as Classroom Trainers who are qualified to deliver warning letter corrective actions for overhauled quality systems?  The scope will clarify who is responsible for what kind of training situation.

SOJT includes a deliberate review of regulatory, departmental and positional /functional requirements.  Yes, this is curricula.  But having curricula is not enough to call it SOJT as discussed in the previous blog.  It also includes a TRAINING SCHEDULE for the curricula requirements and an individualized learning plan for new hires; not just a matrix with their name highlighted in yellow marker. 

SOJT is formal and it’s documented.  For years, we have been documenting that we’ve read and understood our required SOPs, so that part is covered.  QTs need “Quality Control Unit” approved documents (aka SOPs) to use as the main training material, and the proper documentation to record an OJT session.  But documenting OJT sessions has been a bit of a challenge for the life sciences industry.  Perhaps a future blog can reveal some of the issues.

Even more challenging is adding the OJT requirement to curricula.  Unfortunately, it almost doubles the number of requirements, so for companies who focus solely on the requirement count, this is daunting.  What helps is differentiating between the R & U stage and completion of the actual OJT events.  Some organizations go one step further and also add the Qualification Event as a final requirement, unless their LMS can be configured to mimic the OJT Methodology steps, thus reducing the number of times the same procedure is listed, albeit for a different training step.  The HPISC Robust Training System also includes an OJT methodology that delineates the steps for conducting OJT and the qualification process.  Stay tuned for a future blog about OJT Methodologies.
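
To see why the requirement count grows, consider this hypothetical expansion of a two-SOP curriculum into per-step requirements:

    # Hypothetical: each SOP expands into one requirement per training step.
    sops = ["SOP 123456", "SOP 222222"]
    steps = ["R & U", "OJT", "Q-Event"]
    requirements = [f"{sop}: {step}" for sop in sops for step in steps]
    print(len(requirements))  # 6 requirements from 2 SOPs

An LMS that models the OJT Methodology steps natively can list each SOP once and track the steps underneath it, which is where the count relief comes from.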

SOJT is also delivered by a department SME qualified to deliver OJT.  The old practice of selecting department trainers based on seniority and documented R & U SOP paperwork no longer meets regulatory expectations.  You need a QA-approved process for qualifying SMEs that includes a train-the-trainer workshop focused on hands-on training and NOT on how to develop PowerPoint slides.  In future blog(s), I will provide more discussion points regarding Qualified Trainer requirements.

So it’s as easy as that, right?

Moving from T-OJT to Structured OJT

Actually, no.  Last time, I blogged about QTs indicating that their organizations are just at the brink of SOJT and that scheduling seems to be the barrier to moving into SOJT.  In future posts, I will share some organizational issues to manage, as well as how to shift the managers’ mindset that all we need is an LMS.

Until then, you might enjoy the HPISC Impact Story below:

Does having curricula make OJT structured?

The Evolution

First there was “Just go follow Joe around” training …

And then came “and it shall be documented” …

Next the follow up question:  “Are they trained in everything they need to know?”

So line managers used the SOP Binder Index and “Read and Understand Training” became a training method…

But alas, they complained that it was much too much training and errors were still occurring …

So training requirements were created and curricula were born.

Soon afterwards, LMS vendors showed up in our lobbies and promised us that with a click and a report, we could have a training system!

But upper management called for METRICS! So dashboards became a visible tool. Leaderboards helped create friendly competition among colleagues while “walls of shame” made folks hang their heads and ask for leniency, exemptions and extensions …

But just having curricula doesn’t necessarily make OJT structured.  During the HPISC Qualified Trainers workshop, I present the difference between t-ojt (traditional) and s-ojt (structured). 

Moving from t-ojt to s-ojt

When I ask the QTs where they feel their organization is, most of them will say still in the t-ojt box but closer to the middle of the range.  Why, I ask?  Invariably, they’ll tell me OJT is not scheduled.  “Just because I have the list (curricula requirements) doesn’t mean the training gets scheduled or that qualification events get conducted.”  Rather, it happens when someone makes it a priority, an inspection is coming, or a CAPA includes it as part of corrective actions.  So what else makes it structured?

In the next blog, I’ll continue the “discussion”.  In the meantime, feel free to share your thoughts regarding how OJT is handled at your site.  -VB

HPISC has articles, impact stories and white papers.

Impact Story: Reduce Training by 10%

The Journey of a New Hire to Qualified Employee: What really happens at your company?

After weeks if not months of waiting for your new hire, she is finally here, finishing up first-day orientation. Day 2, she’s all yours. Are you excited or anxious? The LMS printout of training requirements is overwhelming, even for you. The bottom line question running through your mind: when can she be released to task? Isn’t there a faster way to expedite this training, you ask? There is; it’s called S-OJT.

Structured on the job training (S-OJT) is an organized and planned approach for completing training requirements. Yet many line managers want their trainees now. Ironically, the faster you “push” trainees through their training matrix, the slower the learning curve. This in turn often leads to more errors, deviations, and quite possibly CAPA investigations for numerous training incidents. It’s a classic case of pay now or pay later.

This proactive vs. reactive dilemma is not new. Traditional OJT, aka “follow Joe around”, looks like a win-win for everyone on the surface. The new hire gets OJT experience, an SME is “supervising” for mistakes, and supervisors are keeping up with the production schedule. So what’s wrong, you ask?

“[SOJT] is the planned process of developing task level expertise by having an experienced employee train a novice employee at or near the actual work setting.” Jacobs & Jones, 1995

After 6 months or so, the trainee isn’t new anymore and everyone “expects” your new employee to be fully qualified by then, with no performance issues and no deviations resulting from operator error. Without attentive monitoring of the trainee’s progress, the trainee is at the mercy of the daily schedule.  S/he is expected to dive right in to whatever process or part of the process is running that day, without taking into account where the trainee is on the learning curve.  The assigned SME, or perhaps the “buddy” for the day, is tasked with not only trying to perform the procedure correctly but also explaining what he’s doing and why it may be out of sequence in some cases.  The burden of the learning gap falls to the SME, who does his best to answer the why.

The structured approach puts the trainee’s needs center stage. What makes sense for him/her to learn, and when? The result is a learning plan individualized for this new hire that includes realistic time frames. Added to the plan is a Qualified Trainer who can monitor the progression towards more complex procedures and increase the odds of first-time qualification success. Still too much time to execute? How many hours will you spend investigating errors, counseling the employee and repeating the training? Seems worth it to me. – VB

You may also like: Moving Beyond R & U SOPs

Jacobs, R.L. and Jones, M.J. Structured On-the-Job Training: Unleashing Employee Expertise in the Workplace. San Francisco: Berrett-Koehler, 1995.

What’s Your Training Effectiveness Strategy? It needs to be more than a survey or knowledge checks

When every training event is delivered using the same method, it’s easy to standardize the evaluation approach and the tool. Just answer these three questions:

  • What did they learn?
  • Did it transfer back to the job?
  • Was the training effective?

In this day and age of personalized learning and engaging experiences, one-size-fits-all training may be efficient for an organizational rollout, but it is not the most effective for organizational impact or even change in behavior. The standard knowledge check can indicate how much they remembered. It might be able to predict what will be used back on the job. But be able to evaluate how effective the training was? That’s asking a lot from a 10-question multiple-choice/true-false “quiz”. Given the level of complexity of the task, or the significance of the improvement for the organization (such as addressing a consent decree or closing a warning letter), it would seem that allocating budget for proper training evaluation techniques would not be challenged.

Do you have a procedure for that?

Perhaps the sticking point is explaining to regulators how decisions are made and using what criteria. Naturally, documentation is expected, and this also requires defining the process in a written procedure. It can be done. It means being in tune with training curricula, being aware of the types of training content being delivered, and recognizing the implications of the evaluation results. And of course, following the execution plan as described in the SOP.  Three central components frame a Training Effectiveness Strategy: Focus, Timing and Tools.

TRAINING EFFECTIVENESS STRATEGY: Focus on Purpose

Our tendency is to look at the scope (the what) first. I ask that you pause long enough to consider your audience, identify your stakeholders, and determine who wants to know what. This analysis shapes the span and level of your evaluation policy. For example, C-Suite stakeholders ask very different questions about training effectiveness than participants do.

The all-purpose standard evaluation tool weakens the results and disappoints most stakeholders. While it can provide interesting statistics, the real question is what will “they” do with the results? What are stakeholders prepared to do except cut the training budget or stop sending employees to training? Identify what will be useful to whom by creating a stakeholder matrix.
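
A stakeholder matrix does not have to be elaborate. A hypothetical sketch (the rows, questions and evidence are illustrative only):

    Stakeholder        Wants to know                            Useful evidence
    C-Suite            Did the business improve?                Deviation/CAPA trends, scorecard
    Line managers      Can employees perform the task?          Q-Event results, error rates
    QA / Regulatory    Is the process defined and documented?   Audit-ready training records
    Participants       Was the session worth the time?          Feedback survey themes

One column per question, one row per audience: the matrix makes it obvious that no single survey satisfies everyone.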

Will your scope also include the training program (aka the Training Quality System), especially if it is not included in the Internal Audit Quality System? Is the quality system designed to efficiently process feedback and make the necessary changes that result from the evaluation findings? Assessing how efficiently the function performs is another opportunity to improve the workflow by reducing redundancies, thus increasing form completion speed and humanizing the overall user experience. What is not in scope? Is it clearly articulated?

TRAINING EFFECTIVENESS STRATEGY: Timing is, of course, everything

Your strategy needs to include when to administer your evaluation studies. With course feedback surveys, we are used to administering them immediately afterward; otherwise, the return rate drops significantly. For knowledge checks, we also “test” at the end of the session. Logistically it’s easier to administer because participants are still in the event, and we also increase the likelihood of higher “retention” scores.

But when does it make more sense to conduct the evaluation? Again, it depends on what the purpose is.

  • Will you be comparing before and after results? Then baseline data needs to be collected before the event begins, e.g., the current set of Key Performance Indicators or performance metrics.
  • How much time do the learners need to become proficient enough so that the evaluation is accurate? E.g., immediately after, 3 months, or realistically 6 months after?
  • When are metrics calculated and reported? Quarterly?
  • When will they be expected to perform back on the job?

Measuring Training Transfer: 3, 6 and maybe 9 months later

We can observe whether a behavior occurs and record the number of people who are demonstrating the new set of expected behaviors on the job. We can evaluate the quality of a work product (such as a completed form or executed batch record) by recording the number of people whose work product satisfies the appropriate standard or target criteria. We can record the frequency with which the target audience promotes the preferred behaviors in dialogue with peers and supervisors and in their observed actions.

It is possible to do this; however, the time, people and budget to design the tools and capture the incidents are at the core of management support for a more rigorous training effectiveness strategy. How important is it to the organization to determine whether your training efforts are effectively transferring back to the job? How critical is it to mitigate the barriers that get in the way when the evaluation results show that performance improved only marginally? It is cheaper to criticize the training event(s) than to address the real root cause(s). See Training Does Not Stand Alone (Transfer Failure Section).
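
As a sketch of how simple the arithmetic becomes once the observation data exists (the records below are hypothetical):

    # Hypothetical on-the-job observations collected at 3, 6 or 9 months.
    # Each tuple: (employee, behavior_demonstrated, work_product_meets_standard)
    observations = [
        ("jdoe",   True,  True),
        ("asmith", True,  False),
        ("blee",   False, False),
    ]

    n = len(observations)
    behavior_rate = sum(1 for _, b, _ in observations if b) / n
    product_rate  = sum(1 for _, _, p in observations if p) / n
    print(f"Behavior on the job: {behavior_rate:.0%}; work product at standard: {product_rate:.0%}")

The expensive part is never the calculation; it is designing the observation tool and funding the people who collect the data, which is exactly where management support is tested.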

TRAINING EFFECTIVENESS STRATEGY: Right tool for the right evaluation type

How will success be defined for each “training” event or category of training content? Are you using tools/techniques that meet your stakeholders’ expectations for training effectiveness? If performance improvement is the business goal, how are you going to measure it? What are the performance goals that “training” is supposed to support? Seek confirmation on what will be accepted as proof of learning, evidence of transfer to the workplace, and identification of leading indicators of organizational improvement. These become the criteria by which the evaluation has value for your stakeholders. Ideally, the choice of tool should be decided after the performance analysis is discussed and before content development begins.

Performance Analysis first; then possibly a training needs analysis

Starting with a performance analysis recognizes that performance occurs within organizational systems. The analysis provides a 3-tiered picture of what’s encouraging or blocking performance for the worker, the work tasks, and/or the workplace, and what must be in place at these same three levels in order to achieve sustained improvement. The “solutions” are tailored to the situation based on the collected data, not on an assumption that training is needed. Otherwise, you have a fragment of the solution with high expectations for solving “the problem”, relying on the evaluation tool to provide effective “training” results. Only when the cause analysis reveals a true lack of knowledge will training be effective.

Why aren’t more Performance Analyses being conducted?

For starters, most managers want the quick fix of training because it’s a highly visible activity that everyone is familiar and comfortable with. The second possibility lies in the inherent nature of performance improvement work. Very often the recommended solution resides outside of the initiating department and requires the cooperation of others.  Would a request to fix someone else’s system go over well where you work? A third and most probable reason is that it takes time, resources, and a performance consulting skill set to identify the behaviors, decisions and “outputs” that are expected as a result of the solution. How important will it be for you to determine training effectiveness for strategic corrective actions?

You need an execution plan

Given the variety of training events and levels of strategic importance within your organization, one standard evaluation tool may no longer be suitable. Does every training event need to be evaluated at the same level of rigor? Generally speaking, the more strategic the focus, the more tedious and time-consuming the data collection will be. Again, review your purpose and scope for the evaluation. Refer to your stakeholder matrix and determine what evaluation tool(s) are better suited to meet their expectations.

For example, completing an after-training survey for every event is laudable; however, executive leadership values this data the least. According to Jack and Patricia Phillips (2010), they want to see business impact the most. Tools like balanced scorecards can be customized to capture and report on key performance indicators and meaningful metrics. Develop your plan wisely, start with a representative sample size, and seek stakeholder agreement to conduct the evaluation study.

Life after the evaluation: What are you doing with the data collected?

Did performance improve? How will the evaluation results change future behavior and/or influence design decisions? Or perhaps the results will be used for budget justification, support for additional programs or even a corporate case study? Evaluation comes at the end but in reality, it is continuous throughout. Training effectiveness means evaluating the effectiveness of your training: your process, your content and your training quality system. It’s a continuous and cyclical process that doesn’t end when the training is over. – VB


Phillips, J.J. and Phillips, P.P. “How Executives View Learning Metrics,” CLO, December 2010.

Recommended Reading:

Jean-Simon Leclerc and Odette Mercier, “How to Make Training Evaluation a Useful Tool for Improving L&D,” Training Industry Quarterly, May-June 2017.


Did we succeed as intended? Was the training effective?

When you think about evaluating training, what comes to mind? It’s usually a “smile sheet”/feedback survey about the course, the instructor and what you found useful. As a presenter/instructor, I find the results from these surveys very helpful, so thank you for completing them. I can make changes to the course objectives, modify content or tweak activities based on the comments. I can even pay attention to my platform skills where noted. But does this information help us evaluate whether the course was successful?

Formative vs. Summative Distinction

Formative assessments provide data about the course design. Think form-ative; form-at of the course. The big question to address is whether the course as designed met the objectives. For example, the type of feedback I receive from surveys gives me comments and suggestions about the course.

Summative assessments are less about the course design and more about the results and impact. Think summative; think summary. It’s more focused on the learner, not the instructional design. But when the performance expectations are not met or the “test” scores are marginal, the focus shifts back to the course, instructor/trainer and instructional designer with the intent to find out what happened. What went wrong? When root cause analysis fails to find the cause, it’s time to look a little deeper at the objectives.

Objectives drive the design and the assessment

Instructional Design 101 begins with well-developed objective statements for the course, event, or program. These statements, aka objectives, determine the content, and they also drive the assessment. For example, classroom sessions typically use a written test or knowledge check that asks questions about the content. In order for learners to be successful, the course must include the content, whether delivered in class or as pre-work. But what are the assessments really measuring? How much of the content learners remember, and maybe how much of the content they can apply when they return to work?

Training effectiveness, on the other hand, is really an evaluation of whether we achieved the desired outcome. So I ask you, what is the desired outcome for your training: to gain knowledge (new content) or to use the content correctly back in the workplace? The objectives need to reflect the desired outcome in order to determine the effectiveness of training.

What is your desired outcome from training?

Levels of objectives, who knew?

Many training professionals have become familiar with Kirkpatrick’s 4 Levels of Evaluation over the course of their careers, but fewer are acquainted with Bloom’s Taxonomy of Objectives. Yes, objectives have levels of increasing complexity resulting in higher levels of performance. Revised in 2001, the levels were renamed to better describe what’s required of the learner to be successful in meeting the objective. Take note: remembering and understanding are the lowest levels of cognitive load, while applying and analyzing are mid-range. Evaluating and creating are at the highest levels.

If your end in mind is knowledge gained ONLY, continue to use the lower-level objectives. If, however, your desired outcome is to improve performance or apply a compliant workaround in the heat of a GMP moment, your objectives need to shift to a higher level of reasoning in order for the training design to be effective and meet performance expectations. They need to become more performance based: for example, shifting from “recall the steps of the gowning SOP” to “demonstrate aseptic gowning per the SOP without error”. Fortunately, much has been written about writing effective objective statements, and resources are available to help today’s trainers.

Accuracy of the assessment tools

The tools associated with the 4 levels of evaluation can be effective when used for the right type of assessment. For example, Level 1 (Reaction) surveys are very helpful for formative assessments. Level 2 (Learning) assessments are effective in measuring retention and minimum comprehension and go hand in hand with learning-based objectives. But when the desired outcomes are actually performance based, Level 2 knowledge checks need to shift up to become more application oriented, such as “what if” situations and scenarios requiring analysis, evaluation, and even problem solving. Or shift altogether to Level 3 (Behavior) and develop a new level of assessments, such as demonstrations and samples of finished work products.

Trainers are left out of the loop

But today’s trainers don’t always have a developed instructional design skill set. They do the best they can with the resources given, including reading books and scouring the Internet. For the most part, their training courses are decent and the assessments reflect passing scores. But when it comes to Level 4 (Results) impact questions from leadership, it becomes evident that trainers are left out of the business analysis loop and are therefore missing the performance expectations. This is where the gap exists. Trainers build courses based on knowledge / content instead and develop learning objectives that determine what learners should learn. They create assessments to determine whether attendees have learned the content, but this does not automatically confirm learners can apply the content back on the job in various situations under authentic conditions.

Performance objectives drive a higher level of course design

When you begin with the end in mind, namely the desired performance outcomes, the objective statements truly describe what the learners are expected to accomplish. While the content may be the same or very similar, determining whether employees are able to execute post-training requires more thought about the accuracy of the assessment. It must be developed from the performance objectives in order for it to be a valid “instrument”. The learner must perform (do something observable) so that it is evident s/he can carry out the task under real workplace conditions.

To ensure learner success with the assessment, the training activities must also be aligned with the level of the objectives. This requires the design of the training event to shift from passive lecture to active engagement intended to prepare learners to transfer what they experienced in the event back to their workspace.  This includes making mistakes and learning how to recognize that a deviation is occurring. Michael Allen refers to this as “building an authentic performance environment”. Thus, trainers and subject matter experts will need to upgrade their instructional design skills if you really want to succeed with training as intended. Are you willing to step up and do what it takes to ensure training is truly effective? – VB


Allen, M. “Design Better Design Backward,” Training Industry Quarterly, Content Development Special Issue, 2017, p. 17.

Why Knowledge Checks are Measuring the Wrong Thing

When I taught middle school math, tests were used to assess knowledge comprehension and some application, with word problems and a few complex questions requiring logic proofs. Results were captured via a score: a metric, if you will, of how well you answered the questions, and very appropriate in academia.

In our quest for training evaluation metrics, we have borrowed the idea of testing someone’s knowledge as a measure of effectiveness. This implies that a corporate classroom mirrors an educational classroom and that testing means the same thing: a measure of knowledge comprehension. However, professors, colleges, universities and academic institutions are not held to the same results-oriented standard. In the business world, results need to be performance oriented, not knowledge gained.

So why are we still using tests?

Call it a quiz, a knowledge check or any other name; it is still assessing some form of knowledge comprehension. In training effectiveness parlance, it is also known as a Level 2 evaluation. Having the knowledge is no guarantee that it will be used correctly back on the job. Two very common situations occur in the life science arena where “the quiz” and knowledge checks are heavily used: the Annual GMP Refresher and the Read & Understand approach for SOPs.

Life sciences companies are required by law to conduct annual regulations training (GMP Refreshers) so as to remain current. To address the training effectiveness challenge, a quiz / questionnaire / knowledge assessment (KA) is added to the event. But what is the KA measuring? Is it mapped to the course / session objectives, or are the questions so general that they can be answered correctly without having to attend the sessions? Or worse yet, are the questions being recycled from year to year / event to event? What does it mean for the employee to pass the knowledge check or score 80% or better? When does s/he learn of the results? In most sessions, there is no time left to debrief the answers. This is a lost opportunity to leverage feedback into a learning activity. How do employees know if they are leaving the session with the “correct information”?

The other common practice is to include a 5-question multiple-choice quiz as a knowledge check for Read & Understood (R & U) SOPs, especially for revisions. What does it mean if employees get all 5 questions right? That they will not make a mistake? That the R & U method of SOP training is effective? The search function in most e-doc systems is really good at finding the answers. It doesn’t necessarily mean that employees read the entire procedure and retained the information correctly. What does it mean for the organization if human errors and deviations from procedures are still occurring? Does it really mean the training is ineffective?

What should we be measuring?

The conditions under which employees are expected to perform need to be the same conditions under which we “test” them. So it makes sense to train ‘em under those same conditions as well. What do you want/need your employees (learners) to do after the instruction is finished? What do you want them to remember and use from the instruction in the heat of their work moments? Both the design and assessment need to mirror these expectations. And that means developing objectives that guide the instruction and form the basis of the assessment. (See Performance Objectives are not the same as Learning Objectives.)

So ask yourself, when in their day to day activities will employees need to use this GMP concept? Or, where in the employees’ workflow will this procedure change need to be applied? Isn’t this what we are training them for? Your knowledge checks need to ensure that employees have the knowledge, confidence and capability to perform as trained. It’s time to re-think what knowledge checks are supposed to do for you. – VB

Need to write better Knowledge Check questions?  Need to advise peers and colleagues on the Do’s and Don’ts for writing test questions?

Instructional Design: Not Just for Full Time Trainers Anymore

When I left the manufacturing shop floor and moved into training, full time trainers presented in the classroom using a host of techniques and tools, relying on their platform skills to present content. Subject matter experts (or the most senior person) conducted technical training on the shop floor in front of a piece of equipment, at a laboratory station or at a work bench.

For years, this distinction was clearly practiced where I worked. Trainers were in the classroom and SMEs delivered OJT. Occasionally a “full time” trainer would consult with an SME on content or request his/her presence in the room during delivery as a back-up or for the Q & A portion of a “presentation”. It seemed that the boundaries at the time were so well understood that one could determine the type of training simply by where it was delivered.

Training boundaries are limitless today

Today, that’s all changed. No longer confined to location or delivery methods, full time trainers can be found on the shop floor, fully gowned, delivering GMP (Good Manufacturing Practices) content, for example. And SMEs are now in the classroom more each day, with some of the very tools used by full time trainers! What distinguishes a full time trainer from an SME is less important; what is imperative, however, is what defines effective instruction.

Instructional Design is a recognized profession

What goes into good instructional design?

Believe it or not, instructional design (ID) / instructional technology is a degreed program offered at numerous colleges and universities. Underlying the design is a methodology for “good” course design, and really good instructional designers will confess that there is a bit of an art form to it as well. Unfortunately, with shrinking budgets and downsized L&D staffs, there are fewer resources available to develop training materials. Not to mention shrinking timelines for the deliverables. So it makes sense to tap SMEs for more training opportunities, since many are already involved in training at their site. But pasting their expert content into a PowerPoint slide deck is not instructional design. Nor is asking an SME to “deliver training” using a previously created PowerPoint presentation effective delivery.

What is effective design?

To me, effective design is when learners not only meet the learning objectives during training but also transfer that learning experience back to the job and achieve the performance objectives / outcomes. That’s a tall order for an SME, even for full time trainers who have not had course design training. The methodology a course designer follows, be it ADDIE, Agile, SAM (Successive Approximation Model), Gagné’s 9 Events of Instruction, etc., provides a process with steps to facilitate the design rationale and then the development of content, including implementation and evaluation of effectiveness. It ensures that key elements are not unintentionally left out or forgotten about until after the fact, like evaluation / effectiveness or needs assessment. In an attempt to expedite training, these methodology-driven elements are easily skipped without fully understanding the consequences this can have for overall training effectiveness. There is a science to instructional design.

The “art form” occurs when a designer creates visually appealing slides and eLearning scenes as well as aligned activities and engaging exercises designed to provide exploration, practice and proficiency for the performance task back on the job. The course materials “package” is complete when a leader’s guide is also created that spells out the design rationale and vision for delivery, especially when someone else will be delivering the course such as SMEs as Classroom Facilitators.

The Leader’s Guide

Speaker notes embedded at the bottom of the notes pages within PowerPoint slides are not a leader’s guide. While handy for scripting what to say for the slide above, they do not provide ample space for facilitating other aspects of the course, such as visual cues, “trainer only” tips, managing handouts, etc. A well-designed leader’s guide has the key objectives identified and the essential learning points to cover. These learning points are appropriately sequenced, with developed discussion questions to be used with activities, thus removing the need for the facilitator to think on demand while facilitating the activity. This also reduces the temptation to skip over the exercise/activity if s/he is nervous or not confident with interactive activities.

A really good guide will also include how to segue to the next slide and manage seamless transitions to the next topic sections. Most helpful are additional notes about what content MUST be covered, tips about expected responses for activities, and timing comments for keeping to the classroom schedule. Given all the time and effort to produce the leader’s guide, it is wasted if the course designer and the SME as Facilitator do not have a knowledge transfer session. Emailing the guide or downloading it from a SharePoint site will not help the SME follow the guide during delivery unless an exchange occurs in which the SME can begin to mark up his/her copy.

Using previously developed materials

I am not criticizing previous course materials if they were effective. But replacing clip art with new images and updating the slide deck to incorporate the new company background is not going to change the effectiveness of the course unless content was revised and activities were improved. For many SMEs, having a previous slide deck is both a gift and a curse.

While they are not starting with a blank storyboard, there is a tendency to use the deck as-is and try to embellish it with speaker notes, because the original producer of the PowerPoint slides did not include them or, worse, provided no leader’s guide. The SME carries the burden of making content decisions, such as what content is critical and what content can be cut if time runs short. Perhaps even more crucial is how to adapt content and activities to different learner groups or off-shift needs. SMEs who attend an HPISC ID basics course learn how to use design checklists for previously developed materials.  These checklists allow them to confidently assess the quality of the materials and justify what needs to be removed, revised or added, thus truly upgrading previously developed materials.

What’s so special about SMEs as Course Designers?

They have expertise and experience and are expected to share it by training their peers. But now the venue is the classroom as well. It’s training on course design methodology that is needed. SMEs, and most trainers, do not automatically have this knowledge. Some develop it by reading A LOT, attending well-designed courses, and over time through trial and error and painful feedback. The faster way is to provide funds to expose SMEs as Course Designers at least to how to effectively design learning experiences so that they can influence the outcome of the objectives. This is management support for SMEs as Trainers. -VB