Is Documenting Your OJT Methodology Worth It?

The short answer is yes!  In this blog post, I will share how I came to that answer and make the case for you to say yes as well.  I will also explore the challenges of documenting OJT, as promised in a previous blog post.

Years ago, during the Qualified Trainers (QT) workshop, I would ask QTs the following two questions:

  • What is OJT?
  • How do you deliver OJT?

Invariably, everyone answered the first question the same way: on-the-job training.  I would then ask the attendees to form groups and discuss the second question with their peers.  I purposely mixed the groups so that manufacturing trainers and QC analytical laboratory trainers were equally represented, and a fascinating exchange occurred.  During the debriefing, we learned that there was a lot of variability in how the trainers were conducting OJT.  How could that be when they all answered the first question so consistently? “Well,” one of them said, “we don’t have a procedure on it, so I just go about it the way I think it should be done.  I’ve had success so far, so I keep doing what I’ve done in the past.”

How many ways are there to train on this procedure?

In the blog post “When SMEs have too much secret sauce,” I share the story of a Director of Operations who had to find out from an FDA Investigator that his SMEs were teaching techniques for a critical process procedure that 1) were not written down or approved (aka their secret sauce) and 2) were not at all consistent with each other.  This led to an FDA-483 observation, a high-visibility corrective action project with global impact, and a phone call to HPISC.

To get consistent OJT, you need to define the process and approve the content QTs will use to deliver OJT.  I’m not proposing a cookie-cutter approach that makes all QTs the same.  I am advocating a clear distinction between each step / stage / phase so that both the learner and the QT know exactly where they are in the process, what is expected of them in that step, and why it is needed.  This is no longer “just go follow Joe or Jane around,” which is how traditional OJT happened in the past.

Declaring your OJT Model

I’m less focused on what you call these steps or how many there are.  I am looking to see how these steps move a new hire along the journey to becoming qualified prior to being released to task.  For me, this is what makes OJT truly structured.  That model needs to be captured in a standard operating procedure or embedded in a training procedure so that all employees are aware of how they will receive their OJT.  The last step has to include the final evaluation of performance, not to be confused with demonstrating proficiency as in practice sessions.
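
If it helps to picture a declared model, here is a purely illustrative sketch of one as an ordered list of named steps. The step names and expectations below are hypothetical examples, not a prescribed model; your SOP defines the real one.

    # Illustrative only: a declared OJT model as an ordered list of named steps.
    # The step names and expectations are hypothetical, not a prescribed model.
    OJT_MODEL = [
        ("READ",        "Learner reads the SOP before any hands-on activity."),
        ("DISCUSS",     "QT introduces the SOP and answers the learner's questions."),
        ("DEMONSTRATE", "QT performs the task while explaining each critical step."),
        ("PRACTICE",    "Learner performs the task under QT observation, with feedback."),
        ("QUALIFY",     "Final evaluation of performance (the Q-Event)."),
    ]

    # Both the learner and the QT can see exactly where they are in the process.
    for number, (step, expectation) in enumerate(OJT_MODEL, start=1):
        print(f"Step {number} {step}: {expectation}")

Whatever form it takes, the point is that the sequence is declared once, approved, and visible to everyone, rather than living in each QT’s head.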

How many times does a learner have to practice an SOP before s/he is ready for the qualification event (Q-Event)? 

The nature of the SOP or the complexity of the task at hand determines this.  But how do I proceduralize that, you ask?  It starts by not arbitrarily picking the magic number 3.  I have engaged in countless discussions about the exhaustive list of exceptions to forcing the rule of three practice sessions.  And some QTs will argue for more than three sessions, especially when the procedure is performed infrequently.  It’s not about the number of times, folks.

Documenting OJT sessions presents a challenge for many trainers and document control staff.  Are we required to capture every OJT session or just one?  What is considered an OJT session?  My favorite lament is, “Do you know what that will do to our database, not to mention the amount of paperwork it would create!” A workaround to all these questions and concerns is to capture at least one session along the progression of each OJT step, as per the OJT Model, thus documenting adherence to the procedure.  For example, the first step is to Read SOP 123456.  As mentioned in other HPISC blogs and white papers, we are pretty good at this already.  The next step is to discuss / introduce the SOP, so capture when that discussion occurred if it’s different from Step 1 READ.  The “trainer demonstrates” portion can also be captured.  Where it gets tricky is when we ask the learner to demonstrate and practice.  Why not capture the last instance, when it is confirmed the learner is ready to qualify?  If we keep it simple and document that our learners have experienced each step/stage, then we are complying with our OJT methodology and minimally documenting their OJT progression.
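
As a hypothetical sketch of how light that minimal documentation can be, the check before scheduling a Q-Event boils down to: does this learner have at least one dated entry for every step of the model? The step names, record fields, and function below are illustrative assumptions; the real records live in your controlled forms or LMS.

    from datetime import date

    # Hypothetical OJT step names (mirroring a declared model) and one learner's session log.
    OJT_STEPS = ["READ", "DISCUSS", "DEMONSTRATE", "PRACTICE", "QUALIFY"]

    sessions = [
        {"step": "READ",        "date": date(2024, 3, 4),  "qt": "J. Smith"},
        {"step": "DISCUSS",     "date": date(2024, 3, 5),  "qt": "J. Smith"},
        {"step": "DEMONSTRATE", "date": date(2024, 3, 7),  "qt": "J. Smith"},
        {"step": "PRACTICE",    "date": date(2024, 3, 21), "qt": "J. Smith"},  # last practice before the Q-Event
    ]

    def undocumented_steps(sessions, steps):
        """Return the pre-qualification steps that still lack at least one documented session."""
        documented = {s["step"] for s in sessions}
        return [step for step in steps if step != "QUALIFY" and step not in documented]

    missing = undocumented_steps(sessions, OJT_STEPS)
    print("Ready to schedule the Q-Event." if not missing else f"Still undocumented: {missing}")

One dated entry per step is enough to show adherence to the procedure without recording every single practice run.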

Is one qualification session enough to pass?

At some point in these documentation discussions, we have to let the QT evaluate the outcome of the learner’s demonstration(s).  Does the performance meet “business as usual” expectations?  If it does, the learner is ready to qualify in order to perform independently.  If not, feedback is provided and the learner undergoes more practice.  How many times is enough? Until both the learner and the QT are confident that s/he is not going to have an operator-error deviation a week after going solo.  The QT is ultimately the one who has to assess progress and determine either that “with a few more sessions” the learner will get this, or that s/he may never get it and it’s time to have a discussion with the supervisor.

How do you know if someone is qualified to perform a task?

Ideally, the answer would be because we can look it up in our LMS history.  That, of course, depends on how well the critical steps and behaviors are captured in the documentation tool. The tool is supposed to help the QT be as objective as possible and consistently evaluate performance as demonstrated. In the article “A Better Way to Measure Soft Skills,” author Judith Hale explains the difference between a checklist and a rubric.

            “Checklists only record whether a behavior occurred, though, and not the quality of the behavior.  Rubrics, on the other hand, measure how well the learner executed the behavior.” (p. 62)

Yes, but was it tolerable, adequate or exemplary?

What I typically see are checklists with varying levels of tasks, steps, and/or behaviors and columns for Yes, No, and Comments.  What I don’t see is a column to mark how well the learner performed!  Is it enough to mark Yes or No for each item, since most Q-Events are Pass or “needs more practice”?  Maybe.  But consider the following situation.  A human-error deviation has occurred, and the LMS indicates the technician has earned qualified status.  The document used to qualify this individual shows all Yeses.  Yes, s/he was able to demonstrate the step, critical task, and/or behavior, but what we don’t know is how well.  Are we to assume that No means “No, not at all” and Yes means performed “Well,” or was it “As Expected” or “Adequate,” or maybe in this case “Sort of”?

An additional column describing what it means to score low, medium, or high, or in our situation Poor, Adequate, As Expected, and even Exemplary, could provide much-needed insight for the root cause analysis and investigation that will follow this deviation.  It provides a level of detail about the performance that goes beyond Yes, No, or Comment.  In most checklists I’ve reviewed, the comments column is hardly ever used.
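
As a purely illustrative sketch, a rubric item can carry the observed performance level alongside the old Yes/No flag. The level labels follow the example above (Poor, Adequate, As Expected, Exemplary); the task, descriptors, and structure are hypothetical, not a validated form.

    # Hypothetical rubric item: records not just whether the step was performed, but how well.
    LEVELS = ["Poor", "Adequate", "As Expected", "Exemplary"]

    rubric_item = {
        "step": "Aseptically connect the transfer line",   # example critical task being observed
        "performed": True,                                  # the traditional Yes/No checklist column
        "level": "Adequate",                                # how well it was performed
        "descriptors": {
            "Poor": "Needed QT intervention to complete the step.",
            "Adequate": "Completed the step with minor prompting.",
            "As Expected": "Completed the step independently, per the SOP.",
            "Exemplary": "Completed independently and explained the why behind each action.",
        },
        "comments": "",
    }

    assert rubric_item["level"] in LEVELS
    print(f"{rubric_item['step']}: {rubric_item['level']} ({rubric_item['descriptors'][rubric_item['level']]})")

Six months later, an investigator reading “Adequate: completed the step with minor prompting” has far more to work with than a checkmark in a Yes column.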

In future posts, I will blog about what the QT signature means.  Until then, is documenting your OJT methodology worth it?  What is your answer? – VB  

Hale, J. “A Better Way to Measure Soft Skills.” TD, ATD, August 2018, pp. 61-64.

The Journey of a New Hire to Qualified Employee: What really happens at your company?

After weeks if not months of waiting for your new hire, she is finally here, finishing up first-day orientation. Day 2, she’s all yours. Are you excited or anxious? The LMS printout of training requirements is overwhelming, even for you. The bottom-line question running through your mind: when can she be released to task? Isn’t there a faster way to get through this training, you ask? There is. It is called S-OJT.

Structured on-the-job training (S-OJT) is an organized and planned approach for completing training requirements. Yet many line managers want their trainees now. Ironically, the faster you “push” trainees through their training matrix, the slower the learning curve. This in turn often leads to more errors, deviations, and quite possibly CAPA investigations for numerous training incidents. It’s a classic case of pay now or pay later.

This proactive vs. reactive dilemma is not new. Traditional OJT, aka “follow Joe around,” looks like a win-win for everyone on the surface. The new hire gets OJT experience, an SME is “supervising” for mistakes, and supervisors are keeping up with the production schedule. So what’s wrong, you ask?

“[S-OJT] is the planned process of developing task-level expertise by having an experienced employee train a novice employee at or near the actual work setting.” Jacobs & Jones, 1995

After six months or so, the trainee isn’t new anymore, and everyone “expects” your new employee to be fully qualified by then, with no performance issues and no deviations resulting from operator error. Without attentive monitoring of the trainee’s progress, the trainee is at the mercy of the daily schedule.  S/he is expected to dive right into whatever process, or part of the process, is running that day, without regard for where s/he is on the learning curve.  The assigned SME, or perhaps the “buddy” for the day, is tasked not only with performing the procedure correctly but also with explaining what he’s doing and why it may be out of sequence in some cases.  The burden of the learning gap falls to the SME, who does his best to answer the why.

The structured approach puts the trainee’s needs center stage. What makes sense for him/her to learn, and when? The result is a learning plan individualized for this new hire that includes realistic time frames. Added to the plan is a Qualified Trainer who can monitor the progression toward more complex procedures and increase the likelihood of first-time qualification success. Still too much time to execute? How many hours will you spend investigating errors, counseling the employee, and repeating the training? Seems worth it to me. – VB

You may also like: Moving Beyond R & U SOPs

Jacobs, R.L. and Jones, M.J. Structured On-the-Job Training: Unleashing Employee Expertise in the Workplace. San Francisco: Berrett-Koehler, 1995.

What’s Your Training Effectiveness Strategy? It needs to be more than a survey or a knowledge check

When every training event is delivered using the same method, it’s easy to standardize the evaluation approach and the tool. Just answer these three questions:

  • What did they learn?
  • Did it transfer back to the job?
  • Was the training effective?

In this day and age of personalized learning and engaging experiences, one-size-fits-all training may be efficient for an organizational rollout, but it is not the most effective for organizational impact or even a change in behavior. The standard knowledge check can indicate how much they remembered. It might be able to predict what will be used back on the job. But evaluate how effective the training was? That’s asking a lot from a 10-question multiple-choice/true-false “quiz.” Given the complexity of the task or the significance of the improvement for the organization, such as addressing a consent decree or closing out a warning letter, it would seem that allocating budget for proper training evaluation techniques would not be challenged.

Do you have a procedure for that?

Perhaps the sticking point is explaining to regulators how decisions are made and using what criteria. Naturally, documentation is expected, and this also requires defining the process in a written procedure. It can be done. It means being in tune with training curricula, being aware of the types of training content being delivered, and recognizing the implications of the evaluation results. And, of course, following the execution plan as described in the SOP.  Three central components frame a Training Effectiveness Strategy: Focus, Timing, and Tools.

TRAINING EFFECTIVENESS STRATEGY: Focus on Purpose

Our tendency is to look at the scope (the what) first. I ask that you pause long enough to consider your audience, identify your stakeholders, and determine who wants to know what. This analysis shapes the span and level of your evaluation policy. For example, C-Suite stakeholders ask very different questions about training effectiveness than participants do.

The all-purpose standard evaluation tool weakens the results and disappoints most stakeholders. While it can provide interesting statistics, the real question is what will “they” do with the results? What are stakeholders prepared to do except cut the training budget or stop sending employees to training? Identify what will be useful to whom by creating a stakeholder matrix.
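
A stakeholder matrix can be as simple as a mapping from each stakeholder group to the question they actually care about and the evidence that would answer it. The groups, questions, and evidence below are hypothetical placeholders for illustration only.

    # Hypothetical stakeholder matrix: who wants to know what, and what evidence would satisfy them.
    stakeholder_matrix = {
        "Participants":       {"question": "Was the session worth my time?",             "evidence": "Post-event feedback survey"},
        "Line supervisors":   {"question": "Can my people perform the task correctly?",  "evidence": "Observed performance demonstrations"},
        "Quality/Compliance": {"question": "Can we show regulators the process worked?", "evidence": "Qualification records and deviation trends"},
        "C-Suite":            {"question": "Did business performance improve?",          "evidence": "KPIs before vs. after the program"},
    }

    for group, row in stakeholder_matrix.items():
        print(f"{group}: {row['question']} -> {row['evidence']}")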

Will your scope also include the training program (aka the Training Quality System), especially if it is not included in the Internal Audit Quality System? Is the quality system designed efficiently enough to process feedback and make the necessary changes that result from the evaluation findings? Assessing how efficiently the function performs is another opportunity to improve the workflow by reducing redundancies, thus increasing form-completion speed and humanizing the overall user experience. What is not in scope? Is it clearly articulated?

TRAINING EFFECTIVENESS STRATEGY: Timing is, of course, everything

Your strategy needs to include when to administer your evaluation studies. With course feedback surveys, we are used to administering them immediately afterward; otherwise, the return rate drops significantly. For knowledge checks, we also “test” at the end of the session. Logistically it’s easier to administer because participants are still in the event, and we also increase the likelihood of higher “retention” scores.

But when does it make more sense to conduct the evaluation? Again, it depends on what the purpose is.

  • Will you be comparing before and after results? Then baseline data needs to be collected before the event begins, e.g., the current set of Key Performance Indicators or performance metrics.
  • How much time do the learners need to become proficient enough for the evaluation to be accurate? Immediately after, 3 months later, or, realistically, 6 months after?
  • When are metrics calculated and reported? Quarterly?
  • When will they be expected to perform back on the job?

Measuring Training Transfer: 3, 6 and maybe 9 months later

We can observe whether a behavior occurs and record the number of people who are demonstrating the new set of expected behaviors on the job. We can evaluate the quality of a work product (such as a completed form or executed batch record) by recording the number of people whose work product satisfies the appropriate standard or target criteria. We can record the frequency with which the target audience promotes the preferred behaviors in dialogue with peers and supervisors and in their observed actions.
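
As a hypothetical sketch of the arithmetic involved, the transfer measure can be as plain as the share of observed employees whose work product meets the target criteria at each follow-up checkpoint. The counts below are invented purely for illustration.

    # Hypothetical transfer-of-training readout: share of observed employees whose work product
    # met the target criteria at each follow-up checkpoint. All numbers are invented.
    observations = {
        "3 months": {"observed": 20, "met_criteria": 12},
        "6 months": {"observed": 18, "met_criteria": 15},
        "9 months": {"observed": 17, "met_criteria": 16},
    }

    for checkpoint, counts in observations.items():
        rate = counts["met_criteria"] / counts["observed"]
        print(f"{checkpoint}: {counts['met_criteria']}/{counts['observed']} ({rate:.0%}) met the work-product standard")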

It is possible to do this; however, the time, people, and budget needed to design the tools and capture the incidents are at the core of management support for a more rigorous training effectiveness strategy. How important is it to the organization to determine whether your training efforts are effectively transferring back to the job? How critical is it to mitigate the barriers that get in the way when the evaluation results show that performance improved only marginally? It is cheaper to criticize the training event(s) than to address the real root cause(s). See Training Does Not Stand Alone (Transfer Failure section).

TRAINING EFFECTIVENESS STRATEGY: Right tool for the right evaluation type

How will success be defined for each “training” event or category of training content? Are you using tools/techniques that meet your stakeholders’ expectations for training effectiveness? If performance improvement is the business goal, how are you going to measure it? What are the performance goals that “training” is supposed to support? Seek confirmation on what will be accepted as proof of learning, evidence of transfer to the workplace, and identification of leading indicators of organizational improvement. These become the criteria by which the evaluation has value for your stakeholders. Ideally, the choice of tool should be decided after the performance analysis is discussed and before content development begins.

Performance Analysis first; then possibly a training needs analysis

Starting with a performance analysis recognizes that performance occurs within organizational systems. The analysis provides a three-tiered picture of what is encouraging or blocking performance for the worker, the work tasks, and/or the workplace, and what must be in place at these same three levels in order to achieve sustained improvement. The “solutions” are tailored to the situation based on the collected data, not on an assumption that training is needed. Otherwise, you have a fragment of the solution, high expectations for solving “the problem,” and reliance on the evaluation tool to provide effective “training” results. Only when the cause analysis reveals a true lack of knowledge will training be effective.

Why aren’t more Performance Analyses being conducted?

For starters, most managers want the quick fix of training because it’s a highly visible activity that everyone is familiar and comfortable with. The second possibility lies in the inherent nature of performance improvement work. Very often the recommended solution resides outside the initiating department and requires the cooperation of others.  Would a request to fix someone else’s system go over well where you work? A third and most probable reason is that it takes time, resources, and a performance consulting skill set to identify the behaviors, decisions, and “outputs” that are expected as a result of the solution. How important will it be for you to determine training effectiveness for strategic corrective actions?

You need an execution plan

Given the variety of training events and levels of strategic importance within your organization, one standard evaluation tool may no longer be suitable. Does every training event need to be evaluated at the same level of rigor? Generally speaking, the more strategic the focus, the more tedious and time-consuming the data collection will be. Again, review your purpose and scope for the evaluation. Refer to your stakeholder matrix and determine which evaluation tool(s) are better suited to meet their expectations.

For example, completing an after-training survey for every event is laudable; however, executive leadership values this data the least. According to Jack and Patricia Phillips (2010), executives most want to see business impact. Tools like balanced scorecards can be customized to capture and report on key performance indicators and meaningful metrics. Develop your plan wisely, start with a representative sample, and seek stakeholder agreement to conduct the evaluation study.
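
One way to capture the execution plan is a simple mapping from each category of training to the evaluation level and tool your stakeholders agreed on. The categories, levels, and tools below are hypothetical examples; your SOP and stakeholder matrix define the real plan.

    # Hypothetical execution plan: which evaluation rigor and tool applies to which category of training.
    evaluation_plan = {
        "New-hire orientation":       {"level": "Reaction (Level 1)", "tool": "Post-event survey"},
        "Annual GMP refresher":       {"level": "Learning (Level 2)", "tool": "Knowledge check with scenario questions"},
        "Critical process SOP (OJT)": {"level": "Behavior (Level 3)", "tool": "Observed qualification with a rubric"},
        "Consent-decree remediation": {"level": "Results (Level 4)",  "tool": "Balanced scorecard of KPIs and deviation trends"},
    }

    def required_evaluation(category):
        """Look up the agreed evaluation level and tool for a category of training."""
        return evaluation_plan.get(category, {"level": "TBD", "tool": "Review with stakeholders"})

    print(required_evaluation("Critical process SOP (OJT)"))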

Life after the evaluation: What are you doing with the data collected?

Did performance improve? How will the evaluation results change future behavior and/or influence design decisions? Or perhaps the results will be used for budget justification, support for additional programs, or even a corporate case study? Evaluation comes at the end, but in reality it is continuous throughout. Training effectiveness means evaluating the effectiveness of your training: your process, your content, and your training quality system. It’s a continuous and cyclical process that doesn’t end when the training is over. – VB

 

Jack J. Phillips and Patricia P. Phillips, “How Executives View Learning Metrics”, CLO, December 2010.

Recommended Reading:

Jean-Simon Leclerc and Odette Mercier, “How to Make Training Evaluation a Useful Tool for Improving L&D”, Training Industry Quarterly, May-June 2017.

 

Performance objectives are not the same thing as learning objectives

Some folks might say that I’m mincing words, but I beg to differ. The expectation for training delivery is that participants learn the content (aka the learning objectives) and then use or apply it back on the job, thus improving departmental / organizational performance. So, do you provide the training and then keep your fingers crossed that they can deliver on their performance objectives, or do you ensure that employees can perform after the event is long over?

“Employee Qualification” is a successful program in Life Sciences companies, and they have been deploying variations of it for several years now. The essence of it is an observed assessment of performance by a qualified OJT Trainer. Simple in theory, yes; nonetheless, implementation is a bit more structured. See Moving Beyond R & U SOP Training.

Employee Qualification is the ultimate Level 3 Training Evaluation

Referring to the well-known Kirkpatrick Model of Evaluation, Level 3 is behavior change. The focus of Employee Qualification is the employee’s ability to apply the knowledge and skill learned during OJT back on the job, in the workplace setting.  Actual performance is the ultimate assessment of learning transfer. If an employee is performing the job task correctly during a formal performance demonstration, this meets the expectation for successful training.

Yet, according to the 2009 ASTD research study “The Value of Evaluation,” only 54.6% of respondents indicated that their organization conducts Level 3 evaluations.  The top technique used is follow-up surveys with participants (31%), while on-the-job observation was fourth (23.9%).  If on-the-job assessment is the “ultimate” measure of transfer, then why isn’t it used more frequently?

Post-training assessments are time- and labor-intensive.  According to the respondents, about a quarter (25.2%) of their learning programs are evaluated for behavior.  But for organizations that have to meet compliance requirements (46.9% of survey respondents), documenting training effectiveness has fast become one of the top expectations of external regulators.  No longer satisfied with LMS history records alone, many auditors are now asking to see the training effectiveness strategies for required compliance training.

Validating Your Training Effectiveness

Being “SOP qualified” is the demonstrated ability of an employee to accurately and consistently perform a task or Standard Operating Procedure, independent of his OJT coach, to acceptable quality standards.  Also see “From training logs to OJT Checklists and beyond.” An active Employee Qualification program also verifies that the training content, in this case the SOP, accurately describes how to execute the steps for the task at hand. If either is not done well, the qualification is stopped and a cause analysis is conducted to examine contributing factors. Success starts with an effective training system.

Oh, but now we have Curricula!

Having training curricula and matrices is a huge step toward identifying what employees are required to learn. LMSes are also helpful for recording history, tracking overdue requirements, and generating reconciliation reports. More sophisticated databases can provide functionality for quizzes. A quiz, test, or knowledge check can measure knowledge retention, and possibly comprehension if it includes challenge questions about real workplace situations, which makes them a popular Level 2 evaluation tool. Be mindful, though, of the danger of “teaching to the test” or of learners using the search function within the e-doc system to find the answer in the SOP. Most of the knowledge retained is flushed within hours of “passing the test” or satisfying the LMS-generated quiz. So having a quiz is no guarantee that the knowledge transforms itself into a skill set back on the job.

The true measure of effectiveness

The use of “100% completed” reports, a commonly used data point from the LMS, is a metric for completeness only. It does not address transfer of learning into performance back on the job. Neither does a 5-question multiple-choice quiz designed to measure the achievement of learning objectives.  The true measure of effective OJT is an observed demonstration of the performance objective(s). Isn’t that what effective training is supposed to mean: a change in behavior? – VB

What happens when the performance demonstration becomes more of a “this is how I do it” discussion instead of an actual demonstration?

The Value of Evaluation: Making Training Evaluations More Effective. An ASTD Research Study. ASTD, 2009.