Is an Awareness Training Session enough for successful User Adoption?

Note: This blog is part of an ongoing series. Blog # 7 – Go-Live Strategy introduces the 5 Steps.

Go-Live: Step 3 – Develop the Rollout Timeline and Training Schedules.

A redesign of the quality system SOPs more than likely resulted in significant changes to routine tasks. What changed, what was removed, what was added that is truly new, and what stayed the same? Simply reading the newest version in an e-document platform will not suffice as effective training. Nor will reading the change history page or reviewing a marked-up version of the SOP for the highlighted changes. When your previously delivered change management sessions already covered this level of detail, the content of your training session can focus more on the new process. If successful user adoption is tied to the effectiveness check of your CAPAs, then the project team needs to discuss what the training rollout will look like.

Identifying Critical Users

From the stakeholder analysis of Affected Users, consider who is directly and who is indirectly affected by the change in responsibilities. Which users are most critical to success in adhering to the new steps and forms? I refer to them as the Primary Users, the ones who are directly affected. They are usually defined by their functional responsibility rather than by department title or business unit. Another way to determine this is to review the responsibilities section of the new SOPs. Who in your organization are these people? In this review, are there supporting and ancillary responsibilities tied to the steps and forms? I call this group the Secondary Users, the ones who are indirectly affected. Both sets of users need to be fully trained in their tasks and responsibilities to ensure that the new system functions per the SOPs.
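To make the learner matrix tangible, here is a minimal sketch in Python. The SOP titles and role names are invented for illustration, not taken from any client's system; the point is simply that the Primary/Secondary classification can fall straight out of the responsibilities sections of the revised SOPs.

```python
# A sketch of a learner matrix (SOP titles and roles are hypothetical).
# Primary Users hold direct responsibilities in the revised SOPs;
# Secondary Users hold supporting or ancillary responsibilities.

# Responsibilities pulled from the "Responsibilities" section of each new SOP
sop_responsibilities = {
    "SOP-021 Deviation Management": {
        "direct": ["Deviation Investigator", "QA Reviewer"],
        "supporting": ["Department Supervisor", "Document Coordinator"],
    },
    "SOP-022 CAPA Management": {
        "direct": ["CAPA Owner", "QA Reviewer"],
        "supporting": ["System Owner"],
    },
}

def build_learner_matrix(responsibilities):
    """Classify each role as a Primary or Secondary User per SOP."""
    matrix = {}
    for sop, roles in responsibilities.items():
        for role in roles["direct"]:
            matrix.setdefault(role, {})[sop] = "Primary"
        for role in roles["supporting"]:
            # Don't downgrade a role already flagged Primary for this SOP
            matrix.setdefault(role, {}).setdefault(sop, "Secondary")
    return matrix

for role, sops in build_learner_matrix(sop_responsibilities).items():
    print(role, sops)
```

However the matrix is captured, a spreadsheet works just as well; what matters is that someone actually generates it before the rollout is scheduled.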

The Training Rollout

Training rollouts need to meet three different levels of user needs.

The overarching question to address is whether everyone has to attend the same training. One awareness training session for both groups is extremely efficient, but not nearly as effective as training sessions tailored to each level of user need. Within the indirectly affected group are the senior leadership team members.

An executive briefing is more likely to be attended by these folks when it provides only a summary of what they need to know. What does the general population need to know about these changes? Keep this short and to the point. It's the Primary Group of Users who need not only to be made aware of the changes but also to know how to execute the new forms. Yes, this session is a bit longer in duration than Awareness Training, and it should be. These users bear more responsibility for correct execution.

Simply reading the newest version in an e-document platform will not suffice as effective training. Nor will reading the change history page or reviewing a marked-up version of SOP for the highlighted changes.

Vivian Bringslimark, HPIS Consulting, Inc.

Who is the Trainer?

Depending on the level of involvement of the Design Team members, at a minimum the following questions need to be addressed:

  • Should the Project Manager deliver any sessions?
  • Will we use a Train-the-Trainer approach?
    • Where each design team member is assigned to deliver Awareness Training for their area of remit.
    • Do these members have the platform skills to lead this session?
    • Will they be provided with a slide deck already prepared for them?
  • Is this solely the responsibility of the QA Training Department?
    • Provided s/he was a member of the Design Team.
  • What kind of training schedule will we need?
    • Will we provide three different tiers to meet the needs of our Affected Users?

Did the Training Roll Out Meet the Learning Needs of Primary Users?

While the revised SOPs were in the change control queue, the design team for this client met to discuss the difference between Awareness Training and Primary Users Training. Briefly, the differences were:

Difference between Awareness Training and Primary Users Training Content

Awareness Training is more knowledge-based. It tends to be information sharing and very passive until the Q & A session. A knowledge check at the end is no assurance that there will not be any deviations. Primary Users Training is intended to focus on the behavioral changes that will be needed for adoption back in the department. The session can be a workshop with real examples generated by the users from their own concerns and questions.

The design team concluded that the differences were significant enough to warrant two different classes based on the type of user.  The risk of deviations was too great and would send a negative message to the site leadership team about the new process design.  Early adopters were not at risk because they were already trained via their participation in the design team. 

The task of developing the Awareness Training and Primary Users materials was assigned to the instructional designer on the team. Attending the Awareness Training would not be a substitute for participating in the Primary Users class. However, attending the Primary Users class would automatically satisfy the Awareness Training requirement.

Given that condition, the Primary Users materials also included similar content from the Awareness Training and then expanded the level of detail to include the sequence of steps for executing associated new and revised forms.  The Primary Users class was designed to provide more in-depth discussion of the changes and to provide adequate time to become familiar enough with their responsibilities to minimize disruption on the day the procedures and forms go into effect. 

The system owner then scheduled all users to attend the Awareness Training. He concluded that there would be too much confusion about which class to attend. Since Awareness Training was being delivered first due to a very short Go-Live window, it would be better that everyone received the same training, or so he thought. In addition, the system owner felt that all employees were actually Primary Users and would not attend the training session if it went past 60 minutes. As a result, Primary Users were never identified, and no learner matrix was generated. No one asked for more training until weeks after the SOPs and forms went into effect.

A knowledge check at the end is no assurance that there will not be any deviations.

Vivian Bringslimark, HPIS Consulting, Inc.

But the Primary Users class was never scheduled; users who had questions or concerns stopped by the department for 1-on-1 help instead. For weeks, the staff was interrupted from their daily tasks and expected to conduct impromptu help sessions. The intent of the Primary Users class was to provide a hands-on training workshop for the impacted documents so that users would not have to stop and go find someone for help. The slide deck for Primary Users was eventually uploaded to a shared drive. When the department got tired of being interrupted, the system owner put out a general email with the link and redirected late adopters to it. The slide deck was not designed to be a substitute manual.

Had the design team followed through with identifying lead champions, the Primary Users training workshop would have been delivered to a small group of users who could then have fielded questions from their colleagues. The original design team members did not agree to be change champions or trainers for their departments. They complained that their workload was already heavy and that they had no time to address implementation questions. That was for the training department to deliver, they concluded. They then reported back to management that their direct reports could not attend a second session on the revised procedures.

Stay tuned. The next blog covers Steps 4 and 5 of the Go-Live Strategy and wraps up this series. Become a subscriber so you don't miss any more blogs.


Investigations 101: Welcome Newbies

So the event description is clarified and updated. The assigned investigator is up to speed on the details of "the story". What happens next? What is supposed to happen? In most organizations, there is a rush to find the root cause and get on with the investigation. A novice investigator will be anxious to conduct the root cause analysis (RCA). S/he can easily make early root-cause mistakes, like grabbing the first contributing factor as the root cause without the discipline to explore all possible causes first.

Thus it makes sense to get Investigators trained in root cause analysis. Unfortunately for many, this is the ONLY training they receive, and it is not nearly enough. RCA is a subset of the investigation process, and the training agenda is heavy on the tools, which is perfectly appropriate. But when do they receive training on the rest of the investigation stages, like determining CAPA significance and writing the report? Given the number of FDA-483 observations and warning letter citations for inadequate investigations that continue to be issued, I'd say we need more training beyond RCA tools. As a result, we are starting to see FDA "recommendations" for trained and QUALIFIED Investigators. This means training not only in how to conduct a root cause analysis, but also in the Deviation and CAPA process.

This goes beyond e-signing Read-and-Understood procedures in your LMS

E-doc systems are a great repository for storing controlled documents. Searching for SOPs has become very efficient. In terms of documenting "I've read the procedure", they are very proficient, and there's no lost paperwork anymore! But learning isn't complete if we've merely read through the steps. We also need to remember it. At best, we remember that we read it and we know where to find it when we need to look something up. Does that translate to understood? Maybe for some.

To help us remember the actual steps, we need to do something with the knowledge gained.  This is where the responsibilities section of the procedure tells us who is to do what and when.  But the LMS doesn’t include structured and guided practice as part of the assigned curricula.  Unless your equipment and complex procedures are also flagged for Structured OJT and possible Qualification Events as in most Operations groups, practice happens incidentally as part of on the job experience.  Feedback is typically provided when there’s a discrepancy or a deviation.  This is reactionary learning and not deliberate practice. 

If we want Deviation Investigators to understand and remember their tasks (procedures) so they can conduct investigations and write reports that get approved quickly, then we need to design learning experiences that build those skills and ensure accurate execution of assigned roles and responsibilities for Deviations and CAPAs. They need an interactive, facilitated, learner-centered qualification program.

More than just studying a set of procedures and filling out related forms

It's about putting the learners, the assigned SMEs as Investigators and QA Reviewers, at the center of the whole learning experience. It's about empowering them to take charge of their own learning by enabling them to experience real work deviation/CAPA investigations and to deliberately practice new skills in a safe environment with the assistance of adult learning facilitator(s) and coaches, thereby bridging the "R & U Only Knowledge Gap".

The look and feel of the program follows a Learn-By-Doing approach: customized learning content, interactive techniques, and more hands-on opportunities to engage with real work applications. Learners immediately use the knowledge and tools in class and in their homework assignments, which strengthens the connections for knowledge transfer.

This requires a shift from the traditional mindset of a classroom course, where the emphasis is on the expertise of the instructor and the content. The learners and their learning experience become the priority. The instructor's task isn't to deliver the content; it's to help learners acquire knowledge and skill.

Shifting the priority to a more engaging Learning Experience

Qualifying SMEs as Deviation Investigators Program

This unique curriculum uses a variety of teaching methods fostering more balanced and meaningful instruction over the duration of the program.  It is not a single course or 2-day training event.  It is delivered in modules, with weekly “homework” assignments consisting of active deviations and open investigations.

“Spaced learning works, in part, because the brain needs resting time to process information, create pathways to related information, and finally place the new information into long-term memory – the main objective of learning.” (Singleton, Feb 2018, p.71). 

Each module revisits the Investigation Stages and builds on the prior lessons by reviewing and debriefing the homework. It then expands on that content and introduces new lessons, with increasing intensity in the activities and assignments.
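As a rough illustration of how such a spiral schedule might be laid out, here is a small sketch. The module topics and homework assignments are hypothetical placeholders, not the actual HPISC curriculum; they only show the pattern of debrief, revisit, and escalate.

```python
# A sketch of a spiral-curriculum module plan (module titles are hypothetical).
# Each module debriefs the prior homework, revisits the investigation stages,
# and raises the intensity of the activities and assignments.

modules = [
    {"week": 1, "topic": "Event description and triage",
     "homework": "Clarify the event description of one open deviation"},
    {"week": 2, "topic": "Root cause analysis tools",
     "homework": "Run an RCA on the same deviation; list ALL candidate causes"},
    {"week": 3, "topic": "CAPA significance and mapping",
     "homework": "Draft CAPAs that map back to the confirmed root cause(s)"},
    {"week": 4, "topic": "Writing and defending the report",
     "homework": "Write the investigation report for peer and QA critique"},
]

for m in modules:
    # Each session opens by debriefing the previous week's homework
    print(f"Week {m['week']}: debrief prior homework, then {m['topic']}")
    print(f"  Assignment: {m['homework']}")
```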

By design, the program provides time and space to interact with the content as opposed to delivering content dumps and overwhelming the newbies; short-term memory gets maxed out and learning shuts down. The collaborative participation and contributions from the Investigators and Program Facilitator(s) result in better overall engagement. Everyone is focused on accomplishing the goal of the program; not just checking the box for root cause analysis tools.

The goal of the program is to prepare subject matter experts to conduct, write, and defend investigations for deviations and CAPAs. The program also includes the QA reviewers who will review, provide consistent critique of, and approve deviations, investigations, and CAPAs. Attending together establishes relationships with peers and mutual agreement on the content. The learning objectives describe what the learners need from the Deviation and CAPA quality system procedures, while the exercises and assignments verify comprehension and appropriate application.

“Learning happens when learners fire their neurons, not when the trainer gives a presentation or shows a set of Power-Point slides.” (Halls, Feb 2019, p.71).

Qualified, really? Isn’t the training enough?

Achieving "Qualified" status is the ultimate measure of the training program's effectiveness. For newly assigned Investigators, it means the company is providing support with a program that builds their skills, their confidence, and possible career paths. Being QUALIFIED means that Investigators have undergone the rigor of an intensely focused investigations curriculum that aligns with the task and site challenges, and that after completing additional qualification activities, they have experienced a range of investigations and are now deemed competent to conduct proper investigations.

For the organization, this means two things.  Yes, someone gets to check the FDA commitment box.  And it also means strategically solving the issues.  Better investigations lead to CAPAs that don’t fail their effectiveness checks.  Now that’s significant performance improvement worthy of qualifying Investigators!  -VB

References:

  • Campos, J. The Learner Centered Classroom. TD@Work, August 2014, Issue 1408.
  • Chopra, P. "give them what they WANT", TD, May 2016, p. 36-40.
  • Halls, J. "Move Beyond Words to Experience", TD, February 2019, p. 69-72.
  • Parker, A. "Built to Last: Interview with Mary Slaughter", TD, May 2016, p. 57.
  • Singleton, K. "Incorporating a Spiral Curriculum Into L&D", TD, February 2018, p. 70-71.


The Big Why for Deviations

As part of my #intentionsfor2019, I conducted a review of the past 10 years of HPIS Consulting.  Yes, HPISC turned 10 in August of 2018, and I was knee deep in PAI activities.  So there was no time for celebrations or any kind of reflections until January 2019, when I could realistically evaluate HPISC: vision, mission, and the big strategic stuff.  My best reflection exercise had me remembering the moment I created HPIS Consulting in my mind.

Human Performance Improvement (HPI) and Quality Systems

One of the phases of HPI work is a cause analysis for performance discrepancies. The more I learned about how the HPI methodology manages this phase, the more I noticed how similar it is to the Deviation/CAPA Quality System requirements. There I found the first touch point between the two methodologies. My formal education background and my current quality systems work finally united, and HPIS Consulting (HPISC) became an Inc.

In my role as a Performance Consultant (PC), I leverage the best techniques and tools from both methodologies, not just for deviations but for implementing the corrective actions sometimes known as HPI solutions. In this new HPISC blog series about deviations, CAPAs, and HPI, I will be sharing more thoughts about HPISC touch points within the Quality Systems. For now, let's get back to the Big Why for deviations.

Why are so many deviations still occurring? Have our revisions to SOPs and processes brought us farther from a “State of Control”? I don’t believe that is the intention. As a Performance Consultant, I consider deviations and the ensuing investigations rich learning opportunities to find out what’s really going on with our Quality Systems.

The 4 cross-functional quality systems

At the core of the "HPISC Quality Systems Integration Triangle" is the Change Control system. It is the heartbeat of the Quality Management System, providing direction and guidance and establishing the boundaries for our processes. The Internal Auditing System is the health check, similar to our annual physicals; the readouts indicate the health of the systems. Deviations/CAPAs are analogous to a pulse check, where we check in at the current moment and determine whether we are within acceptable ranges or reaching action levels that require corrections to bring us back into "a state of control". And then there is the Training Quality System, which in my opinion is the most cross-functional system of all. It interfaces with all employees, not just the Quality Management System. It functions like food, nourishing our systems and fueling sustainability for corrections and new programs.

Whether you are following 21 CFR 211.192 (Production Record Review), ICH Q7 Section 2, or 21 CFR 820.100 (Corrective and Preventive Action), thou shall investigate any unexplained discrepancy, and a written record of the investigation shall be made that includes the conclusion and the follow-up. Really good investigations tell the story of what happened and include a solid root cause analysis revealing the true root cause(s), to which the corrective actions map back nicely. That is what makes the effectiveness checks credible. In theory, all these components flow together smoothly. However, with the continual rise of deviations and CAPAs, applying the Deviation/CAPA Management system is a bit more challenging for all of us.

Remember the PA in C-A-P-A?

Are we so focused on the corrective part and the looming due dates we've committed to that we are losing sight of the preventive actions? Are we rushing through the process to meet imposed time intervals and due dates, kind of "crossing our fingers and hoping" that the corrective actions fix the problem, without really tracing the impact of the proposed corrective solutions on the other integrated systems? Allison Rossett, author of First Things Fast: A Handbook for Performance Analysis, explains that performance occurs within organizational systems, and that the ability to achieve, improve, and maintain excellent performance depends on integrated components of other systems that involve people.

Are we likewise convincing ourselves that those fixes should also prevent recurrence? Well, that is until a repeat deviation occurs and we're sitting in another root cause analysis meeting searching for the real root cause. Thomas Gilbert, in his groundbreaking book Human Competence: Engineering Worthy Performance, tells us that it's about creating valuable results without excessive cost. In other words, "worthy performance" happens when the value of business outcomes exceeds the cost of doing the tasks. The ROI of a 3-tiered approach to solving the problem the first time happens when employees achieve their assigned outcomes and produce results greater than the cost of "the fix".

Performance occurs within three tiers

So, donning my Performance Consulting "glasses", I cross back over to the HPI methodology and open up the HPI solutions toolbox. One of those tools is called a Performance Analysis (PA). This tool points us in the direction of what's not working for the employee, the job tasks, and/or the workplace. The outcome of a performance analysis is a 3-tiered picture of what's encouraging or blocking performance for the worker, the work tasks, and/or the work environment, and what must be done about it at these same three levels.

Root cause analysis (RCA) helps us understand why the issues are occurring and provides the specific gaps that need fixing.  Hence, if PA recognizes that performance occurs within a system, then performance solutions need to be developed within those same “systems” in order to ensure sustainable performance improvement.  Otherwise, you have a fragment of the solution with high expectations for solving “the problem”.  You might achieve short-term value initially, but suffer a long-term loss when performance does not change or worsens. Confused between PA, Cause Analysis and RCA? Read the blog – analysis du jour.

Thank goodness Training is not the only tool in the HPI toolbox! With corrective actions/HPI solutions designed with input from the 3-tiered PA approach, the focus shifts away from the need to automatically re-train the individual(s) and toward implementing a solution targeted at the workers, the work processes, and the workplace environment, one that will ultimately allow successful user adoption of the changes/improvements. What a richer learning opportunity than just re-reading the SOP! -VB

  • Allison Rossett, First Things Fast: A Handbook for Performance Analysis, 2nd edition.
  • Thomas F. Gilbert, Human Competence: Engineering Worthy Performance.

What’s Your Training Effectiveness Strategy? It needs to be more than a survey or knowledge checks

When every training event is delivered using the same method, it’s easy to standardize the evaluation approach and the tool. Just answer these three questions:

  • What did they learn?
  • Did it transfer back to the job?
  • Was the training effective?

In this day and age of personalized learning and engaging experiences, one-size-fits-all training may be efficient for an organizational rollout, but it is not the most effective for organizational impact or even change in behavior. The standard knowledge check can indicate how much they remembered. It might be able to predict what will be used back on the job. But be able to evaluate how effective the training was? That's asking a lot from a 10-question multiple-choice/true-false "quiz". Given the level of complexity of the task or the significance of the improvement for the organization, such as addressing a consent decree or closing a warning letter, it would seem that allocating budget for proper training evaluation techniques would not be challenged.

Do you have a procedure for that?

Perhaps the sticking point is explaining to regulators how decisions are made and what criteria are used. Naturally, documentation is expected, and this also requires defining the process in a written procedure. It can be done. It means being in tune with training curricula, being aware of the types of training content being delivered, and recognizing the implications of the evaluation results. And of course, it means following the execution plan as described in the SOP. Three central components frame a Training Effectiveness Strategy: Focus, Timing, and Tools.

TRAINING EFFECTIVENESS STRATEGY: Focus on Purpose

Our tendency is to look at the scope (the what) first. I ask that you pause long enough to consider your audience, identify your stakeholders, and determine who wants to know what. This analysis shapes the span and level of your evaluation policy. For example, C-Suite stakeholders ask very different questions about training effectiveness than participants do.

The all-purpose standard evaluation tool weakens the results and disappoints most stakeholders. While it can provide interesting statistics, the real question is what will "they" do with the results? What are stakeholders prepared to do except cut the training budget or stop sending employees to training? Identify what will be useful to whom by creating a stakeholder matrix.
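If it helps to picture one, here is a minimal sketch of a stakeholder matrix. The stakeholder groups, questions, and actions are illustrative examples only; your own matrix should come out of the stakeholder analysis above.

```python
# A sketch of a stakeholder matrix (rows are illustrative examples):
# who wants to know what, and what they will do with the answer.

stakeholder_matrix = [
    {"stakeholder": "C-Suite / site leadership",
     "wants_to_know": "Did the business impact justify the investment?",
     "will_do_with_results": "Fund (or cut) future programs"},
    {"stakeholder": "Quality management",
     "wants_to_know": "Are deviations and CAPA failures trending down?",
     "will_do_with_results": "Adjust quality system priorities"},
    {"stakeholder": "Supervisors",
     "wants_to_know": "Can my people execute the new forms correctly?",
     "will_do_with_results": "Schedule coaching or structured OJT"},
    {"stakeholder": "Participants",
     "wants_to_know": "Was the session relevant and worth my time?",
     "will_do_with_results": "Engage (or not) with the next session"},
]

for row in stakeholder_matrix:
    print(f"{row['stakeholder']}: {row['wants_to_know']}")
```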

Will your scope also include the training program (aka the Training Quality System), especially if it is not included in the Internal Audit Quality System? Is the quality system designed efficiently to process feedback and make the necessary changes that result from the evaluation? Assessing how efficiently the function performs is another opportunity to improve the workflow by reducing redundancies, thus increasing form completion speed and humanizing the overall user experience. What is not in scope? Is it clearly articulated?

TRAINING EFFECTIVENESS STRATEGY: Timing is, of course, everything

Your strategy needs to include when to administer your evaluation studies. With course feedback surveys, we are used to administering them immediately after the session; otherwise, the return rate drops significantly. For knowledge checks, we also "test" at the end of the session. Logistically it's easier to administer because participants are still in the event, and we also increase the likelihood of higher "retention" scores.

But when does it make more sense to conduct the evaluation? Again, it depends on what the purpose is.

  • Will you be comparing before and after results? Then baseline data needs to be collected before the event begins, e.g., the current set of Key Performance Indicators or performance metrics (see the sketch after this list).
  • How much time do the learners need to become proficient enough for the evaluation to be accurate? E.g., immediately after, 3 months later, or realistically 6 months later?
  • When are metrics calculated and reported? Quarterly?
  • When will they be expected to perform back on the job?
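Here is one way such a timing plan might be sketched out. The go-live date, intervals, and checkpoint names are hypothetical, chosen only to show how each evaluation activity hangs off a purpose and a date rather than defaulting to "immediately after the session".

```python
# A sketch of an evaluation timing plan tied to purpose.
# The date and offsets are illustrative, not prescriptive.
from datetime import date, timedelta

go_live = date(2025, 1, 6)  # hypothetical effective date of the new SOPs

evaluation_schedule = [
    {"purpose": "Baseline KPIs / performance metrics", "offset_days": -30},
    {"purpose": "Course feedback survey",               "offset_days": 0},
    {"purpose": "Knowledge check",                      "offset_days": 0},
    {"purpose": "Transfer check: observed behaviors",   "offset_days": 90},
    {"purpose": "Transfer check: work product quality", "offset_days": 180},
    {"purpose": "Quarterly metrics comparison",         "offset_days": 270},
]

for item in evaluation_schedule:
    when = go_live + timedelta(days=item["offset_days"])
    print(f"{when}: {item['purpose']}")
```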

Measuring Training Transfer: 3, 6 and maybe 9 months later

We can observe whether a behavior occurs and record the number of people who are demonstrating the new set of expected behaviors on the job. We can evaluate the quality of a work product (such as a completed form or an executed batch record) by recording the number of people whose work product satisfies the appropriate standard or target criteria. We can record the frequency with which the target audience promotes the preferred behaviors, both in dialogue with peers and supervisors and in their observed actions.

It is possible to do this; however, the time, people, and budget needed to design the tools and capture the incidents are at the core of management support for a more rigorous training effectiveness strategy. How important is it to the organization to determine whether your training efforts are effectively transferring back to the job? How critical is it to mitigate the barriers that get in the way when the evaluation results show that performance improved only marginally? It is cheaper to criticize the training event(s) than to address the real root cause(s). See Training Does Not Stand Alone (Transfer Failure section).

TRAINING EFFECTIVENESS STRATEGY: Right tool for the right evaluation type

How will success be defined for each “training” event or category of training content? Are you using tools/techniques that meet your stakeholders’ expectations for training effectiveness? If performance improvement is the business goal, how are you going to measure it? What are the performance goals that “training” is supposed to support? Seek confirmation on what will be accepted as proof of learning, evidence of transfer to the workplace, and identification of leading indicators of organizational improvement. These become the criteria by which the evaluation has value for your stakeholders. Ideally, the choice of tool should be decided after the performance analysis is discussed and before content development begins.

Performance Analysis first; then possibly a training needs analysis

Starting with a performance analysis recognizes that performance occurs within organizational systems. The analysis provides a 3-tiered picture of what's encouraging/blocking performance for the worker, work tasks, and/or the workplace and what must be in place for these same three levels in order to achieve sustained improvement. The "solutions" are tailored to the situation based on the collected data and not on an assumption that training is needed. Otherwise, you have a fragment of the solution with high expectations for solving "the problem" and relying on the evaluation tool to provide effective "training" results. Only when the cause analysis reveals a true lack of knowledge will training be effective.

Why aren't more Performance Analyses being conducted?

For starters, most managers want the quick fix of training because it's a highly visible activity that everyone is familiar and comfortable with. The second possibility lies in the inherent nature of performance improvement work. Very often the recommended solution resides outside of the initiating department and requires the cooperation of others. Would a request to fix someone else's system go over well where you work? A third and most probable reason is that it takes time, resources, and a performance consulting skill set to identify the behaviors, decisions, and "outputs" that are expected as a result of the solution. How important will it be for you to determine training effectiveness for strategic corrective actions?

You need an execution plan

Given the variety of training events and levels of strategic importance within your organization, one standard evaluation tool may no longer be suitable. Does every training event need to be evaluated at the same level of rigor? Generally speaking, the more strategic the focus, the more tedious and time-consuming the data collection will be. Again, review your purpose and scope for the evaluation. Refer to your stakeholder matrix and determine which evaluation tool(s) are better suited to meet their expectations.

For example, completing an after-training survey for every event is laudable; however, executive leadership values this data the least. According to Jack and Patricia Phillips (2010), they want to see business impact the most. Tools like balanced scorecards can be customized to capture and report on key performance indicators and meaningful metrics. Develop your plan wisely, generate a representative sample initially, and seek stakeholder agreement to conduct the evaluation study.
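As a rough picture of what a customized scorecard entry could look like, here is a small sketch. The KPIs, baselines, and targets are invented for illustration; the idea is simply to report stakeholder-facing business metrics rather than survey averages.

```python
# A sketch of a simple training scorecard (metric names and numbers are
# illustrative). Each row pairs a stakeholder-facing KPI with its target
# so business impact can be reported alongside, or instead of, survey scores.

scorecard = [
    {"kpi": "Repeat deviations per quarter",            "baseline": 12, "target": 4,  "actual": 6},
    {"kpi": "CAPA effectiveness check pass rate (%)",   "baseline": 70, "target": 95, "actual": 90},
    {"kpi": "Investigation report approval time (days)","baseline": 21, "target": 10, "actual": 12},
]

for row in scorecard:
    # "Improving" means the actual value sits closer to target than baseline did
    trend = ("improving"
             if abs(row["actual"] - row["target"]) < abs(row["baseline"] - row["target"])
             else "flat")
    print(f"{row['kpi']}: baseline {row['baseline']}, actual {row['actual']}, "
          f"target {row['target']} ({trend})")
```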

Life after the evaluation: What are you doing with the data collected?

Did performance improve? How will the evaluation results change future behavior and/or influence design decisions? Or perhaps the results will be used for budget justification, support for additional programs, or even a corporate case study? Evaluation comes at the end, but in reality it is continuous throughout. Training effectiveness means evaluating the effectiveness of your training: your process, your content, and your training quality system. It's a continuous and cyclical process that doesn't end when the training is over. – VB

 

Jack J. Phillips and Patricia P. Phillips, “How Executives View Learning Metrics”, CLO, December 2010.

Recommended Reading:

Jean-Simon Leclerc and Odette Mercier, "How to Make Training Evaluation a Useful Tool for Improving L&D", Training Industry Quarterly, May-June 2017.

 

Did we succeed as intended? Was the training effective?

When you think about evaluating training, what comes to mind? It’s usually a “smile sheet”/ feedback survey about the course, the instructor and what you found useful. As a presenter/instructor, I find the results from these surveys very helpful, so thank you for completing them. I can make changes to the course objectives, modify content or tweak activities based on the comments. I can even pay attention to my platform skills where noted. But does this information help us evaluate if the course was successful?

Formative vs. Summative Distinction

Formative assessments provide data about the course design. Think form-ative; form-at of the course. The big question to address is whether the course as designed met the objectives. For example, the type of feedback I receive from surveys gives me comments and suggestions about the course.

Summative assessments are less about the course design and more about the results and impact. Think summative; think summary. It's more focused on the learner, not the instructional design. But when the performance expectations are not met or the "test" scores are marginal, the focus shifts back to the course, instructor/trainer, and instructional designer with the intent to find out what happened. What went wrong? When root cause analysis fails to find the cause, it's time to look a little deeper at the objectives.

Objectives drive the design and the assessment

Instructional Design 101 begins with well-developed objective statements for the course, event, or program. These statements, aka objectives, determine the content, and they also drive the assessment. For example, a written test or knowledge check that asks questions about the content is typically used for classroom sessions. In order for learners to be successful, the course must include the content, whether delivered in class or as pre-work. But what are the assessments really measuring? How much of the content learners remember, and maybe how much of the content they can apply when they return to work?

Training effectiveness on the other hand is really an evaluation of whether we achieved the desired outcome. So I ask you, what is the desired outcome for your training: to gain knowledge (new content) or to use the content correctly back in the workplace? The objectives need to reflect the desired outcome in order to determine the effectiveness of training.

What is your desired outcome from training?

Levels of objectives, who knew?

Many training professionals have become familiar with Kirkpatrick's 4 Levels of Evaluation over the course of their careers, but fewer are acquainted with Bloom's Taxonomy of Objectives. Yes, objectives have levels of increasing complexity resulting in higher levels of performance. Revised in 2001, the levels were renamed to better describe what's required of the learner to be successful in meeting the objective. Take note: remembering and understanding are the lowest levels of cognitive load, while applying and analyzing are mid-range. Evaluating and creating are at the highest levels.

If your end in mind is knowledge gained ONLY, continue to use the lower-level objectives. If, however, your desired outcome is to improve performance or apply a compliant workaround in the heat of a GMP moment, your objectives need to shift to a higher level of reasoning in order for the training design to be effective and meet performance expectations. They need to become more performance-based. Fortunately, much has been written about writing effective objective statements, and resources are available to help today's trainers.
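To show how the levels can drive both the objective verb and the matching assessment, here is a small sketch. The verb lists and assessment pairings are a simplified illustration, not the full taxonomy, and the pairings are my own shorthand rather than a canonical mapping.

```python
# A sketch mapping Bloom's (revised) levels to sample objective verbs and
# the kind of assessment each level supports; pairings are illustrative.

blooms = [
    ("Remember",   ["define", "list", "recall"],      "multiple-choice quiz"),
    ("Understand", ["explain", "summarize"],          "short-answer knowledge check"),
    ("Apply",      ["execute", "complete", "use"],    "scenario / completed form"),
    ("Analyze",    ["differentiate", "troubleshoot"], "what-if case analysis"),
    ("Evaluate",   ["justify", "critique"],           "defend an investigation report"),
    ("Create",     ["design", "develop"],             "draft a compliant workaround"),
]

def assessment_for(verb):
    """Return the Bloom level and assessment style suggested by an objective's verb."""
    for level, verbs, assessment in blooms:
        if verb in verbs:
            return level, assessment
    return None

print(assessment_for("troubleshoot"))  # ('Analyze', 'what-if case analysis')
```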

Accuracy of the assessment tools

The tools associated with the 4 levels of evaluation can be effective when used for the right type of assessment. For example, Level 1 (Reaction) surveys are very helpful for formative assessments. Level 2 (Learning) assessments are effective in measuring retention and minimum comprehension and go hand in hand with learning-based objectives. But when the desired outcomes are actually performance-based, Level 2 knowledge checks need to shift up and become more application-oriented, with "what-if" situations and scenarios requiring analysis, evaluation, and even problem solving. Or shift altogether to Level 3 (Behavior) and develop a new level of assessments, such as demonstrations and samples of finished work products.

Trainers are left out of the loop

But today's trainers don't always have a developed instructional design skill set. They do the best they can with the resources given, including reading books and scouring the Internet. For the most part, their training courses are decent and the assessments reflect passing scores. But when it comes to Level 4 (Results) impact questions from leadership, it becomes evident that trainers are left out of the business analysis loop and therefore miss the performance expectations. This is where the gap exists. Trainers build courses based on knowledge/content instead and develop learning objectives that determine what learners should learn. They create assessments to determine whether attendees have learned the content, but this does not automatically confirm that learners can apply the content back on the job in various situations under authentic conditions.

Performance objectives drive a higher level of course design

When you begin with the end in mind, namely the desired performance outcomes, the objective statements truly describe what the learners are expected to accomplish. While the content may be the same or very similar, determining whether employees are able to execute post-training requires more thought about the accuracy of the assessment. It must be developed from the performance objectives in order for it to be a valid "instrument". The learner must perform (do something observable) so that it is evident s/he can carry out the task according to the real workplace conditions.

To ensure learner success with the assessment, the training activities must also be aligned with the level of the objectives. This requires the design of the training event to shift from passive lecture to active engagement intended to prepare learners to transfer what they experienced in the event back to their workspace. This includes making mistakes and learning to recognize when a deviation is occurring. Michael Allen refers to this as "building an authentic performance environment". Thus, trainers and subject matter experts will need to upgrade their instructional design skills if you really want to succeed with training as intended. Are you willing to step up and do what it takes to ensure training is truly effective? – VB

 

Allen, M. "Design Better Design Backward", Training Industry Quarterly, Content Development Special Issue, 2017, p. 17.

Why Knowledge Checks are Measuring the Wrong Thing

When I taught middle school math, tests were used to assess knowledge comprehension and some application, with word problems and a few complex questions requiring logic proofs. Results were captured via a score, a metric if you will, of how well you answered the questions, and that is very appropriate in academia.

In our quest for training evaluation metrics, we have borrowed the idea of testing someone's knowledge as a measure of effectiveness. This implies that a corporate classroom mirrors an educational classroom and that testing means the same thing: a measure of knowledge comprehension. However, professors, colleges, universities, and academic institutions are not held to the same results-oriented standard. In the business world, results need to be performance-oriented, not knowledge gained.

So why are we still using tests?

Call it a quiz, a knowledge check, or any other name; it is still assessing some form of knowledge comprehension. In training effectiveness parlance, it is also known as a Level 2 evaluation. Having the knowledge is no guarantee that it will be used correctly back on the job. Two very common situations occur in the life science arena where "the quiz" and knowledge checks are heavily used: the Annual GMP Refresher and the Read & Understand approach for SOPs.

Life sciences companies are required by law to conduct annual regulations training (GMP Refreshers) so as to remain current. To address the training effectiveness challenge, a quiz / questionnaire / knowledge assessment (KA) is added to the event. But what is the KA measuring? Is it mapped to the course /session objectives or are the questions so general that they can be answered correctly without having to attend the sessions? Or worse yet, are the questions being recycled from year to year / event-to-event? What does it mean for the employee to pass the knowledge check or receive 80% or better? When does s/he learn of the results? In most sessions, there is no more time left to debrief the answers. This is a lost opportunity to leverage feedback into a learning activity. How do employees know if they are leaving the session with the “correct information”?

The other common practice is to include a 5-question multiple-choice quiz as a knowledge check for Read & Understood (R & U) SOPs, especially for revisions. What does it mean if employees get all 5 questions right? That they will not make a mistake? That the R & U method of SOP training is effective? The search function in most e-doc systems is really good at finding the answers. It doesn't necessarily mean that employees read the entire procedure and retained the information correctly. What does it mean for the organization if human errors and deviations from procedures are still occurring? Does it really mean the training is ineffective?

What should we be measuring?

The conditions under which employees are expected to perform need to be the same conditions under which we “test” them. So it makes sense to train ‘em under those same conditions as well. What do you want/need your employees (learners) to do after the instruction is finished? What do you want them to remember and use from the instruction in the heat of their work moments? Both the design and assessment need to mirror these expectations. And that means developing objectives that guide the instruction and form the basis of the assessment. (See Performance Objectives are not the same as Learning Objectives.)
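One way to audit a knowledge check against this principle is to map every item back to an objective and to the on-the-job condition it mirrors. The sketch below does exactly that; the objectives and quiz items are hypothetical, and any item with no objective or no job condition gets flagged for rewrite.

```python
# A sketch of mapping knowledge-check items to objectives and job conditions
# (all items shown are hypothetical). Items that map to no objective, or to
# no on-the-job condition, are flagged for rewrite or removal.

objectives = {
    "OBJ-1": "Given a completed batch record, identify entries that deviate from the SOP",
    "OBJ-2": "Select the correct form revision for a planned change",
}

quiz_items = [
    {"id": "Q1", "objective": "OBJ-1", "job_condition": "batch record review"},
    {"id": "Q2", "objective": "OBJ-2", "job_condition": "change request workflow"},
    {"id": "Q3", "objective": None,    "job_condition": None},  # trivia question
]

for item in quiz_items:
    if item["objective"] in objectives and item["job_condition"]:
        print(f"{item['id']}: mapped to {item['objective']} ({item['job_condition']})")
    else:
        print(f"{item['id']}: FLAG - rewrite or drop (no objective / job condition)")
```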

So ask yourself: when in their day-to-day activities will employees need to use this GMP concept? Or where in the employees' workflow will this procedure change need to be applied? Isn't this what we are training them for? Your knowledge checks need to ensure that employees have the knowledge, confidence, and capability to perform as trained. It's time to re-think what knowledge checks are supposed to do for you. – VB

Need to write better Knowledge Check questions?  Need to advise peers and colleagues on the Do’s and Don’ts for writing test questions?

Instructional Design: Not Just for Full Time Trainers Anymore

When I left the manufacturing shop floor and moved into training, full-time trainers presented in the classroom using a host of techniques and tools and relied on their platform skills to present content. Subject matter experts (or the most senior person) conducted technical training on the shop floor in front of a piece of equipment, at a laboratory station, or at a work bench.

For years, this distinction was clearly practiced where I worked. Trainers were in the classroom and SMEs delivered OJT. Occasionally a "full-time" trainer would consult with a SME on content or request his/her presence in the room during delivery as a back-up or for the Q & A portion of a "presentation". It seemed that the boundaries at the time were so well understood that one could determine the type of training simply by where it was delivered.

Training boundaries are limitless today

Today, that's all changed. No longer confined to location or delivery methods, full-time trainers can be found on the shop floor, fully gowned, delivering GMP (Good Manufacturing Practices) content, for example. And SMEs are now in the classroom more each day, with some of the very tools used by full-time trainers! What distinguishes a full-time trainer from a SME is less important; what is imperative is what defines effective instruction.

Instructional Design is a recognized profession

What goes into good instructional design?

Believe it or not, instructional design (ID) / instructional technology is a degreed program offered at numerous colleges and universities. Underlying the design is a methodology for "good" course design, and really good instructional designers will confess that there is a bit of an art form to it as well. Unfortunately, with shrinking budgets and downsized L&D staffs, there are fewer resources available to develop training materials. Not to mention shrinking timelines for the deliverables. So it makes sense to tap SMEs for more training opportunities, since many are already involved in training at their site. But pasting their expert content into a PowerPoint slide deck is not instructional design. Nor is asking a SME to "deliver training" using a previously created PowerPoint presentation effective delivery.

What is effective design?

To me, effective design is when learners not only meet the learning objectives during training but also transfer that learning experience back to the job and achieve the performance objectives/outcomes. That's a tall order for a SME, even for full-time trainers who have not had course design training. The methodology a course designer follows, be it ADDIE, Agile, SAM (Successive Approximation Model), Gagné's 9 Events of Instruction, etc., provides a process with steps to facilitate the design rationale and then the development of content, including implementation and evaluation of effectiveness. It ensures that key elements, like evaluation/effectiveness or needs assessment, are not unintentionally left out or forgotten about until after the fact. In an attempt to expedite training, these methodology-driven elements are easily skipped without fully understanding the consequences for overall training effectiveness. There is a science to instructional design.

The “art form” occurs when a designer creates visually appealing slides and eLearning scenes as well as aligned activities and engaging exercises designed to provide exploration, practice and proficiency for the performance task back on the job. The course materials “package” is complete when a leader’s guide is also created that spells out the design rationale and vision for delivery, especially when someone else will be delivering the course such as SMEs as Classroom Facilitators.

The Leader's Guide

Speaker notes embedded at the bottom of the notes pages within PowerPoint slides are not a leader's guide. While handy for scripting what to say for the slide above, they do not provide ample space for facilitating other aspects of the course, such as visual cues, "trainer only" tips, and managing handouts. A well-designed leader's guide has the key objectives identified and the essential learning points to cover. These learning points are appropriately sequenced, with developed discussion questions to be used with activities, thus removing the need for the facilitator to think on demand while facilitating the activity. This also reduces the temptation to skip over the exercise/activity if s/he is nervous or not confident with interactive activities.

A really good guide will also include how to segue to the next slide and manage seamless transitions to the next topic sections. Most helpful are additional notes about what content MUST be covered, tips about expected responses for activities, and clock-time comments for keeping to the classroom schedule. Given all the time and effort to produce the leader's guide, it is wasted if the course designer and the SME as Facilitator do not have a knowledge transfer session. Emailing the guide or downloading it from a SharePoint site will not help the SME follow the guide during delivery unless an exchange occurs in which the SME can begin to mark up his/her copy.

Using previously developed materials

I am not criticizing previous course materials if they were effective. But replacing clip art with new images and updating the slide deck to incorporate the new company background is not going to change the effectiveness of the course unless content was revised and activities were improved. For many SMEs, having a previous slide deck is both a gift and a curse.

While they are not starting with a blank storyboard, there is a tendency to use it as-is and try to embellish it with speaker notes, because the original producer of the PowerPoint slides did not include them or, worse, provided no leader's guide. The SME has the burden of making content decisions, such as what content is critical and what content can be cut if time runs short. Perhaps even more crucial is how to adapt content and activities to different learner groups or off-shift needs. SMEs who attend an HPISC ID basics course learn how to use design checklists for previously developed materials. These checklists allow them to confidently assess the quality of the materials and justify what needs to be removed, revised, or added, thus truly upgrading previously developed materials.

What’s so special about SMEs as Course Designers?

They have expertise and experience and are expected to share it by training their peers. But now the venue is the classroom as well. It's training on course design methodology that is needed. SMEs and most trainers do not automatically have this knowledge. Some develop it by reading A LOT, attending well-designed courses, and, over time, through trial and error and painful feedback. The faster way is to provide funds to get SMEs as Course Designers at least exposed to how to effectively design learning experiences so that they can influence the outcome of the objectives. This is management support for SMEs as Trainers. -VB

Facilitating the Shift from Passive Listening to Active Learning

On one end of "The Learner Participation Continuum" is lecture, which is one-way communication and requires very little participation. At the other end, we have experiential learning and now immersive learning environments, with the introduction of 3D graphics, virtual simulations, and augmented reality.

In the middle of the range are effective “lectures” and alternate methods such as:

  • Demonstrations
  • Case Study
  • Guided Teaching
  • Group Inquiry
  • Read and Discuss
  • Information Search

Shift one step to the right to begin the move to active learning.

Now before you insist that the SME as Facilitator move to the far right and conduct only immersive sessions, a word of caution is in order. It's really about starting with the learners' expectations and the current organizational culture and then moving one step to the right. If they are used to lectures from SMEs, then work on delivering effective lectures before experimenting with alternate training methods. An overnight shift may be too big a change for the attendees to adjust to, despite their desire for no more boring lectures. Small incremental steps are the key.

How is this done? Upfront, in the design of the course materials. The course designers have spent time and budget to prepare a leader's guide that captures their vision for delivering the course. SMEs as Facilitators (Classroom SMEs) need to study the leader's guide and pay attention to the icons and notes provided there. These cues mark the shift from lecture to an activity, whether that be individual, small group, or large group. While it may be tempting to skip exercises to make up for lost time, it is better for learner participation to skip lecture and modify an activity if possible.

During the knowledge transfer session/discussion with the course designer and/or instructor, Classroom SMEs make notes of how the instructor transitions from one slide to the next and how s/he provides instructions for the activity. This is a good time for Classroom SMEs to ask how to modify content or an activity if certain conditions should occur. Especially important for SMEs to ask is what content is critical and what content can be skipped if time runs short. It is always a good idea for the Classroom SME to mark up his/her copy of the materials, and then again after the first delivery, to really make the leader's guide their own. -VB

Speaking of personalizing their leaders’ guide, SMEs may want to experiment with different ways to “open a session” to get experience with a variety of techniques and observe which ones yield better results.

Moving from Lecture to Delivering an EFFECTIVE Lecture

While lecture has its merits, today's learners want engaging content that is timely, relevant, and meaningful. Yet most SMEs tend to suffer from the "curse of too much knowledge" and find it difficult to separate the need-to-know from the nice-to-know content.

Presenting, for them, takes on a lecture-style format. The thought of facilitating an activity gives most SMEs a case of jitters and anxiety. So, in the HPISC "SME as Facilitator" workshop, attendees are encouraged to step away from the podium and use their eyes, hands, and voice to engage with their audience. Easier said than done, yes. That's why the course is designed to allow them to take small steps within the safety of a workshop environment.

But rather than trying to pull off a fully immersive session, SMEs as Facilitators are introduced to techniques that “liven up” the lecture. They are shown how to move back and forth from passive (sit, hear, see) to active involvement (write, construct, discuss, move, speak). This requires the ability to:

  • follow a well-organized design plan
  • capture and hold the attention of learners
  • use relevant examples, and deviations if possible
  • show authentic enthusiasm
  • involve the audience both directly and indirectly
  • respond to questions with patience and respect.

Great presentations are like great movies. They open with an attention-grabbing scene, they have drama and conflict in the middle so you stick around long enough to see the hero survive, and they close on a memorable note. Using the movie analogy, a SME as Facilitator can open the session with something more than his/her bio. They can pick a notable career achievement that most folks aren't aware of. Keeping the interest alive, the SME can then draw the connection of content to the audience and address the WIIFM question on everyone's mind. (WIIFM = What's In It For Me?)

While we don't need to add to anyone's stress load, overcoming conflict makes for great storytelling. Case studies, major CAPAs, deviations, and audit observations make it real life. Visuals, especially diagrams, appeal to learners and keep them engaged. (CAPA = Corrective Action / Preventive Action)

Thoroughness in the preparation reflects care and thoughtfulness. Learners appreciate the personal desire to deliver a more lively lecture. Therefore, I like to use the concept of a lecturette: 10-minute blocks of time that chunk up complex topics. Interspersing a 10-15 minute lecture segment with an activity, whether individual, small group, or stand-up at the flipchart, gives learners the opportunity to engage with new and/or more complex content in smaller doses.

Stepping away from the podium forces the SME to take action and allows the learners to "get up close" with the SME as Facilitator. This in turn is reflected in the learners' willingness to respond to questions and dialogue during a facilitated discussion. The rule of thumb for lecturing is approximately 20 minutes max. But with today's technology buzzing away at your fingertips or on the tabletop, I'd say more like 10 or 15 minutes max if you are an engaging facilitator.

The difference between a novice and a wise teacher

Remember, the goal of a session is to maximize retention of the audience, not just tell them the content. Attendees learn more if the SME as Facilitator can focus their attention on the topic and deliver content that is relevant to their work situation. Involving the learners in a variety of ways is the key to effective lectures and great presentations. – VB

You might also want to get up to speed with the current trend for SMEs: check out the blog post "Are your SMEs becoming duo purposed?" Comments welcomed.