Curricula Creation and Maintenance are NOT a “one and done” event!

Over the last few weeks, I’ve been blogging about structured on-the-job training (SOJT) with refreshed insights from my Life Sciences consulting projects.  Allow me a sidebar blog about the need for keeping curricula up to date.  I realize the work is tedious, even painful at times.  That’s why donuts show up for meetings scheduled in the morning, pizza bribes appear if it’s lunchtime, and quite possibly even cookies for a late afternoon discussion.  So I get it when folks don’t want to look at their curricula again or even have a conversation about them.

Once-a-year curricula upkeep?

It’s like having a ficus hedge on your property.  If you keep it trimmed, pruning is easier than hacking off the major overgrowth that’s gone awry a year later.  And yet, I continue to get pushback when I recommend quarterly curricula updates.  Even semi-annual intervals are met with disdain.  In the end we settle for once a year, and I cringe on the inside.  Why?  Because a once-a-year review can be like starting all over again.

Don’t all databases know the difference between new and revised SOPs?

Consider for a moment the number of revisions your procedures go through in a year.  If your learning management system (LMS) is mature enough to manage revisions with a click to revise and auto-update all affected curricula, then once a year may be the right time span for your company. 

Others in our industry don’t have that functionality within their training database.  For these administrators, each revision means manually creating a new entry in the “course catalog” and deactivating/retiring the previous version; some may be able to perform batch uploads with a confirmation activity post submission.  Then comes the manual search for all curricula so that the old SOP number can be removed and replaced with the next revision.  Followed by a manual notification to all employees assigned to either that SOP or to the curricula, depending on how the database is configured.  I’m exhausted just thinking about this workload.

Over the course of a year, how many corrective actions have resulted in major SOP revisions that require a new OJT session and quite possibly a new qualification event?  What impact do all these changes have on the accuracy of your curricula?  Can your administrator click the revision button for these as well?  And then there’s the periodic review of SOPs, which in most companies occurs every two years.  What is the impact of SOPs that are deleted as a result of the review?  Can your LMS / training database search for affected curricula and automatically remove these SOPs as well?
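For the curious, here is a minimal sketch of what that “click to revise” automation does under the hood, covering both the revision case and the retired-SOP case from the periodic review. The names and in-memory structures are hypothetical stand-ins for a real LMS database:

```python
# Minimal sketch of "click to revise" automation. All names and
# structures are hypothetical; a real LMS works against its own
# database, not in-memory dictionaries.

# Each curriculum lists the SOP revisions it requires.
curricula = {
    "MFG-OPERATOR": ["SOP-001 Rev 3", "SOP-014 Rev 1"],
    "QC-ANALYST": ["SOP-001 Rev 3", "SOP-022 Rev 5"],
}

def revise_sop(old_id: str, new_id: str) -> list[str]:
    """Swap a superseded SOP revision for its successor in every
    affected curriculum; return the curricula that changed so the
    assigned employees can be notified."""
    affected = []
    for name, items in curricula.items():
        if old_id in items:
            items[items.index(old_id)] = new_id
            affected.append(name)
    return affected

def retire_sop(sop_id: str) -> list[str]:
    """Remove a deleted SOP from every curriculum (the periodic
    review case); return the curricula that changed."""
    affected = []
    for name, items in curricula.items():
        if sop_id in items:
            items.remove(sop_id)
            affected.append(name)
    return affected

print(revise_sop("SOP-001 Rev 3", "SOP-001 Rev 4"))
# ['MFG-OPERATOR', 'QC-ANALYST'] -- both now need notifications
```

The point is not the code; it is that this search-replace-notify loop is exactly the work your administrators are performing by hand today, for every revision.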

The Real Purpose for Curricula

Let’s not lose sight of why we have curricula in the first place.  So that folks are trained in the “particular operations that the employee performs” (21 CFR §211.25).  And “each manufacturer shall establish procedures for identifying training needs and ensure that all personnel are trained to adequately perform their assigned responsibilities” (21 CFR §820.25).  Today’s LMSes reconcile training completion against curricula requirements, and I’m grateful that this task is now automated.  But it depends on the level of functionality of the database in use.  Imagine having to manually reconcile each individual in your company against their curricula requirements.  There are not enough hours in a normal workday for one person to keep this up to date!  And yet in some organizations, this is the only way they know who is trained.  Their database is woefully limited in functionality.
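Conceptually, that reconciliation is simple set arithmetic, which is why it is such a shame to see it done by hand. A sketch, with hypothetical names and data:

```python
# Sketch of the reconciliation a mature LMS automates: compare each
# person's completed training against their curriculum requirements
# and surface the gaps. Data is illustrative only.

requirements = {
    "jdoe":   {"SOP-001 Rev 4", "SOP-014 Rev 1", "GMP-REFRESHER-2024"},
    "asmith": {"SOP-001 Rev 4", "SOP-022 Rev 5"},
}
completions = {
    "jdoe":   {"SOP-001 Rev 4", "GMP-REFRESHER-2024"},
    "asmith": {"SOP-001 Rev 4", "SOP-022 Rev 5"},
}

def training_gaps(requirements, completions):
    """Return outstanding items per employee; an empty set means current."""
    return {person: required - completions.get(person, set())
            for person, required in requirements.items()}

for person, gap in training_gaps(requirements, completions).items():
    print(f"{person}: {'CURRENT' if not gap else 'GAP: ' + str(sorted(gap))}")
# jdoe: GAP: ['SOP-014 Rev 1']
# asmith: CURRENT
```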

The quality system regulation for training is quite clear about having a procedure for identifying training needs.  To meet that expectation, industry practice is to have a process for creating curricula and maintaining the accuracy and completeness of curricula requirements.  Yes, it feels like a lot of paperwork.  §820.25 also states “Training shall be documented.”  For me, it’s not just the completion of the Read & Understood for SOPs.  It includes the OJT process, the qualification event AND the ownership for curricula creation and maintenance.  (Stay tuned for a future blog on documenting OJT.)

Whose responsibility is it, anyway?

Who owns curricula in your company?  Who has the responsibility to ensure that curricula are accurate and up to date?  What does your procedure include?  Interestingly enough, I have seen that companies cited with training observations often have outdated and inaccurate curricula!  Their documentation frequently shows reviews overdue by 2–3 years, reviews not performed since original creation and, in some places, no specialized curricula at all!  “They were set up wrong.”  “The system doesn’t allow us to differentiate enough.”  “Oh, we were in the process of redoing them, but then the project was put on the back burner.”  Are you waiting to be cited by an agency investigator during a biennial GMP inspection or Pre-Approval Inspection?

The longer we wait to conduct a curricula review, the bigger the training gap becomes.  And that can snowball into missing training requirements, which leads to employees performing duties without being trained and qualified.  Next thing you know, you have a bunch of Training CAPA notifications sitting in your inbox.  Not to mention an FDA-483 and quite possibly a warning letter.  How sophisticated is your training database?  Will once a year result in a “light trim” of curricula requirements or a “hack job” of removing outdated requirements and inaccurate revision numbers?  Will you be rebuilding curricula all over again?  Better bring on the donuts and coffee!  -VB

How many procedures does it take to describe a training program?

What’s Your Training Effectiveness Strategy? It needs to be more than a survey or knowledge checks

When every training event is delivered using the same method, it’s easy to standardize the evaluation approach and the tool. Just answer these three questions:

  • What did they learn?
  • Did it transfer back to the job?
  • Was the training effective?

In this day and age of personalized learning and engaging experiences, one-size-fits-all training may be efficient for an organizational rollout but not the most effective for organizational impact or even change in behavior. The standard knowledge check can indicate how much they remembered. It might be able to predict what will be used back on the job. But be able to evaluate how effective the training was? That’s asking a lot from a 10-question multiple-choice/true-false “quiz”. Given the complexity of the task or the significance of the improvement for the organization, such as addressing a consent decree or closing out a warning letter, you would think that budget allocated for proper training evaluation techniques would not be challenged.

Do you have a procedure for that?

Perhaps the sticking point is explaining to regulators how decisions are made and against what criteria. Naturally, documentation is expected, and this also requires defining the process in a written procedure. It can be done. It means being in tune with training curricula, staying aware of the types of training content being delivered and recognizing the implications of the evaluation results. And of course, following the execution plan as described in the SOP.  Three central components frame a Training Effectiveness Strategy: Focus, Timing and Tools.

TRAINING EFFECTIVENESS STRATEGY: Focus on Purpose

Our tendency is to look at the scope (the what) first. I ask that you pause long enough to consider your audience, identify your stakeholders and determine who wants to know what. This analysis shapes the span and level of your evaluation policy. For example, C-Suite stakeholders ask very different questions about training effectiveness than participants do.

The all-purpose standard evaluation tool weakens the results and disappoints most stakeholders. While it can provide interesting statistics, the real question is what will “they” do with the results? What are stakeholders prepared to do except cut the training budget or stop sending employees to training? Identify what will be useful to whom by creating a stakeholder matrix.
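There is no single required format for a stakeholder matrix. Here is a sketch of one way to capture it; the groups, questions and evidence are illustrative examples, not a prescribed taxonomy:

```python
# An illustrative stakeholder matrix: for each group, the question
# they actually ask and the evidence that would answer it.
stakeholder_matrix = {
    "Participants":       {"question": "Was it worth my time?",
                           "evidence": "reaction survey / smile sheet"},
    "Supervisors":        {"question": "Can they do the job now?",
                           "evidence": "on-the-job observation, work products"},
    "Quality/Regulatory": {"question": "Is training documented and current?",
                           "evidence": "LMS completion and curricula records"},
    "C-Suite":            {"question": "Did the business metrics move?",
                           "evidence": "KPIs, deviation/CAPA trends"},
}

for group, row in stakeholder_matrix.items():
    print(f"{group}: {row['question']} -> {row['evidence']}")
```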

Will your scope also include the training program (aka Training Quality System), especially if it is not included in the Internal Audit Quality System? Is the quality system designed efficiently to process feedback and make the necessary changes that result from the evaluation? Assessing how efficiently the function performs is another opportunity to improve the workflow by reducing redundancies, thus increasing form completion speed and humanizing the overall user experience. What is not in scope? Is it clearly articulated?

TRAINING EFFECTIVENESS STRATEGY: Timing is, of course, everything

Your strategy needs to include when to administer your evaluation studies. With course feedback surveys, we are used to administering them immediately afterward; otherwise, the return rate drops significantly. For knowledge checks we also “test” at the end of the session. Logistically it’s easier to administer because participants are still in the event, and we also increase the likelihood of higher “retention” scores.

But when does it make more sense to conduct the evaluation? Again, it depends on what the purpose is.

  • Will you be comparing before and after results? Then baseline data needs to be collected before the event begins, e.g., the current set of Key Performance Indicators or performance metrics (see the sketch after this list).
  • How much time do the learners need to become proficient enough so that the evaluation is accurate? E.g., immediately after, 3 months, or realistically 6 months after?
  • When are metrics calculated and reported? Quarterly?
  • When will they be expected to perform back on the job?
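To make the first bullet concrete, here is a small sketch of the before/after comparison; the metric names and numbers are invented for illustration:

```python
# Capture baseline KPIs before the training event, re-measure after
# the proficiency window, and report the change. Figures are made up.
baseline         = {"deviations_per_month": 12, "batch_record_errors": 8}
six_months_after = {"deviations_per_month": 7,  "batch_record_errors": 5}

for metric, before in baseline.items():
    after = six_months_after[metric]
    change = (after - before) / before * 100
    print(f"{metric}: {before} -> {after} ({change:+.0f}%)")
# deviations_per_month: 12 -> 7 (-42%)
# batch_record_errors: 8 -> 5 (-38%)
```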

Measuring Training Transfer: 3, 6 and maybe 9 months later

We can observe whether a behavior occurs and record the number of people who are demonstrating the new set of expected behaviors on the job. We can evaluate the quality of a work product (such as a completed form or executed batch record) by recording the number of people whose work product satisfies the appropriate standard or target criteria. We can record the frequency with which the target audience promotes the preferred behaviors in dialogue with peers and supervisors and in their observed actions.

It is possible to do this; however, the time, people and budget to design the tools and capture the incidents are at the core of management support for a more rigorous training effectiveness strategy. How important is it to the organization to determine whether your training efforts are effectively transferring back to the job? How critical is it to mitigate the barriers that get in the way when the evaluation results show that performance improved only marginally? It is cheaper to criticize the training event(s) than to address the real root cause(s). See Training Does Not Stand Alone (Transfer Failure Section).

TRAINING EFFECTIVENESS STRATEGY: Right tool for the right evaluation type

How will success be defined for each “training” event or category of training content? Are you using tools/techniques that meet your stakeholders’ expectations for training effectiveness? If performance improvement is the business goal, how are you going to measure it? What are the performance goals that “training” is supposed to support? Seek confirmation on what will be accepted as proof of learning, evidence of transfer to the workplace, and identification of leading indicators of organizational improvement. These become the criteria by which the evaluation has value for your stakeholders. Ideally, the choice of tool should be decided after the performance analysis is discussed and before content development begins.

Performance Analysis first; then possibly a training needs analysis

Starting with a performance analysis recognizes that performance occurs within organizational systems. The analysis provides a 3-tiered picture of what’s encouraging/blocking performance for the worker, work tasks, and/or the workplace, and what must be in place at these same three levels in order to achieve sustained improvement. The “solutions” are tailored to the situation based on the collected data and not on an assumption that training is needed. Otherwise, you have a fragment of the solution with high expectations for solving “the problem” and reliance on the evaluation tool to provide effective “training” results. Only when the cause analysis reveals a true lack of knowledge will training be effective.

Why aren’t more Performance Analyses being conducted?
For starters, most managers want the quick fix of training because it’s a highly visible activity that everyone is familiar and comfortable with. The second possibility lies in the inherent nature of performance improvement work. Very often the recommended solution resides outside of the initiating department and requires the cooperation of others.   Would a request to fix someone else’s system go over well where you work? A third and most probable reason is that it takes time, resources, and a performance consulting skill set to identify the behaviors, decisions and “outputs” that are expected as a result of the solution. How important will it be for you to determine training effectiveness for strategic corrective actions?

You need an execution plan

Given the variety of training events and levels of strategic importance occurring within your organization, one standard evaluation tool may no longer be suitable. Does every training event need to be evaluated at the same level of rigor? Generally speaking, the more strategic the focus, the more tedious and time-consuming the data collection will be. Again, review your purpose and scope for the evaluation. Refer to your stakeholder matrix and determine what evaluation tool(s) are better suited to meet their expectations.

For example, completing an after-training survey for every event is laudable; however, executive leadership values this data the least. According to Jack and Patricia Phillips (2010), they want to see business impact the most. Tools like balanced scorecards can be customized to capture and report on key performance indicators and meaningful metrics. Develop your plan wisely, generate a representative sample initially and seek stakeholder agreement to conduct the evaluation study.

Life after the evaluation: What are you doing with the data collected?

Did performance improve? How will the evaluation results change future behavior and/or influence design decisions? Or perhaps the results will be used for budget justification, support for additional programs or even a corporate case study? Evaluation comes at the end but in reality, it is continuous throughout. Training effectiveness means evaluating the effectiveness of your training: your process, your content and your training quality system. It’s a continuous and cyclical process that doesn’t end when the training is over. – VB


Jack J. Phillips and Patricia P. Phillips, “How Executives View Learning Metrics”, CLO, December 2010.

Recommended Reading:

Jean-Simon Leclerc and Odette Mercier, “How to Make Training Evaluation a Useful Tool for Improving L&D”, Training Industry Quarterly, May-June 2017.


Did we succeed as intended? Was the training effective?

When you think about evaluating training, what comes to mind? It’s usually a “smile sheet” / feedback survey about the course, the instructor and what you found useful. As a presenter/instructor, I find the results from these surveys very helpful, so thank you for completing them. I can make changes to the course objectives, modify content or tweak activities based on the comments. I can even pay attention to my platform skills where noted. But does this information help us evaluate whether the course was successful?

Formative vs. Summative Distinction

Formative assessments provide data about the course design. Think form-ative; form-at of the course. The big question to address is whether the course as designed met the objectives. For example, the type of feedback I receive from surveys gives me comments and suggestions about the course.

Summative assessments are less about the course design and more about the results and impact. Think summative; think summary. It’s more focused on the learner, not the instructional design. But when the performance expectations are not met or the “test” scores are marginal, the focus shifts back to the course, instructor/trainer and instructional designer with the intent to find out what happened. What went wrong? When root cause analysis fails to find the cause, it’s time to look a little deeper at the objectives.

Objectives drive the design and the assessment

Instructional Design 101 begins with well-developed objective statements for the course, event, or program. These statements, aka objectives, determine the content and they also drive the assessment. For example, a written test or knowledge check that asks questions about the content is typically used for classroom sessions. In order for learners to be successful, the course must include the content, whether delivered in class or as pre-work. But what are the assessments really measuring? How much of the content learners remember, and maybe how much of it they can apply when they return to work?

Training effectiveness on the other hand is really an evaluation of whether we achieved the desired outcome. So I ask you, what is the desired outcome for your training: to gain knowledge (new content) or to use the content correctly back in the workplace? The objectives need to reflect the desired outcome in order to determine the effectiveness of training.

What is your desired outcome from training?

Levels of objectives, who knew?

Many training professionals have become familiar with Kirkpatrick’s 4 Levels of Evaluation over the course of their careers, but fewer are acquainted with Bloom’s Taxonomy of Objectives. Yes, objectives have levels of increasing complexity resulting in higher levels of performance. Revised in 2001, the levels were renamed to better describe what’s required of the learner to be successful in meeting the objective. Take note: remembering and understanding are the lowest levels of cognitive load, while applying and analyzing are mid-range. Evaluating and creating are at the highest levels.

If your end in mind is knowledge gained ONLY, continue to use the lower-level objectives. If, however, your desired outcome is to improve performance or apply a compliant workaround in the heat of a GMP moment, your objectives need to shift to a higher level of reasoning in order for the training design to be effective and meet performance expectations. They need to become more performance based. Fortunately, much has been written about writing effective objective statements, and resources are available to help today’s trainers.
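As a quick self-check on where your objectives sit, the revised levels pair naturally with the verbs used in objective statements. A sketch; the verb lists are common examples, not an official registry:

```python
# The revised Bloom levels with sample objective verbs at each level.
blooms_revised = [
    ("Remember",   ["list", "recall", "identify"]),
    ("Understand", ["explain", "summarize", "classify"]),
    ("Apply",      ["execute", "demonstrate", "use"]),
    ("Analyze",    ["differentiate", "troubleshoot", "compare"]),
    ("Evaluate",   ["judge", "justify", "critique"]),
    ("Create",     ["design", "develop", "formulate"]),
]

def level_of(objective: str) -> str:
    """Naive check: match the objective's wording against the verbs."""
    for level, verbs in blooms_revised:
        if any(verb in objective.lower() for verb in verbs):
            return level
    return "unclassified"

print(level_of("Demonstrate the correct gowning sequence"))  # Apply
print(level_of("List the steps of the cleaning procedure"))  # Remember
```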

Accuracy of the assessment tools

The tools associated with the 4 levels of evaluation can be effective when used for the right type of assessment. For example, Level 1 (Reaction) surveys are very helpful for formative assessments. Level 2 (Learning) assessments are effective in measuring retention and minimum comprehension and go hand in hand with learning-based objectives. But when the desired outcomes are actually performance based, Level 2 knowledge checks need to shift up to become more application oriented, such as “what if” situations and scenarios requiring analysis, evaluation, and even problem solving. Or shift altogether to Level 3 (Behavior) and develop a new level of assessments such as demonstrations and samples of finished work products.
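If your evaluation SOP needs a decision table, the level/tool pairings above can be written down directly. A sketch; the structure is just one convenient choice, not a standard:

```python
# Kirkpatrick levels paired with the assessment tools discussed above.
evaluation_tools = {
    1: ("Reaction", ["post-course survey / smile sheet"]),
    2: ("Learning", ["knowledge check", "scenario / what-if questions"]),
    3: ("Behavior", ["on-the-job demonstration", "finished work product review"]),
    4: ("Results",  ["KPI / business impact metrics", "balanced scorecard"]),
}

def tools_for(desired_outcome: str) -> list[str]:
    """Knowledge-only outcomes stay at Level 2; performance outcomes
    shift up to Level 3, per the discussion above."""
    level = 2 if desired_outcome == "knowledge" else 3
    return evaluation_tools[level][1]

print(tools_for("performance"))
# ['on-the-job demonstration', 'finished work product review']
```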

Trainers are left out of the loop

But today’s trainers haven’t always developed the instructional design skill set. They do the best they can with the resources given, including reading books and scouring the Internet. For the most part, their training courses are decent and the assessments reflect passing scores. But when it comes to Level 4 (Results) impact questions from leadership, it becomes evident that trainers are left out of the business analysis loop and therefore are missing the performance expectations. This is where the gap exists. Trainers instead build courses based on knowledge/content and develop learning objectives that determine what learners should learn. They create assessments to determine whether attendees have learned the content; but this does not automatically confirm learners can apply the content back on the job in various situations under authentic conditions.

Performance objectives drive a higher level of course design

When you begin with the end in mind, namely the desired performance outcomes, the objective statements truly describe what the learners are expected to accomplish. While the content may be the same or very similar, how we determine whether employees are able to execute post-training requires more thought about the accuracy of the assessment. It must be developed from the performance objectives in order for it to be a valid “instrument”. The learner must perform (do something observable) so that it is evident s/he can carry out the task according to real workplace conditions.

To ensure learner success with the assessment, the training activities must also be aligned with the level of the objectives. This requires the design of the training event to shift from passive lecture to active engagement intended to prepare learners to transfer back to their workspace what they experienced in the event.  This includes making mistakes and learning to recognize when a deviation is occurring. Michael Allen refers to this as “building an authentic performance environment”. Thus, trainers and subject matter experts will need to upgrade their instructional design skills if you really want to succeed with training as intended. Are you willing to step up and do what it takes to ensure training is truly effective? – VB


Allen, M., “Design Better Design Backward”, Training Industry Quarterly, Content Development Special Issue, 2017, p. 17.

Why Knowledge Checks are Measuring the Wrong Thing

When I taught middle school math, tests were used to assess knowledge comprehension and some application, with word problems and a few complex questions requiring logic proofs. Results were captured via a score, a metric, if you will, of how well you answered the questions, and very appropriate in academia.

In our quest for training evaluation metrics, we have borrowed the idea of testing someone’s knowledge as a measure of effectiveness. This implies that a corporate classroom mirrors an educational classroom and that testing means the same thing: a measure of knowledge comprehension. However, professors, colleges, universities and academic institutions are not held to the same results-oriented standard. In the business world, results need to be performance oriented, not knowledge gained.

So why are we still using tests?

Call it a quiz, a knowledge check or any other name; it is still assessing some form of knowledge comprehension. In training effectiveness parlance, it is also known as a Level 2 evaluation. Having the knowledge is no guarantee that it will be used correctly back on the job. Two very common situations occur in the life science arena where “the quiz” and knowledge checks are heavily used: the Annual GMP Refresher and the Read & Understand approach for SOPs.

Life sciences companies are required by law to conduct annual regulations training (GMP Refreshers) so as to remain current. To address the training effectiveness challenge, a quiz / questionnaire / knowledge assessment (KA) is added to the event. But what is the KA measuring? Is it mapped to the course/session objectives, or are the questions so general that they can be answered correctly without having to attend the sessions? Or worse yet, are the questions being recycled from year to year / event to event? What does it mean for the employee to pass the knowledge check or receive 80% or better? When does s/he learn of the results? In most sessions, there is no time left to debrief the answers. This is a lost opportunity to leverage feedback into a learning activity. How do employees know if they are leaving the session with the “correct information”?
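The mapping question lends itself to a simple audit: every KA question should trace to a session objective, and recycled questions should be flagged for refresh. A sketch with invented data:

```python
# Flag KA questions that are unmapped to an objective or recycled
# for several years. All IDs and dates are made up for illustration.
objectives = {
    "OBJ-1": "Apply data integrity principles to batch records",
    "OBJ-2": "Recognize and report a process deviation",
}
questions = [
    {"id": "Q1", "maps_to": "OBJ-1", "first_used": 2024},
    {"id": "Q2", "maps_to": None,    "first_used": 2019},  # too general
    {"id": "Q3", "maps_to": "OBJ-2", "first_used": 2020},  # recycled
]

CURRENT_YEAR = 2024
for q in questions:
    if q["maps_to"] not in objectives:
        print(f"{q['id']}: not mapped to any session objective")
    elif CURRENT_YEAR - q["first_used"] >= 3:
        print(f"{q['id']}: recycled since {q['first_used']} -- refresh it")
```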

The other common practice is to include a 5-question multiple-choice quiz as a knowledge check for Read & Understood (R & U) SOPs, especially for revisions. What does it mean if employees get all 5 questions right? That they will not make a mistake? That the R & U method of SOP training is effective? The search function in most e-doc systems is really good at finding the answers. It doesn’t necessarily mean that employees read the entire procedure and retained the information correctly. What does it mean for the organization if human errors and deviations from procedures are still occurring? Does it really mean the training is ineffective?

What should we be measuring?

The conditions under which employees are expected to perform need to be the same conditions under which we “test” them. So it makes sense to train ‘em under those same conditions as well. What do you want/need your employees (learners) to do after the instruction is finished? What do you want them to remember and use from the instruction in the heat of their work moments? Both the design and assessment need to mirror these expectations. And that means developing objectives that guide the instruction and form the basis of the assessment. (See Performance Objectives are not the same as Learning Objectives.)

So ask yourself, when in their day to day activities will employees need to use this GMP concept? Or, where in the employees’ workflow will this procedure change need to be applied? Isn’t this what we are training them for? Your knowledge checks need to ensure that employees have the knowledge, confidence and capability to perform as trained. It’s time to re-think what knowledge checks are supposed to do for you. – VB

Need to write better Knowledge Check questions?  Need to advise peers and colleagues on the Do’s and Don’ts for writing test questions?

Facilitating the Shift from Passive Listening to Active Learning

At one end of “The Learner Participation Continuum” is lecture, which is one-way communication and requires very little participation.  At the other end, we have experiential learning and now immersive learning environments with the introduction of 3D graphics, virtual simulations and augmented reality.

In the middle of the range are effective “lectures” and alternate methods such as:

  • Demonstrations
  • Case Study
  • Guided Teaching
  • Group Inquiry
  • Read and Discuss
  • Information Search.

[Image: Shift one step to the right to begin the move to active learning]

Now before you insist that the SME as Facilitator move to the far right and conduct only immersive sessions, a word of caution is in order. It’s really about starting with the learners’ expectations and the current organizational culture and then moving one step to the right. If they are used to lectures from SMEs, then work on delivering effective lectures before experimenting with alternate training methods. The overnight shift may be too big of a change for the attendees to adjust to, despite their desire for no more boring lectures. Small incremental steps are the key.

How is this done? Upfront in the design of the course materials. The course designers have spent time and budget to prepare a leader’s guide that captures their vision for delivering the course.  SMEs as Facilitators (Classroom SMEs) need to study the leader’s guide and pay attention to the icons and notes provided there. These cues signal the transitions from lecture to an activity, whether that be individual, small group, or large group. While it may be tempting to skip exercises to make up for lost time, it is better for learner participation to skip lecture and modify an activity if possible.

During the knowledge transfer session/discussion with the course designer and/or instructor, Classroom SMEs should make notes of how the instructor transitions from one slide to the next and how s/he provides instruction for the activity. This is a good time for Classroom SMEs to ask how to modify content or an activity if certain conditions should occur. Especially important for SMEs to ask is what content is critical and what content can be skipped if time runs short. It is always a good idea for the Classroom SME to mark up his/her copy of the materials. And then again after the first delivery, to really make it their own leader’s guide. -VB

Speaking of personalizing their leader’s guides, SMEs may want to experiment with different ways to “open a session” to get experience with a variety of techniques and observe which ones yield better results.

Are you doing the Training 2 Step?

Has this situation ever happened to you? You are in a root cause meeting and the *CAPA investigator is conducting an interview with the primary individual involved with the discrepancy. When asked why this happened, he shrugs first and then quietly mumbles, “I don’t know.” When pushed further, he very slowly says, “I just kind of went brain dead for a moment.”  And then silence.

While that may be the honest truth, the investigator must resist the temptation to label it as Operator Error and explore possible causes. One of my favorite root cause analysis tools for the “I don’t know why” response is the Fish Bone Diagram, also known as the 4 M’s diagram. This tool provides a structured focus to explore many possibilities and not just stop at the first plausible cause, such as Operator Error. Aptly nicknamed, the 4 M’s are Man, Machine, Methods and Materials.  When the results of this exercise point to a training or operator related issue, don’t stop at “operator error –> retrain”.

Consider for a moment what this retraining session would look like. Will re-reading the procedure be enough to “jog his memory”? Will repeating the procedure be a good use of precious time when s/he already knows what to do? More than likely it won’t prevent “going brain dead” from happening again. Instead, do the HPISC Training 2 Step:

Step 1 – confirm the results of the gap analysis

What task, what step(s) or actions are in question?

Step 2 – address why the original training did not transfer back to the job.

Using the 4 M’s diagram as the framework, explore Man, Machine, Methods and Materials questions with regard to the training this operator received. See diagram below. The full set of questions can be found in the eBook Training Cause Analysis.

[Diagram: the 4 M’s for Training Cause Analysis]
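In place of the diagram, here is an illustrative framing of the 4 M’s applied to training cause analysis. The prompts below are examples of the kinds of questions to ask; the full question set lives in the eBook mentioned above:

```python
# Example 4 M's prompts for training cause analysis (illustrative).
four_ms_training = {
    "Man":       ["Was the operator qualified, or only 'read & understood'?",
                  "How long ago was the original training?"],
    "Machine":   ["Did training occur on the same equipment used in production?"],
    "Methods":   ["Does the SOP match how the task is actually performed?",
                  "Was the OJT structured or ad hoc?"],
    "Materials": ["Were the training materials current at the time of training?"],
}

for category, prompts in four_ms_training.items():
    print(category)
    for prompt in prompts:
        print(f"  - {prompt}")
```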

Is this really worth it?

I think it is. Conducting these 2 steps will accomplish two objectives.  It will provide further evidence that some kind of training is needed. And it will highlight what areas are in need of revision, whether for the performer, the training program or the course materials. Yet, there are some who will resist this added work because it’s easier to find blame than to uncover the cause. Fixing the true root cause could trigger a re-validation of the process or an FDA filing if it’s a major process change.  Why create more work? Isn’t it easier to just retrain ‘em? No, not really. Finding the true root cause is the only effective way of eliminating many of the costly, recurring problems that can plague manufacturers.

But what if…

Some folks will push back with the excuse “this never caused a problem until now,” so it must be the operator’s fault! This may be the first time it was discovered, but that does not mean the procedure is 100% accurate. Often, experienced operators know how to work around an incorrect step and don’t always report a misstep in the procedure, while a less savvy operator follows the procedure and causes the non-conformance to occur. See Sidebar: SOP Logic Rules. Is the procedure difficult or lengthy, or does it require weeks to become proficient, let alone qualified? Was the qualification routine or performed as a simulation? Was the procedure written with support from a lead operator or qualified trainer? Did the draft version undergo some kind of field test or dry run prior to release? And the classic situation: are proposed changes hung up in change control awaiting effective release?

Understanding Why Human Errors Occur

Industry practice is evolving to explore why people make the decisions they do by looking at the organization’s systems. It’s usually a poor decision made somewhere in the error chain. We must believe that the person who made the poor decision did not intend for the error to occur. As part of CAPA investigations, we need to explore the physical environment as well: the conditions under which they make those decisions. The Training Program Improvement Checklist can be requested using this link to capture your findings.

If you are going to spend time and money on training, at least identify what the gap is; fix that, and then “train” or provide awareness on what was corrected to prevent the issue from recurring.  That is, after all, the intention of *Corrective Action Preventive Action investigations. -VB

You may want to explore these other library gems:

Why the Band Aids Keep Falling Off


Retraining and Refresher Training: Aren’t they one and the same?

I say no, not at all. Ask an Operations Manager and he’ll acknowledge that what it’s called is less important than getting the “assignment” done and entered into the LMS. He’s usually more concerned about the loss of productivity during the training than about the effectiveness of the training at that time. It isn’t until later, when the training may have to be delivered again (repeated), that the comment “training doesn’t really work” is heard.

Retraining is typically delivered as repeat training. Corrective Actions from *CAPAs usually trigger these types of required training events. In the context of the specific CAPA, we uncover the error, mistake, non-conformance or what I like to call a performance discrepancy from the expected outcome. It is believed that by delivering the training again, the cause of the discrepancy will be resolved. That is, if the root cause was determined to be a lack of knowledge, a lack of skill or not enough practice.

Some folks believe that more is better and that with several repeated training sessions, employees will eventually get it right. It always amazes me that we find time to do repeat training over and over again but complain very loudly about refresher training, significant **SOP revision training or even new content training.  (*Corrective Actions Preventive Actions, **Standard Operating Procedures)

Refresher Training implies that training was already provided at least once. The intention here is to review that content.  A lot of regulatory training requirements are generated to satisfy this need. Common examples are Annual GMP Refreshers and several OSHA standards such as Blood Borne Pathogens training. While the aim is to refresh on the content, it is not necessarily meant to just repeat the training. Also included is the part “so as to remain current” with current practice, trends and new updates. Hence, refresher training needs to include new material based on familiar content.

Upon Biennial SOP Review

There are some folks who would like to use this required SOP activity to coincide with the need to “refresh” on SOPs already read and/or trained. The rationale is that if the SOP hasn’t been revised in 2 or 3 years’ time, more than likely the training hasn’t been repeated either. So, it sounds like a good idea to require that SOPs be “refreshed” on the same SOP review cycle. One could argue it prevents errors; thus, in theory, this sounds very proactive.

But donning my Instructional Designer hat, I ask you: what is the definition of training? To close a knowledge gap or skill gap. What value is there in forcing a mandatory “refresher reading” of SOPs just because the procedure is due for technical review? In practice, this becomes one huge check-mark exercise leading to a paperwork/LMS backlog, and it might actually increase errors due to “information overload”! Again, what gap are you trying to solve? In the above refresher scenario, we are closing a compliance gap by satisfying regulatory requirements, not a knowledge gap.


Defending Your Training Process

For those of you who have fielded questions from regulators, you can appreciate how the very training record produced generates follow-up questions.  How you describe, or “label,” the conditions under which the training occurred can impact the message you are sending as well. Calling it retraining instead of refresher training implies that training had to be repeated as a result of a performance problem not meeting expectations or standards. Whereas refresher training occurs on a defined cycle to ensure that the forgetting curve or lack of practice is not a factor in poor performance. It is a routine activity for satisfying regulatory expectations.

For end users, clarifying the difference between refresher training and “repeat” training in your Policy/SOP not only defines the purpose of the training session, it also provides the proper sequence of steps to follow to ensure maximum effectiveness of the training. There’s a difference between training content that is new/updated vs. delivered as a repeat of the same materials.  Yes, new and/or updated design takes resources and time.  How many times do you want to sit through the same old same old and get nothing new from it? Recall the definition of insanity: doing more of the same while hoping for change.  You just might want to review your Training SOP right about now. – VB


Using Neuroscience to Maximize Learning: Why we should start paying attention to the Research

In October 2015, I had the privilege of having a discussion with Anne-Maree Hawkesworth, Technical Training Manager of AstraZeneca, Australia, before the 2015 GMPTEA Biennial Conference kicked off. Anne-Maree was in Orlando, Florida to present her concurrent session entitled “Insights from ‘Inside Out’ – Employing lessons in neuroscience to facilitate successful learning” during the conference. As an avid fan and follower of the neuroscience literature being published, I was hungry to learn more, and she generously gave up a few hours of her time to meet with me over a latte and a nibble of delicious chocolate from Australia.  What follows is a snippet of the exchanged dialogue.

Q: Why has neuroscience become so popular all of a sudden?

Actually, it’s been around for a while. It’s not new, even though it sometimes seems that way. For example, look at the Ebbinghaus Forgetting Curve that is so frequently referenced. It was first introduced in 1885. And there are other classic research studies available if you conduct a good search.
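For the curious, the forgetting curve is commonly modeled as exponential decay, R = e^(-t/S), where R is retention, t is elapsed time and S is a stability constant that grows with each review. A sketch with purely illustrative numbers, not Ebbinghaus’s data:

```python
import math

def retention(days: float, stability: float = 10.0) -> float:
    """Fraction of material retained after `days` without review,
    using the classic exponential-decay model R = exp(-t/S)."""
    return math.exp(-days / stability)

for day in (1, 7, 30, 180):
    print(f"day {day:>3}: {retention(day):.0%} retained")
# day   1: 90% retained
# day   7: 50% retained
# day  30: 5% retained
# day 180: 0% retained
```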

Q: Why do trainers need to pay attention to neuroscience and the recent literature?

Quite frankly, they need to start learning how to design their training using these principles. They have to stop lecturing from the slides and speaker notes.

Q: Okay, then what do they need to know?

Concepts like chunking, memory techniques, and the effects of multitasking. Multitasking is very bad for learning. You end up learning nothing. It becomes a waste, and yet we are multitasking now more than ever because management expects us to do more. For example, take an e-learning course and answer their emails while taking the course!

V: This means the design has to change.  AMH: Exactly!

Q: We need help. What should trainers tell Management about neuroscience?

That less is actually more. Stop requiring us to dump more content into slides. We end up remembering less. If you don’t believe us, there’s scientific evidence to back up what we are saying! And don’t dictate how we use the classroom. For example, I have my learners standing for most of the sessions involving activities that I facilitate. In one of my sessions, I removed the chairs from the room and used ZERO slides.  Imagine that! Oh, and I love flip charts!

Bonus Tip: AMH shared a little secret with me. She revealed that Production folks like to do flip chart work. They just don’t want to be the spokesperson. So if you can get them past that, they’ll love being busy writing on the chart.

Q: I noticed that you didn’t include motivation in your slide deck. Was that intentional? How are they related?

I only had 60 minutes, but yes motivation is so very important. We have to keep them motivated to learn. We have to continually grab their attention.   It should be one of the 12 principles.

Q: Earlier you mentioned Chunking. What trends are you seeing in micro-learning? Are you implementing any of it?

I am looking at small chunks of learning at the time you require the learning, as opposed to “Just in Case” learning that tends to occur months in advance.  Micro-learning is great as a follow-up to formal classroom or eLearning sessions to boost memory. I like micro-learning in the form of case studies and in particular branching scenarios. Cathy Moore has some great material on her blog and webinars on branching scenarios.

I also like to chunk information within my training and use lots of white space to help separate pieces of information; this helps facilitate learning.

Q: I work with a lot of Qualified SME Trainers from Production.   How do you get past the brain lingo when you explain neuroscience?

You explain that there are parts of the brain that do different things at different times. There is no need to turn the session into brain science 101. I show them a slide or two and then move on.

Q: Earlier you mentioned “principles”. Can you elaborate on that?

I’d love to, but we are near the end of our time together. I can recommend trainers look up John Medina’s 12 Brain Rules.  Briefly, they are:

  1. Survival
  2. Stress
  3. Attention
  4. Sensory Integration
  5. Vision
  6. Exploration
  7. Exercise
  8. Sleep
  9. Wiring
  10. Memory
  11. Music
  12. Gender

Alas, I could have dialogued with her for the entire conference, albeit she was jet lagged and the latte was wearing off.  Thank you, Anne-Maree, for sharing your thoughts and effective classroom delivery techniques with us.  Together, we will shift the classroom design mindset.  -VB

Batteries Not Included: Not all Trainers come with an Instructional Design skill set

So you are a Trainer. You know how to use PowerPoint (PPT) software and how to present in the classroom. Does this make you an Instructional Designer as well? Some say yes and others cry foul as they cling to their certificates and advanced degrees.  Instructional Design (ID) as a field of study has been offered by many prominent universities for quite some time and is now better known as Instructional Technology. Entire masters programs have been created to achieve this level of credentialing. So forgive me when I say, not every Trainer or Training Manager has the skill set or ID competency embedded in his/her toolbox.  It’s analogous to the toy box on the shelf at Toys R Us: “NOTE: Batteries Not Included”. Except in our case, the note may be missing from the resume, but it is definitely embedded in the job description! And for Compliance Trainers, the challenge of making GMP training lively becomes even more daunting.

PowerPoint Slides are only a visual tool

Interactive, immersive, engaging: these are great attributes that describe active training programs. But they come at a price: an investment in instructional design skills. Using PowerPoint slides does not make training successful. Slides are only one of the tools a trainer uses to meet the objectives of the learning event, albeit a main one. It’s the design of the content/course that makes or breaks a training event. Yet, senior leaders are not grasping that just “telling them the GMPs” is not an effective delivery technique, nor is it engaging. Even if it’s backed up with a slide deck, it’s either “death by PowerPoint” or click-to-advance-to-the-next-slide.  Koreen Pagano, in her June 2014 T&D article “The Missing Piece”, describes it as “telling employees how to swim, then sending them out to sink, hoping they somehow can use the information we’ve provided to them to make it to shore” (p. 42). To make matters worse, employees end up with disciplinary letters for deviations and CAPAs for failure to follow GMPs.

Look at the GMP Refresher outlines for the last 3 years at your company. What is the ratio of content to interactivity? Oh, you say, those sessions are too large to pull off an activity? Rubbish, I respond. When I dig a little deeper, I usually discover that a lack of ID skills and creativity is a factor. And then I hear, “Oh, but we have so little time and all this content to cover, there’s no more room. If I had more time, you know, I’d add it in.” Koreen informs us that “training is supposed to prepare employees to be better, and yet training professionals often stop after providing content” (p. 43).

Remind Me Again Why We Need Refreshers?

For many organizations the sole purpose of the training is to satisfy the compliance requirements. Hence, the focus is on just delivering the content. Ironically, the intent behind the 211.25 regulation is to ensure that employees receive training more than just at orientation, and frequently enough to remain current. The goal is to ensure compliance with GMPs and SOPs and to improve performance where there are gaps. Improved business performance is the result, not just a check mark for 100% attendance. And the practice of repeating the same video year after year as the annual refresher? Efficient, yes; effective, well, just look at your deviation and CAPA data to answer that one. When you shift your focus from delivering content as the only objective to a more learner-centered design, your sessions become more performance oriented and your effectiveness reaches beyond just passing the GMP Quiz.

Cut Content to Add Interactivity

Unfortunately, full-time trainers and SMEs have the “curse of too much knowledge”, and it manifests itself in the classroom slide deck. STOP TALKING and get learners engaged in some form of activity, practice or reflection exercise. But please use some caution in moving from lecture to immersive techniques in one fell swoop! If your sessions are the typical gloom-and-doom lecture and you decide to jump right into games, you might have a mutiny in your next GMP refresher. Instead, introduce participative exercises and interactivity slowly.  See No More Boring GMP Refreshers (impact story).

Mirror, Mirror on the Wall: Whose GMP Refresher is Boring NO More!

So how does a Compliance Trainer with limited ID skills and budget pull off a lively GMP Refresher? They attend GMP TEA Biennial Conferences year after year. During the 2015 conference, I will be teaching attendees how to move from passive lecture-style GMP refreshers to active learner-centered sessions via two concurrent sessions. One of the benefits of shifting to this design is the opportunity for learners to process the content, to make it meaningful for themselves and then to associate memory links to it for later recall when the moment of need is upon them. This can’t happen while the trainer is lecturing. It happens during activities and reflection exercises designed to have learners generate their own ideas during small group interactions and link them back to the course content/objectives. This is what Part 2 of the conference session is all about: brainstorming and sharing what GMP TEA members use for interactive activities related to GMPs. Visit gmptea.net for more about the 2015 GMPTEA Conference agenda.

Hope to see you in both sessions! -VB

References:

Pagano, K., “The Missing Piece”, T&D, June 2014, pp. 41–45.

Rock, D., “Your Brain on Learning”, CLO, May 2015, pp. 30–33, 48.