Why Do CAPAs Fail Their Effectiveness Checks?

When we start talking about deviations and CAPAs, we can't help having a sidebar discussion about root causes, and more specifically the rant about finding the true root cause.  I intentionally skipped that content in the previous blog.  My intention was to kick off the new Deviation and CAPA blog series by first looking at deviations by themselves, and at the learning opportunities deviations can provide us about the state of control for our quality systems.  From those deviations and ensuing CAPA investigations, I ask you this: are we improving anything for the long term (aka prevention)?  Are we making any progress towards sustaining those improvements?

Corrective Action/Preventive Action (CAPA) Steps

Let’s step back a moment and quickly review typical steps for CAPAs:

[Diagram: CAPA Components]

The purpose of an Effectiveness Check (EC) is to verify or validate that the actions taken were effective and do not adversely affect the product, device, or process.  It goes beyond the statement in the investigation form to include a follow-up activity that closes the loop on the specific CAPA.  If an effectiveness check fails (meaning the CA/PA was not effective, or another deviation/nonconforming incident has occurred), we go back to the beginning and either start again or, in most cases, re-open the investigation.  The pressing question is: why did the EC fail?  Almost instinctively, we believe that we did not find the true root cause.  Perhaps.  Was there a rush to close the investigation?  Probably.  Did the investigation team grab the first probable cause as the root cause because the “problem” felt familiar?  Maybe. Or is it a case of a fix that backfired into unintended consequences? Possibly. I will also suggest that the CA/PA may not have been aligned properly.
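At its core, this closed-loop logic reads like a simple decision rule. Here is a minimal sketch in Python; the status names are invented for illustration and are not drawn from any regulation or real QMS:

```python
# Minimal sketch of the closed-loop EC decision. Status names are invented
# for illustration; they do not come from any regulation or real QMS.

def handle_effectiveness_check(ec_passed: bool, recurrence_observed: bool) -> str:
    """Decide what happens to a CAPA after its effectiveness check."""
    if ec_passed and not recurrence_observed:
        return "CLOSE_CAPA"        # the loop is closed
    # The EC failed, or another deviation/nonconforming incident occurred:
    return "REOPEN_INVESTIGATION"  # in most cases, re-open rather than start over

print(handle_effectiveness_check(ec_passed=False, recurrence_observed=True))
# -> REOPEN_INVESTIGATION, and the pressing question becomes: why did the EC fail?
```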

Ask these 3 questions about CA/PAs

  • Is the CA/PA Appropriate? The focus of this question is about the affected people.  What is the size of this audience? Is it mainly one person or groups of people? Can the CA/PA be executed efficiently?  Is it for one site or multiple sites?

  • Is the CA/PA Economical? What budget is available? Is it a “cheap” fix or a 3–6 month project? Or an expensive solution of more than 6 months that will need capital expenditure funding?

  • Is the CA/PA Feasible? The real question is about the timeline. Do you need it fast (within 3 months), or do you have time (not needed until more than 3 months from now)?

And then there is the unspoken 4th question: is the CA/PA “political”?  I experienced firsthand what happens to CAPAs that are politically oriented; most of them failed their ECs.  (Request “Can You Stay a Little While Longer” for that story.) The best CAPAs are the ones that map back to the root cause.
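To make the first three questions concrete, here is a minimal sketch of capturing them as a triage record. The field names, categories, and thresholds are my own illustrative paraphrases of the questions above, not a standard:

```python
# Sketch of triaging a proposed CA/PA against the three questions above.
# Field names and thresholds paraphrase the criteria; they are illustrative only.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    audience: str            # "one person" or "groups of people"
    sites: int               # one site or multiple sites
    duration_months: float   # cheap fix, 3-6 month project, or > 6 months
    needs_capex: bool        # capital expenditure funding required?
    needed_by_months: float  # need it fast (< 3 months) or have time?

def triage(action: ProposedAction) -> dict:
    return {
        "appropriate": f"audience={action.audience}, sites={action.sites}",
        "economical": ("cheap fix" if action.duration_months < 3
                       else "3-6 month project" if action.duration_months <= 6
                       else "capital project (> 6 months)"),
        "feasible": ("need it fast" if action.needed_by_months < 3
                     else "have time"),
        "funding": "capex" if action.needs_capex else "operating budget",
    }

# An 8-month capital project that is needed within 2 months is a red flag:
print(triage(ProposedAction(audience="groups of people", sites=3,
                            duration_months=8, needs_capex=True,
                            needed_by_months=2)))
```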

Introducing the HPISC CAPA Performance Chain

On the left-hand side, you will recognize the 3 traditional tasks to complete.  After the EC is written, trace upwards to ensure that the EC maps back to the CA/PA and that the CA/PA maps back to the root cause; hence, the bottom-up arrow.  On the right-hand side are performance improvement activities that I use as a Performance Consultant (PC) to bring another dimension to the CAPA investigation, namely, Human Performance Improvement (HPI).
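That bottom-up trace can be sketched in a few lines. In this minimal example, the record structure and field names are hypothetical, not a real QMS schema:

```python
# Sketch of the bottom-up trace: EC -> CA/PA -> root cause.
# The record structure and field names are hypothetical.

capa_record = {
    "root_cause_id": "RC-01",
    "actions": [
        {"id": "CA-01", "addresses": "RC-01",
         "description": "Add torque wrench to the calibration program"},
    ],
    "effectiveness_check": {
        "verifies": "CA-01",
        "description": "No torque-related deviations for 6 months",
    },
}

def trace_is_intact(record: dict) -> bool:
    """True if the EC maps to a CA/PA that in turn maps to the root cause."""
    ec = record["effectiveness_check"]
    action = next((a for a in record["actions"] if a["id"] == ec["verifies"]), None)
    return action is not None and action["addresses"] == record["root_cause_id"]

print(trace_is_intact(capa_record))  # True: the chain is unbroken
```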

Before I can write the root cause statement, I examine the “problem”, also known as a Performance Discrepancy or an incident, and I conduct a Cause Analysis that forces me to take a three-tiered approach (the worker, the work tasks, the workplace) to the possible causes and not get bogged down in observable symptoms only.  The Performance Solution is then more appropriately matched to the identified gap. In theory, this is what the corrective action(s) is supposed to do as well. During performance solution planning, the determination of success, and what forms of evidence will be used, happens with key stakeholders, so that collecting the data happens as planned, not as an afterthought, and the effectiveness is evaluated as discussed.

What can we really control?

In RCA/CAPA meetings, I often hear about what management should do to fix the working conditions, or how most of the operator errors are really management's fault for not taking the culture factor seriously enough.  While there may be some evidence to back that up, can we really control, reduce, or eliminate the human factor?  Perhaps a future blog will tackle understanding human errors.

Management Can:

  • Design work situations that are compatible with human needs, capabilities and limitations
  • Carefully match employees with job requirements
  • Reward positive behaviors
  • Create conditions that optimize performance
  • Create opportunities to learn and grow professionally

Clues for Failed Effectiveness Checks

One of the first activities to perform for a failed EC is to evaluate the effectiveness check statement.  I have read some pretty bizarre statements that challenge whether the EC was realistic to achieve at all. The conditions under which we expect people to perform must be the same as the conditions under which we evaluate them during an EC review.  So why would we set ourselves up to fail by writing ECs that don't match normal workplace routines? What, because it looked good in the investigation report and it got the CAPA approved quicker?

Next, trace back each of the CAPA tasks to identify where to begin the re-investigation.  I also suggest that a different root cause analysis tool be used; this is exactly what we did while I was coaching a cohort of Deviation Investigators.  Future blogs will discuss RCA tools in more detail. -VB

The Big Why for Deviations

As part of my #intentionsfor2019, I conducted a review of the past 10 years of HPIS Consulting.  Yes, HPISC turned 10 in August of 2018, and I was knee-deep in PAI activities, so there was no time for celebrations or reflection of any kind until January 2019, when I could realistically evaluate HPISC: vision, mission, and the big strategic stuff.  My best reflection exercise had me remembering the moment I created HPIS Consulting in my mind.

Human Performance Improvement (HPI) and Quality Systems

One of the phases of HPI work is a cause analysis for performance discrepancies.  The more I learned about how the HPI methodology manages this phase, the more I noticed how similar it is to the Deviation/CAPA Quality System requirements.  And there I found the first touch point between the two methodologies.  My formal education background and my current quality systems work finally united, and HPIS Consulting (HPISC) became an Inc.

In my role as a Performance Consultant (PC), I leverage the best techniques and tools from both methodologies, not just for deviations but for implementing the corrective actions, sometimes known as HPI solutions.  In this new HPISC blog series about deviations, CAPAs, and HPI, I will be sharing more thoughts about HPISC touch points within the Quality Systems. For now, let's get back to the Big Why for deviations.

Why are so many deviations still occurring? Have our revisions to SOPs and processes brought us farther from a “State of Control”? I don’t believe that is the intention. As a Performance Consultant, I consider deviations and the ensuing investigations rich learning opportunities to find out what’s really going on with our Quality Systems.

The 4 cross functional quality systems

At the core of the “HPISC Quality Systems Integration Triangle” is the Change Control system.  It is the heartbeat of the Quality Management System, providing direction and guidance and establishing the boundaries for our processes.  The Internal Auditing System is the health check, similar to our annual physicals; the readouts indicate the health of the systems.  Deviations/CAPAs are analogous to a pulse check, where we check in at the current moment and determine whether we are within acceptable ranges or reaching action levels that require corrections to bring us back into “a state of control”.  And then there is the Training Quality System, which in my opinion is the most cross-functional system of all.  It interfaces with all employees, not just the Quality Management System.  And so it functions like food, nourishing our systems and fueling sustainability for corrections and new programs.

Whether you are following 21 CFR 211.192 (Production Record Review), ICH Q7 Section 2, or 820.100 (Corrective and Preventive Action), thou shalt investigate any unexplained discrepancy, and a written record of the investigation shall be made that includes the conclusion and the follow-up. Really good investigations tell the story of what happened and include a solid root cause analysis revealing the true root cause(s) to which the corrective actions map back nicely, thus making the effectiveness checks credible. In theory, all these components flow together smoothly.  However, with the continual rise of deviations and CAPAs, the application of the Deviation/CAPA Management system is a bit more challenging for all of us.

Remember the PA in C-A-P-A?

Are we so focused on the corrective part and the looming due dates we've committed to that we are losing sight of the preventive actions? Are we rushing through the process to meet imposed time intervals and due dates, such that we kind of “cross our fingers and hope” that the corrective actions fix the problem without really tracing the impact of the proposed corrective solutions on the other integrated systems? Allison Rossett, author of First Things Fast: A Handbook for Performance Analysis, explains that performance occurs within organizational systems, and that the ability to achieve, improve, and maintain excellent performance depends on integrated components of other systems that involve people.

Are we likewise convincing ourselves that those fixes should also prevent recurrence? Well, that is until a repeat deviation occurs and we're sitting in another root cause analysis meeting searching for the real root cause.  Thomas Gilbert, in his groundbreaking book Human Competence: Engineering Worthy Performance, tells us that it's about creating valuable results without excessive cost.  In other words, “worthy performance” happens when the value of business outcomes exceeds the cost of doing the tasks.  The ROI of a 3-tiered approach to solving the problem the first time happens when employees achieve their assigned outcomes and produce results greater than the cost of “the fix”.
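Gilbert's idea can be read as a simple ratio: performance is worthy when value divided by cost exceeds 1. As a back-of-the-envelope sketch (all numbers below are invented for illustration):

```python
# Toy illustration of Gilbert's "worthy performance" idea as a ratio:
# worth = value of the accomplishment / cost of the behavior ("the fix").
# All numbers below are invented for the example.

def worth(value_of_outcome: float, cost_of_fix: float) -> float:
    return value_of_outcome / cost_of_fix

# A quick retrain that doesn't stick: modest value, cost repeated 3 times.
retrain_only = worth(value_of_outcome=5_000, cost_of_fix=2_000 * 3)

# A 3-tiered fix done once: larger value, one-time cost.
three_tiered = worth(value_of_outcome=50_000, cost_of_fix=20_000)

print(f"retrain-only worth: {retrain_only:.2f}")  # 0.83 -> not worthy
print(f"three-tiered worth: {three_tiered:.2f}")  # 2.50 -> worthy
```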

Performance occurs within three tiers

So, donning my Performance Consulting “glasses”, I cross back over to the HPI methodology and open up the HPI solutions toolbox.  One of those tools is called a Performance Analysis (PA). This tool points us in the direction of what's not working for the employee, the job tasks, and/or the workplace. The outcome of a performance analysis is a 3-tiered picture of what's encouraging or blocking performance for the worker, the work tasks, and/or the work environment, and of what must be done about it at these same three levels.

Root cause analysis (RCA) helps us understand why the issues are occurring and provides the specific gaps that need fixing.  Hence, if PA recognizes that performance occurs within a system, then performance solutions need to be developed within those same “systems” in order to ensure sustainable performance improvement.  Otherwise, you have a fragment of the solution with high expectations for solving “the problem”.  You might achieve short-term value initially, but suffer a long-term loss when performance does not change or worsens. Confused between PA, Cause Analysis, and RCA? Read the blog “Analyses du jour”.

Thank goodness Training is not the only tool in the HPI toolbox!   With corrective actions/HPI solutions designed with input from the 3-tiered PA approach, the focus shifts away from the need to automatically re-train the individual(s) to implementing a solution targeted at the workers, the work processes, and the workplace environment that will ultimately allow successful user adoption of the changes/improvements.   What a richer learning opportunity than just re-reading the SOP! -VB

  • Allison Rossett, First Things Fast: A Handbook for Performance Analysis, 2nd edition
  • Thomas F. Gilbert, Human Competence: Engineering Worthy Performance
You might also want to read:

Did we succeed as intended? Was the training effective?

When you think about evaluating training, what comes to mind? It's usually a “smile sheet”/feedback survey about the course, the instructor, and what you found useful. As a presenter/instructor, I find the results from these surveys very helpful, so thank you for completing them. I can make changes to the course objectives, modify content, or tweak activities based on the comments. I can even pay attention to my platform skills where noted. But does this information help us evaluate whether the course was successful?

Formative vs. Summative Distinction

Formative assessments provide data about the course design. Think form-ative; form-at of the course. The big question to address is whether the course as designed met the objectives. For example, the type of feedback I receive from surveys gives me comments and suggestions about the course.

Summative assessments are less about the course design and more about the results and impact. Think summative; think summary. It's more focused on the learner, not the instructional design. But when the performance expectations are not met or the “test” scores are marginal, the focus shifts back to the course, instructor/trainer, and instructional designer with the intent to find out what happened. What went wrong? When root cause analysis fails to find the cause, it's time to look a little deeper at the objectives.

Objectives drive the design and the assessment

Instructional Design 101 begins with well-developed objective statements for the course, event, or program. These statements, aka objectives, determine the content and they also drive the assessment. For example, a written test or knowledge check is typically used for classroom sessions and asks questions about the content. In order for learners to be successful, the course must include the content, whether delivered in class or as pre-work. But what are the assessments really measuring? How much of the content learners remember, and maybe how much of it they can apply when they return to work.

Training effectiveness on the other hand is really an evaluation of whether we achieved the desired outcome. So I ask you, what is the desired outcome for your training: to gain knowledge (new content) or to use the content correctly back in the workplace? The objectives need to reflect the desired outcome in order to determine the effectiveness of training.

What is your desired outcome from training?

Levels of objectives, who knew?

Many training professionals have become familiar with Kirkpatrick's 4 Levels of Evaluation over the course of their careers, but fewer are acquainted with Bloom's Taxonomy of Objectives. Yes, objectives have levels of increasing complexity resulting in higher levels of performance. Revised in 2001, the levels were renamed to better describe what's required of the learner to be successful in meeting the objective. Take note: remembering and understanding are the lowest levels of cognitive load, applying and analyzing are mid-range, and evaluating and creating are at the highest levels.

If your end in mind is knowledge gained ONLY, continue to use the lower-level objectives. If, however, your desired outcome is to improve performance or apply a compliant workaround in the heat of a GMP moment, your objectives need to shift to a higher level of reasoning in order to be effective with the training design and meet performance expectations. They need to become more performance-based. Fortunately, much has been written about writing effective objective statements, and resources are available to help today's trainers.
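The shift often shows up in the verbs. As a quick sketch (the verb choices are illustrative examples drawn from common Bloom-style verb lists, not an official set):

```python
# Illustrative mapping of the revised Bloom's levels to sample objective verbs.
# Verbs are examples from common Bloom-style lists, not an official set.
BLOOM_VERBS = {
    "remember":   ["list", "define", "recall"],            # lowest cognitive load
    "understand": ["explain", "summarize", "classify"],
    "apply":      ["execute", "demonstrate", "operate"],   # mid-range
    "analyze":    ["differentiate", "troubleshoot"],
    "evaluate":   ["justify", "critique", "verify"],       # highest levels
    "create":     ["design", "formulate", "assemble"],
}

# A knowledge-only objective vs. a performance-based rewrite of the same topic:
knowledge_objective = "List the steps of the cleaning procedure."
performance_objective = ("Demonstrate the cleaning procedure "
                         "under routine production conditions.")
```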

Accuracy of the assessment tools

The tools associated with the 4 levels of evaluation can be effective when used for the right type of assessment. For example, Level 1 (Reaction) surveys are very helpful for formative assessments. Level 2 (Learning) assessments are effective in measuring retention and minimum comprehension, and they go hand in hand with learning-based objectives. But when the desired outcomes are actually performance-based, Level 2 knowledge checks need to shift up to become more application-oriented, such as “what if” situations and scenarios requiring analysis, evaluation, and even problem solving. Or shift altogether to Level 3 (Behavior) and develop a new level of assessments, such as demonstrations and samples of finished work products.

Trainers are left out of the loop

But today's trainers don't always have a developed instructional design skill set. They do the best they can with the resources given, including reading books and scouring the Internet. For the most part, their training courses are decent and the assessments reflect passing scores. But when it comes to Level 4 (Results) impact questions from leadership, it becomes evident that trainers are left out of the business analysis loop and are therefore missing the performance expectations. This is where the gap exists. Trainers build courses based on knowledge/content instead and develop learning objectives that determine what learners should learn. They create assessments to determine whether attendees have learned the content, but this does not automatically confirm learners can apply the content back on the job in various situations under authentic conditions.

Performance objectives drive a higher level of course design

When you begin with the end in mind, namely the desired performance outcomes, the objective statements truly describe what the learners are expected to accomplish. While the content may be the same or very similar, how we determine whether employees are able to execute post-training requires more thought about the accuracy of the assessment. It must be developed from the performance objectives in order for it to be a valid “instrument”. The learner must perform (do something observable) so that it is evident s/he can carry out the task according to the real workplace conditions.

To ensure learner success with the assessment, the training activities must also be aligned with the level of the objectives. This requires the design of the training event to shift from passive lecture to active engagement, intended to prepare learners to transfer what they experienced in the event back to their workspace.   This includes making mistakes and learning to recognize when a deviation is occurring. Michael Allen refers to this as “building an authentic performance environment”. Thus, trainers and subject matter experts will need to upgrade their instructional design skills if you really want to succeed with training as intended. Are you willing to step up and do what it takes to ensure training is truly effective? – VB

 

Allen, M. Design Better Design Backward, Training Industry Quarterly, Content Development, Special Issue, 2017, p. 17.

Are you doing the Training 2 Step?

Has this situation ever happened to you? You are in a root cause meeting and the *CAPA investigator is conducting an interview with the primary individual involved with the discrepancy. When asked why this happened, he shrugs first and then quietly mumbles, “I don't know.” When pushed further, he very slowly says, “I just kind of went brain dead for a moment.”  And then silence.

While that may be the honest truth, the investigator must resist the temptation to label it as Operator Error and must explore possible causes. One of my favorite root cause analysis tools for the “I Don't Know Why” response is the Fishbone Diagram, also known as the 4 M's diagram. This tool provides a structured focus to explore many possibilities and not just stop at the first plausible cause, such as Operator Error. Aptly nicknamed, the 4 M's are Man, Machine, Methods, and Materials.   When the results of this exercise point to a training or operator-related issue, don't stop at “operator error –> retrain”.

Consider for a moment what this retraining session would look like. Will re-reading the procedure be enough to “jog his memory”? Will repeating the procedure be a good use of precious time when s/he already knows what to do? More than likely it won't prevent “going brain dead” from happening again. Instead, do the HPISC Training 2 Step:

Step 1 – confirm the results of the gap analysis

What task, what step(s) or actions are in question?

Step 2 – address why the original training did not transfer back to the job.

Using the 4 M's diagram as the framework, explore the Man, Machine, Methods, and Materials questions with regard to the training this operator received. See the diagram below. The full set of questions can be found in the eBook Training Cause Analysis.

[Diagram: the 4 M's for Training Cause Analysis]
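As a minimal sketch of how the framework organizes the exploration (the questions below are abbreviated illustrations of my own, not the full set from the eBook):

```python
# Sketch of the 4 M's framework applied to a training cause analysis.
# Questions are abbreviated illustrations, not the eBook's full set.
FOUR_MS_TRAINING_QUESTIONS = {
    "Man":       ["Was the operator qualified on this specific task?",
                  "How long since the task was last performed?"],
    "Machine":   ["Did training use the same equipment model as production?"],
    "Methods":   ["Does the SOP match how the task is actually performed?",
                  "Was training structured OJT or just read-and-understood?"],
    "Materials": ["Were the job aids and training materials current?"],
}

for category, questions in FOUR_MS_TRAINING_QUESTIONS.items():
    print(category)
    for question in questions:
        print(f"  - {question}")
```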

Is this really worth it?

I think it is. Conducting these 2 steps will accomplish two objectives. It will provide further evidence that some kind of training is needed. And it will highlight what areas need revising, whether for the performer, the training program, or the course materials. Yet, there are some who will resist this added work because it's easier to find blame than to uncover the cause. Fixing the true root cause could trigger a re-validation of the process, or an FDA filing if it's a major process change. Why create more work? Isn't it easier to just retrain 'em? No, not really. Finding the true root cause is the only effective way of eliminating many of the costly, recurring problems that can plague manufacturers.

But what if

Some folks will push back with the excuse “this never caused a problem until now”, so it must be the operator's fault! This may be the first time it was discovered, but that does not mean the procedure is 100% accurate. Often, experienced operators know how to work around an incorrect step and don't always report a misstep in the procedure, while a less savvy operator follows the procedure and causes the non-conformance to occur. See the sidebar SOP Logic Rules. Is the procedure difficult or lengthy? Does it require weeks to become proficient, let alone qualified? Was the qualification routine, or was it performed as a simulation? Was the procedure written with support from a lead operator or qualified trainer? Did the draft version undergo some kind of field test or dry run prior to release? And the classic situation: are proposed changes hung up in change control awaiting effective release?

Understanding Why Human Errors Occur

Industry practice is evolving to explore why people make the decisions they do by looking at the organization's systems. It's usually a poor decision made somewhere in the error chain, and we must believe that the person who made the poor decision did not intend for the error to occur. As part of CAPA investigations, we need to explore their physical environment as well: the conditions under which they make those decisions. The Training Program Improvement Checklist, which can be requested using this link, will help capture your findings.

If you are going to spend time and money on training, at least identify what the gap is; fix that and then “train” or provide awareness of what was corrected to prevent the issue from recurring.   That is, after all, the intention of *Corrective Action Preventive Action investigations. -VB

You may also want to explore these other library gems:

Why the Band Aids Keep Falling Off


The Silver Bullet for Performance Problems Doesn’t Exist

Oh, but if it did, life for a supervisor would be easier, right? Let's face it, “people” problems are a big deal for management. Working with humans does present its challenges, such as miscommunications between staff, data entry errors, or rushed verification checks. Sometimes the task at hand is so repetitive that the result is assumed to be okay and gets “a pass”.  Add constant interruptions to the list and it becomes even harder not to get distracted and lose focus or attention to detail.

Actual behavior vs. performing as expected

In their book, Performance Consulting: Moving Beyond Training, Dana Gaines Robinson and James C. Robinson describe performance as what the performer should be able to do. A performance problem occurs when the actual behavior does not meet expectation (as in should have been able to do).   Why don’t employees perform as expected? Root cause analysis helps problem solvers and investigators uncover a myriad of possible reasons.   For Life Sciences companies, correcting mistakes and preventing them from occurring again is at the heart of CAPA systems (Corrective Actions Preventive Actions).

A closer look at performance gaps

Dana and James Robinson conducted research regarding performer actions and sorted their results into three categories of obstacles:

  • Conditions of performers
  • Conditions of the immediate managers
  • Conditions of the organization

A checklist for common Performance Causes  – scroll down for the Tool.

But, weren’t they trained and qualified?

Hopefully, employees are trained using an approved OJT (On the Job Training) Methodology in which they are shown how to execute the task and then given opportunities to practice multiple times to become proficient. During these sessions, they are coached by Qualified Trainers and given feedback on what's right (as expected), along with specific instructions to correct what's not right and suggestions for tweaking their performance, so that their final performance demonstration is on par with their peer group. At the conclusion of the qualification event, employees must accept that they now own their deviations (mistakes) from this point forward. So what gets in the way of performing “as they should” or, in compliance speak, according to the procedure?

Is it a lack of knowledge, skill or is it something else?

The Robinsons explain that performance is more than the training event. It's a combination of the overall learning experience and the workplace environment that yields performance results. Breaking that down into a formula, per se, they suggest the following: learning experience × workplace environment = performance results.
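The multiplication is the point: if either factor collapses, so do the results. A toy illustration (the scores are invented, on a 0-to-1 scale):

```python
# Toy illustration of the Robinsons' formula:
# learning experience x workplace environment = performance results.
# Scores (0.0 to 1.0) are invented for the example.

def performance_results(learning_experience: float,
                        workplace_environment: float) -> float:
    return learning_experience * workplace_environment

print(performance_results(0.9, 0.9))  # 0.81 -> strong course, supportive workplace
print(performance_results(0.9, 0.1))  # 0.09 -> same course, obstructive workplace
```

The same well-designed course yields very different results depending on the environment it lands in, which is why fixing only the training event rarely fixes performance.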

The root cause investigation will include a review of training and the qualification event as well as a discussion with the performer.

  • Is it a lack of frequency – a task not performed often?
  • Is it a lack of feedback or delayed feedback in which the deviation occurred without their awareness?
  • Is it task interference?

The work environment includes organizational systems and business unit processes that together enable the performer to produce the outcomes as “expected”.   These workplace factors don't always work in perfect harmony, resulting in obstacles that get in the way of “expected” performance:

  • Lack of authority – unclear roles, confusing responsibilities?
  • Lack of time – schedule conflicts; multi-tasking faux pas?
  • Lack of tools – reduced budgets?
  • Poorly stored equipment/tools – lost time searching?

Isn’t it just human nature?

Once the root cause investigation turns its attention to the human element, it's easy to focus on the performer and stop there.   If it's the first time for the performer, or the first instance related to the task, it's tempting to label the event as an isolated incident. But when it comes back around again, it becomes apparent there was a “failure to conduct an in-depth investigation” to correct and prevent. Not surprisingly, push back on “Operator Error as Root Cause” has forced organizations to look deeper into the root causes involving humans.

Who’s human nature?

Recall that one of the categories of researched obstacles was “conditions of the immediate managers”. This makes managers uncomfortable. With so much on their plates, managing a people performance problem is not what they want to see. A silver bullet like a re-training event is a nice activity that gets a big red check mark on their to-do list. However, Robert Mager and Peter Pipe, in their book Analyzing Performance Problems, provide insights into how managing direct reports may lead to unintended consequences. A brief list can be found here – scroll to Tool: Performance Causes.  (It's not always the performer's fault.)

It takes all three to correct a performance problem

The third category of researched obstacles clustered around “conditions of the organization”.  I've already discussed task interference above. To suggest that organizations are setting up their employees to fail is pushing it just a bit too far, so I won't go there, but it is painful for some leaders to come to terms with the implication. In order to prevent issues from recurring, an examination of the incidents, and quite possibly a restructuring of systems, has to occur, because automatic re-training is not the only solution to a “people performance problem”. –VB

Robinson DG, Robinson JC. Performance Consulting: Moving Beyond Training. San Francisco: Berrett-Koehler; 1995.

Mager R, Pipe P. Analyzing Performance Problems. Belmont: Lake Publishing; 1984.

Tired of repeat errors? Ask a Performance Consultant to help you design a better corrective action

In this last installment of the “Making HPI Work for Compliance Trainers” series, I blog about one of the biggest complaints I hear over and over again from Compliance Trainers: management doesn't really support training.  It's hard to ask for “more of the same” even though you know your programs are now different.  In previous blogs, I shared why management hasn't totally bought into the HPI methodology yet. See the blog “Isn't this still training?”

 

Given the constant pressure to shrink budgets and improve the bottom line, managers don't usually allow themselves the luxury of being proactive, especially when it comes to training.  So they tend to fall back on quick-fix solutions that give them a check mark and “clear their desk” momentarily.  For the few times this strategy works, there are twice as many times when those fixes backfire and the unintended consequences are worse.

 

In the article “Why the Band Aids Keep Falling Off”, I provide an alternate strategy that emphasizes moving away from an events-only focus to exploring the three levels of interaction that influence performance: the individual performer, the task/process, and the organizational quality systems.  These same three levels are where performance consultants carry out their best work when supported by their internal customers.  The good news is that the first step is the same; it begins with a cause analysis.  See the blog “Analyses du jour” for more thoughts on why these are essentially the same approach.

 

The difference is that the corrective action is not a reactive quick fix but a systems approach to correcting the issue and preventing it from showing up again.  System based solutions are the foundation of many HPI/HPT projects that require cross functional support and collaborative participation across the site / organization.  And this is where a PC needs support from senior leaders.

 

We wrap up this series here and introduce the next series – Gaining Management Support – where I blog about credibility, trust, and access and how these 3 concepts impact relationship management.

What's the difference between Trainers and Performance Consultants? Aren't they one and the same?

After 10 years of HPI consulting, I'm still asked this question a lot.  In the blog “Isn't this still training?”, I shared why it still looks like training.  And so this blog brings us to the beginning of another series within the Human Performance Improvement (HPI) arena.  I'm calling it “HPI: Making it Work for Compliance Trainers”. In this blog, I will expand upon 6 elements of comparison to illustrate the difference between the two and the depth of impact one has over the other.

FOCUS

Training addresses the learning needs of employees.  Various definitions include closing the knowledge and skill gap between what they know now and what they know afterwards.  It's built on the assumption that the cause of the gap is a lack of knowledge and skill.  Performance Consulting addresses business goals and the performance needs of the affected employees.  Training is just one of the possible solutions that can be used, not the only one.

OUTPUTS

A training solution delivers a structured learning event.  Whether it is classroom, virtual, or self-led, the event itself is the end goal.  Performance Consulting or HPI projects are implemented to improve performance.  The end goal is not the solution itself, such as the specific HPI project, but rather a positive change in performance that leads to the achievement of the business goal.  The endpoint is “further down the road”, so it takes longer to produce the results.

ACCOUNTABILITY

With training, the Trainer is held accountable for the event.  In a lot of organizations, there is an implied but not spoken accountability for the results back on the job.  But without the proper systems and support mechanisms in place, many Trainers get “blamed” for training transfer failure.  Here’s the big difference for me.  Performance Consultants (PCs) partner with their internal customers, system owners and business leaders in support of the business goals.  The accountability for improved performance becomes shared across the relationships.

ASSESSMENTS

Trainers typically conduct a needs analysis to design the best learning “program” or course possible.  PCs conduct performance analyses and gap assessments to identify causes that can go beyond knowledge and skills.  See the blog “Analyses du jour”.

MEASURES

Trainers very often use course evaluation sheets as a form of measurement.  In the Compliance Training arena, knowledge checks and quizzes have also become the norm.  PCs measure the effect on performance improvement and the achievement of business objectives.

ORGANIZATIONAL GOALS

This is another key differentiator.  Training is typically viewed as a cost.  Compliance Trainers are all too familiar with the phrase, “GMP Training is a necessary evil”.  PCs become business partners in solving performance gaps and accomplishing organizational goals.

For a visual graphic and expanded description of these 6 elements, you can request the HPISC white paper, Why They Still Want Training?

I also recommend that you request the HPISC white paper, Performance Analysis: lean approach for performance problems.  

 

Analyses du jour: Isn’t it really all the same thing?

So there's root cause analysis and gap analysis and now performance cause analysis?  Is there a difference? Do they use different tools?  It can be overwhelming to decipher the jargon, no doubt!  I think it depends on which industry you come from and whether your focus is a regulatory/quality system point of view or a performance consulting perspective.  To me, it doesn't change the outcome.  I still want to know why the deviation occurred, how the mistake was made, and/or what allowed the discrepancy to happen.  Mixing and matching the tools allows me to leverage the best techniques from all of them.

Why we love root cause analysis

For starters, it's GMP and we get to document our compliance with CAPA requirements.  It allows us to use tools and feel confident that our “data doesn't lie”.  This bodes well for our credibility with management.  And it provides the strategic connection between our training solution (as a corrective action) and site quality initiatives, thus elevating the importance, and quite possibly the priority, of completing the corrective action on time.

Asking the right questions

Root cause analysis and problem solving steps dovetail nicely.  See the sidebar below.  The process requires us to slow down and ask questions methodically and sequentially.  More than one question is asked, for sure.  When you rush the process, it's easy to grab what appears to be obvious.  And that's one of the early mistakes that can be made with an overreliance on the tools.  The consequence?  Jumping to the wrong conclusion that automatic re-training or refresher training is the needed solution.  Done, checkmark.  On to the next problem that needs a root cause analysis. But when the problem repeats or returns with a more serious consequence, we question why the training did not transfer, or we wonder what's wrong with the employee: why is s/he not getting this yet?

[Sidebar: Root Cause Analysis and Problem Solving Steps]

No time to do it right, but time to do it twice!

Solving the problem quickly and rapidly closing the CAPA allows us to get back to our other pressing tasks.  Unfortunately, “band-aids” fall off.  The symptom was only covered up and temporarily put out of sight, but the original problem wasn’t solved.  So now, we must investigate again (spend more time) and dig a little deeper.  We have no time to do it right, but find the time to do it twice.  Madness!

Which tool to use?

My favorite human performance cause tool is the fishbone diagram, although the “5 Whys” technique is a close second.  Both tools force you to dig a little deeper into the causes.  Yes, the end result often reveals something is amiss with “the training”, but is it man, machine, method, or materials? Ah-hah, that is very different than repeat training on the procedure!  Alas, when we have asked enough of the right questions, we are led to the true cause(s).  That is the ultimate outcome I seek, no matter what you call the process or which tool is used. -VB
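As a closing illustration of how the 5 Whys digs past “operator error” (the deviation and every answer below are invented for the example):

```python
# Invented example of a 5 Whys chain that digs past "operator error".
five_whys = [
    ("Why was the wrong buffer used?",
     "The operator grabbed the bottle staged at the bench."),
    ("Why was the wrong bottle staged?",
     "Two buffers carry nearly identical labels."),
    ("Why are the labels nearly identical?",
     "The label template was copied without a distinguishing color band."),
    ("Why was the template copied as-is?",
     "No labeling review step exists in the change control checklist."),
    ("Why is there no review step?",
     "The checklist predates the introduction of the second buffer."),
]

for i, (why, answer) in enumerate(five_whys, start=1):
    print(f"Why #{i}: {why}\n  -> {answer}")
# The trail ends at Methods (the change control checklist), not the operator.
```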

HPIS C. has articles, impact stories and white papers.
Published article – Why the Band Aids Keep Falling Off

 

Request this Job Aid from HPIS C. Website.