Why Do CAPAs Fail Their Effectiveness Checks?

When we start talking about deviations and CAPAs, we can’t help having a sidebar discussion about root causes, and more specifically the rant about finding the true root cause. I intentionally skipped that content in the previous blog; my intention was to kick off the new Deviation and CAPA blog series by first looking at deviations by themselves, and at the learning opportunities deviations can provide about the state of control of our quality systems. From those deviations and the ensuing CAPA investigations, I ask you this: are we improving anything for the long term (aka prevention)? Are we making any progress toward sustaining those improvements?

Corrective and Preventive Action (CAPA) Steps

Let’s step back a moment and quickly review typical steps for CAPAs:

CAPA Components

The purpose of an Effectiveness Check (EC) is to verify or validate that the actions taken were effective and did not adversely affect the product, device, or process. It goes beyond a statement in the investigation form: it is a follow-up activity that closes the loop on the specific CAPA. If an effectiveness check fails (meaning the CA/PA was not effective, or another deviation/nonconforming incident has occurred), we go back to the beginning and either start again or, in most cases, re-open the investigation. The pressing question is: why did the EC fail? Almost instinctively, we believe we did not find the true root cause. Perhaps. Was there a rush to close the investigation? Probably. Did the investigation team grab the first probable cause as the root cause because the “problem” felt familiar? Maybe. Or is it a case of a fix that backfired into unintended consequences? Possibly. I will also suggest that the CA/PA may not have been properly aligned.

Ask these 3 questions about CA/PAs

  • Is the CA/PA Appropriate? The focus of this question is the affected people. What is the size of the audience? Is it mainly one person or groups of people? Can the CA/PA be executed efficiently? Is it for one site or multiple sites?

  • Is the CA/PA Economical? What budget is available? Is it a “cheap” fix, a 3–6 month project, or an expensive solution of more than 6 months that will need capital expenditure funding?

  • Is the CA/PA Feasible? The real question is about the timeline. Do you need it fast (within 3 months), or do you have time (not needed until more than 3 months from now)?

And then there is the unspoken 4th question: is the CA/PA “political”? I experienced firsthand what happens to CAPAs that are politically oriented: most of them failed their ECs. (For that story, request “Can You Stay a Little While Longer”.) The best CAPAs are the ones that map back to the root cause.
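The three screening questions above can be captured as a simple triage checklist. Here is a minimal sketch in Python; the field names, thresholds, and output flags are illustrative assumptions for this post, not part of any CAPA system:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """A proposed CA/PA, described with hypothetical fields."""
    sites: int                  # one site or multiple sites?
    est_cost_usd: float         # estimated cost of the fix
    est_months: float           # estimated duration
    needs_capital_funding: bool

def triage(action: ProposedAction, budget_usd: float, needed_within_months: float) -> dict:
    """Answer the screening questions for a proposed CA/PA."""
    return {
        # Appropriate: can it be executed efficiently, or is it a multi-site rollout?
        "multi_site_rollout": action.sites > 1,
        # Economical: does it fit the available budget?
        "economical": action.est_cost_usd <= budget_usd,
        # Feasible: will it land within the required timeline?
        "feasible": action.est_months <= needed_within_months,
        # Flag long, capital-funded efforts (more than 6 months) for extra scrutiny.
        "major_project": action.est_months > 6 or action.needs_capital_funding,
    }
```

A “cheap” two-month fix needed within three months comes back economical and feasible, while a nine-month, capital-funded, multi-site effort is flagged as a major project that belongs on a different planning track.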

Introducing the HPISC CAPA Performance Chain

On the left-hand side, you will recognize the three traditional tasks to complete. After the EC is written, trace upwards to ensure that the EC maps back to the CA/PA and that the CA/PA maps back to the root cause; hence the bottom-up arrow. On the right-hand side are the performance improvement activities I use as a Performance Consultant (PC) to bring another dimension to the CAPA investigation, namely Human Performance Improvement (HPI).

Before I can write the root cause statement, I examine the “problem”, also known as a Performance Discrepancy or an incident, and I conduct a Cause Analysis that forces me to take a three-tiered approach (the worker, the work tasks, the workplace) to the possible causes and not get bogged down in observable symptoms only. The Performance Solution is then more appropriately matched to the identified gap. In theory, this is what the corrective action(s) is supposed to do as well. During performance solution planning, key stakeholders determine what success looks like and what forms of evidence will be used, so that data collection happens as planned, not as an afterthought, and effectiveness is evaluated as agreed.

What can we really control?

In RCA/CAPA meetings, I often hear about what management should do to fix the working conditions, or how most operator errors are really management’s fault for not taking the culture factor seriously enough. While there may be some evidence to back that up, can we really control, reduce, or eliminate the human factor? Perhaps a future blog will dig into understanding human errors.

Management Can:

  • Design work situations that are compatible with human needs, capabilities and limitations
  • Carefully match employees with job requirements
  • Reward positive behaviors
  • Create conditions that optimize performance
  • Create opportunities to learn and grow professionally

Clues for Failed Effectiveness Checks

One of the first activities to perform for a failed EC is to evaluate the effectiveness check statement. I have read some pretty bizarre statements that make me question whether the EC was realistic to achieve at all. The conditions under which we expect people to perform must be the same as the conditions under which we evaluate them during an EC review. So why would we set ourselves up to fail by writing ECs that don’t match normal workplace routines? What, because it looked good in the investigation report and got the CAPA approved quicker?

Next, trace back each of the CAPA tasks to identify where to begin the re-investigation.  I also suggest that a different root cause analysis tool be used. And this is exactly what we did while I was coaching a cohort of Deviations Investigators.  Future blogs will discuss RCA tools in more detail. -VB

The Big Why for Deviations

As part of my #intentionsfor2019, I conducted a review of the past 10 years of HPIS Consulting. Yes, HPISC turned 10 in August of 2018, but I was knee-deep in PAI activities, so there was no time for celebrations or any kind of reflection until January 2019, when I could realistically evaluate HPISC: vision, mission, and the big strategic stuff. My best reflection exercise had me remembering the moment I created HPIS Consulting in my mind.

Human Performance Improvement (HPI) and Quality Systems

One of the phases of HPI work is a cause analysis for performance discrepancies. The more I learned about how the HPI methodology manages this phase, the more I noticed how similar it is to the Deviation/CAPA Quality System requirements. There I found the first touch point between the two methodologies. My formal education background and my current quality systems work finally united, and HPIS Consulting (HPISC) became an Inc.

In my role as a Performance Consultant (PC), I leverage the best techniques and tools from both methodologies, not just for deviations but for implementing the corrective actions, sometimes known as HPI solutions. In this new HPISC blog series about deviations, CAPAs, and HPI, I will be sharing more thoughts about HPISC touch points within the Quality Systems. For now, let’s get back to the Big Why for deviations.

Why are so many deviations still occurring? Have our revisions to SOPs and processes brought us farther from a “State of Control”? I don’t believe that is the intention. As a Performance Consultant, I consider deviations and the ensuing investigations rich learning opportunities to find out what’s really going on with our Quality Systems.

The 4 cross functional quality systems

At the core of the “HPISC Quality Systems Integration Triangle” is the Change Control system. It is the heartbeat of the Quality Management System, providing direction and guidance and establishing the boundaries for our processes. The Internal Auditing System is the health check, similar to our annual physicals; the readouts indicate the health of the systems. Deviations/CAPAs are analogous to a pulse check, where we check in at the current moment and determine whether we are within acceptable ranges or reaching action levels that require corrections to bring us back into “a state of control”. And then there is the Training Quality System, which in my opinion is the most cross-functional system of all. It interfaces with all employees, not just the Quality Management System. And so it functions like food, nourishing our systems and fueling sustainability for corrections and new programs.

Whether you are following 21 CFR 211.192 (Production Record Review), ICH Q7 Section 2, or 21 CFR 820.100 (Corrective and Preventive Action), thou shalt investigate any unexplained discrepancy, and a written record of the investigation shall be made that includes the conclusion and the follow-up. Really good investigations tell the story of what happened and include a solid root cause analysis revealing the true root cause(s), to which the corrective actions map back nicely, making the effectiveness checks credible. In theory, all these components flow together smoothly. However, with the continual rise of deviations and CAPAs, applying the Deviation/CAPA Management system is a bit more challenging for all of us.

Remember the PA in C-A-P-A?

Are we so focused on the corrective part, and the looming due dates we’ve committed to, that we are losing sight of the preventive actions? Are we rushing through the process to meet imposed time intervals and due dates, so that we kind of “cross our fingers and hope” the corrective actions fix the problem, without really tracing the impact of the proposed corrective solutions on the other integrated systems? Allison Rossett, author of First Things Fast: A Handbook for Performance Analysis, explains that performance occurs within organizational systems, and that the ability to achieve, improve, and maintain excellent performance depends on integrated components of other systems that involve people.

Are we likewise convincing ourselves that those fixes should also prevent recurrence? Well, that is until a repeat deviation occurs and we’re sitting in another root cause analysis meeting searching for the real root cause. Thomas Gilbert, in his groundbreaking book Human Competence: Engineering Worthy Performance, tells us that it’s about creating valuable results without excessive cost. In other words, “worthy performance” happens when the value of the business outcomes exceeds the cost of the behavior that produces them. The ROI of a three-tiered approach to solving the problem the first time happens when employees achieve their assigned outcomes and produce results greater than the cost of “the fix”.
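Gilbert expresses worth as a simple ratio of value to cost: performance is “worthy” when the ratio exceeds 1. A minimal sketch, with dollar figures invented purely for illustration:

```python
def worth(value_of_outcomes: float, cost_of_behavior: float) -> float:
    """Gilbert's worth index: W = value of accomplishments / cost of behavior.
    W > 1 means worthy performance; W < 1 means the effort costs more than it returns."""
    if cost_of_behavior <= 0:
        raise ValueError("cost must be positive")
    return value_of_outcomes / cost_of_behavior

# A fix costing $40k that prevents $120k of recurring deviation losses:
first_time_fix = worth(120_000, 40_000)      # well above 1: worthy
# The same fix reworked repeatedly because the EC keeps failing:
repeated_rework = worth(120_000, 130_000)    # below 1: the "fix" now costs more than it returns
```

This is the quiet argument for solving the problem the first time: every failed EC adds to the denominator without adding anything to the numerator.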

Performance occurs within three tiers

So, donning my Performance Consulting “glasses”, I cross back over to the HPI methodology and open up the HPI solutions toolbox. One of those tools is called a Performance Analysis (PA). This tool points us toward what’s not working for the employee, the job tasks, and/or the workplace. The outcome of a performance analysis is a three-tiered picture of what’s encouraging or blocking performance for the worker, the work tasks, and/or the work environment, and what must be done about it at these same three levels.

Root cause analysis (RCA) helps us understand why the issues are occurring and pinpoints the specific gaps that need fixing. Hence, if PA recognizes that performance occurs within a system, then performance solutions need to be developed within those same “systems” in order to ensure sustainable performance improvement. Otherwise, you have a fragment of the solution with high expectations for solving “the problem”. You might achieve short-term value initially, but suffer a long-term loss when performance does not change or worsens. Confused about PA, Cause Analysis, and RCA? Read the blog “Analysis du Jour”.

Thank goodness Training is not the only tool in the HPI toolbox! With corrective actions/HPI solutions designed with input from the three-tiered PA approach, the focus shifts away from automatically re-training the individual(s) toward implementing a solution targeted at the workers, the work processes, and the workplace environment, one that will ultimately allow successful user adoption of the changes/improvements. What a richer learning opportunity than just re-reading the SOP! -VB

  • Allison Rossett, First Things Fast: A Handbook for Performance Analysis, 2nd edition
  • Thomas F. Gilbert, Human Competence: Engineering Worthy Performance

What’s Your Training Effectiveness Strategy? It needs to be more than a survey or knowledge checks

When every training event is delivered using the same method, it’s easy to standardize the evaluation approach and the tool. Just answer these three questions:

  • What did they learn?
  • Did it transfer back to job?
  • Was the training effective?

In this day and age of personalized learning and engaging experiences, one-size-fits-all training may be efficient for an organizational rollout, but it is not the most effective for organizational impact or even behavior change. The standard knowledge check can indicate how much they remembered. It might be able to predict what will be used back on the job. But evaluate how effective the training was? That’s asking a lot from a 10-question multiple-choice/true-false “quiz”. Given the complexity of the task or the significance of the improvement to the organization, such as addressing a consent decree or closing a warning letter, it would seem that allocating budget for proper training evaluation techniques would not be challenged.

Do you have a procedure for that?

Perhaps the sticking point is explaining to regulators how decisions are made and using what criteria. Naturally, documentation is expected, and this also requires defining the process in a written procedure. It can be done. It means being in tune with training curricula, being aware of the types of training content being delivered, and recognizing the implications of the evaluation results. And of course, following the execution plan as described in the SOP. Three central components frame a Training Effectiveness Strategy: Focus, Timing, and Tools.

TRAINING EFFECTIVENESS STRATEGY: Focus on Purpose

Our tendency is to look at the scope (the what) first. I ask that you pause long enough to consider your audience, identify your stakeholders, and determine who wants to know what. This analysis shapes the span and level of your evaluation policy. For example, C-Suite stakeholders ask very different questions about training effectiveness than participants do.

The all-purpose standard evaluation tool weakens the results and disappoints most stakeholders. While it can provide interesting statistics, the real question is: what will “they” do with the results? What are stakeholders prepared to do except cut the training budget or stop sending employees to training? Identify what will be useful to whom by creating a stakeholder matrix.

Will your scope also include the training program (aka the Training Quality System), especially if it is not included in the Internal Audit Quality System? Is the quality system designed to efficiently process feedback and make the necessary changes that result from the evaluation? Assessing how efficiently the function performs is another opportunity to improve the workflow: reducing redundancies, increasing form completion speed, and humanizing the overall user experience. What is not in scope? Is it clearly articulated?

TRAINING EFFECTIVENESS STRATEGY: Timing is of course, everything

Your strategy needs to include when to administer your evaluation studies. With course feedback surveys, we are used to administering them immediately after the event; otherwise, the return rate drops significantly. For knowledge checks, we also “test” at the end of the session. Logistically it’s easier to administer because participants are still in the event, and we also increase the likelihood of higher “retention” scores.

But when does it make more sense to conduct the evaluation? Again, it depends on what the purpose is.

  • Will you be comparing before and after results? Then baseline data needs to be collected before the event begins, e.g., the current set of Key Performance Indicators or performance metrics.
  • How much time do the learners need to become proficient enough for the evaluation to be accurate? E.g., immediately after, 3 months, or realistically 6 months after?
  • When are metrics calculated and reported? Quarterly?
  • When will they be expected to perform back on the job?

Measuring Training Transfer: 3, 6 and maybe 9 months later

We can observe whether a behavior occurs and record the number of people who are demonstrating the new set of expected behaviors on the job. We can evaluate the quality of a work product (such as a completed form or an executed batch record) by recording the number of people whose work product satisfies the appropriate standard or target criteria. We can record the frequency with which the target audience promotes the preferred behaviors, both in dialogue with peers and supervisors and in their observed actions.

It is possible to do this; however, the time, people, and budget to design the tools and capture the incidents are at the core of management support for a more rigorous training effectiveness strategy. How important is it to the organization to determine whether your training efforts are effectively transferring back to the job? How critical is it to mitigate the barriers that get in the way when the evaluation results show that performance improved only marginally? It is cheaper to criticize the training event(s) than to address the real root cause(s). See Training Does Not Stand Alone (Transfer Failure section).

TRAINING EFFECTIVENESS STRATEGY: Right tool for the right evaluation type

How will success be defined for each “training” event or category of training content? Are you using tools/techniques that meet your stakeholders’ expectations for training effectiveness? If performance improvement is the business goal, how are you going to measure it? What are the performance goals that “training” is supposed to support? Seek confirmation on what will be accepted as proof of learning, evidence of transfer to the workplace, and identification of leading indicators of organizational improvement. These become the criteria by which the evaluation has value for your stakeholders. Ideally, the choice of tool should be decided after the performance analysis is discussed and before content development begins.

Performance Analysis first; then possibly a training needs analysis

Starting with a performance analysis recognizes that performance occurs within organizational systems. The analysis provides a three-tiered picture of what’s encouraging/blocking performance for the worker, work tasks, and/or the workplace and what must be in place at these same three levels in order to achieve sustained improvement. The “solutions” are tailored to the situation based on the collected data, not on an assumption that training is needed. Otherwise, you have a fragment of the solution with high expectations for solving “the problem” and reliance on the evaluation tool to provide effective “training” results. Only when the cause analysis reveals a true lack of knowledge will training be effective.

Why aren’t more Performance Analyses being conducted?

For starters, most managers want the quick fix of training because it’s a highly visible activity that everyone is familiar and comfortable with. The second possibility lies in the inherent nature of performance improvement work. Very often the recommended solution resides outside of the initiating department and requires the cooperation of others. Would a request to fix someone else’s system go over well where you work? A third and most probable reason is that it takes time, resources, and a performance consulting skill set to identify the behaviors, decisions, and “outputs” that are expected as a result of the solution. How important will it be for you to determine training effectiveness for strategic corrective actions?

You need an execution plan

Given the variety of training events and the levels of strategic importance within your organization, one standard evaluation tool may no longer be suitable. Does every training event need to be evaluated at the same level of rigor? Generally speaking, the more strategic the focus, the more tedious and time-consuming the data collection will be. Again, review your purpose and scope for the evaluation. Refer to your stakeholder matrix and determine which evaluation tool(s) are better suited to meet their expectations.

For example, completing an after-training survey for every event is laudable; however, according to Jack and Patricia Phillips (2010), executive leadership values this data the least; what they want to see most is business impact. Tools like balanced scorecards can be customized to capture and report on key performance indicators and meaningful metrics. Develop your plan wisely: start with a representative sample size and seek stakeholder agreement to conduct the evaluation study.

Life after the evaluation: What are you doing with the data collected?

Did performance improve? How will the evaluation results change future behavior and/or influence design decisions? Or perhaps the results will be used for budget justification, support for additional programs or even a corporate case study? Evaluation comes at the end but in reality, it is continuous throughout. Training effectiveness means evaluating the effectiveness of your training: your process, your content and your training quality system. It’s a continuous and cyclical process that doesn’t end when the training is over. – VB

 

Jack J. Phillips and Patricia P. Phillips, “How Executives View Learning Metrics”, CLO, December 2010.

Recommended Reading:

Jean-Simon Leclerc and Odette Mercier, “How to Make Training Evaluation a Useful Tool for Improving L&D”, Training Industry Quarterly, May-June 2017.

 

Did we succeed as intended? Was the training effective?

When you think about evaluating training, what comes to mind? It’s usually a “smile sheet”/ feedback survey about the course, the instructor and what you found useful. As a presenter/instructor, I find the results from these surveys very helpful, so thank you for completing them. I can make changes to the course objectives, modify content or tweak activities based on the comments. I can even pay attention to my platform skills where noted. But does this information help us evaluate if the course was successful?

Formative vs. Summative Distinction

Formative assessments provide data about the course design. Think form-ative; form-at of the course. The big question to address is whether the course as designed met the objectives. For example, the type of feedback I receive from surveys gives me comments and suggestions about the course.

Summative assessments are less about the course design and more about the results and impact. Think summative; think summary. They focus on the learner, not the instructional design. But when performance expectations are not met or the “test” scores are marginal, the focus shifts back to the course, the instructor/trainer, and the instructional designer with the intent to find out what happened and what went wrong. When root cause analysis fails to find the cause, it’s time to look a little deeper at the objectives.

Objectives drive the design and the assessment

Instructional Design 101 begins with well-developed objective statements for the course, event, or program. These statements, aka objectives, determine the content, and they also drive the assessment. For example, a written test or knowledge check that asks questions about the content is typically used for classroom sessions. In order for learners to be successful, the course must include the content, whether delivered in class or as pre-work. But what are the assessments really measuring? How much of the content learners remember, and maybe how much of it they can apply when they return to work?

Training effectiveness on the other hand is really an evaluation of whether we achieved the desired outcome. So I ask you, what is the desired outcome for your training: to gain knowledge (new content) or to use the content correctly back in the workplace? The objectives need to reflect the desired outcome in order to determine the effectiveness of training.

What is your desired outcome from training?

Levels of objectives, who knew?

Many training professionals have become familiar with Kirkpatrick’s 4 Levels of Evaluation over the course of their careers, but fewer are acquainted with Bloom’s Taxonomy of Objectives. Yes, objectives have levels of increasing complexity, resulting in higher levels of performance. Revised in 2001, the levels were renamed to better describe what’s required of the learner to be successful in meeting the objective. Take note: remembering and understanding sit at the lowest levels of cognitive load, applying and analyzing are mid-range, and evaluating and creating are at the highest levels.

If your end in mind is knowledge gained ONLY, continue to use the lower-level objectives. If, however, your desired outcome is to improve performance or apply a compliant workaround in the heat of a GMP moment, your objectives need to shift to a higher level of reasoning in order for the training design to be effective and meet performance expectations. They need to become more performance-based. Fortunately, much has been written about writing effective objective statements, and resources are available to help today’s trainers.

Accuracy of the assessment tools

The tools associated with the 4 levels of evaluation can be effective when used for the right type of assessment. For example, Level 1 (Reaction) surveys are very helpful for formative assessments. Level 2 (Learning) checks are effective for measuring retention and minimum comprehension and go hand in hand with learning-based objectives. But when the desired outcomes are actually performance-based, Level 2 knowledge checks need to shift up and become more application-oriented, such as “what if” situations and scenarios requiring analysis, evaluation, and even problem solving. Or shift altogether to Level 3 (Behavior) and develop a new level of assessments, such as demonstrations and samples of finished work products.

Trainers are left out of the loop

But today’s trainers haven’t always developed the instructional design skill set. They do the best they can with the resources given, including reading books and scouring the Internet. For the most part, their training courses are decent and the assessments reflect passing scores. But when Level 4 (Results) impact questions come from leadership, it becomes evident that trainers are left out of the business analysis loop and are therefore missing the performance expectations. This is where the gap exists. Trainers build courses based on knowledge/content instead, and develop learning objectives that determine what learners should learn. They create assessments to determine whether attendees have learned the content; but this does not automatically confirm that learners can apply the content back on the job in various situations under authentic conditions.

Performance objectives drive a higher level of course design

When you begin with the end in mind, namely the desired performance outcomes, the objective statements truly describe what the learners are expected to accomplish. While the content may be the same or very similar, how we determine whether employees are able to execute post-training requires more thought about the accuracy of the assessment. It must be developed from the performance objectives in order to be a valid “instrument”. The learner must perform (do something observable) so that it is evident s/he can carry out the task under real workplace conditions.

To ensure learner success with the assessment, the training activities must also be aligned with the level of the objectives. This requires the design of the training event to shift from passive lecture to active engagement intended to prepare learners to transfer what they experienced in the event back into their workspace. This includes making mistakes and learning how to recognize that a deviation is occurring. Michael Allen refers to this as “building an authentic performance environment”. Thus, trainers and subject matter experts will need to upgrade their instructional design skills if you really want training to succeed as intended. Are you willing to step up and do what it takes to ensure training is truly effective? – VB

 

Allen, M., “Design Better Design Backward”, Training Industry Quarterly, Content Development Special Issue, 2017, p. 17.

The Silver Bullet for Performance Problems Doesn’t Exist

Oh, but if it did, life for a supervisor would be easier, right? Let’s face it, “people” problems are a big deal for management. Working with humans does present its challenges, such as miscommunications between staff, data entry errors, or rushed verification checks. Sometimes the task at hand is so repetitive that the result is assumed to be okay and gets “a pass”. Add constant interruptions to the list and it becomes even harder not to get distracted and lose focus or attention to detail.

Actual behavior vs. performing as expected

In their book Performance Consulting: Moving Beyond Training, Dana Gaines Robinson and James C. Robinson describe performance as what the performer should be able to do. A performance problem occurs when the actual behavior does not meet expectations (as in what they should have been able to do). Why don’t employees perform as expected? Root cause analysis helps problem solvers and investigators uncover a myriad of possible reasons. For Life Sciences companies, correcting mistakes and preventing them from occurring again is at the heart of CAPA (Corrective and Preventive Action) systems.

A closer look at performance gaps

Dana and James Robinson conducted research regarding performer actions and sorted their results into three categories of obstacles:

  • Conditions of performers
  • Conditions of the immediate managers
  • Conditions of the organization

A checklist of common Performance Causes – scroll down for the Tool.

But, weren’t they trained and qualified?

Hopefully, employees are trained using an approved OJT (On the Job Training) methodology in which they are shown how to execute the task and then given opportunities to practice multiple times to become proficient. During these sessions, Qualified Trainers coach them, give feedback on what’s right (as expected), and provide specific instructions and suggestions to correct what’s not, so that their final performance demonstration is on par with their peer group. At the conclusion of the qualification event, employees must accept that they now own their deviations (mistakes) from this point forward. So what gets in the way of performing “as they should”, or in compliance speak, according to the procedure?

Is it a lack of knowledge, skill or is it something else?

The Robinsons explain that performance is more than the training event. It’s the combination of the overall learning experience and the workplace environment that yields performance results. Breaking that down into a formula, they suggest the following: learning experience × workplace environment = performance results.
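The multiplicative form of that formula matters: if either factor is weak, results collapse no matter how strong the other one is. A toy illustration (the 0-to-1 scores are invented for this example, not from the book):

```python
def performance_results(learning_experience: float, workplace_environment: float) -> float:
    """Robinsons' model sketched multiplicatively: results are the product,
    not the sum, of the learning experience and the workplace environment."""
    return learning_experience * workplace_environment

# Excellent training delivered into a hostile work environment:
weak_env = performance_results(0.9, 0.1)    # near zero, despite great training
# Decent training supported by a good environment:
good_env = performance_results(0.6, 0.8)    # far better outcome overall
```

In other words, a superb course cannot multiply its way past a workplace that blocks the behavior, which is exactly why re-training alone so often fails the EC.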

The root cause investigation will include a review of training and the qualification event as well as a discussion with the performer.

  • Is it a lack of frequency; not a task often performed?
  • Is it a lack of feedback or delayed feedback in which the deviation occurred without their awareness?
  • Is it task interference?

The work environment includes organizational systems and business unit processes that together enable the performer to produce the outcomes as “expected”.   These workplace factors don’t always work in perfect harmony resulting in obstacles that get in the way of “expected” performance:

  • Lack of authority – unclear roles, confusing responsibilities?
  • Lack of time – schedule conflicts; multi-tasking faux pas?
  • Lack of tools – reduced budgets?
  • Poorly stored equipment/tools – lost time searching?

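The performer-side questions and workplace obstacles above lend themselves to a simple triage checklist for the root cause investigation. A sketch, assuming nothing beyond the lists in this post (the data structure and function names are illustrative):

```python
# Illustrative checklist: items mirror the questions in the text above.
PERFORMANCE_CAUSE_CHECKLIST = {
    "performer": [
        "Lack of frequency - task not often performed?",
        "Lack of (or delayed) feedback - deviation occurred without awareness?",
        "Task interference?",
    ],
    "workplace": [
        "Lack of authority - unclear roles, confusing responsibilities?",
        "Lack of time - schedule conflicts, multi-tasking?",
        "Lack of tools - reduced budgets?",
        "Poorly stored equipment/tools - lost time searching?",
    ],
}

def open_items(findings: dict) -> list:
    """Return the checklist questions flagged during the investigation.

    `findings` maps a category name to the indices of items answered
    'yes'. Any flagged item is a performance cause still to be addressed
    before the CAPA's effectiveness check can credibly pass."""
    flagged = []
    for category, indices in findings.items():
        for i in sorted(indices):
            flagged.append(PERFORMANCE_CAUSE_CHECKLIST[category][i])
    return flagged
```

For example, `open_items({"workplace": {0, 3}})` surfaces the authority and equipment-storage questions, keeping the investigation from stopping at the performer.
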
Isn’t it just human nature?

Once the root cause investigation turns its attention to the human element, it’s easy to focus on the performer and stop there.  If it’s the performer’s first time, or the first instance related to the task, it’s tempting to label the event as an isolated incident. But when it comes back around again, it becomes apparent there was a “failure to conduct an in-depth investigation” to correct and prevent. Not surprisingly, push back against “Operator Error as Root Cause” has forced organizations to look deeper into root causes involving humans.

Who’s human nature?

Recall that one of the categories of researched obstacles was “conditions of the immediate managers”. This makes managers uncomfortable. With so much on their plates, managing a people performance problem is not what they want to see. A silver bullet like a re-training event is a nice activity that gets a big red check mark on their to-do list. However, Robert Mager and Peter Pipe, in their book Analyzing Performance Problems, provide insights into how managing direct reports may lead to unintended consequences. A brief list can be found in the Tool: Performance Causes at the end of this post.  (It’s not always the performer’s fault.)

It takes all three to correct a performance problem

The third category of researched obstacles clustered around “conditions of the organization”.  I’ve already discussed task interference above. To suggest that organizations are setting up their employees to fail is pushing it just a bit too far, so I won’t go there, but it is painful for some leaders to come to terms with the implication. To prevent issues from recurring, an examination of the incidents, and quite possibly a restructuring of systems, has to occur, because automatic re-training is not the only solution to a “people performance problem”. –VB

Robinson DG, Robinson JC. Performance Consulting: Moving beyond training. San Francisco: Berrett-Koehler; 1995.

Mager R, Pipe P. Analyzing performance problems. Belmont: Lake Publishing; 1984.

What will it take to gain access to HPI/HPT Projects?

It’s more than a name change.
Adding Performance Consulting to your department name or position title sounds like a good idea at first. You know, help get the word out and ease into Performance Consulting projects, right? Well not exactly. Adding it on is exactly what happens; possible projects get added on to your workload and the “regular” training requests keep coming. It becomes a non-event. Dana Gaines Robinson and James C. Robinson, authors of Performance Consulting, strongly recommend that you create a strategic plan for your transition. And that’s exactly what I did in 1997.

Technical Training is now known as Performance Enhancement Dept.
But not without first discussing my plan with my boss and then pitching it to his staff at his weekly meeting. My plan included the need for the change and a comparison of the traditional training model with the performance model. In this comparison, I listed the ratios of training to consulting work and where the shift would occur. Training was never going away, but we would do less of it and pick up more performance consulting work instead. I used the now familiar line: training is not always the answer. (Back then it was a very edgy statement.)

And I included in my pitch the recommended pieces from the Robinsons’ seminal book: mission, vision, guiding principles, services, responsibilities and even who our customers were, by percentage. Key to this plan and its acceptance was that we never said no to a training request, but reframed it into why and how we would measure success. If we couldn’t design a measurement strategy from the beginning, we were obligated to turn the project down. And the General Manager agreed with that guiding principle.

NOW OPEN FOR BUSINESS!
While we waited for feedback and project requests, I invited myself to a quality meeting about a GMP concern from a Line Trainer. When no one volunteered to complete a suggested task, I raised my hand and took the assignment. Cheers, we had our first project and we were now open for business. The task was then assigned to a direct report who thought I was crazy or evil, but I described how this assignment could catapult us into the limelight and showcase exactly the kind of performance work we were capable of doing. Intrigued but still doubtful, he took on the research task and I took on the rest of the project, since I had the vision and could connect the dots. We got three more requests after we went public with our first project.

One of my best requests that first year was for Peer Mentoring. Oh, did I want this project. I met with the requester and listened to his case. I researched the topic (remember, this was 1997) and got some ideas about a possible solution. When I pressed him about measuring success, he was vague: “you know, as part of organizational awareness.” I was in love with the topic and what it could mean for the operators and for the new PE department, but I could not find enough support to measure success nor justify the time and resources to make it happen. We had to scrap the project request. This was the Evaluator role coming out loud and clear. And the news got around fast: the PE department was not a dumping ground for someone else’s yearly objectives.

Okay, that’s great, but who does GMP Training?
During our success, we still managed the Compliance Training requirements as part of our agreement. Folks got so used to us and how we managed both the compliance side and performance enhancement requests that we no longer had to explain what PE was and who we were. So upon the biennial FDA inspection, the inspector asked, “Well then, who does GMP Training?” I was asked to put Training back into our department name, and we became known as TPE: Training and Performance Enhancement, which felt like we were back to square one. But the requests kept coming and the projects got much better.

My favorite project was the “Checking Policy”. It had everything going for it. Unfortunately for the company, a very expensive error was made by an operator, and site leadership wanted him terminated. The GM, who was our unofficial sponsor, knew there was a better way to manage this and needed to find the true root cause of the performance discrepancy, so he reached out to me. The rest of the story is long, so I’ll spare you the details, but three additional projects resulted from this request, and all three included operators on my SME team. This was unheard of at the time and really highlighted what an asset they were to the company despite the costly mistake. Turns out it wasn’t his fault; what a surprise!

Alas, the time came for me to leave that company and take on external consulting full time. When given the opportunity to reinvent myself once again years later, I reflected on the times when I was most engaged and excited about going to work. It was those Performance Enhancement projects that gave me such powerful examples of successfully aligning improvement projects with the business needs. But rather than do it again for one single company, I created HPIS Consulting instead so I could share the approach with more than one company.

So as this “Gaining Management Support” series concludes, I summarize all the related blogs with this final question and provide an overview as the answer.

What will it take?
Developing trust with business partners, for starters. Ongoing skill development as an Analyst, a Change Manager, an Evaluator and, of course, a Performance Solutions Specialist to build credibility. A good transition plan with a vision for years 1-3 and tentative plans for years 4 and 5. And the courage to take the projects no one else wants, if you want to become a Performance Consultant badly enough. We did it and I’ve never been happier! -V

The Gaining Management Support series includes the following blog posts:

If the only tool in your toolbox is a hammer …
Are you worthy of your line partner’s trust?
Wanted: Seeking a business partner who has performance needs
First, make “friends” with line management

If the only tool in your toolbox is a hammer …

A hammer is the right tool to drive a nail into wood or drywall, supporting the adage “the right tool for the right job”. Until the closet you installed comes off the wall and you realize that perhaps you needed screws instead, or an additional widget to support the anticipated load. It isn’t until “in-use” performance feedback is collected that you realize a different tool and additional supports are needed. Providing training (as in formal instruction) as the solution to every performance issue is analogous to using a hammer for every job.

Site leaders want business partners who can help them succeed with organizational goals and yearly objectives and solve those pesky performance issues. The more valuable the “trainer-now-known-as-performance-consultant” is in that pursuit, the more access to strategic initiatives he or she gains. So, if you want to be recognized as a business partner by site leaders, you need to keep building a “solutions toolbox” that includes more than delivering a training event or an LMS completion report.

As we begin to wrap up this series on gaining management support, we’ve been exploring how to forge relationships with line managers and earn their trust by being trustworthy. In the blog (Are you worthy of your line partner’s trust?), I asked whether you are also trustworthy as a Performance Consultant (PC).

Do you have the necessary competencies to tackle the additional performance solutions? A logical next step is to review the plethora of literature that has been published on the multiple roles for a Performance Consultant. These include Analyst, Change Manager, Solutions Specialist, and Evaluator. There are more, but let’s start with an overview of these four.

A trainer with strong instructional design skills could argue that s/he has loads of experience with three of the four roles, sans Solutions Specialist. To that end, ADDIE has been the methodology and foundation for successful training events for years. A sound training design analyzes need first, incorporates change management elements, and includes evaluation activities for level 1 (reaction) and level 2 (learning) of the Kirkpatrick Evaluation Model. So how hard could it be to master the role of Performance Consultant? Doesn’t every solution have a training component anyway?

Maybe and maybe not. As the traditional role of technical trainer evolves into Performance Consultant, the skills needed are evolving as well to keep up with management expectations for alignment with business needs.

The PC wears the hat of Analyst when working the business analysis and performance analysis portions of the HPI methodology, homing in with the skill of asking the right questions and analyzing all of the contributing factors for performance causes. This is more than a needs analysis for designing a course.

The Solutions Specialist role relies heavily upon systems thinking skills and is already well outside the PowerPoint training solution. As a problem solver working the probable causes from the Performance Cause Analysis, s/he opens the toolbox and can look past the “training design tray” into alternative performance solutions. There is much more than a hammer in that toolbox. (For more details on those types of solutions, see the “HPI Approach” paragraph in the linked blog.) Implementation experience grows with each executed solution, and a great PC also develops good project management skills.

During implementation, the PC may also have to wear the dual hat of Change Manager.  Process changes, culture change and more require strong facilitation skills and process consultation techniques to manage the different phases of change, depending on the nature of the solution and the size of the change impact.

And the Evaluator role surfaces at or near the end of project implementation, as the solution launches and goes live: collecting feedback, setting standards, and re-assessing the performance gap to determine success or the need for further gap analysis.

The role of Performance Consultant requires a greater variety of skills and depth of project experience. While training solutions are part of the PC toolkit, a training manager’s toolbox typically does not offer other performance solutions. It’s usually a hammer when a Swiss Army knife is what’s needed.  –VB

References: Rothwell WJ, ed. ASTD Models for Human Performance Improvement: Roles, Competencies, and Outputs. 2nd ed. 1999.

Are you worthy of your line partner’s trust?

In this current series on gaining management support, we’ve been exploring how credibility, trust and access influence relationships with our business partners. In Stephen Covey’s The 8th Habit: From Effectiveness to Greatness, he tells us that you cannot have trust without being trustworthy.  As Performance Consultants (PCs) continue to demonstrate their character and competence, their line leaders begin to trust them more and more.

From those initial getting-to-know-you chats (see the previous blog) to requests-for-help discussions, trust has been given and returned, and it continues to strengthen the relationship. With each request and opportunity, PCs demonstrate their character traits and further develop their Human Performance Improvement (HPI) technical competence and experience.

Following the HPI/HPT model gives the PC the ability to articulate the big picture of how this request, this performance gap, this project relates to organizational goals, thus illustrating a strategic mindset. And by following the related methodology, PCs demonstrate strong project management skills while implementing changes systematically, not just a quick course to fix a perceived knowledge gap or motivation problem.

So PCs become worthy of receiving their partners’ trust, and line partners in exchange extend it. Are you trustworthy as a Performance Consultant? Do you have the necessary competencies to tackle additional performance solutions? Stay tuned for more blogs on what those competencies are and why they are so helpful for PCs. In the meantime, check out the sidebar “Ten Steps for Building Trust” from Alan Weiss in Organizational Consulting.  -VB

How to Build More Trust

References:
Covey SR. The 8th Habit: From effectiveness to greatness. USA: Free Press; 2004.

Weiss A. Organizational Consulting: How to be an effective internal change agent. USA: Wiley; 2003.

Wanted: Seeking a business partner who has performance needs

This new series – Gaining Management Support – focuses on credibility, trust and access, and how these three concepts impact relationship management.  In the first blog of this series, First, make “friends” with line management, I wrote about establishing a working relationship with line management.


While the relationship is forming, both parties can begin to share information about each other’s areas of responsibility.  The Performance Consultant (PC) learns more about the manager’s department: work processes that are not robust; performance needs that are both urgent and ongoing, tied to “important” performance requirements.  During the dialogue, listen for internal challenges such as supplier snafus, resource-constraint hiccups, conflicting policies and procedures, and other projects that are resulting in more to-dos.  Find out if they are also managing regulatory commitments and working on closing out CAPAs and deviations related to training, performance issues or “Operator Error” mistakes.  These are all entry points to move the relationship toward partner status.


Partnering implies a two-way exchange.  The PC also shares information about HPI/HPT (Human Performance Improvement/Technology) at a depth that matches the individual’s interest and need at the time.  Remember, while your goal is to educate them about HPI, you don’t want to lecture them or add even more to their workload.  According to Mary Broad, author of Beyond Transfer of Training: Engaging systems to improve performance, the PC is striving to build a close working relationship that over time can lead to more strategic performance improvement work.  It is not only about getting projects.


However, requests for help and support are bound to surface. To demonstrate support and strengthen the desire to partner, a PC can follow up on discussions by sending additional resources such as articles, white papers and blogs from industry thought leaders.  Another popular activity is to pitch in to help meet a deadline or rebalance a workload.  Mini-projects are certain to follow, and they are an excellent way to move the relationship to partner status.  Early conversations around partnering should include:

  • purpose of working together
  • benefits of shared task; shared outcomes
  • role clarification
  • partnering process explanation and agreement

Keep in mind, however, that it is a JOINT undertaking and not a delegation of tasks to a direct report or a temporary hire.  This is where the consulting side of the partnership can begin: leading him/her through decisions and actions using the HPI methodology, says Broad.


Technical Trainer or Performance Consultant wanna-be?

As the traditional role of technical trainer evolves into Performance Consultant, the skills needed are evolving as well to keep up with management expectations for alignment with business needs.  To that end, Beverly Scott, author of Consulting on the Inside: An internal consultant’s guide to living and working inside organizations, suggests that internal consultants re-tool with some new skill sets:

  • Know the business.  Tie solutions and align results to real business issues that add value. Get to know finances.
  • Identify performance gaps before management does, or before they become the focus of a CAPA corrective action.
  • Become a systems thinker.  HPI is all about systemic performance improvement.
  • Build skills for the multiple roles a PC performs.  Become known as a change agent, systems thinker, learning strategist.
  • Pay attention to trends; talk about them.  Watch for relevance for the organization.


“The ability to give advice as a consultant comes from trust and respect, which are rooted in the relationship.” (Beverly Scott, p. 61, 2000). – VB