How to Improve the Customer Training and Learning Evaluation Process

Updated: January 13, 2022 | 10 min read

Evaluation is the process of determining the value, character, or quality of something or someone. It is both an art and a science, and it has long been a shortcoming of most customer training organizations. Why? Because historically, the evaluation of learning has started and ended with a course evaluation. If evaluation stops there, you miss something crucial: the opportunity to assess whether a student actually learned anything. What did customers learn? Can they apply what they’ve learned on the job? Will what they’ve learned make an impact? Most course and customer training evaluations aren’t equipped to answer these questions, yet they are critical for measuring downstream implications such as product adoption.

A course evaluation assesses a student’s reaction to the learning and is classified as Level 1 in the Kirkpatrick Evaluation Model. The model includes three other levels of learning evaluation that customer training organizations use much less frequently.

[Figure: Use of the Kirkpatrick Evaluation Model, survey results from the TSIA Education Services Benchmark Survey]

As the data from the TSIA Education Services Benchmark Survey shows, 96% of education services organizations conduct Level 1 evaluation. However, only 4% conduct Level 4 evaluation, which assesses the impact of training on customer outcomes. I have been conducting this benchmark survey for 12 years, and while I’ve seen some increase in the use of Level 2 evaluation, there has been almost no change in usage of Levels 3 and 4. Let’s take a closer look at the Kirkpatrick Evaluation Model and how customer training organizations can use it to evolve from merely measuring a student’s reaction to learning to gauging its impact on the customer’s business.

What is the Kirkpatrick Evaluation Model?

The evaluation of learning has long been defined by the Kirkpatrick Evaluation Model. Formulated by Dr. Donald Kirkpatrick in the 1950s, it consists of four levels.

[Figure: The 4 Levels of the Kirkpatrick Evaluation Model]

Level 1 – Reaction: This level gives learners the opportunity to react to the content, the instructor, the facility, and so on. This is the level where “smile sheets” are used to evaluate the course or learning module shortly after its completion.

Level 2 – Learning: Level 2 assesses whether a learner has indeed learned something. This is often accomplished via pre- and post-training diagnostics, with the post-training portion administered immediately after the learning is completed.

Level 3 – Behavior: This level assesses whether the learner can apply what they have learned by looking for a change in behavior. For example, can the learner now complete a task more quickly or use product features and functions that were not used prior to training? Level 3 evaluation is best applied three to six months after training is completed.

Level 4 – Results: At the end of the day, it comes down to business results. This is what Level 4 evaluates: the tangible benefits of training. Does it lower spending, decrease downtime, reduce risk, increase customer satisfaction, improve quality, and so on?

Where Training Evaluation Falls Short

Conducting Level 1 evaluation is the starting point, not the ending point, of the evaluation process. Education organizations that never go beyond this level leave a wealth of valuable data on the table. Without data from the next levels, you miss the chance to understand how learning critical skills, or failing to learn them, affects adoption.

While customer training evaluations are an important first step, they typically do not measure learning. This is where most education organizations fall short. Maybe you receive glowing reviews in your course evaluations, but find out later that product adoption is low. Without evaluating customer learning, you don’t know why a student cannot, or is not, applying what was learned. Without this information, you do not know whether existing content needs to be improved or net-new content needs to be developed to fill gaps in understanding. If a student cannot apply newly learned skills on the job, product adoption can suffer. While you might find that some barriers to skill application are outside of your control, some will not be. As I mentioned in our State of Education report, gathering data on learning is crucial to being able to articulate your customers’ outcomes. Put simply, you can’t fix something if you don’t know it needs to be fixed.

A common refrain TSIA hears for why evaluation stops at Level 1 is, “I can’t get the data.” So, what can your education organization do to overcome this obstacle and build both its customer training and, more importantly, its customer learning evaluation capabilities? The Kirkpatrick Model provides a logical progression for customer training organizations to build their evaluation capabilities, so let’s look at how to best use the subsequent levels to improve the training evaluation process.

Level 2: Learning

Why You Need Pre- and Post-Training Assessments

Level 2 assesses whether students have learned something and is best done by conducting pre- and post-training assessments. The purpose of a pre-training assessment is to establish what a student knows and doesn’t know, which helps inform content design and delivery. If a student already knows certain things, less time can be spent on that content and more time spent on topics the student knows less well. A post-training assessment asks questions similar to those in the pre-training survey to ascertain whether knowledge that was lacking prior to training is evident afterward.

As an example: prior to a course, a student is asked, “What’s the formula for relativity?” The student either doesn’t answer the question or answers it incorrectly before training, but after training the student answers E=mc². The assumption is that the student has acquired some level of knowledge and understanding that was not present prior to the training.

In conversations with TSIA Education Services members, many equate “assessment” with “testing” and shy away from this approach unless it is for certification purposes. If this is the case for your organization, you need to overcome this misperception and get comfortable with applying a pre- and post-training assessment or diagnostic of some sort. Knowing whether a student learned something is an indicator of skill acquisition.

Another observation gleaned from Education Services member conversations is that the use of assessment is more common in online learning. The assessment process, however, should be mainstream for all training, whether delivered online or via an instructor-led event.
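To make the pre/post comparison concrete, here is a minimal sketch in Python of how per-student scores might be compared. The student data, the 100-point scale, and the choice of Hake’s normalized gain as the comparison metric are illustrative assumptions, not something prescribed here:

```python
# Minimal sketch: comparing pre- and post-training assessment scores.
# The scores and the normalized-gain formula choice are illustrative
# assumptions, not a prescribed method.

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: the share of possible improvement achieved."""
    if pre >= max_score:  # student already at ceiling; no room to improve
        return 0.0
    return (post - pre) / (max_score - pre)

# Hypothetical per-student scores from the same question set, given
# before training and immediately after completion (Level 2).
scores = {
    "student_a": (40, 85),
    "student_b": (70, 75),
    "student_c": (55, 90),
}

for student, (pre, post) in scores.items():
    gain = normalized_gain(pre, post)
    print(f"{student}: pre={pre}, post={post}, normalized gain={gain:.2f}")
```

A gain near 1.0 means the student closed most of the knowledge gap the pre-assessment revealed; a gain near 0 suggests the content or delivery for that topic may need rework.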

Level 3: Behavior

Application of Learning

The next important step is determining whether the acquired skills are applied on the job. As shown in the previous graph, only 11% of benchmark respondents evaluate whether a learner can apply newly acquired skills and knowledge on the job. What good does training do if the user cannot apply what was learned? A straightforward way to assess whether learners apply the skills they learned during training is to ask them. While this may seem like a simple solution, implemented properly it can provide your organization with invaluable data regarding the effectiveness of training. Several years ago, TSIA conducted a survey to ascertain product usage post-training. While the survey did not ask specifically whether the student had applied what was learned on the job, it did ask the following three questions:

  1. Since taking training, are you using the product more often than prior to training?
  2. Since taking training, are you able to use more features and functions than prior to training? If yes, please specify the additional features/functions you use.
  3. Since taking training, are you able to work more independently and rely less on co-workers and/or customer support?

Survey answer options provided a range from “no increase in product usage at all” to “using the product much more,” as shown in the example below. Presumably, if students indicate that they are using a product more, then they are applying some or all of the skills learned during training on the job.

[Figure: Usage Survey Example]

Any survey respondent who selected “No, I do not use the product more” or “I use the product slightly more” was asked to provide a reason why product usage did not increase. The comments often revealed the kinds of obstacles that impeded a student’s ability to apply what was learned. These three simple questions serve as a stepping stone for obtaining data relevant to the application of skills and knowledge, and any customer training organization can easily implement them. It’s as simple as creating a questionnaire in a tool like SurveyGizmo and sending it to students 90 to 120 days post-training. You’ll be amazed by the feedback you get from the comments. In the case of the TSIA survey, the comments provided just as much insight as, if not more than, the answers to the questions themselves. If you’re a TSIA member, you can refer to Gauging Value: Answering the Training/Adoption Question for detailed survey findings.
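As a rough illustration, here is a minimal Python sketch of how responses to the first usage question might be tallied, with low-usage respondents flagged for the open-ended follow-up. The response data is hypothetical; only the answer-option range and the follow-up rule come from the survey described above:

```python
# Minimal sketch: tallying Level 3 usage-survey responses and flagging
# respondents whose answers warrant the open-ended "why" follow-up.
# The responses below are hypothetical examples.

from collections import Counter

# Hypothetical answers to "Since taking training, are you using the
# product more often than prior to training?"
responses = [
    "I use the product much more",
    "I use the product slightly more",
    "No, I do not use the product more",
    "I use the product much more",
    "I use the product slightly more",
]

# Bottom two categories trigger the follow-up question, since those
# comments often reveal obstacles to applying what was learned.
FOLLOW_UP = {"No, I do not use the product more", "I use the product slightly more"}

tally = Counter(responses)
for answer, count in tally.most_common():
    print(f"{count}x {answer}")

needs_follow_up = sum(1 for r in responses if r in FOLLOW_UP)
print(f"{needs_follow_up} of {len(responses)} respondents should be asked "
      "why product usage did not increase.")
```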

Level 4: Results

Know Your Impact

Level 4 evaluation is the most elusive of the four levels, as demonstrated by the fact that just 4% of benchmark survey participants conduct this level of evaluation. What makes Level 4 evaluation so difficult? Getting the data, because the data resides with the customer. For example, to know whether training impacted productivity, a customer would have to know how long it took the student to complete a given task prior to training and then measure the time to complete the same task post-training. This “before and after” data establishes whether the task is now completed more quickly and thus whether training positively impacted productivity.

While this type of quantitative data would be best, it doesn’t mean qualitative data can’t be used. You can simply ask students whether they think they are more productive now compared to before training. A common misperception is that self-reported data is biased and lacks value. A study conducted several years ago by a company called Metrics that Matter found that this is not true; in fact, qualitative, self-reported data can provide very useful insights to training organizations.

Similar to the approach suggested for Level 3, a brief questionnaire can be constructed in a tool like SurveyGizmo and sent to students 90 to 120 days post-training. Questions will differ depending on the business outcomes you are trying to measure, but here are two examples to get you started:

[Figure: Productivity Survey Example]
[Figure: Customer Satisfaction Survey Example]

Note: You may want to provide definitions for words like “slightly,” “moderately,” and “significantly” so that the selection is not totally subjective. For example, a CSAT score improvement of 1-2 points might be defined as slight, a gain of 3-5 points as moderate, and any improvement of 6 or more points as significant. Think about the types of business metrics that your training might impact and ask students whether the training influenced those metrics. You know your customers best, and you know which critical business outcomes your training should impact. Identify a few and start evaluating.
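Those example thresholds translate directly into a simple lookup. Here is a minimal sketch (the function name and use of Python are my assumptions; the point ranges come from the example definitions above) that maps a CSAT point gain to a survey answer category:

```python
# Minimal sketch: anchoring the "slightly / moderately / significantly"
# answer options to concrete CSAT point gains, using the example
# thresholds above (1-2 slight, 3-5 moderate, 6+ significant).

def csat_improvement_label(point_gain: float) -> str:
    """Map a CSAT score gain (in points) to a survey answer category."""
    if point_gain <= 0:
        return "no improvement"
    if point_gain <= 2:
        return "slight improvement"
    if point_gain <= 5:
        return "moderate improvement"
    return "significant improvement"

for gain in (0, 1.5, 4, 8):
    print(f"{gain:+} points -> {csat_improvement_label(gain)}")
```

Publishing definitions like these alongside the survey keeps self-reported answers comparable across respondents.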

Evaluation and Product Adoption

The intent of implementing the Kirkpatrick Evaluation model goes well beyond determining a student’s reaction to the training taken. Its true value lies in ascertaining the following:

  • Has skill acquisition occurred?
  • Are acquired skills being applied on the job?
  • How are these newly acquired skills impacting the customer’s business results?

Answers to these questions are a must for customer training organizations in their quest to drive product adoption. Simply knowing whether someone “liked” the training is not enough; a deeper assessment of learning is needed. If training is taken but a student does not learn critical skills, or cannot apply them, product usage can suffer. The data acquired from each level of evaluation enables education organizations to determine what they need to begin doing, what they need to stop doing, and what they need to do differently in order to facilitate an optimal learning experience, one that ultimately results in increased product usage and adoption. Evaluating your training programs is the best way to ensure you are meeting your learning objectives and that learners are walking away with the intended knowledge and skills. So, if you are not currently conducting Level 2, Level 3, and Level 4 evaluation, the time to start is now.

Smart Tip: Embrace Data-Driven Decision Making

Making smart, informed decisions is more crucial than ever. Leveraging TSIA’s in-depth insights and data-driven frameworks can help you navigate industry shifts confidently. In a world driven by artificial intelligence and digital transformation, the key to sustained success lies in making strategic decisions informed by reliable data.


Looking for More on Learning Evaluations?

For more insights on the evaluation process and how you can apply the Kirkpatrick Evaluation model, our Education Services team is here to help. Not a member? Reach out today and find out how TSIA can help you with customer training evaluations, learning evaluations, and beyond.
