Measuring the Impact of Customer Training with Dynamic Metrics

Adam Avramescu
June 4, 2020

As companies look for better ways to understand, justify, and leverage customer training, many struggle to find the information they need. They usually measure success with the same default metrics that have been used for decades, and those metrics don't add much insight.

In a recent interview with me on CELab: The Customer Education Lab podcast, Barry Kelly, CEO of Thought Industries, describes the approach Thought Industries takes to measuring the impact of customer training with more dynamic metrics.

Default Metrics: How Training Programs Have Measured Success

How have training programs typically measured success? This default data has been around forever in learning management systems. It centers around basic training activity:

  • how many people enrolled in a particular course of training;  
  • how many people attended the course;
  • how many people selected a piece of content;
  • how many people completed the course.

These metrics are binary, yes/no indicators of activity. Even though they form a funnel and are relatively easy to measure, they don't paint a strong picture of the impact of customer education on the business.
This default information is standard and probably lives in every learning management system (LMS) on the planet. But there's more to measure. To extend the funnel analogy, default metrics are like using a web analytics platform only to count visitors: absolutely necessary, but you know the number of visitors without knowing whether they're qualified.
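The default activity funnel can be expressed as a few counts and the conversion rate between adjacent steps. A minimal sketch, using hypothetical numbers (no LMS exposes exactly this shape; the step names and figures are illustrative):

```python
# Default training activity funnel (hypothetical counts).
# The only derived metric these default data points give you is the
# step-to-step conversion rate.

funnel = {
    "enrolled": 500,
    "attended": 350,
    "selected_content": 300,
    "completed": 210,
}

def conversion_rates(funnel):
    """Rate of each funnel step relative to the previous one."""
    steps = list(funnel.items())
    rates = {}
    for (prev_name, prev_n), (name, n) in zip(steps, steps[1:]):
        rates[f"{prev_name} -> {name}"] = n / prev_n
    return rates

print(conversion_rates(funnel))
# e.g. 70% of enrollees attended, 70% of content viewers completed
```

This is exactly the point the article makes: the numbers are easy to produce, but nothing in them says whether the people moving down the funnel are the right people.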
Beyond simple activity data, the default data for training programs also include assessments:

  • Did your learners complete an assignment or an assessment?
  • How many attempts did it take?
  • What’s the score? 

Barry would also include course-based NPS (net promoter score) and CSAT (customer satisfaction) surveys. These are the ways in which, traditionally, we've discovered whether customers are satisfied, whether learners are engaged, and hopefully whether they learned something.
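Course-based NPS follows the standard formula: the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6), on a -100 to 100 scale. A quick sketch with hypothetical survey responses:

```python
# Course-level NPS from 0-10 survey responses (hypothetical scores).
# NPS = % promoters (9-10) minus % detractors (0-6).

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(round(nps(responses), 1))  # 5 promoters, 2 detractors -> 30.0
```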
These metrics aren’t useless; they exist for a reason. For years, the learning industry has relied on default measurement, and it will for a long time to come. After all, in terms of Kirkpatrick’s Four Levels of Evaluation, these metrics give you good information on levels one and two: Reaction and Learning. They are data points you can measure immediately, and they will help you form strategy and process.
But if you only measure these things, you won’t have a strong argument for the impact of customer education at your business.
Thought Industries urges customer education leaders to consider dynamic measurement, or dynamic data. It covers three areas: proximity, context, and proficiency.

Measuring Proximity in Customer Learning

Proximity is how soon a person engages with the product after they engage with learning. This metric helps you show the linkage between training activity and product outcomes. In a way, it’s a window into Kirkpatrick’s Level Three, Behavior: did the person actually apply what they were trained to do?
You measure the time – the latency – between the training and the action that you would expect training to drive. You’re looking to see how close that signal is. Maybe the customer finished the training and went right to work and applied the new skills. Maybe there’s a discontinuity – the customer spent time doing other things and got to that application much later.
The less elapsed time, the greater the skill or information retention.
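The proximity measurement described above boils down to finding the first product action that follows a training completion and taking the difference in timestamps. A minimal sketch, with hypothetical event times (real implementations would pull these from LMS and product analytics event streams):

```python
# Sketch: proximity as the latency between a training completion and the
# first matching product action. Timestamps are hypothetical examples.
from datetime import datetime

def proximity(completed_at, product_events):
    """Time from training completion to the first product action after it.

    Returns None if the trained skill was never applied -- itself a signal.
    """
    later = [t for t in product_events if t >= completed_at]
    return min(later) - completed_at if later else None

completed = datetime(2020, 6, 1, 10, 0)
actions = [
    datetime(2020, 5, 30, 9, 0),   # before training: not counted
    datetime(2020, 6, 1, 14, 30),  # first action after training
    datetime(2020, 6, 3, 8, 0),
]
print(proximity(completed, actions))  # 4:30:00 -- a tight signal
```

A short latency suggests the customer went straight from training to application; a long one suggests the discontinuity the article describes, where retention is likely weaker.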
As you look to leverage proximity, first determine where learners access the content. Perhaps the customer is an analyst looking to build a report in Salesforce. They hit a wall. Where do they go to access the learning content?
Obviously, it’s best if you have in-product training (or at least, a cue to access deeper training) instead of the customer having to Google an answer. The latency here is how quickly you can solve someone’s problem. Thought Industries offers clients an in-product widget to bring the right learning content to the right person at the right time.
An in-app widget for customer training shortens that latency. As the learning organization matures, you can delve into questions such as how to create the right proximity or the right presentation based on a learner’s profile, their role, and so on.

Measuring Context

In Barry’s interview with me on CELab, he describes how Thought Industries approaches context learning and application learning. Context learning is functional and tactical. 
For example, if you need to build a landing page, context learning describes how to add your title, add your body, add your button, save, review. It typically describes a process and is highly feature focused.
On the other hand, application learning is more holistic and often encompasses skills and knowledge outside of the features. 
For example, instead of steps to build a landing page, application learning may approach the same task as, “I’m going to show you how to build a compelling landing page.” Application learning can help with time-to-value for the customer: How do we get our customers successful with our technology as quickly as possible? Through application learning, you teach customers to achieve outcomes by modeling success for them and providing scenario-based learning.
In our example, success means we not only want them to build a landing page; we want them to build the best landing page possible so that they get the best conversion rates and the best click-through rates. Here, the learning goes beyond just feature functionality to broader best practices.
This is where proximity becomes a critical metric, because it’s the training outcome that drives product renewal.

Measuring Proficiency in Customer Training

The holy grail for many customer educators is proficiency: in other words, how capably are customers actually doing what we trained them to, over time?
Proficiency is arguably the strongest lever for customer success. When customers are proficient at their jobs in our products, we can reason, they will be more likely to adopt, mature, renew, and expand over time.
First, we need to understand, at a user-role level, whether we are influencing this individual’s knowledge and skills, and their time-to-value.
Those success metrics can differ widely by role: in a given piece of technology, the time-to-value for a business owner can be vastly different from that of a content creator or designer.
What we’re really after is helping that individual understand the technology and deploy it to its best possible value. The first level of proficiency we look at is making sure the task was completed. What we’re ultimately trying to do is connect the training and the learning object to the outcome.
We know that someone consumed this content. But the big question is whether they performed the corresponding action. Did they build a landing page: yes or no?
Then, did they do it well? We have to consider how our customers are using our products.
The other part is the frequency of the task. Have they executed it more than once? Have they built more than one landing page? 
Some things they’re going to do only once; that’s a static signal. During implementation, for example, a setting is turned off and they turn it on. You can measure that easily.
Proficiency, by contrast, is built through repetition: the number of times the customer performs the task. That’s so important when we think about the measurement of learning, because the bottom-line question is: what has your customer education done to drive proficiency and time-to-value?
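The proficiency logic above (task completed at all, then repeated over time) can be sketched as a simple per-learner classification. The category names and the one-repetition threshold are hypothetical; a real program would calibrate thresholds per role and per task:

```python
# Sketch: a first-pass proficiency signal per learner. Did they perform
# the trained task at all after training, and how often? The labels and
# thresholds here are illustrative assumptions, not a standard.
from datetime import datetime

def proficiency_signal(trained_at, task_events):
    """Classify a learner as 'none', 'applied' (once), or 'proficient' (repeated)."""
    after = [t for t in task_events if t >= trained_at]
    if not after:
        return "none"        # trained but never performed the task
    if len(after) == 1:
        return "applied"     # a one-time, static signal
    return "proficient"      # repetition is what builds proficiency

trained = datetime(2020, 6, 1)
events = [datetime(2020, 6, 2), datetime(2020, 6, 9), datetime(2020, 6, 20)]
print(proficiency_signal(trained, events))  # proficient
```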

Tying the Data Together

As you move from default to dynamic metrics, you’ll often need to tie several systems of record together. Your LMS alone won’t capture every signal that you need, although modern LMS platforms are moving increasingly in this direction. The goal for many programs is to combine data from multiple systems in a centralized data warehouse. That requires pulling all the key information from the learning management platform, Salesforce, customer success platform, etc.
After you have access to that data, you can create a centralized repository, layer your business intelligence tool on top of it, and then you’ll have the dynamic metrics that show the real impact of your customer education programs.
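Mechanically, tying systems of record together is a join on a shared key such as the learner’s email or account ID. A minimal sketch with hypothetical records standing in for LMS and Salesforce exports (field names are illustrative, not any vendor’s schema):

```python
# Sketch: joining training records with CRM account data so dynamic
# metrics can be reported per account. Records and field names are
# hypothetical stand-ins for LMS and Salesforce exports in a warehouse.

lms_records = [
    {"user": "ana@example.com", "course": "Landing Pages 101", "completed": True},
    {"user": "ben@example.com", "course": "Landing Pages 101", "completed": False},
]
crm_accounts = {
    "ana@example.com": {"account": "Acme Co", "renewal": "2021-01-15"},
    "ben@example.com": {"account": "Globex", "renewal": "2020-11-01"},
}

def join_training_to_accounts(lms, crm):
    """Attach account context to each training record (inner join on user)."""
    return [{**rec, **crm[rec["user"]]} for rec in lms if rec["user"] in crm]

for row in join_training_to_accounts(lms_records, crm_accounts):
    print(row["account"], row["course"],
          "completed" if row["completed"] else "incomplete")
```

In practice the warehouse and BI layer do this join at scale, but the shape of the question is the same: which accounts trained, on what, and what happened next.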
Learn more by listening to the CELab podcast #33, a discussion with Barry Kelly of Thought Industries.
