
Use Online Training Metrics to Improve Your Learning Programs

Daniel Quick
September 29, 2020

If you’re more accustomed to collecting learning data than doing anything with it, try this approach for using the information to improve your learning and customer education programs.

As the saying goes, you can’t improve what you don’t measure. But sometimes it seems we in the education business get distracted by the second part — the measuring — and forget about the first part — the improving. We can be dazzled, and maybe a little overwhelmed, by the continuous stream of learner data generated by our end users. In this article, I’m going to share some techniques for converting your metrics into action: taking the data you glean from your learning platform and using it to improve your education programs. It’s not the numbers themselves that will help you, but how you use the numbers.

Four Learning Metrics in Customer Education

Let’s start with a quick review of different learning metrics. I’ve identified four clusters that are especially relevant for customer education professionals:

  • Velocity includes metrics that measure your team’s output, such as how many training videos it plans to produce during the quarter or how quickly it can work through a backlog of updates and revisions. These metrics will likely be your primary focus when you’re just getting started and don’t yet have learner data to measure.
  • Reach encompasses learner engagement and consumption metrics, such as how many people are actively using your learning content or enrolling in courses, how many page views you’re getting for knowledge base articles, what percentage of accounts are engaging with the educational materials (breadth), or the percentage of qualified users in each account who are engaging with content (depth). A simple breadth-and-depth calculation is sketched after this list.
  • Quality examines whether the learning was effective and engaging. These metrics help you understand whether customers are satisfied with your educational content and interested in coming back for more. Moreover, they provide insight into how your content ties to learning objectives. Quality metrics include CSAT survey scores, assessment scores, completion rates, and average number of courses completed per user.
  • Impact examines how effective your learning content is in achieving the outcome you want. These metrics require you to define what success looks like and what you want your content to do for your customers. For a software-as-a-service offering, for instance, one goal might be reducing time-to-first-value, helping your customers to get value out of the SaaS product as quickly as possible. Follow-on objectives might focus on deflecting support tickets, fostering adoption of certain product features, earning loyalty and advocacy from customers, and ultimately making sure they renew.
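
As a concrete illustration of the reach cluster, here is a minimal sketch of how breadth and depth might be computed from a simple export of engagement records. The column names (account_id, user_id, engaged) are assumptions for the example, not fields from any particular platform.

```python
import pandas as pd

# Hypothetical export: one row per qualified user per account, with a flag
# for whether that user engaged with any learning content this quarter.
records = pd.DataFrame({
    "account_id": ["acme", "acme", "acme", "globex", "globex", "initech"],
    "user_id":    ["u1", "u2", "u3", "u4", "u5", "u6"],
    "engaged":    [True, False, True, False, False, True],
})

per_account = records.groupby("account_id")["engaged"]

# Breadth: share of accounts with at least one engaged learner.
breadth = per_account.any().mean()

# Depth: average share of qualified users engaging within each account.
depth = per_account.mean().mean()

print(f"Breadth: {breadth:.0%} of accounts have at least one active learner")
print(f"Depth: {depth:.0%} of qualified users per account are engaging, on average")
```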

Two Mistakes We Make with Learning Metrics

  1. While these metrics may sound straightforward, they’re not. Impact metrics, in particular, are difficult to draw conclusions from, because what looks like causation may turn out to be mere correlation. Consider the example of people who have taken your onboarding course (let’s call them the “trained group”) versus those who haven’t (the “untrained group”). You may discover that the trained group adopts your product at a noticeably higher rate. The problem is assuming that the onboarding course caused that adoption. It could simply mean that the people who were adopting your product anyway were the ones interested in your onboarding course. The metric is interesting, but it doesn’t establish a causal relationship; a minimal example of this comparison is sketched after this list. That’s mistake number one we make with learning metrics.
  2. Another mistake is over-relying on one metric from the “quality” cluster: completion rates. They may not be as helpful as you’d think. A lot of people in the learning field care deeply about this metric; they’ll base their core learning outcomes on it and do all they can to drive it up. Yes, it has value, but it’s only one indication of quality, and not a great one, because it can be so easily manipulated.
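
To see how easy it is to produce the comparison behind mistake number one, here is a minimal sketch with made-up data. The trained and active_last_30d fields are hypothetical, and the gap the script reports is a correlation only; it says nothing about whether the onboarding course caused the adoption.

```python
# Hypothetical user records: whether each user took the onboarding course
# and whether they were active in the product over the last 30 days.
users = [
    {"trained": True,  "active_last_30d": True},
    {"trained": True,  "active_last_30d": True},
    {"trained": True,  "active_last_30d": False},
    {"trained": False, "active_last_30d": True},
    {"trained": False, "active_last_30d": False},
    {"trained": False, "active_last_30d": False},
]

def adoption_rate(group):
    """Share of users in the group who are still active."""
    return sum(u["active_last_30d"] for u in group) / len(group)

trained = [u for u in users if u["trained"]]
untrained = [u for u in users if not u["trained"]]

# The trained group looks "better," but self-selection could explain the
# entire gap: motivated users may both take the course and adopt the product.
print(f"Trained adoption:   {adoption_rate(trained):.0%}")
print(f"Untrained adoption: {adoption_rate(untrained):.0%}")
```

Before treating a gap like that as an effect of the course, you would want something closer to a controlled comparison, such as cohorts matched on prior product engagement.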

As for the second mistake: one super-easy way to increase your completion rate is to reduce the amount of content in your course, and voila, the number goes up. Doing so doesn’t improve your learning outcomes, though; arguably, it could reduce them. Likewise, learners might start the course but not finish it because they got what they needed from the first lesson. In that scenario, the learning outcome would be high but the completion rate low.

Or people might start a course and not complete it because the first lesson mentioned something that piqued their interest and led them down a “rabbit hole.” Maybe they used a search tool and found an article in your knowledge base, or a video you’ve posted to YouTube, that taught them what they wanted to learn in the first place. That’s how adults learn: we follow different trails, shift gears and pick up what we need in the moment. As a result, the completion rate on that course may be low even though the learners have become experts because of the route they followed.
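
To put hypothetical numbers on the completion-rate trap, the sketch below compares a full course with a trimmed version of the same course. Every figure here is invented for illustration; the point is simply that the completion rate can rise while a second quality signal, such as an average assessment score, falls.

```python
# Purely hypothetical before-and-after numbers for the same course.
courses = {
    "full course (10 lessons)":   {"enrolled": 200, "completed": 80,  "avg_assessment": 0.86},
    "trimmed course (3 lessons)": {"enrolled": 200, "completed": 170, "avg_assessment": 0.71},
}

for name, stats in courses.items():
    completion_rate = stats["completed"] / stats["enrolled"]
    print(f"{name}: completion {completion_rate:.0%}, "
          f"average assessment {stats['avg_assessment']:.0%}")
```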

Make Sure Your Learning Platform Supports the Work

You’ll need to assess whether your learning platform can actually help you convert training metrics into better learning outcomes. I’d suggest focusing on a few functional areas:

User experience: Your software needs to care as much about the end-user experience as it does about distributing content. You don’t just want a platform that can play a video and track engagement with that video. It’s just as important that it does so in a way that is delightful to the end user, not one that makes them feel trapped in 1999 with an interface that’s unpleasant to look at or difficult to use, because that friction disrupts the learning process. User experience plays a much larger role in online learning than people give it credit for.


Integrations: Take advantage of your learning platform’s integrations with the other systems in use at your company, such as Salesforce, Zendesk and Gainsight. Those systems help us understand our customers, so we can reduce the friction of using our product and increase its relevance for them.
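
As one illustration of why those integrations pay off, the sketch below joins a learning-platform export with a CRM export at the account level. The data frames and their columns (account_id, completed_onboarding, open_tickets) are stand-ins invented for the example, not the schema of any of the tools named above.

```python
import pandas as pd

# Hypothetical learning-platform export: one row per learner.
enrollments = pd.DataFrame({
    "account_id": ["acme", "acme", "globex", "globex", "initech"],
    "completed_onboarding": [1, 0, 1, 1, 0],
})

# Hypothetical CRM export: one row per account.
accounts = pd.DataFrame({
    "account_id": ["acme", "globex", "initech"],
    "open_tickets": [7, 1, 12],
})

# Roll training engagement up to the account level.
training_by_account = (
    enrollments.groupby("account_id")["completed_onboarding"]
    .mean()
    .rename("share_trained")
    .reset_index()
)

# Put education data next to the signals the CRM already tracks, so
# conversations about friction, support load and renewals can include it.
combined = accounts.merge(training_by_account, on="account_id", how="left")
print(combined.sort_values("share_trained"))
```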


Contextual learning: Make sure your learning platform enables you to expose the learning content to your learners just in time, contextually, inside your product. You don’t want them to have to leave your product to take training in a separate location and then come back and try to remember what they’ve just learned. It’s much more effective to be able to learn about the product contextually.
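
How this looks in practice depends on the platform, but the underlying idea is a mapping from locations in your product to the learning content that is relevant there. The sketch below is a generic illustration of that idea; the route names and lesson slugs are hypothetical.

```python
# Hypothetical mapping from in-product locations to relevant lessons.
CONTEXTUAL_CONTENT = {
    "/dashboard":       ["getting-started-tour"],
    "/reports/builder": ["building-your-first-report", "report-formulas-101"],
    "/admin/users":     ["managing-seats-and-roles"],
}

def lessons_for(route: str) -> list[str]:
    """Return the lesson slugs an in-product help panel should surface."""
    return CONTEXTUAL_CONTENT.get(route, [])

# Example: the help panel asks what to show on the report-builder screen.
print(lessons_for("/reports/builder"))
```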


Be Intentional and Selective in Your Online Learning Metrics

As you consider how to improve your work with learning metrics, I have two final pieces of advice:

  • First, approach your data analysis with intention. Be mindful about the metrics you’re looking to influence before you get started. Define what success looks like up front, so that along the way you’re aligning everything you build to the specific metric you’re trying to improve.
  • Second, choose just one or two metrics and put your best energy into those for the given quarter, half-year or whatever period you’re following. When the data is pouring in from myriad sources, those metrics will serve as a beacon, lighting your way through the data and letting you focus on the numbers that matter for the task at hand.

Making substantive improvements in your education programs requires you to think both broadly and narrowly. It’s important to track multiple operational learning metrics so that you can fully understand the health of your programs and identify opportunities to improve them. However, to achieve the outcomes you want, it’s equally important to focus on improving only one or two of those metrics at a time.

Daniel Quick is the Senior Director of Product Experience for Thought Industries. Previously, he led Customer Education programs at Asana, Culture Amp, and Optimizely. Contact Daniel at daniel.quick at thoughtindustries.com. 
