Without data, you’re just another person with an opinion: measuring training impact for better results

“Can I ask you a question?”

“Of course!”

“How do I know if the training I create is any good?”

“Oh.”

I wasn’t expecting that question, but I should have done. We’d just deployed a major course to a key client and they were delighted with it. What’s more, initial feedback showed the learners were delighted too. But then came the post-project review.

The client was full of praise, doting even. The lessons learnt were minor and achievable. Afterwards we held an internal session, which is where a consultant piped up. “It wasn’t good enough, you know? It was the client’s first digital training and they were too easily led. They didn’t know what good looked like and that made you lazy.” It hadn’t, but hearing it hurt.

“It’s too hard to please everyone,” I reassured her. But really, I was missing the point. Our project goal had been clearly defined: we needed to increase online sales by 5% over the next year.

Just as it didn’t matter what this one consultant thought, it was also wrong to consider the training successful because the client or even the learners liked it. We needed to wait and test whether it’d had the desired impact.

Shifting the emphasis away from opinion and towards something more empirical has been the topic of a number of industry articles and talks this year. It’s a particularly big topic in immersive and emerging technologies, where budget holders and stakeholders sometimes need extra reassurance that their leap into the unknown will be worthwhile. But it reaches beyond that.

Success criteria should be at the centre of any training we create. In the words of the statistician William Edwards Deming, “Without data, you’re just another person with an opinion”. It’s time to get measuring.

 

Measuring starts before you deploy your course; it starts before you design it

First up, gather as much end-user research as you can. Hold focus groups and interviews, and distribute questionnaires to a representative sample of the learning population. This will give you insights not only into the learning need, but also into what sort of intervention would appeal to them most. This kind of data is invaluable for designing a solution that addresses the right problem.

 

Define your KPIs

Next, ask yourself: ‘What does success look like to me and the business?’ This boils down to identifying what you’d like to see change as a result of the training. Once you’re clear on the business problem this training should address, you can start defining what you need to measure to determine its success.

Make a list of your Key Performance Indicators (KPIs) and how you’re going to measure them. They can be things like:

  • Business performance: uplift in sales, reduction in accidents or days taken off sick.
  • Compliance: reduction in the number of reported policy breaches.
  • Confidence level: how able the learner now feels to perform a certain task or action.
  • Reach: evidence of the number of learners who have completed the training, in relation to the total possible audience size.
  • Time or cost saving: evidence of a reduction in the number of business hours spent training on this topic, or a reduction in the spend per head.

Try to make use of any metrics that are already collected by the business. This makes it easier to evidence the impact of the training. More importantly, if these metrics are already being collected, they’re likely to be critical to the business’s overall performance.
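
If it helps to keep these definitions in one place, here’s a minimal sketch (in Python, though a spreadsheet does the same job) of how a KPI list like the one above might be recorded alongside where each figure comes from. The KPI names, data sources and targets are purely hypothetical, for illustration only.

    # Hypothetical KPI register: names, sources and targets are illustrative only.
    kpis = [
        {"name": "Online sales uplift", "source": "e-commerce monthly report", "target_change_pct": 5.0},
        {"name": "Reported policy breaches", "source": "compliance log", "target_change_pct": -10.0},
        {"name": "Training reach", "source": "LMS completion data", "target_change_pct": None},  # tracked, no % target
    ]

    for kpi in kpis:
        target = kpi["target_change_pct"]
        goal = f"{target:+.1f}% change" if target is not None else "track and report"
        print(f"{kpi['name']}: measured via {kpi['source']}; goal: {goal}")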

 

Think qualitative and quantitative

Stats and data only tell half the story. Quantitative data (e.g. a reduction in working days lost, or the number of incidents reported) needs to be augmented with learner stories, testimonies and satisfaction surveys to evidence that it was this piece of training that caused the resulting change. Make sure the data you’re collecting creates the full picture of what, if anything, has changed and why.

 

Take a baseline

To be able to demonstrate a change in behaviour, you need to know what’s happening now. Before deploying a new training solution, take baseline measurements relating to your KPIs. Try to take this baseline as close to the training deployment as possible to help remove the influence of other things happening in the business that could also affect learners’ behaviour.

 

Space your measurements

Once a good number of learners (ideally more than 100) have completed the training, it’s time to measure again. Make sure you gather exactly the same type of data you collected for your baseline in order to show the training’s impact. To show the long-term effects, these measurements should be taken again at spaced intervals. For example, one month later and then again six months later.
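
As a rough illustration of the arithmetic, here’s a small sketch comparing spaced follow-up measurements with the baseline. The figures are hypothetical and simply stand in for whatever your own KPI reports would show.

    # Hypothetical figures for one KPI (monthly online sales), illustrative only.
    baseline = 120_000          # measured just before the training went live
    follow_ups = {
        "1 month after": 124_800,
        "6 months after": 127_200,
    }

    for label, sales in follow_ups.items():
        change_pct = (sales - baseline) / baseline * 100
        print(f"{label}: {sales} ({change_pct:+.1f}% vs baseline)")

With these made-up numbers the six-month reading shows a 6% uplift, which would clear the 5% goal from the earlier example; the point is simply that each follow-up is always compared back to the same baseline.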

 

Lastly, don’t panic

There can be nervousness about measuring in case it shows the training didn’t work. If you’ve been measuring and consulting as you go, this is unlikely to be the case. But even if you don’t see the change, or the size of change, you’d like, being in the know will put you in a really good position to develop something even more successful next time.

After all, if we don’t measure, how do we know if the training we create is any good?

Get in touch

We’re always happy to talk to you about how immersive technologies can engage your employees and customers. If you have a learning objective in mind, or simply want to know more about emerging technologies like VR, AR, or AI, send us a message and we’ll get back to you as soon as we can.