When you’re implementing a new training initiative, one of the first questions your sponsor will ask is, "How will you measure the success of your program?"
For many, this question may induce a racing heartbeat, flushed skin, and sweaty palms. After all, measuring any training is difficult, and figuring out the appropriate measurement for a new program is complex.
The good news is that digital content makes it easy for you to measure several of the common key performance indicators of training effectiveness.

Measuring Training is Complicated
Let’s take a step back for a moment. The learning industry is notoriously bad at measurement. A 2016 research study conducted by Vantage Point and the Sales Management Association found that 49% of companies don’t measure the effectiveness of their sales training. Considering how much money and manpower go into training, that’s astounding.
The truth is that training is a complicated beast to measure. If you host an in-person class, you need to evaluate everything from the facilities to the participation level of your attendees. Then you need to consider the longer-term effects of the training: was there learning transfer, and did the training impact performance and business results? Add blended learning to the mix, and you have even more moving parts to track.
As Mimeo found in our recent Spotlight Report on Training Measurement, training teams must focus on multiple aspects of training to successfully evaluate their programs. The Siemens light rail team tracks assessment scores and feedback from attendees; an Applebee’s franchisee correlates store performance with learner grades; and Social Solutions has a new metric designed to measure how confident their customers are upon course completion.
While every training measurement strategy will look different, the common factor is that each team identifies which aspects of training matter to them and which key performance indicators (KPIs) they are going to track. For example, to measure utilization, a team might track how many learners sign up for each course.
This is where digital content comes in. By leveraging the reports built into your learning technology, you can gain instant insight into specific aspects of your training.
Participation

The first thing you can measure with digital content is participation. This is especially impactful when your learners are remote or self-paced. When you’re in front of a classroom, you get a good sense of who is paying attention, following along in the book, raising their hand, and engaging in activities. When your learners are in front of a screen elsewhere, or accessing your content at their own pace, it’s harder to tell.
That’s where your digital document analytics come in. You can create reports on who opened your content, how long they stayed on a piece of content, and even whether they took notes, highlighted, or responded to a colleague’s comment.
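As a rough sketch of what such a report looks like under the hood, suppose your learning platform can export raw analytics events. The event schema, field names, and sample data below are assumptions for illustration, not the API of any real learning platform:

```python
from collections import defaultdict

# Hypothetical analytics events exported from a learning platform.
# Real platforms use their own export formats; this schema is assumed.
events = [
    {"learner": "ana", "content": "module-1", "action": "open", "seconds": 540},
    {"learner": "ana", "content": "module-1", "action": "highlight", "seconds": 0},
    {"learner": "ben", "content": "module-1", "action": "open", "seconds": 45},
    {"learner": "ben", "content": "module-2", "action": "open", "seconds": 300},
]

def participation_report(events):
    """Summarize who opened each piece of content, total time spent,
    and engagement actions (notes, highlights, comment replies)."""
    totals = defaultdict(lambda: {"openers": set(), "seconds": 0, "engagements": 0})
    for e in events:
        row = totals[e["content"]]
        if e["action"] == "open":
            row["openers"].add(e["learner"])
            row["seconds"] += e["seconds"]
        else:
            row["engagements"] += 1
    return {
        content: {
            "opened_by": sorted(row["openers"]),
            "total_seconds": row["seconds"],
            "engagements": row["engagements"],
        }
        for content, row in totals.items()
    }

print(participation_report(events))
```

The point is not the code itself but the shape of the report: per piece of content, who showed up, how long they stayed, and whether they actively engaged. Most learning platforms produce something equivalent out of the box.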
Kirkpatrick Level 1
One of the most common measurement methods in the industry (used by 21% of our survey respondents for the Spotlight Report) is the Kirkpatrick Four-Level Training Evaluation Model. It identifies four levels of training that should be measured distinctly:

1. Reaction
2. Learning
3. Behavior
4. Results
Digital content gives you great insight into the learners’ reactions to your course. From explicit indicators like comments they make, you can see what they like or dislike about the material. You can also leverage implicit indicators such as average time spent on a specific piece of content to infer how they react to that content.
Kirkpatrick Level 2
Even more impactful: with digital content, you can create reports to measure the second Kirkpatrick level, actual learning. For this level, the questions you ask are: How well did learners understand the material? Did they achieve the learning objectives?
From digital content, you can build a clear picture of what learners do and don’t understand. Track which pieces of content draw the most questions, and read through those questions to see what is tripping up your learners. Likewise, look at which pieces of content get the most views versus the least, or how many times someone returns to a piece of content, to understand whether the content works for your learners.
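Those two signals, question volume and repeat visits, can be combined into a simple "confusion" report. As a minimal sketch (the logs, thresholds, and names here are assumptions for illustration, not a real LMS export):

```python
from collections import Counter

# Assumed logs: which learner viewed which content, and which content
# each question was asked about. Real LMS exports will differ.
views = [
    ("ana", "module-1"), ("ana", "module-1"), ("ana", "module-1"),
    ("ben", "module-1"), ("ben", "module-2"),
]
questions = ["module-1", "module-1", "module-2"]

def confusion_signals(views, questions, repeat_threshold=3):
    """For each piece of content, count questions asked about it and
    how many learners returned to it repeat_threshold or more times."""
    question_counts = Counter(questions)
    visit_counts = Counter(views)  # (learner, content) -> number of views
    contents = set(question_counts) | {c for (_, c) in visit_counts}
    return {
        content: {
            "questions": question_counts.get(content, 0),
            "repeat_learners": sum(
                1 for (_, c), n in visit_counts.items()
                if c == content and n >= repeat_threshold
            ),
        }
        for content in sorted(contents)
    }

print(confusion_signals(views, questions))
```

A piece of content with both a high question count and several repeat visitors is a good candidate for revision; either signal alone is weaker evidence.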
The Success of Your Content
Ultimately, whether you deliver your training through a facilitator or through handout materials, your product is content. With digital content, you have more visibility into the success of your content creators’ efforts. By keeping track of your learners’ reactions and level of understanding, you can see whether your content is achieving its objectives.
This is especially true for new programs. For example, if you are rolling out a microlearning course, you’ll no longer have access to facilitator insight. That’s where digital content measurement comes in: with just a few reports, you can measure participation, reaction, and understanding.
At the end of the day, measuring your training boils down to two objectives: to find out how well your attendees learned and to find out what parts of your product need improvement. By leveraging built-in reporting from your digital content, you will have on-demand insight into the effectiveness of your program. Say goodbye to heartburn and hello to a training measurement strategy.
Blog Post: Exploring Assessment and Evaluation EdTech Tools and Apps
Measuring and evaluating training programs and their associated content doesn't have to be a process that's completely removed from your design and delivery. Using the correct EdTech can help support a seamless approach. This blog post shares the features you should look for when using technology to evaluate learning.
Report: Spotlight Report: Training Measurement
Mimeo's spotlight report includes information from current learning practitioners about how they successfully measure their training programs. First-hand examples from Siemens and Apple Gold Group supplement the report's helpful review of common training measurement methods.
BozarthZone! Recording: Evaluating E-Learning
E-Learning is notoriously hard to measure. Check out Jane Bozarth's learning event recording to learn the importance of linking objectives and outcomes, recommendations for building evaluation into your overall program design, and how to design better assessments.