After you’ve done your due diligence in the Analysis Phase, plotted your targeted training approach in the Design Phase, and transformed your plan of attack into reality in the Development Phase, it’s time to Implement (the I in ADDIE) and Evaluate (the E in ADDIE).
Unfortunately, these two phases often go the same way as the last few lines of the “30 Days Hath September” poem – they are forgotten.
(I can hear you trying to recite the poem right now in your head. Go ahead, I’ll wait.)
Implementation involves coordinating and delivering the completed program. What does that entail? A list of things to consider includes:
- Figure out administrative course logistics. Where will it be held? When? How will potential participants sign up? Who will communicate with those participants before, during, and after class – and how?
- Write an enticing course description that piques participant interest in the content. Emphasize the WIIFM (what’s in it for me) of the training as well as its practicality. Training relevance matters to adult learners, and we need to make sure that we communicate it effectively upfront.
- Announce the availability of the new program. To make a BIG splash, consider having one of your organization’s head honchos put his or her weight behind your program by recommending it. This kind of support can drive anticipation and go a long way toward creating a more positive learner attitude about the training.
- Schedule and train your trainers! Consider having trainers meet with the designer to get down to the nitty-gritty of what was discovered during the Analysis Phase, as well as the organization’s primary goal in putting forth this program. (What skill or knowledge gap will it close?) Trainers need to understand the flow of the program, how the practice activities relate to the learning objectives, and what outside elements may impact the delivery (technology, wall charts, job aids, other equipment). They need time to practice and connect with the content and to come up with their own content-related stories and examples to share in class.
- Run a pilot if you have the time and budget. This will tell you whether the course is ready for a major rollout. Choose your pilot attendees carefully; they should closely align with the intended audience for the training.
Evaluation is how you will be able to tell whether your course is a roaring success or an abysmal failure…or somewhere in between. The type of evaluation you will create was decided long ago in the Analysis Phase, discussed in detail in the Design Phase, and perhaps already written up during the Development Phase. The industry standard for evaluation is Kirkpatrick’s Four Levels of Evaluation (summarized below):
| Level | Description | Details |
|-------|-------------|---------|
| 1 | How did participants feel about the training? | This type of evaluation (sometimes referred to as “smile sheets”) is inexpensive to create and administer but is quite subjective. Typically, you will ask questions about how participants reacted to the training. |
| 2 | What did participants learn? | This type of evaluation is also inexpensive to administer, but it aims to judge whether the participants learned what you wanted them to learn during the training. Often, this will take the form of a quiz or test at the end of training. Be certain to ask only questions that directly relate to the learning objectives. |
| 3 | Did the participants’ job productivity improve? | This type of evaluation aims to prove job transfer: is the participant able to leave the classroom and use what he or she learned back on the job? Level 3 evaluations require data. For example, if the company’s goal for the training was to improve customer satisfaction ratings for its customer service reps, you would need to collect data both before and after the training to prove job transfer. |
| 4 | Did the organization achieve its goal? | This type of evaluation aims to prove that the training affected the organization as a whole. Level 4 evaluations require both data and a control group that does not receive the training. For example, if the company’s goal for the training was to improve customer satisfaction ratings for its customer service reps, you would collect data from two groups of reps, train only one group, and then compare data from both groups again afterwards. |
As you can imagine, evaluating at Levels 1 and 2 is much less complicated (and less expensive and time-consuming) than evaluating at Levels 3 and 4. The level at which you decide to evaluate a program depends on what you are trying to prove.