Summary: As it turns out, evaluating a virtual training program is just like evaluating any other instructional program. The challenges lie in what you measure and how you interpret the results. The good news is that modern virtual classroom platforms give you a variety of tools you can use to assess the effectiveness of your instructional delivery and its effect on learners.
Anyone familiar with ADDIE can tell you that the last step (the "E") is evaluation. The challenge is that people misinterpret what that step means: the goal is not to evaluate the learner, but rather the learning program. Effective program evaluation requires careful planning before design to identify key data measures, and a means to collect that data prior to implementation.
The InSync Training team congratulates Chip Dye, Lead Researcher and Director of Client Relations, on the defense of his dissertation and completion of his PhD program at the University of Connecticut. As part of our 20 Modern Learning Lessons Learned in 20 Years series, Chip highlights the purpose and value of his research into learner engagement.
A casual review of the academic literature finds more than 300 scholarly articles and more than 2,000 trade articles in 2019 alone that use the term "learner engagement," but few commentators define it. It is perhaps the ubiquity of the usage that allows researchers and commentators to continue the practice without a strict definition: it is assumed everyone knows what is meant by the term. Most practitioners in the learning and development industry, be it K-12 public education, post-secondary instruction, or professional training, can easily distinguish an "engaged" learner from one who is not engaged, in many cases simply on sight.
Anecdotally, it is easy to "see" when someone is not engaged, but much more difficult to articulate what is meant by "learner engagement." In the industry, learner engagement has developed into a shorthand term that loosely represents an amalgam of learner subject-matter interest and expertise, attitude, motivation, mastery, and self-efficacy. Moreover, it is often explicitly or implicitly assumed that an engaged learner will achieve better outcomes against measurable rubrics than one who is not engaged.
Very often you will read or hear about “learner-centric” learning methods. These articles stress techniques that accommodate the learner’s needs and address expectations of the learner in the learning experience. One commentator I heard recently likened this to “putting yourself in the shoes of the learner in the learning experience.”
Fair enough as a start. Certainly we need to understand how the learning experience is perceived by the learner, but I would argue that rather than trying to put yourself in the shoes of the learner, you should instead provide a learning experience where everyone's shoes already fit.
And how can we measure the effect of engagement on instructional outcomes?
In my previous article, learner engagement was defined as turning on three factors:
- An emotional response to the training: How does the learner "feel" about the content and its presentation and treatment?
- An intellectual response to the training: Does the instructional experience require and involve the learner's intellect?
- An environmental response to the learning: Do the learners interact with the learning environment, and is the environment changed as a result of the training?
Engaging learners has always been a challenge. Not only has training evolved into a blended learning model, but the "Modern Classroom" is also influenced by multicultural cohorts, a mobile workforce, and social networking tools. Managing these influences and integrating them into a blended program requires planning, training, and understanding from all members of the training delivery team.