
Top 4 Recommendations for Evaluating Hybrid Learning


InSync Training recently conducted exploratory research on the issues surrounding hybrid learning, that is, learning delivered at the same time to some learners in person and to others remotely (take a look at the results of that research). One of the interesting findings concerned the treatment and perception of learners in a hybrid delivery, whether or not the session was designed for hybrid delivery, and the effects on the learner as a result. From the program evaluation perspective, hybrid presents additional questions around program implementation that are often overlooked, largely (the author suspects) because hybrid is an emerging learning environment and design practices have not yet fully accommodated the specific affordances and learner needs within that environment.

 

Hybrid Work is Here to Stay . . .

I’d like to establish, and then set aside, one seemingly endless argument about hybrid: hybrid is here to stay. Interviews, surveys, and broad demographic studies from multiple national and global sources suggest that some form of hybrid work is now the status quo for work that doesn’t explicitly require full-time office presence. The current patchwork of company-by-company policies and implementations will (someday) subside, and a new “normal” labor model will emerge, one that almost certainly includes some form of hybrid work.

. . . Which Presents Some Challenges.

That’s not to say hybrid work does not have its own unique challenges, most noted among them the difficulty of transferring corporate culture and facilitating interpersonal and cross-functional collaboration. A review of recent organizational behavior research on the effects of remote and hybrid work shows a mixed bag of results that often conflict with one another.

 

Evaluating Hybrid Training

Like most training, a hybrid learning and development solution is implemented to give learners the opportunity to master a particular set of individual learning objectives. To assess a hybrid approach, or for that matter any training approach, the evaluator must assess against the individual objectives, the organizational objectives, and the specifics of the learning implementation (treatment, environment, learner data, etc.). Hybrid learning is unique in that evaluation requires us to collect data from two learner sub-populations whose individual experiences are inherently different.

A Side Note about “Equivalence”

When addressing different delivery environments, it is critical to do away with the concept of “equivalent experience” (as in “I want the remote and in-person learner to have the same experience”). Equivalence, at least in this context, is nonsensical: the learning environment, and the learner’s interaction within that environment, is inherently different when the context of learning is different. The learner attending remotely in a hybrid environment has a VERY different experience than one sitting in person with the instructor, but that doesn’t mean it is any better or worse; it’s just different, and your evaluation approach needs to accommodate this difference by collecting specific data for each learning experience.

To illustrate some hybrid learning evaluation techniques, let’s look at an example:

Sofi is a software developer for Snakecharmer Software, LTD, based in London, UK. Snakecharmer has just been awarded its first-ever large contract for the Latvian government, which brings with it unique security and development requirements for all developers. As part of its work-up to plan for and implement the project, the program manager has set up hybrid training sessions for the development team to address specific software processes, tools, and testing unique to the project. Much of Snakecharmer’s development team is remote (including Sofi), but about 25% of the team (mostly the test and integration team) is fully in-office (though they typically work odd hours). The training sessions are held in London and include both in-person and remote attendees. In-person attendees watch the instructor manipulate the software development and testing tools while commenting and suggesting next steps, whereas the remote learners have the software running on their own desktops and follow along by manipulating their own code base. Questions and commentary during the sessions are robust from both remote and in-person learners. Sofi works remotely and attends the sessions (she selected her times from multiple offerings). The sessions conclude with a practical assessment performed by the learners and submitted after delivery using the online tools and process; the results are part of the certification package required by the Latvian government. As part of the delivery, Sofi’s feedback is collected about the training (content and quality), as well as any feedback about the planned development approach. The goal is to get the first phase of software requirements development underway in a couple of weeks, once the Latvian government has approved the security approach and project plan.

Ready to Enhance Your Evaluation Strategy? 

Register Now for our Evaluating Hybrid and Virtual Learning Workshop

Only $399 per seat

 

Recommendation #1: Start with the Organizational Objectives

Look at the desired organizational outcome(s) first, and then evaluate how those objectives are supported by learner outcomes. In our example, Snakecharmer wants to get to development as soon as possible; everything else is simply a milestone on the way to that first real project objective. They likely also want to succeed in designing and developing a quality product for their client. Lastly, they’d probably like to turn a profit on the project and hopefully get awarded follow-on work. When stated this way, the assessment and evaluation of the learner and the hybrid training solution becomes fairly apparent: program evaluation could start with learner data (attendance, attrition, technology drop-offs, activity participation, assessment data, etc.) as well as affective and qualitative feedback from the learners, use that data to refine the proposed processes and make the training deliveries more effective, and then determine whether those outcomes achieve the desired short-, mid-, and long-term organizational objectives.
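To make that list of learner data concrete, here is a minimal sketch in Python (a fitting choice for Snakecharmer) of the kind of per-learner record an evaluator might capture for each session. The field names and the example values are hypothetical, offered only to illustrate the categories named above, not a prescribed schema.

```python
# A minimal, hypothetical per-learner record for one hybrid training session.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LearnerRecord:
    learner_id: str
    modality: str                      # "in_person" or "remote"
    attended: bool                     # attendance
    dropped_early: bool                # attrition / technology drop-off
    activities_completed: int          # activity participation
    practicum_score: Optional[float]   # summative practical assessment (0-100)
    feedback_rating: Optional[int]     # affective feedback (1-5)
    feedback_comments: str = ""        # qualitative feedback on content/process

# Example record for a remote attendee like Sofi (values are illustrative).
sofi = LearnerRecord(
    learner_id="dev-042", modality="remote", attended=True,
    dropped_early=False, activities_completed=6,
    practicum_score=91.0, feedback_rating=4,
    feedback_comments="Process docs helpful; want more on the test harness.",
)
print(sofi)
```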


To identify the right data to collect, InSync employs a DEM-modified CIPP evaluation model (Stufflebeam, 2000; Provus, 1969) to document the logical relationships between a learner in a classroom and the organizational outcomes. A logic model helps organize the data collection and analysis effort so that the right data is collected at the right time to answer the relevant questions about the training implementation, both at the learner level and at the organizational level. In our example, completing project milestones on time might reasonably depend on the effective development and testing processes covered in the training. Mastery of those processes by the development team would then be a critical element to evaluate early in the project. Put another way, the logic model might suggest that individual performance in using the software tools during training is a prerequisite to meeting project milestones and organizational goals, so aggressive, proactive evaluation and early remediation of performance gaps in the processes would be appropriate.
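Here is one way such a logic model might be sketched in code so that the data collection plan is explicit before delivery. This is an illustrative structure only, not InSync’s actual DEM-modified CIPP instrument; the indicator names tie back to the hypothetical learner record above, and the operational indicators are assumptions for the Snakecharmer example.

```python
# Illustrative logic-model sketch: each organizational outcome lists the upstream
# indicators the evaluator plans to collect, at the learner and operational levels.
logic_model = {
    "short_term": {
        "outcome": "Development team masters the project-specific process and tools",
        "learner_indicators": ["practicum_score", "activities_completed", "feedback_rating"],
    },
    "mid_term": {
        "outcome": "First-phase milestones hit on schedule with process compliance",
        "learner_indicators": ["practicum_score"],
        "operational_indicators": ["process_compliance_rate", "milestone_slippage_days"],
    },
    "long_term": {
        "outcome": "Quality delivery, profitable project, follow-on work from the client",
        "operational_indicators": ["defect_escape_rate", "project_margin"],
    },
}

# Walk the model from short- to long-term outcomes.
for horizon, node in logic_model.items():
    print(f"{horizon}: {node['outcome']}")
```

Writing the model down this way, even informally, forces the evaluator to name which learner-level measures are expected to feed which organizational outcomes before any data arrives.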

 

Recommendation #2: Define Your Unit of Analysis Differently

When evaluating hybrid learning, it may be helpful to expand the unit of analysis “outward” to include the learner-environment construct as a single unit rather than treating the learner and the environment as two separate aspects of the training implementation. In this approach, analysis needs to look at data in the context of the learning environment for each sub-population of learners (in-person and remote) and ask whether learner outcomes, participation, dialogue and narrative, etc. are appropriate to that environment. Then, in all cases, look to achievement of individual objectives as defined within the instructional implementation. In our example, there will be two different units for collection and analysis, one for in-person and one for remote, with different data sets. For example, data on participation and interaction will be mostly qualitative for the in-person learners, while the remote learners will produce a variety of data relating to chat, screen time, software application usage, etc. that might inform whether hybrid delivery is appropriate, or whether perhaps all learners should bring a laptop and manipulate the software during the instruction.
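A simple way to keep the two learner-environment units distinct is to write a separate measurement plan for each. The sketch below assumes hypothetical measure names for the Snakecharmer sessions; the point is the structure, not the specific fields.

```python
# A sketch of per-environment measurement plans: each sub-population is its own
# unit of analysis, with measures appropriate to its learning environment.
measurement_plan = {
    "in_person": {
        "environment": "instructor-led demonstration room, London",
        "measures": [
            "observer_rated_participation",  # qualitative, coded by an observer
            "questions_asked",
            "practicum_score",
        ],
    },
    "remote": {
        "environment": "virtual session with hands-on tool access",
        "measures": [
            "chat_messages",                 # platform telemetry
            "screen_share_minutes",
            "tool_usage_events",             # from the dev/test tools themselves
            "practicum_score",
        ],
    },
}

# Only the shared measures (here, the practicum) are compared head-to-head;
# the rest are interpreted against what is appropriate to each environment.
shared = set(measurement_plan["in_person"]["measures"]) & set(measurement_plan["remote"]["measures"])
print("Directly comparable measures:", shared)
```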

 

Recommendation #3: Evaluate Both Experiences

In evaluating hybrid learning, one of the singular challenges is the variance in learner experience within the same delivery: in-person and remote learners may show differences in mastery because their delivery included different practice elements. Recent research in hybrid learning conducted by InSync indicated that a large majority (76%) of hybrid deliveries were perceived as designed for only one audience (in-person or virtual), even though both types of learners were included in the delivery (due to expedience or other operational limitations). In such implementations, learners were far less likely to be engaged with the material and significantly more likely to drop out.

If the implementation is designed for a hybrid audience, the evaluator must identify the instructional elements that are not common between the in-person and remote learners and then determine how those elements might impact learner outcomes. Define your data collection approach with these differences in mind and look for differentiated outcomes. In our example, the remote learners may develop a more nuanced command of the software because they were manipulating it rather than observing. Keep in mind that it may turn out to be a distinction without a difference in the ultimate achievement of organizational outcomes, but you won’t know unless you measure it. The use of summative assessments (like the practicum in our example) can mitigate this issue; more on this later.
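A minimal sketch of what “look for differentiated outcomes” can mean in practice: compare a shared measure (the practicum) across the two sub-populations. The scores below are fabricated purely for illustration, and a gap like this would be a prompt for investigation rather than a verdict on either modality.

```python
# Compare a shared summative measure across the two learner experiences.
from statistics import mean

practicum_scores = {                     # illustrative numbers only
    "in_person": [78, 85, 82, 74, 88],   # observed the instructor during delivery
    "remote":    [91, 84, 93, 79, 90],   # manipulated the tools hands-on
}

for modality, scores in practicum_scores.items():
    print(f"{modality:9s} n={len(scores)}  mean={mean(scores):.1f}")

gap = mean(practicum_scores["remote"]) - mean(practicum_scores["in_person"])
print(f"remote minus in-person gap: {gap:+.1f} points")
# A persistent gap might argue for giving in-person learners laptops for
# hands-on practice, or for rethinking the hybrid design altogether.
```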

 

Recommendation #4: Comparative Analysis

Post-delivery, relevant data for evaluating the training implementation is available from other sources and should be used to further assess learner outcomes. If we think of a training delivery as a deliverable service meant to achieve a specific outcome, early data on software usage (process compliance, productivity, etc.) can be used not just to inform the program manager on execution but also to assess the efficacy of the learning and development approach, including whether there were any differences between learners who attended remotely and those who attended in person. This knowledge can inform future training implementations, for this project and for Snakecharmer’s next big software project.

In our example, following delivery all learners completed a summative assessment in the form of a practical. Including this type of data in the evaluation of the program might seem intuitive (and it is), but it strengthens the evaluation to examine the differences in outcomes as much as the overall achievement of the learner population. Perhaps particular session times, or mandated versus flexible scheduling, produced differences in achievement; again, you won’t know if you don’t look, and these differences may be critical to achieving the desired training (and organizational) objectives.
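The same practicum data can be sliced by factors other than modality, such as which session slot a learner chose. The sketch below uses invented records and hypothetical field names simply to show the comparative grouping described above.

```python
# Slice summative results by modality and session slot (illustrative data).
from collections import defaultdict
from statistics import mean

results = [
    {"modality": "remote",    "session_slot": "morning", "practicum_score": 92},
    {"modality": "remote",    "session_slot": "evening", "practicum_score": 81},
    {"modality": "in_person", "session_slot": "morning", "practicum_score": 84},
    {"modality": "in_person", "session_slot": "evening", "practicum_score": 77},
    {"modality": "remote",    "session_slot": "morning", "practicum_score": 88},
]

by_group = defaultdict(list)
for row in results:
    by_group[(row["modality"], row["session_slot"])].append(row["practicum_score"])

for (modality, slot), scores in sorted(by_group.items()):
    print(f"{modality:9s} {slot:8s} n={len(scores)} mean={mean(scores):.1f}")
```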

Of note, you’ll NOT see any of the affective “smile sheet” data that trainers often rely on in this evaluation approach; frankly, the only feedback Snakecharmer might want from its trainees is user feedback on the software development tools and process on the way to the first design and development milestone. User feedback (whether affective “smile sheet” data or more quantitative user data) is important; the point is that it must be used within the limits of what it can tell a program evaluator, and its context and relationships must be established before collection. Affective feedback about the content (relevancy, utility, etc.) can be used to refine the training implementation; that is where data about how the learner feels plays a role, NOT in deciding whether the training had to happen. Remember, the Latvian government is watching.

One other important note in this example: the assessment and evaluation approach needs to be considered long before the training solution is implemented. As in most traditional development models (such as ADDIE), learner assessment (and, we would add, organizational evaluation) must be planned and designed before actual implementation. The program evaluator needs to build measures and measurement protocols BEFORE data starts coming in; otherwise, there is a (pernicious) inclination to “read the data” and develop conclusions that may be unsupported by it (this is where things like confirmation bias LOVE to play).

 

The Hybrid Learning Environment is Unique, Just Like Every Other One

The key to understanding the difference in evaluating hybrid learning is that the learner experiences are different; these differences are of interest insofar as they might affect learner outcomes, so your evaluation should in part focus on the learner experience and whether it was sufficient to achieve the objectives of the training. As a first step, for each “type of learner” (in-person or remote), develop measures that will provide insights into learner outcomes (including assessment data), and then use a relational logic model (e.g., CIPP) to make inferences about organizational objectives. It may be that hybrid learning is not appropriate for a particular requirement; in our example above, it might be best if all learners were remote, or all in person, and we just won’t know until we collect the data. At the very least, this data can inform learning and development implementations against similar requirements in the future and give the project management team insight into where to focus operationally to address any shortcomings that stem from less-than-optimal learning outcomes. The good news is that for varied learning environments there are many latent data sources that can reveal learner progress toward desired outcomes. Moreover, hybrid learning carries an inherent fidelity to real-world hybrid workforce performance that traditional methods of instruction and assessment can’t always provide.

To conclude, evaluating a hybrid learning and development implementation requires some additional effort and analysis of new data sources to adequately address both the in-person and remote experience. If we fail to acknowledge and anticipate the differences in learner experience, and to measure the impact those differences may have, we will at best have only a partial picture of the program’s success or failure in achieving its desired outcome. Hybrid learning is here to stay. We, as practitioners, need to be able to measure its effectiveness accurately to ensure hybrid doesn’t become a euphemism for remote learners treating training as “checking e-mail while an instructor teaches the in-person attendees.”

 

Ready to Enhance Your Evaluation Strategy? 

Register Now for our Evaluating Hybrid and Virtual Learning Workshop

Only $399 per seat
