Learning leaders have an opportunity to take a smarter, evidence-based approach to both evaluation and design choices. There are two methods that can dramatically improve the quality and effectiveness of organizational learning: samples and competitive pilots. Here’s a brief explanation of both.
Sample Your Learners
For 30 years, I have seen very few sampling models used in learning evaluations. In almost all programs, organizations collect data from the entire universe of learners. A sampling model is a viable alternative for evaluation. Here are a few approaches:
• Create a random sample of learners, whether the program has 200 or 3,000 of them. This sample can be as small as 5 percent.
• Alternatively, construct a representative sample, based on key variables. For example, select five new hires, five more senior workers, five headquarters-based employees and five field-based workers.
• Rather than doing a quick-and-easy survey with the entire learner universe, plan to spend more resources on the sample.
• For comprehensive programs, such as leadership- or succession-focused offerings, organizations may choose to gather performance data or 360-degree feedback from the sample members’ workplaces and colleagues.
• For example, you may want to conduct an in-depth interview with the learner several weeks after the event.
• For the learners who are not selected for the in-depth evaluation, you still can provide forms or processes for them to provide feedback about the learning experience.
Sampling can be implemented with a quality-control methodology. If the data from the sample is unusual or reflects an unexpected pattern of learner reaction or performance, the organization can choose to expand the sample size or poll the entire universe of learners in a course.
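The sampling approaches above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed implementation: the roster, the "role" field, and the per-stratum count of five are assumptions drawn from the examples in this article.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def draw_sample(learners, rate=0.05, strata_key=None, per_stratum=5):
    """Draw an evaluation sample: simple random by default, or a
    representative sample (a fixed count per stratum) if a key is given."""
    if strata_key is None:
        k = max(1, round(len(learners) * rate))  # e.g., 5 percent
        return random.sample(learners, k)
    strata = {}
    for person in learners:
        strata.setdefault(person[strata_key], []).append(person)
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, min(per_stratum, len(group))))
    return sample

# Illustrative roster; "role" is an assumed field name.
roster = [{"id": i, "role": random.choice(["new hire", "senior", "HQ", "field"])}
          for i in range(200)]

random_sample = draw_sample(roster)  # 5 percent of 200 learners -> 10 people
representative = draw_sample(roster, strata_key="role", per_stratum=5)
```

The same structure supports the quality-control step: if the sample's results look anomalous, call `draw_sample` again with a higher rate, or fall back to polling the entire universe.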
Organizations using a sampling approach report a number of positive changes:
• Targeting evaluation expenses.
• Reducing fatigue from repetitive evaluation forms.
• Moving toward Level 3, 4 or 5 evaluation approaches.
• Giving trainers and designers more focused data.
• Gathering more contextual information (e.g., backgrounds of learners).
Run Competitive Pilots
We have a tendency to pilot a single learning change and then scale it to the entire organization. But we rarely use a competitive piloting model to try multiple approaches and gather in-depth data about their efficacy.
Why not pilot very different and even contradictory approaches to a learning/performance objective? Here are a few ways to do it:
• The organization targets a learning process that is in need of redesign — for example, on-boarding and orientation.
• A few regions are selected and each one implements a different approach. These would vary by length, content, engagement level and delivery.
• A range of metrics and follow-up assessments would be gathered for each of the pilots.
• Cross-tabulations could be constructed to compare the effectiveness of each model with various employee demographics.
• Longitudinal tracking could be conducted to look at employee retention and satisfaction one year later.
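The cross-tabulation step above can be sketched with standard-library tools. The pilot labels, the "tenure" demographic, and the assessment scores below are illustrative assumptions, not real data.

```python
from collections import defaultdict

def crosstab(records, row_key, col_key, value_key):
    """Average a metric for each (row, col) cell -- e.g., assessment
    score broken out by pilot model and employee demographic."""
    cells = defaultdict(list)
    for r in records:
        cells[(r[row_key], r[col_key])].append(r[value_key])
    return {cell: sum(vals) / len(vals) for cell, vals in cells.items()}

# Illustrative follow-up assessment results (assumed field names).
results = [
    {"pilot": "A", "tenure": "new hire", "score": 4.0},
    {"pilot": "A", "tenure": "veteran",  "score": 3.0},
    {"pilot": "B", "tenure": "new hire", "score": 4.5},
    {"pilot": "B", "tenure": "veteran",  "score": 4.5},
    {"pilot": "A", "tenure": "new hire", "score": 5.0},
]

table = crosstab(results, "pilot", "tenure", "score")
# table[("A", "new hire")] averages 4.0 and 5.0 -> 4.5
```

A table like this makes the comparison question concrete: pilot B may win overall while pilot A wins for a particular demographic, which is exactly the pattern a hybrid design can exploit.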
Engage the designers in a competition mentality to get their creative design juices flowing. This model will help your organization answer these questions:
• Does each pilot result in measurable improvement?
• Which pilot is preferred by learners, managers and trainers?
• Is the improvement offered by a pilot worth the disruption?
• Which methods work best in which work environments and with which types of workers?
There also should be a collaborative aspect to the competitive pilot process. Often, the most effective approach will be a hybrid of several of the models. Give key learning professionals opportunities to observe, up close and personal, the various approaches.
If your organization is a large enterprise, there is room for multiple, simultaneous pilots. In smaller organizations, you may want to benchmark with similar organizations to conduct a competitive pilot across companies, using a common supplier. In either case, bring the power of research to the learning process.