What Gets Measured, Gets Better: The Application of Learning Metrics

In the end, what gets measured gets better, but is learning formally measured? How? This month’s research looks at the metrics used to assess the effect of learning on employee and overall business performance.

Tom Peters may have said it slightly differently, but in the end, what gets measured, gets better. Or, to state it in reverse, what doesn’t get measured will never get better (and probably wasn’t all that important in the first place). That parenthetical certainly doesn’t apply to learning—we inherently understand its importance, and the ultimate goal of learning is to make things better, whether it trains an employee on a new skill or conveys the details of the latest product to the sales team. But is learning formally measured? If so, how? Test scores tell part of the story, but what about measuring the downstream impact of learning? Improving business performance is the new battle cry, and every process and expense is being scrutinized for its contribution. Learning is no exception.

Every other month, we pose questions to Chief Learning Officer magazine’s Business Intelligence Board on a variety of topics to gauge the issues, opportunities and attitudes that make up the role of a senior learning executive. The March 2005 issue focused on linking learning with formal employee performance management. This month’s research focuses on metrics used to assess the effect of learning on both employee and overall business performance.

Most learning leaders believe that learning contributes to overall business improvement. However, that belief is not proof, so the wise organization takes steps to demonstrate the value of learning through improved business performance. The possible metrics are numerous: employee performance, employee productivity, customer satisfaction, overall business results and more. The story doesn’t end with the workforce either, as more and more organizations are extending learning beyond the four walls to channel partners and customers. This article addresses the use of metrics to demonstrate learning’s impact on performance, looking in particular at metrics that assess achievement of Kirkpatrick Levels 3 and 4 (changed on-the-job behavior and business results, respectively).

Manual Versus Automated
When respondents were asked to choose the description that best matched their current learning metrics initiatives, 22 percent indicated that they have no formal metrics in place. Of those that do, the largest group (44 percent) uses a manually generated approach; only 20 percent use an automated capability tied to a learning management system (LMS). Why do so many rely on a manual approach? Perhaps it allows them to tailor assessments more closely than a built-in automated feature would. An open-ended question asking respondents to explain how they are doing it garnered a wide variety of responses. One frequent observation was that technical or hard-skills training is much easier to measure than soft-skills or leadership training, so those measuring manually might be delivering more soft-skills training than their counterparts. (See Figure 1.)

Testing & Assessments
Metrics for Kirkpatrick Levels 1 and 2 are much easier to come by than those for Levels 3 and 4. Thus, it is not surprising that most organizations have processes in place to measure the former two. In fact, 82 percent of respondents said they administer tests and assessments, while only 17 percent said they do not. It is clear from this response that testing remains a key instrument in measuring the success of learning initiatives. When asked about certifications, 55 percent of respondents said that they offer them, whereas 44 percent said they do not. This relatively even split may suggest that the majority of training does not lend itself to certification; few certifications exist for soft skills, for example.

In terms of assessing on-the-job performance after training, responses were fairly evenly split: 54 percent of respondents said they do, and 44 percent said they don’t. This suggests that almost half of respondents use only Kirkpatrick’s first two levels of evaluation: Did they like the training, and did they pass?

Where the Rubber Meets the Road
Business Intelligence Board members also were asked whether their organizations measure employee performance and productivity after learning takes place. (For more on this, see the March 2005 issue of Chief Learning Officer magazine, “The Missing Link: Examining the Connection Between Learning and Performance Management.”) In terms of performance, responses were fairly evenly split: 48 percent said they do make this correlation, and 47 percent said they do not. In terms of productivity, a slightly higher percentage (51 percent) indicated that they do. This might indicate that slightly more organizations measure output than measure quality. Taken together, both sets of responses reinforce that roughly half of those surveyed are not measuring whether learning affects employee performance, which raises the question of how these organizations can effectively demonstrate ROI on learning. The March 2005 article confirmed that most organizations do not have automated performance management systems in place, and it may be safe to conclude that a lack of technology is hindering progress in this area.

In terms of improved business performance after employee learning or extended enterprise training, the majority does not make this correlation: 22 percent of respondents indicated that they correlate business performance to employee learning, 3 percent correlate it to extended enterprise learning, and 19 percent correlate it to both. A whopping 56 percent make no correlation at all between learning and business performance. (See Figure 2.)

Correlating customer satisfaction to employee and extended enterprise training doesn’t fare much better. In fact, it’s a little worse: 57 percent of respondents do not correlate customer satisfaction to either employee or extended enterprise learning, 18 percent correlate it to employee learning only, 3 percent to extended enterprise learning only, and 22 percent to both. (See Figure 3.) The good news is that there’s ample opportunity for improvement.

The Technology Question
Those companies that successfully make correlations between learning and performance were asked whether their technology-based learning platform is helping them do so. Somewhat surprisingly, only 31 percent indicated that their platform helps them make these connections. Does this reflect a lack of analytical features, a lack of bandwidth or a lack of understanding of the features the system delivers? A surprisingly high number of respondents (25 percent) said they didn’t know whether the learning system is helping. Put that together with the 44 percent who say the learning platform is not helping, and it appears that most of those achieving success are doing so outside of their learning technology environment.

Those who have had success were asked to explain how they are developing meaningful metrics. The following examples from Business Intelligence Board members may help readers get started:


  • Create performance objectives for each learning experience, and track the learner with Level 3 evaluation. “This is a journey. We are piloting several courses currently and will continue to expand upon this measurement capability over time.”
  • Specify learning objectives, delivery medium and performance metrics in a written agreement for each project. “Everyone involved in the project has a stake in the outcome because those metrics become part of his/her annual appraisal.”
  • Provide reports to managers about employee training completion. The managers compare these reports with performance appraisals and provide feedback as part of an ongoing needs assessment. The quality department correlates learning with improved performance and feeds back information. This information flow also exists for feedback on safety training needs and successes. “We are in our infancy correlating learning to overall business performance, and we have identified indicators and are beginning to track the data.”
  • Do it selectively. “For example, in the Management Series, which is an internal, yearlong certificate program, at the front end we administer behavioral styles assessments and ask each participant to set three goals with their supervisor for areas of improvement (to be worked on during the course of the program). They might be organizational or strategic in nature or personal goals.” At completion, supervisors and participants are asked to rate how well the goals were met, and that data serves as a metric tying learning to greater productivity.

What Comes Next
The majority of respondents, who do not yet have such metrics in place, were asked whether they plan to implement them. Responses were somewhat mixed. A clear majority (60 percent) plans, at a minimum, to correlate learning with employee performance. Less encouragingly, only 44 percent plan to build a correlation between learning and overall business performance. (See Figure 4.) Of those that do have plans in place, 53 percent predict implementation time frames of nine to 18 months.

Respondents also were asked whether their plans for learning metrics would be independent or part of a larger initiative. The majority indicated the latter. In other words, while learning executives may long for such metrics, they are unlikely to get them without a wider enterprise project. In fact, a plurality of those that have no metrics in place (41 percent) said that the largest impediment is a lack of management support for such an initiative. (See Figure 5.) Perhaps this is the most compelling statistic of all.

Lisa Rowan is program manager of HR and staffing services research at IDC, a global market intelligence and advisory firm. Lisa can be reached at lrowan@clomedia.com.
