Mission Accomplished? Measuring Success of Corporate Universities

As corporate executives increasingly turn to their universities as strategic partners, university leaders must ensure that what and how they measure moves beyond narrow metrics.

Measurement science at corporate universities is still evolving. But as corporate executives increasingly turn to their universities as strategic partners, university leaders must ensure that what and how they measure moves beyond narrow measures to sophisticated talent development metrics.

Learning organizations have to measure what they do, and those measures have to be meaningful to corporate leaders.

In Corporate University Xchange’s (CorpU) eighth annual benchmarking study, 58 percent of learning organizations reported they are under increasing pressure to demonstrate the value of learning and development (L&D). Yet the same study showed corporate leaders do not really believe or buy into the measurements presented to them.

There must be a better way. Corporate universities, like their HR parents, have lacked a “decision science” to adequately define and describe the value of their work and the payback on the company’s investment in learning.

John Boudreau, research director at the University of Southern California’s Marshall School of Business, is shining a light on the need for a new way to measure and report what HR, which often includes learning, does. He uses accounting as an analogy: the profession was equipped to tally balance sheets and income statements but fell short in providing tools to evaluate financial investments. That shortcoming gave rise to a decision science called “finance,” a field with new sets of tools and frameworks to calculate decision concepts such as return on investment (ROI) and economic value added (EVA).
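
To make the accounting analogy concrete, here is a minimal sketch of the decision arithmetic behind ROI and EVA; the dollar figures, the 10 percent cost of capital and the function names are invented for illustration and are not drawn from any company discussed here.

```python
# Hypothetical decision arithmetic for a learning investment.
# All figures are invented for illustration only.

def roi(net_benefit: float, cost: float) -> float:
    """Return on investment: net benefit divided by cost."""
    return net_benefit / cost

def eva(net_operating_profit_after_tax: float,
        invested_capital: float,
        cost_of_capital: float) -> float:
    """Economic value added: profit left after charging for the capital used."""
    return net_operating_profit_after_tax - invested_capital * cost_of_capital

program_cost = 500_000       # total spend on a hypothetical leadership program
estimated_benefit = 650_000  # estimated value of productivity and retention gains

print(f"ROI: {roi(estimated_benefit - program_cost, program_cost):.0%}")  # 30%
print(f"EVA: ${eva(150_000, 500_000, 0.10):,.0f}")                        # $100,000
```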

Similarly, the sales function was not equipped to analyze customer needs in a way that could lead to strategies on product packaging, pricing and promotion; hence, the marketing function took shape. In the same way, learning must spawn new practices that improve the meaning and the context of its value to the business.

Corporate universities have struggled with measurement because the frameworks are still evolving. Leaders believe measurement has to follow a corporate university maturity model, a curve that describes an evolution from consolidation to alignment to validation and, finally, optimization.

Consolidation
Rio Tinto, one of the world’s largest mining companies, grew through a series of planned acquisitions. It initially chose not to interfere with the successful operation of its newly acquired mine sites, electing to let training teams operate in a highly decentralized mode. Eventually, senior leaders called for a thorough evaluation of HR and learning and found the costs were out of line with benchmarks.

The training teams couldn’t begin to think about demonstrating the value of their work until they had a clear view of how much money was being spent by the 37 separate training teams across the mine sites. It was impossible to measure efficiency improvements before establishing a cost basis for current operations.

Building a consolidated view of the total investment meant analyzing 37 training budgets, all vendor contracts, full- and part-time staff salaries and other costs. The problem was further complicated because the business units were using different tracking systems, and many costs were buried in budget line items labeled “miscellaneous.”
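
As a minimal sketch of what that consolidation can look like, the hypothetical snippet below assumes each site’s tracking system can export rows of (site, line item, amount); the sample rows and category keywords are invented, not Rio Tinto’s actual data.

```python
from collections import defaultdict

# Hypothetical exports from heterogeneous site tracking systems:
# (site, line_item, amount). Labels vary from site to site.
raw_rows = [
    ("site_01", "Vendor - safety training", 120_000),
    ("site_01", "Trainer salaries", 300_000),
    ("site_02", "miscellaneous", 45_000),
    ("site_02", "Travel - courses", 18_000),
]

# Map inconsistent labels onto one shared category scheme (assumed).
CATEGORY_KEYWORDS = {
    "vendor": "vendor_contracts",
    "salar": "staff_salaries",
    "travel": "travel",
}

def categorize(line_item: str) -> str:
    item = line_item.lower()
    for keyword, category in CATEGORY_KEYWORDS.items():
        if keyword in item:
            return category
    return "unclassified"  # the "miscellaneous" problem made visible

totals = defaultdict(float)
for site, line_item, amount in raw_rows:
    totals[categorize(line_item)] += amount

for category, amount in sorted(totals.items()):
    print(f"{category:20s} ${amount:,.0f}")
```

Forcing every line item through one shared category scheme is what surfaces the “miscellaneous” spend and makes a defensible cost basis possible.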

Rio Tinto assembled its best estimate as the first step toward demonstrating value. The consolidation yielded added benefits, including:

• Agreement by training teams on consistent approaches to tracking training costs.
• Clear ideas for targeted cost reductions.
• Priorities for efficiency improvements.
• A view of which fixed costs could be shifted to variable.

The L&D team at a major oil company faced a similar consolidation challenge after making a commitment to trim several million dollars off its total training investment. Because many costs remained elusive, it “decided not to chase every needle in the haystack” and focused on big buckets such as L&D salaries, major vendors and travel expenses.

Alignment
Textron found itself in the alignment stage when senior leaders asked the question all L&D teams expect to hear sooner or later: “What’s our payback on our investment in learning?” In the alignment stage, L&D teams hone their ability to capture and illustrate results that show an undeniable connection between learning programs and performance improvements.

Upon hearing the question from executives, Nancy Brennock, director of operations at Textron University, had no problem describing new product designs that emerged during leadership training, improved success in meeting project deadlines after attending project management courses and better contracts negotiated by purchasing teams who attended training. She said she was even able to claim a sliver of credit for the efficiencies and cost savings derived from Lean Six Sigma programs that would not have happened without training.

But senior executives often want more. “They want a better business case. But how can you prove that leaders are becoming more effective communicators and better decision makers?” Brennock said.

Insight into this ongoing problem may lie deep inside our minds. In the book Out of Control: The New Biology of Machines, Social Systems and the Economic World, writer Kevin Kelly reports that research on memory indicates people can easily recognize concrete nouns such as “elbow.” The brain can classify and store the image and description of an elbow for rapid retrieval. Say “elbow” and most people can point to it immediately.

But ask these same mere mortals to talk about abstract nouns such as “liberty” and “aptitude,” and they are lost. In this same vein, learning professionals are asked to prove how their efforts make people better “systems thinkers” and “relationship builders.”

In a similar way, hours of training and investment per employee are concrete measures that can be understood, counted and communicated. Unfortunately, they don’t tell much about improvement. So we often look for ways to convert abstract ideas such as systems thinking into something measurable.

To overcome the fuzziness of measuring ideas such as “better decision making,” Brennock’s team is translating the abstract to the concrete by defining behaviors it can specifically link to leadership competencies. The new behaviors then represent observable evidence of training-related improvements in leadership. The team has also landed on a concrete number every senior executive understands: leadership retention increased by 5 percent.

Validation
After demonstrating a plausible, if not completely quantifiable, relationship between learning programs and business results, some organizations take a scientific approach to look for harder evidence of impact.

Studies conducted by Fred Goh, Caterpillar’s manager of strategic learning, showed that learning correlates with both employee engagement and employees’ perceptions of career opportunities. Results revealed Caterpillar could significantly improve engagement by targeting more learning opportunities to employees who were neutral in their attitudes about work. The findings highlighted an opportunity to increase engagement among production and salaried employees.
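
The article does not detail Goh’s methodology, but the basic correlation check is straightforward to sketch; the per-employee survey scores below are fabricated for illustration, and a Pearson coefficient is just one reasonable choice of statistic.

```python
from statistics import correlation  # Python 3.10+

# Fabricated per-employee data for illustration only.
learning_hours    = [4, 12, 20, 8, 30, 2, 16, 25, 10, 18]
engagement_score  = [3.1, 3.8, 4.2, 3.4, 4.6, 2.9, 4.0, 4.4, 3.5, 4.1]
career_perception = [2.8, 3.5, 4.0, 3.2, 4.5, 2.7, 3.9, 4.3, 3.3, 4.0]

print(f"learning vs. engagement:        r = {correlation(learning_hours, engagement_score):.2f}")
print(f"learning vs. career perception: r = {correlation(learning_hours, career_perception):.2f}")
```

A strong coefficient on its own shows association, not causation, which is why targeting specific employee groups and re-measuring matters.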

Staples began to more carefully analyze and compare sales and customer service performance among teams that attended training versus those that didn’t. It studied the impact of training programs designed to improve product knowledge, controlling for regional influences and other factors that could distort the results. In the end, it offered senior leaders compelling evidence that training had improved both top-line sales and operating profit. More specifically, it found an 8.4 percent revenue improvement per store.
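
As a rough illustration of that kind of comparison, the sketch below contrasts trained and untrained stores within each region so that regional conditions are not credited to training; the store figures are invented, and the approach is far simpler than whatever analysis Staples actually ran.

```python
from collections import defaultdict

# Invented store-level results: (region, attended_training, revenue_growth_pct)
stores = [
    ("northeast", True, 9.1), ("northeast", False, 1.4),
    ("northeast", True, 7.8), ("midwest",   True, 8.9),
    ("midwest",   False, 0.9), ("midwest",  False, 2.1),
    ("south",     True, 8.0), ("south",     False, 1.7),
]

# Group growth figures by (region, trained) so comparisons stay within a region.
by_group = defaultdict(list)
for region, trained, growth in stores:
    by_group[(region, trained)].append(growth)

for region in sorted({r for r, _ in by_group}):
    trained_avg   = sum(by_group[(region, True)])  / len(by_group[(region, True)])
    untrained_avg = sum(by_group[(region, False)]) / len(by_group[(region, False)])
    print(f"{region:10s} trained {trained_avg:.1f}%  untrained {untrained_avg:.1f}%  "
          f"lift {trained_avg - untrained_avg:+.1f}%")
```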

Optimization
The optimization stage begins when organizations institute systems and processes designed to capture decision-quality data to analyze, measure and improve learning. Companies such as Raytheon, John Deere and Caterpillar recognize they need to apply the same rigor to talent decisions as they do when they invest to take a product from concept to production or open a business in a new region.

Organizations at the optimization stage are pursuing entirely new ideas about how to measure the value of learning, often by illustrating a causal relationship between learning and talent readiness, the total value of talent and the firm’s latent growth potential.

Measurement Frameworks and Operational Efficiency
New approaches require corporate universities to illustrate cause-and-effect relationships in talent development metrics, to help organizations find blockages in talent development processes, and to show how well the corporate university is performing its role as a strategic resource. That doesn’t mean the Kirkpatrick and Phillips models aren’t valuable, but it is important to recognize that they look at individual and group performance more narrowly than most senior executives need to see today.

Many corporate universities also pursue continual improvement. They measure their operational efficiency against industry benchmarks such as investment per employee, and they look for opportunities to shift tactical work to external partners so they can focus on more strategic work. Other practices analyze the efficiency of back-office operations, partner networks and training delivery strategies.

Learning and talent management practices are rapidly moving up the strategic value curve, becoming essential elements of sustained business success. As they do, executives will turn to learning and talent experts to support decisions about business growth plans and assignments into critical roles. The rate of innovation, particularly in measurement practices, must accelerate to position learning and talent as a key support function on par with the finance, technology and marketing groups.