Does Senior Leadership Buy ROI for Learning? Part II

This month, CLO magazine continues its report on how ROI is measured in learning functions within various companies.

In the last issue of Chief Learning Officer magazine, we introduced research on how ROI is measured in learning functions within specific companies. We asked the following question of more than 200 learning professionals who are members of Executive Networks:

“Does your company have measure(s) of return on investment in learning that are accepted by your senior leadership as valid, rigorous and supportive of further investment in learning? If so, please tell us briefly what they are, and we will contact you to learn more about how you measure ROI on learning.”

We received responses from 11 companies that believe their ROI measures for learning meet the above criteria. (More than 50 companies replied that they didn’t believe their metrics met those conditions.) The first five responses were featured in the December Business Intelligence article; the remaining six appear below.

Note: Chief Learning Officer magazine does not endorse any of the respondents’ methodologies as effective ROI measures, but reprints them to demonstrate the scope of how particular learning leaders view this subject.

6. Consumer Electronics Company
We do periodic, intensive ROI analyses. The measurements that are fairly well accepted in our leader programs are:

1. Increased retention: For retention, we have done a matched control group comparison and a multiyear follow-up on leader development. We won a national award for the research.

2. Accelerated time to promotion: For acceleration, we measure using estimates from the manager and HR and through an interview. We then track actual time to promotion during and after the program.

3. Increased value to the company (measured in terms of getting paid more).

4. Contributions to business planning, process improvement and strategic initiatives (all by leader development groups working on business tasks), though we have no quantified ROI on this.

7. Global Electrical and Electronics Manufacturer
We use several evaluation methods:

• Happy sheets right after programs.
• Online evaluations within two weeks after a program.
• For an external business school program: after four months, an impact evaluation (online and by interview).

8. Airline
As I initially read this request for best practices in measuring the ROI in learning, I was prepared to respond with a cautious, “We haven’t cracked the code.” However, upon reading the research study in more detail, I may have some “attempts to crack the code” to share. My response is tailored to the way in which this study defines “return on investment in learning”: desired behavioral change among the organization’s members; improved organizational performance, measured both financially and in terms of organizational capability; and the creation and sustainment of a “learning organization.”

We are currently working on several evaluation projects to support our learning functions in demonstrating and maximizing the value of their interventions. The evaluation strategies are developed using a hybrid of Kirkpatrick’s Four Levels of Evaluation and Brinkerhoff’s Success Case Method. For example, to evaluate a manager development program, we are employing the following strategy:

• Standard L1 survey aimed at capturing quantitative and qualitative data relevant to the participants’ experience in the program, as well as a few leading indicators for how they might apply what they’ve learned on the job and any anticipated challenges to performance. We ask for an estimation of the extent to which they have gained new knowledge/skills, and to what extent they intend to apply these on the job.

• L2 (assessment of learned knowledge/skills) happens informally during the training program. This process is made possible and deliberate through the careful development of learning and performance objectives for each module. The facilitators of the program are then “trained” to monitor performance through each activity during the learning experience.

• L3 comes in the form of an electronic survey. This is sent out 30 days post-class, our rationale being that participants will have had sufficient time to apply the learned skills on the job. Again, we capture both quantitative and qualitative data relevant to the participants’ experience applying new knowledge/skills on the job. We ask for self-assessment feedback about the extent to which they’ve been able to apply what they learned and for specific examples of their application. Those who leave us with a “success story” are encouraged to leave their name for follow-up.

• Success Case Method interview: We have developed two interview protocols to support consistent and detailed follow-up with those participants who have asked to be contacted on the L3 survey. The first protocol is intended to dive deeper into the success cases, getting at specific behaviors, enablers in the work environment and observed impact on the operation (at the individual, team or departmental level). The second protocol is intended to gain insight into cases of non-success, or difficulty in applying the new skills; the focus here is on barriers in the work environment. We intend the qualitative data from these interviews to yield tangible stories that demonstrate the impact of our learning interventions. We’ll be able to trend best practices for how the operation can support participants once they leave the training environment. Additionally, past participants will be invited back to future classes to share best practices for overcoming challenges and applying new skills with other participants (supporting organizational capability).
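
The respondent does not describe any tooling behind this routing, so what follows is only a minimal illustrative sketch of how L3 survey responses might be split between the two interview protocols; the field names, rating scale and routing rule are hypothetical, not taken from the airline’s actual instrument.

```python
# Illustrative sketch only: field names, the rating scale and the routing rule
# are hypothetical, not taken from the respondent's actual L3 survey.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class L3Response:
    participant: str          # name left for follow-up, or "" if anonymous
    application_rating: int   # 1-5 self-assessment of on-the-job application
    example: str              # free-text example of applying the new skills


def triage(responses: List[L3Response]) -> Tuple[List[L3Response], List[L3Response]]:
    """Route named respondents to the success or non-success interview protocol."""
    success, non_success = [], []
    for r in responses:
        if not r.participant:                          # anonymous: no follow-up possible
            continue
        if r.application_rating >= 4 and r.example.strip():
            success.append(r)       # protocol 1: behaviors, enablers, observed impact
        else:
            non_success.append(r)   # protocol 2: barriers in the work environment
    return success, non_success


if __name__ == "__main__":
    sample = [
        L3Response("A. Lee", 5, "Used the coaching model in weekly one-on-ones."),
        L3Response("B. Cruz", 2, ""),
        L3Response("", 4, "Applied the delegation checklist."),
    ]
    hits, misses = triage(sample)
    print(len(hits), "success-case interviews;", len(misses), "barrier interviews")
```

In practice, the routing rule would simply be whatever the L3 survey itself defines as a “success story.”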

I describe our efforts in the previous example as an attempt, in part because we have not yet deployed the entire evaluation strategy. As a result, I’m not certain whether the data will be accepted as valid and convincing. The impact of our efforts will be fully realized when we’re able to report and communicate results back to the stakeholders.

However, there are practices we are beginning to employ on the other side of the evaluation coin, during needs assessment. We have built into our process a collaboration with key stakeholders to identify what type of information they would need in order to view our evaluation efforts as valid and convincing. For example, when engaged by a client to develop a new training program, we work to identify the “felt need” in terms of impact on the business. Where is the “pain”? How is it affecting the business? What will be different if we implement a training program? What will people be doing differently?

9. Electronics Company
This continues to be difficult to quantify in hard dollars. We have a strong “Leaders as Teachers” culture and will be further developing and deploying that methodology into the middle-manager ranks. At the executive levels, the “faculty” (C-level and their direct reports) continue to use anecdotal follow-up. The majority of our classes have an action item that is followed up on at a pre-stated interval of 30, 45, 60 or 90 days. At that time, the participants themselves supply supporting documentation and statements to indicate what was achieved. We use this, along with Level 1 evals, for the faculty to review.

Over time, the behavior changes begin to be noted in performance appraisals, development plans and observations by faculty who actually teach the classes and continue to interact with the senior leaders. This year, we are looking to do surveys and slowly reintroduce a multirater feedback option. Three years ago, I presented to the executive team and faculty how we could approach doing detailed studies. The overwhelming preference was to stay with the anecdotal approach and trend it, and we are now beginning to do that. In fact, we have begun a second generation, and in some cases a third generation, of courses in areas such as customer service, ethics, strategic thinking, operational excellence, etc. The enthusiastic support from senior leaders and the reception (registration) among participants reflect the tone from the top. Of course, we’d like to have more dashboard metrics, and we do provide some of the basics. We too would like to know how others do it.

10. Financial Company
We incorporate standard measures into all strategic training initiatives. A measurement department was created in 1997 to research and adapt best practices in the industry and customize them to meet our particular needs. Standards were created using Kirkpatrick, Phillips, Shrock and Coscarelli, Norton and Kaplan, Hodges DeTuncq and Dana Gaines Robinson, to name a few. We use an applied social research methodology with both quantitative and qualitative data analysis. Our tests are developed to rigorous standards following the Shrock and Coscarelli methodology.

We use Level 1 evaluation data to report not only training effectiveness but also leading indicator data on knowledge acquisition and on-the-job application of skills learned. Our senior management is very interested in the knowledge acquisition scores as insight into talent management. This is a cost-effective and simple method of tracking leading indicator data.

We have also reported for over three years across our key business lines using a balanced scorecard following the Norton and Kaplan methodology. In particular, we report on business partner satisfaction, employee engagement and satisfaction with our training investment. We are in a review cycle for reporting “next generation” impact data.

We have tested Level 3 data collection on specific strategic initiatives. We are currently testing facilitator follow-up calls post-training to capture both quantitative and qualitative data to measure the impact the training is having on the job for management-level employees.

We are currently developing an enterprise scorecard for leadership and management development measuring Level 1-4 impact data.

Historically, we tested ROI forecasting and an ROI study in our retail line. We strategically moved away from Level 5. We determined that the resource effort and dollar spend were not adding value and chose to measure what we can control: Levels 1-3 and, selectively, Level 4. Our strategy now focuses on determining with our business partners what “impact” data means and measuring against that.
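
“Level 5” here is the ROI level in the Phillips methodology this respondent cites above. The respondent does not show the exact calculation used in the retail pilot, but the conventional form of the formula is:

\[
\text{ROI (\%)} = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100
\]

For example, a program with $100,000 in fully loaded costs and $150,000 in monetized benefits would show an ROI of 50 percent.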

11. Manufacturing Company
Our company university has conducted one success case study that has been accepted by our senior leadership as valid, rigorous and supportive of further investment in learning. Our immediate plans call for two additional studies, which support our long-range evaluation strategy. Our success cases follow Rob Brinkerhoff’s methodology.