<p>As I reflect on the 50 years that have passed since the publication of my four-level model for evaluating training programs, I am amazed at what has happened.</p>
<p>In 1954, after completing my dissertation at the University of Wisconsin on “Evaluating a Human Relations Training Program for Supervisors,” I wasn’t thinking much about evaluation beyond the programs I was teaching there. But five short years later, Bob Craig, editor of the Journal of the American Society for Training & Development (ASTD), called and asked me to write an article on the topic.</p>
<p>The four levels immediately came to mind, and I asked him if I could write four articles instead. He agreed. The result was a simple yet practical series on reaction, learning, behavior and results. Before I knew it, the Kirkpatrick Model was born.</p>
<p>In 1993, I published my first book about the model, titled Transferring Learning to Behavior: Using the Four Levels to Improve Performance. To date, more than 50,000 copies have been sold, and the book has been translated into four languages.</p>
<p>I recently asked colleagues: “Fifty years after they were introduced, are the Kirkpatrick Four Levels out of date?”</p>
<p>The answer was a nearly unanimous “no,” but many people commented that problems usually arise in implementation, particularly at Levels 3 (Behavior) and 4 (Results).</p>
<p>I believe the best and shortest answer to those problems is an understanding of the concept of a “chain of evidence.” The basic premise is that an end result is the product of a series of things done over time, not of any single factor.</p>
<p>I recall a recent phone call from a trainer at Microsoft who asked: “We have evaluated at Level 1 and Level 2. Is it OK to go directly to Level 4 and measure that?”</p>
<p>No!
It is necessary to evaluate Level 3 to determine what changes in behavior occurred after the program, which in turn allows you to relate any change in results to the training program.</p>
<p>Level 1 measures the reaction to the program by those attending. I call this a measure of “customer satisfaction.” At the University of Wisconsin Management Institute, it was essential that those who paid to attend our programs went back to their organizations with positive feedback. To make sure we were on the right track, we used reaction sheets to gather attendee feedback.</p>
<p>Level 2 of the Kirkpatrick Model addresses the practical learning that takes place in the program, requiring that learning executives ask themselves: “Did we achieve our objectives of increasing knowledge, improving skills and/or changing attitudes?”</p>
<p>When they go back to their jobs, attendees had better be able to tell their managers that the program was practical and describe what they learned. Managers can help facilitate this in advance by:</p>
<ol><li>Going over the learning program with attendees before sending them.</li><li>Telling them their jobs will be covered while they are gone.</li><li>Telling them that when they return, they will be asked what they learned and how it can be applied in their departments.</li></ol>
<p>These first two levels were probably “met” in most organizations in the following way: The learning organization would present to top management — aka “the jury” — the number of programs presented, the total attendance at those programs and the positive comments about them. It might also take pre-program and post-program measurements to show that learning had taken place. The sad part is, the jury would be satisfied.</p>
<p>In one of my 1959 articles, I stated that “the day of reckoning” would eventually come, when the jury would be looking for evidence that learning programs resulted in changed behavior and positive impact. That day has arrived.
The chain of evidence now extends into Levels 3 and 4. The problem is how to produce it to the satisfaction of the jury.</p>
<p>We urge learning executives to use a new approach to provide this evidence. We call it ROE, an acronym for “return on expectations.” ROE starts with the learning leader discussing possible programs with members of the jury and asking them what success would look like.</p>
<p>With some discussion and negotiation, the learning leader then develops the program in accordance with these objectives, also ensuring that the lessons learned can and will be applied in everyday practice.</p>
<p>To do this, learning executives must get managers on board so they will not only support the program and hold the learner accountable, but also assist in the evaluation of Levels 3 and 4 — thereby completing the chain of evidence.</p>