360 Support Critical to Effectiveness

Without support, 360 feedback can actually do more harm than good.

November 18, 2009
Susan studied her 360 feedback report in disbelief. She had thought she was a high performer. Many of the ratings were good, but many were not. Who did these people think they were anyway? And what were they thinking? Susan started looking for another job and soon quit.

Susan’s organization was trying to provide helpful developmental feedback to its employees but went about it the wrong way. Because it did not give Susan and other employees an environment conducive to receiving the feedback, or the support systems necessary to make good use of the data, the organization did more harm than good.

But across the street, Art got his 360 report during a self-development class. This class taught that nurturing your strengths was important. The teacher explained how raters had been advised to treat the tool as an improvement mechanism, not a “gotcha” opportunity. Art was trained to view feedback as just one step in a process. There would be high and low ratings, items that were critical to success, and some less so. He would pick several to focus on.

He knew he’d construct an action plan and then work it, like any other plan. He’d review it with his manager. Everyone else in the class would do the same thing. And he had a coach. While it would be Art's process to own and manage, there would be someone he could bounce ideas off.

Feedback Is Data

Multi-rater feedback is only data. Like any other kind of data, it requires a process for acting on it and cannot be an effective stand-alone development event.

Why does 360 feedback need steps around it to improve performance? First, people are not as trained in self-improvement as they are in their specialty areas, and they need extra guidance. Second, most people have too much to do at work. Priorities come and go, and self-development is often one of the priorities that gets pushed aside. In a 2001 issue of Educational Leadership, author Karen Dyer said: “The 360-degree feedback process can be a powerful tool, but only if it is used wisely and judiciously.”

Cumulative Benefits

If one person improves, it makes a positive dent. If 50 people improve, departmental results change. More deadlines are met and errors go down. Positive synergies happen.

The following year, those same people develop more new skills. And on top of that, fewer people leave the organization because they appreciate the training. The organization isn’t constantly starting from square one. The organization can repeat the feedback process to reinforce behavioral change and compound improvement benefits.

There are several key supplemental steps that can facilitate positive outcomes from 360 feedback.

Ensure That Data Is Meaningful

Choose raters who will be objective and truthful and who know the person’s performance. If you want to know the deductible on your dental insurance, you ask your benefits specialist, not the IT specialist. If you want to know how helpful your advice is, ask the recipients.

Make sure raters know who will see the data and what will be done with it. If it is truly to be used for development purposes, only the person being reviewed should see it. No bosses, no impact on formal reviews. It is important that raters know that in order to facilitate honest, helpful feedback. Most people will be inappropriately nice or mean if the boss will see it, rendering it developmentally useless. As Chris Musselwhite said in 2007, “Once you’ve determined the purpose and intention for the use of the 360, it should be clearly communicated and understood by all.”

Provide a 360 tool that is short enough to be completed by raters without feeling rushed. It is better to get 40 good pieces of data than 80 bad ones. The items should be specific and behavioral. Action verbs are best. Trait words are too vague. For example, “proactively shares meaningful project updates with teammates” is better than “uses teamwork skills.”

In a 2005 issue of Consulting Psychology Journal: Practice and Research, Frederick Morgeson and his co-authors summarized 360-degree research. The article pointed out that behavioral items tell people what they should be doing, rather than not doing. The example in the prior paragraph illustrates this. If someone were rated low on “teamwork” skills, would he or she know what to change to get better?

Provide Guidance in Understanding the Information

First, there is no conclusive science on how good is good and how bad is bad. People should identify items or competencies where the numbers are notably higher or lower than the others. Second, follow-up action should focus on both strengths and weaknesses; the conventional wisdom of focusing only on weaknesses is no longer widely accepted. Third, raters are supposed to rate with positive intent. Low ratings are meant to be helpful, not to shoot zingers, and the individual receiving the feedback should be reminded to interpret them as such.

Fourth, managers, peers, direct reports and the individual being reviewed often rate the same person on the same area of performance in different ways. Both performance and perception of performance are complex. For example, consider a manager who varies his or her decisiveness versus consensus orientation with the situation. It’s probable that his or her manager, peers and direct reports would evaluate this differently. Training should prepare people for these differences.

Encourage People to Get More Information

This type of feedback rarely provides obvious conclusions. Take the aforementioned “proactively shares meaningful project updates with teammates.” Suppose a person forwards entire project documents to others without summarizing them or providing conclusions. He might perceive himself to be a high performer and be baffled when the feedback comes in low. Or his direct reports might rate him high because they want all that detail, but his boss, who doesn’t want it, may rate him low.

In this situation, the person could share his perceptions with someone he trusts and seek another interpretation. One recommendation from authors Tosti and Addison in a 2009 issue of Performance Improvement is that feedback recipients can find a feedback partner or form a small support group, using each other as sounding boards for interpreting results and then supporting each other through the development process.

Or he or she could ask the supervisor, whose ratings were probably not anonymous. For instance, “I gave myself a 5 on this. You gave me a 2. Can you give me an example so I can better understand your rationale?” It can be uncomfortable, but as long as everyone assumes positive intent, the conversation can be invaluable.

Employees Should Create Useful Development Plans

If no action-oriented development plan is designed and committed to, the probability of meaningful improvement is low. One of the beauties of creating a plan is that it requires the person to identify a few workable areas and focus on them. It also forces the person to identify actions, not just aspirations.

The plan can focus on improving strengths and/or weaknesses. And it should ideally consider two other things. First, where does the employee have passion? Second, what does the organization need the most? The plan should be important and satisfying and should benefit the employee and company.

The development plan is an action plan and should be treated as such, with specific, measurable goals, interim steps and dates. While the employee should drive it, the supervisor should have some review and input, to provide valuable perspective. And the employee will need the supervisor to support the plan and ensure allotment of time to it.

Dyer makes the above point neatly: “The 360-degree feedback activity is not a stand-alone event. An outcome of any 360-degree feedback process is developing a plan of action. This should be not just an exercise in goal setting, but rather a blueprint for achieving and sustaining behavioral change.”

Provide Coaches

A coach serves several purposes. The first is to help people manage their emotions about the feedback. The second is to help them understand it. The third is to provide feedback on the action plan itself: Are the optimum improvement areas chosen? Are the actions too vague?

In an article by Evelyn Rogers in a 2002 issue of Human Resource Planning, one executive said this about his experience with an expert coach: “I read and reread my feedback report. What our outside facilitator found in that report was astounding — connections between practices and themes about my management behavior that were stunningly accurate.”

A study by James Smither and others in a 2003 issue of Personnel Psychology examined the effectiveness of executive coaching on 360 feedback over time. Managers who worked with coaches were more likely to set specific rather than vague goals and to solicit ideas for improvement from their supervisors.

The effectiveness of 360 feedback depends on a number of factors, including the content of the feedback, what is done with it and whether people have coaches to help them process it. The feedback itself should be only one element of a bigger process. If 360 feedback does not lead to important developmental steps and results, one might conclude that 360s don’t work, when a better conclusion is that the right steps were not taken to capitalize on it.