Employee Ranking and Rating
So far, I’ve drawn a positive picture of the PE process, with only a few clouds. If approached seriously, the process described in the previous sections will give your employees a useful picture of their job performance and guidance for their professional development.
But there is a dark side to the PE process. Between the time you complete the written PE and the time you hold the PE discussion, the odds are that you’ll need to meet with your department management team to assign ratings and rankings. This is where the PE process usually dives headfirst into the weeds.
Companies give lip service to an idyllic view of the PE. But, because the real value of the PE process for the company is to sort employees into winners and losers, that is what managers are actually held accountable for, not the quality of the written PE or the quality of the discussion with the employee.
Sorting employees distorts the process. If you are an employee, you are unlikely to admit to any weakness that might push you down the list. If you are a manager, you need to communicate weaknesses to your employees, but you are also looking out for them as they get sorted in with employees from other groups. If you’re forthright about your team’s weaknesses and other managers aren’t, your team as a whole will suffer.
Before getting into a discussion of how best to manage this distortion, let’s look at the two common ways that companies sort employees, ranking and rating.
- Ranking lists employees in order of job performance from highest to lowest. For example, Freddy is ranked third in his team.
- Rating gives each employee one of a group of labels, again from highest to lowest. For example, “Exceeds Objectives,” “Meets All Objectives,” “Meets Most Objectives,” and the dreaded “Needs Improvement.”
It’s not unusual to combine these methods. After all, companies really want a ranking. It’s very convenient for lay-offs and allocating goodies. But for an employee a true ranking, “you’re number 44 out of 78, right between Betty and Veronica,” is at best worthless and at worst strongly de-motivating. Therefore, it’s common for companies to create a ranking, then allocate ratings based on some formula (for example, 10% in the highest rating, 30% in the next highest, etc.).
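To make the allocation arithmetic concrete, here’s a minimal sketch of turning a ranking into ratings with a forced distribution. The labels, percentages, and function name are illustrative assumptions, not any particular company’s system:

```python
def assign_ratings(ranked_employees, distribution):
    """Assign rating labels to an already rank-ordered list of employees.

    `distribution` maps each label (best first) to the fraction of the
    group it may contain; the fractions here are hypothetical.
    """
    n = len(ranked_employees)
    ratings = {}
    index = 0
    for label, fraction in distribution.items():
        count = round(n * fraction)
        for employee in ranked_employees[index:index + count]:
            ratings[employee] = label
        index += count
    # Rounding can leave a few people unassigned; in this sketch they
    # fall into the lowest category, which is itself a policy decision.
    lowest = list(distribution)[-1]
    for employee in ranked_employees[index:]:
        ratings[employee] = lowest
    return ratings

ranked = [f"Employee {i}" for i in range(1, 11)]  # best first
dist = {"Exceeds Objectives": 0.10, "Meets All Objectives": 0.30,
        "Meets Most Objectives": 0.40, "Needs Improvement": 0.20}
print(assign_ratings(ranked, dist))
```

Even this toy version shows where the friction comes from: with ten people, exactly one gets the top label and two land in “Needs Improvement,” regardless of how well the team actually performed.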
Some employees like the idea of a competitive workplace with relative rankings and ratings. They like to see how they’re doing compared to others and they work hard to improve their position. In my experience, however, this group is in the minority, and a good portion of that minority only gets a short-term boost in performance.
In practice, the ranking and rating process is good for the top-rated people, since they get the goodies, and the bottom-rated people, since they get a strong message that they need to change. But, the top-rated people are nearly always already strongly motivated and don’t need a ranking or rating to stay that way. And the bottom-rated people should be getting that strong message anyway, or you’re not doing your job as a manager.
For everyone else, and that’s the bulk of your team, a ranking or rating is at best a no-op and at worst a de-motivator. If they can get around whatever bucket they’ve been placed in, then they can deal with whatever specific suggestions you have for their professional development. If they can’t, then all the work you’ve put into their PEs is wasted.
Besides the de-motivational aspects of ranking and rating, there are other problems that you need to be aware of. Here are a few of the most important:
- Ranking and rating are easy to confuse. For example, some companies define a group of rating categories with descriptive terms (for example, “Exceeds Objectives,” “Meets All Objectives,” etc.). Then they force a percentage distribution across those categories (for example, 10% “Exceeds Objectives,” 30% “Meets All Objectives,” etc.). This confusion of the two concepts leads to situations where you have to explain why an employee was “rated” “Meets Most Objectives” when he or she clearly met all objectives.
- Rating categories are often fuzzy. If your categories are “Exceeds Objectives,” “Meets All Objectives,” “Meets Most Objectives,” and “Needs Improvement,” where do you put an employee who exceeds expectations in two areas, meets expectations in most others, and needs improvement in a couple of areas?
- Ranking is difficult, if not impossible, across disciplines. For example, how do you rank order a technical writer and a firmware programmer working on two different products on two different teams?
- Forcing a distribution across a set of ratings only works when the group is large enough to eliminate the distortions that arise in smaller groups. Depending on who you listen to, you may need 100 or more people to get a reasonably fair distribution. It’s unlikely that any manager who has a department of 100+ people will know all of them well enough to arbitrate a fair distribution. Therefore, the distribution gets pushed down to smaller groups, where it’s almost surely unfair.
- There’s no completely fair way to merge ratings from sub-teams into a larger group. The safest, though still not perfect, way to do this is to force each sub-team to have the same distribution of ratings, then fight out the borderline cases. But, then you’re back to the small group problem.
- The same thing happens if you try to merge rankings, though as we’ll see in the example below, I think it can be somewhat fairer.
Overall, ranking and rating work against employee development. But, in many companies you have no choice but to work with them. In the next section, I’ll take a look at formulating rankings and ratings, using one common method as an example.
While the methods used to formulate rankings and ratings vary widely, there are common elements to most of them. I’ll focus on those common elements and use a typical system as an example. Your company will surely use a different method, but the common elements should give you some ideas on how to work in nearly any context.
Though there are innumerable variations, most ranking and rating systems I’ve seen have a set of common elements:
- A rating system that puts everyone into one of several categories. Usually the categories are non-descriptive, but ordered. For example, 1, 2, 3, 4. Rankings are rarely communicated to employees, so while the system may use rankings, the result will be ratings.
- A recommended or forced distribution across the rating categories.
- A process for assigning ratings to employees. Usually the process has managers assign a tentative ranking or rating to each of their employees. Then the department management team meets to officially assign ratings, as well as ensure that the distribution is followed.
- Reviews by HR and higher level management to ensure that the rules have been followed.
- A formula for relating ratings to rewards—salary, bonuses, stock options, etc. While you probably have no input into this formula, you should understand it so that you understand the consequences of placing an employee into one rating category or another.
- A process for communicating the results to employees.
The core of the ranking and rating process is typically a “battle royale” fought out in a management team meeting. Most managers dread this meeting because this is where the fate of their teams is determined. If you succeed, your team will get more goodies and presumably be happier; if you fail, you’ll have an unhappy crew.
I’ve heard gruesome stories of shouting matches, betrayal and back-stabbing, and I’m sure that’s happened to some, but my experience with this process has been better than you might expect. Usually, individual managers understand that they need to work with their peer managers every day, that not everyone on their team can be rated in the top category, and that not everyone on the other teams is an idiot. Just as important, I’ve found that the manager in charge usually steps up to the responsibility of averting fistfights and hair-pulling.
I’ll split the discussion of ranking and rating meetings into two parts: general considerations, those things that I think are common to most companies, and a specific example, which presents one way to run the meeting. I’ll present both parts from the perspective of the person running the meeting, presumably the highest ranking manager in the room. First the general considerations:
- Make sure you thoroughly understand your company’s process and any instructions from the company and your manager.
- Be sure you understand the deliverables.
- Talk with your manager and your HR representative to make sure you really understand the process and deliverables.
- Meet with your team to discuss these requirements and determine a process for your meeting. While you should come in with a good idea of how you want to run the meeting, be open to suggestions. When the management team buys into the process, they’ll be more likely to cooperate when things get sticky.
- Make it clear that you will be the final arbiter of disputes. This both asserts your authority and helps mitigate conflict between managers. As a general rule, I don’t like a manager acting as a “judge” in disagreements between subordinates, but in this context you have no choice unless you want to spend the rest of your life watching managers argue.
- Stress teamwork. No individual’s rating is important enough to risk weakening your team. Ideally, you want to come out of the meeting with a stronger team than you started with.
- Have the meeting off-site or in a private conference room. If you can’t get the entire management team in one room, hold the meeting by phone. If you try to bring in one or two managers by phone while the rest of the team is on-site, the managers who have to phone in will be at a disadvantage.
- Plan for more time than you think you’ll need and don’t plan any meeting for more than a couple of hours. Otherwise you’ll tire near the end, get lazy or sloppy, and make bad decisions.
- Plan a follow-up meeting before you finalize the ratings to give everyone a chance to sleep on the results and raise any last minute concerns.
Now let’s look at the process for a typical ranking and rating meeting. The meeting objective will be to produce a rating for each employee with a forced distribution across a set of non-descriptive rating categories.
Before the meeting, have each manager:
- Rank order his or her employees from 1 to N; no ties allowed.
- Prepare to briefly discuss each employee’s accomplishments, strengths, and weaknesses.
During the meeting:
- Review the process and ground-rules.
- Review each employee’s performance, asking his or her manager to briefly summarize the written PE and give an overall assessment. This review can be longer or shorter depending on how well the rest of the management team knows each person.
- Merge the rank ordered lists using the following steps:
1. Take the top person from each manager’s ranking list and select from among them the top person in the department.
2. Move that person from his or her manager’s list to the merged list, then move the next highest ranked person on that manager’s list to the top position.
3. Repeat from step 1 until every list is empty.
Note that this is not a mechanical “traffic” merge where you pick the top person from each list in turn, then the second person from each list, etc. Discuss the remaining people on every iteration.
- Review the completed rank order for sanity.
- Assign ratings based on any distribution requirements.
- Review the employees who are on the borderline; you may want to stretch the distribution requirements to bump up or down particular people.
- Have a beer; this is hard work.
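The mechanics of the merge step above can be sketched in a few lines of code. This is only the bookkeeping; the `choose` callable stands in for the part that matters, the management team’s discussion of the current front-runners. All names and the scoring shortcut in the example are hypothetical:

```python
def merge_rankings(team_lists, choose):
    """Merge per-manager rankings into one department ranking.

    `team_lists`: rank-ordered lists (best first), one per manager.
    `choose`: picks the best candidate from the current front-runners;
    it stands in for the management-team discussion at each iteration.
    """
    # Work on copies so the managers' original lists survive.
    lists = [list(t) for t in team_lists]
    merged = []
    while any(lists):
        # Candidates are the top remaining person on each list.
        candidates = [t[0] for t in lists if t]
        pick = choose(candidates)
        merged.append(pick)
        for t in lists:
            if t and t[0] == pick:
                t.pop(0)
                break
    return merged

# Hypothetical example: two managers' lists, with a numeric score
# standing in for the group's judgment.
scores = {"Ann": 9, "Bob": 8, "Cam": 7, "Dee": 6}
merged = merge_rankings([["Ann", "Bob"], ["Cam", "Dee"]],
                        lambda cands: max(cands, key=scores.get))
print(merged)  # → ['Ann', 'Bob', 'Cam', 'Dee']
```

Note that the result is not a round-robin: Bob is taken second even though Ann was just taken from the same list, because each iteration compares all of the current front-runners rather than cycling through the lists in turn.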
This method works well when you must report rankings to the company, but it also works when you only need to report ratings. If the company requires either a ranking or a rating with a forced distribution, this is the fairest method I know.
If the company’s requirements are looser, for example a rating without a forced distribution, then you don’t need to go this deep. And if you don’t need to report rankings, you can cut some corners in the merge when the exact ranking of two people has no impact on a rating result. Just use common sense; an argument over the relative ranking of two people who will end up in the same rating category either way is a waste of time if you don’t need to report the ranking.
While it can be stressful, ranking does force each manager to think carefully about the relative value of each employee to the company. Otherwise, it’s too easy to get sloppy, and then it gets harder to merge the teams against a rating distribution. If each manager has a clear view of his or her team, things go more smoothly.
When I described my views on rating and ranking to a former colleague, he took me to task for not recognizing that rating and ranking are useful as a stimulus to improve performance. In his view, knowing where you’re ranked can cause you to improve your performance, both to get ahead of others and to better position yourself for the goodies.
He has a point, and I don’t deny that there are people who motivate themselves to improve their ranking or rating. But, I think the majority of people find their long term motivation in other places.
As a manager, I think the best time to focus on rating and ranking is when it reinforces a point that isn’t getting through by other means. For example, if you have an employee who is not meeting a reasonable standard of performance, pointing out that his or her job performance is at the bottom of the department can be a powerful stimulus for change. Conversely, if you have an employee who excels, but is over-critical of his or her performance, revealing a high ranking may give him or her a boost.
Like any other tool, ranking and rating can be used or abused. But, recognize that in a lot of ways they are the chain saw of management tools. They’re great for some tasks, but if you don’t know what you’re doing and don’t use them with great care, someone’s going to get hurt, probably you.
I’ll use “department management team” to refer to you, your peer managers, and your common manager at the next highest level.