Good question and I'll be interested to see the responses. There is no perfect rating system, but we're always looking to improve ours. Because our association has never allowed coach input, it is hard for me to imagine how that could work. Coaches know as much about reffing as we do about coaching (isn't it funny how refs get mad at coaches who badmouth refs, yet when a group of refs gets together, especially refs who are parents of players, they often badmouth coaches?).
Regardless, we have always had strictly peer ratings where officials only rate each other. Only varsity and JV games are rated. Varsity officials rate the JV officials as well as their own partners; JV officials only rate their partners. The rating scale is 1 to 10, with descriptors next to each number that are supposed to help you place each ref into a category. Rating sheets are turned in at each general meeting, and the numbers are compiled at the end of the season (sum of your scores divided by the number of games you were rated on). There are plenty of negatives to our system, including the fact that it perpetuates a status quo: varsity refs seem to apply a different standard to a JV ref than to a varsity ref. In fact, we have several refs who refuse to do JV games anymore, even in an emergency, because they feel it will hurt their rating. Another negative is that one person can have a huge influence on your rating. If you get the same partner several times (we try to avoid scheduling that way, but sometimes it is inevitable), that one partner may account for up to 30% of your rating.
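To make that influence problem concrete, here's a tiny sketch of the arithmetic (the numbers are made up for illustration, not from our actual sheets):

```python
# Our old rating: sum of scores divided by number of rated games.
ratings = [8, 7, 9, 8, 7, 8, 6, 7, 8, 9]  # hypothetical 1-10 scores, one per rated game
season_rating = sum(ratings) / len(ratings)
print(season_rating)  # 7.7

# If the same partner happened to supply 3 of those 10 scores, that one
# person accounts for 30% of the final number -- the influence problem above.
```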
This year, we (the board of directors) are proposing a new system to our general members. At the end of the season, you place each person you worked with (JV and varsity games only, again) into a category of A through F. "A" refs are partners you would feel completely comfortable working a state championship game with; "F" is entry level. In addition, we are spending the money to hire independent evaluators who will observe each ref at least 4 times during the season. The observers consist of former refs and a former coach (who has been out of coaching long enough, we hope, to be unbiased). At the end of the season, they will all get together and place all the refs into the A through F categories as well. Their input will count for 50% of each ref's rating, and the peer ratings will count for the other 50%. It will be interesting to see how it works.
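For the curious, here's a rough sketch of how that 50/50 blend could be computed. The letter-to-number mapping is my own assumption for illustration; the board hasn't nailed down the exact math yet:

```python
# Hedged sketch of the proposed 50/50 blend. The A-F to 4-0 mapping below
# is an assumption, not something the board has actually specified.
GRADE = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def season_score(peer_grades, evaluator_grade):
    # Half the score comes from the average of peer categories,
    # half from the category the independent evaluators settle on.
    peer_avg = sum(GRADE[g] for g in peer_grades) / len(peer_grades)
    return 0.5 * peer_avg + 0.5 * GRADE[evaluator_grade]

# Example: partners mostly say "B", evaluators settle on "A".
print(season_score(["B", "B", "A", "C", "B"], "A"))  # 3.5
```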
IMHO, our old system actually worked fairly well. Our assignor (who is an ex-ref and does a great job of staying unbiased) figures it ranks 95% of our refs within a couple spots of where they ought to be. No matter how good the system, you'll always have people who think they got screwed every year. One thing I have learned as association president is that you need to include comments rather than just a number. When a ref wants to know why they didn't advance to where they think they belong, I at least owe them some comments they can use to improve.
There will ALWAYS be whining. We have a great ref who was rated #35 last year; he was one of those 5% who got screwed, because his availability only allowed him to do mostly JV games (he does college ball too), so he was a victim of that JV bias. Anyway, he moved all the way up to #10 this year, which IMHO is where he deserves to be. Nobody ever said a word to me when he was rated #35, yet you wouldn't believe how many refs complained to me that he moved up too fast this year. Heavy sigh.
Z