Hi,
We have a somewhat similar problem. We are presently working on a new system for coaching and evaluating referees in our local organisation. I have to say coaching our referees and improving their performance is of higher priority than generating a ranking. Still, we would like something like a grade for their overall performance and for how well they performed in certain categories, for example communication, fouls, violations (travel, 3 seconds, and so on ...).
In the past we had a standard form for the evaluators, where they could grade the referee in different categories. There was also a place to back up those grades with more detail. The problem was that referees don't get better if you just tell them they had a bad game without giving details. On the other hand, if evaluators write more detail (free text), it is almost impossible to rank the referees by the different feedback we get from different evaluators.

What do you do to ensure that your evaluators are on the same page (looking for the same things), and that one evaluator/coach doesn't say one thing only for another evaluator/coach to say the complete opposite a week later? How do you make sure you can compare reports from different evaluators? How can we train the evaluators/coaches to give proper ratings? This season the worst grade was b/c on an a-b-c-d scale, which means either we have very good referees or something's wrong with the evaluations! On top of that, we don't have enough evaluations to calculate a meaningful average for every referee; some of them get only one or two evaluations per season.
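One idea we are toying with for the sparse-average problem: shrink each referee's average toward the league-wide mean, so that one or two evaluations don't produce an extreme ranking on their own. Here is a minimal sketch; the grade-to-points mapping and the prior weight are just placeholders, not our actual scale.

```python
# Shrink each referee's average toward a league-wide prior so that
# referees with only one or two evaluations aren't ranked on noise alone.
# GRADE_POINTS and prior_weight are illustrative assumptions.

GRADE_POINTS = {"a": 4, "b": 3, "c": 2, "d": 1}

def shrunk_average(grades, prior_mean=2.5, prior_weight=3):
    """Bayesian-style shrinkage: act as if every referee starts with
    `prior_weight` phantom evaluations at the league mean `prior_mean`."""
    points = [GRADE_POINTS[g] for g in grades]
    return (prior_weight * prior_mean + sum(points)) / (prior_weight + len(points))

# A referee with a single 'a' stays close to the league mean,
# while consistent 'a' grades over many games pull the score up.
one_eval = shrunk_average(["a"])        # (3*2.5 + 4) / 4 = 2.875
many_evals = shrunk_average(["a"] * 10)
```

The nice property is that a referee with no evaluations simply sits at the league mean, and the score only moves away from it as evidence accumulates.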
Any input on this is much appreciated.
Kostja