I am presently on the ratings committee for our local high school IAABO board. After using the same rating and ranking system for over 25 years, we are considering some major changes for next season. At this point we are asking our own board members for suggestions on how to improve our system, or for a brand new system altogether. As a member of this forum, I've decided to ask you, colleagues who represent a wider geographical area, about rating and ranking systems that you use and believe to be successful.
First, a summary of our present rating system, which is used to assign a ranking that determines the level of games you work and how many games you get for the season. The rating is based on peer rating (80%), attendance at meetings (5%), the IAABO refresher exam (open book, group effort, 5%), and availability to the commissioner (10%).

The peer rating is the portion we are looking at most closely. Presently, each board member rates every official that he or she has observed in a board-assigned game on a scale of one to ten. Varsity officials are required to get to the site early enough to observe at least half of the junior varsity game; junior varsity officials are required to stay late enough to observe at least half of the varsity game. Most games are two-man games, so each of the four officials should give and receive three ratings for that game. Ratings are kept secret and are sent in to the board at the end of the season. During the summer, each official receives a copy of his or her ratings (numbers only, no names of raters, no reasons for the ratings), which are used to generate a ranking for the next season's assignments.

How does your local board rate and rank? Do you use a system of peer rating, and if so, does it differ from ours? Do you have any specific suggestions (i.e., rating guidelines) to help us improve our system? I will report the results of this forum thread to the chair and assistant chair of our ratings committee. Thanks for your help.
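For anyone comparing systems, the weighting above can be sketched in a few lines of Python. This is only an illustration: the post gives the weights (80/5/5/10) and the 1-10 peer scale, but the idea of normalizing the other three components to a 0-1 scale, and all of the names and sample numbers, are my assumptions.

```python
def composite_score(peer_ratings, attendance, exam, availability):
    """Weighted composite: peer 80%, attendance 5%, exam 5%, availability 10%.

    peer_ratings: list of 1-10 peer ratings; the other three components are
    assumed here to already be fractions between 0 and 1.
    """
    peer_avg = sum(peer_ratings) / len(peer_ratings)   # average of 1-10 ratings
    return (0.80 * (peer_avg / 10)   # peer rating, normalized to 0-1
            + 0.05 * attendance      # meeting attendance
            + 0.05 * exam            # refresher exam score
            + 0.10 * availability)   # availability to the commissioner

# Rank officials highest score first (made-up sample data).
officials = {
    "Official A": composite_score([8, 9, 7, 8], 1.0, 0.95, 0.9),
    "Official B": composite_score([6, 7, 7], 0.8, 1.0, 1.0),
}
ranking = sorted(officials, key=officials.get, reverse=True)
print(ranking)
```

The point of the sketch is just that, with peer rating at 80%, small differences in the peer average swamp everything else in the final ranking.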
__________________
"For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life." (John 3:16) “I was in prison and you came to visit me.” (Matthew 25:36)
Quote:
The state requires every association to rank its top 15 officials for playoff consideration. We do have a rating system, but this one is for the state, not for associations to use. How the lists are compiled varies greatly between organizations. For the most part the executive boards come up with their own criteria, and those rankings are largely based on past playoff experience. There is one list for boys and another list for girls. The only organization I belong to with very specific criteria for getting on the list is a football organization. Otherwise, the criteria, the order, and who should and should not be on the list are up to the board to decide. That is why we elect officers to do those kinds of things in the first place. Peace
Bill, I'm Board 31, just north of you in Springfield. We don't have a ratings system anymore, so I can't really give you any input on that. The college ratings (for ECAC) give the coaches some input, but also include the open-book test, peer ratings, attendance at the rules clinic, and the assignor's rating.
My question is: why are JV officials rating varsity officials? I don't think I'd be real comfortable with a 2nd-year official rating me.
__________________
Any NCAA rules and interpretations in this post are relevant for men's games only!
Quote:
[Edited by All_Heart on Mar 20th, 2006 at 10:31 AM]
Hi,
We have a somewhat similar problem. We are presently working on a new system for coaching and evaluating referees in our local organisation. I have to say that coaching our referees and improving their performance is a higher priority than generating a ranking. Still, we would like to have something like a grade for their overall performance, and for how well they performed in certain categories, for example communication, fouls, and violations (travel, 3 seconds, and so on).

In the past we had a standard form for the evaluators, on which they could grade the ref in different categories. There was also a place to back up those grades with more details. The problem was, referees don't get better if you just tell them that they had a bad game without giving more details. On the other hand, if evaluators give more details (text), it is almost impossible to rank the referees by the different feedback we get from different evaluators.

What do you do to ensure that your evaluators are on the same page (looking for the same things), and that one evaluator/coach doesn't say one thing while a week later another evaluator/coach says something completely opposite? How do you make sure you can compare reports from different evaluators? How can we train the evaluators/coaches to give proper ratings? This season the worst grade was b/c on an a-b-c-d scale, which means either we have very good referees or something's wrong with the evaluations! We also don't have enough evaluations to calculate a meaningful average for every referee; some of them get only one or two evaluations per season.

Any input on this is much appreciated.

Kostja
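One way to make grades from different evaluators comparable, sketched below in Python, is to convert the a-b-c-d grades to numbers and then standardize each evaluator's grades against that evaluator's own average (a z-score), so a lenient and a harsh evaluator end up on the same scale. To be clear, the letter-to-number mapping, the standardization step, and all the sample data are my own assumptions, not anything Kostja's organisation actually uses; with only one or two evaluations per referee the averages will still be noisy.

```python
# Sketch: map letter grades to numbers, then z-score each grade against the
# issuing evaluator's own mean, so evaluators with different severity become
# comparable. Mapping and data are illustrative assumptions.
from statistics import mean, pstdev

GRADE_VALUE = {"a": 4, "b": 3, "c": 2, "d": 1}

# (evaluator, referee, grade) -- made-up sample reports
reports = [
    ("Eva", "Ref1", "b"), ("Eva", "Ref2", "a"), ("Eva", "Ref3", "b"),
    ("Max", "Ref1", "c"), ("Max", "Ref2", "b"), ("Max", "Ref3", "d"),
]

def standardized(reports):
    """Return {referee: average of z-scored grades across evaluators}."""
    by_evaluator = {}
    for ev, ref, grade in reports:
        by_evaluator.setdefault(ev, []).append(GRADE_VALUE[grade])
    # Per-evaluator mean and spread (fall back to 1.0 if all grades identical).
    stats = {ev: (mean(vals), pstdev(vals) or 1.0)
             for ev, vals in by_evaluator.items()}
    scores = {}
    for ev, ref, grade in reports:
        m, s = stats[ev]
        scores.setdefault(ref, []).append((GRADE_VALUE[grade] - m) / s)
    return {ref: mean(zs) for ref, zs in scores.items()}

print(standardized(reports))
```

In the sample data, Eva grades everyone high and Max grades everyone low, yet after standardization both agree that Ref2 is above their personal average and Ref3 below it, which is exactly the kind of cross-evaluator comparison a raw average of letter grades hides.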
Quote:
So, although the ratings are given by JV officials, I am guessing that for the most part they are higher than what the varsity officials might give each other, because missed assignments and the like are much more noticeable to those who have called more games.
__________________
Never hit a piñata if you see hornets flying out of it.
Quote:
Z
Quote:
Quote:
__________________
Any NCAA rules and interpretations in this post are relevant for men's games only!
Quote:
Z