I imagine every association has officials concerned about their advancement.
What are some of your associations doing to evaluate basketball official performance? What kind of system do you use? Here in podunk Idaho Falls, we let the coaches do it all. The commissioner/assigner says he has a 50 percent input, but I have never seen it. Additionally, the commissioner allows us to perform an end-of-season evaluation of each other. I'm not sure that either the commissioner's or the peer input has any value; we never see the results. The commissioner was bragging last year, about 3-4 weeks before the end of the season, that he hadn't "watched a single game yet this season." So what kind of input can I expect from him? I assume that only the coaches are really giving input, and then mostly in favor of their drinking buddies, college roommates, boyfriends/girlfriends, etc.

We do not have any kind of established evaluation program. I would like to implement something this season so my association can recognize good and bad performance. Does anyone have suggestions, evaluation sheets, or a preferred methodology?
__________________
"There are no superstar calls. We don't root for certain teams. We don't cheat. But sometimes we just miss calls." - Joe Crawford
|||
Good question, and I'll be interested to see the responses. There is no perfect rating system, but we're always looking to improve ours. Because our association has never allowed coach input, it is hard for me to imagine how that could work. Coaches know as much about reffing as we do about coaching. (Isn't it funny how refs get mad at coaches who badmouth refs, yet when a group of refs gets together, especially refs who are parents of players, they often badmouth coaches?)

Regardless, we have always had strictly peer ratings, where officials only rate each other. Only varsity and JV games are rated. Varsity officials rate the JV officials as well as their partners; JV officials only rate their partners. The rating scale is 1 to 10, with descriptors by each number that are supposed to help you place each ref into a category. Rating sheets are turned in at each general meeting, and the numbers are compiled at the end of the season (sum of scores divided by the number of games you were rated on).

There are many negatives to our system, including the fact that it perpetuates a "status quo": varsity refs seem to apply a different standard to a JV ref than to a varsity ref. In fact, we have several refs who refuse to do JV games anymore, even in an emergency, because they feel it will hurt their rating. Another negative is that one person can have a huge influence on your rating. If you get the same partner several times (we try to avoid scheduling that way, but sometimes it is inevitable), that one partner may carry up to 30% of your rating.

This year, we (the board of directors) are proposing a new system to our general membership. At the end of the season, you place each person you worked with (JV and varsity games only, again) into a category of A through F. "A" refs are partners you would feel completely comfortable working a state championship game with; "F" is entry level. In addition, we are spending the money to hire independent evaluators who will observe each ref at least 4 times during the season. The observers consist of former refs and a former coach (who has been out of coaching long enough to, hopefully, be unbiased). At the end of the season, they will all get together and place all the refs into the A through F categories as well. Their input will count for 50% of each ref's rating, and the peer ratings will count for the other 50%. It will be interesting to see how it works.

IMHO, our old system actually worked fairly well. Our assignor (who is an ex-ref and does a great job of being unbiased) figures that it ranks 95% of our refs within a couple of spots of where they ought to be. No matter how good the system, you'll always have people who think they get screwed every year. One thing I have learned as association president is that you need to include comments rather than just a number. When I have a ref who wants to know why he/she didn't advance to where they think they belong, I at least owe them some comments they can use to improve.

There will ALWAYS be whining. We have a great ref who was rated at #35 last year; he was one of those 5% who got screwed, and the reason was that his availability only allowed him to do mostly JV games (he does college ball too), so he was a victim of that JV bias. Anyway, he moved all the way to #10 this year, which IMHO is where he deserves to be. Nobody ever said a word to me when he was rated #35, yet you wouldn't believe how many refs complained to me that he moved up too fast this year. Heavy sigh.

Z
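For anyone wanting to play with the numbers, here is a minimal sketch of the 50/50 weighting described above. The A-to-F grade mapping onto the 1-10 peer scale is an assumption for illustration; the post doesn't specify how the two scales would be reconciled.

```python
# Hypothetical mapping of observer grades onto the existing 1-10 peer scale.
GRADE_SCALE = {"A": 10, "B": 8, "C": 6, "D": 4, "F": 2}

def combined_rating(peer_scores, observer_grade):
    """Average the per-game peer scores, convert the observer grade,
    and weight the two components 50/50 as the proposal describes."""
    peer_avg = sum(peer_scores) / len(peer_scores)
    observer_score = GRADE_SCALE[observer_grade]
    return 0.5 * peer_avg + 0.5 * observer_score

# With only ten rated games, three scores from the same partner carry
# 30% of the peer half, which is the small-sample problem noted above.
print(combined_rating([7, 8, 6, 7, 9, 8, 7, 6, 8, 7], "B"))
```

The averaging also makes it easy to see why the old system was sensitive to one repeated partner: nothing caps any single rater's share of the sample.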
|||
I read a survey of employees once. I think the audience was engineers, but the results are probably universal. After the results were gathered, it was found that 80% of the respondents felt they were "above average" in their abilities and contribution to the company. Since only about half of any group can actually be above average, that says at least 30% of the employees were wrong in their self-assessment, perhaps more, since there were probably some who underrated themselves.

So when it comes to officiating, there will always be some officials who feel they are not getting the games they deserve when in fact they are right where they belong. Their comparison of others to themselves is not from a neutral perspective: they will consider the faults of others to be of great magnitude while minimizing their own. Me, I'm probably right where I belong. I sometimes fall into the group I just mentioned when I see officials above me making errors, but I try to temper those judgments with reality and realize that I have to be better than them to pass them, not just their equal. I certainly recognize that there are MANY officials (perhaps several dozen) in our association who are clearly better than me.
|||
At camp this summer, Mickey Crowley told us that the only thing worse than not having a rating system is having a rating system.
Refs at all levels rip each other in peer evaluations. If you can find a way to negate that basic fact (other than giving coaches 100% weight), you'll be way ahead of the game. Good luck. Chuck
__________________
Any NCAA rules and interpretations in this post are relevant for men's games only!
|||
Quote:
__________________
Get it right! 1999 (2x), 2006, 2008, 2010, 2012, 2014, 2016, 2019
|||
[QUOTE]Originally posted by Camron Rust
[B]I read a survey of employees once. I think the audience was engineers, but the results are probably universal. After the results were gathered, it was found that 80% of the respondents felt they were "above average" in their abilities and contribution to the company. This says that at least 30% of the employees were wrong in their self-assessment, perhaps more, since there were probably some who underrated themselves.[/B][/QUOTE]

I've never met an official who underrated himself.

Around here, the various leagues hire an assignor who supervises and assigns the officials. Some of the leagues allow the assignors to use veteran officials to scout and evaluate officials. Some of the leagues have the coaches rate the officials. Some of the assignors scout officials at summer camps. Some ask officials they know and trust for opinions regarding other officials.

For state tournament assignment purposes, coaches rate the officials on a 1 to 5 scale. Every year, the officials receive a summary of their ratings. The state association committee uses these ratings to assist in making assignments and as a floor for tournament eligibility.

Any evaluation process is subjective. All you can do is get as much input as possible and continually update your ratings, because some officials improve and others go downhill.
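The state-tournament step above is easy to sketch. Note the 3.5 cutoff below is an assumed number for illustration; the post says only that a floor exists, not where it sits.

```python
# Assumed eligibility cutoff; the actual floor is not stated in the post.
ELIGIBILITY_FLOOR = 3.5

def tournament_eligible(coach_ratings, floor=ELIGIBILITY_FLOOR):
    """Summarize one official's 1-5 coach ratings and check the floor.

    Returns (average, eligible) so the committee-style summary and the
    eligibility decision stay visible separately."""
    avg = sum(coach_ratings) / len(coach_ratings)
    return avg, avg >= floor

print(tournament_eligible([4, 5, 3, 4]))  # → (4.0, True)
```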
|||
Being fairly new and not having the desire to advance to the college level admittedly influences my opinion. I can also completely understand the importance that others with D1 aspirations need to place on this area. There is probably no good way to do this, and the problem of fair evaluations is not limited to basketball.

Our chapter is using evaluators this year to see how it works (I don't know if they are paid). We are also experimenting with a rating system. A list of members was mailed to everyone, and we were asked to select who we felt were the 50 best officials in the chapter (not rank them, just pick the top 50). Coaches/ADs were asked to do the same. Our assignment committee also consists of three people, who make every effort to watch as many members as possible work summer ball and camps. The process, as explained to me, combines the evaluators' reports, the Top 50 lists, personal observations, and coaches' requests to determine whether you are ready for higher/bigger games. I was also told that if the assignment secretary feels you are "on the bubble," he will contact your partners for their input, or will take a flyer and put you with someone he knows and get their input.

The system must work fairly well, as I have not, in my limited circle, heard anyone complaining vigorously, and my schedule so far is better than I expected. It may be that I simply have not heard them. The way I look at it, when they think I'm ready, they will give me a shot. If I think I'm ready, I just need to show it more in the camps we run, or attend the better camps with board members and the assignment people. But then, guys like me who happily take what we get probably make the assignment secretary's job much easier.

By the way, if you can find a way to get personal bias out of an evaluation process, please post it here. That is an area where I have always struggled.
__________________
I didn't say it was your fault...I said I was going to blame you.
|||
Z-man, what do you plan to pay your evaluators? Given 100 officials in our association and the expectation of 4 evaluations per official, the evaluators would have to attend at least a portion of 400 games! Another thing I would need to think about is our school locations: they range from Salmon/Challis, about 160 miles to the north, to Blackfoot, 25 miles to the south. Perhaps we could employ an evaluator who lives at the distant location. Should an evaluator expect $5 a game? $10?

Are you planning to have the evaluator ever divulge his identity and meet with the officials being evaluated, or just collect a few written comments at the end of the season? I like your ideas; thanks.

Camron, I have seen a similar survey of automobile drivers. Fully 85 percent rated themselves above average, and yes, that means at least 35 percent overrated themselves. Of course, that 35% also assumes all the truly above-average drivers correctly put themselves in the above-average category. The same goes for officials evaluating each other: we all seem to feel we should be in the top 10, but there is just not enough room for all of us in those 10 spots.

Chuck, you are correct: whenever we are criticized, we want to fight back and defend ourselves. We as a whole don't have enough respect for other people's opinions. I think this emphasizes the importance of selecting the right people to be evaluators. They need the leadership skills to deliver criticism constructively and have it accepted, and they need to understand the plight of a basketball official trying to advance (sounds like a veteran official).

Stripes brings up another important point: the evaluators will need to keep up with changes in the game. All great points. Keep the ideas coming.
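The back-of-the-envelope budget above is worth writing out. This sketch just multiplies the figures floated in the thread (100 officials, 4 observations each, and a few candidate per-game fees); none of the fees are decided numbers.

```python
# Figures from the post: 100 officials, 4 observations per official.
officials = 100
observations_each = 4
visits = officials * observations_each  # game visits evaluators must cover

# Candidate per-game fees mentioned in the thread ($5, $10) plus the
# $20 figure another association reported paying.
for fee in (5, 10, 20):
    print(f"${fee}/game -> ${visits * fee} per season")
```

At 400 visits, even the cheapest fee is a real budget line, which is presumably why the other association funds its observers with a summer tournament.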
__________________
"There are no superstar calls. We don't root for certain teams. We don't cheat. But sometimes we just miss calls." - Joe Crawford
|||
Quote:
So having no ratings at all is bad, but very often having ratings is worse. I went from the #1-rated non-varsity official on my college board to the bottom-rated varsity official in one year. I had great ratings, so I got bumped up to varsity. But once there, other officials didn't think quite so highly of me. Part of that may have been my performance as a rookie, I readily admit, but I don't think all of the change can be attributed to that. Just my own experience and opinion.

Chuck
__________________
Any NCAA rules and interpretations in this post are relevant for men's games only!
|||
Yes, there are enough former refs (even more importantly, ones who have the respect of the current refs) to do this. The board is going to give them formal training to keep them current and to try to make sure they are consistent in their rating criteria.
All our members know who the observers are. The observers give an "observation sheet" with helpful hints to each ref they observe. We also have several VHS camcorders, and each observer will take a few minutes of video of each ref and give them the tape. The tape is truly objective. :-)

We will pay the observers $20 per game. They will watch both the JV and varsity games to make it worth their while (two games pay for one trip). All our schools are within a 40-mile radius, so that helps. We run a summer fundraising tournament (where our officials donate their time) to raise the funds to pay our evaluators.

The board and the assignor review all of our referees' peer ratings at the end of each season. It is quite obvious if a referee is intentionally rating down his/her partners. In the two years that I've been on the board, I've only seen it once. Our officials have shown excellent integrity, and the peer ratings have been very fair, IMHO. We're just trying to make it "even more fair" by adding observers.

One other thing I forgot to mention: we do not micromanage our assignor. The ratings are used to help him in assigning, but not to dictate his every move. He is an ex-referee (with the respect of everyone), and he observes a lot of games himself. He has the freedom to assign "lower-rated officials" to varsity games if he feels the rating system is not placing them in a spot that reflects their ability. It's always a work in progress. :-)

Z
|||
Quote:
__________________
Get it right! 1999 (2x), 2006, 2008, 2010, 2012, 2014, 2016, 2019
|||
In our association, we hire about 10-15 observers (former or active college officials) to evaluate our 60 varsity officials. We pay them $25.00 or $30.00 per game, and each official gets 5 evaluations (you can do the math).

Each varsity official who has a JV game before theirs is required to observe and evaluate the JV officials working that game. Scoring is based on a 100-point system. Each year, the top 5 JV officials move to the varsity list and the bottom 5 varsity officials move to the JV list. It's not a perfect system, but it works well for us.

Here is the evaluation form we use for scoring purposes: http://home.officiating.com/sowb/Doc...serverform.pdf (or, if you would like a Word version, change the extension to .doc).
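The promotion/relegation step above is mechanical enough to sketch. Everything here assumes each list is simply a mapping of official to season score on the 100-point system; the names and scores are made up, and tie-breaking is not addressed in the post.

```python
def swap_lists(varsity, jv, n=5):
    """Move the top n JV officials up and the bottom n varsity officials
    down, as the association's year-end rule describes.

    varsity, jv: dicts mapping official name -> 100-point season score."""
    promoted = sorted(jv, key=jv.get, reverse=True)[:n]   # best JV scores
    relegated = sorted(varsity, key=varsity.get)[:n]      # worst varsity scores

    new_varsity = {k: v for k, v in varsity.items() if k not in relegated}
    new_varsity.update({k: jv[k] for k in promoted})
    new_jv = {k: v for k, v in jv.items() if k not in promoted}
    new_jv.update({k: varsity[k] for k in relegated})
    return new_varsity, new_jv

# Tiny illustration with invented officials and scores.
v, j = swap_lists({"Adams": 40, "Baker": 55, "Cole": 70},
                  {"Diaz": 90, "Evans": 60}, n=1)
```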
|||
Zebraman, what association are you with?

Everyone else: our evaluators meet with us in the locker room after the game. We don't get a score right then, but we do get the feedback.