There are a lot of takeaways for us from the rankings, but we hope teams read between the lines and learned something beyond the names of the teams that got ranked. I’ll keep this post short since we already received a lot of good feedback on both the college rankings and the high school rankings. Feel free to leave a constructive comment and we’ll take your feedback into consideration for the next set of rankings.
1. We need to clarify that these are team rankings only. They only measure the competitive success of the travel team. They do not measure the overall quality of the program, e.g. whether the team hosts a conference, does outreach, goes abroad, or brings in speakers.
2. Teams that host do not receive a score for their own conference. There’s no perfect solution to this, but hosting teams should take comfort in the fact that hosting a conference provided them with funds to travel to more conferences, or to send more people to a conference, to make up their score. The most constructive suggestion came from the college feedback: instead of giving teams a score for hosting conferences (since these are purely team rankings), we should create a ratings system for the conferences themselves, analogous to how people rate and choose hotels.
3. The rankings measure achievement rather than make predictions. The rankings are meant to reward cumulative success so far; they do not necessarily predict which teams will beat others in the future. This expectation needs to be clarified, or perhaps the rankings should be changed back to their original name, Standings.
4. Cumulative scoring is good but not perfect. We still prefer using a cumulative score of weighted points because it fits into our fundamental mission of encouraging teams to attend more conferences. But there were strong cases made for including head-to-head wins/losses in the rankings, and a weaker case made for using an average of weighted scores (total points divided by number of conferences attended).
5. Cap policies on cumulative scores work for now. On the college circuit, no one protested our decision not to cap the number of conferences we count. On the high school circuit, teams seem satisfied with our temporary cap of four conferences. However, we expect that number to go up as Model UN becomes more accessible in other regions and as teams lobby their administrations to attend more conferences (perhaps at least smaller, local ones).
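To make the difference between the two approaches concrete, here is a minimal sketch of capped cumulative scoring versus average scoring. The point values and conference counts below are invented for illustration; our actual weighting formula is not reproduced here.

```python
def cumulative_score(scores, cap=None):
    """Sum of weighted conference scores, counting only the top `cap`
    results when a cap is set (e.g. the high school cap of four)."""
    ranked = sorted(scores, reverse=True)
    return sum(ranked[:cap] if cap else ranked)

def average_score(scores):
    """Total points divided by the number of conferences attended."""
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical teams: point values are made up for demonstration.
team_a = [40, 35, 30, 25, 10]   # attends five conferences
team_b = [45, 44]               # attends only two

print(cumulative_score(team_a, cap=4))  # 130
print(cumulative_score(team_b, cap=4))  # 89
print(average_score(team_a))            # 28.0
print(average_score(team_b))            # 44.5
```

Note how the cumulative method rewards the team that attends more conferences, while the average would rank the two-conference team higher, which is why cumulative scoring fits our mission of encouraging attendance.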
6. Arbitrary cut-off times will always hurt some teams. We chose the end of December 2011 as the cut-off for the Fall rankings. However, teams choose conferences for the entire year and not just the Fall; some teams didn’t compete as much in the Fall and thus didn’t have as many scoring opportunities. There are two solutions to this. First, we could create a standing list of awards instead of focusing on Fall rankings, which would be a valuable resource. Second, we could communicate the expectation of a Fall ranking and encourage teams to attend more conferences in the Fall, which would fit into our mission of expanding the Model UN circuit.
7. Higher-ranked teams focus on education instead of competition. They’re not just good because they have elite upperclassmen who win awards. They’re good because they can field a large delegation (win more awards) and because their underclassmen can win too at less competitive conferences (win at more conferences). They do a great job of recruiting and training their members, and that educational focus is reflected in their newer members winning more awards at more conferences rather than relying on returners winning at only one or two of the most competitive conferences. Put another way: higher-ranked teams focus on education instead of competition; the awards just come along with the process.
8. Transparency is improving. We realized that delegation award winners are not necessarily the teams that won the most awards, since conferences like to use a delegate-to-awards ratio. We now have full awards data for the college circuit, so we’re able to fix that and provide better recognition to teams that don’t win delegation awards. We’re still working on this for the high school circuit.
9. Regions without MUN conferences should start their own. We saw the benefits of entrepreneurship in California. On the college circuit, SBIMUN’s rapid growth gave a solid boost to all the California universities that previously had to fly across the country to attend a single conference. On the high school circuit, California high schools have many local conferences to attend, which gives more people the opportunity to train, or even to compete (several of these conferences were as competitive as a university-hosted one), in a cost-effective manner. Other regions should emulate this so that teams have more opportunities to do Model UN. Starting a conference, of course, requires experienced upperclassmen to grow their teams by recruiting and training newer students to help.
10. Overall, sportsmanship seemed to be a lot better this time. And we commend everyone for that. Being ranked anywhere on the Top 25 is already an honor. If you were proud of your team, then you get it. You get that Model UN is about much more than the awards and the rankings and this extra recognition is just a by-product of your success.
Congrats to all the teams that got ranked and good luck in the winter and spring!