Which high school Model UN teams are the best in North America? We saw many benefits to creating a High School Model UN Rankings system and are about to release the Fall rankings for the 2011-2012 school year. Here’s what you need to know about our purpose, our philosophy, and changes made to the methodology before diving into the rankings.
We want top teams to be recognized for their accomplishments in a centralized location, and we want other teams to aspire to join them. We believe sharing this information will be interesting and valuable to the community. We also believe it will foster discussion among high school Model UN teachers and conference organizers on bigger issues that affect the activity, such as the lack of standardized awards criteria and transparency, the tension between a competitive and an educational experience, and the sharing of best training methods.
We do not believe that awards are the purpose of Model UN — rather, awards are a way to recognize Model UN teams for their hard work and leadership in committee. Awards should support the greater purpose of Model UN, which is to be an educational experience that helps today’s students become tomorrow’s leaders, discover their passions, and change the world.
It is important to understand that rankings inherently reflect the publisher’s philosophy and values. Best Delegate’s mission is to grow the Model UN activity in terms of both quality and size. Therefore, we ultimately value both the ability to win and the ability to win at more conferences – we believe the best teams are those that can perform consistently well across many conferences and especially so at the most competitive conferences.
Our ideal rankings would reward winning at more conferences and would not penalize teams for losing so-called “head-to-head matchups” at a single conference. They would reward teams for bringing larger delegations to conferences, since they would use the total weighted score of awards won instead of an awards-to-delegate ratio. They would also rank teams that may not have won a delegation award but won awards more consistently above teams that merely gained publicity by winning a small delegation award at a small conference. Finally, they would reward teams that performed exceptionally well at a given conference by winning a higher proportion of the awards available.
Unfortunately, we do not have enough awards data at the moment for most high school MUN conferences beyond the largest ones and have to rely on delegation awards information for now when creating our rankings. In the future, we hope to improve our methodology to look more like our College Rankings methodology where we have full awards data from almost all the conferences.
The rankings for high schools in North America are determined by the sum of the four highest scores achieved per team at conferences held in North America. A score for each conference is determined by converting delegation awards won or the total number of awards won (when information is available) into points and then multiplying those points by a conference competitiveness weighting. Greater weighting is given to awards won at the more competitive conferences. We decided to cap the number of conferences at four for this year since there is a disparate availability of Model UN conferences across geographic regions. Conferences that did not submit awards data are excluded. One-day novice conferences hosted by high schools are also excluded since they focus more on training than on competition.
Essentially, each conference score is the Delegation Award Score multiplied by the Conference Weighting, and a team’s ranking is determined by the sum of its four highest conference scores.
- Delegation Award Score: Delegation awards are converted into points. We valued delegation awards in this order: Best Large, Outstanding Large, Best Small, and Outstanding Small. Some conferences feature other types of delegation awards, and we converted them appropriately depending on whether that award is considered more or less prestigious than the aforementioned four awards. When data is available, teams that won numerous awards but did not win a delegation award will have their scores converted to the equivalent of placing “3rd,” “4th,” and so on.
- Conference Weighting: Every conference is assigned a competitiveness multiplier based on our internal algorithm that takes into account total size of the conference, number of award-winning teams present, the delegate-to-committee ratio, the number of days of the conference, and whether it was hosted by a university/non-profit organization or a high school.
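The formula above can be sketched in code. Note that the point values and conference weightings below are illustrative assumptions for the sake of the example; they are not Best Delegate’s actual internal values, which are not published.

```python
# Sketch of the rankings formula: a team's ranking score is the sum of its
# four highest conference scores, where each conference score is the
# Delegation Award Score times the Conference Weighting.

# Assumed point values -- illustrative only, not the actual internal scale.
AWARD_POINTS = {
    "Best Large": 10,
    "Outstanding Large": 8,
    "Best Small": 6,
    "Outstanding Small": 4,
}

def conference_score(award, weighting):
    """Convert a delegation award into points, scaled by the conference's
    competitiveness weighting."""
    return AWARD_POINTS.get(award, 0) * weighting

def team_ranking_score(results, cap=4):
    """Sum the team's `cap` highest weighted conference scores."""
    scores = [conference_score(award, weight) for award, weight in results]
    return sum(sorted(scores, reverse=True)[:cap])

# Example: a team with five results only counts its four best scores.
results = [
    ("Best Large", 1.5),         # a highly competitive conference
    ("Outstanding Large", 1.2),
    ("Best Small", 1.0),
    ("Outstanding Small", 1.0),  # the lowest score, dropped by the cap
    ("Best Large", 0.8),         # a less competitive conference
]
print(round(team_ranking_score(results), 1))  # 15 + 9.6 + 8 + 6 = 38.6
```

Capping at four conferences, as described above, means the weakest result in this example (the Outstanding Small at weighting 1.0) simply drops out rather than dragging the total down.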
The Fall edition of the 2011-2012 rankings includes the conferences below. They are roughly grouped by the range of their Conference Weightings and are listed in alphabetical order within each group:
- Princeton PMUNC
- William & Mary WMHSMUN, Chicago CIMUN
- Brown BUSUN, Georgia Tech GTMUN, McGill SSUNS, Rutgers RUMUN, UCLA BruinMUN, Virginia VAMUN
- Edison EHSMUN, Regionals RHSMUN, San Antonio MUNSA, Stanford SMUNC
- Baylor BUMUN, Brigham Young BYMUN, Central Florida KnightMUN, Colorado UCHSMUN, Connecticut UCMUN, Contra Costa CCCMUN, Great Lakes Invitational GLIMUN, Southeast High School SHSMUN, Southern United States SUSMUN, Vanderbilt VUMUN
Teams that participated in these conferences earned points toward the rankings and are included in them (unless we received incomplete awards data). It is important to note that some teams participate in other conferences, such as conferences abroad, so their ranking may not reflect the actual quality of the program, as achievements from those conferences are not captured under our methodology.