Open Thread: 10 Lessons From the Fall Rankings

by KFC on December 21, 2011

There are a lot of takeaways for us from the rankings, but we hope teams read between the lines and learned something too, beyond just which teams got ranked. I’ll keep this post short since we already received a lot of good feedback on both the college rankings and the high school rankings. Feel free to leave a constructive comment and we’ll take your feedback into consideration for the next set of rankings.

1. We need to clarify that these are team rankings only. They measure only the competitive success of the travel team. They do not measure the overall quality of the program, e.g. whether it hosts a conference, does outreach, goes abroad, or brings in speakers.

2. Teams that host do not receive a score for their own conference. There’s no perfect solution to this, but hosting teams can take comfort in the fact that hosting provides funds to travel to more conferences, or to send more people to a conference, making up the score. The most constructive suggestion came from the college feedback: instead of giving teams a score for hosting conferences (since these are purely team rankings), we should create a ratings system for the conferences, analogous to how people rate and choose hotels.

3. The rankings measure achievement rather than make predictions. They are meant to reward cumulative success so far; they do not necessarily predict which teams will beat others in the future. This expectation needs to be clarified, or perhaps the rankings should revert to their original name, Standings.

4. Cumulative scoring is good but not perfect. We still prefer using a cumulative score of weighted points because it fits our fundamental mission of encouraging teams to attend more conferences. But there were strong cases made for including head-to-head wins and losses in the rankings, and a weaker case made for using an average of weighted scores (total points divided by number of conferences).
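For illustration, the difference between cumulative and average scoring can be sketched with hypothetical numbers. The point values and team results below are invented for this example; they are not Best Delegate’s actual conference weights.

```python
# Hypothetical comparison of cumulative vs. average scoring.
# Point values are invented; they are not Best Delegate's real weights.

def cumulative_score(conference_points):
    """Total weighted points across all conferences attended."""
    return sum(conference_points)

def average_score(conference_points):
    """Total points divided by number of conferences attended."""
    return sum(conference_points) / len(conference_points)

# Team A attends four mid-sized conferences; Team B attends one
# highly weighted conference.
team_a = [30, 25, 20, 25]
team_b = [60]

print(cumulative_score(team_a), cumulative_score(team_b))  # 100 vs. 60
print(average_score(team_a), average_score(team_b))        # 25.0 vs. 60.0
```

Under cumulative scoring, Team A’s breadth of attendance wins out; under averaging, Team B’s single strong result ranks higher. That difference is why the cumulative approach better fits a mission of encouraging teams to attend more conferences.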

5. Cap policies on cumulative scores work for now. On the college circuit, no one protested our decision not to cap the number of conferences we count. On the high school circuit, teams seem satisfied with our temporary cap of four conferences. However, we expect that number to go up as Model UN becomes more accessible in other regions and as teams lobby their administrations to attend more conferences (perhaps at least smaller, local ones).
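One simple way to implement a cap like the temporary four-conference cap on the high school circuit is to count only each team’s best scores. This is a sketch under that assumption, with invented point values:

```python
def capped_score(conference_points, cap=4):
    """Sum only the top `cap` conference scores.

    Extra conferences can still help a team (by replacing a weaker
    score) but cannot inflate the total indefinitely. The cap of 4
    mirrors the temporary high school cap; the point values used
    below are hypothetical.
    """
    return sum(sorted(conference_points, reverse=True)[:cap])

# A fifth, weaker result is simply dropped from the total.
print(capped_score([30, 25, 20, 25, 10]))  # 100
```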

6. Arbitrary cut-off times will always hurt some teams. We chose the end of December 2011 for the Fall rankings. However, teams choose conferences across the entire year, not just the Fall; some teams didn’t compete as much in the Fall and thus didn’t have as many scoring opportunities. There are two solutions. First, we could maintain a standing list of awards instead of focusing on Fall rankings, which would be a valuable resource in itself. Second, we could communicate the expectation of a Fall ranking and encourage teams to attend more conferences in the Fall, which would fit our mission of expanding the Model UN circuit.

7. Higher-ranked teams focus on education instead of competition. They’re not just good because they have elite upperclassmen who win awards. They’re good because they can field a large delegation (win more awards) and because their underclassmen can win too at less competitive conferences (win at more conferences). They do a great job of recruiting and training their members, and that educational focus is reflected in newer members winning more awards at more conferences, rather than relying on returners winning at only one or two of the most competitive conferences. Put another way: higher-ranked teams focus on education instead of competition; the awards come along with the process.

8. Transparency is improving. We realized that the delegation award winners are not necessarily the teams that won the most awards, since conferences like to use a delegate-to-awards ratio. We now have full awards data for the college circuit, so we’re able to fix that and provide better recognition to teams that don’t win delegation awards. We’re still working on this for the high school circuit.

9. Regions without MUN conferences should start their own. We saw the benefits of entrepreneurship in California. On the college circuit, SBIMUN’s rapid growth gave a solid boost to all the California universities that previously had to fly across the country to attend a single conference. On the high school circuit, California high schools have many local conferences to attend, giving more people the opportunity to train, or even compete, in a cost-effective manner (several of these conferences were as competitive as a university-hosted one). Other regions should emulate this so that teams have more opportunities to do Model UN. Starting a conference, of course, requires experienced upperclassmen to grow their teams by recruiting and training newer students to help.

10. Overall, sportsmanship seemed to be a lot better this time. And we commend everyone for that. Being ranked anywhere on the Top 25 is already an honor. If you were proud of your team, then you get it. You get that Model UN is about much more than the awards and the rankings and this extra recognition is just a by-product of your success.

Congrats to all the teams that got ranked and good luck in the winter and spring!


  • Abhimanyu Muchhal

    Hey Best Delegate.

    I would like to say that this is actually a very nice summary of the comments that we have seen throughout the rankings, and I’m sure it is helpful for all your readers.

    Specifically, I would like to mention that point 6 is very important. I come from a school that did pretty well in the HS rankings last year but moved down this year because we went to very few conferences in the fall – our conference schedule hits its peak in the winter.

    To keep these rankings fair while abiding by the Best Delegate methodology, I think the idea of a standing list of awards would be very effective, as it would allow teams to see the progress of other delegations in their circuit and to improve throughout the year before being ranked.

    A consensus among the other MUNers I talked to regarding this matter was that many of the larger conferences like HMUN, ILMUNC, NAIMUN and McMUN don’t happen until later this year, and it is important to look into the performances of schools there before ranking them.

  • Anonymous

    When I first saw the fall rankings/standings, I had pretty much the same reaction as everyone else – surprise, anger, etc.

    However, I’ll say that after taking a little bit of time to think about them, there’s really no reason to be upset. There’s clearly been a major shift since the 2010-2011 rankings, and (in my opinion) the best schools have not been represented very well in the higher part of the rankings – but that is okay.

    If Best Delegate had done fall rankings last year, they probably would not have looked all that much like the final rankings, and these fall rankings probably won’t look like this year’s final rankings.

    There are issues, though. First of all, smaller conferences, especially those in the California circuit, are being given far too much weight. That’s okay for the fall rankings – so far, the closest thing to a “Most Competitive” conference that we’ve had has been PMUNC. It makes sense to give more weight to smaller conferences now, but this needs to change as we move through the year.

    Right now, we’re moving into the heart of the high school circuit. The best, largest and most competitive conferences are coming up. This year, these conferences will be NAIMUN, ILMUNC, HMUN, etc. Yes, those are in a particular order. NAIMUN will be the #1 conference this year, with Mira Costa, Dalton, WW-P South, and a LOT more top schools attending (in addition to being the largest conference in North America). HMUN would be ranked higher than ILMUNC if it wasn’t losing Dalton and Horace Mann this year. While HMUN is still incredibly competitive/large, it loses some weight with the absence of Dalton and Horace Mann. That leaves ILMUNC to gain some weight, with the presence of Horace Mann added to the top schools that usually go.

    In addition to that big three, there are the other “Most Competitive” conferences coming up to look forward to, as well as all of the conferences in the tier after that, and others.

    The point here is that in terms of competitiveness and size of conferences, we’re probably less than 20% of the way through the season. Don’t get upset, because these are not in any way, shape or form the final rankings. Right now, these rankings/standings are accurate – but at the end of the year, the largest and most competitive conferences need to be given more weight. If ANY school won Best Large at NAIMUN, that award alone should probably put that school at or near the current standing of Huntington Beach.

    So, instead of lashing out about these FALL STANDINGS, understand and realize that the final standings are light years away, and there’s potential for literally any school to soar to #1 based on their performances at the most competitive conferences this year.

  • Anonymous

    Hey Guys-

    Thanks again for compiling this list of Rankings. It’s really great to see how far BestDelegate has come in refining its take on Model UN, and in expanding to produce lists like these.

    There’s been a lot of discussion about how to distill the results of the Fall season into this list – the main problem is that it acts to compound disparities between different Model UN teams. This early in the season, many teams simply aren’t ready to be ranked.

    What I first propose is that you divide your Fall Season into two different circuits, recognizing the inherent differences in teams’ approach to the Fall Season. With the exception of well-traveled teams like UChicago Labs, we’re already finding an Eastern Circuit centered around Princeton PMUNC, and a Western Circuit with its own kernel.

    Each circuit would have its own “Most Competitive”, “Large”, and “Regional” Conferences- determined by the number of intra-circuit schools in attendance. As far as weightings go, you would have the freedom to rate them as you pleased- acknowledging the edge of schools that succeed in a more competitive setting, but granting credence to the strategy of traveling wider to accumulate exposure.

    What this does is allow teams to take more away from your Rankings. Currently, we have consecutively ranked teams who have never met each other and never seen each other’s performance. Contrast this with Standings within a circuit, where teams have had the chance to intimately familiarize themselves with their neighbors in the Rankings, and have far more scope to self-evaluate coming off of rankings like these.

    This carries a unique benefit to schools hosting their own conference- natively hosted Conferences would assume special status to the dynamics of each circuit. They would become a forum for teams to both train a team and make a statement- serving the goal of increasing attendance within each circuit.

    Ultimately, what I think this system produces is degrees of freedom in your approach- it creates the freedom to recognize teams based on their own stated goals, and to build themselves based on your feedback.

    BestDelegate Reader

    • Kevin Felix Chan

      Yes, what you’re referring to is Regional Rankings which would make much more sense for the majority of schools in the nation.
