10 Lessons from the Model UN Standings (open thread for feedback)

The two series of MUN Standings articles we have produced – America’s Best High School Model UN Teams and the Best College Model UN Teams – are the culmination of a lot of work gathering data throughout the year and the first attempt to centralize and aggregate all of this data. The articles have garnered immense interest, and we are very excited about the response we have received.

What we wanted to do with this post is share ten lessons we learned during this exercise and open up the comments thread for constructive feedback – not just on how to improve the Standings methodology, but also on how to address larger issues regarding conferences that your school attends, or doesn’t have the opportunity to attend.

First, five lessons from completing the project:

1. We will start streamlining the collection of data. Now that we have centralized the awards data and received an overwhelmingly positive response to these Standings, we know schools are interested in reading about them and will want to follow the Standings throughout the season in the future. We admit that it is difficult to collect all of this data manually and that there were a few errors during our first attempt, but it was more important that we tried so that we could get feedback on whether a streamlined process was worth developing.

2. Centralizing Model UN is beneficial for the community. Because the activity is so decentralized, it is difficult to know which teams are really good or are making contributions to Model UN – it’s quite possible that many schools outside the Northeast have never heard of most of the schools in the Top 10, and vice versa, particularly if those schools don’t travel outside their region. We speak often about the “Model UN community,” but this community is relatively fragmented, and we think connecting the different schools and students – including those who only attend smaller, regional conferences – will yield numerous benefits, some of which we already mentioned in our Overview.

3. Reactions of pride were the most rewarding response for us. Whenever we saw students or alumni get excited about where they placed, we felt rewarded, because that was the purpose behind creating these Standings: we wanted others to be proud of themselves and their teams. To us, the saddest responses came from students who were disappointed – or didn’t know how to react – because of the worldview they have been trained with or the emphasis on awards that has been instilled in them. Placing in the Top 25 – or even the Top 100 that we didn’t release – is already an incredible accomplishment. We hope students understand this and learn that, at the end of the day, those who recognize the value of Model UN beyond the awards will get much more out of the activity than those who care only about rankings and awards.

4. We hope students use these Standings to promote their accomplishments. Of all the benefits that we mentioned in our Overview, we think this one is the most relevant to readers. Programs should use these Standings to recruit more members, to show their school administration in hopes of receiving funding or support, and to publicize their accomplishments in school and local newspapers. This shouldn’t be a one-time thing – schools should be promoting their accomplishments throughout the school year.

5. Ideas for Model UN reform need to be communicated to the conferences. An inadvertent result of doing these Standings is that participants have become more willing to share their concerns with us about some of these conferences. At the high school level, the issues seem to be large committees, low educational quality, limited use of technology, lax enforcement of the rules (including on plagiarism), and the criteria for judging awards. At the college level, the main issue seems to be the simulation of non-U.N. committees. However, we believe these issues should be brought up directly with the conference organizers. Only with clear feedback can they improve – and establish themselves as conferences that you would want to attend regularly. We understand there needs to be a centralized platform that allows for regular, productive communication between schools and conference organizers, and we hope to develop this site into a platform that facilitates this for the benefit of all participants.

Next, five lessons from the comments on methodology:

6. Conferences have different philosophies, awards systems, and judging metrics. Each one requires a different skill set to win, and we don’t favor one philosophy over another. Schools need to understand that just because a conference is not as “competitive” as Harvard’s HMUN or Georgetown’s NAIMUN does not necessarily mean it is not difficult to win there – it’s analogous to understanding that business is conducted differently around the world. We believe the best teams can adapt across conference philosophies or know to choose conferences that match their own philosophy. We have written about this subject as a delegate strategy, as a team-building strategy, and in our guide, How to Win Awards in Model United Nations.

7. Teachers should select conferences that give their students the best educational experience. One of our fears in producing the Standings is that teams will start gaming the system and attend only conferences with high weightings – hence we didn’t release the specific weighting we gave each conference. We’re confident, though, that teachers will keep their students’ best interests in mind and continue to select the conferences that are best for their students – and not necessarily best for the Standings (which could lead to increased pressure and emphasis on winning awards). In fact, we advocate a balance of conferences of varying competitiveness when building a top travel team, and we will look at how best to integrate smaller, more regional conferences into the methodology.

8a. Conferences need to be reweighted in the high school standings. This feedback came from both ends of the spectrum, as schools of all calibers wanted to boost their own standing and believe their conferences are more competitive than their current weighting reflects. On one end, the top schools want their head-to-head competition with each other to boost one another’s profiles and therefore perpetuate their ability to stay on top; under this view, a mid-sized conference could be considered more competitive than a large, national one and earn everyone more points in the Standings. Regions with active circuits would certainly benefit in future Standings from a weighting that is more biased toward “actual” competition. On the other end, many good teams that do not have the opportunity to travel to the major conferences also want their smaller, local conferences to receive a weighting boost, because those conferences are in fact competitive, draw very good teams, and should count for more than they currently do.

8b. HNMUN is overweighted in the college standings. We anticipated this would happen, especially since Harvard National Model United Nations provided a full awards list, whereas that information was not available for many of the other conferences. It was therefore much easier to use their data for head-to-head comparisons, and we noted their increased influence in the Methodology. The solution for “fairer” weightings in the college standings – especially for teams that do not attend HNMUN – is to gain access to more complete awards information from every conference, or to figure out a way to streamline the submission of awards results from schools.

9. Absolute number of awards should be rewarded more than winning percentage. Conferences use winning percentage to determine delegation awards because smaller teams have lobbied for an even playing field when they are compared against other schools in the same size tier. That’s fine, but the Best Delegate team values the absolute number of awards more than winning percentage. To make a basketball analogy, a game isn’t won by the percentage of shots made but by the total shots made (points scored). Awards aside, we believe teams should be growing the activity and bringing as many delegates as they want. If more delegates win, then the team deserves recognition for that team-wide effort. We used this approach in our methodology to figure out “3rd place,” “4th place,” and so forth, and will continue to do so with minor modifications.
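To make the contrast concrete, here is a small illustration in Python. The teams and numbers are hypothetical, not taken from any real conference results; it simply shows how ranking by absolute awards can differ from ranking by winning percentage.

```python
# Hypothetical numbers: a large team with a lower winning percentage can still
# contribute more absolute awards than a small team with a higher percentage.
teams = {
    "Large Delegation": {"delegates": 20, "awards": 10},  # 50% of delegates won
    "Small Delegation": {"delegates": 5, "awards": 4},    # 80% of delegates won
}

for name, stats in teams.items():
    rate = stats["awards"] / stats["delegates"]
    print(f"{name}: {stats['awards']} awards ({rate:.0%} winning percentage)")

# Ranking by absolute awards (the approach described above) puts the
# Large Delegation first, even though its winning percentage is lower.
by_absolute = sorted(teams, key=lambda n: teams[n]["awards"], reverse=True)
by_percentage = sorted(
    teams, key=lambda n: teams[n]["awards"] / teams[n]["delegates"], reverse=True
)
print("By absolute awards:   ", by_absolute)
print("By winning percentage:", by_percentage)
```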

10. The number of conferences used for calculations may need to be capped. We believe in counting the absolute number of awards won, including delegation awards, but there are certainly issues with this. At the high school level, socioeconomic factors, distance, or administrative limitations may prevent teams from competing in many conferences each year. Teams that attend more conferences have the opportunity to earn a higher aggregate score, but that means we’re judging the program’s quality (its ability to field a team) instead of the actual team’s quality (its ability to perform at a conference). It’s a similar case at the college level. One solution, sketched below, is to take a standard number of top results – say, five conferences – from each school to level the playing field against factors outside the teams’ control, but we hope this will not discourage teams from attending more conferences.
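Here is a minimal sketch of that cap in Python. The per-conference scores below are hypothetical placeholders (we did not release the actual weightings); the point is only that counting each team’s best five results compares teams on the same number of conferences.

```python
# Minimal sketch of the proposed cap: only a team's top N conference results
# count toward its aggregate score. All scores here are made-up examples.
TOP_N = 5

def aggregate_score(conference_scores, top_n=TOP_N):
    """Sum only the team's best `top_n` conference results."""
    best = sorted(conference_scores, reverse=True)[:top_n]
    return sum(best)

# A team that attends eight conferences and a team that can only attend five
# are compared on the same number of results.
frequent_traveler = [12.0, 10.5, 9.0, 8.0, 7.5, 6.0, 5.5, 4.0]
regional_team = [11.0, 10.0, 9.5, 9.0, 8.5]

print(aggregate_score(frequent_traveler))  # 47.0 (best five of eight results)
print(aggregate_score(regional_team))      # 48.0 (all five results)
```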

There are already plenty of other comments on all of the articles.

America’s Best High School Model UN Teams:
Overview, Methodology, Top 1-5, Top 6-10, Top 11-15, and Top 16-25!

The Best College Model UN Teams:
Overview, Methodology, Top 1-5, Top 6-10, Top 11-15, Top 16-25, and International Top 20!

Now we want to open it up for feedback. What did we do well? What needs to be improved or changed? What are the most important takeaways (e.g., dialogue on conference reform) that should be implemented from this exercise?

**

If you would like to give us general feedback on Best Delegate beyond just the MUN Standings articles, please take our reader survey.
