America’s Best High School Model UN Teams: Methodology

by KFC on March 22, 2011

The Best High School Model UN Teams standings are determined by awards won at the most competitive conferences in the United States and by head-to-head matchups between schools attending the same conferences.

We used conference size as a proxy for prestige and competitiveness – winning Best Delegation awards, or even simply performing well, at larger conferences is weighted more heavily in the standings. We grouped conferences into three tiers: Most Competitive (1,500+ delegates), Large (1,000-1,500 delegates), and Regional (500-1,000 delegates that are either the most competitive in their geographic region or otherwise notably competitive). The conferences in each tier are listed below. We only counted conferences hosted by colleges and non-profit organizations – we did not have enough data to consider conferences hosted by high schools.

We valued delegation awards in this order: Best Large, Outstanding Large, Best Small, and Outstanding Small. Delegation awards won at the more competitive conference tiers carry more weight. Conferences that published their full list of award winners gave us better data to work with and are therefore more influential in the standings – schools that did not win delegation awards but had many individual award winners are counted as finishing in “3rd place,” “4th place,” and so forth.
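
To make the weighting concrete, here is a minimal sketch of the scoring idea. The point values and tier multipliers below are purely illustrative assumptions (they are not the actual weights behind the standings), but the structure, a tier multiplier applied to a delegation award value, is the idea.

```python
# Illustrative sketch only: point values and multipliers are assumptions
# for demonstration, not the actual weights used in the standings.

TIER_MULTIPLIER = {
    "Most Competitive": 3.0,   # 1,500+ delegates
    "Large": 2.0,              # 1,000-1,500 delegates
    "Regional": 1.0,           # 500-1,000 delegates or notably competitive
}

AWARD_VALUE = {
    "Best Large": 10,
    "Outstanding Large": 8,
    "Best Small": 6,
    "Outstanding Small": 4,
    "3rd place": 3,            # inferred from full award lists
    "4th place": 2,
}

def delegation_points(tier: str, award: str) -> float:
    """Points a school earns for one delegation award at one conference."""
    return TIER_MULTIPLIER[tier] * AWARD_VALUE[award]

# A Best Large Delegation at a Most Competitive conference outweighs
# the same award at a Regional conference.
print(delegation_points("Most Competitive", "Best Large"))  # 30.0
print(delegation_points("Regional", "Best Large"))          # 10.0
```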

In our analysis of the 40 largest conferences that had taken place by March 21, 2011, we identified over 90 high school MUN teams that have consistently won awards so far this year. Since we have awards results and head-to-head matchups from six of the seven Most Competitive conferences, delegation award winners and top-performing schools at those conferences, as well as Best Large Delegation award winners from the Large conferences, will populate the Top 25 standings.

Tier Breakdown

Most Competitive: We looked at the 40 largest conferences in the United States and sorted them by size. We determined that the seven largest conferences, each with over 1,500 delegates, are also the most competitive conferences in the nation. Awards won at these conferences are thus weighted more heavily than awards won at any other conference. Since we have visited each conference and have good head-to-head data, we were able to sort these conferences in order of strength of competition:

Large: The next tier consists of conferences with 1,000-1,500 delegates; awards won at these conferences are weighted more heavily than awards won at Regional conferences. In alphabetical order, these are:

Regional: In our analysis of the 40 largest conferences, we also took note of conferences that have 500-1,000 delegates but may be the most competitive conferences available in their region, including but not limited to:

We also placed in this tier conferences that are neither large nor the most competitive in their region but are attended by competitive high school teams; these include Cornell CMUNC, Columbia CMUNCE, and Illinois-Chicago CIMUN.

  • Ronald

    *FHSMUN barely had 300 students attend.

    • KFC

      Ronald, you are correct. When we did our initial research for the methodology, we thought FHSMUN was larger and was considered the most competitive conference in Florida. We discovered from a reader-submitted recap a few days later that the conference had under 300 students, so we’ll fix it in the final end-of-year standings.

  • http://www.wwphssmun.com Nikhil

    Although your methodology makes sense, you fail to recognize PMUNC as a competitive conference. PMUNC is a large conference and therefore would be put in the first category, and it was attended by West Windsor-Plainsboro High School South, West Windsor-Plainsboro High School North, Horace Mann, and Dalton this year, making it an extremely competitive conference. Therefore you should add this into your calculations when deciding the top 1-5 schools.

    • KFC

      Princeton PMUNC is among the top 40 conferences that we used for the calculations (we only listed 25 here but we used 40). We had placed it under the “Regional” tier based off the best information we could obtain but if the attendance numbers are confirmed to be close to 1,000 or more delegates, then we will bump it up to the “Large” tier.

      I understand your argument that many good teams attended a particular conference so that it should be weighted more (it’s similar to the “strength of schedule” factor in college football or basketball), but in order to be unbiased, we assigned weighting for each conference reputation-blind and based only on conference size (a standard metric) before seeing what the results were. In addition, we do not have data on the attendees for all conferences, so we wouldn’t have been able to implement this across all conferences. That said, there are a few conferences — Cornell CMUNC and Florida Gulf Coast SWFLMUN come to mind — that may be underweighted for their size.

  • Amanda

    I don’t quite understand the fascination with NHSMUN. The level of competition there clearly was not of the caliber of NAIMUN, or even perhaps University of Pennsylvania’s ILMUNC for that matter.

    Four of the twenty-five schools only received recognition at NHSMUN. In essence, Centennial, Richland or Franklin & Highland Park had no other results, or at least no other results that you’ve specified. NHSMUN does not seem to be as challenging as you’ve made it out to be.

    On another note, I think it might be in the best interest of Best Delegate to put together an advisory board (it would never have to be published – names or affiliations). This could advise you and provide information and insight at certain times and concerning certain issues. For instance, why did you originally believe FHSMUN was the largest and most competitive conference in Florida? Mistakes like this, though innocent, could negatively impact your credibility. I would not want to see this happen in any facet. Any coach of a competitive Florida team could have given you the insight on FHSMUN. Also, I fear that you have given it, seemingly without any valid reason, an inordinate degree of supposed challenge or difficulty.

    My last concern is SSUNS (Montreal). You’ve listed McGill’s college conference as one of the most competitive conferences, but failed to give any recognition to SSUNS, McGill’s high school conference. As a delegate who attended SSUNS in previous years, I strongly believe that SSUNS is a very competitive conference, and it is also very large. I’m inquiring as to why you would not include SSUNS in your list of “Most Competitive” conferences.

    Thank you for your time.

    • KFC

      In our conference weightings, NHSMUN and NAIMUN are about the same. Both are a bit more competitive and also larger than ILMUNC. Remember, conferences are weighted primarily by size, an unbiased metric, rather than by the reputation of the top schools that attended. See the explanation regarding PMUNC above for the reasons why we chose this for our methodology.

      In addition, weighting a conference by the top few schools that attended discredits the competitiveness of all the other schools that attended — we can’t determine the overall competitiveness of a conference simply because one Top 10 school attended. (Or if we did, Mira Costa, Cerritos, and Huntington Beach could alone make any high school-hosted conference in southern California outrank almost every “Large” conference on the list regardless of who else attended).

      Schools like Centennial and Richland Northeast do not attend as many high-profile conferences (and understandably so, since there are far fewer conferences and no large conferences in the southeastern United States). When they travel to NHSMUN, they are able to go head-to-head against other schools that had more opportunities to demonstrate success (e.g. East Brunswick or Cerritos). We focused on rewarding absolute success based on the strength of the awards won (which again are weighted by conference size) instead of penalizing schools relative to each other.

    • KFC

      Your last two points are of particular interest to us.

      We are the first and only ones attempting to consolidate awards information and analyze competitiveness from 40 conferences, a list that was first whittled down from over 100 conferences — it’s a lot of data to collect and analyze. Our attempt at producing these standings is meant to draw attention to the lack of centralized information in Model UN — and your response is precisely what we wanted. We do want conferences to publicize themselves and we do want schools to have a centralized location to share their success. Our goal with this exercise is to jumpstart the crowdsourcing of Model UN information, and given the popularity of these MUN Standings articles, we now know schools are eager to submit awards data and we’ll be implementing a more streamlined process for this.

      In regards to SSUNS, we left it off because we weren’t including Canadian secondary schools in our standings due to a lack of data, and we didn’t have the information on how many American high schools attended to determine whether it should be counted as part of a greater North American circuit. On the college circuit, it was easy to include McMUN in a greater North American circuit because many American colleges attend McMUN, and McGill plus several other Canadian colleges compete primarily at American conferences. However, we will do more research to see if SSUNS should be considered in the future for a North American version of the standings.

  • A.

    I was wondering why SWFLMUN isn’t on here. It is a very competitive conference with schools such as Port Charlotte, Canterbury and Gulf Coast. Also, why is NHSMUN ranked so high? Centennial was beaten by both Port Charlotte and Gulf Coast at Georgia Tech, both of which were in attendance at NAIMUN along with West Windsor South, JP Stevens, Chicago Lab, and Huntington Beach.

    • KFC

      How large is SWFLMUN? If it is near 500 delegates or more, then it would be considered in the “Regional” tier. Otherwise it is not large enough to qualify for our methodology (our data shows around 300 delegates). Also, as previously mentioned, it’s not just about the top schools that attend — you have to take into account all the other schools too. Having three good schools attend isn’t the only defining factor in making a conference competitive. See my explanations above to Nikhil and Amanda. In addition, what’s invisible here is that we have 90+ schools that have been consistent award winners at multiple conferences (as mentioned in this article), and we believe that is a better sample size and indicator of a conference’s competitiveness.

      NHSMUN is ranked high because we used size — an unbiased and standard metric — as a proxy for competitiveness. NHSMUN is the 2nd largest conference in the nation. Winning at a 3,000-delegate conference is weighted much more heavily than winning at a 500-delegate conference. Remember, these are standings that “score” on absolute accomplishments i.e. Centennial scored much higher than Gulf Coast for the season to date despite a head-to-head “loss” at a much smaller conference.

      • A.

        Gulf Coast not scoring high is very reasonable. I’m just saying that Port Charlotte beats Centennial every year at Georgia Tech. Size is not the biggest factor; in fact, if you were to use size, Georgetown (NAIMUN) is the largest conference in the Western Hemisphere, but it isn’t as competitive as Harvard.

      • The Jet

        FIMUN is probably the only conference in Florida with close to 500 students. My school goes to all the Florida High School MUN conferences. We went there this year and they had over 400 for sure. SWFLMUN and Gator MUN don’t come close.

  • RWJ

    I continue to applaud what you are doing at BEST DELEGATE. It is a daunting and potentially “thankless” task.

    I think that the fundamental issue of concern is that it appears that your ONLY measure of competitiveness is size alone. I guess you just have to make that glaringly clear and acknowledge that the inherent quality of the teams in attendance is not a part of your methodology. As to that methodology, at face value it would seem like size would be indicative of a high level of competition, and it actually is in relation to the fact that many, many delegates are in attendance and all “competing.” So there is MUCH competition.

    But in reality it must also be recognized that the competitive quality of a conference is undeniably enhanced, exponentially, by the presence of top teams. Some people herein are talking about QUALITY and others (Best Delegate) are talking about SIZE. The size-alone argument would insinuate that if there was a conference attended by all of the top twenty-five rated teams, and each team brought twenty-five delegates, then that conference at most could be accorded “Regional” status and no more in Best Delegate’s methodology. So the issue is definitional and philosophical in nature, and neither position is inherently right or wrong, correct or incorrect.

    It would seem that the way to go in the future would be to develop a rubric/methodology that somehow combines the two approaches.

    On another note, I am positive that this year the Georgetown people who run NAIMUN claimed that their conference was the “largest in the Western hemisphere.” Were they wrong? Because if not, then NAIMUN should be at the top of the most challenging scale. And isn’t Harvard actually smaller in attendance than both NHSMUN and NAIMUN, yet it is given top billing? But I would agree that Harvard should be given top billing, because they combine comparatively high levels of attendance in general AND involve a significant number of top teams. Thanks again for all of your hard work.

    RWJ

    • AAF

      And in response, the NAIMUN website states, and I quote:
      “Each year, NAIMUN provides nearly 3000 high school students with innovative committees that draw in delegates from across the globe to partake in high quality debate and experiences unique to our nation’s capital. At NAIMUN XLVIII, we will continue to uphold the high standards of the Western Hemisphere’s largest and preeminent Model United Nations conference and introduce new initiatives to create a truly comprehensive and dynamic educational experience.”

      • AAF

        Conversely, the HMUN website states:
        “HMUN 2012, you will have the opportunity to meet and interact with more than 2,500 high school students from more than 200 high schools around the world. HMUN provides a host of substantive issues for this dynamic group of students and creates a forum for debate and negotiation among them – a truly worthwhile experience. Our conference is designed to foster negotiation and public speaking skills, introduce students to a range of global challenges, and provide culturally diverse perspectives on a variety of issues.”

    • KFC

      Hi RWJ,

      Thank you for recognizing our hard work and for the constructive feedback. We’re the first and only ones attempting to aggregate all the awards data and we realize that tweaks are necessary. I agree that ideally we should weight conferences by their actual strength of competition (much like how college football team rankings are weighted by “strength of schedule”), but we do not have complete information to do so at this point.

      NAIMUN is the largest conference in the Western hemisphere. However, we mentioned in the methodology that we re-weighted the seven “Most Competitive” conferences because our team had observed all of them first-hand and we sorted them by what we thought was the appropriate strength of competition — in essence, we tried doing exactly what you suggested in measuring a conference’s weighting with factors besides size alone.

  • AAF

    In reviewing a few delegate guides from years past, I have calculated that, on average, anywhere from 1/4 to 1/3 of the schools in attendance at the SSUNS conference in Montreal are from the United States.
    Additionally, as a former delegate who has attended this particular conference, I would argue that it is indeed of high enough quality to be considered, and there are enough American schools as well.

  • Timothy

    The methodology is slightly skewed. Competitive conferences should not be determined by the number of kids who go to the conference, but by the number of P5 (Top 5) and Top 10 schools, etc., in addition to consideration of the size of the conference.

    For instance, CMUNC was arguably the toughest conference of the year because the ratio of strong delegates to delegates from comparatively worse schools was higher. Imagine a GA of 100 kids, where 40% of the kids are extremely competitive and desire to be a coalition leader: that is a true test of skill to win. At NHSMUN, the GAs are huge, but if the number of competitive schools is not that large, the ability to win consistently comes far easier.

    I don’t believe any of the P5 schools attended NHSMUN this year with the exception of Mira Costa.

    I think that there is a California slant, as both of the writers were grounded in the Cali circuit. Even though my team isn’t in the top 5 (they are in the top 10), it’s pretty clear that Dalton, having won 21 conferences in a row and HMUN something like 5 times, is the deserving #1, as much as I hate to say it. It would be really close between WW-P and Mira Costa based on the data, but I do think that you guys have placed a tad too much emphasis on the difficulty of NHSMUN and the Cali conferences.

    Winning on the East Coast means having to contend with Dalton (ranked 2), WW-P (ranked 3), UChicago (ranked 4), Port Charlotte (ranked 5), and HMann (ranked 6). I personally believe that you are overemphasizing the difficulty of the West Coast conferences vs. the East Coast based on the P5 and Top 10.

    I still commend your efforts at centralizing MUN. Great job.

    • KFC

      Hi Timothy,

      Thank you for your interesting suggestion. Most of the comments above emphasize the need to rate the “actual” competitiveness of a conference instead of only using conference size when weighting a conference, but you’re the first one to offer a concrete suggestion, which is to count how many Top 25, Top 50, Top 75, etc. schools attend each conference and give each one a multiplier.

      I would disagree with the California bias. There are many more conferences on the East Coast and we spend most of our time liveblogging on the East Coast — we’re exposed to East Coast schools throughout the season more often than schools in California, the Midwest, or other parts of the country. In addition, the vast majority of the Top 40 conferences we used in our methodology, including five of the seven “Most Competitive” conferences, are located on the East Coast, so East Coast schools probably gained an advantage from this. In fact, we tried to make sure we didn’t have an East Coast bias — or any bias — when creating these standings.

    • KFC

      One note about weighting conferences without bias in the future, if we’re going to use “actual” competitiveness instead of just size: my opinion is that weighting by only the Top 10 isn’t the fairest metric. Top 25, Top 50, and Top 75 are better indicators. From our travels across the country, we know there are many schools that are pretty good competitively in other regions, but they simply don’t have as many opportunities to travel to compete in the major conferences that are primarily on the East Coast. Therefore, these schools tend to win awards locally but don’t have the strength of victories to break into a higher ranking, and in turn they would not provide a boost to the Top 25 schools that beat them even though in reality they may have been tough competition.

  • MR RM

    As a member of a highly-competitive Model UN team, I am very glad somebody has decided to put together these rankings and I am even more glad you gentlemen put so much time into things. I do have a few comments.
    You tend to use “total awards won” in your placings calculation at conferences. This is the easiest measure (easy being a relative term, with the hoops one has to go through to get full awards lists sometimes), but a computed ratio of award points per delegate is sometimes a better measure of school strength. This isn’t always true, as the extreme case would allow a team bringing a tiny delegation that all gaveled to have a better ratio than a school that brought 20 or 30 kids and had strong performances across the board, but sometimes ratios are worth taking into account when computing placings at competitive conferences.
    I don’t see a pro-California slant. Though I would personally put Dalton as number one on the strength of their defeating other Top-5 and Top-10 schools several times throughout the year, Mira Costa was the only California school represented in the Top 10. This makes it hard to argue for a systematic West Coast bias, and if anything California students could make their case for a pro-East Coast bias (though I doubt it would be a very convincing case). It is difficult to compare East Coast and West Coast programs because they compete against one another so infrequently, and I think KFC and Ryan have done a good job.

    • KFC

      Thank you for commending us on our work.

      We are using total awards won because that’s the data we can obtain. We don’t have data on how many delegates each team brought, so it’s difficult to determine what a team’s winning percentage is — even though some conferences use that to determine delegation awards. There are several instances where a “3rd place” or “4th place” team won more awards overall than the Best Small Delegation.

      • http://thsmun.org Kevin Trevithick

        I agree with your methodology here. I’m not sure why it’s hard to determine how many delegates each team brought, unless you are purposely trying to avoid “judging bias” by having individual registrations/assignments to avoid favoritism. More importantly, judging best delegation by percentage “punishes” a school for bringing lots of students, which should be one of the goals – to give the most students a growing experience.

  • Rahul

    Thanks again for creating the rankings; it’s great that you guys have actively pursued this project, a culmination of all your efforts. Teams that rarely encounter each other, from the Top 5 to the Top 25, will be able to appreciate the greater connections garnered from this ranking. Below is a proposed solution to the methodology arguments that takes into account quantifying the competitiveness of a conference, conference size, and local-area schools that do not have the luxury of attending larger conferences.

    Potential Solution:
    A potential solution to the methodology would be to weight conferences based on the number of Top 75 schools that attend, with elevated weight given to the Top 5, 10, 25 and any higher metrics that you determine. I think that adding this to the current methodology described in this post would be really effective in sorting the top teams.

    However, the local schools that may not have the financial luxury of attending more national conferences (which are weighted on a higher scale than local-area conferences) would suffer immensely. This would inhibit local-area schools from breaking into the Top 25. The solution would be to weight rank on three elements: conference size, competitiveness, and geography.

    In a geographic distribution, divisions can be created that rank local schools that often compete with each other. Rankings could then be sorted in two dimensions: pure rank, and a geographically weighted rank, where local-area schools would be ranked by state or by a larger geographic region, whichever is easier. For instance, states such as NJ, NY, and CA could have their individual divisions because of the competitiveness and number of teams per state, as opposed to geographic areas with a lower density of MUN teams.

    The aforementioned system is the common format of most competitive sports leagues, and inherently takes into consideration factors such as financial duress and isolation from the national conferences. Assuming that you use a point system (one that allows you to weight different delegation awards appropriately across conferences), you can then rank and recognize local teams in their respective geographic divisions, and then sort the Top 25 based on the highest number of points.

    While this would exclude the local schools you mentioned earlier from the Top 25, they would still have appropriate recognition in the geographic standings, as opposed to none.
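
    To illustrate the two dimensions (every school name, division, and point total below is just a placeholder, not real data):

    ```python
    # Placeholder data: schools, divisions, and point totals are purely illustrative.

    # Season points per school, after conference size/competitiveness weighting.
    season_points = {
        "School A": 120, "School B": 95, "School C": 90,
        "School D": 60,  "School E": 55, "School F": 40,
    }

    # Geographic divisions, drawn by density of competitive teams.
    divisions = {
        "Northeast": ["School A", "School B", "School D"],
        "Southeast": ["School C", "School E", "School F"],
    }

    # Dimension 1: pure national rank by total weighted points.
    national_rank = sorted(season_points, key=season_points.get, reverse=True)

    # Dimension 2: rank within each geographic division.
    division_rank = {
        name: sorted(schools, key=season_points.get, reverse=True)
        for name, schools in divisions.items()
    }

    print(national_rank)   # ['School A', 'School B', 'School C', ...]
    print(division_rank)   # {'Northeast': [...], 'Southeast': [...]}
    ```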

    Thanks again for the hard work,
    Rahul

    • KFC

      Hi Rahul,

      Thank you for this write-up. I think these are the best suggestions for improving the standings that I’ve read so far and I agree with everything you have stated. We’ll definitely take this into consideration.

  • http://www.mvhsmun.org Dominic Trevino

    This could be good. I now have something tangible to motivate my students with that they can grasp and understand. Thank you Ryan and KFC.

  • KFC

    We received a very well-written email about how Harvard HMUN and Georgetown NAIMUN are more competitive than conferences like NHSMUN and that therefore they should be weighted more heavily.

    I wanted to share my response:

    We don’t judge how conferences give awards. Every conference has a different philosophy, and schools need to understand that they may need to do different things to win at each one. While we agree that Harvard and Georgetown require a very competitive skill set to win, we also think a different set of skills is necessary to stand out at conferences like NHSMUN, where the philosophy and style may be different. We don’t judge which skill set, philosophy, or awards system is better (at least not yet). We believe the best teams can adapt to both or will choose conferences that match their own philosophy — we have written several articles about this (archived under the “Strategy” tab).

    The standings produced Mira Costa at #1 given that we remained neutral on conference philosophy and used size as a primary metric. If more weight were placed on conferences that value competition, then Dalton might get the edge there.

  • GREEN

    Is this going to be changed for next year, so that a school such as Gulf Coast can place? Please, if it is going to change, I urge you to get it out as fast as possible so we can shape our conference schedule for the Top 5.

    • KFC

      We’ll be releasing a post-standings article on April 8th that explains what changes might be taken into consideration and why schools should NOT schedule their conferences based off these standings.

  • Matt Barger

    Hey guys!

    Just wanted to pop in and say I really support what you’re doing here. It’s a great way to foster interregional interaction between high school programs.

    It is important to note, however, that these rankings should be taken with the same seriousness as nationwide high school football rankings. They will be biased toward larger programs with larger budgets, and it should be an honor to make it anywhere in the Top 25, especially for smaller-budget high schools.

    But, IMO, comparing the value of a top level New York school vs. a top level Atlanta school vs. a school like Mira Costa will not give much utility at the high school level, as aside from the national conferences, these schools may rarely see each other year after year. On the college level this is a much different story.

    One useful tactic that I’m sure you must have considered may be to have regional rankings in conjunction with or in addition to national rankings. That way, you can recognize more programs, account for unexpected volatility in conference attendance numbers, and continue chipping away at the large-school bias.

    In either case, I think the rankings can only be improved if you keep up this outstanding data collection and analysis.

    Keep up the great work!

    Cheers,
    Matt Barger
    President and Founder Emeritus, Seattle University Model United Nations

  • http://thsmun.org Kevin Trevithick

    As 20-year hosts of a pretty large conference (1,200-1,500 delegates) in SoCal, and having about 250 students in our program, we at Tustin High see a lot of good competition. One of the schools I would like to spotlight is Santa Margarita Catholic High School. They bring many well-prepared delegates to our conference and to many of the conferences we attend. They consistently win awards and do it “the right way” – not by going off-policy to team up for the purpose of advancing the success of one delegate.
    Santa Margarita also hosts a superb FRESHMAN NOVICE conference that gently “takes delegates by the hand” and introduces them to competition. Tustin High wouldn’t start our competition year any other way!

    • KFC

      Great to see a fellow Southern California school vouch for Santa Margarita. They have a great program and in fact placed just outside the Top 25 in the March 2011 edition of the rankings.

      As an alum of the Southern California high school MUN circuit, I should add that it goes without saying that Tustin runs an excellent Model UN program as well. Both Tustin and Santa Margarita teach their students the diplomatic way to do Model UN, host large conferences that many college programs would envy, and annually take their students to conferences abroad. If we did an article that highlighted the quality of MUN education (as opposed to merely the competitiveness of a club’s team), I’m sure Tustin and Santa Margarita would be mentioned in it.

  • Mo

    Where does SSUNS fall on this list? It’s a fairly large conference, although not as competitive. And how are awards there counted, since there are no tiered awards?

  • Wedlerd12

    Georgia Tech 2011 is expected to have over 1,000 students. Will the conference thus be moved up to the Large category?
