Ok, it probably doesn’t, but there’s a good chance that it’s not as prestigious as it’s supposed to be.
Today, universities are businesses. Not that they haven't always been, but now that anyone can access troves of information at the click of a mouse, a university's image is much more of a priority. But what, exactly, is this image supposed to portray? Should schools come off as warm and welcoming? Diverse and holistic? Astute and superior? Each of these questions is given considerable weight when crafting the public image of a university, which, in turn, affects our perceptions of how "good" a school that university actually is.
This is the dilemma behind national college rankings published by independent surveyors like Business Week, U.S. News and World Report, and The Financial Times. Take, for example, the ranking criteria used by U.S. News and World Report. A combined 40% of the weight is placed on two criteria called "Peer Assessment" and "Student Selectivity." But let's examine this further: USNWR defines "Peer Assessment" as "how the school is regarded by administrators at peer institutions." Now, this may seem fair, especially because it is the opinion of other school officials being counted and not that of mere laymen. Yet there is no denying that this category measures name recognition, visibility, and public acclaim within the university system. A full 25% of the score a university receives is based on this criterion alone. (Oh, and since we're talking percentages, USNWR provides exactly 0% rationale as to why these specific weights were chosen.)
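For concreteness, here is a minimal sketch of how a weighted ranking score of this kind could be computed. Only the 25% and 15% weights come from the article; the remaining 60% bucket, the school names, and every sub-score below are made-up assumptions, not US News's actual data or formula.

```python
# Sketch of a weighted ranking formula. The 0.25 and 0.15 weights are the ones
# cited above; "everything_else" and all sub-scores are hypothetical stand-ins.

weights = {
    "peer_assessment":     0.25,  # how the school is regarded by peers
    "student_selectivity": 0.15,  # driven largely by acceptance rate
    "everything_else":     0.60,  # placeholder for the remaining criteria
}

def overall_score(subscores):
    """Weighted sum of sub-scores, each on a 0-100 scale."""
    return sum(weights[k] * subscores[k] for k in weights)

# Two fictional schools, identical on everything except reputation and
# selectivity: name recognition alone opens a nine-point gap.
school_a = {"peer_assessment": 90, "student_selectivity": 85, "everything_else": 70}
school_b = {"peer_assessment": 70, "student_selectivity": 60, "everything_else": 70}

print(overall_score(school_a))  # 77.25
print(overall_score(school_b))  # 68.5
```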
This method is a problem because respondents have the option to omit schools they do not feel qualified to rate. Consequently, some schools are evaluated by more respondents than others, skewing the rankings. Further, this method produces artificially large differences among schools and even creates differences where none truly exist. For example, in 1998 US News reported that five schools (Yale, Harvard, Chicago, Columbia, and Michigan) tied for first place. It said the next four schools (Stanford, Berkeley, NYU, and Virginia) tied for sixth. This result could be obtained if all the respondents who evaluated the first five schools put them in the first quartile, and all but one of the respondents who evaluated the other four schools also put them in the first quartile. In other words, just one of the more than 400 people who returned questionnaires could change a school's rank on the "academic reputation" factor from first to sixth, which in turn could change its overall rank (such as knocking it out of the top five).
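To make that arithmetic concrete, here is a hypothetical illustration of the scenario; the rating scale, the round number of 400 respondents, and the scores are assumptions, not the actual survey data. A single respondent placing one school in the second rather than the first quartile is enough to separate its average reputation score from the schools tied at the top.

```python
# Hypothetical illustration of a quartile-based reputation score
# (4 = first quartile ... 1 = fourth quartile).

def reputation_score(ratings):
    """Average quartile rating across all respondents who rated the school."""
    return sum(ratings) / len(ratings)

# Every one of 400 respondents puts School A in the first quartile...
school_a_ratings = [4] * 400

# ...but a single respondent puts School B in the second quartile instead.
school_b_ratings = [4] * 399 + [3]

print(reputation_score(school_a_ratings))  # 4.0000
print(reputation_score(school_b_ratings))  # 3.9975 -- one response breaks the
                                           # tie and pushes the school down
                                           # once schools are ranked by score
```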
But wait, theeeeere's more! Another 15% of a college or university's final ranking is placed on a category called "Student Selectivity." This is where the big problem lies: as schools increase the sums of money allocated to marketing and promotional materials, they increase their exposure to potential students. As more students become aware of a school, more of them apply. The more applicants a school gets for a limited number of spaces, the more it is required to deny, thereby increasing its "Student Selectivity." The logic behind this criterion is therefore flawed: just because a large proportion of students are not admitted to an institution doesn't mean it is more prestigious; it simply means that, for one reason or another, more students are applying.
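A bit of toy arithmetic (with assumed numbers) shows why: with a fixed number of spots, the acceptance rate, and therefore the "selectivity" credit, is driven entirely by application volume, regardless of whether the entering class gets any stronger.

```python
# Toy arithmetic with assumed numbers: same 2,000 spots, but doubling the
# applicant pool halves the acceptance rate and makes the school look
# twice as "selective" without changing who actually enrolls.

spots = 2000

for applicants in (10_000, 20_000):
    acceptance_rate = spots / applicants
    print(f"{applicants:>6} applicants -> {acceptance_rate:.0%} admitted")

# Output:
#  10000 applicants -> 20% admitted
#  20000 applicants -> 10% admitted
```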
Taking this argument into account, we can deduce that money spent on advertising campaigns, expanded media outreach, additional marketing staff, and so forth is money not spent on improving the educational experience for students and faculty. What is happening is that schools are actually declining in educational capacity while improving in national rankings. It takes a little to get a little, right? Not when what you spend on and what you gain aren't the same thing.
It is criteria like these that artificially inflate and deflate the final ranking scores of universities. Take, for instance, the case of Trinity College. It began in 88th place in 2004, dropped to 111th in 2005, climbed back to 78th in 2006, and then moved up to 53rd in 2007. Last year, Trinity's ranking was 49th.
How is one supposed to explain such enormous and volatile fluctuations in a single university's ranking? Year-to-year changes in curriculum can in no way account for swings of this magnitude. So how, then, are we supposed to interpret such findings? We'll let you sit on that for a while.