
To add value to university rankings, try adding some value added

There is a plethora of university ranking systems, many of them generated by media outlets (and often a considerable source of revenue for those outlets). The best known include those from Shanghai Jiao Tong, Times Higher Education, U.S. News and World Report and, in Canada, the Globe and Mail and Maclean's. There are innumerable other university ranking projects around the world.

These various ranking systems use different performance measures and methodologies. They are imperfect at best.  They are routinely and roundly criticized.  Yet, they persist and are procreating at a rapid rate.  And, in spite of the critics and detractors, they are taken seriously by and influence the opinions and decisions of the public, university administrations, boards and students (especially international ones).  University rankings are not going away.  As Philip Altbach said in his article in Inside Higher Ed: “If rankings did not exist, someone would invent them.”

Given that they are here to stay, I offer three observations about university ranking systems.

First, in spite of their imperfections, they do have a certain face validity. I don't really care that much whether Harvard is ranked #1, #5 or #10 in the world, or whether McGill (full disclosure: my undergraduate alma mater) is ranked #1, #2 or #4 in Canada. But, regardless of the bias or orientation of the ranking system (see my comment below on value added), the acknowledged best universities typically sit higher in the rankings than other universities that, by general consensus, should be lower. This is how most rankings end up; if they don't, they are typically and rightfully dismissed. In sum, as long as one does not fixate on a university's specific ordinal ranking, the rankings typically get the general pecking order about right.

Second, each ranking system values some aspects of university life more than others, and each has some measurement or methodological deficiency. Sometimes these biases and deficiencies hurt a university's ranking; sometimes they help it. Some university administrators pay little attention to a ranking system's deficiencies when it helps their institution's standing, yet provide detailed criticisms of the same system when its idiosyncrasies hurt their standing. This is not becoming. The best university administrators neither gloat nor protest too much, but rather glean whatever useful information they can from the rankings to evaluate or improve their institution's performance.

Third (and this is my most important observation), most university ranking systems focus on some or all of the following characteristics: the exclusivity of the university (high marks needed to get in; number of applicants refused, etc.); its wealth (volumes in the library; endowment; resources per student, etc.); and the star power and reputation of the faculty (significant awards won; research revenue, etc.). The most significant variable upon which to rank universities, however, may be the degree to which the university experience added value to the personal and professional success its graduates would otherwise have enjoyed. Harvard graduates typically do well. But they may do well because of who they are and where they come from, not because of something that happened in the halls of Harvard. And the value added of Harvard may lie in the cohort it assembles, the networking it allows, or the branding a Harvard degree engenders, rather than in what happened in a classroom or in a student's interactions with faculty. Similarly, there must be higher education institutions out there that are not prestigious, exclusive or wealthy, but which take in students with poor prospects and, because of the educational experiences they provide, dramatically improve the life trajectories of those individuals. We rarely measure value added, but I look forward to more ranking systems that take it, and learning outcomes, into account.

To this end, HEQCO is organizing a conference for May 2011 to examine the different ways that various institutions and jurisdictions have approached the problem of measuring postsecondary learning outcomes and using these measures to evaluate the value of a postsecondary education.  As part of this conference, we also intend to pursue whether it is possible to rank postsecondary institutions based on their value added to learning rather than on the traditional input variables I note above.  Stay tuned to the HEQCO website for details about this conference.


Thanks for reading.
