

Stop the University Ranking Circus

It’s that time of the year again. Some 50 percent of your academic LinkedIn connections share that they are “happy” or even “thrilled” that their institution climbed a few places on the recently published Shanghai Ranking (officially known as the Academic Ranking of World Universities), while the other 50 percent remain remarkably silent. Marketing departments of the climbing universities produce hyped-up press releases, and journalists fill their pages with clickbait articles about “the 100 best universities in the world” and “the 10 biggest risers and fallers.” In response, some critics write op-eds about how ridiculous rankings are, and that’s that. Everything’s quiet again until the next ranking comes out and the circus starts all over again. I argue that we, as academics, should ignore rankings: they are misleading and harmful to academic values.

In our quantified society, it may seem as if something is only important and true if there’s a number attached to it: a metric, ranking score, or percentage. Measures and quantifications of various sorts come with an aura of objectivity, so, the thinking goes, we had better take them seriously. This widespread trust in numbers is one of the reasons they are so popular, but it’s also why we should be critical of them. While many arguments can be formulated against university rankings, I’ll focus here on three.

1) Rankings have little meaning

Yes, we can count publications, citations, and even quantify reputations using surveys – sometimes referred to as “quantified gossip.” And yes, we can add everything up and create composite scores. But what does this mean?

The more I ponder this question, the more questions I have. Does a university’s education quality indeed increase if someone who studied there 20 years ago receives a Fields Medal? There’s a story of a university that climbed 84 places on a particular ranking because one of their researchers was a co-author of a series of highly cited papers, but is that proportional? Is it meaningful to compare a humanities-focused university in the UK with a technically focused university in China?

The appeal of rankings of all sorts lies in their potential to simplify. Reality is far too complex to grasp, and a ranking reduces all of that to a one-dimensional scale. All qualitative differences are reduced to differences in quantity. Quantification enables comparisons between apples and oranges while camouflaging that such comparisons make little sense. Rankings impose an artificial order and sort universities from high to low. But does such an order make sense in the context of universities? It seems plausible to assume that the reputation of one university can improve while reputations of other universities remain constant. Yet, ranking logic suggests this is impossible: a university can only improve its reputation at the expense of another one (see also sociologist Jelena Brankovic, 2022). I see the appeal of one number “to rule them all,” but struggle to find meaning in aggregate university rankings.

2) Rankings are highly subjective and promote a competition frame

Today’s mantra seems to be: if it’s data-driven, it must be true. The detailed method sections of rankings such as the THE World University Ranking, the QS Ranking, and the Shanghai Ranking signal robustness and contain references to reputable sources. Yet, before any ranking can be made, at least three highly subjective choices must be made: 1) what to measure, 2) which indicators to use as proxies, and 3) what weights to assign to each indicator. Different choices will result in different rankings. To illustrate this: the weight of “education” in the three rankings mentioned above is 29.5 percent (THE), 10 percent (QS), and 10 percent (Shanghai), while the latter ranking measures it by counting Nobel laureates among alumni. Questionable proxies aside, the compilers of rankings wield disproportionate power over who will turn out to be the winners and losers.

Relatedly, all this talk about winners, losers, and climbing rankings suggests that overtaking other universities is indeed a possible and important task for universities. It reinforces the idea that science is all about competition among institutions, rather than a cooperative effort to enhance our understanding of the world. Furthermore, it sustains the belief that “more” (publications, citations) is always better, which in turn may trigger scientific misconduct, as I argued earlier on this platform.

3) Commercial ranking organizations shouldn’t dictate what a good university is

Rankings have real power over universities. When we repeat after them that University X is “the best in the world,” we may come to believe that this is indeed the case – without having seen or experienced University X ourselves. We thereby reproduce the reputation ranking, as we learn about the reputations of universities mainly through… rankings.

Universities that wish to do better in next year’s rankings will likely examine the indicators and weights used by the ranking organizations and try to become more like the “best” university in order to improve their score. This is probably one reason why some still consider rankings a good thing: they could incentivize universities to do “better.” However, this not only invites gaming and “indicatorism” – i.e., an obsessive focus on improving an indicator while losing sight of the actual goal – it is also inherently problematic: rankings favor uniformity over uniqueness. For instance, rankings often fail to appreciate local differences and small-scale, meaningful connections to local communities.

Furthermore, we should ask ourselves whether we really want to outsource the question of what constitutes a “good university” – or even “the best university.” Are commercial ranking organizations really best placed to tell the academic community what it should do? Or should we, as a local or (inter)national academic community, use our academic freedom in a more meaningful way and discuss these questions among ourselves? It’s up to us.

Replacing frames

Rankings only have power if they are taken seriously. The moment we stop taking them seriously, they lose that power. As argued above, there are good reasons to ignore them: university rankings have little meaning, are highly subjective, promote a competition frame, and thwart our academic freedom by letting commercial ranking organizations dictate what constitutes “a good university.”

So, let’s have (inter)national and/or local discussions among academics about what sorts of places we want our universities to be, without interference from commercial rankers. Let’s replace the frame of academia as a competition with a frame that recognizes and rewards cooperation. Let’s put an end to the ranking circus.
