About this column:

In a weekly column, alternately written by Eveline van Zeeland, Derek Jan Fikkers, Eugène Franken, JP Kroeger, Katleen Gabriels, Carina Weijma, Bernd Maier-Leppla, Willemijn Brouwer and Colinda de Beer, Innovation Origins tries to figure out what the future will look like. These columnists, sometimes joined by guest bloggers, all work in their own way to find solutions to the problems of our time. Here are all the previous installments.

Not long ago, late summer was the season of Nobel prizes, when universities held their breath and waited for the Nobel Prize Committee to reward science. Nowadays, late summer is ‘ranking season’, and universities are much more interested in what the ‘ranking houses’ have to say about science. Which is a great shame.

The huge impact of rankings

Granted, it is easy to explain why these rankings have become so important. Most universities are government-funded, and naturally, the government wants to know that it’s getting good value for money, just as a company’s shareholders want to know how big this year’s dividend will be. Governments, and later universities, started developing more ‘businesslike’ ways of working in the 1980s. They dubbed it New Public Management. At its core was a focus on accountability, key performance indicators (KPIs), and results.

In this context, rankings make sense. Governments, especially in emerging countries, use rankings to decide which universities to fund. Students use them to decide where to study, companies use them to decide which universities to work with, and funders use them to decide who gets a study grant and who doesn’t. And – last but not least – universities often use them to assess the quality of their own work or to force through internal changes. In other words: rankings have a huge impact. Yet by heeding them, we are effectively shooting ourselves in the foot.

Why we shouldn’t take rankings quite so seriously

The use of rankings, however, comes with several caveats. Firstly, there is the objection that they are almost always limited to research. The quality of a university’s education hardly counts. As a result, universities that want to rise in the rankings start focusing more on research and less on education – precisely at a time when society most needs good education.

The second objection is that rankings only look backwards. The universities currently at the top are there thanks to publications that are several years old, based on even older research. If a researcher wins a Nobel Prize in November, their university rises in the rankings a year later, even though the actual work was probably done more than twenty years ago.

Thirdly, rankings lead to perverse copying behavior: the so-called ‘McDonaldization’ of research. After all, the best way to climb the ranks is to copy the winner. In other words, everyone tries to mimic Harvard. But this is a death knell for innovative universities that focus on the future, for instance, by educating first-generation students or by carrying out research into serious societal challenges in collaboration with companies. Those universities will fall behind in the rankings. It is also problematic for the social sciences and humanities, as well as for disciplines in which English is not necessarily the working language.

The inevitable fate of European universities

Fourthly, rankings are fast turning into ‘bad news shows’. Because emerging countries are investing so much in their universities, European universities are finding it practically impossible to rise in the rankings. For most, their inevitable fate is a gradual decline. On 11 October, the Times Higher Education (THE) rankings will be published – one of the three largest rankings. Sneak preview: this year Asia will outstrip Europe as the best-represented continent… 

Lastly, the rankings are not even transparent. Rankings reduce unbelievably complex, dynamic, and ambitious organizations to a few figures. But almost no one knows exactly what caused a given rise or fall, and lamentably, the ranking houses are practically unaccountable regarding their judgments. This alone should be enough to persuade everyone – and certainly the scientific world – to ignore the rankings altogether.

Are rankings worth all the attention?

Rankings are deadly to innovativeness and creativity! Yet rankings have now become an integral part of the academic world, and their annual publication has become almost a tradition in itself. But we would be well-advised to pay a bit less attention to them; the fundamental objections I have just listed will hopefully have made this clear. Dutch universities should be less keen to jump through that little hoop.
