
Study warns of ‘drift towards symbolism’ in impact rankings

A new study argues that a major global university ranking, which measures institutions’ contributions to sustainability, encourages symbolic achievements over substantive change, and invites scholars, policymakers and university leaders to “reconsider how impact is defined, measured and incentivised in the pursuit of sustainable development”.

According to “The illusion of impact: symbolic sustainability and the managerialisation of sustainable development goals in higher education”, the design and implementation of the Times Higher Education (THE) Impact Rankings, which assess universities’ contributions to the United Nations’ 17 Sustainable Development Goals, encourage symbolic sustainability, wherein institutions prioritise reputation management and procedural compliance over substantive change.

Authored by I-Ting Chen and Konstantin Karl Weicht at the College of Management and Smart Sustainability, Tzu Chi University in Taiwan, the study, published in Studies in Higher Education on 30 April, provides “a novel conceptual lens for examining the unintended consequences of performance-based evaluation frameworks”.

Anchored in critical management studies, audit culture, and institutional theory, the study interrogates key methodological and ethical flaws, including overreliance on self-reported data, opaque validation mechanisms, and the reproduction of isomorphic behaviours within global higher education.

Through institutional reflection and practitioner insights, the study reveals how performance metrics can demotivate internal change agents, mask structural inequalities, and divert institutional resources away from socially or environmentally impactful initiatives, calling for stakeholders to reconsider how impact is defined, measured, and incentivised.

The ‘progress myth’

Weicht told University World News the significance of the study lies in challenging the “progress myth” of sustainability.

Weicht said the study showed that sustainability in universities was not “a simple ladder”.

“We found it is more like a slippery slope: when schools focus too much on ‘winning’ at rankings, they can actually slide backward. They end up doing ‘zombie sustainability’ – looking great on paper while doing very little real-world good.”

To capture this, the authors built a conceptual yardstick to compare how universities actually behave against how they should behave in an ideal world.

“Extending the Corporate Social Responsibility (CSR) maturity model, the study introduces two additional stages – Stage 0 (Idleness) and Stage -1 (Regressive Compliance) – to capture forms of stagnation and regression exacerbated by ranking pressures.

“We were able to map out nine distinct stages – ranging from institutions that are actually causing harm (-1) to those that are truly leading the way,” Weicht said.

“We found that institutional engagement is rarely linear; instead, it is a precarious process where ‘one step forward’ can easily lead to ‘two steps back’,” he said.

“Crucially, under the pressure of global ranking metrics, institutions can actually regress into what we call Regressive Compliance (-1). At this stage, the pursuit of data becomes counter-effective, creating a ‘zombie compliance’ that masks structural inequalities and exhausts internal change agents.”

According to the expanded model, typical behaviours of Stage -1 include “strategically manipulating indicators, selectively reporting favourable data, or privileging reputational signalling over substantive progress”.

The study notes that universities may “recycle outdated submissions, exploit opaque ranking methodologies, or leverage positions in international rankings for external marketing while neglecting internal reform”.

The study goes on to say that limited SDG budgets “may be directed toward symbolic rather than structural initiatives, privileging short-term reputational gains over long-term outcomes.

“In extreme cases, institutions construct a ‘shadow infrastructure’ that sustains appearances of engagement without corresponding investment in policy development, capacity building, or resource allocation”.

Methodological flaws

Weicht said he and his co-author documented a number of flaws in the THE ranking methodology, including the “Paywall Paradox”, where THE locks the “Impact Dashboard” behind a fee, creating a two-tiered system in which only wealthy institutions can access comparative insights – directly violating the spirit of SDG 10 (Reduced inequalities).

There were also “arbitrary rule changes”, said Weicht.

“We documented how THE recently reduced allowed evidence submissions from three documents to just one, forcing complex institutional work into a single format and increasing subjective reviewer bias,” he said.

Then there is the “knowledge lag”, where rankings released in 2026 rely on data submitted for activities in 2023. “This ‘frozen snapshot’ rewards outdated historical reputations rather than real-time sustainability action,” Weicht said.

He said the duo also identified “measurement inequity”, showing how the framework “fails to differentiate between the water-energy demands of residential vs non-residential campuses, effectively penalising operational scale rather than actual inefficiency”.

“The key takeaway message for university leaders is that we must shift our focus from auditing ‘illusionary’ outputs to fostering the organisational conditions that make genuine SDG integrity possible,” said Weicht.

Careful design needed

University World News reached out to ShanghaiRanking, QS World University Rankings and THE World University Rankings for their views on the study’s findings but received a response only from QS.

Ben Sowter, QS senior vice-president, told University World News: “When designed carefully, rankings can do more than signal performance; they can help raise standards and accelerate change.”

However, he said the tension identified by Chen and Weicht’s study was “real”.

“Any performance framework creates incentives, and not all of them are the ones intended. The study’s principal concerns – overreliance on self-reported data, opaque validation mechanisms, and the risk of procedural compliance substituting for substantive change – are specific to the framework under examination,” Sowter said.

“At the same time, context matters,” Sowter added.

“A decade ago, sustainability barely featured on most institutional agendas. The emergence of global frameworks and rankings, including the QS World University Rankings: Sustainability, has helped move it from a peripheral concern to a strategic priority, creating a shared language, greater transparency, and a degree of accountability across very different systems.

“In developing our approach, we have made deliberate design choices with the intent of making our framework as relevant and rigorous as possible at global scale,” Sowter said.

“We draw on independently verifiable data sources wherever possible, and we place emphasis on evidence of real-world impact rather than activity or process compliance alone.

“We are also mindful of structural limitations that affect opt-in frameworks more broadly,” he said.

Sowter conceded that when participation is voluntary, “some key actors inevitably remain outside the scope of accountability. Moreover, where only a subset of categories is required to qualify for an overall ranking, institutions may concentrate efforts on what is easiest to improve rather than what is most impactful”.

Sowter noted: “By contrast, our framework includes all institutions that meet the eligibility criteria in the analysis.

“No framework is perfect and all require continuous refinement. The more useful question, and one this study helps to advance, is how measurement systems can evolve to incentivise genuine progress rather than the appearance of it.”

‘Not a meaningful measure’

Professor Ellen Hazelkorn, joint managing partner at BH Associates education consultants, told University World News she agreed with the findings of the study by Chen and Weicht.

“The Impact Ranking is not a meaningful measure. It combines research indicators with significant amounts of self-reported data,” said Hazelkorn, who is also a professor emeritus at Technological University Dublin in Ireland.

Dr Savo Heleta, a researcher, internationalisation scholar, and analyst, agreed. “University rankings are as popular as ever, yet as the critics rightly point out, they employ simplistic methodologies and in most cases rely on self-reporting by institutions,” he said.

He said, in essence, rankings were no more than “public relations exercises” for universities.

“The same goes for the Times Higher Education Impact Rankings,” Heleta said.

“Instead of genuinely measuring the contribution by universities to the implementation of the SDGs and climate action and sustainability more broadly, these rankings allow universities to sugar-coat their activities so they can get what they see as positive public relations from appearing in the rankings.”

Angel Calderon, director of strategic insights at RMIT University in Australia and author of “Sustainability Rankings: What they are about and how to make them meaningful”, told University World News that the study reaffirmed “longstanding critiques” advanced by ranking experts and scholars regarding the THE Impact Rankings.

“Specifically, it demonstrates that the rankings fail to adequately account for structural differences and contextual variation across higher education systems,” he said.

“Furthermore, the homogenisation of assessment frameworks limits the recognition of institutional distinctiveness, while geographical and ecological factors remain largely invisible within the rankings process.”

He said the authors discuss corporate practices that have given rise to an “extensive industry” of consultants, auditors, and compliance professionals.

“Should similar practices be adopted within higher education, they are likely to impose significant costs on universities and increase administrative burdens, with limited evidence of corresponding institutional benefit.

“In this context, universities increasingly find themselves dependent on external entities to demonstrate progress toward their missions, strategic objectives, and contributions to societal wellbeing,” Calderon said.

However, he admitted there were “exemplar institutions” that have integrated sustainability reporting into their overarching strategic reporting frameworks, thereby achieving a level of maturity that moves beyond compliance-driven approaches associated with global ranking systems.

“I believe there are institutions across different national systems which have developed robust data infrastructures and governance processes to quality assure institutional data and generate meaningful insights that support evidence-informed decision-making,” Calderon said.

“However, this practice is not spread across the world, and there are gaps in data management and capacity building,” he said.

Universities’ responsibility

Calderon said university leaders needed to be cognisant of both the benefits and limitations of participating in external assessment frameworks such as the THE Impact Rankings and make considered judgements about the value of engagement.

This required leadership to “look beyond the pursuit of external recognition and prioritise meaningful engagement with the communities they serve and operate within”, he said.

Expanding further, Heleta said: “As the authors point out in the study, universities have a responsibility to support climate action and sustainability efforts transparently, ethically, and with integrity.”

However, “none of that was being done through participation in the Times Higher Education Impact Rankings or other similar rankings”, Heleta said.

Heleta said change was urgently needed and anyone working or studying at universities should be demanding fundamental change within their own institutions.

“Instead of wasting time and money chasing a placement in the rankings, we need genuine commitment and actions that contribute to climate justice and sustainability,” Heleta said.

Ranking as a business

Hazelkorn also raised concerns about how the portfolio of evidence submitted by each institution is stored and evaluated – a process that “adds to THE’s privatisation of institutional-public data”.

“Evaluation has always been behind closed doors, and now THE is using AI … which should be even more worrisome,” Hazelkorn said. “Another concern is, as the authors point out, the use of self-reported data.”

She said while disproportionate emphasis on rankings by governments, investors and students had placed pressure on institutions to do all they could to improve their standing, she was also concerned about institutions and governments hiring THE to provide consultancy services to help them rise more effectively in the rankings.

“This is effectively where the real business lies. So, rankings are simply a door-opener,” Hazelkorn said. “The Impact Ranking encourages other perverse behaviour, for example adding terms such as ‘sustainability’ wherever possible in the curriculum or activities.

“Ultimately, it is very difficult to provide meaningful and measurable evidence of ‘substantive change’,” Hazelkorn said, before adding: “It is time to grasp the nettle and consider regulation, governance, and data ownership issues of the rankings, publishing, and data analytics business.”

Calderon said the adoption of AI in the evaluation of rankings represents a significant shift in ranking administration; it also introduces “additional sources of variability and risks further entrenching existing institutional inequalities”.

He said THE needs to provide greater clarity about how AI is used to assess institutions and involve an external panel of assessors and experts (not consultants) to corroborate results.

Interest in rankings

Duncan Ross, former chief data officer of Times Higher Education from 2015 to 2025, told University World News the authors of the new study provide “no empirical evidence to back up their claims” – something that was “disappointing, but not uncommon in a criticism of rankings”.

He noted that their focus “seems to fly in the face of the continued interest in the rankings from universities worldwide – universities with significant pressures on their time and resources, who would simply not choose to participate if they didn’t see value”.

Ross said the authors also “misrepresent the position of THE in terms of data-sharing and transparency around methodology (which, the last time I looked, was over 150 pages long!), as evidenced in the changes announced by THE around rankings membership and data access”.

Weicht said Ross’ claim concerning a lack of empirical evidence was “simply false”.

“Our research is grounded in three years of systematic, practitioner-led data which demonstrates how the ranking’s methodological instability traps universities in ‘rituals of verification’,” he said.

“High participation rates do not prove a ranking’s value; they prove the coercive power of reputational pressure,” Weicht said.

“While the former official defends a system that locks comparative sustainability insights behind commercial paywalls, our recent research, published in mid-April, offers a constructive alternative: a 9-stage Maturity Model designed to help universities achieve the authentic SDG stewardship that current rankings fail to measure.”

Heleta said he was not surprised to see the people linked to the companies involved in the production of higher education rankings defend them and speak badly of those who criticise the rankings industry.

“This is their very profitable business model and they will do all it takes to keep making money.”
