
The Research Excellence Framework: A hydra-headed beast or an effective policy tool?

A web of intricate complexity

When I attend events on higher education policy with more than a handful of academics present, the conversation invariably descends to complaints about ‘marketisation’, ‘commodification’ and ‘neo-liberalism’. Beyond signalling a speaker’s general political outlook, these flabby concepts are used to indicate a dislike of metrics, league tables and other accountability measures. But their widespread use can wrongly imply the higher education sector has a single (negative) view about such assessments.

In contrast, when the higher education sector is asked to evaluate itself, as with the Research Excellence Framework (REF), managers are drawn to systems of enormously intricate and sensitive complexity, as the chapters here by Nick Ellison and Clare Viney help to explain. With the REF, the sector then opts to make the process even more complicated, as Peter Mandler’s piece makes clear – for example, by running voluntary initiatives like shadow REFs inside institutions, which serve as a dress rehearsal for the real thing.

It is all an amazing sight to behold. In scripture, the number seven is used for completion, so it seems fitting that once each septennium, this complex and carefully built web captures the brilliant research conducted throughout the UK, as Cara Aitchison’s chapter on Wales so clearly explains. This happens irrespective of whether the research is already well known or, as Diana Beech explains in her chapter, it has previously been overlooked. The REF is comprehensive, trusted and the envy of many other countries.

How the REF came to be

Helen Carasso’s contribution usefully reveals how today’s research assessment came to be and includes a consideration of the key drivers within the old University Grants Committee (UGC). People will invariably point out in response that the UGC is long gone while the REF marches on to tunes played in Whitehall and Westminster – and, as Iain Gillespie shows, Edinburgh. But this is too simplistic.

Policymakers typically follow the aphorism ‘don’t let the best be the enemy of the good’, meaning chasing perfection has an opportunity cost, and they tend to work at a coarser level of granularity than academics. So, if those conducting research in universities really do want a less nuanced, lighter touch and simpler assessment of what they do, then they could leave it to policymakers to impose something more straightforward, though it would have sharper edges.

Such a system would probably also have less money distributed on the back of it, for it is the very complexity and sensitivity of the current process, honed over decades, which makes the REF such an irresistibly powerful tool in the competition for resources against other publicly financed initiatives.

Hard cash

After the REF process is over, the results are converted into hard cash. This is a tricky operation, as Ellie Russell and Jennie Eldridge show. But it is another area where the critics who rush to bandy around claims of ‘neo-liberal marketisation’ get it wrong. There are no strings attached to the Quality Related (QR) funding that flows from the REF.

It is sometimes said the perfect university funding model would entail a lorry loaded with cash arriving once a year to disgorge its contents before disappearing with no questions asked, and then magically reappearing a year later. That way, academics could be left to get on with their work. QR funding, in some respects, is as close to that lorry as it is possible to get in the modern world, at least when it comes to public spending rather than endowments.

This has led to complaints that the QR money is ‘a slush fund’ for vice-chancellors. This is the wrong way to think about it, and not just because of the deep and deepening shortfalls in project-based research funding, which mean university managers have less leeway in practice when deciding where to spend their QR funding. It is also the wrong way to think about it because of the depth of historic accountability upon which the detailed REF process is based.

To put it bluntly, if every £1 of public money were awarded on the same sort of rigorous peer-reviewed assessment of past performance, we would be better off as a country. Whatever financial advisers might be obliged to say, the past is often a fairly good guide to the future – in this case partly because if you spend the money well, it will set you up admirably for the subsequent REF, producing a virtuous circle.

Autonomy versus accountability

The REF methodology is also appropriate because it incorporates a deep respect for the institutional autonomy on which the success of the UK higher education sector is founded. Indeed, because the REF is backward-looking while the next REF is always hovering somewhere above you, the exercise has stumbled upon a sweet spot between accountability and autonomy. Day-to-day politics have been removed while democratic accountability has been retained. The time lag built into the REF also provides important (though not fool-proof) insulation against the risk of culture wars spilling over into decisions about university research, as has occurred in Australia.

Despite its benefits, no evaluation system that is trying to tick so many boxes simultaneously is going to be perfect. The REF process is too complicated, too laborious and too unpopular to be free of imperfections. However, the research evaluation process has never been static, with one notable recent change being the introduction of – and then the increase in the relative weighting of – impact. Another important shift has been the move to open-access research outputs, which is helping to reduce (but not entirely bridge) the gap between those who pay for research and those who conduct it. Over the years, there has – rightly – been greater focus on ensuring the process gives more weight to concerns about societal inequalities too.

Indeed, one of the most impressive features of the REF has been how much those who have owned the initiative in policy terms – such as David Sweeney, whose piece closes this collection, and Steven Hill and Kim Hackett at Research England – have been willing to respond to changing events, constructive criticism and new evidence. Admittedly, there is an incentive for them to do so built into the process, because it is harder to cheat or game a system when the target is moving, but that again just reminds us of the REF’s strengths.

Preparing for the next wave

This raises the question of how the REF might be changed for the next wave. The Foreword here by Bahram Bekhradnia, who has a good claim to be the godfather of research assessment in the UK, suggests it is time for ‘a fundamental rethink’. Given the shifting political backdrop, the formal Future Research Assessment Programme and the fact that the new Executive Chair of Research England is likely to want to stamp her mark on the process, significant change is more likely than not.

While James Wilsdon warns about adopting even more ‘simultaneous objectives’ for the exercise, the chapters gathered here collectively suggest some changes that might sensibly be made. As a think-tank Director who swims in policy for a living, I want to dwell upon two more.

First, when it comes to the REF, it continues to feel as if non-traditional research outputs – such as think-tank publications – resemble a square peg in a round hole. Yet these can be a very quick road to impact, and those academics who are most focused on having influence do nonetheless find ways to hammer the peg in. For example, the rich Impact Case Study database for REF 2021 shows how a number of influential academics researching higher education have used HEPI output, which is not generally regarded as so clearly ‘REF-able’ as a monograph or journal article, to prove their impact:

  1. Professor Robin Middlehurst of Kingston University referred to her influential HEPI report on ‘alternative providers’;
  2. Professor Neil Morris of the University of Leeds referred to his HEPI blog on ‘the unbundled university’;
  3. Professor Nicola McLelland of the University of Nottingham noted how the findings from her research had informed a HEPI paper on the decline in language learning; and
  4. Professor Claire Callender of Birkbeck, University of London, referred to her chapter in a HEPI collection on the decline in part-time students.

Even though the least well-read think-tank paper will be seen by far more people – overall as well as within the corridors of power – than the average piece of academic output, researchers are still more drawn towards traditional outputs than those more likely to fall into the hands of policymakers. This is not an attack on traditional academic publishing: think-tank reports or other accessible or popular versions of academic work serve a different purpose and are often a supplementary route for getting the same sort of information into different places, while acting as an advert for the underlying research. A REF process in which both the rules and the way they are implemented did more to encourage the submission of a wider range of outputs would feel appropriate as we approach the second quarter of the twenty-first century.

Secondly, the inextricable link between undergraduate tuition fees and access to those Research England funds that are distributed on the back of the REF is a little strange. Unless you are in the Approved (fee cap) part of the Office for Students’ Register, which limits your full-time undergraduate fees for home students to £9,250, you have no access to the QR funding that is distributed on the basis of institutions’ REF results.

If there were ever a rationale for limiting QR funding to such a subset of institutions, which I doubt given other well-respected institutions are also research active, then it no longer exists. Teaching and research were both once jointly funded in England by the Higher Education Funding Council for England (Hefce) but, when Research England was spun out of Hefce and into UKRI and once Hefce was replaced by the Office for Students, a new wedge was driven between teaching and research. This split was confirmed when the Minister for Science job, having been separated from the Minister for Higher Education job, was then given to two separate people. Whereas public policy used to embed the idea of a nexus between teaching and research, it no longer clearly does so.

If Quality Related research funding is designed to fund the best quality research, why limit it to institutions that have taken one particular approach to setting their fees for undergraduate courses? As research budgets grow, this area should be looked at afresh, perhaps as part of the formal post-legislative scrutiny of the Higher Education and Research Act (2017) or as part of the forthcoming Higher Education Bill. Otherwise, there is a risk the current rules will come to seem like monopolistic behaviour stemming from an unholy alliance between existing universities wanting to hold back competition and the Government wanting to limit demands on public funds.

Conclusion

This collection does not claim to cover every aspect of the REF. For example, there is not very much in the pages that follow on the ways that funding is allocated on the back of the REF process, and how this differs across the UK. But the chapters should be read alongside the other contributions HEPI has already published, and will continue to publish, on our website and elsewhere.

Research assessment exercises have been around for longer than most British universities and, despite the commitment to ‘fundamentally rethink the assessment of research’ in Labour’s manifesto for the 2019 General Election, they are unlikely to disappear any time soon. The general concept of closely evaluating research might be approaching middle age and may even be on the cusp of a mid-life crisis, but it has also notched up a good record of achievement. It seems likely there are too many people around who value its contribution to push it into early retirement any time soon.

Nick Hillman, Director of HEPI 

The above article is a chapter by Nick Hillman from Research Evaluation: The past, present and future of research assessment, launched on 1 September 2022.

The Higher Education Policy Institute has published a new collection of pieces entitled Research Evaluation: Past, present and future (HEPI Report 152), edited by Dr Laura Brassington. The dozen different authors consider the origins of UK research evaluation, the outcome of the latest Research Excellence Framework (REF 2021) and options for the future of research assessment.

