I recently attended a meeting discussing the national REF results for Sociology that was convened by the British Sociological Association (BSA), the Heads and Professors of Sociology (HaPS), and the Sociology REF sub-panel, including the Chair, Sylvia Walby. It was a real pleasure to spend the afternoon with sociologists undertaking clear-sighted and critical analysis in a collegial context. The general health of sociological research across the sector is robust, and the Sociology sub-panel is now a thriving part of the broader ecosystem, with an increased number of submissions.

There was a general sense of validation of the work that is done by sociologists, whether in Sociology departments or located elsewhere, across diverse fields of research. Amidst the positivity, concern was expressed about the consequences of reframing the profiles produced by REF into rank orders such as those produced by the Times Higher Education (THE). As with the NSS, rank orders are often the consequence of marginal differences across units. Focusing on the rank order distracts us from acknowledging the quality and vitality of research across the sector.

One suggestion that emerged from the meeting was that we could shift attention from ranking to benchmarking. This would mean that, so long as units of assessment met the benchmark, any associated income would be divided equally among all units with broadly similar profiles (a return to an earlier REF model, perhaps?). This would be one way to address the problematic aspects of ranking that make the REF unpopular among colleagues, while still ensuring the quality of the research being produced. More importantly, it would also provide a powerful defence of continuing QR funding while reducing the stresses of such exercises.

Many of the questions asked at the meeting necessarily had inconclusive answers, given that not all the associated data has yet been made public. One such question concerned what the improvement in REF scores should be attributed to. Was it a consequence of an improvement in the quality of submissions? Of changes to guidance, in terms of the selectivity of outputs? Of changes to the rules and metrics, for example, the increase in the weight of the impact score from 20% to 25% in the overall profile? Or of changes in the composition of the sub-panels?

At present, it is not possible to answer such questions definitively, but it is worth thinking about them, particularly in terms of what their consequences might be. For example, THE reported that “A single output was submitted for 44 per cent of researchers who participated in REF 2021 … The maximum of five outputs was, meanwhile, attributed to about 10 per cent of submitted staff.” The selectivity previously applied to researchers appears to have translated seamlessly into the selectivity of outputs, and the local consequences of this need to be monitored. The shift from assessing individuals to assessing the unit, while welcome in many respects, will also have implications for how issues of EDI are addressed.

We should be aware also that there are sectoral moves to reconfigure the REF. These are focused around the ‘Future Research Assessment Programme’ currently underway at UKRI. In a recent article, James Wilsdon of the University of Sheffield’s Research on Research Institute suggested that the evaluation of research would be better done at a level above disciplinary units of assessment and should be forward oriented, rather than the current retrospective focus. Such developments would potentially have serious implications for Sociology and other disciplines.

While we should be pleased that the current REF cycle is complete and that Sociology is in a healthy condition, we need to pay attention to what is currently being formulated for future research evaluation exercises.