Metrics, Marketisation and the End of Collegiality

Issue 41: Metrics, Wed 18 Apr 2018

John Holmwood, University of Nottingham

The UK (or more specifically, England) is at the forefront of the application of market mechanisms to higher education, with serious consequences for academics. Other jurisdictions are adopting similar techniques of academic governance, with varying degrees of sophistication in the methods used. Paradoxically, such techniques are less evident in decentralised systems of higher education with significant numbers of private institutions (such as the US), and most common in centralised, public systems (such as the UK), where they reflect the application of principles of new public management designed to secure public-sector accountability.

Whereas the sociological literature describes the professionalisation of public service and the incorporation of professional self-regulation under democratic values, new public management involves the application of governance measures from the private sector to the public sector (for discussion, see Holmwood 2016). This involves a shift from ‘trust’ in professionals to transparent accountability, with the latter secured through the evaluation of comparative performance. In effect, collegial governance is replaced by managerial command and performance targets.

Although metrics are often regarded as proxy market measures, following the Higher Education and Research Act (HERA) of 2017 their use in English higher education is now associated with direct marketisation (for discussion, see Holmwood 2017). While the market is often presented in terms of the neutral values of efficiency and transparency, the privatisation of higher education undermines its public values and reduces it to the service of investment in human capital, economic growth and the realisation of profit. For example, HERA 2017 purports to provide a ‘level playing field’ for the entry of for-profit providers. These providers are teaching-only and, in consequence, place the research functions of traditional universities under pressure. Increasingly, traditional universities must find separate funding for research (itself following a market logic), since there should be no (unfair) cross-subsidies between teaching revenue and research. By this means, teaching comes to depend on research for which it provides no payment. Indeed, part of the logic of Open Access is to provide free curriculum material to for-profit providers.

There is now a proliferation of audit measures relating to research (via the RAE/REF, conducted every five to seven years, and local annual ‘shadow’ readiness exercises) and to teaching (via the annual National Student Survey and other local evaluations). There are also national time-allocation budget processes (via TRAC, the Transparent Approach to Costing), designed to identify the proportion of academic time spent on teaching, research and administration for the purpose of assigning charges for estates, computing, library and other general facilities. This supposedly provides the full economic cost of different activities (for discussion, see Holmwood 2011a). Students, as beneficiaries of the return on their investment in human capital, are now required to pay through tuition fees. However, the seemingly equivalent requirement that all publicly-funded research should have impact requires the identification of users or beneficiaries of research (Holmwood 2011b). In this case, the beneficiary need not pay! The consequence of the impact agenda is that all research becomes defined as serving a specific interest or utility, with no regard for wider public culture or the facilitation of democratic debate.
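
To make the costing logic concrete, here is a minimal sketch of how TRAC-style time-allocation returns might be turned into a full economic cost per activity. All categories and figures are invented for the purposes of illustration; the actual TRAC methodology involves institution-specific cost drivers and rates.

```python
# Illustrative sketch of TRAC-style full economic costing.
# All activity categories, salaries and overheads are invented.

# Share of working time reported against each approved activity.
time_allocation = {"teaching": 0.45, "research": 0.40, "admin": 0.15}

staff_cost = 60_000      # hypothetical salary plus on-costs
indirect_costs = 25_000  # hypothetical share of estates, library, IT

def full_economic_cost(activity: str) -> float:
    """Attribute direct and indirect costs to an activity in
    proportion to the time reportedly spent on it."""
    return time_allocation[activity] * (staff_cost + indirect_costs)

for activity in time_allocation:
    print(f"{activity}: £{full_economic_cost(activity):,.0f}")
# teaching: £38,250; research: £34,000; admin: £12,750
```

The point is that once time is partitioned in this way, every hour acquires a price, and any cross-subsidy between activities becomes visible and contestable.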

Most of the associated audit measures have required active academic engagement and complicity. This involves not only the filling out of forms and participation in the RAE/REF and its peer-review evaluations. Increasingly, data is also collected as administrative data by universities, or is generated automatically by the everyday practices of academics (downloading articles, citing them, and so on) through Google Scholar and ResearchGate, or in the download data collected by journals, and is represented in various kinds of metrics. In the language of the sociology of science, the new audit measures are all co-produced, creating an external environment of evaluation which is mirrored in the internal processes of individual institutions.

These measures frequently lack validity, but they have come to represent the ‘truth’ of academic excellence, with all those who are successful within them promoting them as proof of status. For example, the body that used to administer the NSS, the Higher Education Funding Council for England (Hefce), declared that it could not be used to make comparisons across universities or across subjects, because scores reflected student characteristics (different proportions of ethnic minorities, or of students from different socio-economic backgrounds, etc) rather than the qualities of the courses (see Holmwood 2011c). Yet the Russell Group of self-described elite universities trumpets its success (in itself no more than a few percentage points better than the sector average). Senior managers within individual institutions are ever more conscious of a subject area’s place within the rankings (compared with other subjects across the university and with the same subject across universities), notwithstanding that the scores show a high degree of student satisfaction across all subjects.

Meanwhile, the audit screw becomes ever tighter. For example, while the UK REF deals in aggregates, with the scores attributed to individual ‘outputs’ anonymised, the opposite is now the case in local ‘shadow’ exercises. Here outputs are evaluated and scored for named individuals, with those scores made available to senior managers. ‘Shadow’ audit has fewer safeguards than the national exercise (for example, for equalities issues or for the calibration of judgements), yet careers are now dependent on success within it, and the judgement that outputs are 2* or less is sufficient to trigger performance management, changes to contract, and even dismissal.

This is closely allied to TRAC. Increasingly, the methodology is applied by universities (my own, for example) to assign activities, and the time allotted to them, to individual academics. This involves allotting each hour of a normal academic week/year to ‘approved’ activities, standardised across the university. Activities are ‘approved’ by the Head of Department and by senior managers and planners; a database of all members of staff and their details is available to all managers. This data can be cross-linked to teaching evaluation scores and research output scores. However, for reasons of confidentiality and data protection, the data is not available to colleagues, undermining the collegial organisation of tasks and planning of departmental activities, which is now devolved to management.


Squeeze the fruit until the pips squeak, photograph by Richard Beban

Enter Big Data for profit. Big data is frequently represented as a means of ensuring transparency. For this to take place, all public data needs to be made ‘open’, including publicly-funded research produced by academics and their ‘publicly-funded’ publications. However, the purpose is also to commercialise data by mixing public, commercial and administrative data. This creates an opportunity for new commercial activities in the construction of mixed data sets and analytical techniques. These represent a new enclosure movement against the digital commons, often by academic ‘privateers’ (see Holmwood 2013). For example, Academic Analytics is a Stony Brook University spin-out company. According to its website, it is “a provider of high-quality, custom business intelligence data and solutions for research universities in the United States and the United Kingdom.” Because its activities are private, they are not subject to public scrutiny or the requirements of research ethics committees; its services sit behind a paywall and are offered on a for-profit basis, purchased by senior managers as planning and management information providing ‘comparisons to national benchmarks’.

The UK REF remains ostensibly a system of evaluation based upon peer review, but it has been supplanted by local metric-based systems. ‘Big data’ is increasingly used to predict and manage performance and to benchmark against competitors, matching macro and micro data. Increasingly, universities are promoting metricised research strategies, determined by senior management. For example, the University of Russellton (elite counterpart to the fictional University of Poppleton, whose activities are satirically described by Laurie Taylor in a weekly column in the Times Higher) proposed the following targets in 2013:

- “To achieve and maintain a REF GPA above the current Russell Group average of 2.71 (new benchmarks to be set following REF2014);
- To increase the quality of the University’s research portfolio, as measured by 3 year field weighted citation impact, by 20 per cent by 2020 (1.68 in 2013);
- To double the proportion of publications in a three year period in the top 10% of most cited outputs (21% in 2013);
- To increase the proportion of international co-authored publications in a three-year period to over 55% (40% in 2013);
- To increase the number of institutional citations in a three year period by 30% with contributions from all academic disciplines (62,413 in 2013).”
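
For readers unfamiliar with the first of these measures, a REF grade point average is simply the star-rating-weighted mean of a unit’s quality profile. A minimal sketch, with an invented profile:

```python
# Illustrative REF-style grade point average: the weighted mean of
# star ratings over a quality profile. The profile here is invented.

profile = {4: 30.0, 3: 45.0, 2: 20.0, 1: 5.0, 0: 0.0}  # % of outputs per star

def ref_gpa(profile: dict[int, float]) -> float:
    """Star-weighted mean over a quality profile summing to 100%."""
    return sum(stars * pct for stars, pct in profile.items()) / 100.0

print(ref_gpa(profile))  # 3.0 for this invented profile
```

A target GPA of 2.71 thus translates directly into pressure on the individual outputs scored in the local ‘shadow’ exercises described above.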

In this context, professional associations are a last vestige of collegiality in the academic environment. However, they are threatened by the process of marketisation affecting the wider academy. Most associations are maintained by the fees and voluntary labour of academics, with the only institutional support coming via subscription fees for journals. Yet the latter are under pressure from Open Access and its commercialisation. Many readers will be familiar with the two forms of OA promoted by UK-based journals: Gold OA, involving author payment charges, and Green OA, involving deposit of an article in a university repository after a period of embargo (the embargo supposedly protecting subscriptions). However, UK universities are now adopting a new Scholarly Communications Licence (for discussion, see Wulf and Newman 2017; Holmwood 2017). It proposes immediate deposit in a repository, without embargo, of every accepted manuscript.

Although OA was supposed to be a challenge to large multinational corporate publishers, Elsevier has just purchased Bepress, a digital commons management system providing OA repositories and representing over 500 universities in the US. Bepress was set up by economists at UC Berkeley in 1999. At the same time, new digital companies have emerged to break paywalls. For example, Unpaywall is explicitly designed to (among other things) make it as easy as possible for libraries and other subscribers to see which of their subscribed journals are sufficiently represented in Green OA versions to justify cancellation. As Unpaywall puts it: “We find fulltext for 50-85% of articles, depending on their topic and year of publication. We think that’s a game-changer for the publishing industry. Now that most articles are free, why subscribe?” In this way, universities and their libraries are actively undermining the subscription income of collegial professional associations, while supporting the beneficiaries of the privatisation of the digital commons.
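
The cancellation logic is easy to sketch. Unpaywall exposes a public REST endpoint keyed by DOI; in principle, a library could sample the DOIs of a subscribed journal and count how many have a free repository-hosted (Green) copy. The sketch below assumes the v2 endpoint format documented by Unpaywall at the time of writing; the DOIs, email address and cancellation threshold are invented.

```python
# Sketch: estimating Green OA coverage for a journal via Unpaywall.
# Assumes Unpaywall's public v2 endpoint; the DOIs, contact email and
# cancellation threshold below are invented for illustration.
import requests

EMAIL = "librarian@example.ac.uk"  # Unpaywall asks for a contact email

def has_green_copy(doi: str) -> bool:
    """True if Unpaywall reports a free repository-hosted copy."""
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                        params={"email": EMAIL}, timeout=10)
    resp.raise_for_status()
    best = resp.json().get("best_oa_location") or {}
    return best.get("host_type") == "repository"

def green_coverage(dois: list[str]) -> float:
    """Fraction of sampled articles with a Green OA copy."""
    return sum(has_green_copy(d) for d in dois) / len(dois)

sample_dois = ["10.1000/xyz.1", "10.1000/xyz.2"]  # invented DOIs
if green_coverage(sample_dois) > 0.8:  # invented threshold
    print("Most sampled articles are freely available: cancellation candidate")
```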

One of the benefits of Brexit for European colleagues outside the UK, perhaps, is that the diffusion of its neo-liberal policies might be quarantined. However, there are reasons to believe that the virus has already entered the European system of the governance of higher education. It is a virus that eats at the heart of the academic role, undermining all forms of collegiality. But something more is at stake. By reducing academic knowledge to instrumental purposes and the logic of the market, it undermines the dialogue necessary to democracy and a healthy functioning culture.
