Few aspects of academic labour are more despised than marking. This is one of its uses, since academics love to complain. But the university is not the safe space for complaint that it once was. Negativity, even ambivalence, is frowned on. Nothing less than complete enthusiasm will satisfy: you must at all times be thrilled to announce, excited to be part of, delighted to share. In this context, marking – which at most universities involves uploading long lists of numbers to creaking online portals that crash with abandon – is one of the few remaining repositories for an acceptable ennui, an apolitical ire. It unites the divided: everyone hates marking.

Yet the labour of marking is accorded a certain dignity – noble and necessary donkey work that is also highly specialised, an almost magical process. This ambiguous status was in evidence during the marking boycott that took place at around twenty British institutions last summer, part of a long-running struggle against the erosion of staff pay and conditions. Managers responded with punitive deductions (in some cases docking full pay) and by seeking to bring in private outfits such as the Australian consultancy firm Curio to grade student essays. At Queen Mary in East London, assessors appeared to copy and paste generic feedback with minor variations: ‘Good attempt here although can be more coherent in places’; ‘Good attempt – coherence can be better in places’. Marks and comments made little sense: ‘70: Nice work!’; ‘65: Perfect!’ Students and union leaders weren’t wrong to complain about these ‘inexpert’ and ‘incompetent’ assessments – though the bigger problem is surely the resort to scab labour. But it is far from unheard of for ‘expert’ feedback to be similarly poor: vague, minimal, inconsistent and littered with typos. This has much to do with the time pressure and overwork that led to the boycott in the first place.

There are problems with marking, however, that are inherent to the activity and can’t be evaded, no matter how much time or coffee is available. We are asked to express evaluative judgments in numerical form, but there is no standard unit of academic merit or worth. As Stefan Collini has observed, the attempt to put a number on something inherently immeasurable, such as the quality of teaching and research in UK universities, has resulted in the measuring of various ‘proxies’, often related to money and marketability: average graduate income, number of citations etc.

The only alternative is to find a way of translating qualitative judgments into quantitative ones. Humans seem able to do this, after a fashion. It’s possible to justify the awarding of Michelin stars to one restaurant rather than another, and the case might be quite clear cut if the food is good enough. The equivalent at universities are REF stars, awarded by panels of senior academics to indicate whether universities are ‘world-leading’ (four stars) in their research ‘output’ or merely ‘internationally excellent’ (three). The trouble is that the units used to ‘measure’ the merit of restaurants or universities aren’t anchored to anything. Marking scripts is not unlike being asked by a doctor to rate the pain you are experiencing on a scale of one to ten, except that marks are doled out in roughly inverse proportion to the amount of pain caused to you by a student’s work. Just as the pain-rating exercise relies on the parameters of a person’s experience (‘ten is the worst pain you’ve ever had’), a marker’s parameters – her ‘pain threshold’ – will be a partial function of what she is accustomed to suffer. As with the pain exercise, the practice of marking ‘works’ in some sense. There is likely to be a correlation between how good a piece of work is and the mark it gets. There is also a significant degree of alignment between members of a marking community, at least at a local level: people who work together respond in the same way to the same sorts of stimulus. There is no formula behind this loose agreement; you just adjust according to the responses of others near you, joining an indistinct, wobbling mass.

This anchorless, subtly self-adjusting character helps explain the mechanism (though not the driving force) of ‘grade inflation’. But that concept can be deceptive. It would be wrongheaded to deny that the same work is more highly rewarded today than it would have been in the past. The proportion of firsts has more than doubled in the last ten years: it’s simply not plausible that the overall standard of education has risen dramatically in this period. What has changed is the marketisation of universities, which introduced pressures that make so-called inflation inevitable, linking institutions’ material fortunes to league table positions while financially incentivising the ‘recruitment’ of as many students as possible.

Talk of grade inflation is often a reactionary device, used to rehabilitate the idea that only the brightest – a category inseparable in practice from class and other forms of advantage – should benefit from higher education. Few people worry about the number of top grades achieved by pupils at expensive private schools (often facilitated by various dirty tricks). It’s the unprecedented numbers of working-class teenagers getting these grades that can’t be tolerated. What the fiction of ‘accurate’ marking obscures is that the kinds of decision made by markers are not determinations of fact or even value so much as practical, even ethical judgments. Given that there is no standardised way of deciding what is an ‘upper’ and a ‘lower’ second – there are criteria, of course, but these are open to interpretation and rely on relative terms (‘good’, ‘lucid’, ‘excellent’) – it becomes a question of how generous or stingy you want to be. You can choose to make a heroic stand against the devaluing of degree classifications, for the sanctity of the boundary between 2:1 and 2:2. But even if it were possible to return to the good old days when only the top 10 per cent got firsts (and a similar proportion of the population went to university), why do it? To make it easier for Deloitte and Accenture to take their pick? Alternatively, you might make life a little easier for some young people who have been screwed over since before they were born. Whose side are you on?

Ethical questions also arise from the relative weight markers give to the different components that make up the complex judgment of overall ‘merit’. These components – which often appear on official marking criteria under headings such as ‘quality of writing’, ‘originality’ and ‘coherence’ – are, in Thomas Kuhn’s sense, incommensurable. One marker may prize originality or inventiveness; another may be a stickler for accuracy. It’s largely a matter of temperament. More authoritarian personalities may be inclined to emphasise the extent to which a student has complied with instructions. A large part of marking is assessing how successfully the candidate has done what they were asked to do. The category of ‘relevance’ is one expression of this: a piece of work, however excellent, will be heavily penalised if it is on the wrong subject. But there are grey areas. For one essay set in my department, students are asked to refer to five of a range of texts they have studied. What do you do with an essay which, though otherwise good, refers only to four? What about the student who failed to notice that part of the instruction, but wrote an essay that otherwise fulfilled the task exceptionally well? Here, the usual variation between markers could widen to encompass both a first and a fail.

Awareness of this sort of inter-marker variation has students trooping round to their tutors’ offices as deadlines approach. They want to get a sense of what the marker wants from them – sometimes in exactly those words. They may have been burned before: they were marked down for not citing enough secondary literature, so they want to know how many secondary sources I wish them to cite. They find it unhelpful when I tell them that there is no general rule, that it depends on the sort of essay they write, that it doesn’t really matter how much literature is used but whether it’s used well.

Another question students often ask is ‘do you want me to give my opinion?’ They may have had work marked down for not demonstrating a ‘critical component’. I take a deep breath and tell them it depends, that there’s not much point tacking an unsupported opinion onto the end of an essay – but yes, the best work will do more than just relate the views of others, and will be critical in the sense of involving independent analysis and presenting a line of argument. But this doesn’t have to mean saying whether you think al-Kindi’s argument for the finitude of the universe is good or bad or somewhere in between (four stars? three? one?). A skilful and perhaps novel interpretation can count as a critical component too.

At some point in this explanation the student’s eyes begin to glaze over. They had hoped for a ‘yes’ or ‘no’ and hate the ‘it depends’ answer they so often get. Many seem convinced that a secret is being deliberately withheld from them, and read these equivocal responses as evasiveness. A certain amount of bad feeling tends to develop. The student is frustrated; the tutor is frustrated. Both sides have the sense that they are failing in their task. On the tutor’s side, it is tempting to think that students are irrationally obsessed with grades for grades’ sake. Sometimes a student comes to a lecturer to announce that she ‘really wants’ a certain grade, usually a first. Sometimes she even ‘needs’ it. But the questions aren’t really so irrational. In many cases, a student’s ability to do what is required of her is limited, and she knows it. For various reasons – lack of innate ability for the particular challenges posed by academia, lack of confidence, poor schooling, stress and distraction, lack of time to study due to having to work for money – many students aren’t in a strong position to succeed in the course of study they have taken on large debts to pursue. It makes sense to look for short cuts, for the few things you might be able to game or control. But the business of assessment is just too messy to be gameable. I’ve tried coming clean about this, in the hope that there might be some relief in letting go of the illusion of control. Instead, they find it deeply disturbing. ‘I wish you hadn’t told me that,’ one student said.

There is now more assessment – and more marking – than ever before, no longer confined to the single, dreaded exam period. An undergraduate student of philosophy at my university might take four modules each term, each with at least two assessed components, typically an essay and something else. Each module must include a ‘scaffolding’ component: an exercise intended to help the student with the main task. This element must itself be assessed, because otherwise most students wouldn’t do it (hardly surprising in an education system that denigrates any endeavour engaged in for its own sake). One example of scaffolding might be an essay plan. How do you mark an essay plan? Another option is a spoken presentation. Some ‘module leaders’ – managerial speak for ‘lecturers’, deliverers of ‘learning events’ – will set a weekly quiz, weighted at perhaps 10 per cent of the overall module mark.

The effect of all this is that an enormous amount of energy is diverted into the management of an ever more intricate system of continuous assessment. A student doing four modules can expect to have at least eight assessment ‘checkpoints’ in a term. It’s hardly surprising that the questions most often asked of teachers by students are about the bureaucratic details of the assessments, or that staff increasingly find it necessary to set aside class time to explain them. The feeling – on both sides, I suspect – is of being exhausted before the work has begun, of never getting round to the thing itself.

Students want – or think they want – more and faster feedback. So tutors write more and more, faster and faster, producing paragraph upon paragraph that students, in moments of sheepish honesty, sometimes admit they don’t read. However infuriating, it’s understandable. This material is far from our best work. Much of it is vague, rushed or cribbed. In order to bridge the gap between staff capacity and student ‘demand’, some universities are outsourcing basic feedback to private providers. One company, Studiosity, lists thirty institutions among its ‘partners’, including Birkbeck and SOAS.

Managers often seem to assume that marking is a quasi-mechanical process whereby students are told what is good and bad about their work, and what they need to do to improve. But students don’t improve by being told how to improve, any more than a person learns to ride a bicycle by being told what to do – keep steady, don’t fall off. There’s a role for verbal feedback, but the main way that learning happens is through practice: long, supported, unhurried practice, opportunities for which are limited in the contemporary university.

It doesn’t end there, either, because after assessment comes reassessment, in which students are offered opportunities to retake modules, or parts of them, to improve on their original marks. Here is a system so complex as to be almost unfathomable. There are ‘capped’ and ‘uncapped’ reassessments, and different word limits according to student and circumstances, requiring spreadsheets linking student ID numbers to the specifications of each marking task. Struggling students who might otherwise drop out (entailing a loss of income for the university) are offered the option of ‘trailing’ failed credits – that is, being reassessed on last year’s work at the same time as this year’s – and so piling extra weight onto what was already too heavy a load. Then there is ‘reassessment without attendance’: a cheaper option than repeating a year, but one that involves the loss of student loans, accommodation and tuition. At my university, one student was found to be sleeping in a hollow tree on the campus grounds while undertaking reassessment to make up the ‘credits’ needed in order to graduate.

There is only a modicum of truth in the perception, from outside universities, that it has become too easy for failing students to claim ‘extenuating circumstances’. It’s true that the majority of such claims are granted, and that both student ‘retention’ and ‘satisfaction’ are powerful incentives for universities to grant them. But the reality is grim. ‘EC’ claims, which usually have to be supported by documentary evidence, make for some sad spreadsheets: panic attacks, anxiety and depression; but also hearing voices, brain injuries, parental estrangement, bereavement, eviction, rape. In a saner system, students in these situations might simply take time out or repeat a year. But few are keen to take on the additional debt. So they struggle on, eventually coming to staff in despair.

If the educational costs of all this are great, the human ones are greater. Nobody is enjoying this. In fact, both sides are losing their shit. The answer is obvious. Less assessment would be better, no assessment better still. But it is hopelessly utopian to call for the abolition of marking without also demanding a radical overhaul of the entire system. It is also unhelpful to imagine returning to some golden past, when exam terms were exam terms, when people didn’t get so worked up. What is needed is something quite unlike both past and present. But you don’t have to be a nostalgic to see that the present is worse than the recent past, and that for many it is scarcely bearable. Whatever success might look like, this is failure.


Letters

Vol. 45 No. 7 · 30 March 2023

Lorna Finlayson perfectly captures the complexities of student assessment in today’s universities (LRB, 16 March). I was reminded of an article from a simpler time by Laurie Taylor, in which he pointed out that when confronted by a pile of essays or exam scripts, try as you might, the first one was always a 57.

Peter Malpass
Bristol

Lorna Finlayson’s Diary took me back to the 1970s when my husband’s lecturer in Buddhism at Lancaster University offered this sublime assessment of one of his essays: ‘Tight. Like Bill Wyman’s bass or the Liverpool defence.’ Happy days.

Caroline Walker
Beaminster, Dorset
