Bunch of Rankers

Glen Newey

What is the point of academic journals? The main one, surely, is to disseminate new findings and ideas, but this doesn’t go far in explaining the current publications set-up. Journal articles loom large in government monitoring exercises like the Research Excellence Framework, a Standard & Poor’s-type academic credit-rating. REF figures shape departments’ public research funding and individual researchers’ career prospects.

But the ghost of G.E. Moore haunts the exercise: quality can’t be boiled down to component merits that are then tick-boxed into a ‘metric’. Most academics have macabre tales to tell of their treatment at the hands of journals, whose scrutiny methods vary wildly. Some well-considered organs are fiefdoms run to the fiat of the founding editor. Others’ byzantine vetting methods make Jarndyce v. Jarndyce seem a beacon of procedural clarity. Referees charged with deciding whether, as they say, a submission fills a much-needed gap, play a key role – with the vagaries of prejudice, available time, mood etc. Often they know little about the field or, precisely because they’ve published in it, bomb hapless authors with their données. The result – abetted by the blind refereeing system, a zone of unaccountable power – alloys conservatism with arbitrariness.

The problem, basically, is deciding whether a paper’s any good. Why not just read it? Because verdicts vary with the reader’s prejudices, available time, mood etc. So in practice, REF relies on proxy indicators of quality: a paper is more likely to be any good if it appears in a flagship periodical. It’s tempting to say: being in a stable doesn’t make a donkey a racehorse. That analogy has its limits, and at least a randomly picked object in a stable is more likely to be a horse than such an object elsewhere. How far along the axis the probability distribution peaks might be taken as a rough index of journal quality. But that just shifts the onus of opinion from the paper to its host journal, and yields only the equivalent of an expected value rather than a concrete measure.

Another method involves gauging a paper’s ‘impact’ – a factor the REF explicitly distinguishes from quality. Here, the quant is how influential a paper is – cashed out, say, by mentions of it in other publications. This too has obvious problems. I’ve thought of launching a Journal of Comparative Balls, whose articles’ sole rationale would be to put up Aunt Sallies that invite rebuttal, and so harvest mentions elsewhere; ideally, the rebuttals would in turn be sufficiently asinine to reap further mentions via counter-rebuttals, and so on. The same goes for online methods that gauge quality using such metrics as the number of downloads a paper gets. It’s easy to envisage download consortia springing up to massage the numbers.

Compare a genuinely quantifiable property like celebrity, where frequency of mention does look like a credible measure. But that’s because the measure isn’t a proxy – that’s just what celebrity is. The quest to quantify quality epitomises academia’s current palsy. Under pressure to devise robust quality indices, it has mired itself in mensuration ju-ju, from endless league tables to software like Turnitin, which combs students’ essays and outputs a score for plagiarism. One could poll academics to rank journals; but, apart from this relying on democratic principles largely unknown in academe, voters would be apt to vote for journals that had published their own papers.

Instead of trying to bottle quality like spa water, why not just let everything slosh around in the paddling-pool of the net? All internauts use heuristics – blog tips, for example – to dodge cyberland’s crashpads for the vacationing mind. Journal publication is already bypassed through personal-site uploads and repository sites. Hiring and research-assessment committees, instead of relying on their own heuristics, such as which journals host candidates’ papers, would have to read the work and reach a verdict on it. Public research monies could be advanced on a tendering basis, as they already partly are, rather than a block grant. Similarly, consortia applying for capital grants could still attach publications as proof of excellence.

This won’t happen. There’s no getting away from it: some people – many in positions of power – are rankers. Government, big publishers and campus bosses like ranking, as it regiments and keeps a lock on power. For the masters of destiny any number is better than none.


Comments


  • 21 November 2012 at 9:34am
    Josie McLellan says:
    Indeed. Rankers abound. Even within journals it's standard practice to circulate lists of 'most cited' and 'most downloaded' articles.

    And yet, as one of those shady journal editors (although yet to be allocated my personal fiefdom), I can't help feeling uneasy about some internet free-for-all. Journals don't just disseminate fully formed articles. Most articles are much improved by peer review. Not to mention the professional copy-editing still (just) provided by publishers.

    Perhaps I am getting carried away here but, for all their flaws, there is a case for journals as a progressive force. (Bear with me.) Think of the struggling post-PhD who gets her work read without having to beg for a retweet by @DigitalHumanitiesSuperstar. Or the beleaguered lecturer who suddenly finds it easier to convince his HoD that David Beckham Studies actually IS serious scholarship.

    Nobody's perfect. Journals and peer reviewers could do things a lot better (let's all start by signing our reviews). But they should and often do add value in a way that a research blog generally won't. (I hope I don't come across as too much of a ranker when I say I entirely understand your reasons for publishing this piece with the LRB rather than letting it frolic freely in the digital paddling pool.)

    • 23 November 2012 at 10:40am
      semitone says: @ Josie McLellan
      Of course journals and peer reviews preserve and promote quality research much better than the number of "likes" a blog gets, which is why climate deniers tend to blog (or write for the Telegraph) while climate scientists submit their work for peer review. It's always a bit tricky to work out what the hell Glen Newey means, but I hope he's not suggesting academics just let everything slosh around in the paddling-pool of the net instead of publishing in reputable journals. Oh wait, that is what he said. Well, this should please Big Pharma anyway.

      Though I think any lecturer trying to convince his HoD that David Beckham Studies is serious scholarship should most certainly stay beleaguered.

  • 21 November 2012 at 10:21am
    alex says:
    "In practice, REF relies on proxy indicators of quality: a paper is more likely to be any good if it appears in a flagship periodical." This is very widely believed but not actually true. Statements are regularly issued to the effect that peer-reviewedness is not a criterion for evaluation, and what's more, they are not mere lip-service. I can show this because in 2008 I was part of a department which was rated among the top 5 in the UK in my discipline: our submission did not consist exclusively of work published in flagship journals, and we beat several departments whose work was.

  • 21 November 2012 at 2:18pm
    simonpawley says:
    Unquestionably, rankings and 'impact factors' cannot and do not measure 'quality', and should not form the basis of any decisions about money, appointments, or anything else that matters. But I do not quite see why that makes Glen Newey want to abolish journals.
    He must have had a bad experience in the 'zone of unaccountable power' at one time or another, but if he believes in academic autonomy (as I suppose he does), it must rest on a little more faith in the integrity of most academics. The suggestion that 'most' peer reviewers 'know little about the field' raises the question of why they do it. It is a time-consuming job with basically no rewards that (I think) most people do out of a sense of professional responsibility. Wouldn't most busy academics be only too happy to turn down a request to review something they are not qualified to evaluate? There must, of course, be occasions on which reviewers make unreasonable comments or demands, but it is the job of journal editors to mediate those. Active editors supervise the 'zone of unaccountability', while anonymous review limits the reach of an editor's fiefdom. This balance cannot produce perfection, but, absent a better suggestion, it seems the best that can be done.
    I cannot really see the problem with Turnitin, provided one uses it sensibly. It does not give you a 'score for plagiarism' in any meaningful sense, and is really no more than a time-saving tool. The only danger would be in overestimating its capabilities.
    And as to the question 'why not just let everything slosh around in the paddling-pool of the net?' One might as well ask why one would bother buying the LRB or a newspaper when there is equally good writing and journalism available free online. Well, I have no doubt that there is, but I lack the time to wade through the rest of the rubbish on the way or the skills to get straight there. Good journals play the same role, and I can only assume that as a whole raft of forces incentivize quantity more than quality in academic publishing, they will only become more important. Again, they are surely not perfect, but I do not think we have yet seen any better suggestions.

  • 21 November 2012 at 8:50pm
    davidmpyle says:
    As Mark Twain said, never let the facts get in the way of a good story; and so it so often is with reporting on the REF. Whatever one may think of the process, the fact is that it will happen. And, this time around, the rules that are in the public domain (http://www.ref.ac.uk/pubs/2012-01/) make one thing explicitly clear: the assessment of the quality of submissions, which will be carried out by the expert panels assembled for the task, is explicitly not allowed to use any measure or judgement of the 'ranking' or 'prestige' of the journal, or publisher, to determine the quality of the output. The expert panels are themselves charged with making the judgement on the quality of the outputs, by reading them.

    The second aspect of the REF that it is essential to understand is that none of the published outputs are going to be assessed for impact: impact is a completely separate part of the assessment, and this will mainly be assessed with reference to a select number of case studies.

    By all means, let's have a real debate about the foolishness of metrics and the weaknesses of the models of publishing with which the Academy finds itself lumbered. But let's not confuse individual and institutional reward systems - many of which do seem increasingly, and unhelpfully, to be focussing on metrics - with the national framework for research assessment. We are at an exciting point in the evolution of academic discourse, and the glacial pace of blind peer-reviewed hardcopy publishing faces a real challenge from the possibilities offered by interactive and open peer-review across the spectrum of digital media.

  • 29 November 2012 at 4:17pm
    cping500 says:
    Thank you for confirming the first part. It saves making another call to HEFCE England. I now need to pursue the second as to the criteria.

    I notice that universities are paying professional writers to write these case studies (novelists?....confessions from LRB readers and writers would be welcome).

    There are far too many hares to chase in your final paragraph. These will, I fear, further confuse the innocent academic.