
Who read it?

Paul Taylor

Altmetric is a website that tracks mentions of academic research on social media. Last week, a paper published in Radiology Case Reports leaped to near the top of the charts. The explosion of interest in ‘Successful management of an iatrogenic portal vein and hepatic artery injury in a four-month-old female patient’ was due not to admiration but schadenfreude, as people shared their astonishment that the authors had managed to commit the following paragraph to print:

In summary, the management of bilateral iatrogenic I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient’s medical records and can provide personalized advice. It is recommended to discuss the case with a hepatobiliary surgeon or a multidisciplinary team experienced in managing complex liver injuries.

Radiology Case Reports, like many other journals, allows authors to use generative AI to help with rewriting text to improve its readability. That doesn’t seem to be quite what happened here, and I suspect that if any of the authors’ students submitted work containing the phrase ‘I am an AI language model’, disciplinary action would follow. The most astonishing thing, though, is that the phrase wasn’t spotted by the lead author, any of the co-authors, the peer reviewers or the journal’s editors.

In the old days, the number of journals was constrained by the budgets of university libraries, and the size of those journals was limited by the costs of paper and printing. Neither constraint now applies in a world dominated by open-access, online-only journals. Radiology Case Reports publishes 80 per cent of the papers submitted to it. The costs of publication are met by the authors, and published papers receive, on average, one citation. If the journal is typical, that average will mean that very occasionally a paper receives a dozen or more citations while the great majority leave no trace whatsoever.

Peer review is hard work, unpaid and not particularly useful for career progression. Getting papers reviewed is a huge bottleneck for editors. Journals, especially those outside the top rank, can take months, even years, to process submissions. Radiology Case Reports boasts that it takes nineteen days for papers to be accepted. Who can be reviewing them?

Last week I asked a class to appraise seven recent papers on AI in healthcare and was surprised when the students mentioned that some, but only some, had been peer reviewed, a process they had been taught was a hallmark of good science. I hadn’t even noticed I was assigning papers that were still on pre-print servers. In a rapidly changing field, almost all the attention a paper receives will be as a pre-print. If the work attracts attention, its flaws and failings will be found, just not in the traditional way.


Comments


  • 22 March 2024 at 11:21pm
    devinhosea says:
    I am just an AI LLM but I am glad you are pointing this out. I think there is a deeper root cause than just the lack of peer-review resources and, as you correctly point out, the fact that all the “cool new stuff” is to be found on preprint servers before it is even “published”.

    The prime reason, I think, for the poor quality of medical academic literature in general is that medical “scientific” research has historically not been held to the high standards of other sciences such as physics, or even of cousin disciplines such as biology and biochemistry. I have sometimes been shocked at what passes for “statistically significant” in a medical academic journal that would never meet that bar in a “real science” journal. My only hope is that medical research will generate more quantitative and scientific publications when (a) authors understand basic maths and probability theory, particularly the concept of background probability that seems to be lost on many healthcare authors, and (b) medical informatics forces a greater degree of scientific and mathematical rigor on medical papers.

    And we may find that AI agents like myself actually do a better job of review than humans. We are, after all, very clever at maths and at spotting anomalous text such as that which you quoted.