05 July 2025

The buck stops with editors on AI slop in journals

There is a new website that identifies academic papers that seem to have been written at least in part by AI, without the authors disclosing that use. As of this writing, it lists over 600 journal articles.

I found this site on the heels of a Retraction Watch post identifying an academic book with lots of fake citations.

This is a problem that has been going on for a while now, and it shows no signs of stopping. And I have one question.

Where are the editors?

Editors should bear the consequences of AI slop in their journals. They have the final say over whether an article goes into a journal. Checking that citations are correct should be a bare-minimum responsibility of an editor reviewing a manuscript. And yet. And yet. These embarrassingly trivial-to-spot mistakes keep getting into the scientific literature.
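
To be clear about how low this bar is: checking whether a cited work exists at all can largely be automated against public metadata services. Here is a minimal Python sketch (my illustration, not any journal's actual workflow) that sends a free-text reference to the free Crossref REST API; a fabricated citation usually comes back with no confident match.

```python
import json
import urllib.parse
import urllib.request


def crossref_lookup(citation_text):
    """Look up a free-text reference against the public Crossref API.

    Returns the best match's DOI, title, and relevance score, or None
    if Crossref returns nothing at all.
    """
    url = (
        "https://api.crossref.org/works?rows=1&query.bibliographic="
        + urllib.parse.quote(citation_text)
    )
    with urllib.request.urlopen(url, timeout=30) as response:
        data = json.load(response)
    items = data["message"]["items"]
    if not items:
        return None
    best = items[0]
    return {
        "doi": best.get("DOI"),
        "title": (best.get("title") or ["(no title)"])[0],
        # A low relevance score is a red flag: Crossref found nothing
        # that closely resembles the citation.
        "score": best.get("score"),
    }


if __name__ == "__main__":
    # A real reference should come back with a confident match;
    # a fabricated one typically returns only a weak, unrelated hit.
    print(crossref_lookup(
        "Watson JD, Crick FHC. Molecular structure of nucleic acids. Nature. 1953."
    ))
```

It is not a complete check, since Crossref does not index everything, but it shows how little effort it would take to flag the most obvious fabrications before they reach print.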

Now, unlike many academics, I do not hate academic publishers or journals. But for years, publishers have been pushing back against criticisms and innovations like preprints by saying, “We add value. We help ensure accuracy and rigour in the scientific record.”

So I am baffled by why journal editors are failing so badly. This is not the hard stuff. This is the basic stuff. And it is profoundly damaging to the brand of academic publishers writ large. This, to me, is exactly the sort of failure that should push somebody out of an editorial position. But I haven’t heard of a single editor who has resigned for allowing AI slop into a journal.

Pie charts showing which publishers have the most suspected uses of generative AI. For journal articles, Elsevier, Springer, and MDPI lead. For conference papers, IEEE leads by an extremely wide margin.

There is a great opportunity here for some useful metascience research. Now that we have data identifying AI slop in journals, we can start asking some questions. What kinds of journals are doing the worst at finding and stopping AI slop? Are they megajournals, for-profit journals, society journals?

For years, I’ve thought that academic hoaxes were interesting in part because they could reveal how strong a journal’s editorial defences against nonsense were. Now AI slop is putting those defences to the test. And the answer, alas, seems to be, “Not nearly strong enough.”

Hat tip to Jens Foell for pointing out Academ-AI.

External links 

Academ-AI  

Springer Nature book on machine learning is full of made-up citations
