08 January 2025

More publishers toying with AI in journal review

The Bookseller is reporting that Springer Nature is testing AI checks in their production pipeline for journal articles.

According to the publisher, the AI systems are designed to alert editors to “quality issues.” I find this frustratingly unclear, much like Elsevier’s testing of production pipeline changes. What, exactly, are they looking for with these systems?

I can see some automatic checking being very valuable. For example, it blows my mind that many journals apparently do no basic checks for plagiarism. I can also see value in an AI system that scans basic statistical reporting, like whether a reported p value is consistent with the reported test statistic, degrees of freedom, and so on. A program called Statcheck has been proposed for exactly this purpose (Nuijten & Wicherts 2023, 2024), although there are ongoing debates about its utility (Schmidt 2017, Böschen 2024).
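To make that idea concrete, here is a minimal sketch of such a consistency check. It is not Statcheck itself (Statcheck is an R package), just an illustration of the principle: recompute the p value from a reported t statistic and degrees of freedom, and flag a reported p value that cannot match the recomputed one even after allowing for rounding.

  # Sketch of a Statcheck-style consistency check (illustration only,
  # not the actual Statcheck implementation, which is an R package).
  # Recompute a two-sided p value from a reported t statistic and degrees
  # of freedom, then flag reported p values that cannot match it even
  # allowing for rounding of the reported numbers.

  from scipy import stats


  def check_t_test(t_reported: float, df: int, p_reported: float,
                   decimals: int = 2) -> str:
      """Compare a reported p value with the p value implied by t and df."""
      # The reported t statistic is usually rounded; compute the p values
      # implied by the smallest and largest t consistent with that rounding.
      half_step = 0.5 * 10 ** (-decimals)
      t_lo, t_hi = abs(t_reported) - half_step, abs(t_reported) + half_step
      p_hi = 2 * stats.t.sf(t_lo, df)   # larger p, from the smaller t
      p_lo = 2 * stats.t.sf(t_hi, df)   # smaller p, from the larger t

      # Allow for rounding of the reported p value itself (e.g. "p = .04").
      p_tolerance = 0.005
      if p_lo - p_tolerance <= p_reported <= p_hi + p_tolerance:
          return "consistent"
      return (f"inconsistent: reported p = {p_reported}, "
              f"recomputed p lies in [{p_lo:.4f}, {p_hi:.4f}]")


  # Example: "t(28) = 2.20, p = .04" checks out; "p = .30" would not.
  print(check_t_test(t_reported=2.20, df=28, p_reported=0.04))
  print(check_t_test(t_reported=2.20, df=28, p_reported=0.30))

A production system would also have to extract these numbers from manuscript text and handle many more test types, which is where most of the hard work lies.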

If the publishers were confident that these systems genuinely improved the peer review process by catching things like:

  • Image manipulation
  • Plagiarism
  • Tortured phrases
  • Citation cartels
  • Non-existent authors or reviewers
  • Statistical errors 
  • Undisclosed use of generative AI

then they would have every reason to say so, because all of these are real problems that need addressing. So why are academic publishers being so cagey about what processes they are implementing and what those processes are supposed to catch? Are they worried that disclosing this would give cheaters information they could use to bypass the “quality issues” detection systems? Something else?

Publishers always claim that they add value to academic publication. These new AI checks are a real opportunity for them to show how they add value to academics who are increasingly mad at them and asking, “What good are you?”

Related posts

Elsevier turns generative AI loose on manuscripts for no discernable reason

External links

Springer Nature reveals AI-driven tool to ‘automate some editorial quality checks’

References

Böschen I. 2024. statcheck is flawed by design and no valid spell checker for statistical results. arXiv: 2408.07948. https://doi.org/10.48550/arXiv.2408.07948 

Nuijten MB, Wicherts JM. 2023. The effectiveness of implementing statcheck in the peer review process to avoid statistical reporting errors. PsyArXiv. https://doi.org/10.31234/osf.io/bxau9

Nuijten MB, Wicherts JM. 2024. Implementing Statcheck during peer review is related to a steep decline in statistical-reporting inconsistencies. Advances in Methods and Practices in Psychological Science 7(2): 25152459241258945. https://doi.org/10.1177/25152459241258945
 
Schmidt T. 2017. Statcheck does not work: All the numbers. Reply to Nuijten et al. (2017). PsyArXiv. https://psyarxiv.com/hr6qy