09 January 2025

We were friends once

In 2021, I was in front of the US Consulate in Toronto, and there was a small plaque there that I found surprisingly moving.


As the United States marks the 10th anniversary of the terror attacks of September 11, 2001, in the name of all Americans, Consulate General Toronto thanks the people of Ontario for their support and generosity following the worst attack on America since WWII.

The nearly 3,000 innocent victims from ninety countries included twenty-four Canadians. Our friends and family in Ontario, and all of Canada, were with us in our darkest hour, as you always are. Neither terrorism nor any adversity can conquer free peoples, and we are grateful to stand with the best neighbours any nation ever had.

As together we look back, together we go forward.

We are eternally grateful.

September 11, 2011


Reading the text of this now is just crushing to me. I’m sad that “eternal” gratitude seems to have lasted less than fifteen years after the plaque was unveiled.

External links

Thanks and remembrance

08 January 2025

More publishers toying with AI in journal review

The Bookseller is reporting that Springer Nature is testing AI checks in their production pipeline for journal articles.

According to the publisher, the AI systems are designed to alert editors to “quality issues.” I find this frustratingly vague, much like Elsevier’s tests of changes to its own production pipeline. What, exactly, are these systems looking for?

I can see some automatic checking being very valuable. For example, it blows my mind that many journals apparently do no basic checks for plagiarism. I can also see value in an AI system that scans basic statistical information, like whether reported p values are possible given the sample size, test statistic, and so on. A program called Statcheck has been proposed for exactly this purpose (Nuijten & Wicherts 2023, 2024), although there are ongoing debates about its utility (Schmidt 2017, Böschen 2024).
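To make that concrete, here is a minimal sketch of the kind of consistency check Statcheck automates, in Python rather than Statcheck’s actual R implementation. The function name and tolerance are my own inventions, for illustration only:

```python
from scipy import stats

def check_reported_p(t_stat: float, df: int, reported_p: float,
                     two_tailed: bool = True, tolerance: float = 0.005) -> bool:
    """Recompute the p value for a t test and flag inconsistent reports.

    A hypothetical helper in the spirit of Statcheck, not its actual API.
    The tolerance allows for a p value rounded to two decimal places.
    """
    p = stats.t.sf(abs(t_stat), df)  # upper-tail probability
    if two_tailed:
        p *= 2
    return abs(p - reported_p) <= tolerance

# "t(28) = 2.05, p = .05" is internally consistent (recomputed p ≈ .0499)...
print(check_reported_p(2.05, 28, 0.05))  # True
# ...but "t(28) = 2.05, p = .01" is not.
print(check_reported_p(2.05, 28, 0.01))  # False
</code>
```

The arithmetic is the easy part; as I understand it, most of the work in a tool like Statcheck goes into reliably extracting reported statistics from manuscript text in the first place.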

If the publishers were confident that these systems were genuinely making the peer review process better by catching things like:

  • Image manipulation
  • Plagiarism
  • Tortured phrases (see the sketch below)
  • Citation cartels
  • Non-existent authors or reviewers
  • Statistical errors 
  • Undisclosed use of generative AI

then why are they being so cagey about what processes they are implementing and what those processes are supposed to catch? All of these are real problems that need addressing. Are publishers worried that transparency would hand cheaters the information they need to bypass the “quality issues” detection systems? Something else?
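For a flavour of how mechanical some of these checks can be, here is a toy tortured-phrase scanner. The phrase list and function are mine, invented for illustration; real detectors, such as Guillaume Cabanac’s Problematic Paper Screener, rely on much larger curated lists:

```python
import re

# A few paraphrases in the style of documented tortured phrases, mapped to
# the standard terms they typically replace (illustrative list only).
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "flag to commotion": "signal to noise",
    "colossal information": "big data",
}

def find_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original term) pairs found in text."""
    lowered = text.lower()
    return [
        (phrase, original)
        for phrase, original in TORTURED_PHRASES.items()
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered)
    ]

print(find_tortured_phrases(
    "We used profound learning to improve the flag to commotion ratio."
))
# [('profound learning', 'deep learning'), ('flag to commotion', 'signal to noise')]
```

A production check would need to handle inflections and weigh false positives, but the point stands: once a phrase is catalogued, spotting it is trivial.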

Publishers always claim that they add value to academic publication. These new AI checks are a real opportunity to show that value to academics who are increasingly mad at them and asking, “What good are you?”

Related posts

Elsevier turns generative AI loose on manuscripts for no discernable reason

External links

Springer Nature reveals AI-driven tool to ‘automate some editorial quality checks’

References

Böschen I. 2024. statcheck is flawed by design and no valid spell checker for statistical results. arXiv: 2408.07948. https://doi.org/10.48550/arXiv.2408.07948 

Nuijten MB, Wicherts JM. 2023. The effectiveness of implementing statcheck in the peer review process to avoid statistical reporting errors. PsyArXiv. https://doi.org/10.31234/osf.io/bxau9

Nuijten MB, Wicherts JM. 2024. Implementing Statcheck during peer review is related to a steep decline in statistical-reporting inconsistencies. Advances in Methods and Practices in Psychological Science 7(2): 25152459241258945. https://doi.org/10.1177/25152459241258945
Schmidt T. 2017. Statcheck does not work: All the numbers. Reply to Nuijten et al. (2017). PsyArXiv. https://psyarxiv.com/hr6qy