28 November 2025

More A.I. slop with the autism bicycle

It’s Scientific Reports’ turn to be embarrassed for publishing obvious generative A.I. slop.

The nonsensical bicycle, the bizarre almost-words, the woman’s legs going through whatever she is sitting on. Just a mess.

The good news is that this is apparently going to be retracted, and that word came pretty quickly. But it is a bit concerning that the news of this retraction came from a journalist’s newsletter on a platform that a lot more people should leave.

There is now a pop-up that reads:

28 November 2025 Editor’s Note: Readers are alerted that the contents of this paper are subject to criticisms that are being considered by editors. A further editorial response will follow the resolution of these issues. 

Less than 10 days from publication to alerting people of a problem is practically lightning speed in academic publishing.

My experience has been that when one finds one problem, there may be more lurking. So I looked for other papers by the author. I found none.

I then checked the listed institution: Anhui Vocational College of Press and Publishing. This does appear to be a real institution in China. But as the name suggests, it seems to be centred on publishing, design, and politics. It is not at all clear why a faculty member would write a paper on autism.

As I was looking around in search results for more information about this institution, I stumbled upon two retracted papers from another faculty member. There are other papers from other faculty out there that seem to be more what you would expect, and those are presumably not retracted.

It’s just strange.

Working scientists have to get organized and push back against journals that are not stopping – or are even willingly using – generative A.I. slop.

Reference

Jiang S. 2025. Bridging the gap: explainable AI for autism diagnosis and parental support with TabPFNMix and SHAP. Scientific Reports 15: 40850. https://doi.org/10.1038/s41598-025-24662-9

External links

Riding the autism bicycle to retraction town

21 November 2025

Google Scholar finally falls to “AI in everything”

Who thinks Google searches have gotten better recently? Because I have not seen anyone say that.

A few days ago, Google started adding its version of “AI” (large language models) to Google Scholar. Because that’s what every app does now, whether users want it or not.

Google Scholar was one of the few online services I used regularly that hadn’t shown signs of enshittification. Scholar just worked. The complaints about it were that the metrics could be gamed and that it wasn’t perfect at screening out non-academic content. But I never heard from the online research community that the core search function was somehow deeply deficient at finding relevant papers.

Disappointed. But I expect this from Google now, just like I do from every tech company.

Shame on Philosophical Transactions B for using slop covers

Cover of Philosophical Transactions of the Royal Society B, Volume 380, Issue 1939, featuring a nonsensical phylogeny of animals and brains.
Hat tip to Natalia Jagielska for pointing out that the latest cover of Philosophical Transactions of the Royal Society B is ChatGPT generated slop.

Not only is it in AI yellow, it is scientifically nonsensical. Come on.

But then things got worse. Alexis Verger pointed out they had used ChatGPT for the cover of their previous issue. Again, it is obviously wrong. The spinal cord leads directly to the lungs? No. Just no.

And then I went and looked at the archive and found another ChatGPT cover.

So of the journal’s last four issues, three had AI slop covers made by ChatGPT.

It should be an embarrassment for the journal. I would rather have a plain cover with no imagery at all than this.

Cover of Philosophical Transactions of the Royal Society B, Volume 380, Issue 1938, featuring a nonsensical human torso, embryo, and virus.
Even the non-slop covers were not that impressive. Most of them are stock photos. I cannot help but think that many scientists probably have some sort of relevant pictures they have taken for their slides, posters, and so on. Why not use those?

This is another example of how scientists don’t take graphics seriously.

Anyway, I am off to email the journal.

Update, 22 November 2025: EMBO Journal is also guilty of using slop.

External links 

AI-generated rat image shows that scientific graphics are undervalued
Cover of Philosophical Transactions of the Royal Society B, Volume 380 Issue 1936, showing a chemical glassware setup with equations overlaid on top. Background image generated using ChatGPT.