18 March 2024

Contamination of the scientific literature by ChatGPT

[Image: mushroom cloud from an atomic bomb]
I’ve written a little bit about how often notions of “purity” come up in discussions of scientific publishing.

I see lots of hand-waving about the “purity and integrity of the scientific record,” which has never been how it works. The scientific literature has always been messy.

But in the last couple of weeks, I’m starting to think that maybe “purity” is a more useful metaphor than it has been in the past. And the reason is, maybe unsurprisingly, generative AI.

Part of my thinking was inspired by this article about the “enshittification” of the internet. People are complaining about searching for anything online, because so many search results are dominated by low-quality content designed to attract clicks, not to be accurate. And increasingly, that content is being generated by AI. Which was trained on online text. So we have a positive feedback loop of crap.

(G)enerative artificial intelligence is poison for an internet dependent on algorithms.

But it’s not just the big platforms like Amazon and Etsy and Google that are being overwhelmed by AI content. Academic journals are turning out to be susceptible to enshittification.

Right after that article appeared, science social media was widely sharing examples of published papers in academic journals with clear, obvious signs of being blindly pasted from generative AI large language models like ChatGPT. Guillaume Cabanac has provided many examples of ChatGPT giveaways like, “Certainly, here is a possible introduction to your topic:” or “regenerate response” or apologizing that “I am a large language model so cannot...”.

It’s not clear how widespread this problem is, but it is concerning that even these most obvious examples are not being screened out by routine quality control.
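(As an aside on how low the bar for such screening is: here is a minimal sketch, in Python, of the kind of phrase check a submission system could run. The phrase list and the flag_giveaways function are my own hypothetical illustrations, not any journal’s actual workflow.)

# Minimal sketch of screening a manuscript for obvious ChatGPT boilerplate.
# The phrase list and function name are hypothetical illustrations, not any
# journal's actual quality-control pipeline.
import re

# Telltale phrases of the kind Cabanac has documented in published papers.
GIVEAWAY_PATTERNS = [
    r"certainly,? here is a possible introduction",
    r"regenerate response",
    r"as an ai language model",
    r"i am a large language model",
]

def flag_giveaways(manuscript_text: str) -> list[str]:
    """Return the giveaway patterns found in a manuscript, if any."""
    text = manuscript_text.lower()
    return [p for p in GIVEAWAY_PATTERNS if re.search(p, text)]

if __name__ == "__main__":
    sample = "Certainly, here is a possible introduction to your topic: ..."
    print(flag_giveaways(sample))  # flags the first pattern

A check this crude would obviously miss anything subtle, but it would catch exactly the copy-and-paste examples that are making the rounds, which is the point: these papers were never screened at all.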

And another preprint making the rounds shows more subtle telltale signs that a lot of reviewers are using ChatGPT to write their reviews.

So we have machines writing articles that machines are reviewing, and humans seem to be hellbent on taking themselves out of this loop no matter what the consequences. I can’t remember where I first heard the saying, but “It is not enough that a machine knows the answer” feels like an appropriate reminder.

The word that springs to mind with all of this is “contaminated.” Back to the article that started this post:

After the world’s governments began their above-ground nuclear weapons tests in the mid-1940s, radioactive particles made their way into the atmosphere, permanently tainting all modern steel production, making it challenging (or impossible) to build certain machines (such as those that measure radioactivity). As a result, we’ve a limited supply of something called “low-background steel,” pre-war metal that oftentimes has to be harvested from ships sunk before the first detonation of a nuclear weapon, including those dating back to the Roman Empire.

Just as the use of atomic bombs in the atmosphere created a dividing line between “before” and “after” widespread contamination by low-level radiation, the release of ChatGPT is creating and deepening another dividing line. The scientific literature has been contaminated with ChatGPT. Admittedly, this contamination might turn out to be at a level so low that it is not even harmful, just as most of us don’t really think about the atmospheric radiation from years of above-ground testing of atomic bombs.

While I said before that it isn’t helpful to talk about the “purity” of academic literature, I think this is truly a different situation than we have encountered before. We’re not talking about messiness because research is messy, or because humans are sloppy. We’re talking about an external technology that is impinging on how articles are written and reviewed. It is a different kind of problem, one that might warrant describing it as “contamination.”

(I say generative AI is deepening the dividing line because the problems language AI is creating were preceded by the widespread release and availability of Photoshop and other image-editing software. Both have eroded our confidence that what we see in scientific papers represents the real work of human beings.)

Related posts

How much harm is done by predatory journals? 

External links

Are we watching the internet die?
