Generative AI fatalism (or Gen AI fatalism): The assertion that generative AI must be accepted, because its widespread adoption is inevitable and cannot be stopped.
A recent article by James Zou about ChatGPT in peer review is a particularly spectacular example of gen AI fatalism. Zou notes that many peer reviews show signs of being created by large language models like ChatGPT. He lists many problems and only trivial potential advantages, and calls for more human interaction in the peer review process.
Since Zou has nothing very positive to say about generative AI in peer review, the fatalism on display is stark:
“The tidal wave of LLM use in academic writing and peer review cannot be stopped.”
No!
This is a mere assertion with no basis behind it. Technology adoption and use are not some sort of physical law. They are not a constant like the speed of light. They are not the expansion of the universe driven by the Big Bang, dark matter, and dark energy.
The adoption and use of technology is a choice. Technologies fail in the marketplace all the time. We abandon technologies all the time.
If generative AI is causing problems in the peer review process, we should say that. We should work to educate our colleagues about the inherent problems with generative AI in the review process.
I suspect that people use ChatGPT for the simple reason that they don’t want to do peer review. They do not believe they are adequately rewarded for the task. So, as is so often the case, we need to look at academic incentives. We need to look at why we peer review and what it can accomplish.
Creating journal policies about using ChatGPT is little more than trying to paper over the holes in a boat. I would welcome the equivalent of pulling the boat into dry dock and doing a complete refit.
Reference
https://doi.org/10.1038/d41586-024-03588-8