30 June 2022

Is using AI to write a paper academic misconduct?

We’ve come a long way from ELIZA. Or the ridiculous duelling chatbots.

Natural language artificial intelligence has recently gotten far better than I think many people realize, and today’s article in Scientific American points that out.

A researcher asked an artificial intelligence program, GPT-3, to write an academic paper about itself. It did such a good job that the preprint is out and the paper is now under review at a technical journal.

Publicity stunt? It smells a little like that, but then again, this is an area that needs some publicity.

As natural language programs like GPT-3 become more widely available and more widely known, university students are of course going to do what these researchers did. They are going to get the programs to write their papers.

How is that going to shape our thinking about teaching?

Is using an artificial intelligence to write a term paper cheating? I suspect a lot of my colleagues would say, “Yes,” for the same reason that asking an actual person to write a paper for you is cheating.

But how would you detect that?

The new article suggests that these papers are not going to be obviously defective. If anything, the clue for a professor might be that the paper is too good.

Each paper would be a unique output of the artificial intelligence, so it might skirt plagiarism detectors. I don’t know enough about how GPT-3 generates text to know whether it has a “tell”: predictable quirks in expression that might indicate the text came from an artificial intelligence rather than a person.

I don’t think many university professors are thinking at all about what this means for student assessment. We professors have traditionally wanted to build towards writing as the preferred form of assessment. We ask grad students to write theses, after all. But writing as a form of assessment keeps getting compromised and harder to validate.

For another look at what GPT-3 can do, check out this video of someone who used it to recreate a childhood imaginary friend - which turned evil and tried to kill him.

Disclaimer: The story in the video is so wild that I can’t help but wonder if some of it is staged. 

Update, 2 December 2022: Veteran blogger Chad Orzel has a Substack post on this issue: “Why do we assign writing?”. The take-away is that if your writing assignment can be done by an artificial intelligence chatbot, you’re probably doing it wrong.


External links

We asked GPT-3 to write an academic paper about itself. Then we tried to get it published
