- Two philosophy professors said they caught their students uploading essays written by ChatGPT.
- They said certain red flags alerted them to the use of AI.
- If students don’t admit to using the program, professors say it can be hard to prove.
A few weeks after the launch of the ChatGPT AI chatbot, Darren Hick, a professor of philosophy at Furman University, said he caught a student submitting an AI-generated essay.
Hick said he became suspicious when a student turned in an on-topic essay that contained well-written misinformation.
After he ran the essay through an AI-text detector from OpenAI, the results showed a 99% likelihood that it had been generated by AI.
Antony Aumann, a professor of philosophy and religious studies at Fordham University, told Insider that he caught two students submitting essays written by ChatGPT.
When the writing style raised alarms, Aumann resubmitted the essays to the chatbot, asking how likely it was that they had been written by the program. When the chatbot said it was 99% sure the essays were written by ChatGPT, he forwarded the results to the students.
Both Hick and Aumann said they confronted their students, all of whom eventually admitted wrongdoing. Hick’s student failed the class, and Aumann had his students rewrite the essays from scratch.
“It was really well-written wrong”
There were red flags in the essays that alerted the professors to the use of AI. Hick said the essay he flagged referred to several facts not mentioned in class and made one nonsensical claim.
“Word for word it was a well-written essay,” he said, but on closer inspection, one statement about the prolific philosopher David Hume “made no sense” and was “just completely wrong.”
“Really well-written errors were the biggest red flag,” he said.
For Aumann, the chatbot's writing was simply too perfect. “I think the chatbot writes better than 95% of my students could ever,” he said.
“All of a sudden you have someone who does not demonstrate that level of thinking or writing ability turning in something that perfectly fills every requirement, with sophisticated grammar and complex thoughts directly related to the essay prompt,” he said.
Christopher Bartel, a professor of philosophy at Appalachian State University, said that while the grammar in AI-generated essays is near-perfect, the content tends to lack detail.
He said: “They are really fluffy. There is no context, no depth or insight.”
Plagiarism is hard to prove
If students do not admit to using AI for their essays, academics can be left in a difficult position.
Bartel said some institutions’ policies had not evolved to combat this kind of cheating. If a student decides to dig in and deny the use of AI, it can be difficult to prove.
Bartel said the AI detectors on offer were “good but not perfect”.
“They give a statistical analysis of how likely the text is to be AI-generated, which puts us in a difficult position if our policies require definitive, provable evidence that the essay is a fake,” he said. “If it comes back with a 95% chance that the essay was generated by AI, there’s still a 5% chance it wasn’t.”
In Hick’s case, even though the detection site said it was “99% sure” the essay was generated by AI, he said that wasn’t enough for him without a confession.
“The confession was important because everything else looks circumstantial,” he said. “For AI-generated content, there is no tangible evidence, and tangible evidence carries much more weight than circumstantial evidence.”
Aumann said that while he believed the chatbot analysis would be enough evidence to initiate disciplinary action, AI plagiarism still presents a new challenge for colleges and universities.
He said: “Unlike the old plagiarism cases, where you could just say, ‘Hey, here’s a paragraph from Wikipedia,’ there is no knockdown proof you can provide other than the chatbot saying it’s a statistical probability.”