ChatGPT is a free online tool that generates text in response to a prompt, including articles, essays, jokes and even poetry, which has gained wide popularity since its debut in November.
The AI classifier, a language model trained on a dataset of pairs of human-written and AI-written text on the same topic, aims to distinguish text that is written by AI. But OpenAI, which recently received a multibillion-dollar investment from Microsoft that industry publications said valued the firm at $US29 billion ($41 billion), has cautioned that its tool has a series of severe limitations.
In tests, OpenAI said its classifier correctly identified AI-written text 26 per cent of the time but described human-written passages as “likely AI-written” in 9 per cent of cases. It is also very unreliable on texts under 1000 characters, and AI-written text can be edited to trick the classifier.
Jan Leike, head of OpenAI’s alignment team tasked with making its systems safer, warned the system was not foolproof, telling AP the method for detecting AI-written text “is imperfect” and would sometimes be wrong.
“It shouldn’t be solely relied upon when making decisions,” Leike said.
Australian universities have reportedly flagged a greater use of traditional assessment types, such as pen-and-paper tests, to prevent students having ChatGPT write their essays for them. Many Australian schools, including public schools in NSW, have banned ChatGPT. But some Australian academics have called for students to be taught to work with AI as a tool they will likely encounter in the workplace.