I, Science

The science magazine of Imperial College

by Mikayla Hu (23 January 2023)

Using artificial intelligence (AI) to write articles is no longer a futuristic concept. While the ethics of letting AI bots produce media content such as news reports is still being debated, the technology also poses a threat to academia that calls for attention: it is now possible for scholars to be ‘fooled’ by research papers generated or modified by AI.

A recent report investigated scientific abstracts generated by ChatGPT, a chatbot launched by OpenAI, from nothing more than article titles and journal names. According to the research, all of the generated abstracts passed automatic plagiarism detection. They were then mixed with genuine, human-written abstracts and blindly presented to human reviewers to see whether the AI output could be spotted. The reviewers correctly identified 68% of the generated abstracts, but also incorrectly flagged 14% of the genuine abstracts as AI-generated.

Such a result is concerning: it could impair research integrity and allow vague, incorrect or groundless content to be published as ‘solid’ scientific findings. Another possible issue with reliance on AI writing in scientific research is that it may hinder revolutionary progress, since data interpretation and discussion would be generated from existing patterns of thinking.

So what should we do about it? A college student, Edward Tian, recently developed an application named GPTZero to detect content produced by ChatGPT. It relies on two measures: the perplexity of the text (how predictable the writing is to a language model) and its burstiness (how much the sentences vary in structure and complexity). The basic idea is to have an AI read the text and see how ‘familiar’ the writing feels to it: machine-generated prose tends to be predictable and uniform, while human prose is more surprising and varied.
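To make the idea concrete, here is a minimal sketch of how a perplexity-and-burstiness check could work. It assumes GPT-2 as the scoring model via the Hugging Face transformers library; this illustrates the general approach only, not GPTZero's actual implementation, and the sentence-splitting and statistics are deliberately simplified.

```python
# A rough sketch of the perplexity/burstiness idea behind detectors like
# GPTZero. Assumes GPT-2 (via Hugging Face `transformers`) as the scorer;
# this is NOT GPTZero's real code.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """How 'surprised' GPT-2 is by the text; a low score means the
    text is predictable, which weakly suggests machine authorship."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # next-token cross-entropy loss; exp(loss) is the perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Spread (standard deviation) of per-sentence perplexities.
    Human writing tends to mix plain and surprising sentences;
    model output is usually more uniform."""
    sentences = [s.strip() for s in text.split(".") if len(s.split()) > 3]
    if len(sentences) < 2:
        return 0.0
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5

sample = ("The cat sat quietly on the warm mat. Quantum foam, however, "
          "refuses such domestic certainties entirely.")
print(f"perplexity={perplexity(sample):.1f}  "
      f"burstiness={burstiness(sample):.1f}")
```

A detector built on this idea would then compare the two numbers against thresholds learned from known human and machine text; low perplexity combined with low burstiness is the telltale pattern.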

Despite the success of developing a defensive tool like this, we might not be completely free from misleading AI-produced content. Toby Walsh, a professor of artificial intelligence at the University of New South Wales, believes the contest between language models like ChatGPT and counteracting tools like GPTZero will continue. Walsh told The Guardian: “it’s quite easy to ask ChatGPT to rewrite in a more personable style … like rephrasing as an 11-year-old,” adding, “This will make it harder but it won’t stop it.” ChatGPT users could find ways around any new detection tools by asking the bot to add more “randomness”, Walsh said.

So what do you think of using one AI tool to counteract the writing of another? Personally, it looks somewhat suspicious to me…