Bot or scientist? The controversial use of ChatGPT in science

For some, they are a threat. For others, an opportunity. Chatbots based on artificial intelligence hold centre stage in the international debate. Meanwhile, top scientific journals have announced new editorial policies that ban or restrict researchers’ use of these tools to write scientific papers.

They compose songs, paint pictures and write poems. In the last few years, artworks produced by artificial intelligence have made giant strides, getting closer and closer to the mastery and sensitivity of human beings. Notably, neural nets have achieved the ability to generate fluent language, churning out sentences that are increasingly hard to distinguish from text written by humans. Large language models such as the popular ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code. The big worry in the research community – according to an editorial published in Nature – is that students and scientists could deceitfully pass off AI-written text as their own, or use LLMs simplistically (for example, to conduct an incomplete literature review) and produce work that is unreliable.

How does ChatGPT work?

ChatGPT (Generative Pre-trained Transformer) is a chatbot launched in November 2022 by OpenAI – the same Californian company behind the popular Dall-E software, which generates digital images from natural language descriptions. It is built on top of OpenAI’s GPT-3 family of large language models (LLMs) and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques. This complex neural net has been trained on a titanic data set of text: The New York Times Magazine reports that, in GPT-3’s case, this amounted to “roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books”. GPT-3 is the most celebrated of the LLMs, but neither the first nor the only one. Its success depends on the fact that OpenAI has made the chatbot free to use and easily accessible to people without technical expertise. Millions are using it, most often for fun, fuelling excitement but also worries about these tools.
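For readers curious what “built on top of the GPT-3 family” looks like in practice, here is a minimal sketch of how a developer might query a GPT-3-class model through OpenAI’s API using the openai Python package. The model name and prompt are illustrative assumptions, and ChatGPT itself is used through a free web chat interface rather than through code like this.

    # Minimal sketch: querying a GPT-3-family model with the openai Python package.
    # The model name ("text-davinci-003") and the prompt are illustrative assumptions;
    # ChatGPT itself is accessed through a web chat interface, not this API call.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder credential

    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-family model
        prompt="Summarize the role of large language models in scientific writing.",
        max_tokens=150,    # cap the length of the generated text
        temperature=0.7,   # higher values give more varied output
    )

    print(response["choices"][0]["text"].strip())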

Editors’ position

Several preprints and published articles have already credited ChatGPT with formal authorship. In a recent study, abstracts created by ChatGPT were submitted to academic reviewers, who caught only 63% of these fakes. “That’s a lot of AI-generated text that could find its way into the literature soon”, commented Holden Thorp, editor-in-chief of the Science family of journals, underlining that “our authors certify that they themselves are accountable for the research in the paper. Still, to make matters explicit, we are now updating our license and Editorial Policies to specify that text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author”. In the last few weeks, the publishers of thousands of scientific journals – including Science itself – have banned or restricted contributors’ use of the advanced AI-driven chatbot amid concerns that it could pepper the academic literature with flawed and even fabricated research.

 

“An attribution of authorship carries accountability for the work, which cannot be effectively applied to LLMs”, said Magdalena Skipper, editor-in-chief of Nature. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she added. “From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue. Researchers should ask themselves how the transparency and trustworthiness that the process of generating knowledge relies on can be maintained if they or their colleagues use software that works in a fundamentally opaque manner”, the Nature editorial concluded.


Elsevier, which publishes about 2,800 journals, including Cell and the Lancet, has taken a similar stance to Nature, according to Ian Sample, science editor of the Guardian. Its guidelines allow the use of AI tools “to improve the readability and language of the research article, but not to replace key tasks that should be done by the authors, such as interpreting data or drawing scientific conclusions”, said Elsevier’s Andrew Davis, adding that authors must declare if and how they have used AI tools.

Distinguishing humans from AI

In response to the concerns of the scientific community, OpenAI announced on its blog that it had trained a new classifier to distinguish between text written by a human and text written by AIs from a variety of providers: “While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that a human wrote AI-generated text: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human”. By the company’s own admission, however, the classifier is not fully reliable. In internal evaluations on a “challenge set” of English texts, the classifier correctly identified 26% of AI-written text as “likely AI-written” (true positives), while incorrectly labelling human-written text as AI-written 9% of the time (false positives).
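To make those figures concrete, the short calculation below applies Bayes’ rule to the reported rates. The assumed share of AI-written texts in a hypothetical submission pool (10%) is chosen purely for illustration and is not a figure from OpenAI.

    # Worked example (hypothetical prevalence): what a 26% true-positive rate and a
    # 9% false-positive rate imply when the classifier flags a text as "likely AI-written".
    tpr = 0.26          # share of AI-written texts correctly flagged (OpenAI's evaluation)
    fpr = 0.09          # share of human-written texts wrongly flagged (OpenAI's evaluation)
    prevalence = 0.10   # assumed fraction of AI-written texts in the pool (illustrative only)

    # Bayes' rule: probability that a flagged text really is AI-written
    p_flagged = tpr * prevalence + fpr * (1 - prevalence)
    p_ai_given_flag = (tpr * prevalence) / p_flagged

    print(f"Share of texts flagged: {p_flagged:.1%}")                    # about 10.7%
    print(f"Chance a flagged text is AI-written: {p_ai_given_flag:.0%}")  # about 24%

Under these illustrative assumptions, roughly three out of four flagged texts would in fact be human-written, which helps explain why OpenAI cautions against relying on the classifier alone.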

Picture credits: CC, from Focal Foto on Flickr
