Bot or scientist? The controversial use of ChatGPT in science

For some, they are a threat; for others, an opportunity. Chatbots based on artificial intelligence hold centre stage in the international debate. Meanwhile, top scientific journals have announced new editorial policies that ban or restrict researchers' use of them to write scientific papers.

They compose songs, paint pictures and write poems. In the last few years, artworks produced by artificial intelligence have made giant strides, getting closer and closer to the mastery and sensitivity of human beings. Notably, neural networks have learned to generate fluent language, churning out sentences that are increasingly hard to distinguish from text written by humans. Large language models (LLMs) such as the popular ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code. The big worry in the research community – according to an editorial published in Nature – is that students and scientists could deceitfully pass off AI-written text as their own, or use LLMs simplistically (for example, to conduct an incomplete literature review) and produce work that is unreliable.

How does ChatGPT work?

ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot launched in November 2022 by OpenAI, the Californian company behind the popular DALL-E software, which generates digital images from natural-language descriptions. It is built on top of OpenAI's GPT-3 family of large language models and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques. This complex neural network has been trained on a vast data set of text: The New York Times Magazine reports that, in GPT-3's case, this amounted to "roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books". GPT-3 is the most celebrated of the LLMs, but it was neither the first nor is it the only one. Its success owes much to the fact that OpenAI has made the chatbot free to use and easily accessible to people without technical expertise. Millions are using it, most often for fun, fuelling excitement but also worries about these tools.
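
To make this concrete, here is a minimal sketch of how a ChatGPT-style model can be queried programmatically. It is an illustration only, assuming the official openai Python client (version 1.0 or later) and an OPENAI_API_KEY environment variable; the model name and prompt are placeholders, not details taken from this article.

    # Illustrative sketch: querying a ChatGPT-style model through OpenAI's API.
    # Assumes the official `openai` Python client (>= 1.0) is installed and the
    # OPENAI_API_KEY environment variable is set; model and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # a GPT-3-family chat model
        messages=[
            {"role": "system", "content": "You are a helpful research assistant."},
            {"role": "user", "content": "Explain transfer learning in two sentences."},
        ],
    )

    print(response.choices[0].message.content)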

Editors’ position

Several preprints and published articles have already credited ChatGPT with formal authorship. In a recent study, abstracts created by ChatGPT were submitted to academic reviewers, who caught only 63% of these fakes. "That's a lot of AI-generated text that could find its way into the literature soon", commented Holden Thorp, editor-in-chief of the Science family of journals, underlining that "our authors certify that they themselves are accountable for the research in the paper. Still, to make matters explicit, we are now updating our license and Editorial Policies to specify that text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author". In the last few weeks, the publishers of thousands of scientific journals – including Science itself – have banned or restricted contributors' use of the advanced AI-driven chatbot amid concerns that it could pepper academic literature with flawed and even fabricated research.

"An attribution of authorship carries accountability for the work, which cannot be effectively applied to LLMs", said Magdalena Skipper, editor-in-chief of Nature. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, where appropriate, she added. "From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue. Researchers should ask themselves how the transparency and trustworthiness that the process of generating knowledge relies on can be maintained if they or their colleagues use software that works in a fundamentally opaque manner", a Nature editorial warned.

Elsevier, which publishes about 2,800 journals, including Cell and The Lancet, has taken a similar stance to Nature, according to Ian Sample, science editor of the Guardian. Its guidelines allow the use of AI tools "to improve the readability and language of the research article, but not to replace key tasks that should be done by the authors, such as interpreting data or drawing scientific conclusions", said Elsevier's Andrew Davis, adding that authors must declare whether and how they have used AI tools.

Distinguishing humans from AI

In response to the concerns of the scientific community, OpenAI announced on its blog that it had trained a new classifier to distinguish between text written by a human and text written by AIs from a variety of providers: "While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that a human wrote AI-generated text: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human". By the company's own admission, however, the classifier is not fully reliable. In internal evaluations on a "challenge set" of English texts, it correctly identified 26% of AI-written text as "likely AI-written" (true positives), while incorrectly labelling human-written text as AI-written 9% of the time (false positives).
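
The trade-off between those two figures is easier to see with a worked example. The sketch below computes true-positive and false-positive rates for a hypothetical detector on made-up labels; the data and the resulting numbers are purely illustrative and have nothing to do with OpenAI's evaluation set.

    # Illustrative only: computing the two error rates quoted above for a
    # hypothetical AI-text detector. The labels below are made up.
    def detection_rates(y_true, y_pred):
        """y_true: 1 = text actually AI-written, 0 = human-written.
        y_pred: 1 = classifier flags the text as "likely AI-written"."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        positives = sum(y_true)              # AI-written texts
        negatives = len(y_true) - positives  # human-written texts
        return tp / positives, fp / negatives

    # Ten hypothetical texts: five AI-written, five human-written.
    y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
    y_pred = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]
    tpr, fpr = detection_rates(y_true, y_pred)
    print(f"true-positive rate = {tpr:.0%}, false-positive rate = {fpr:.0%}")
    # -> true-positive rate = 40%, false-positive rate = 20%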
