Bot or scientist? The controversial use of ChatGPT in science

For some, they are a threat. For others, an opportunity. Chatbots based on artificial intelligence hold centre stage in the international debate. Meanwhile, top scientific journals have announced new editorial policies that ban or restrict researchers from using them to write scientific papers.

They compose songs, paint pictures and write poems. In the last few years, artworks produced by artificial intelligence have made giant strides, getting closer and closer to the mastery and sensitivity of human beings. Notably, neural networks have learned to generate fluent language, churning out sentences that are increasingly hard to distinguish from text written by humans. Large language models (LLMs) such as the popular ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code. The big worry in the research community – according to an editorial published in Nature – is that students and scientists could deceitfully pass off AI-written text as their own, or use LLMs simplistically (for example, conducting an incomplete literature review) and produce work that is unreliable.

How does ChatGPT work?

ChatGPT (Generative Pre-trained Transformer) is a chatbot launched in November 2022 by OpenAI, the Californian company behind the popular Dall-E software, which generates digital images from natural language descriptions. It is built on top of OpenAI’s GPT-3 family of large language models (LLMs) and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques. This complex neural network has been trained on a titanic data set of text: The New York Times Magazine reports that, in GPT-3’s case, it comprised “roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books”. GPT-3 is the most celebrated of the LLMs, but it is neither the first nor the only one. Its success depends on the fact that OpenAI has made the chatbot free to use and easily accessible to people without technical expertise. Millions are using it, most often for fun, fuelling excitement but also worries about these tools.
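The core mechanism these models share, predicting the most plausible next token from the text seen so far, can be illustrated with a toy sketch. This is purely illustrative: real LLMs use transformer networks with billions of learned parameters, not the simple bigram word counts shown here, and the tiny corpus is invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model: dict, word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    candidates = model.get(word.lower())
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

corpus = (
    "large language models generate text one token at a time "
    "language models predict the next token from context"
)
model = train_bigram(corpus)
print(predict_next(model, "language"))  # prints "models": it follows "language" twice
```

An LLM does conceptually the same thing at vastly larger scale, which is why its output mirrors the statistical patterns of its training data rather than any verified understanding.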

Editors’ position

Several preprints and published articles have already credited ChatGPT with formal authorship. In a recent study, abstracts created by ChatGPT were submitted to academic reviewers, who caught only 63% of these fakes. “That’s a lot of AI-generated text that could find its way into the literature soon”, commented Holden Thorp, editor-in-chief of the Science family of journals, underlining that “our authors certify that they themselves are accountable for the research in the paper. Still, to make matters explicit, we are now updating our license and Editorial Policies to specify that text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author”. In the last few weeks, the publishers of thousands of scientific journals – Science among them – have banned or restricted contributors’ use of advanced AI-driven chatbots amid concerns that they could pepper academic literature with flawed and even fabricated research.


“An attribution of authorship carries accountability for the work, which cannot be effectively applied to LLMs,” said Magdalena Skipper, editor-in-chief of Nature. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she added. “From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue. Researchers should ask themselves how the transparency and trustworthiness that the process of generating knowledge relies on can be maintained if they or their colleagues use software that works in a fundamentally opaque manner,” the Nature editorial concluded.


Elsevier, which publishes about 2,800 journals, including Cell and the Lancet, has taken a similar stance to Nature, according to Ian Sample, science editor of the Guardian. Its guidelines allow the use of AI tools “to improve the readability and language of the research article, but not to replace key tasks that should be done by the authors, such as interpreting data or drawing scientific conclusions” said Elsevier’s Andrew Davis, adding that authors must declare if and how they have used AI tools.

Distinguishing humans and AI

In response to the concerns of the scientific community, OpenAI announced on its blog that it had trained a new classifier to distinguish between text written by a human and text written by AIs from a variety of providers: “While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that a human wrote AI-generated text: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human”. By the company’s own admission, however, the classifier is not fully reliable. In internal evaluations on a “challenge set” of English texts, it correctly identifies only 26% of AI-written text as “likely AI-written” (true positives), while incorrectly labelling human-written text as AI-written 9% of the time (false positives).
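Those two rates combine in a way that makes the classifier even weaker than it first appears. A short calculation shows why: if only a minority of submitted texts are actually AI-written, the 9% false-positive rate on the much larger pool of human texts swamps the true detections. The 10% base rate below is a hypothetical assumption for illustration, not a figure from OpenAI.

```python
def flag_precision(tpr: float, fpr: float, base_rate: float) -> float:
    """Probability that a text flagged 'likely AI-written' really is AI-written."""
    true_flags = base_rate * tpr           # AI-written texts correctly flagged
    false_flags = (1 - base_rate) * fpr    # human-written texts wrongly flagged
    return true_flags / (true_flags + false_flags)

# OpenAI's reported rates (26% true positives, 9% false positives),
# applied to a hypothetical pool in which 10% of texts are AI-written:
p = flag_precision(tpr=0.26, fpr=0.09, base_rate=0.10)
print(f"{p:.0%}")  # prints "24%": most flagged texts would in fact be human-written
```

Under these assumptions, roughly three out of four texts the classifier flags would be false accusations against human authors, which is why such tools cannot, on their own, police the scientific literature.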

Picture credits: CC from Focal Foto on Flickr
