Bot or scientist? The controversial use of ChatGPT in science

For some, they are a threat. For others, an opportunity. Chatbots based on artificial intelligence hold centre stage in the international debate. Meanwhile, top scientific journals have announced new editorial policies that ban or restrict researchers from using them to write scientific papers.

They compose songs, paint pictures and write poems. In the last few years, artworks produced by artificial intelligence have made giant strides, getting closer and closer to the mastery and sensitivity of human beings. Notably, neural nets have achieved the ability to generate fluent language, churning out sentences that are increasingly hard to distinguish from text written by humans. Large language models (LLMs) such as the popular ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code. The big worry in the research community – according to an editorial published in Nature – is that students and scientists could deceitfully pass off AI-written text as their own, or use LLMs simplistically (for example, to conduct an incomplete literature review) and produce work that is unreliable.

How does ChatGPT work?

ChatGPT (Generative Pre-trained Transformer) is a chatbot launched in November 2022 by OpenAI, the Californian company behind the popular Dall-E software, which generates digital images from natural language descriptions. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques. This complex neural net has been trained on a titanic data set of text: in GPT-3’s case, The New York Times Magazine reports, “roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books”. GPT-3 is the most celebrated of the LLMs, but neither the first nor the only one. Its success owes much to the fact that OpenAI has made the chatbot free to use and easily accessible to people without technical expertise. Millions are using it, most often for fun, fuelling both excitement and worries about these tools.
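For readers curious how applications plug into such a model in practice, here is a minimal sketch of a request to OpenAI’s chat API in Python. It assumes the `openai` package (v1 interface) is installed and an API key is configured; the model name and prompt are illustrative placeholders, not part of the reporting above.

```python
# Minimal sketch: querying an OpenAI chat model from Python.
# Assumes the `openai` package (v1 interface) is installed and the
# OPENAI_API_KEY environment variable is set. The model name and
# prompt below are placeholders chosen for illustration.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # a ChatGPT-family model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize RLHF in two sentences."},
    ],
)

print(response.choices[0].message.content)
```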

Editors’ position

Several preprints and published articles have already credited ChatGPT with formal authorship. In a recent study, abstracts created by ChatGPT were submitted to academic reviewers, who caught only 63% of these fakes. “That’s a lot of AI-generated text that could find its way into the literature soon”, commented Holden Thorp, editor-in-chief of the Science family of journals, underlining that “our authors certify that they themselves are accountable for the research in the paper. Still, to make matters explicit, we are now updating our license and Editorial Policies to specify that text generated by ChatGPT (or any other AI tools) cannot be used in work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author”. In the last few weeks, the publishers of thousands of scientific journals – including Science itself – have banned or restricted contributors’ use of the advanced AI-driven chatbot amid concerns that it could pepper academic literature with flawed and even fabricated research.

“An attribution of authorship carries accountability for the work, which cannot be effectively applied to LLMs,” said Magdalena Skipper, editor-in-chief of Nature. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she added. “From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue. Researchers should ask themselves how the transparency and trustworthiness that the process of generating knowledge relies on can be maintained if they or their colleagues use software that works in a fundamentally opaque manner,” reads the Nature editorial.

Elsevier, which publishes about 2,800 journals, including Cell and The Lancet, has taken a similar stance to Nature’s, according to Ian Sample, science editor of the Guardian. Its guidelines allow the use of AI tools “to improve the readability and language of the research article, but not to replace key tasks that should be done by the authors, such as interpreting data or drawing scientific conclusions”, said Elsevier’s Andrew Davis, adding that authors must declare if and how they have used AI tools.

Distinguishing humans from AI

In response to the concerns of the scientific community, OpenAI announced on its blog that it had trained a new classifier to distinguish between text written by a human and text written by AIs from a variety of providers: “While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that a human wrote AI-generated text: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human”. By the company’s own admission, however, the classifier is not fully reliable. In internal evaluations on a “challenge set” of English texts, it correctly identified 26% of AI-written text as “likely AI-written” (true positives), while incorrectly labelling human-written text as AI-written 9% of the time (false positives).
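To see why these rates make the classifier hard to rely on, consider what they imply for a single flagged document. The short Python sketch below applies Bayes’ rule to the 26% true-positive and 9% false-positive rates reported by OpenAI; the 20% share of AI-written texts in the pool is an assumed figure chosen purely for illustration, not a number from the company.

```python
# Minimal sketch: what OpenAI's reported rates imply for a flagged text.
# The true-positive rate (26%) and false-positive rate (9%) come from
# OpenAI's own evaluation; the 20% prevalence of AI-written text is an
# assumption made here for illustration only.

tpr = 0.26          # P(flagged | text is AI-written)
fpr = 0.09          # P(flagged | text is human-written)
prevalence = 0.20   # assumed share of AI-written texts in the pool

# Bayes' rule: P(AI-written | flagged)
p_flagged = tpr * prevalence + fpr * (1 - prevalence)
p_ai_given_flag = tpr * prevalence / p_flagged

print(f"P(AI-written | flagged) = {p_ai_given_flag:.2f}")
# With these numbers, roughly 0.42: under half of the texts the
# classifier flags would actually be AI-written.
```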

Picture credits: CC from Focal Foto on Flickr
