WHY THIS MATTERS IN BRIEF
As the text-writing bot ChatGPT takes on the world, more and more people are getting unhappy about it, and now it's the scientists' turn.
After passing business, history, and medical exams, and writing books, code, and coursework, among all manner of other skills, the Artificial Intelligence (AI) chatbot known as ChatGPT that's taken the world by storm has finally made its formal debut in the scientific literature, racking up at least four authorship credits on published papers and preprints. And, months after an AI was awarded its own patent for inventing a new product, this is just the latest in a long line of AI firsts that's got the research and legal communities scratching their collective noggins.
As a consequence of this latest development, journal editors, researchers, and publishers are now debating the place of such AI tools in the published literature, and whether it's appropriate to cite the bot as an author. Publishers are racing to create policies for the chatbot, which was released as a free-to-use tool in November by tech company OpenAI in San Francisco, California.
ChatGPT is a large language model (LLM), which generates convincing sentences by mimicking the statistical patterns of language in a huge database of text collated from the Internet. The bot is already disrupting sectors including academia: in particular, it is raising questions about the future of university essays and research production.
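For readers curious about what "mimicking the statistical patterns of language" means in practice, here is a deliberately tiny sketch in Python. It is not OpenAI's method — ChatGPT uses a transformer neural network trained on vastly more text — but it shows the core idea shared by all language models: learn which words tend to follow which, then generate text by sampling likely continuations. The toy corpus and function names here are purely illustrative.

```python
# A toy bigram "language model": count which word follows which in a
# tiny corpus, then generate text by sampling continuations in
# proportion to those counts. Real LLMs replace the lookup table with
# a neural network over billions of parameters, but the task is the
# same: predict the next token from the context so far.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Build the bigram table: for each word, count what follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """Sample a continuation, weighting each option by how often it
    followed the previous word in the training text."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: the word never appeared mid-text
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The output is fluent-sounding but has no grounding in truth, only in frequency — which, scaled up enormously, is also why ChatGPT can produce convincing sentences that are confidently wrong, a concern that recurs throughout the debate below.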
Publishers and preprint servers contacted by Nature agree that AIs such as ChatGPT do not fulfil the criteria for a study author, because they cannot take responsibility for the content and integrity of scientific papers. But some publishers say that an AI’s contribution to writing papers can be acknowledged in sections other than the author list. In one case, an editor told Nature that ChatGPT had been cited as a co-author in error, and that the journal would correct this.
ChatGPT is one of 12 authors on a preprint research paper about using the tool for medical education, posted on the medical repository medRxiv in December last year.
The team behind the repository and its sister site, bioRxiv, are discussing whether it’s appropriate to use and credit AI tools such as ChatGPT when writing studies, says co-founder Richard Sever, assistant director of Cold Spring Harbor Laboratory Press in New York. Conventions might change, he adds.
“We need to distinguish the formal role of an author of a scholarly manuscript from the more general notion of an author as the writer of a document,” says Sever. Authors take on legal responsibility for their work, so only people should be listed, he says.
“Of course, people may try to sneak it in — this already happened at medRxiv — much as people have listed pets, fictional people, etc. as authors on journal articles in the past, but that’s a checking issue rather than a policy issue.”
Victor Tseng, the preprint’s corresponding author and medical director of Ansible Health in Mountain View, California, did not respond to a request for comment.
Meanwhile, an editorial in the journal Nurse Education in Practice this month credits the AI as a co-author, alongside Siobhan O'Connor, a health-technology researcher at the University of Manchester, UK. Roger Watson, the journal's editor-in-chief, says that this credit slipped through in error and will soon be corrected.
“That was an oversight on my part,” he says, because editorials go through a different management system from research papers.
And Alex Zhavoronkov, chief executive of Insilico Medicine, an AI-powered drug-discovery company in Hong Kong, credited ChatGPT as a co-author of a perspective article in the journal Oncoscience last month. He says that his company has published more than 80 papers produced by generative AI tools.
“We are not new to this field,” he says. The latest paper discusses the pros and cons of taking the drug rapamycin, in the context of a philosophical argument called Pascal’s wager. ChatGPT wrote a much better article than previous generations of generative AI tools had, says Zhavoronkov.
He says that Oncoscience peer reviewed this paper after he asked its editor to do so. The journal did not respond to reporters' requests for comment.
A fourth article, co-written by an earlier chatbot called GPT-3 and posted on French preprint server HAL in June 2022, will soon be published in a peer-reviewed journal, says co-author Almira Osmanovic Thunström, a neurobiologist at Sahlgrenska University Hospital in Gothenburg, Sweden. She says one journal rejected the paper after review, but a second accepted it with GPT-3 as an author after she rewrote the article in response to reviewer requests.
The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship.
“An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says.
“We would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism,” says Holden Thorp, editor-in-chief of the Science family of journals in Washington DC.
The publisher Taylor & Francis in London is reviewing its policy, says director of publishing ethics and integrity Sabina Alam. She agrees that authors are responsible for the validity and integrity of their work, and should cite any use of LLMs in the acknowledgements section. Taylor & Francis hasn’t yet received any submissions that credit ChatGPT as a co-author.
The board of the physical-sciences preprint server arXiv has had internal discussions and is beginning to converge on an approach to the use of generative AIs, says scientific director Steinn Sigurdsson, an astronomer at Pennsylvania State University in University Park. He agrees that a software tool cannot be an author of a submission, in part because it cannot consent to terms of use and the right to distribute content. Sigurdsson isn’t aware of any arXiv preprints that list ChatGPT as a co-author, and says guidance for authors is coming soon.
There are already clear authorship guidelines that mean ChatGPT shouldn’t be credited as a co-author, says Matt Hodgkinson, a research-integrity manager at the UK Research Integrity Office in London, speaking in a personal capacity. One guideline is that a co-author needs to make a “significant scholarly contribution” to the article — which might be possible with tools such as ChatGPT, he says. But it must also have the capacity to agree to be a co-author, and to take responsibility for a study — or, at least, the part it contributed to.
“It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” he says.
Zhavoronkov says that when he tried to get ChatGPT to write papers more technical than the perspective he published, it failed.
“It very often returns statements that are not necessarily true, and if you ask it several times the same question, it will give you different answers,” he says. “So I will definitely be worried about the misuse of the system in academia, because now, people without domain expertise would be able to try and write scientific papers.”