WHY THIS MATTERS IN BRIEF
Scientific research depends on integrity and accuracy, and time and again researchers who have used synthetic content in their papers have been shown to produce shoddy work.
A little while ago scientific journals came out against researchers publishing papers co-created using Generative Artificial Intelligence (GAI) tools such as ChatGPT. Now, the renowned scientific journal Nature has announced in an editorial that it will not publish images or video created using generative AI tools. The ban comes amid the publication’s concerns over research integrity, consent, privacy, and intellectual property protection as generative AI increasingly permeates the worlds of science and art.
Founded in November 1869, Nature publishes peer-reviewed research from various academic disciplines, mainly in science and technology. It is one of the world’s most cited and most influential scientific journals.
Nature says its recent decision on AI artwork followed months of intense discussions and consultations prompted by the rising popularity and advancing capabilities of generative AI tools like ChatGPT and Midjourney.
“Apart from in articles that are specifically about AI, Nature will not be publishing any content in which photography, videos, or illustrations have been created wholly or partly using generative AI, at least for the foreseeable future,” the publication wrote in an editorial attributed to the journal itself.
The publication considers the issue to fall under its ethical guidelines covering integrity and transparency in published work, which include being able to cite the sources of the data behind an image:
“Why are we disallowing the use of generative AI in visual content? Ultimately, it is a question of integrity. The process of publishing — as far as both science and art are concerned — is underpinned by a shared commitment to integrity. That includes transparency. As researchers, editors and publishers, we all need to know the sources of data and images, so that these can be verified as accurate and true. Existing generative AI tools do not provide access to their sources so that such verification can happen,” they said.
As a result, all artists, filmmakers, illustrators, and photographers commissioned by Nature “will be asked to confirm that none of the work they submit has been generated or augmented using generative AI.”
Nature also notes that attribution of existing work, a core principle of science, is another obstacle to using generative AI artwork ethically in a scientific journal. AI-generated images are difficult to attribute because they are typically synthesized from millions of images fed into the underlying model.
That fact also raises issues of consent and permission, especially where personal identification or intellectual property rights are concerned. Here, too, Nature says that generative AI falls short, routinely using copyright-protected works for training without obtaining the necessary permissions. And then there is the issue of falsehoods: the publication cites deepfakes as accelerating the spread of false information.
However, Nature is not wholly against the use of AI tools. The journal will still permit the inclusion of text produced with the assistance of generative AI tools like ChatGPT, provided this is disclosed with appropriate caveats. The use of these Large Language Model (LLM) tools must be explicitly documented in a paper’s methods or acknowledgments section, and authors must provide sources for all data, including data generated with AI assistance. The journal has firmly stated, though, that no LLM tool will be recognized as an author on a research paper.
While some publications occasionally use clearly labelled AI-generated artwork for editorial purposes when it serves the story in an obvious and non-deceptive way, Nature believes that its role as a scientific journal gives it very little wiggle room when it comes to reading the current tea leaves of legal and ethical AI policy.
“Many national regulatory and legal systems are still formulating their responses to the rise of generative AI,” Nature writes. “Until they catch up, as a publisher of research and creative works, Nature’s stance will remain a simple ‘no’ to the inclusion of visual content created using generative AI.”
However, as generative AI becomes ever more deeply integrated into traditional image-editing tools such as Photoshop, publications like Nature may find it difficult to draw a clear line between what constitutes AI art and non-AI art. Just recently, Adobe introduced generative AI tools, powered by its Adobe Firefly engine, into a beta of Photoshop, and even before that Adobe had been using AI algorithms to power many of its built-in tools for years.
Once legal systems sort out the issues around training AI models on scraped data, the ethics of a piece of artwork may come down less to the exact mechanics of the tool that produced it than to the intent of the person presenting the information within it. In that case, Nature’s AI policy seems to be written with enough flexibility to change in the future: “As we expect things to develop rapidly in this field in the near future, we will review this policy regularly and adapt it if necessary.”