
Google DeepMind launches a watermarking tool for generative content


WHY THIS MATTERS IN BRIEF

Perhaps already years too late, watermarking AI-created content may help us identify it, and help AI models avoid training on it, staving off AI model collapse.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

With many people wondering how we’re going to be able to differentiate between human-created content and Artificial Intelligence (AI) generated content – in part to avoid future AI model collapse – there have been increasing calls for companies to find a way to identify the content their AIs produce. And now, Google DeepMind has launched a new watermarking tool that labels whether images have been generated with AI.

 


 

The tool, called SynthID, will initially be available only to users of Google’s AI image generator Imagen, which is hosted on Google Cloud’s machine learning platform Vertex. Users will be able to generate images using Imagen and then choose whether to add a watermark or not. The hope is that it could help people tell when AI-generated content is being passed off as real, or help protect copyright.

In the past year, the huge popularity of generative AI models has also brought with it the proliferation of AI-generated deepfakes, non-consensual porn, and copyright infringements. Watermarking—a technique where you hide a signal in a piece of text or an image to identify it as AI-generated—has become one of the most popular ideas proposed to curb such harms.

In July, the White House announced it had secured voluntary commitments from leading AI companies such as OpenAI, Google, and Meta to develop watermarking tools in an effort to combat misinformation and misuse of AI-generated content.

At Google’s annual conference I/O in May, CEO Sundar Pichai said the company is building its models to include watermarking and other techniques from the start. Google DeepMind is now the first Big Tech company to publicly launch such a tool.

 


 

Traditionally, images have been watermarked by adding a visible overlay onto them, or by embedding information in their metadata. But this method is “brittle,” and the watermark can be lost when images are cropped, resized, or edited, says Pushmeet Kohli, vice president of research at Google DeepMind.

SynthID is created using two neural networks. One takes the original image and produces another image that looks almost identical to it, but with some pixels subtly modified. This creates an embedded pattern that is invisible to the human eye. The second neural network can spot the pattern and will tell users whether it detects a watermark, suspects the image has a watermark, or finds that it doesn’t have a watermark. Kohli said SynthID is designed in a way that means the watermark can still be detected even if the image is screenshotted or edited—for example, by rotating or resizing it.
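Google has not published SynthID’s actual algorithm, and its learned, edit-resistant watermark is far more sophisticated than anything shown here. But the general idea of an invisible, keyed pixel pattern with a graded detection verdict can be sketched with a deliberately naive example: flipping each pixel’s least significant bit to match a pseudo-random pattern derived from a secret key. The key name, thresholds, and three-way verdict strings below are all illustrative assumptions, and unlike SynthID this toy scheme would not survive cropping or resizing.

```python
import hashlib

def _pattern_bit(key: str, x: int, y: int) -> int:
    """Derive a deterministic pseudo-random bit for pixel (x, y) from a secret key."""
    return hashlib.sha256(f"{key}:{x}:{y}".encode()).digest()[0] & 1

def embed(image, key="demo-key"):
    """Copy the image, setting each pixel's least significant bit to the keyed pattern.
    `image` is a list of rows of 0-255 grayscale values; the change is invisible."""
    return [
        [(p & ~1) | _pattern_bit(key, x, y) for x, p in enumerate(row)]
        for y, row in enumerate(image)
    ]

def detect(image, key="demo-key"):
    """Score how well the image's LSBs match the keyed pattern, then bucket the
    score into the three verdicts the article describes (thresholds are made up)."""
    total = sum(len(row) for row in image)
    hits = sum(
        (p & 1) == _pattern_bit(key, x, y)
        for y, row in enumerate(image)
        for x, p in enumerate(row)
    )
    score = hits / total
    if score > 0.9:
        return "watermark detected"
    if score > 0.6:
        return "watermark suspected"
    return "no watermark found"

# A synthetic 32x32 grayscale "image" for demonstration.
img = [[(x * 7 + y * 13) % 256 for x in range(32)] for y in range(32)]
print(detect(embed(img)))  # the watermarked copy matches the pattern fully
print(detect(img))         # an unmarked image matches only by chance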

Google DeepMind is not the only one working on these sorts of watermarking methods, says Ben Zhao, a professor at the University of Chicago, who has worked on systems to prevent artists’ images from being scraped by AI systems. Similar techniques already exist and are used in the open-source AI image generator Stable Diffusion. Meta has also conducted research on watermarks, although it has yet to launch any public watermarking tools.

Kohli claims Google DeepMind’s watermark is more resistant to tampering than previous attempts at image watermarks, though still not completely immune.

 


 

But Zhao is skeptical. “There are few or no watermarks that have proven robust over time,” he says. Early work on watermarks for text has found that they are easily broken, usually within a few months.

Bad actors have a vested interest in disrupting watermarks, he adds – for example, to claim that deepfaked content is genuine photographic evidence of a non-existent crime or event.

“An attacker seeking to promote deepfake imagery as real, or discredit a real photo as fake, will have a lot to gain, and will not stop at cropping, or lossy compression or changing colors,” Zhao says.

Nevertheless, Google DeepMind’s launch is a good first step and could lead to better information-sharing in the field about which techniques work and which don’t, says Claire Leibowicz, the head of the AI and Media Integrity Program at the Partnership on AI.

“The fact that this is really complicated shouldn’t paralyze us into doing nothing,” she says.

 


 

Kohli told reporters the watermarking tool is “experimental” and said the company wants to see how people use it and learn about its strengths and weaknesses before rolling it out more widely. He refused to say whether Google DeepMind might make the tool more widely available for images other than ones generated by Imagen. He also did not say whether Google will add the watermark to its other AI image generation systems.

This limits its usefulness, says Sasha Luccioni, an AI researcher at startup Hugging Face. Google’s decision to keep the tool proprietary means only Google will be able to both embed and detect these watermarks, she adds.

“If you add a watermarking component to image generation systems across the board, there will be less risk of harms like deepfake pornography,” Luccioni says.
