The invisible watermark will help identify generative text and video


Among Google’s swath of new AI models and tools announced today, the company is also expanding its AI content watermarking and detection technology to work across two new mediums: it can now mark digitally generated video as well as AI-generated text. Google DeepMind CEO Demis Hassabis discussed the expanded SynthID watermark imprinting system, along with the team’s new AI tools like the Veo video generator, during his first-ever appearance on stage at the Google I/O developer conference on Tuesday.
Watermarking AI-generated content will matter more and more as the technology grows more prevalent, especially when AI is used for malicious purposes. It has already been used to spread political misinformation, claim someone said something they never did, and create nonconsensual sexual content.
SynthID was announced last August and started as a tool for imprinting AI-generated imagery with a watermark that humans can’t see but the system can detect. The approach differs from other aspiring watermarking protocol standards, like C2PA, which adds cryptographic metadata to AI-generated content.
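Google hasn’t published how SynthID’s watermark is actually embedded, so any code here is purely illustrative. As a rough intuition for the pixel-level approach (as opposed to C2PA-style metadata, which travels alongside the pixels and can be stripped when a file is re-encoded), the Python sketch below hides a hypothetical bit pattern in an image’s least significant bits; the payload and functions are invented for illustration, and real systems like SynthID are far more robust, surviving cropping, resizing, and compression.

```python
# Toy illustration only -- SynthID's actual embedding method is unpublished
# and far more robust than this. The sketch hides a hypothetical bit pattern
# in pixel least-significant bits: invisible to the eye (each pixel value
# changes by at most 1), but detectable by software that knows the pattern.
import numpy as np

PAYLOAD = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical watermark bits

def embed(pixels: np.ndarray) -> np.ndarray:
    """Return a copy of a grayscale image with PAYLOAD written into the LSBs."""
    marked = pixels.copy()
    flat = marked.ravel()  # view into the copy, so writes land in `marked`
    flat[:len(PAYLOAD)] = (flat[:len(PAYLOAD)] & 0xFE) | PAYLOAD
    return marked

def detect(pixels: np.ndarray) -> bool:
    """Check whether the expected bit pattern is present in the LSBs."""
    lsbs = pixels.ravel()[:len(PAYLOAD)] & 1
    return bool(np.array_equal(lsbs, PAYLOAD))

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(image)
assert detect(marked)
# Imperceptibility check: no pixel value moved by more than 1.
assert int(np.abs(marked.astype(int) - image.astype(int)).max()) <= 1
```

The contrast this is meant to show: metadata like C2PA’s can simply be removed from a file, while a signal baked into the content itself has to be actively scrubbed out to defeat detection.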
Google has also enabled SynthID to inject inaudible watermarks into AI-generated music made with DeepMind’s Lyria model. SynthID is just one of several AI safeguards in development to combat misuse of the tech, safeguards that the Biden administration is directing federal agencies to build guidelines around.
