Google has introduced a technology that embeds a digital watermark directly into the pixels of an image: invisible to the human eye, but detectable by software, so AI-generated images can be identified.
DeepMind, Google's AI arm, developed the technology, called SynthID, to identify images generated by artificial intelligence (AI).
Currently in beta testing, the tool addresses the problem of distinguishing AI-generated images from those created by humans. With the popularity of generative AI tools like Midjourney, it has become very hard to tell real photographs apart from realistic machine-created ones. For now, however, SynthID applies only to Google's own AI image generator, Imagen.
The technology embeds a watermark into the image pixels. To a human viewer, nothing visually changes, but dedicated software can decode the embedded data and identify the image's origin. Initially, SynthID is being tested with a limited number of select Vertex AI customers.
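Google has not published how SynthID actually encodes its watermark. Still, the general idea of hiding data in pixel values without visible change can be illustrated with a deliberately simplified least-significant-bit (LSB) sketch. This is a classic steganography technique, not Google's method, and unlike a production watermark it would not survive cropping or compression; the function names and the toy 8-pixel "image" below are invented for illustration:

```python
# Toy illustration of an invisible pixel watermark (NOT SynthID's
# actual, unpublished scheme): hide a bit string in the least
# significant bit of grayscale pixel values. Changing a pixel's
# value by at most 1 out of 255 is imperceptible to the eye.

def embed_watermark(pixels, bits):
    """Return a copy of `pixels` with `bits` written into the LSBs."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least significant bit
    return out

def extract_watermark(pixels, length):
    """Read `length` bits back out of the pixel LSBs."""
    return [p & 1 for p in pixels[:length]]

image = [200, 201, 199, 198, 202, 203, 200, 201]  # fake 8-pixel "image"
mark = [1, 0, 1, 1]                               # 4-bit watermark

stamped = embed_watermark(image, mark)
assert extract_watermark(stamped, 4) == mark              # data recovered
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))  # change invisible
```

A real robust watermark like SynthID's is instead learned by neural networks and spread across the whole image, precisely so it remains detectable after the "common image manipulations" Google mentions, where naive LSB hiding would be destroyed.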
As Google embraces the potential of generative AI, adding artificial intelligence tools from Meta Platforms and Anthropic to its cloud platform, the company also wants to tackle the potential spread of misinformation. AI-generated content often looks so real that people have no idea they are interacting with generated media that does not reflect real events.
“We’re committed to connecting people with high-quality information, and upholding trust between creators and users across society. Part of this responsibility is giving users more advanced tools for identifying AI-generated images so their images — and even some edited versions — can be identified at a later date,” says a Google DeepMind press release.
Although the company states that the new watermark-based technology remains accurate under many common image manipulations, Google admits that SynthID is not perfect and is not “foolproof against extreme image manipulation”.