Google Develops Invisible Watermark to Detect AI Images
Google is testing an invisible digital watermark developed by DeepMind, Google’s AI arm, to identify images created by artificial intelligence (AI). This effort aims to combat disinformation as AI-generated images become more realistic and prevalent.
The technology, called SynthID, makes subtle changes to individual pixels, rendering the watermark invisible to the human eye but detectable by computers. However, DeepMind acknowledges it may not be foolproof against extreme image manipulation.
AI image generators are gaining popularity, allowing people to create images from simple text instructions. This trend raises concerns about copyright and ownership on a global scale. The system for creating and checking watermarks was developed for Google's image generator, Imagen, and it will initially apply only to images produced with that tool.
Traditional visible watermarks are ineffective for identifying AI images because they can easily be edited out or cropped. Tech companies also use hashing to create digital fingerprints of known abusive videos, but those fingerprints break if a video is edited. Google's system instead embeds an invisible watermark that lets its software instantly determine whether an image is real or AI-generated.
How SynthID works
SynthID works using two deep learning models trained together on diverse images. The combined model is optimized for correctly identifying watermarked content while improving imperceptibility by visually aligning the watermark to originals.
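SynthID's actual models are not public, but the trade-off the joint training balances can be illustrated with a toy stand-in: a fixed perturbation pattern scaled by a strength value, where higher strength makes the mark easier to detect but more visible. Everything below (the pattern, the strength values, the correlation detector) is a hypothetical sketch, not DeepMind's method; note also that, unlike the real detector, this simplification compares against the original image.

```python
# Toy sketch of the detectability-vs-imperceptibility trade-off.
# Hypothetical stand-in for SynthID's jointly trained networks.

def embed(pixels, pattern, s):
    """Add the pattern, scaled by strength s, clamping to the 8-bit range."""
    return [max(0, min(255, p + s * c)) for p, c in zip(pixels, pattern)]

def detect(marked, original, pattern):
    """Correlate the residual with the pattern; higher means more detectable."""
    residual = [m - o for m, o in zip(marked, original)]
    return sum(r * c for r, c in zip(residual, pattern)) / len(pattern)

image = [120, 64, 200, 33, 90, 181, 15, 240]   # toy 8-pixel "image"
pattern = [1, -1, 1, -1, 1, -1, 1, -1]          # hypothetical secret pattern

for s in (1, 4, 16):
    marked = embed(image, pattern, s)
    visibility = max(abs(m - o) for m, o in zip(marked, image))
    print(s, detect(marked, image, pattern), visibility)
```

Joint training automates exactly this choice: instead of hand-picking a strength, the embedding network learns perturbations that the detector network can recover while staying visually close to the original.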
Watermarking
SynthID embeds a digital watermark directly into the pixels of an AI-generated image, making it imperceptible to humans. The watermark does not compromise image quality and remains detectable even after modifications such as adding filters, changing colors, or JPEG compression.
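To make the idea of pixel-level embedding concrete, here is a minimal sketch. It is not SynthID's scheme (which is learned and unpublished); it simply hides a hypothetical key pattern by setting each pixel's least significant bit, a change of at most 1 out of 255 that the eye cannot perceive.

```python
# Illustrative only: toy least-significant-bit embedding, not SynthID's method.

def embed_watermark(pixels, key_bits):
    """Set each pixel's least significant bit to the repeating key pattern."""
    return [(p & ~1) | key_bits[i % len(key_bits)]
            for i, p in enumerate(pixels)]

image = [120, 64, 200, 33, 90, 181, 15, 240]   # toy 8-pixel "image"
key = [1, 0, 1, 1]                              # hypothetical secret key
marked = embed_watermark(image, key)
print(marked)  # each pixel differs from the original by at most 1
```

A real learned scheme spreads the signal far more subtly across the image, which is what lets it survive filters and compression that would destroy a naive low-bit mark like this one.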
Identification
Identification works by scanning images for the digital watermark, helping users assess whether content was generated by Imagen. The tool reports confidence levels to aid interpretation: if a watermark is detected, part or all of the image was likely generated by Imagen.
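The detection side of the toy scheme can be sketched the same way. SynthID's detector is a learned model; this hypothetical version just scores how often each pixel's least significant bit matches the secret key, then maps the score to confidence bands like the ones the tool reports.

```python
# Illustrative only: toy blind detector for the LSB sketch, not SynthID's model.

def detect_watermark(pixels, key_bits):
    """Score in [0, 1]: fraction of pixels whose low bit matches the key."""
    hits = sum((p & 1) == key_bits[i % len(key_bits)]
               for i, p in enumerate(pixels))
    return hits / len(pixels)

def confidence(score):
    """Map a raw score to a human-readable confidence band."""
    if score >= 0.9:
        return "likely watermarked"
    if score >= 0.6:
        return "possibly watermarked"
    return "no watermark detected"

key = [1, 0, 1, 1]                               # hypothetical secret key
marked = [121, 64, 201, 33, 91, 180, 15, 241]    # image after embedding
plain = [120, 64, 200, 34, 90, 181, 14, 240]     # unrelated image

print(confidence(detect_watermark(marked, key)))  # likely watermarked
print(confidence(detect_watermark(plain, key)))
```

Unwatermarked images match the key only by chance, which is why a detector reports graded confidence rather than a hard yes or no.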
Even after cropping or editing, Google's software can still identify the watermark, according to Pushmeet Kohli, DeepMind's head of research. However, Kohli said the system is experimental and that user feedback will help assess its robustness. Standardization and transparency are needed in this evolving field to build trust in the information ecosystem.