BIP ATL News & Media Platform


Has Google’s AI watermarking system been reverse-engineered?

Apr 20, 2026  Twila Rosenbaum

Google's AI Watermarking System: Claims of Reverse Engineering

Recent reports have surfaced regarding a software developer's assertion that they have successfully reverse-engineered Google DeepMind's SynthID system, a watermarking tool designed for AI-generated images. While the developer, known as Aloshdenny, claims to have demonstrated methods for stripping watermarks from images, Google has firmly disputed these claims, arguing that the system remains intact and effective.

Aloshdenny's work, published on GitHub, reportedly required only 200 images generated by Google's Gemini AI, some signal processing techniques, and a considerable amount of free time. In a post on Medium, Aloshdenny humorously noted, "No neural networks. No proprietary access. Turns out if you’re unemployed and average enough ‘pure black’ AI-generated images, every nonzero pixel is literally just the watermark staring back at you." Google, however, argues that characterizing this as a successful reverse-engineering of the system is misleading.

SynthID is heralded as a near-invisible watermarking system that embeds itself within the pixels of images at creation, making it challenging to remove without compromising image quality. Google’s AI models, including those generating content for YouTube, employ this technology to tag and identify generated media effectively.

A side-by-side comparison in Aloshdenny's documentation illustrated the subtlety of SynthID's watermark. The left image displayed watermarked content, while the right showcased an image where the watermark had been partially removed, with minimal visual degradation evident. Aloshdenny remarked on the engineering quality of SynthID, acknowledging that while he could confuse the watermark’s decoders, completely erasing the watermark was beyond his reach.

Reverse Engineering Process

The process outlined by Aloshdenny for reversing SynthID is complex and might be daunting for non-developers. He detailed a simplified version of his method:

  • Generate 200 entirely black or pure white images using Gemini.
  • Enhance the contrast and saturation, then denoise the saturation to reveal watermark patterns.
  • Average these patterns together to ascertain the magnitude and phase of the watermark signal across frequency channels.
  • Search for these frequency signatures in other images and attempt to remove them at the insertion angle used during generation.
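The averaging and frequency-analysis steps above can be sketched in a few lines of NumPy. This is a hypothetical illustration using synthetic data: the watermark pattern, image size, and noise levels are invented for demonstration and have no connection to SynthID's actual signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in: a faint fixed "watermark" pattern embedded in
# otherwise near-black images, plus independent per-image noise.
H, W = 32, 32
watermark = rng.uniform(2, 6, size=(H, W))  # fixed low-amplitude signal

# Simulate 200 "pure black" generations, each carrying the same pattern.
images = [np.clip(watermark + rng.normal(0, 1, (H, W)), 0, 255)
          for _ in range(200)]

# Averaging cancels the zero-mean noise, leaving the fixed pattern:
# this is the core of the technique the write-up describes.
avg = np.mean(images, axis=0)

# Magnitude and phase of the recovered signal per frequency channel.
spectrum = np.fft.fft2(avg)
magnitude = np.abs(spectrum)
phase = np.angle(spectrum)
```

With 200 samples, the per-pixel noise in `avg` shrinks by a factor of roughly √200, so the recovered average correlates strongly with the embedded pattern. Locating and subtracting such a signature in arbitrary images is the far harder part, which, as the developer concedes below, he could not fully accomplish.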

Despite his attempts, Aloshdenny concluded that while he could confuse the watermark decoders, he couldn't entirely eliminate the watermarks. He stated, "The fact that the best I could pull off was confuse the decoder enough that it gives up — not actually delete the thing — says a lot about how well it was designed. It’s not perfect. But it’s not trying to be unbreakable. It’s trying to raise the cost of misuse high enough that most people don’t bother." This indicates a strategic design by Google to deter misuse rather than aiming for an impossible level of security.

Google's Response

Google has responded to Aloshdenny's claims, with spokesperson Myriam Khan stating, "It is incorrect to say this tool can systematically remove SynthID watermarks. SynthID is a robust, effective watermarking tool for AI-generated content." This assertion emphasizes Google's confidence in the integrity and resilience of its watermarking technology.

For now, despite the claims of reverse engineering, SynthID does not appear to have been compromised to an extent that would allow easy manipulation by average users. The debate over the efficacy and vulnerability of AI watermarking systems continues, highlighting the ongoing tension between innovation and security in artificial intelligence.

In conclusion, the complexities surrounding AI watermarking and the claims of reverse engineering by developers like Aloshdenny reflect a broader discussion on the need for secure and reliable methods of content identification and protection in the rapidly evolving landscape of AI technology.


Source: The Verge News


