Google has introduced a new feature in its Gemini app that lets users ask a simple question such as “Is this AI-generated?” when uploading an image; Gemini will then report whether the image was created or edited by Google’s AI.
Here are the key points:
- The feature relies on Google’s proprietary watermarking system, SynthID. When an image is generated or edited by a Google AI tool, invisible markers (watermarks) are embedded that allow later verification.
- For now, the feature is limited to images created or edited by Google’s AI tools. It does not yet reliably detect images generated by other platforms like DALL·E or Midjourney.
- Google plans to expand detection support to cover broader standards via the Coalition for Content Provenance and Authenticity (C2PA) credentials, meaning future versions may detect images from many AI tools, not just Google’s own.
- The initiative is part of Google’s broader push toward “content authenticity” in an era of widespread AI-generated media.
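SynthID’s actual technique is proprietary and designed to survive compression, cropping, and other edits; the details are not public. As a toy illustration of the general idea behind invisible watermarking, the sketch below hides a known bit pattern in the least-significant bits of pixel values and checks for it later. The signature, pixel values, and function names are all hypothetical, not anything SynthID actually does.

```python
# Toy illustration of invisible watermarking (NOT SynthID's real method):
# hide a known bit pattern in the least-significant bits of pixel values,
# then check for that pattern to "verify" the image.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical signature bits

def embed(pixels):
    """Return a copy of `pixels` with WATERMARK written into the LSBs."""
    marked = list(pixels)
    for i, bit in enumerate(WATERMARK):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the lowest bit
    return marked

def is_marked(pixels):
    """True if the first len(WATERMARK) pixel LSBs match the signature."""
    return [p & 1 for p in pixels[:len(WATERMARK)]] == WATERMARK

# Fake grayscale pixel values standing in for an image
image = [200, 13, 57, 90, 121, 33, 7, 244, 180]

assert is_marked(embed(image))  # watermarked copy is detected
assert not is_marked(image)     # original carries no signature
```

A real system like SynthID spreads its signal across the whole image in a way that survives resizing and re-encoding, which a naive LSB scheme does not; the sketch only conveys the embed-then-verify structure.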
Why this matters
- With AI image generation becoming more realistic and accessible, it’s increasingly difficult to tell whether an image is “real” (photograph or human-created) or synthetic. This verification tool empowers users to check image origins.
- For journalists, educators, and consumers, this can help combat misinformation, deep-fakes, and manipulated visuals.
- The on-device or app-based check adds a layer of transparency and trust for visual content shared online.
Limitations & what to watch
- Since the detection currently only works for Google’s own AI-generated images (those stamped with SynthID), many AI images generated elsewhere will not yet be flagged. So it’s not a full solution.
- There is a broader challenge: for the feature to be truly effective, many platforms (social media, news outlets) must adopt similar watermarking and verification standards, a point Google itself acknowledges.
- The feature currently handles only images; Google says video and audio verification is coming “soon.”
How you can use it (if you have Gemini)
- Open the Gemini app.
- Upload an image (for example from your phone, screenshot, or image file).
- In the chat, type something like “Was this created with Google AI?” or “Is this AI-generated?”
- Gemini will analyse the image for the SynthID watermark and report whether it recognises the image as AI-generated or edited by Google’s tools (blog.google).
- Keep in mind: the absence of a watermark may mean the image was not made by Google AI, but it does not guarantee the image is “real” or human-made.
What this means going forward
- As Google expands support for C2PA and other credentials, this type of verification could become more universal — meaning you could check images regardless of which AI tool generated them.
- We may see similar tools being integrated into social networks, news platforms and browser plugins to help flag AI content at large scale.
- For creators and publishers: embedding metadata (watermarks, provenance) into AI-generated content will likely become a best practice to maintain trust and transparency.
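For a sense of what C2PA provenance checking looks like in practice today, the Content Authenticity Initiative publishes an open-source command-line tool, c2patool, that can inspect a file’s embedded provenance manifest. The filename below is a placeholder, and exact flags and output format may vary by version:

```shell
# Inspect C2PA provenance metadata embedded in an image, if any.
# Requires c2patool from https://github.com/contentauth/c2patool
c2patool my-image.jpg

# Print a more detailed report of the manifest store
c2patool my-image.jpg --detailed
```

If the file carries Content Credentials, the tool prints a JSON manifest describing who produced the asset and what tools touched it; if not, it reports that no manifest was found.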
Bottom line
The launch of Gemini’s ability to detect AI-generated/edited images marks an important step toward addressing the “trust gap” in visual media. While it’s not yet comprehensive, and currently tied to Google’s own AI ecosystem, it shows how major tech platforms are building tools for content authenticity. Over time, as standards like C2PA spread and more platforms participate, this kind of verification could help make the online image ecosystem more transparent and reliable.