AI news
October 12, 2024

Google’s Strategy to Manage the Surge of AI-Generated Images in Search Results

90 percent of online content may be synthetically generated by 2026, according to experts.

by Jim Clyde Monge

Today, I saw a Reddit post that showed a screenshot of a Google Image result for “baby peacock.”

The post was captioned “The human internet is dying…” because almost all the images displayed are AI-generated.

Reddit post about AI-generated images taking over Google Image results
Image from Reddit

Intrigued and a bit alarmed, I decided to verify it myself. I quickly Googled the term, and there it was—a flood of AI-generated images dominating the search results.

In another Reddit thread, a user expressed frustration about the difficulty of finding genuine reference images for their artwork because the search results are inundated with AI-generated content.

“It also takes so much longer to find anything good or usable trying to get through all the AI images, and I feel like all the nice art and photos I used to find are drowned out by ugly distorted AI crap that btw for some reason has all the same style making it even more boring.”

This isn’t just a quirky internet anomaly; it’s a growing concern that raises important questions: How did we get here, and what can we do to mitigate this problem?

This was not the case a few months ago.

AI image and video generators appear to be the main culprits behind the rising amount of AI-generated content on the internet.

According to a report from European law enforcement group Europol, experts estimate that as much as 90 percent of online content may be synthetically generated by 2026.

At this rate, you have every reason not to believe everything you see on the internet.

Even stock photo websites like Adobe Stock and Shutterstock, once reliable sources for authentic photographs, are no longer immune.

Adobe Stock results for AI-generated images
Image by Jim Clyde Monge

Ever since they started allowing AI-generated images on their platforms, finding real photos has become genuinely challenging.

In this article, I’ll give details on the following topics:

  1. Why is Google showing a bunch of AI-generated images in the search results?
  2. The impact on regular and professional users.
  3. Google’s solution to AI-generated images flooding the internet.
  4. Details on Google’s “About this image” and SynthID feature.
  5. How to report AI-generated images.
  6. Tips to get rid of AI-generated content on Google Image search.
  7. Final thoughts

Let’s get started.

Why Is This Happening?

The influx of AI-generated images in search results isn’t unexpected.

With the rise of powerful AI tools like Midjourney, OpenAI’s DALL-E 3, Flux, and Stable Diffusion, creating photorealistic images has become easier than ever.

These tools are accessible to the general public, allowing anyone with a computer and an internet connection to generate images.

Search engines like Google prioritize content based on relevance and popularity, not on whether an image is AI-generated or real.

As more AI-generated images are uploaded and shared across the web, they naturally climb the search rankings due to factors like keyword optimization and user engagement.

This creates a feedback loop where AI images become more prominent simply because they’re plentiful.

The Impact on Users

This influx of AI-generated images isn’t just a minor inconvenience; it’s affecting regular users and business owners who rely on authentic photos.

Artists seeking reference photos now have to sift through layers of AI content to find genuine images. This not only wastes time but also hampers the creative process.

I remember using Pinterest to look for reference images a few months ago; now all I see on the app is a flood of AI-generated content.

This also poses a risk of misinformation.

For instance, an AI-generated image of a rare animal or a historical event could be mistaken for the real thing, leading to confusion or the spread of false information.

Is There a Solution?

Right now, there’s no straightforward or effective way to get rid of the AI-generated images showing up in the search results.

Some users suggest just searching before 2021 to eliminate the AI slop.

Google Image results when searching before 2021 to eliminate the AI slop
Image by Jim Clyde Monge

There you go, problem fixed… but not really.

So, how exactly is this problem going to get solved? Turns out, Google has an answer, but it’s not perfect.

Google’s Plan to Label AI-Generated Images

Last month, Google announced plans to implement technology that will identify whether an image is a real photo or AI-generated.

The tool is also designed to detect if an image has been edited with software like Photoshop.

In the coming months, users can expect a new About this image feature that lets you know if an image was created or edited with AI tools.

Here’s an example of how you can access the About this image feature on Google Lens.

Image from Google

If an image contains C2PA metadata, people will be able to see if it was created or edited with AI tools.
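If you’re curious whether an image already carries this kind of provenance data, you can inspect its metadata yourself today. Below is a minimal sketch of my own (not part of Google’s tooling) that scans exiftool’s output for C2PA-related entries. It assumes exiftool is installed on your machine, and the exact tag names it reports vary by exiftool version, so the script just searches the dumped metadata for provenance-related strings.

```python
# Minimal sketch: check an image for C2PA/provenance metadata using exiftool.
# Assumes exiftool is installed and on PATH; C2PA tag names vary by exiftool
# version, so we simply scan the dumped metadata for provenance-related strings.
import json
import subprocess
import sys

def find_provenance_hints(image_path: str) -> list[str]:
    """Return metadata keys/values that look related to C2PA or JUMBF."""
    # -json dumps all readable metadata as a JSON array with one object per file
    raw = subprocess.run(
        ["exiftool", "-json", image_path],
        capture_output=True, text=True, check=True,
    ).stdout
    metadata = json.loads(raw)[0]

    hints = []
    for key, value in metadata.items():
        blob = f"{key}={value}".lower()
        if any(marker in blob for marker in ("c2pa", "jumbf", "contentauth")):
            hints.append(f"{key}: {value}")
    return hints

if __name__ == "__main__":
    hits = find_provenance_hints(sys.argv[1])
    print("\n".join(hits) if hits else "No obvious C2PA/provenance metadata found.")
```

Most images you download from the web today will return nothing, which is exactly the adoption problem discussed later in this article.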

How will it work?

Google will utilize an AI watermarking standard from the Coalition for Content Provenance and Authenticity (C2PA), one of the largest groups aiming to address AI-generated imagery.

“It is more important than ever to have a transparent approach to digital content that empowers people to make decisions,” said Andrew Jenks, C2PA Chair.

Other tech giants like Amazon, Microsoft, Adobe, Intel, and OpenAI have backed C2PA authentication, but adoption has been slow.

Google will be the first to implement it directly into its search engine.

Laurie Richardson, Vice President of Trust and Safety at Google, explains:

“For example, if the data shows an image was taken by a specific camera model, the trust list helps validate that this piece of information is accurate.”

Sure, labeling images as AI-generated is a step forward, but it isn’t clear whether there will be a way to remove them from the search results.

An intuitive toggle to hide AI-generated content would be helpful, but such a feature hasn’t been announced yet.

Google’s “About this image” Feature

The “About this image” feature aims to be more than just an AI image detector.

Here’s what it offers:

  • How Other Sites Use and Describe the Image: You can see how an image is used across the web and what other sources, like news outlets and fact-checking sites, say about it.
  • An Image’s Metadata: Metadata provides details about the image, such as the creator, creation date, and any modifications.
  • Digital Watermark Detection: The feature can identify if the image was generated using AI tools like Google’s own SynthID, which embeds an invisible watermark within the image pixels.

“About this image” is currently available in 40 languages globally and is accessible on select Android devices, including the latest Samsung and Pixel phones, foldables, tablets, and through Google Lens on both Android and iOS.

Google also plans to include C2PA information on YouTube videos captured by a camera, which could help verify the authenticity of video content as well.

How to access the “About this image” feature?

To access the “About this image” feature, users can click the three dots above an image in the search results and select “About this image.”

Image by Jim Clyde Monge

It’s also available through Google Lens and Android’s “Circle to Search” feature.

If you want to learn more about accessing the “About this image” feature, check out this blog post.

New ways to access About this image: Get context on images you see online using Circle to Search and Google Lens (blog.google)

The Challenges with Labeling AI-generated Images

Although Google’s plan to use C2PA’s authentication standard sounds promising, it is not a foolproof solution.

Here are some of the challenges:

  • Limited support for C2PA standards: Not all cameras and image editing software support C2PA’s open technical standard. While Adobe’s Photoshop and Lightroom can add C2PA data, other popular tools like Affinity Photo and GIMP don’t. Major camera manufacturers like Nikon and Canon haven’t adopted the standard yet, and most smartphone cameras lack this feature.
  • Metadata removal: Many users intentionally remove metadata from their images for privacy reasons. If metadata is missing, does that make the image less trustworthy in Google’s eyes? (A quick demonstration of how easily metadata is lost follows below.)

There’s a lot of uncertainty, and we can expect this solution to be imperfect at launch. It may take several iterations and widespread adoption of C2PA standards for the system to be truly effective.
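The metadata problem is easy to demonstrate: re-encoding an image with most libraries silently drops whatever EXIF/XMP (and provenance) data it carried. Here’s a minimal sketch, assuming Pillow is installed and using placeholder file names:

```python
# Minimal sketch: re-saving an image with Pillow and comparing metadata size.
# This illustrates how easily provenance data disappears; Pillow does not copy
# EXIF/XMP into the new file unless you explicitly pass it along.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Re-encode src into dst; EXIF/XMP is not carried over by default."""
    with Image.open(src) as img:
        original_exif = img.info.get("exif", b"")
        print(f"Original embedded EXIF: {len(original_exif)} bytes")
        # Copy only the pixel data into a fresh image and save it
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

    with Image.open(dst) as reopened:
        print(f"EXIF after re-save: {len(reopened.info.get('exif', b''))} bytes")

# Placeholder paths for illustration only
strip_metadata("photo_with_metadata.jpg", "photo_stripped.jpg")
```

Any screenshot, messaging app, or social network that re-compresses uploads can have the same effect, which is why metadata-based provenance alone can’t carry the whole burden.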

AI Watermarking Through SynthID

Google’s SynthID is an interesting development in this space. Developed by Google DeepMind, SynthID embeds a digital watermark directly into the pixels of an image, making it invisible to the human eye but detectable by algorithms.

Image from Google

SynthID watermarks and identifies AI-generated content by embedding digital watermarks directly into AI-generated images, audio, text or video.

It can also scan a single image, or the individual frames of a video to detect digital watermarking. Users can identify if an image, or part of an image, was generated by Google’s AI tools through the About this image feature in Search or Chrome.

Image from Google

In the example above, SynthID was able to determine that part of the image was generated with Google’s AI tools.
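Google hasn’t published SynthID’s actual algorithm, so the following is only a toy illustration of the general idea behind a pixel-level watermark: invisible to the eye, but recoverable by software. The sketch hides a short bit pattern in the least significant bits of a NumPy image array; SynthID itself is far more sophisticated and robust than this.

```python
# Toy illustration of an invisible pixel-level watermark (NOT SynthID's method):
# hide a short bit pattern in the least significant bit of each pixel value.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # arbitrary 8-bit tag

def embed(image: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the LSBs of the first len(WATERMARK) pixels."""
    flat = image.astype(np.uint8).flatten()
    flat[: len(WATERMARK)] = (flat[: len(WATERMARK)] & 0xFE) | WATERMARK
    return flat.reshape(image.shape)

def detect(image: np.ndarray) -> bool:
    """Check whether the expected watermark bits are present."""
    flat = image.astype(np.uint8).flatten()
    return bool(np.array_equal(flat[: len(WATERMARK)] & 1, WATERMARK))

# Demo on a random grayscale "image"
original = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(original)
print("Watermark detected in marked image:", detect(marked))
print("Max pixel change after embedding:",
      int(np.abs(marked.astype(int) - original.astype(int)).max()))
```

Embedding changes each affected pixel by at most 1, which no human would notice, yet the detector recovers the tag. A real watermark like SynthID also has to survive cropping, compression, and color edits, which this toy version would not.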

However, this method also has limitations:

  • Compatibility: SynthID works primarily with images generated by Google’s own AI models. It doesn’t account for images produced by other AI generators unless they adopt the same watermarking technology.
  • Removal and Tampering: Savvy users might find ways to remove or alter the watermark, rendering the detection ineffective.

Reporting AI-generated Images

One way to improve the algorithms is by reporting images you believe are AI-generated.

While this may seem like a drop in the ocean considering the millions of AI-generated images produced daily, collective action can make a difference over time.

Here’s what you can do: if you think an image in the search results is AI-generated, click on the three dots and flag it by selecting “Send feedback.” Let Google know that the image is AI-generated.

Image by Jim Clyde Monge

It’s a small step, but it’s something we can do as users to address the situation.

A Few Tips to Avoid the AI Image Flood

If you’re finding your search results cluttered with AI-generated images, here are some strategies you can employ.

You may find some of them silly, but at least give them a try:

  • Use Specific Keywords: Adding terms like “photograph,” “real,” or “genuine” to your search queries can help filter out AI-generated content.
  • Exclude AI Terms: Use negative keywords to exclude AI content. For example, search for “baby peacock -ai -artificial -generated -midjourney -stable diffusion” to filter out images associated with these terms. (A small sketch for building such a query programmatically follows after this list.)
Google image search result for “baby peacock -ai -artificial -generated -midjourney -stable diffusion”
Image by Jim Clyde Monge
  • Advanced Search Settings: Utilize Google’s advanced search options to filter results by date, focusing on periods before the widespread use of AI image generators. 2021 was the start of the AI image generator boom.
  • Alternative Search Engines: Consider using search engines like DuckDuckGo or image databases that prioritize authentic content.
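If you don’t want to type the exclusion operators by hand every time, you can build the query (and an optional date cutoff) programmatically. This is only a minimal sketch; note that tbm=isch and tbs=cdr are long-standing but undocumented Google Search URL parameters, so treat them as an assumption that may change or stop working.

```python
# Minimal sketch: build a Google Images URL that excludes common AI-related terms
# and optionally restricts results to dates before the AI image boom.
# tbm/tbs are undocumented Google Search URL parameters and may change.
from typing import Optional
from urllib.parse import urlencode

AI_EXCLUSIONS = ["ai", "artificial", "generated", "midjourney", "stable diffusion"]

def google_images_url(query: str, before: Optional[str] = None) -> str:
    # Quote multi-word terms so the exclusion applies to the whole phrase
    exclusions = " ".join(
        f'-"{term}"' if " " in term else f"-{term}" for term in AI_EXCLUSIONS
    )
    params = {"q": f"{query} {exclusions}", "tbm": "isch"}  # tbm=isch -> image search
    if before:  # expected format: "M/D/YYYY"
        params["tbs"] = f"cdr:1,cd_max:{before}"  # custom date range, upper bound only
    return "https://www.google.com/search?" + urlencode(params)

print(google_images_url("baby peacock", before="1/1/2021"))
```

Paste the printed URL into your browser to get the filtered image results without retyping the operators each time.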

Final Thoughts

As someone who keeps a pulse on AI image generator tools, I understand that creating an effective AI-generated image detection system is extremely hard to do.

In the last few months alone, models like Flux 1.1 Pro, Midjourney v6.5, and Imagen 3 have made huge improvements in generating hyper-realistic photos. The pace at which these models improve adds to the difficulty: it’s becoming increasingly hard to distinguish real from AI-generated images unless you pixel peep.

Seeing Google Image results flooded with AI-generated photos is a disheartening consequence of making these powerful AI image generator tools widely accessible. The speed of innovation in image diffusion models is impressively fast, but the technology to control its negative impact on the internet is surprisingly slow.

Perhaps we’re being nudged back to the days when we needed to photograph our own reference images. It’s less convenient, but it ensures authenticity—a quality that’s becoming more and more scarce today.
