90 percent of online content may be synthetically generated by 2026, according to experts.
Today, I saw a Reddit post that showed a screenshot of a Google Image result for “baby peacock.”
The post was captioned “The human internet is dying…” because almost all the images displayed are AI-generated.
Intrigued and a bit alarmed, I decided to verify it myself. I quickly Googled the term, and there it was—a flood of AI-generated images dominating the search results.
In another Reddit thread, a user expressed frustration about the difficulty of finding genuine reference images for their artwork because the search results are inundated with AI-generated content.
“It also takes so much longer to find anything good or usable trying to get through all the AI images, and I feel like all the nice art and photos I used to find are drowned out by ugly distorted AI crap that btw for some reason has all the same style making it even more boring.”
This isn’t just a quirky internet anomaly; it’s a growing concern that raises important questions: How did we get here, and what can we do to mitigate this problem?
This wasn’t the case just a few months ago.
AI image and video generators appear to be the main culprit behind the rising flood of AI-generated content on the internet.
According to a report from European law enforcement group Europol, experts estimate that as much as 90 percent of online content may be synthetically generated by 2026.
At this rate, you have every reason not to believe everything you see on the internet.
Even stock photo websites like Adobe Stock and Shutterstock, once reliable sources for authentic photographs, are no longer immune.
Ever since they began accepting AI-generated images on their platforms, finding real photos has become genuinely difficult.
In this article, I’ll cover how we got here, what Google is doing about it, and what you can do in the meantime.
Let’s get started.
The influx of AI-generated images in search results isn’t unexpected.
With the rise of powerful AI tools like Midjourney, OpenAI’s DALL·E 3, Flux, and Stable Diffusion, creating photorealistic images has become easier than ever.
These tools are accessible to the general public, allowing anyone with a computer and an internet connection to generate images.
Search engines like Google prioritize content based on relevance and popularity, not on whether an image is AI-generated or real.
As more AI-generated images are uploaded and shared across the web, they naturally climb the search rankings due to factors like keyword optimization and user engagement.
This creates a feedback loop where AI images become more prominent simply because they’re plentiful.
This influx of AI-generated images isn’t just a minor inconvenience; it’s affecting regular users and business owners who rely on authentic photos.
Artists seeking reference photos now have to sift through layers of AI content to find genuine images, which not only wastes time but also hampers the creative process.
I remember using Pinterest to look for reference images just a few months ago; now all I see on the app is AI-generated content.
This also poses a risk of misinformation.
For instance, an AI-generated image of a rare animal or a historical event could be mistaken for the real thing, leading to confusion or the spread of false information.
Right now, there’s no straightforward or effective way to filter the AI-generated images out of search results.
Some users suggest restricting searches to results from before 2021 (for example, with Google’s before:2021 search operator) to eliminate the AI slop.
There you go, problem fixed… but not really.
So, how exactly is this problem going to get solved? Turns out, Google has an answer, but it’s not perfect.
Last month, Google announced plans to implement technology that will identify whether an image is a real photo or AI-generated.
The tool is also designed to detect if an image has been edited with software like Photoshop.
In the coming months, users can expect a new “About this image” feature that lets you know if an image was created or edited with AI tools.
Here’s an example of how you can access the About this image feature on Google Lens.
If an image contains C2PA metadata, people will be able to see if it was created or edited with AI tools.
How will it work?
Google will use a content provenance standard from the Coalition for Content Provenance and Authenticity (C2PA), one of the largest groups working to address AI-generated imagery.
“It is more important than ever to have a transparent approach to digital content that empowers people to make decisions,” said Andrew Jenks, C2PA Chair.
Other tech giants like Amazon, Microsoft, Adobe, Intel, and OpenAI have backed C2PA authentication, but adoption has been slow.
Google will be the first to implement it directly into its search engine.
Laurie Richardson, Vice President of Trust and Safety at Google, explains:
“For example, if the data shows an image was taken by a specific camera model, the trust list helps validate that this piece of information is accurate.”
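Because C2PA manifests travel inside the image file itself, you don’t have to wait for Google to start checking: the open-source c2patool CLI from the C2PA community can already print a file’s manifest. Short of full C2PA verification, a much cruder heuristic is to check whether an image carries ordinary camera EXIF metadata at all, since many AI generators emit files with none. Here’s a minimal sketch of that heuristic using Pillow (the filename is hypothetical, and keep in mind EXIF is trivially editable, so this proves nothing on its own):

```python
# Rough heuristic only: looks for camera EXIF metadata, which many
# AI-generated files lack. This is NOT C2PA verification; real
# provenance checks need C2PA tooling such as c2patool.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Map EXIF tag names to values for whatever tags the file carries."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = summarize_exif("baby_peacock.jpg")  # hypothetical local file
if not tags:
    print("No EXIF metadata at all; treat the image's origin as unknown.")
else:
    # A camera make/model is weak evidence of a real photo; absence of
    # any EXIF is often more telling than its presence.
    print(tags.get("Make"), tags.get("Model"))
```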
Sure, labeling images as AI-generated is a step forward, but it isn’t clear whether there will be any way to remove them from search results.
An intuitive toggle to hide AI-generated content would be helpful, but such a feature hasn’t been announced yet.
The “About this image” feature aims to be more than just an AI image detector.
Beyond provenance labels, it surfaces an image’s history (such as when Google first indexed it), how other sites use and describe it, and any available metadata.
“About this image” is currently available in 40 languages globally and is accessible on select Android devices, including the latest Samsung and Pixel phones, foldables, tablets, and through Google Lens on both Android and iOS.
Google also plans to include C2PA information on YouTube videos captured by a camera, which could help verify the authenticity of video content as well.
To access the “About this image” feature, users can click the three dots above an image in the search results and select “About this image.”
It’s also available through Google Lens and Android’s “Circle to Search” feature.
If you want to learn more about accessing the “About this image” feature, check out this blog post.
Although Google’s plan to use C2PA’s authentication standard sounds good, it’s not a foolproof solution.
Here are some of the challenges: the provenance metadata is voluntary and can be stripped from a file (a simple screenshot discards it), only a handful of cameras and editing tools currently embed it, and the billions of images created before the standard existed carry no provenance data at all.
There’s a lot of uncertainty, and we can expect this solution to be imperfect at launch. It may take several iterations and widespread adoption of C2PA standards for the system to be truly effective.
Google’s SynthID is an interesting development in this space. Developed by Google DeepMind, SynthID embeds a digital watermark directly into AI-generated images, audio, text, or video, invisible (or inaudible) to humans but detectable by algorithms.
It can scan a single image, or the individual frames of a video, for that watermark. Users can see whether an image, or part of an image, was generated by Google’s AI tools through the About this image feature in Search or Chrome.
In the example above, SynthID was able to determine that part of the image was generated with Google’s AI tools.
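SynthID’s actual algorithm is unpublished, but the core idea of a watermark living in the pixels themselves can be illustrated with a toy least-significant-bit scheme. To be clear, this is not how SynthID works: unlike the sketch below, SynthID is designed to survive edits like compression and color changes, while LSB marks are destroyed by any re-encoding.

```python
# Toy pixel-level watermark, NOT SynthID's (unpublished) algorithm.
# Hides a pseudorandom bit pattern in the red channel's least
# significant bits: invisible to the eye, trivial for code to find.
import numpy as np

RNG_SEED = 42  # arbitrary; embedder and detector must share it

def _pattern(shape):
    """Pseudorandom 0/1 pattern both sides can regenerate from the seed."""
    return np.random.default_rng(RNG_SEED).integers(0, 2, size=shape, dtype=np.uint8)

def embed(pixels: np.ndarray) -> np.ndarray:
    """Overwrite the red channel's LSBs with the shared bit pattern."""
    marked = pixels.copy()
    bits = _pattern(pixels.shape[:2])
    marked[..., 0] = (marked[..., 0] & 0xFE) | bits  # clear LSB, set our bit
    return marked

def detect(pixels: np.ndarray) -> float:
    """Fraction of LSBs matching the pattern: ~0.5 unmarked, 1.0 marked."""
    return float(np.mean((pixels[..., 0] & 1) == _pattern(pixels.shape[:2])))

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in image
print(detect(image))         # ~0.5: no watermark
print(detect(embed(image)))  # 1.0: watermark present
```

A single JPEG save would scramble those low bits; building a mark that survives such edits is exactly the hard part SynthID tackles.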
However, this method also has limitations: it only detects watermarks applied by Google’s own models, and while the watermark is designed to survive common edits like cropping and compression, extreme manipulation can still defeat it.
One way to improve the algorithms is by reporting images you believe are AI-generated.
While this may seem like a drop in the ocean considering the millions of AI-generated images produced daily, collective action can make a difference over time.
Here’s what you can do: if you think an image in the search results is AI-generated, click the three dots above it and flag it by selecting “Send feedback.” Let Google know the image is AI-generated.
It’s a small step, but it’s something we can do as users to address the situation.
If you’re finding your search results cluttered with AI-generated images, here are some strategies you can employ. You may find some of them silly, but at least give them a try. One is to add negative keywords to your query, such as:
“baby peacock -ai -artificial -generated -midjourney -"stable diffusion"”
to filter out images associated with those terms (note the quotes around "stable diffusion", which exclude the two words as a phrase). Another is the before:2021 trick mentioned earlier; the sketch after this paragraph combines both into a ready-made search URL.
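If you’d rather not retype those operators every time, here’s a small Python sketch that builds the image-search URL for you. The before: operator and the tbm=isch image-search parameter are standard Google syntax, but Google doesn’t document how strictly image search honors them, so treat this as best-effort filtering rather than a guarantee:

```python
# Best-effort sketch: builds a Google Images URL that combines negative
# keywords with the before: date operator. How strictly image search
# honors these operators is not guaranteed.
from urllib.parse import urlencode

# Terms commonly attached to AI-generated uploads; extend as needed.
EXCLUDE = ["ai", "artificial", "generated", "midjourney", '"stable diffusion"']

def image_search_url(query: str, before_year: int = 2021) -> str:
    """Return a Google Images URL excluding AI-related terms and newer results."""
    q = f"{query} before:{before_year} " + " ".join(f"-{t}" for t in EXCLUDE)
    return "https://www.google.com/search?" + urlencode({"q": q, "tbm": "isch"})

print(image_search_url("baby peacock"))
```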
As someone who keeps a pulse on AI image generator tools, I understand that creating an effective AI-generated image detection system is extremely hard to do.
In the last few months alone, models like Flux 1.1 Pro, Midjourney v6.5, and Imagen 3 have made huge improvements in generating hyper-realistic photos. It’s becoming increasingly difficult to distinguish between real and AI-generated images unless you pixel peep.
Seeing Google Image results flooded with AI-generated photos is a disheartening consequence of making these powerful AI image generator tools widely accessible. The speed of innovation in image diffusion models is impressively fast, but the technology to control its negative impact on the internet is surprisingly slow.
Perhaps we’re being nudged back to the days when we needed to photograph our own reference images. It’s less convenient, but it ensures authenticity—a quality that’s becoming more and more scarce today.