AI Image Verification Failures: When Chatbots Can't Recognize Their Own Creations

AI chatbots are failing to identify artificially generated images, even ones they created themselves, exposing a critical gap in visual verification as users increasingly turn to these tools for fact-checking. Recent cases from the Philippines and Pakistan-administered Kashmir show AI systems incorrectly authenticating fabricated images, deepening concerns about digital misinformation as major platforms reduce human fact-checking resources.

When AI Tries To Identify AI: Chatbots Fail To Detect Images They Created

AI systems are proving unable to consistently identify their own generated images, raising serious verification concerns.

Philippines: In a revealing incident, Filipino citizens seeking to verify a viral image of a politician embroiled in a corruption scandal consulted an AI chatbot, which failed to recognize that the image was artificially generated, even though the image had been produced by the very same AI system.

Users are increasingly turning to AI chatbots for real-time image verification, but these tools frequently deliver inaccurate results, undermining their reliability for debunking visual content just as major technology companies scale back human fact-checking resources.

Numerous instances show these tools incorrectly validating fabricated images, even when those images originate from the same AI models, further complicating an online environment already saturated with AI-generated misinformation.

One prominent example involves a fabricated image circulating on social platforms showing Elizaldy Co, a former Philippine lawmaker facing charges in a massive flood-control corruption scandal that triggered widespread protests across the disaster-vulnerable nation.

The fabricated image purported to show Co, whose whereabouts have been unknown since the investigation began, in Portugal.

When internet users investigating his whereabouts asked Google's AI whether the image was authentic, the chatbot erroneously confirmed it as genuine.

Fact-checkers from AFP later traced the image's creator and determined it was generated using Google's own AI technology.

"These models are primarily trained on language patterns and lack the specialized visual analysis capabilities required to accurately identify AI-generated or manipulated imagery," explained Alon Yamin, chief executive of AI content detection platform Copyleaks, in a statement to AFP.

"AI chatbots frequently provide inconsistent or overgeneralized assessments of images, even when they originate from similar generative models, making them unreliable tools for fact-checking or authenticity verification."

Google did not respond to AFP's request for comment.

'Distinguishable From Reality'

AFP discovered additional examples of AI tools failing to verify their own creations.

During recent deadly demonstrations in Pakistan-administered Kashmir against lucrative perks for senior officials, social media users shared a fabricated image purportedly showing protesters marching with flags and torches.

AFP's analysis revealed the image was created using Google's Gemini AI system.

Yet both Gemini and Microsoft's Copilot incorrectly identified it as an authentic protest photograph.

"This inability to correctly identify AI images stems from the fact that AI models are programmed only to mimic well," said Rossine Fallorina from the nonprofit Sigla Research Centre in comments to AFP.

"Essentially, they can only generate resemblances. They cannot determine whether those resemblances are actually distinguishable from reality."

Earlier this year, Columbia University's Tow Center for Digital Journalism evaluated seven AI chatbots, including ChatGPT, Perplexity, Grok, and Gemini, on their ability to verify ten photojournalistic news images.

The study reported that all seven models failed to correctly identify the origins of the photographs.

'Shocked'

AFP tracked down the creator of the viral Co image, which drew more than a million views across social platforms: a middle-aged web developer in the Philippines who made it "for fun" using Nano Banana, Gemini's AI image generator.

"Sadly, a lot of people believed it," the creator told AFP, requesting anonymity to avoid backlash.

"I edited my post—and added 'AI generated' to stop the spread—because I was shocked at how many shares it got."

Such cases demonstrate how the AI-generated images flooding social platforms can be virtually indistinguishable from authentic photographs.

This trend raises concerns as surveys indicate online users are increasingly abandoning traditional search engines in favor of AI tools for information gathering and verification.

That shift coincides with Meta's announcement earlier this year that it was ending its third-party fact-checking program in the United States, handing responsibility for debunking falsehoods to ordinary users under a model called "Community Notes."

Human fact-checking has long been contentious in polarized societies, with conservative advocates accusing professional fact-checkers of liberal bias—allegations these professionals reject.

AFP currently collaborates with Meta's fact-checking initiative in 26 languages across Asia, Latin America, and the European Union.

Researchers acknowledge AI models can serve as useful assistants to professional fact-checkers, helping quickly geolocate images and identify visual clues to establish authenticity. However, they emphasize these tools cannot substitute for trained human fact-checkers.

"We can't rely on AI tools to combat AI in the long run," Fallorina concluded.

Source: https://www.ndtv.com/world-news/when-ai-tries-to-identify-ai-chatbots-fail-to-detect-images-they-created-9673021