Grok, is that Gaza? AI image checks mislocate news photographs

Asking a chatbot to pinpoint a photo’s origin takes it out of its proper role, said AI expert Louis de Diesbach.

  • Furor arose after Grok wrongly identified a recent image of an underfed girl in Gaza as one taken in Yemen years earlier
  • Internet users are increasingly turning to AI to verify images, but recent mistakes highlight the risks of blindly trusting the technology

PARIS: This image by AFP photojournalist Omar Al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel’s blockade has fueled fears of mass famine in the Palestinian territory.
But when social media users asked Grok where it came from, X boss Elon Musk’s artificial intelligence chatbot was certain that the photograph was taken in Yemen nearly seven years ago.
The AI bot’s false response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo.
At a time when Internet users are increasingly turning to AI to verify images, the furor shows the risks of trusting tools like Grok when the technology remains far from error-free.
Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018.
In fact the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025.
Before the war, sparked by Hamas’s October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.
Today she weighs only nine kilograms. The only nutrition she gets to help her condition is milk, Modallala told AFP, and even that is “not always available.”
Challenged on its incorrect response, Grok said: “I do not spread fake news; I base my answers on verified sources.”
The chatbot eventually issued a response that recognized the error — but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.
The chatbot has previously issued content that praised Nazi leader Adolf Hitler and that suggested people with Jewish surnames were more likely to spread online hate.

Grok’s mistakes illustrate the limits of AI tools, whose inner workings are as impenetrable as “black boxes,” said Louis de Diesbach, a researcher in technological ethics.
“We don’t know exactly why they give this or that reply, nor how they prioritize their sources,” said de Diesbach, author of “Hello ChatGPT,” a book on AI tools.
Each AI has biases linked to the information it was trained on and the instructions of its creators, he said.
In the researcher’s view, Grok, made by Musk’s xAI start-up, shows “highly pronounced biases which are highly aligned with the ideology” of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.
Asking a chatbot to pinpoint a photo’s origin takes it out of its proper role, de Diesbach said.
“Typically, when you look for the origin of an image, it might say: ‘This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine’.”
AI does not necessarily seek accuracy — “that’s not the goal,” the expert said.
Another AFP photograph of a starving Gazan child by Al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016.
That error led to Internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.

An AI’s biases are linked to the data it is fed and to its fine-tuning, the so-called alignment phase, which determines what the model rates as a good or bad answer.
“Just because you explain to it that the answer’s wrong doesn’t mean it will then give a different one,” de Diesbach said.
“Its training data has not changed and neither has its alignment.”
Grok is not alone in wrongly identifying images.
When AFP asked Mistral AI’s Le Chat, which is partly trained on AFP articles under an agreement between the French start-up and the news agency, the bot also misidentified the photo of Mariam Dawwas as being from Yemen.
For de Diesbach, chatbots must never be used as tools to verify facts.
“They are not made to tell the truth,” but to “generate content, whether true or false,” he said.
“You have to look at it like a friendly pathological liar — it may not always lie, but it always could.”


Apple, Google offer app store changes under new UK rules

LONDON: Apple and Google have pledged changes to ensure fairness in their app stores, the UK competition watchdog said Tuesday, describing them as “first steps” under its tougher regulation of technology giants.
The Competition and Markets Authority placed the two companies under “strategic market status” last year, giving it powers to impose stricter rules on their mobile platforms.
Apple and Google have submitted packages of commitments to improve fairness and transparency in their app stores, on which the CMA is now consulting market participants.
The proposals cover data collection, how apps are reviewed and ranked, and improved access to the companies’ mobile operating systems.
They aim to prevent Apple and Google from giving priority to their own apps and to ensure businesses receive fairer terms for delivering apps to customers, including better access to tools to compete with services like the Apple digital wallet.
“These are important first steps while we continue to work on a broad range of additional measures to improve Apple and Google’s app store services in the UK,” said CMA chief executive Sarah Cardell.
The commitments mark the first changes proposed by US tech giants in response to the UK’s digital markets regulation, which came into force last year.
The UK framework is similar to a tech competition law from the European Union, the Digital Markets Act, which carries the potential for hefty financial penalties.
“The commitments announced today allow Apple to continue advancing important privacy and security innovations for users and great opportunities for developers,” an Apple spokesperson said.
The CMA in October found that Apple and Google held an “effective duopoly,” with around 90 to 100 percent of UK mobile services running on their platforms.
A Google spokesperson said existing practices in its Play online store are “fair, objective and transparent.”
“We welcome the opportunity to resolve the CMA’s concerns collaboratively,” they added.
The changes are set to take effect in April, subject to the outcome of a market consultation.