War images and video footage created by AI erode trust in genuine news, media analysts warn

Updated 03 April 2026

WASHINGTON: A surge in the number of Middle East war images and video footage created or manipulated by artificial intelligence is eroding trust in genuine news, media analysts have warned.

Recent examples include video appearing to show the Burj Khalifa in Dubai collapsing in a cloud of dust. By the time a crowd-sourced verification system had debunked the footage, it had more than 12 million views.

In a reverse case, amid online speculation that Benjamin Netanyahu had been killed or injured in an Iranian strike, Israel published three videos of the prime minister, including one in a coffee shop.

Social media was inundated with claims that the footage was fake because Netanyahu appeared to have six fingers, a common sign of an AI-generated image. “Last time I checked, humans usually don’t have six fingers ... AI does,” said one post on X that attracted nearly five million views. “Is Netanyahu no more?”

In fact the footage was genuine, and the “extra finger” was a trick of the light.

“The rise of AI deepfakes and the dismissal of real footage are two sides of the same coin,” said Sofia Rubinson of the misinformation watchdog NewsGuard. “When everything could be fake, it becomes easy to believe that anything is.”

Tech platforms are saturated with what has been called “AI slop.” Trust is eroded as hyper-realistic AI fabrications drown out authentic images.

X accounts posting AI content about the war have amassed more than a billion views, the Institute for Strategic Dialogue in London said.

“We believe tech platforms are not doing enough to help users identify whether content is AI-generated or authentic,” said Meta’s Oversight Board, which reviews content moderation on Facebook. “Fake content can be harmful by inciting more violence and fueling further conflict.”