MANAMA: As attacks spread after the bombing of Iran by US and Israeli forces, a video circulated widely showing crowds peering up at fire, smoke and debris pouring from the top of a high-rise building said to be in Bahrain.
Social media users claimed an Iranian attack had hit the skyscraper. But while buildings in Bahrain have been struck by Iranian missiles during the war, this video wasn’t real. It was made with artificial intelligence.
“The content that’s coming from state actors tends to be a little better targeted,” said Melanie Smith, senior director of policy and research on information operations at the Institute for Strategic Dialogue. “They have a very clear kind of narrative structure and the videos are just used to support some kind of statement they want to make about the conflict and about the kind of geopolitical situation writ large.”
AI has helped fuel misinformation in ways that weren’t possible in past conflicts, even those of just a few years ago. Coupled with state-linked disinformation and censorship, it creates an even wider vacuum in which the truth can get lost.
Nikita Bier, X’s head of product, wrote in a post that the platform will suspend users from its revenue-sharing program if they post AI-generated content from an armed conflict without a proper disclosure.