WhatsApp says Russia ‘attempted to fully block’ app

Moscow has threatened a host of Internet platforms with forced slowdowns or outright bans if they do not comply with Russian laws. (Reuters)
Updated 12 February 2026

  • Moscow has been trying to nudge Russians to use a more tightly controlled domestic online service

SAN FRANCISCO, United States: WhatsApp said Wednesday that Russia “attempted to fully block” the messaging app in the country to push users to a competing state-controlled service, potentially affecting 100 million people.
Moscow has been trying to nudge Russians to use a more tightly controlled domestic online service.
It has threatened a host of Internet platforms with forced slowdowns or outright bans if they do not comply with Russian laws, including those requiring data on Russian users to be stored inside the country.
“Today the Russian government attempted to fully block WhatsApp in an effort to drive people to a state-owned surveillance app,” WhatsApp posted on X.
“Trying to isolate over 100 million users from private and secure communication is a backwards step and can only lead to less safety for people in Russia,” WhatsApp added.
“We continue to do everything we can to keep users connected.”
Critics and rights campaigners say the Russian restrictions are a transparent attempt by the Kremlin to ramp up control and surveillance over Internet use in Russia, amid a sweeping crackdown on dissent during the Ukraine offensive.
The latest development came after Russia’s Internet watchdog said Tuesday it would slap “phased restrictions” on the Telegram messaging platform, which it said had not complied with the laws.


Fake AI satellite imagery spurs US-Iran war disinformation


  • Rise of generative AI has turbocharged the ability of state actors and propagandists to fabricate convincing satellite imagery during conflict
  • Forged satellite imagery can have effects that range from influencing public opinion on a major issue to impacting financial markets
WASHINGTON: The satellite image posted by an Iranian news outlet looked real: a devastated US base in Qatar. But it was an AI-generated fake, underscoring the accelerating threat of tech-enabled disinformation during wartime.
The rise of generative AI has turbocharged the ability of state actors and propagandists to fabricate convincing satellite imagery during major conflicts, a trend that researchers warn carries real-world security implications.
As the US-Israeli war against Iran rages, Tehran Times, a state-aligned English-language daily, posted on X a “before vs. after” image it claimed showed “completely destroyed” US radar equipment at a base in Qatar.
In fact it was an AI-manipulated version of a Google Earth image from last year of a US base in Bahrain, researchers said.
The subtle visual giveaways included a row of cars parked in identical positions in both the authentic satellite photo and the manipulated image.
Yet the manipulated photo garnered millions of views as it spread across social media in multiple languages, illustrating how users are increasingly failing to distinguish reality from fiction on platforms saturated with AI-generated visuals.
Brady Africk, an open-source intelligence researcher, noted an “increase in manipulated satellite imagery” appearing on social media in the wake of major events including the Middle East war.
“Many of these manipulated images have the hallmarks of imperfect AI-generation: odd angles, blurred details, and hallucinated features that don’t align with reality,” Africk told AFP.
“Others appear to be an image manipulated manually, often by superimposing indicators of damage or another change on a satellite image that had no such details to begin with,” he said.

- ‘Fog of war’ -

Information warfare analyst Tal Hagin flagged another AI-generated satellite image purporting to show that Israeli-US jets had targeted the painted silhouette of an aircraft on the ground in Iran, while Tehran seemingly moved real planes elsewhere.
The telltale clues included gibberish coordinates embedded in the fake image, which spread across sites including Instagram, Threads and X.
AFP detected a SynthID watermark in the image, an invisible marker designed to identify content created using Google AI.
The fabricated satellite images follow the emergence of imposter OSINT — or open-source intelligence — accounts on social media that seek to undermine the work of credible digital investigators.
“Due to the fog of war, it can be very difficult to determine the success of an adversary’s strikes. OSINT came as a solution, using public satellite imagery to circumvent the censorship” inside countries like Iran, Hagin said.
“But it’s now being preyed upon by disinformation agents,” he added.
Reports of fake satellite imagery created or edited using AI also followed the Russia-Ukraine conflict and the four-day war between India and Pakistan last year.

- ‘Critical awareness’ -

“Manipulated satellite imagery, like other forms of misinformation, can have real-world impacts when people act on the information they come across without verifying its authenticity,” Africk said.
“This can have effects that range from influencing public opinion on a major issue, like whether or not a country should engage in conflict, to impacting financial markets.”
In the age of AI, authentic high-resolution satellite imagery collected in real time can give decision-makers vital clues to assess security threats and debunk falsehoods from unverified sources.
During a recent militant attack on Niamey airport in Niger, satellite intelligence company Vantor said it detected images circulating online purporting to show the main civilian terminal on fire.
The company’s own satellite imagery helped confirm that the photos were fake, almost certainly generated using AI, Vantor’s Tomi Maxted told AFP.
“When a satellite image is presented as visual evidence in the context of war, it can easily influence how people interpret events,” Bo Zhao, from the University of Washington, told AFP.
As AI-generated imagery grows increasingly convincing, it is “important for the public to approach such visual content with caution and critical awareness,” Zhao said.