Study finds nearly half of UK news stories on Muslims show signs of bias

Updated 09 March 2026

  • Centre for Media Monitoring finds 20,000 out of 40,913 articles from 30 major news outlets contain bias and 70% link Muslims to negative behaviors or themes
  • Findings reveal ‘deeply concerning evidence of structural bias’ in portrayal of Muslims by UK press and point to ‘systemic problem’ within the media, says center’s director

LONDON: Nearly half of news articles published in the UK in 2025 that referenced Muslims or Islam contained some degree of bias, according to a report issued on Monday by the Centre for Media Monitoring. It also found that about 70 percent of stories linked Muslims to negative behaviors or themes.

The nonprofit organization, which tracks the ways in which Muslims and Islam are portrayed in the media, examined 40,913 articles from 30 major news outlets and found that about 20,000 showed some form of bias.

The study looked at “structural patterns” in coverage that “shape public narratives” about Muslims amid rising hostility toward the community.

“As the largest study of its kind ever conducted in the UK, this report presents deeply concerning evidence of structural bias in how Muslims are portrayed in the UK press,” said Rizwana Hamid, the director of the organization.

It found that 70 percent of the articles it reviewed highlighted negative aspects related to Muslims, though not all of the stories were biased in themselves. The wider patterns were also troubling: 44 percent of the coverage omitted key context, 17 percent relied on generalizations, and 13 percent included outright misrepresentation.

Taken together, the monitoring center said, the findings amounted to evidence of an “information integrity crisis” that distorts public understanding, and “a deeply concerning trend” in reporting on Muslims.

The research points to a “systemic problem within our media ecosystem,” Hamid said.

“When entire communities are repeatedly framed through lenses of suspicion or threat, it inevitably shapes public attitudes, political debate and the everyday lives of British Muslims,” she added.

News brands targeting right-wing audiences were more likely to produce biased coverage, the report found.

The Spectator magazine and GB News were identified as having the highest proportion of “very biased” articles, and as the “worst across all five bias categories”: negative framing, generalizations, misrepresentation, lack of context, and problematic headlines.

Other outlets highlighted for displaying high levels of biased content about Muslims included The Telegraph, The Jewish Chronicle, Daily Express, The Sun, Daily Mail and The Times.

In contrast, the BBC, other broadcasters and left-leaning outlets recorded the lowest rates of bias in the study.

The research comes as British Muslims report rising levels of discrimination. Official figures published in October revealed that religious hate crimes against Muslims rose by 19 percent in the year to March 2025 compared with the previous 12 months.


Fake AI satellite imagery spurs US-Iran war disinformation

Updated 09 March 2026

  • Rise of generative AI has turbocharged the ability of state actors and propagandists to fabricate convincing satellite imagery during conflict
  • Forged satellite imagery can have effects that range from influencing public opinion on a major issue to impacting financial markets

WASHINGTON: The satellite image posted by an Iranian news outlet looked real: a devastated US base in Qatar. But it was an AI-generated fake, underscoring the accelerating threat of tech-enabled disinformation during wartime.

The rise of generative AI has turbocharged the ability of state actors and propagandists to fabricate convincing satellite imagery during major conflicts, a trend that researchers warn carries real-world security implications.

As the US-Israeli war against Iran rages, Tehran Times, a state-aligned English daily, posted on X a “before vs. after” image it claimed showed “completely destroyed” US radar equipment at a base in Qatar.

In fact, it was an AI-manipulated version of a Google Earth image from last year of a US base in Bahrain, researchers said.

The subtle visual giveaways included a row of cars parked in identical positions in both the authentic satellite photo and the manipulated image.

Yet the manipulated photo garnered millions of views as it spread across social media in multiple languages, illustrating how users increasingly fail to distinguish reality from fiction on platforms saturated with AI-generated visuals.

Brady Africk, an open-source intelligence researcher, noted an “increase in manipulated satellite imagery” appearing on social media in the wake of major events, including the Middle East war.

“Many of these manipulated images have the hallmarks of imperfect AI generation: odd angles, blurred details, and hallucinated features that don’t align with reality,” Africk told AFP.

“Others appear to be images manipulated manually, often by superimposing indicators of damage or another change on a satellite image that had no such details to begin with,” he said.

- ‘Fog of war’ -

Information warfare analyst Tal Hagin flagged another AI-generated satellite image purporting to show that Israeli-US jets had targeted the painted silhouette of an aircraft on the ground in Iran, while Tehran seemingly moved real planes elsewhere.

The telltale clues included gibberish coordinates embedded in the fake image, which spread across sites including Instagram, Threads and X.

AFP detected a SynthID, an invisible watermark meant to identify images created using Google AI.

The fabricated satellite images follow the emergence of imposter OSINT — or open-source intelligence — accounts on social media that mimic credible digital investigators and undermine their work.

“Due to the fog of war, it can be very difficult to determine the success of an adversary’s strikes. OSINT came as a solution, using public satellite imagery to circumvent the censorship” inside countries like Iran, Hagin said.

“But it’s now being preyed upon by disinformation agents,” he added.

- ‘Critical awareness’ -

“Manipulated satellite imagery, like other forms of misinformation, can have real-world impacts when people act on the information they come across without verifying its authenticity,” Africk said.

“This can have effects that range from influencing public opinion on a major issue, like whether or not a country should engage in conflict, to impacting financial markets.”

In the age of AI, authentic high-resolution satellite imagery collected in real time can give decision-makers vital clues to assess security threats and debunk falsehoods from unverified sources.

During a recent militant attack on Niamey airport in Niger, satellite intelligence company Vantor said it detected images circulating online purporting to show the main civilian terminal on fire.

The company’s own satellite imagery helped confirm that the photos were fake, almost certainly generated using AI, Vantor’s Tomi Maxted told AFP.

“When a satellite image is presented as visual evidence in the context of war, it can easily influence how people interpret events,” Bo Zhao, from the University of Washington, told AFP.

As AI-generated imagery grows increasingly convincing, it is “important for the public to approach such visual content with caution and critical awareness,” Zhao said.