Times newspaper corrects ‘distorted’ coverage of Muslim foster carers

Updated 26 April 2018

  • Coverage by The Times said the girl had been forced to live with a “niqab-wearing foster carer”
  • The Muslim Council of Britain (MCB) wants The Times to apologize for promoting what it calls an inaccurate, misleading and bigoted narrative about Muslims

LONDON: The Times newspaper has been ordered to correct a front-page story titled “Christian child forced into Muslim foster care,” after a ruling from the UK’s independent press regulator. 

The story, published Aug. 30, 2017, was one of three front-page articles published by the paper that month about a five-year-old Christian girl who was placed with Muslim foster carers in March 2017.

Coverage by The Times said the girl had been forced to live with a “niqab-wearing foster carer” and had been “sobbing and begging” not to be sent back because the carers did not speak English, an allegation that has since proved to be false.

The paper also claimed the carers removed the girl’s crucifix necklace, prevented her from eating bacon and encouraged her to learn Arabic. 

The Independent Press Standards Organization (IPSO) said that The Times’s coverage was “distorted,” after an investigation found the allegations to be unsubstantiated. The investigation was carried out by Tower Hamlets, the local council that had taken the child into care.

Wednesday’s edition of the paper mentioned the ruling on the front page and carried full details on page 2 and online.

The secretary-general of the Muslim Council of Britain (MCB) said: “The Times should be forced to apologize for promoting what was widely known to be an inaccurate, misleading and bigoted narrative about Muslims.

“The story aided the hate-filled agenda of far-right extremists such as Britain First and the English Defence League.

“We hope that this front-page note will mark a turning point in the tolerance The Times has shown for anti-Muslim bigotry in its coverage and commentary.”

Miqdaad Versi, who heads the MCB’s work on media representation of Muslims, said: “While IPSO’s ruling on this shameful incident of anti-Muslim reporting is welcome, their response thus far has been too little, too late.

“There needs to be a fundamental review to ensure this never happens again.”


Malaysia, Indonesia become first to block Musk’s Grok over AI deepfakes

Updated 12 January 2026

  • Authorities in both countries acted over the weekend, citing concerns about non-consensual and sexual deepfakes
  • Regulators say existing controls cannot prevent fake pornographic content, especially involving women and minors

KUALA LUMPUR: Malaysia and Indonesia have become the first countries to block Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, after authorities said it was being misused to generate sexually explicit and non-consensual images.
The moves reflect growing global concern over generative AI tools that can produce realistic images, sound and text, while existing safeguards fail to prevent their abuse. The Grok chatbot, which is accessed through Musk’s social media platform X, has been criticized for generating manipulated images, including depictions of women in bikinis or sexually explicit poses, as well as images involving children.
Regulators in the two Southeast Asian nations said existing controls were not preventing the creation and spread of fake pornographic content, particularly involving women and minors. Indonesia’s government temporarily blocked access to Grok on Saturday, followed by Malaysia on Sunday.
“The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space,” Indonesia’s Communication and Digital Affairs Minister Meutya Hafid said in a statement Saturday.
The ministry said the measure was intended to protect women, children and the broader community from fake pornographic content generated using AI.
Initial findings showed that Grok lacks effective safeguards to stop users from creating and distributing pornographic content based on real photos of Indonesian residents, Alexander Sabar, director general of digital space supervision, said in a separate statement. He said such practices risk violating privacy and image rights when photos are manipulated or shared without consent, causing psychological, social and reputational harm.
In Kuala Lumpur, the Malaysian Communications and Multimedia Commission ordered a temporary restriction on Grok on Sunday after what it said was “repeated misuse” of the tool to generate obscene, sexually explicit and non-consensual manipulated images, including content involving women and minors.
The regulator said notices issued this month to X Corp. and xAI demanding stronger safeguards drew responses that relied mainly on user reporting mechanisms.
“The restriction is imposed as a preventive and proportionate measure while legal and regulatory processes are ongoing,” it said, adding that access will remain blocked until effective safeguards are put in place.
Launched in 2023, Grok is free to use on X. Users can ask it questions on the social media platform and tag it directly in posts they have created or in replies to posts from other users. Last summer the company added an image-generation feature, Grok Imagine, which includes a so-called “spicy mode” that can generate adult content.
The Southeast Asian restrictions come amid mounting scrutiny of Grok elsewhere, including in the European Union, Britain, India and France. Grok last week limited image generation and editing to paying users following a global backlash over sexualized deepfakes of people, but critics say it did not fully address the problem.