British MPs: Child influencers and their followers need more protection

  • Booming online culture has a ‘murkier world’ where children are ‘at risk of exploitation,’ says leading MP

LONDON: British MPs have called for more protection and tougher legislation for social media influencers, particularly “kidfluencers” and their followers.

A new report published by the parliamentary Digital, Culture, Media and Sport Committee argues that the rise of influencer culture has exposed online icons and their followers to harm and exploitation.

While noting that influencer culture brought significant benefits to the UK creative industries and the economy, DCMS Committee Chairman Julian Knight said: “If you dig below the shiny surface of what you see on screen you will discover an altogether murkier world, where both the influencers and their followers are at risk of exploitation and harm online.”

He added: “Child viewers, who are still developing digital literacy, are in particular danger in an environment where not everything is always as it seems, while there is a woeful lack of protection for young influencers who often spend long hours producing financially lucrative content at the direction of others.” 

The report revealed that many child influencers’ accounts on YouTube, Snapchat and Instagram are run and managed by their parents, raising concerns that the children are being exploited to make money.

The committee said it had heard concerns from witnesses that some children are being used by parents to capitalize on the lucrative child and family influencing market.

The committee called on the British government to strengthen employment and advertising laws to protect children, both influencers and consumers. 

Knight said: “The explosion in influencer activity has left the authorities playing catch-up, and exposed the impotence of advertising rules and employment protections designed for a time before social media was the all-encompassing behemoth it has become today.

“This report has held a mirror up to the problems which beset the industry, where for too long it has been a case of lights, camera, inaction.

“It is now up to the Government to reshape the rules to keep pace with the changing digital landscape and ensure proper protections for all.”

The committee added that children, parents and schools must be given more support in developing media literacy. 

Other recommendations include launching an investigation into influencer pay and giving regulators more power to enforce advertising law.


Malaysia, Indonesia become first to block Musk’s Grok over AI deepfakes

  • Authorities in both countries acted over the weekend, citing concerns about non-consensual and sexual deepfakes
  • Regulators say existing controls cannot prevent fake pornographic content, especially involving women and minors

KUALA LUMPUR: Malaysia and Indonesia have become the first countries to block Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, after authorities said it was being misused to generate sexually explicit and non-consensual images.

The moves reflect growing global concern over generative AI tools that can produce realistic images, sound and text, while existing safeguards fail to prevent their abuse. The Grok chatbot, which is accessed through Musk’s social media platform X, has been criticized for generating manipulated images, including depictions of women in bikinis or sexually explicit poses, as well as images involving children.

Regulators in the two Southeast Asian nations said existing controls were not preventing the creation and spread of fake pornographic content, particularly involving women and minors. Indonesia’s government temporarily blocked access to Grok on Saturday, followed by Malaysia on Sunday.

“The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space,” Indonesia’s Communication and Digital Affairs Minister Meutya Hafid said in a statement on Saturday.

The ministry said the measure was intended to protect women, children and the broader community from fake pornographic content generated using AI.

Initial findings showed that Grok lacks effective safeguards to stop users from creating and distributing pornographic content based on real photos of Indonesian residents, Alexander Sabar, director general of digital space supervision, said in a separate statement. He said such practices risk violating privacy and image rights when photos are manipulated or shared without consent, causing psychological, social and reputational harm.

In Kuala Lumpur, the Malaysian Communications and Multimedia Commission ordered a temporary restriction on Grok on Sunday after what it said was “repeated misuse” of the tool to generate obscene, sexually explicit and non-consensual manipulated images, including content involving women and minors.

The regulator said notices issued this month to X Corp. and xAI demanding stronger safeguards drew responses that relied mainly on user reporting mechanisms.

“The restriction is imposed as a preventive and proportionate measure while legal and regulatory processes are ongoing,” it said, adding that access will remain blocked until effective safeguards are put in place.

Launched in 2023, Grok is free to use on X. Users can ask it questions on the social media platform and tag it directly in their own posts or in replies to posts from other users. Last summer the company added an image generation feature, Grok Imagine, which includes a so-called “spicy mode” that can generate adult content.

The Southeast Asian restrictions come amid mounting scrutiny of Grok elsewhere, including in the European Union, Britain, India and France. Grok last week limited image generation and editing to paying users following a global backlash over sexualized deepfakes of people, but critics say the move did not fully address the problem.