TikTok takes steps to make platform safer for teens

Updated 19 November 2021

  • Platform commissioned special report to better understand impact of potentially harmful challenges, hoaxes

DUBAI: Short-form video app TikTok has released the findings of a report specially commissioned to help better understand young people’s engagement with potentially harmful challenges and hoaxes — pranks or scams created to frighten someone — in a bid to strengthen safety on the platform.

In a statement, the company said that its social networking service had been designed to “advance joy, connection, and inspiration,” but added that fostering an environment where creative expression thrived required that it also prioritized safety for the online community, especially its younger members.

With this in mind, TikTok hired independent safeguarding agency Praesidio Safeguarding to carry out a global survey of more than 10,000 people.

The firm also convened a panel of 12 youth safety experts from around the world to review and provide input into the report. It also partnered with Dr. Richard Graham, a clinical child psychiatrist specializing in healthy adolescent development, and Dr. Gretchen Brion-Meisels, a behavioral scientist focused on risk prevention in adolescence, who advised on the research and contributed to the study.

The report found a high level of exposure to online challenges, with teenagers likely to come across all kinds of challenges in their day-to-day lives.

Social media was seen to play the biggest role in generating awareness of these challenges, but the influence of traditional media was also significant.

When teens were asked to describe a recent online challenge, 48 percent of the challenges described were considered safe, 32 percent included some risk but were still regarded as safe, 14 percent were viewed as risky and dangerous, and 3 percent were described as very dangerous. Only 0.3 percent of the teenagers surveyed said they had taken part in a challenge they considered really dangerous.

Meanwhile, 46 percent of teens said they wanted “good information on risks more widely” along with “information on what is too far.” Receiving good information on risks was also ranked as a top preventative strategy by parents (43 percent) and teachers (42 percent).

Earlier this year, AFP reported that a Pakistani teenager died while pretending to kill himself as his friends recorded a TikTok video. In January, another Pakistani teenager was killed after being hit by a train, and last year a security guard died playing with his rifle while filming a clip.

Such videos were categorized in the report as “suicide and self-harm hoaxes” where the intention had been to show something fake and trick people into believing that it was true.

Not only could challenges go horribly wrong, as evidenced by the Pakistan cases, but they could also spread fear and panic among viewers. Internet hoaxes were shown to have had a negative impact on 31 percent of teens, and of those, 63 percent said it was their mental health that had been affected.

Based on the findings of the report, TikTok said it was strengthening protection efforts on the platform, starting with the removal of hoax warning videos. The research indicated that alarmist warnings about self-harm hoaxes could themselves harm the well-being of young people, as viewers often treated the hoax as real. The company therefore planned to remove such warnings while continuing to allow conversation that dispelled panic and promoted accurate information.

Despite already having safety policies in place, the firm was now working to expand its enforcement measures. The platform has created technology that alerts safety teams to sudden increases in violating content linked to hashtags, and has now expanded it to capture potentially dangerous behavior as well.

TikTok also intends to build on its Safety Center by providing new resources, such as those dedicated to online challenges and hoaxes, and by improving its warning labels to redirect users to appropriate resources when they search for content related to harmful challenges or hoaxes.

The company said the report was the first step in making “a thoughtful contribution to the safety and safeguarding of families online,” adding that it would “continue to explore and implement additional measures on behalf of the community.”


Grok faces more scrutiny over deepfakes as Irish regulator opens EU privacy investigation

Updated 17 February 2026

  • The regulator says Grok has created and shared sexualized images of real people, including children. Researchers say some examples appear to involve minors
  • X also faces other probes in Europe over illegal content and user safety

LONDON: Elon Musk’s social media platform X faces a European Union privacy investigation after its Grok AI chatbot started spitting out nonconsensual deepfake images, Ireland’s data privacy regulator said Tuesday.

Ireland’s Data Protection Commission said it notified X on Monday that it was opening the inquiry under the 27-nation EU’s strict data privacy regulations, adding to the scrutiny X is facing in Europe and other parts of the world over Grok’s behavior.

Grok sparked a global backlash last month after it started granting requests from X users to undress people with its AI image generation and editing capabilities, including depicting women in transparent bikinis or revealing clothing. Researchers said some images appeared to include children. The company later introduced some restrictions on Grok, though authorities in Europe weren’t satisfied.

The Irish watchdog said its investigation focuses on the apparent creation and posting on X of “potentially harmful” nonconsensual intimate or sexualized images containing or involving personal data from Europeans, including children.

X did not respond to a request for comment.

Grok was built by Musk’s artificial intelligence company xAI and is available through X, where its responses to user requests are publicly visible.

The watchdog said the investigation will seek to determine whether X complied with the EU data privacy rules known as GDPR, or the General Data Protection Regulation. The Irish regulator takes the lead on enforcing the bloc’s privacy rules because X’s European headquarters is in Dublin. Violations can result in hefty fines.

The regulator “has been engaging” with X since media reports started circulating weeks earlier about “the alleged ability of X users to prompt the @Grok account on X to generate sexualized images of real people, including children,” Deputy Commissioner Graham Doyle said in a press statement.

Spain’s government has ordered prosecutors to investigate X, Meta and TikTok for alleged crimes related to the creation and proliferation of AI-generated child sex abuse material on their platforms, Spanish Prime Minister Pedro Sánchez said on Tuesday.

“These platforms are attacking the mental health, dignity and rights of our sons and daughters,” Sánchez wrote on X.

Spain announced earlier this month that it was pursuing a ban on access to social media platforms for under-16s.

Earlier this month, French prosecutors raided X’s Paris offices and summoned Musk for questioning. Meanwhile, the data privacy and media regulators in Britain, which has left the EU, have opened their own investigations into X.

The platform is already facing a separate EU investigation from Brussels over whether it has been complying with the bloc’s digital rulebook for protecting social media users, which requires platforms to curb the spread of illegal content such as child sexual abuse material.