Charities call on TikTok to crack down on content that can harm children

Updated 04 March 2023

  • In an open letter to the video-sharing platform’s head of safety, the nonprofit organizations urged the company to take ‘meaningful action’ to address the issue
  • The organizations accuse TikTok of failing to act swiftly enough in response to concerns raised by a report in December about the delivery of such content to young people

DUBAI: A number of charitable and other nonprofit organizations have urged video-sharing platform TikTok to do more to protect children by strengthening its policies for the moderation of content relating to suicide and eating disorders.

The call came in an open letter to TikTok’s head of safety, Eric Han. It was signed by almost 30 groups, including the Center for Countering Digital Hate, the American Psychological Association, the UK’s National Society for the Prevention of Cruelty to Children, and suicide prevention organization The Molly Rose Foundation.

TikTok’s algorithm pushes content about self-harm and eating disorders to teenagers almost as soon as they express interest in the topics, according to research published by the CCDH in December. Within 2.6 minutes, the platform recommended suicide-related content to teen accounts set up by CCDH researchers; within eight minutes, it served up content related to eating disorders. It recommended videos about body image and mental-health issues every 39 seconds.

The organizations accuse TikTok of failing to act swiftly enough in response to the concerns raised by the CCDH report. In their letter, they urged the platform to take “meaningful action,” including:

  • improving the moderation of content relating to eating disorders and suicide;
  • working with experts to develop a “comprehensive” approach to identifying and removing harmful content;
  • providing support for users who may be struggling with suicidal thoughts or eating disorders; and
  • being more transparent about, and accountable for, the steps it is taking to address these issues and the effect those steps are having.

The letter noted that TikTok had removed only seven of 56 hashtags related to eating disorders that were highlighted by the CCDH research. Content containing those hashtags had received 14.8 billion views as of January 2023, including 1.6 billion views since the report was published, the center said.

“Since CCDH’s report was released in December 2022, you have chosen to deny the problem, deflect responsibility and delay taking any meaningful action,” the organizations said in the letter.

“You were presented with clear harms but continue to turn your backs on the young users you claim to protect. Your silence speaks volumes.”

This month, TikTok announced that teenagers on the platform will be limited to one hour of use each day. It said the limit was set after consulting the Digital Wellness Lab at Boston Children’s Hospital.

However, users can override this setting when their 60 minutes are up by entering a passcode that allows them to continue using the app. This requires “them to make an active decision,” TikTok said.

“TikTok’s business model is to broadcast content produced by creators to viewers, using algorithms that individually optimize the addictiveness of the content, all so that they can ultimately serve those viewers ads,” said CCDH CEO Imran Ahmed.

“The stakes are too high for TikTok to continue to do nothing, or for our politicians to sit back and fail to act. We need platforms and politicians to have parents’ backs but right now they’re putting profits before people.”

Other organizations that signed the letter included Free Press, the Youth Power Project, the Real Facebook Oversight Board, and the Tech Transparency Project.

 


EU warns Meta it must open up WhatsApp to rival AI chatbots

Updated 09 February 2026

  • The EU executive on Monday told Meta to give rival chatbots access to WhatsApp after an antitrust probe found the US giant to be in breach of the bloc’s competition rules

BRUSSELS: The EU executive on Monday told Meta to give rival chatbots access to WhatsApp after an antitrust probe found the US giant to be in breach of the bloc’s competition rules.
The European Commission said a change in Meta’s terms had “effectively” barred third-party artificial intelligence assistants from connecting to customers via the messaging platform since January.
Competition chief Teresa Ribera said the EU was “considering quickly imposing interim measures on Meta, to preserve access for competitors to WhatsApp while the investigation is ongoing, and avoid Meta’s new policy irreparably harming competition in Europe.”
The EU executive, which is in charge of competition policy, sent Meta a warning known as a “statement of objections,” a formal step in antitrust probes.
Meta now has a chance to reply and defend itself. Monday’s step does not prejudge the outcome of the probe, the commission said.
The tech giant rejected the commission’s preliminary findings.
“The facts are that there is no reason for the EU to intervene,” a Meta spokesperson said.
“There are many AI options and people can use them from app stores, operating systems, devices, websites, and industry partnerships. The commission’s logic incorrectly assumes the WhatsApp Business API is a key distribution channel for these chatbots,” the spokesperson said.
Opened in December, the EU probe marks the latest attempt by the 27-nation bloc to rein in Big Tech companies, many of which are based in the United States, in the face of strong pushback from the government of US President Donald Trump.
- Meta in the firing line -
The investigation covers the European Economic Area (EEA), made up of the bloc’s 27 states, Iceland, Liechtenstein and Norway — with the exception of Italy, which opened a separate investigation into Meta in July.
The commission said that Meta is “likely to be dominant” in the EEA for consumer messaging apps, notably through WhatsApp, and accused Meta of “abusing this dominant position by refusing access” to competitors.
“We cannot allow dominant tech companies to illegally leverage their dominance to give themselves an unfair advantage,” Ribera said in a statement.
There is no legal deadline for concluding an antitrust probe.
Meta is already under investigation under different laws in the European Union.
EU regulators are also investigating its platforms Facebook and Instagram over fears they are not doing enough to tackle the risk of social media addiction for children.
The company also appealed a 200-million-euro fine imposed last year by the commission under the online competition law, the Digital Markets Act.
That case focused on its policy asking users to choose between an ad-free subscription and a free, ad-supported service, and Brussels and Meta remain in discussions over finding an alternative that would address the EU’s concerns.