OpenAI says AI tools can be effective in content moderation

Updated 16 August 2023

  • Company said tests of its GPT-4 model on a range of content moderation services showed promising results

LONDON: ChatGPT creator OpenAI made a strong case for the use of AI in content moderation, saying it can unlock efficiencies at social media firms by shortening the time it takes to handle some of their most grueling tasks.
Despite the hype around generative AI, companies such as Microsoft and Google-owner Alphabet have yet to monetize the technology into which they have been pumping billions of dollars in the hope that it will have a big impact across industries.
OpenAI, which is backed by Microsoft, said its latest GPT-4 AI model can reduce the process of content moderation to a few hours from months and ensure more consistent labeling.
Bloomberg reported that the company has been testing its GPT-4 model for a range of content moderation services, and has invited customers to try it out. The company claims that its tools can help businesses complete six months of work in just a day or two.
Content moderation can be a grueling task for social media firms such as Facebook-parent Meta, which works with thousands of moderators around the world to block users from seeing harmful content such as child pornography and images of extreme violence.
“The process (of content moderation) is inherently slow and can lead to mental stress on human moderators,” OpenAI said. “With this system, the process of developing and customizing content policies is trimmed down from months to hours.”
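OpenAI has not published the prompts or pipeline behind this workflow, but the basic idea of policy-based labeling with GPT-4 can be sketched using the publicly documented OpenAI Python SDK. In the hypothetical example below, the policy text, label set and label_post helper are illustrative assumptions, not the company's actual moderation system.

```python
# Minimal sketch of policy-based content labeling with the OpenAI Python SDK.
# The policy text, label set and helper name are illustrative assumptions,
# not OpenAI's actual moderation tooling.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

POLICY = (
    "You are a content moderator. Classify the user's post against this policy:\n"
    "- VIOLENCE: threats or graphic depictions of violence\n"
    "- HARASSMENT: targeted abuse of an individual\n"
    "- ALLOW: anything else\n"
    "Reply with exactly one label."
)

def label_post(post: str) -> str:
    """Return a single policy label for one piece of user-generated content."""
    response = client.chat.completions.create(
        model="gpt-4",     # model name is an assumption; any capable model could be used
        temperature=0,     # deterministic output helps keep labels consistent
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(label_post("I can't believe the game got cancelled again."))
```

Editing the POLICY string and re-running it against a small test set is, in effect, the kind of rapid policy-development loop the company describes, in place of retraining human moderators over weeks or months.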
Separately, OpenAI CEO Sam Altman said on Tuesday that the startup does not train its AI models on user-generated data.


Grok faces more scrutiny over deepfakes as Irish regulator opens EU privacy investigation

Updated 17 February 2026

  • The regulator is investigating the apparent creation and sharing on X of sexualized images of real people; researchers say some examples appear to involve children
  • X also faces other probes in Europe over illegal content and user safety

LONDON: Elon Musk’s social media platform X faces a European Union privacy investigation after its Grok AI chatbot began generating nonconsensual deepfake images, Ireland’s data privacy regulator said Tuesday.
Ireland’s Data Protection Commission said it notified X on Monday that it was opening the inquiry under the 27-nation EU’s strict data privacy regulations, adding to the scrutiny X is facing in Europe and other parts of the world over Grok’s behavior.
Grok sparked a global backlash last month after it started granting requests from X users to undress people with its AI image generation and editing capabilities, including depicting women in transparent bikinis or other revealing clothing. Researchers said some images appeared to include children. The company later introduced some restrictions on Grok, though authorities in Europe weren’t satisfied.
The Irish watchdog said its investigation focuses on the apparent creation and posting on X of “potentially harmful” nonconsensual intimate or sexualized images containing or involving personal data from Europeans, including children.
X did not respond to a request for comment.
Grok was built by Musk’s artificial intelligence company xAI and is available through X, where its responses to user requests are publicly visible.
The watchdog said the investigation will seek to determine whether X complied with the EU data privacy rules known as GDPR, or the General Data Protection Regulation. The Irish regulator takes the lead on enforcing the bloc’s privacy rules because X’s European headquarters is in Dublin, and violations can result in hefty fines.
The regulator “has been engaging” with X since media reports started circulating weeks earlier about “the alleged ability of X users to prompt the @Grok account on X to generate sexualized images of real people, including children,” Deputy Commissioner Graham Doyle said in a press statement.
Spain’s government has ordered prosecutors to investigate X, Meta and TikTok for alleged crimes related to the creation and proliferation of AI-generated child sex abuse material on their platforms, Spanish Prime Minister Pedro Sánchez said on Tuesday.
“These platforms are attacking the mental health, dignity and rights of our sons and daughters,” Sánchez wrote on X.
Spain announced earlier this month that it was pursuing a ban on access to social media platforms for under-16s.
Earlier this month, French prosecutors raided X’s Paris offices and summoned Musk for questioning. Meanwhile, the data privacy and media regulators in Britain, which has left the EU, have opened their own investigations into X.
The platform is already facing a separate EU investigation from Brussels over whether it has been complying with the bloc’s digital rulebook for protecting social media users, the Digital Services Act, which requires platforms to curb the spread of illegal content such as child sexual abuse material.