PARIS: American artificial intelligence firm OpenAI said Tuesday it would add parental controls to its chatbot ChatGPT, a week after a California couple said the system encouraged their teenage son to kill himself.
“Within the next month, parents will be able to... link their account with their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate model behavior rules,” the generative AI company said in a blog post.
Parents will also receive notifications from ChatGPT “when the system detects their teen is in a moment of acute distress,” OpenAI added.
Matthew and Maria Raine argue in a lawsuit filed last week in a California state court that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before he took his own life.
The lawsuit alleges that in their final conversation on April 11, 2025, ChatGPT helped 16-year-old Adam steal vodka from his parents and provided technical analysis of a noose he had tied, confirming it “could potentially suspend a human.”
Adam was found dead hours later, having used the same method.
“When a person is using ChatGPT it really feels like they’re chatting with something on the other end,” said attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the legal complaint.
“These are the same features that could lead someone like Adam, over time, to start sharing more and more about their personal lives, and ultimately, to start seeking advice and counsel from this product that basically seems to have all the answers,” Dincer said.
Product design features set the scene for users to slot a chatbot into trusted roles like friend, therapist or doctor, she said.
Dincer said the OpenAI blog post announcing parental controls and other safety measures seemed “generic” and lacking in detail.
“It’s really the bare minimum, and it definitely suggests that there were a lot of (simple) safety measures that could have been implemented,” she added.
“It’s yet to be seen whether they will do what they say they will do and how effective that will be overall.”
The Raines’ case is the latest in a string of cases that have surfaced in recent months of people being encouraged in delusional or harmful trains of thought by AI chatbots — prompting OpenAI to say it would reduce models’ “sycophancy” toward users.
“We continue to improve how our models recognize and respond to signs of mental and emotional distress,” OpenAI said Tuesday.
The company said it had further plans to improve the safety of its chatbots over the coming three months, including redirecting “some sensitive conversations... to a reasoning model” that puts more computing power into generating a response.
“Our testing shows that reasoning models more consistently follow and apply safety guidelines,” OpenAI said.
ChatGPT to get parental controls after teen’s death
- Parents Matthew and Maria Raine have filed a lawsuit alleging that the chatbot helped their 16-year-old son steal vodka and analyzed the noose he used to take his own life
- OpenAI announced new safety tools, including age-appropriate response controls and notifications for detecting acute distress in children
China’s national security agency in Hong Kong summons international media representatives
HONG KONG: China’s national security agency in Hong Kong summoned international media representatives for a “regulatory talk” on Saturday, saying some had spread false information and smeared the government in recent reports on a deadly fire and upcoming legislative elections.
Senior journalists from several major outlets operating in the city, including AFP, were summoned to the meeting by the Office for Safeguarding National Security (OSNS), which was opened in 2020 following Beijing’s imposition of a wide-ranging national security law on the city.
Through the OSNS, Beijing’s security agents operate openly in Hong Kong, with powers to investigate and prosecute national security crimes.
“Recently, some foreign media reports on Hong Kong have disregarded facts, spread false information, distorted and smeared the government’s disaster relief and aftermath work, attacked and interfered with the Legislative Council election, (and) provoked social division and confrontation,” an OSNS statement posted online shortly after the meeting said.
At the meeting, an official who did not give his name read out a similar statement to media representatives.
He did not give specific examples of coverage that the OSNS had taken issue with, and did not take questions.
The online OSNS statement urged journalists to “not cross the legal red line.”
“The Office will not tolerate the actions of all anti-China and trouble-making elements in Hong Kong, and ‘don’t say we didn’t warn you’,” it read.
For the past week and a half, news coverage in Hong Kong has been dominated by a deadly blaze at a residential estate that killed at least 159 people.
Authorities have warned against crimes that “exploit the tragedy” and have reportedly arrested at least three people for sedition in the fire’s aftermath.
Dissent in Hong Kong has been all but quashed since Beijing brought in the national security law, after huge and sometimes violent protests in 2019.
Hong Kong’s electoral system was revamped in 2021 to ensure that only “patriots” could hold office, and the upcoming poll on Sunday will select a second batch of lawmakers under those rules.