WASHINGTON DC: The US surgeon general has called on Congress to require warning labels on social media platforms similar to those now mandatory on cigarette boxes.
In a Monday opinion piece in The New York Times, Dr. Vivek Murthy said that social media is a contributing factor in the mental health crisis among young people.
“It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents. A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe,” Murthy said. “Evidence from tobacco studies show that warning labels can increase awareness and change behavior.”
Murthy said a warning label alone would not make social media safe for young people, but it would be one of the steps needed.
Social media use is prevalent among young people, with up to 95 percent of youth ages 13 to 17 saying that they use a social media platform, and more than a third saying that they use social media “almost constantly,” according to 2022 data from the Pew Research Center.
“Social media today is like tobacco decades ago: It’s a product whose business model depends on addicting kids. And as with cigarettes, a surgeon general’s warning label is a critical step toward mitigating the threat to children,” Josh Golin, executive director at Fairplay, an organization that is dedicated to ending marketing to children, said in a statement.
Last year Murthy warned that there wasn’t enough evidence to show that social media is safe for children and teens. He said at the time that policymakers needed to address the harms of social media the same way they regulate things like car seats, baby formula, medication and other products children use.
To comply with federal regulation, social media companies already ban kids under 13 from signing up for their platforms — but children have been shown to easily get around the bans, both with and without their parents’ consent.
Other measures social platforms have taken to address concerns about children’s mental health can also be easily circumvented. For instance, TikTok introduced a default 60-minute time limit for users under 18. But once the limit is reached, minors can simply enter a passcode to keep watching.
Murthy believes the impact of social media on young people should be a more pressing concern.
“Why is it that we have failed to respond to the harms of social media when they are no less urgent or widespread than those posed by unsafe cars, planes or food? These harms are not a failure of willpower and parenting; they are the consequence of unleashing powerful technology without adequate safety measures, transparency or accountability,” he wrote.
In January the CEOs of Meta, TikTok, X and other social media companies went before the Senate Judiciary Committee to testify amid concerns from parents that the platforms are not doing enough to protect young people. The executives touted existing safety tools on their platforms and the work they have done with nonprofits and law enforcement to protect minors.
Murthy said Monday that Congress needs to implement legislation that will protect young people from online harassment, abuse and exploitation and from exposure to extreme violence and sexual content.
“The measures should prevent platforms from collecting sensitive data from children and should restrict the use of features like push notifications, autoplay and infinite scroll, which prey on developing brains and contribute to excessive use,” Murthy wrote.
Sens. Marsha Blackburn and Richard Blumenthal supported Murthy’s message Monday.
“We are pleased that the Surgeon General — America’s top doctor — continues to bring attention to the harmful impact that social media has on our children,” the senators said in a prepared statement.
The surgeon general is also recommending that companies be required to share all their data on health effects with independent scientists and the public, which they currently do not, and to allow independent safety audits.
Murthy said schools and parents also need to participate in providing phone-free times and that doctors, nurses and other clinicians should help guide families toward safer practices.
While Murthy pushes for more action on social media in the United States, the European Union enacted groundbreaking digital rules last year. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc, long a global leader in cracking down on tech giants.
The DSA is designed to keep users safe online and make it much harder to spread content that’s either illegal, like hate speech or child sexual abuse, or violates a platform’s terms of service. It also looks to protect citizens’ fundamental rights such as privacy and free speech.
Officials have warned tech companies that violations could bring fines worth up to 6 percent of their global revenue — which could amount to billions — or even a ban from the EU.
‘AI is here, now what?’ Arab News unveils report on future of media ahead of Bridge Summit
- As the Bridge Summit opens in Abu Dhabi, Arab News releases a landmark report on how AI is transforming media in the MENA region
- Based on a high-level roundtable at the Dubai Future Forum, the new report highlights both the opportunities and risks facing Arab media
DUBAI: As the Bridge Summit kicks off in Abu Dhabi on Monday, bringing together global leaders to explore the future of media, entertainment, and the creative economy, Arab News has launched a timely report on how artificial intelligence is transforming the media industry in the Middle East and beyond.
The report, produced by the Arab News Research and Studies Unit following a high-level roundtable at the Dubai Future Forum, captures the urgency and complexity of AI adoption in the media industry of the Middle East and North Africa region.
It explores how AI is transforming newsroom operations, redefining journalistic roles, and raising critical questions around credibility, accuracy, and trust amid rapid technological disruption.
AI is no longer an emerging trend in the Middle East — it is a central force reshaping economies, governance and public communication.
With AI projected to contribute $320 billion to the regional economy by 2030, including more than $135 billion to Saudi Arabia’s gross domestic product and nearly $96 billion to the UAE’s, governments and industries are racing to integrate it.
But, for the region’s news media, AI represents something deeper than economic potential: a direct challenge to the foundations of credibility, trust and fact-based reporting.
Those tensions set the stage for the roundtable hosted and moderated by Arab News’ Deputy Editor-in-Chief Noor Nugali in collaboration with the Dubai Future Foundation, where editors, media executives and tech specialists convened to confront an industry undergoing one of the most dramatic transformations in its history.
The result is a comprehensive and insightful report that conveys both optimism and unease: AI is weaving itself into daily newsroom operations even as the guardrails needed to protect journalism from misinformation, bias and opacity remain dangerously underdeveloped.
“AI is here and it’s transforming our newsroom,” said Mina Al-Oraibi, editor in chief of the UAE’s leading daily The National, as she described how her team recently held a full-newsroom AI workshop to generate internal use cases.
“We got 26 ideas that we’re working through so people don’t feel this is something imposed,” she said. “They need to feel they’re ahead of the curve rather than being eaten up by it.”
Across the region, that curve is moving quickly. Globally, 81 percent of journalists now use AI tools in their work, and nearly half do so daily.
However, reporters say they rely on these tools mostly to handle mundane, time-consuming tasks such as transcribing interviews, summarizing reports and translating documents.
Nabeel Al-Khatib, general manager of Asharq News, explained how the shift has already redefined newsroom economics.
“A newsroom of 50 can now publish the equivalent of what 500 once could,” he said. However, although “machines will take over the production line,” he argued that “human oversight must remain to ensure accuracy, context and editorial standards.”
For many newsrooms, the advent of generative AI, in which machines create original content, has created valuable efficiencies, freeing journalists to spend more time verifying and reporting, tasks no machine can yet replace.
However, several speakers stressed that the value of AI depends entirely on how intentionally it is used.
“We believe it’s human first, human last,” said Nayla Tueni, editor in chief of Lebanese daily An-Nahar. “We need to always fact-check everything. But at the same time, we need to use all the tools.”
For Tueni, transformation is not optional. “I don’t think journalism will end,” she said. However, if outlets “don’t transform, they cannot continue because the world is transforming every second.”
Sustaining revenue streams is also a concern. Elda Choucair, CEO of Omnicom Media Group MENA, said “the biggest danger is … if you don’t have content that you advertise around.”
The region’s audiences appear more comfortable with AI-enhanced content than those in Western markets. But even as opportunities expand, risks multiply. AI-generated misinformation has surged so dramatically that the World Economic Forum ranked it the top global short-term threat for the second year in a row.
A BBC-led audit of four major AI systems found that nearly half of AI-generated answers contained significant errors, fabricated details or incorrect sourcing.
“It’s already very difficult to differentiate between the (true) and the fake,” said Choucair. “We need to create awareness that sometimes, if you really want the truth, you’ve got to wait.”
At a time when 70 percent of global audiences say they struggle to trust online content, speakers warned that the misuse or undisclosed use of AI could deepen a crisis of confidence.
“The machine should be a slave to human beings,” advertising media mogul Pierre Choueiri said, adding: “This is where governments, or regulations, should come in.”
However, regulation in the region remains elusive. While Saudi Arabia has taken major steps, including the establishment of the Saudi Data & AI Authority and the Kingdom’s Generative AI Guidelines, efforts remain far from the comprehensive frameworks seen in Europe.
“It’s inconceivable that Arab consumers are left to face significant risks with no regulatory shield,” said media strategist and legal expert Mazen Hayek. He argued that the region needs its own protections, like the EU’s General Data Protection Regulation, to ensure transparency, safeguard data and hold AI providers accountable.
For Hayek and others, the deeper problem involves technological sovereignty. Nearly all of the AI platforms used in the Middle East today — from search engines to large language models — are built and controlled abroad, often trained on datasets that do not reflect the region’s linguistic, cultural or political realities.
“We live in a region that has zero control over the platforms and the technology that we consume,” Hayek said. “Someone needs to create a platform that empowers the region to create and distribute its own content.”
Julien Hawari, CEO of the emerging social media platform Million, said the main issue is integrity. “That has been a problem for as long as we can think of.”
Rashid Al-Marri, CEO of the Media Regulation Sector at the Dubai Media Council, explained that “there has to be that human element understanding (the content) and what’s happening and being able to come out and speak and get the truth out there.”
Saudi Arabia’s push toward sovereign AI infrastructure, including Public Investment Fund-backed HUMAIN and the $100 billion Project Transcendence, was cited as a step in the right direction. However, roundtable participants warned that unless the region accelerates these efforts, it risks ceding its information future to external algorithms and foreign companies.
The human-capital gap is equally pressing. Despite widespread adoption, most journalists using AI have received little or no training. Many rely on self-learning or online tutorials, and nearly eight in 10 work in newsrooms without formal AI policies.
This lack of structure has created an environment where AI is widely deployed but rarely governed.
For CAMB.AI co-founder Avneesh Prakash, the solution requires both precaution and empowerment. “Like any innovation, AI needs to be regulated,” he said. “Just as a car has an accelerator and a brake, AI must include a kill switch because it requires human judgment, human creativity and human resilience.”
Despite the risks, the discussion ended on a note of guarded optimism. Participants agreed that AI can help rebuild journalism for a digital era — but only if newsrooms combine innovation with rigorous editorial oversight, transparency and a renewed commitment to verification.
Mamoon Sbeih, regional president of advertising firm APCO, offered a clear warning of what lies ahead. AI, he said, “might help the journalism industry progress and redefine itself, or it might expedite its demise.”
For now, the region’s media leaders remain determined to pursue the first path — ensuring that even as machines play a growing role in production, the values that define journalism remain firmly, unmistakably human.