UN council to hold first meeting on potential threats of artificial intelligence to global peace

AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. (REUTERS)

  • Guterres announced plans to appoint an advisory board on artificial intelligence in September to prepare initiatives that the UN can take

UNITED NATIONS: The UN Security Council will hold a first-ever meeting on the potential threats of artificial intelligence to international peace and security, organized by the United Kingdom, which sees tremendous potential in AI but also major risks, for example in its possible use in autonomous weapons or in the control of nuclear weapons.
UK Ambassador Barbara Woodward on Monday announced the July 18 meeting as the centerpiece of her country’s presidency of the council this month. It will include briefings by international AI experts and Secretary-General Antonio Guterres, who last month said the alarm bells over the most advanced form of AI were “deafening,” and loudest from its developers.
“These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war,” the UN chief said.
Guterres announced plans to appoint an advisory board on artificial intelligence in September to prepare initiatives that the UN can take. He also said he would react favorably to a new UN agency on AI and suggested as a model the International Atomic Energy Agency, which is knowledge-based and has some regulatory powers.
Woodward said the UK wants to encourage “a multilateral approach to managing both the huge opportunities and the risks that artificial intelligence holds for all of us,” stressing that “this is going to take a global effort.”
She stressed that the benefits side is huge, citing AI’s potential to help UN development programs, improve humanitarian aid operations, assist peacekeeping operations and support conflict prevention, including by collecting and analyzing data. “It could potentially help us close the gap between developing countries and developed countries,” she added.
But the risk side raises serious security questions that must also be addressed, Woodward said.
Europe has led the world in efforts to regulate artificial intelligence, which gained urgency with the rise of a new breed of artificial intelligence that gives AI chatbots like ChatGPT the power to generate text, images, video and audio that resemble human work. On June 14, EU lawmakers signed off on the world’s first set of comprehensive rules for artificial intelligence, clearing a key hurdle as authorities across the globe race to rein in AI.
In May, the head of the artificial intelligence company that makes ChatGPT told a US Senate hearing that government intervention will be critical to mitigating the risks of increasingly powerful AI systems, saying as this technology advances people are concerned about how it could change their lives, and “we are too.”
OpenAI CEO Sam Altman proposed the formation of a US or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”
Woodward said the Security Council meeting, to be chaired by UK Foreign Secretary James Cleverly, will provide an opportunity to listen to expert views on AI, which is a very new technology that is developing very fast, and start a discussion among the 15 council members on its implications.
Britain’s Prime Minister Rishi Sunak has announced that the UK will host a summit on AI later this year, “where we’ll be able to have a truly global multilateral discussion,” Woodward said.

A matter of trust: Media leaders look to rebuild credibility in age of AI


  • ‘Don’t do what pleases platforms, do what is right,’ journalism professor says
  • ‘General journalism is going to be very difficult,’ media boss says

ABU DHABI: Media organizations are facing unprecedented disruption to their industry, as traditional business models come under strain from rapid technological shifts, the rise of independent creators and a growing public distrust in news.

This fragmented landscape has transformed the essence of journalism and content creation in the 21st century.

Amid the upheaval, journalists, creators and industry executives gathered in Abu Dhabi on Monday for the opening day of the inaugural Bridge Media summit, where they hoped to map a path forward in a rapidly evolving industry.

Jeff Zucker, CEO of RedBird IMI and an operating partner at RedBird Capital Partners, said that while storytelling remained at the core of the media, artificial intelligence was fundamentally reshaping how stories were created, delivered and consumed.

“General journalism, by and large, is going to be very difficult in a world of AI,” he told the conference.

Having been at the helm of some of the biggest media businesses in the world, including CNN and NBCUniversal, Zucker emphasized the value of deep, niche journalism, arguing that the viability of future news models would hinge on offering something readers cannot get elsewhere.

“Economic models may broaden, so I think that niche journalism that goes deep and gives the consumer an edge and a reason to subscribe to that journalistic outlet — that’s what will work and that’s what will succeed.”

It is an idea that featured across the first day of the summit, with media practitioners from all disciplines pushing colleagues to focus on elevating the quality and originality of their content, rather than being dismayed at the fall in advertising revenue and the chokehold of algorithms.

Moataz Fattah, a journalism professor and presenter at Al-Mashhad TV, decried media organizations’ constant focus on algorithms, saying they would be better served by honing their craft.

“Don’t do what pleases platforms, do what is right and go to where the audience is,” he said.

“How to be authentic is to be true to what you believe in.”

Fattah argued that while it was true that younger generations gravitated toward short-form content, it was still possible to engage them in deeper dives on subjects.

What mattered most, he said, was ensuring that the right format was used for the subject matter, applying creativity and flair to keep audiences challenged and informed so that they might get the full context.

This idea of challenging audiences, rather than caving to what may seem trendy, was echoed by Branko Brkic, leader at Project Kontinuum, an initiative that aims to reaffirm news media’s positive role in the global community.

“If we are giving readers and audiences (only) what they want, why do we exist? Why do they need us?” he said.

“We have to be half a step ahead, we need to satisfy the needs that they know they have but also fill the needs they didn’t know they wanted.”

Sulemana Braimah, executive director at the Media Foundation for West Africa, said transparency, credibility and, ultimately, the impact on society were what should drive storytelling, rather than just views and likes.

“In the newsroom, we always have to ask why we are doing this story, what is the story in the story, who is it for?” he said, urging media outlets to choose depth over superficial recognition of content.

“Stories that get views don’t necessarily mean they hold value. We need to keep asking why, what’s the value, what are we helping by making this story.”

Individuals over institutions 

Another theme that dominated discussions at the conference was the idea that trust was increasingly being driven by individuals rather than brands and institutions. The argument, put forward by Zucker, is that unlike in the past, when legacy outlets conferred trust upon journalists, audiences now place their trust in individual voices within a media institution, making personal reputation a critical currency in modern journalism.

“People are looking much more to individuals in this new creator economy, this new AI world,” he said.

Jim Bankoff, co-founder and CEO of Vox Media, echoed that sentiment and predicted that more news content would be led by trustworthy and notable personalities.

Speaking on the strategy of his own media company, he said the future would likely see lower headcounts within institutions, due to AI and automation, but more emphasis on talented individuals.

“Work on something that makes you essential to your core audience,” he said.

Consolidation, AI and finances in flux

One of the big talking points of the opening day was Netflix’s attempt to acquire Warner Bros., a move seen by some as evidence of a rapidly consolidating industry challenged by shrinking profit margins.

AI threatens to squeeze these margins further. With many more people using AI summaries and overviews to get news and information, chatbots are becoming the new face of the internet, reducing traffic to news websites and eroding the ad-based revenue model.

Pooja Bagga, chief information officer at Guardian Media Group, said audiences defined the rules of the internet and the delivery of news content, and that the onus lay with media companies to reinvent themselves.

“It’s all about what our audience want, what they want to see, how they want to see it, which formats they want to interact with and when they want to consume the news,” she said.

Many media outlets have signed licensing deals with AI companies, allowing their content to be used as reference material for user queries in tools like ChatGPT while ensuring attribution back to their websites.

These agreements also allow tech firms to access publishers’ content, including material held behind paywalls, to train large language models and power AI-driven services, in exchange for revenue sharing or for the media organizations’ use of the technology to build their own products.

In October last year, the Financial Times, Reuters, Axel Springer, Hearst and USA Today Network signed an agreement with Microsoft allowing it to republish their content in exchange for a share of the advertising revenue.

Bagga said that such agreements were essential to safeguarding news content and ensuring tech companies upheld their responsibility to handle journalistic material with integrity and accountability.

She also stressed the need for greater transparency from tech companies in how they use journalistic content to train large language models, emphasizing the importance of ensuring accuracy in AI-generated overviews.

An alternative route, she said, was collaborating with other publishing companies under rules and regulations that ensure intellectual property was protected.

In newsrooms, amid the fast-evolving world of tech and artificial intelligence, there must be a trusted supervisory body to safeguard editorial integrity, she said.

Elizabeth Linder, founder and chief diplomatic officer at Brooch Associates, stressed the need for transparency and broad understanding on how decisions are made by media and tech companies to ensure “a productive social contract.”

She called for conversations between governments, tech platforms and individuals, citing Australia’s Communications Minister Anika Wells, who introduced a bill to ban social media use for children under the age of 16.

“Especially with the development of AI technology coming in, we need to take a really big step back and reframe this entire conversation.”