Global news publisher Axel Springer partners with OpenAI in landmark deal

News publishers have been slow to adopt generative AI technology over concerns about its tendency to generate factually incorrect information. (AFP/File)
Updated 14 December 2023

  • OpenAI will pay to use Axel Springer’s content
  • OpenAI’s ChatGPT will provide summaries of news stories citing Axel Springer brands as the source

NEW YORK: Global news publisher Axel Springer is partnering with OpenAI, the company behind the ChatGPT chatbot, in a first-of-its-kind deal that will deliver summaries of Axel Springer content in response to ChatGPT queries, the companies announced on Wednesday.
As part of the deal, when users ask ChatGPT a question, the chatbot will deliver summaries of relevant news stories from Axel Springer brands including Politico, Business Insider, Bild and Welt. Those summaries will include material from stories that would otherwise require subscriptions to read. The summaries will cite the Axel Springer publication as the source and provide links to the full articles they summarize.
The summaries will be available on ChatGPT as soon as the article has been published, so that breaking news is part of the user experience, according to Tom Rubin, OpenAI’s head of intellectual property and content. The Axel Springer content will begin appearing in the first quarter of 2024, Rubin said.
The content will get a “favorable position” in ChatGPT search results, with the goal of helping to drive traffic and subscription revenue to Axel Springer brands, according to a source familiar with the deal.
OpenAI will also pay for the Axel Springer content it uses to train the large language models that power ChatGPT. That content includes archived material, Rubin said.
The companies did not disclose financial terms of the deal, which is for multiple years and is not exclusive, according to Rubin.
“We want to explore the opportunities of AI-empowered journalism – to bring quality, societal relevance and the business model of journalism to the next level,” said Axel Springer Chief Executive Mathias Doepfner in a statement.
The deal comes as publishers weigh lawsuits against technology companies for violating their copyrights by using content without permission to train large language models. Alongside striking deals with AI companies, publishers are threatening litigation over copyright infringement and demanding compensation for the content used to train AI models.
AI companies, for their part, benefit from training their models on accurate, recent information, making news content a desirable source of training data. AI systems such as ChatGPT have dazzled consumers and businesses with their ability to plan vacations, summarize legal documents and write computer code.
The Axel Springer deal is the second between OpenAI and a major news publisher. In July OpenAI struck a deal with the Associated Press, in which the AP is licensing part of its archive of news stories to the Microsoft-backed tech company. The AP will gain access to OpenAI’s technology and product expertise as part of the deal, for which financial details were not disclosed. The AP deal “wasn’t about display of content,” said Rubin.
Other deals may soon follow. In November, News Corp. chief executive Robert Thomson said the company was in “advanced discussions” to strike deals on the use of its content for generative AI.
News publishers have been slow to adopt generative AI technology over concerns about its tendency to generate factually incorrect information, as well as challenges in differentiating between content produced by humans and computer programs.
Europe on Friday reached a provisional deal on landmark European Union rules governing the use of AI. The accord includes new transparency obligations for foundation models like those powering ChatGPT, including revealing what material they use to train their models. Those obligations could expose technology companies to more potential lawsuits or push them to strike deals.


Disinformation the new enemy in disaster zones, says Red Cross

Updated 05 March 2026


  • “Harmful information and dehumanizing narratives” undermine humanitarian aid and put the lives of aid workers at risk
  • Between 2020 and 2024, disasters affected nearly 700 million people, displaced over 105 million, and killed more than 270,000 — more than doubling the number of people in need of humanitarian aid

GENEVA: The rise of disinformation is undermining humanitarian aid and putting lives at risk, while disasters are affecting ever more people, the Red Cross warned Thursday.
“Between 2020 and 2024, disasters affected nearly 700 million people, caused more than 105 million displacements, and claimed over 270,000 lives,” the International Federation of Red Cross and Red Crescent Societies said.
The number of people needing humanitarian assistance more than doubled in the same timeframe, the IFRC said in its World Disasters Report 2026.
But the world’s largest humanitarian network said that “harmful information and dehumanizing narratives” were increasingly undermining trust, putting the lives of aid workers at risk.
“In polarized and politically-charged contexts, humanitarian principles such as neutrality and impartiality are increasingly misunderstood, misrepresented or deliberately attacked online,” it said.
The IFRC has more than 17 million volunteers in 191 countries.
“In every crisis I have witnessed, information is as essential as food, water and shelter,” said the Geneva-based federation’s secretary general Jagan Chapagain.
“But when information is false, misleading or deliberately manipulated, it can deepen fear, obstruct humanitarian access and cost lives.”
He said harmful information was not a new phenomenon, but it was now moving “with unprecedented speed and reach.”
Chapagain said digital platforms were proving “fertile ground for lies.”
The IFRC report said the challenge nowadays was no longer about the availability of information but its reliability, noting that the production and spread of disinformation was easily amplified by artificial intelligence.

- ‘Life and death’ -

The report cited numerous recent examples of harmful information hampering crisis response.
During the 2024 floods in Valencia, false narratives online accused the Spanish Red Cross of diverting aid to migrants, which in turn fueled “xenophobic attacks on volunteers,” the IFRC said.
In South Sudan, rumors that humanitarian agencies were distributing poisoned food “caused people to avoid life-saving aid” and led to threats against Red Cross staff.
In Lebanon, false claims that volunteers were spreading Covid-19, favoring certain groups with aid and providing unsafe cholera vaccines eroded trust and endangered vulnerable communities, the IFRC said.
And in Bangladesh, during political unrest, volunteers faced “widespread accusations of inaction and political alignment,” leading to harassment and reputational damage, it added.
Similar events were registered by the IFRC in Sudan, Myanmar, Peru, the United States, New Zealand, Canada, Kenya and Bulgaria.
The report underlined that around 94 percent of disasters were handled by national authorities and local communities, without international intervention.
“However, while volunteers, local leaders and community media are often the most trusted messengers, they operate in increasingly hostile and polarized information environments,” the IFRC said.
The federation called on governments, tech firms, humanitarian agencies and local actors to recognize that reliable information “is a matter of life and death.”
“Without trust, people are less likely to prepare, seek help or follow life-saving guidance; with it, communities act together, absorb shocks and recover more effectively,” said Chapagain.
The organization urged technology platforms to prioritize authoritative information from trusted sources in crisis contexts, and transparently moderate harmful content.
And it said humanitarian agencies needed to make preparing to deal with disinformation “a core function” of their operations, with trained teams and analytics.