New York Times sues OpenAI, Microsoft for infringing copyrighted works

Updated 27 December 2023

  • The newspaper said that its articles were used to train ChatGPT and Bing Chat chatbots without permission

NEW YORK: The New York Times sued OpenAI and Microsoft on Wednesday, accusing them of using millions of the newspaper’s articles without permission to help train chatbots to provide information to readers.
The Times said it is the first major US media organization to sue OpenAI and Microsoft, which created popular artificial-intelligence platforms such as ChatGPT and Bing Chat, now known as Copilot, over copyright issues associated with its works.
Writers and others have also sued to limit the so-called scraping of their online content by AI services without compensation.
The newspaper’s complaint filed in Manhattan federal court accused OpenAI and Microsoft of trying to “free-ride on The Times’s massive investment in its journalism” by using it to provide alternative means to deliver information to readers.
“There is nothing ‘transformative’ about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it,” the Times said.
OpenAI and Microsoft did not immediately respond to requests for comment. They have said using copyrighted works to train AI products amounts to “fair use.”
The Times is not seeking a specific amount of damages, but the 172-year-old newspaper estimated damages in the “billions of dollars.”
It also wants the companies to destroy chatbot models and training sets that incorporate its material.

$80 BILLION VALUATION
AI companies scrape information online to train generative AI chatbots, and have attracted billions of dollars in investments.
Investors have valued OpenAI at more than $80 billion.
While OpenAI’s parent is a nonprofit, Microsoft has invested $13 billion in a for-profit subsidiary, for what would be a 49 percent stake.
Novelists including David Baldacci, Jonathan Franzen, John Grisham and Scott Turow have also sued OpenAI and Microsoft in the Manhattan court, claiming that AI systems might have co-opted tens of thousands of their books.
In July, the comedian Sarah Silverman and other authors sued OpenAI and Meta Platforms in San Francisco for having “ingested” their works, including Silverman’s 2010 book “The Bedwetter.” A judge dismissed most of that case in November.
Chatbots compound the struggle among major media organizations to attract and retain readers, though the Times has fared better than most.
The Times ended September with 9.41 million digital-only subscribers, up from 8.59 million a year earlier, while print subscribers fell to 670,000 from 740,000.
Subscriptions generate more than two-thirds of the Times’ revenue, while ads generate about 20 percent of its revenue.

‘MISINFORMATION’
The Times’ lawsuit cited several instances in which OpenAI and Microsoft chatbots gave users near-verbatim excerpts of its articles.
These included a Pulitzer Prize-winning 2019 series on predatory lending in New York City’s taxi industry, and Pete Wells’ 2012 review of Guy Fieri’s since-closed Guy’s American Kitchen & Bar that became a viral sensation.
The Times said such infringements threaten high-quality journalism by reducing readers’ perceived need to visit its website, cutting traffic and potentially eating into advertising and subscription revenue.
It also said the defendants’ chatbots make it harder for readers to distinguish fact from fiction, including when their technology falsely attributes information to the newspaper.
In one instance, the Times said ChatGPT falsely attributed two recommendations for office chairs to its Wirecutter product review website.
“In AI parlance, this is called a ‘hallucination,’” the Times said. “In plain English, it’s misinformation.”
Talks earlier this year to avert a lawsuit, and allow “a mutually beneficial value exchange between defendants and the Times,” were unsuccessful, the Times said.


UAE outlines approach to AI governance amid regulation debate at World Economic Forum

Updated 22 January 2026

  • Minister of State Maryam Al-Hammadi highlights importance of a robust regulatory framework to complement implementation of AI technology
  • Other experts in the panel discussion say regulators should address problems as they arise, rather than trying to solve problems that do not yet exist

DUBAI: The UAE has made changes to 90 percent of its laws in the past four years, Maryam Al-Hammadi, minister of state and the secretary-general of the Emirati Cabinet, told the World Economic Forum in Davos on Wednesday.

Speaking during a panel discussion titled “Regulating at the Speed of Code,” she highlighted the importance of having a robust regulatory framework in place to complement the implementation of artificial intelligence technology in the public and private sectors.

This process of updating and repealing laws has driven the UAE’s efforts to develop an AI model that can assist in drafting legislation, collect feedback from stakeholders on proposed laws and suggest improvements, she said.

Although AI might be more agile at shaping regulation, “there are some principles that we put in the model that we are developing that we cannot compromise,” Al-Hammadi added. These include rules for human accountability, transparency, privacy and data protection, along with constitutional safeguards and a thorough understanding of the law.

At this stage, “we believe AI can advise but still (the) human is in command,” she said.

Authorities in the UAE are aiming to develop, within a two-year timeline, a shareable model to help other nations learn and benefit from its experiences, Al-Hammadi added.

Argentina’s minister of deregulation and state transformation, Federico Sturzenegger, warned against overregulation at the cost of innovation.

Politicians often react to a “salient event” by overreacting, he said, describing most regulators as “very imaginative of all the terrible things that will happen to people if they’re free.”

He said that “we have to take more risk,” and that regulators should wait and address problems as they arise, rather than trying to create solutions for problems that do not yet exist.

This sentiment was echoed by Joel Kaplan, Meta’s chief global affairs officer, who said “imaginative policymakers” often focus more on risks and potential harms than on the economic and growth benefits of innovation.

He pointed to Europe as an example of this, arguing that an excessive focus on “all the possible harms” of new technologies has, over time, reduced competitiveness and risks leaving the region behind in what he described as a “new technological revolution.”