Elon Musk sues OpenAI and CEO Sam Altman, claiming betrayal of its goal to benefit humanity

OpenAI CEO Sam Altman has turned ChatGPT into a profit-making endeavor, a betrayal of the project's founding aims of benefiting humanity, says billionaire Elon Musk. (AP/File)

Updated 02 March 2024

Elon Musk is suing OpenAI and its CEO Sam Altman over what he says is a betrayal of the ChatGPT maker’s founding aims of benefiting humanity rather than pursuing profits.
In a lawsuit filed at San Francisco Superior Court, billionaire Musk said that when he bankrolled OpenAI’s creation, he secured an agreement with Altman and Greg Brockman, the president, to keep the AI company as a nonprofit that would develop technology for the benefit of the public.
Under its founding agreement, OpenAI would also make its code open to the public instead of walling it off for any private company’s gains, the lawsuit says.
However, by embracing a close relationship with Microsoft, OpenAI and its top executives have set that pact “aflame” and are “perverting” the company’s mission, Musk alleges in the lawsuit.
OpenAI declined to comment on the lawsuit Friday.
“OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” the lawsuit filed Thursday says. “Under its new Board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity.”
AGI refers to artificial general intelligence: general-purpose AI systems that can perform as well as, or even better than, humans across a wide variety of tasks.
Musk is suing over breach of contract, breach of fiduciary duty and unfair business practices. He also wants an injunction to prevent anyone, including Microsoft, from benefiting from OpenAI’s technology.
Those claims are unlikely to succeed in court, but that might not be the point for Musk, who is getting his take and personal story on the record, said Anupam Chander, a law professor at Georgetown University.
“Partly there’s an assertion of Elon’s founding role in OpenAI and generative AI technology, in particular his claim that he named OpenAI, hired the key scientist and was the primary funder of its early years,” Chander said. “In some sense it’s a lawsuit that tries to establish his own place in the history of generative AI.”
Musk was an early investor in OpenAI when it was founded in 2015 and co-chaired its board alongside Altman. In the lawsuit, he said he invested “tens of millions” of dollars in the nonprofit research laboratory.
Musk resigned from the board in early 2018 in a move that OpenAI said at the time would prevent conflicts of interest as the Tesla CEO was recruiting AI talent to build self-driving technology at the electric car maker. “This will eliminate a potential future conflict for Elon,” OpenAI said in a February 2018 blog post. Musk has since said he also had disagreements with the startup’s direction, but he continued to donate to the nonprofit.
Later that year, OpenAI filed papers to incorporate a for-profit arm and began shifting most of its workforce to that business, but retained a nonprofit board of directors that governed the company. Microsoft made its first $1 billion investment in the company in 2019 and, the next year, signed an agreement that gave the software giant exclusive rights to its AI models. That license is supposed to expire once OpenAI has achieved artificial general intelligence, the company has said.
ChatGPT-maker OpenAI is looking to fuse its artificial intelligence systems into the bodies of humanoid robots as part of a new deal with robotics startup Figure. (AP/File)
Its unveiling of ChatGPT in late 2022 brought worldwide fame to OpenAI and helped spark a race by tech companies to capitalize on the public’s fascination with the technology.
When the nonprofit board abruptly fired Altman as CEO late last year, for reasons that still haven’t been fully disclosed, it was Microsoft that helped drive the push that brought Altman back as CEO and led most of the old board to resign. Musk’s lawsuit alleged that those changes caused the checks and balances protecting the nonprofit mission to “collapse overnight.”
One of Musk’s claims is that the directors of the nonprofit have failed to uphold their obligations to follow its mission, but Dana Brakman Reiser, a professor at Brooklyn Law School, is skeptical that Musk has standing to bring that claim.
“It would be very worrisome if every person who cared about or donated to a charity could suddenly sue their directors and officers to say, ‘You’re not doing what I think is the right thing to run this nonprofit,’” she said. In general, only other directors or an attorney general, for example, could bring that type of suit, she said.
Even if Musk invested in the for-profit business, his complaint seems to be that the organization is making too much profit in contradiction to its mission, which includes making its technology publicly available.
“I care about nonprofits actually following the mission that they set out and not being captured for some kind of for-profit purpose. That is a real concern,” Brakman Reiser said. “Whether Elon Musk is the person to raise that claim, I’m less sure.”
Whatever the legal merits of the claims, a brewing courtroom fight between Musk and Altman could offer the public a peek into the internal debates and decision-making at OpenAI, though the company’s lawyers will likely fight to keep some of those documents confidential.
“The discovery will be epic,” posted venture capitalist Chamath Palihapitiya on Musk’s social media platform X on Friday. To which Musk replied in his only public commentary so far on the case: “Yes.”


‘AI is here, now what?’ Arab News unveils report on future of media ahead of Bridge Summit

Updated 07 December 2025

  • As the Bridge Summit opens in Abu Dhabi, Arab News releases a landmark report on how AI is transforming media in the MENA region
  • Based on a high-level roundtable at the Dubai Future Forum, the new report highlights both the opportunities and risks facing Arab media

DUBAI: As the Bridge Summit kicks off in Abu Dhabi on Monday, bringing together global leaders to explore the future of media, entertainment, and the creative economy, Arab News has launched a timely report on how artificial intelligence is transforming the media industry in the Middle East and beyond.

The report, produced by the Arab News Research and Studies Unit following a high-level roundtable at the Dubai Future Forum, captures the urgency and complexity of AI adoption in the media industry of the Middle East and North Africa region.

It explores how AI is transforming newsroom operations, redefining journalistic roles, and raising critical questions around credibility, accuracy, and trust amid rapid technological disruption.

AI is no longer an emerging trend in the Middle East — it is a central force reshaping economies, governance and public communication.

Journalists watch an introductory video by the 'artificial intelligence' anchor Fedha on the Twitter account of the Kuwait News service, in Kuwait City on April 9, 2023. (AFP file photo)

With AI projected to contribute $320 billion to the regional economy by 2030, including more than $135 billion to Saudi Arabia’s gross domestic product and nearly $96 billion to the UAE’s, governments and industries are racing to integrate it.

But, for the region’s news media, AI represents something deeper than economic potential: a direct challenge to the foundations of credibility, trust and fact-based reporting.

Those challenges set the stage for the roundtable hosted and moderated by Arab News’ Deputy Editor-in-Chief Noor Nugali in collaboration with the Dubai Future Foundation, where editors, media executives and tech specialists convened to confront an industry undergoing one of the most dramatic transformations in its history.

Arab News held a roundtable on the sidelines of the Dubai Future Forum. (AN photo)

The result is an exhaustive and insightful report that conveys both optimism and unease: AI is weaving itself into daily newsroom operations even as the guardrails needed to protect journalism from misinformation, bias and opacity remain dangerously underdeveloped.

“AI is here and it’s transforming our newsroom,” said Mina Al-Oraibi, editor in chief of the UAE’s leading daily The National, as she described how her team recently held a full-newsroom AI workshop to generate internal use cases.

“We got 26 ideas that we’re working through so people don’t feel this is something imposed,” she said. “They need to feel they’re ahead of the curve rather than being eaten up by it.”

Across the region, that curve is moving quickly. Globally, 81 percent of journalists now use AI tools in their work, and nearly half do so daily.

However, reporters admit they rely on these tools mostly to handle mundane, time-consuming tasks such as transcribing interviews, summarizing reports, and translating documents.

Nabeel Al-Khatib, general manager of Asharq News, explained how the shift has already redefined newsroom economics.

“A newsroom of 50 can now publish the equivalent of what 500 once could,” he said. However, although “machines will take over the production line,” he argued that “human oversight must remain to ensure accuracy, context and editorial standards.”

For many newsrooms, the advent of generative AI — machines creating new, original content — has created valuable efficiencies, freeing journalists to spend more time verifying and reporting, tasks no machine can yet replace.

US President Donald Trump is shown praying in this AI-generated image. Media experts worry that differentiating between true and fake pictures is becoming difficult. 

However, several speakers stressed that the value of AI depends entirely on how intentionally it is used.

“We believe it’s human first, human last,” said Nayla Tueni, editor in chief of Lebanese daily An-Nahar. “We need to always fact-check everything. But at the same time, we need to use all the tools.”

For Tueni, transformation is not optional. “I don’t think journalism will end,” she said. However, if outlets “don’t transform, they cannot continue because the world is transforming every second.”

Protecting revenue streams is also a concern. Elda Choucair, CEO of Omnicom Media Group MENA, said “the biggest danger is … if you don’t have content that you advertise around.”

The region’s audiences appear more comfortable with AI-enhanced content than those in Western markets. But even as opportunities expand, risks multiply. AI-generated misinformation has surged so dramatically that the World Economic Forum ranked it the top global short-term threat for the second year in a row.

A BBC-led audit of four major AI systems found that nearly half of AI-generated answers contained significant errors, fabricated details or incorrect sourcing.

This AI-generated image shows US President Donald Trump being arrested by the police. Media experts worry that differentiating between true and fake pictures is becoming difficult. 

“It’s already very difficult to differentiate between the (true) and the fake,” said Choucair. “We need to create awareness that sometimes, if you really want the truth, you’ve got to wait.”

At a time when 70 percent of global audiences say they struggle to trust online content, speakers warned that the misuse or undisclosed use of AI could deepen a crisis of confidence.

“The machine should be a slave to human beings,” advertising media mogul Pierre Choueiri said, adding: “This is where governments, or regulations, should come in.”

However, regulation in the region remains elusive. While Saudi Arabia has taken major steps, including the establishment of the Saudi Data & AI Authority and the Kingdom’s Generative AI Guidelines, efforts remain far from the comprehensive frameworks seen in Europe.

“It’s inconceivable that Arab consumers are left to face significant risks with no regulatory shield,” said media strategist and legal expert Mazen Hayek. He argued that the region needs its own protections, like the EU’s General Data Protection Regulation, to ensure transparency, safeguard data and hold AI providers accountable.

For Hayek and others, the deeper problem involves technological sovereignty. Nearly all of the AI platforms used in the Middle East today — from search engines to large language models — are built and controlled abroad, often trained on datasets that do not reflect the region’s linguistic, cultural or political realities.

“We live in a region that has zero control over the platforms and the technology that we consume,” Hayek said. “Someone needs to create a platform that empowers the region to create and distribute its own content.”

Julien Hawari, CEO of the emerging social media platform Million, said the main issue is integrity. “That has been a problem for as long as we can think of.”

Rashid Al-Marri, CEO of the Media Regulation Sector at the Dubai Media Council, explained that “there has to be that human element understanding (the content) and what’s happening and being able to come out and speak and get the truth out there.”

Saudi Arabia’s push toward sovereign AI infrastructure, including Public Investment Fund-backed HUMAIN and the $100 billion Project Transcendence, was cited as a step in the right direction. However, roundtable participants warned that unless the region accelerates these efforts, it risks ceding its information future to external algorithms and foreign companies.

The human-capital gap is equally pressing. Despite widespread adoption, most journalists using AI have received little or no training. Many rely on self-learning or online tutorials, and nearly eight in 10 work in newsrooms without formal AI policies.

This lack of structure has created an environment where AI is widely deployed but rarely governed.

For CAMB.AI co-founder Avneesh Prakash, the solution requires both precaution and empowerment. “Like any innovation, AI needs to be regulated,” he said. “Just as a car has an accelerator and a brake, AI must include a kill switch because it requires human judgment, human creativity and human resilience.”

Despite the risks, the discussion ended on a note of guarded optimism. Participants agreed that AI can help rebuild journalism for a digital era — but only if newsrooms combine innovation with rigorous editorial oversight, transparency and a renewed commitment to verification.

Mamoon Sbeih, regional president of advertising firm APCO, offered a clear warning of what lies ahead. AI, he said, “might help the journalism industry progress and redefine itself, or it might expedite its demise.”

For now, the region’s media leaders remain determined to pursue the first path — ensuring that even as machines play a growing role in production, the values that define journalism remain firmly, unmistakably human.