OpenAI is working on an X-like social media network, the Verge reports

Updated 16 April 2025


  • The project remains in its early stages, and it is not yet decided whether it will be released as a standalone application or integrated into ChatGPT, the report says
  • Potential move could escalate tensions between OpenAI CEO Sam Altman and X’s owner Elon Musk

LONDON: OpenAI is working on its own X-like social media network, the Verge reported on Tuesday, citing multiple sources familiar with the matter.
There is an internal prototype focused on ChatGPT’s image generation that has a social feed, the report said.
OpenAI CEO Sam Altman has been privately asking outsiders for feedback about the project, which is still in early stages, according to the Verge. It is unclear whether the company plans to release the social network as a separate application or integrate it into ChatGPT, the report said.
The company did not immediately respond to a Reuters request for comment.
The potential move could escalate tensions between Altman and billionaire Elon Musk — the owner of X and an OpenAI co-founder who left the startup in 2018 before it emerged as a front-runner in the generative artificial intelligence race.
The feud has intensified in recent months. In February, a consortium of investors led by Musk made an unsolicited $97.4 billion bid for the control of OpenAI, only to be rejected by Altman with a swift “no thank you.”
Musk had sued the ChatGPT maker and Altman last year, alleging they had abandoned OpenAI’s original goal of developing AI for the benefit of humanity — not corporate gain.
OpenAI counter-sued Musk earlier this month, accusing him of a pattern of harassment and attempting to derail its shift to a for-profit model. The two parties are set to begin a jury trial in spring next year.
An OpenAI social network could also put the company in direct competition with Facebook-owner Meta, which is reportedly working on a standalone Meta AI service. In February, Altman responded on X to media reports about Meta’s plans, saying “ok fine maybe we’ll do a social app.”
Both Meta and X have access to a massive amount of data — public content posted by users on their social media platforms — that they train their AI models on.


Disinformation the new enemy in disaster zones, says Red Cross

Updated 05 March 2026


  • “Harmful information and dehumanizing narratives” undermine humanitarian aid and put the lives of aid workers at risk
  • Between 2020 and 2024, disasters affected nearly 700 million people, displaced over 105 million, and killed more than 270,000, more than doubling the number of people in need of humanitarian aid

GENEVA: The rise of disinformation is undermining humanitarian aid and putting lives at risk, while disasters are affecting ever more people, the Red Cross warned Thursday.
“Between 2020 and 2024, disasters affected nearly 700 million people, caused more than 105 million displacements, and claimed over 270,000 lives,” the International Federation of Red Cross and Red Crescent Societies said.
The number of people needing humanitarian assistance more than doubled in the same timeframe, the IFRC said in its World Disasters Report 2026.
But the world’s largest humanitarian network said that “harmful information and dehumanizing narratives” were increasingly undermining trust, putting the lives of aid workers at risk.
“In polarized and politically charged contexts, humanitarian principles such as neutrality and impartiality are increasingly misunderstood, misrepresented or deliberately attacked online,” it said.
The IFRC has more than 17 million volunteers across 191 countries.
“In every crisis I have witnessed, information is as essential as food, water and shelter,” said the Geneva-based federation’s secretary general Jagan Chapagain.
“But when information is false, misleading or deliberately manipulated, it can deepen fear, obstruct humanitarian access and cost lives.”
He said harmful information was not a new phenomenon, but it was now moving “with unprecedented speed and reach.”
Chapagain said digital platforms were proving “fertile ground for lies.”
The IFRC report said the challenge nowadays was no longer about the availability of information but its reliability, noting that the production and spread of disinformation was easily amplified by artificial intelligence.

- ‘Life and death’ -

The report cited numerous recent examples of harmful information hampering crisis response.
During the 2024 floods in Valencia, false narratives online accused the Spanish Red Cross of diverting aid to migrants, which in turn fueled “xenophobic attacks on volunteers,” the IFRC said.
In South Sudan, rumors that humanitarian agencies were distributing poisoned food “caused people to avoid life-saving aid” and led to threats against Red Cross staff.
In Lebanon, false claims that volunteers were spreading Covid-19, favoring certain groups with aid and providing unsafe cholera vaccines eroded trust and endangered vulnerable communities, the IFRC said.
And in Bangladesh, during political unrest, volunteers faced “widespread accusations of inaction and political alignment,” leading to harassment and reputational damage, it added.
Similar events were registered by the IFRC in Sudan, Myanmar, Peru, the United States, New Zealand, Canada, Kenya and Bulgaria.
The report underlined that around 94 percent of disasters were handled by national authorities and local communities, without international intervention.
“However, while volunteers, local leaders and community media are often the most trusted messengers, they operate in increasingly hostile and polarized information environments,” the IFRC said.
The federation called on governments, tech firms, humanitarian agencies and local actors to recognize that reliable information “is a matter of life and death.”
“Without trust, people are less likely to prepare, seek help or follow life-saving guidance; with it, communities act together, absorb shocks and recover more effectively,” said Chapagain.
The organization urged technology platforms to prioritize authoritative information from trusted sources in crisis contexts, and transparently moderate harmful content.
And it said humanitarian agencies needed to make preparing to deal with disinformation “a core function” of their operations, with trained teams and analytics.