Facebook to apply state media labels on Russian, Chinese outlets

Facebook has stepped up its cybersecurity defenses and imposed greater transparency requirements for pages and ads on its platforms. (AFP)
Updated 05 June 2020


  • Facebook will not label any US-based news organizations
  • Social media giant said even US government-run outlets have editorial independence

SAN FRANCISCO: Facebook will start labeling Russian, Chinese and other state-controlled media organizations, and later this summer will block any ads from such outlets that target US users, it said on Thursday.
The world’s biggest social network will apply the label to Russia’s Sputnik, Iran’s Press TV and China’s Xinhua News, according to a partial list Facebook provided. The company will apply the label to about 200 pages at the outset.
Facebook will not label any US-based news organizations, as it determined that even US government-run outlets have editorial independence, Nathaniel Gleicher, Facebook’s head of cybersecurity policy, said in an interview.
Facebook, which has acknowledged its failure to stop Russian use of its platforms to interfere in the 2016 US presidential election, has since stepped up its defenses and imposed greater transparency requirements for pages and ads on its platforms.
The company announced plans last year to create a state media label, but is introducing it amid criticism over its hands-off treatment of misleading and racially charged posts by US President Donald Trump.
The new measure comes just months ahead of the November US presidential election.
Under the move, Facebook will not use the label for media outlets affiliated with individual political figures or parties, which Gleicher said could push “boundaries that are very, very slippery.”
“What we want to do here is start with the most critical case,” he said.
Chinese foreign ministry spokesman Geng Shuang told reporters during a daily briefing in Beijing on Friday that social media companies should not selectively create obstacles for media agencies.
“We hope that the relevant social media platform can put aside the ideological bias and hold an open and accepting attitude toward each country’s media role,” he said.
Facebook is not the first company to take such action.
YouTube, owned by Alphabet Inc’s Google, in 2018 started identifying video channels that predominantly carry news items and are funded by governments. But critics charge YouTube has failed to label some state news outlets, allowing them to earn ad revenue from videos with misinformation and propaganda.
In a blog post, Facebook said its label would appear on pages globally, as well as on News Feed posts within the United States.
Facebook also said it would ban US-targeted ads from state-controlled entities “out of an abundance of caution” ahead of the November presidential election. Elsewhere, the ads will receive a label.


Disinformation the new enemy in disaster zones, says Red Cross

Updated 05 March 2026


  • “Harmful information and dehumanizing narratives” undermine humanitarian aid and put the lives of aid workers at risk
  • Between 2020 and 2024, disasters affected nearly 700 million people, displaced over 105 million, and killed more than 270,000 — more than doubling the number of people in need of humanitarian aid

GENEVA: The rise of disinformation is undermining humanitarian aid and putting lives at risk, while disasters are affecting ever more people, the Red Cross warned Thursday.
“Between 2020 and 2024, disasters affected nearly 700 million people, caused more than 105 million displacements, and claimed over 270,000 lives,” the International Federation of Red Cross and Red Crescent Societies said.
The number of people needing humanitarian assistance more than doubled in the same timeframe, the IFRC said in its World Disasters Report 2026.
But the world’s largest humanitarian network said that “harmful information and dehumanizing narratives” were increasingly undermining trust and putting the lives of aid workers at risk.
“In polarized and politically charged contexts, humanitarian principles such as neutrality and impartiality are increasingly misunderstood, misrepresented or deliberately attacked online,” it said.
The IFRC has more than 17 million volunteers in 191 countries.
“In every crisis I have witnessed, information is as essential as food, water and shelter,” said the Geneva-based federation’s secretary general Jagan Chapagain.
“But when information is false, misleading or deliberately manipulated, it can deepen fear, obstruct humanitarian access and cost lives.”
He said harmful information was not a new phenomenon, but it was now moving “with unprecedented speed and reach.”
Chapagain said digital platforms were proving “fertile ground for lies.”
The IFRC report said the challenge nowadays was no longer about the availability of information but its reliability, noting that the production and spread of disinformation was easily amplified by artificial intelligence.

- ‘Life and death’ -

The report cited numerous recent examples of harmful information hampering crisis response.
During the 2024 floods in Valencia, false narratives online accused the Spanish Red Cross of diverting aid to migrants, which in turn fueled “xenophobic attacks on volunteers,” the IFRC said.
In South Sudan, rumors that humanitarian agencies were distributing poisoned food “caused people to avoid life-saving aid” and led to threats against Red Cross staff.
In Lebanon, false claims that volunteers were spreading Covid-19, favoring certain groups with aid and providing unsafe cholera vaccines eroded trust and endangered vulnerable communities, the IFRC said.
And in Bangladesh, during political unrest, volunteers faced “widespread accusations of inaction and political alignment,” leading to harassment and reputational damage, it added.
Similar events were registered by the IFRC in Sudan, Myanmar, Peru, the United States, New Zealand, Canada, Kenya and Bulgaria.
The report underlined that around 94 percent of disasters were handled by national authorities and local communities, without international interventions.
“However, while volunteers, local leaders and community media are often the most trusted messengers, they operate in increasingly hostile and polarized information environments,” the IFRC said.
The federation called on governments, tech firms, humanitarian agencies and local actors to recognize that reliable information “is a matter of life and death.”
“Without trust, people are less likely to prepare, seek help or follow life-saving guidance; with it, communities act together, absorb shocks and recover more effectively,” said Chapagain.
The organization urged technology platforms to prioritize authoritative information from trusted sources in crisis contexts, and transparently moderate harmful content.
And it said humanitarian agencies needed to make preparing to deal with disinformation “a core function” of their operations, with trained teams and analytics.