Google opposes Facebook-backed proposal for self-regulatory body in India - sources

Updated 11 August 2022

  • India wants a panel to review complaints about content decisions
  • Google says self-regulatory system sets bad precedent - sources

NEW DELHI: Google has grave reservations about developing a self-regulatory body for the social media sector in India to hear user complaints, though the proposal has support from Facebook and Twitter, sources with knowledge of the discussions told Reuters.
India in June proposed appointing a government panel to hear complaints from users about content moderation decisions, but has also said it is open to the idea of a self-regulatory body if the industry is willing.
The lack of consensus among the tech giants, however, increases the likelihood of a government panel being formed — a prospect that Meta Platforms Inc’s Facebook and Twitter are keen to avoid as they fear government and regulatory overreach in India, the sources said.
At a closed-door meeting this week, an executive from Alphabet Inc’s Google told other attendees the company was unconvinced about the merits of a self-regulatory body. The body would mean external reviews of decisions that could force Google to reinstate content, even if it violated Google’s internal policies, the executive was quoted as saying.
Such directives from a self-regulatory body could set a dangerous precedent, the sources also quoted the Google executive as saying.
The sources declined to be identified as the discussions were private.
In addition to Facebook, Twitter and Google, representatives from Snap Inc. and popular Indian social media platform ShareChat also attended the meeting. Together, the companies have hundreds of millions of users in India.
Snap and ShareChat also voiced concern about a self-regulatory system, saying the matter requires much more consultation including with civil society, the sources said.
Google said in a statement it had attended a preliminary meeting and is engaging with the industry and the government, adding that it was “exploring all options” for a “best possible solution.”
ShareChat and Facebook declined to comment. The other companies did not respond to Reuters requests for comment.

THORNY ISSUE
Self-regulatory bodies to police content in the social media sector are rare, though there have been instances of cooperation. In New Zealand, big tech companies have signed a code of practice aimed at reducing harmful content online.
Tension over social media content decisions has been a particularly thorny issue in India. Social media companies often receive takedown requests from the government or remove content proactively. Google’s YouTube, for example, removed 1.2 million videos in India in the first quarter of this year for violating its guidelines, more than in any other country.
India’s government is concerned that users whose content is taken down have no proper system to appeal those decisions, and that their only legal recourse is to go to court.
Twitter has faced backlash after it blocked accounts of influential Indians, including politicians, citing violation of its policies. Twitter also locked horns with the Indian government last year when it declined to comply fully with orders to take down accounts the government said spread misinformation.
An initial draft of the proposal for the self-regulatory body said the panel would have a retired judge or an experienced person from the field of technology as chairperson, as well as six other individuals, including some senior executives at social media companies.
The panel’s decisions would be “binding in nature,” stated the draft, which was seen by Reuters.
Western tech giants have for years been at odds with the Indian government, arguing that strict regulations are hurting their business and investment plans. The disagreements have also strained trade ties between New Delhi and Washington.
US industry lobby groups representing the tech giants worry that a government-appointed review panel could not act independently if New Delhi controls who sits on it.
The proposal for a government panel was open to public consultation until early July. No fixed date for implementation has been set.


Disinformation the new enemy in disaster zones, says Red Cross

Updated 05 March 2026

  • “Harmful information and dehumanizing narratives” undermine humanitarian aid and put the lives of aid workers at risk
  • Between 2020 and 2024, disasters affected nearly 700 million people, caused more than 105 million displacements and killed more than 270,000, while the number of people needing humanitarian aid more than doubled

GENEVA: The rise of disinformation is undermining humanitarian aid and putting lives at risk, while disasters are affecting ever more people, the Red Cross warned Thursday.
“Between 2020 and 2024, disasters affected nearly 700 million people, caused more than 105 million displacements, and claimed over 270,000 lives,” the International Federation of Red Cross and Red Crescent Societies said.
The number of people needing humanitarian assistance more than doubled in the same timeframe, the IFRC said in its World Disasters Report 2026.
But the world’s largest humanitarian network said that “harmful information and dehumanizing narratives” were increasingly undermining trust, putting the lives of aid workers at risk.
“In polarized and politically-charged contexts, humanitarian principles such as neutrality and impartiality are increasingly misunderstood, misrepresented or deliberately attacked online,” it said.
The IFRC has more than 17 million volunteers across 191 countries.
“In every crisis I have witnessed, information is as essential as food, water and shelter,” said the Geneva-based federation’s secretary general Jagan Chapagain.
“But when information is false, misleading or deliberately manipulated, it can deepen fear, obstruct humanitarian access and cost lives.”
He said harmful information was not a new phenomenon, but it was now moving “with unprecedented speed and reach.”
Chapagain said digital platforms were proving “fertile ground for lies.”
The IFRC report said the challenge nowadays was no longer about the availability of information but its reliability, noting that the production and spread of disinformation was easily amplified by artificial intelligence.

- ‘Life and death’ -

The report cited numerous recent examples of harmful information hampering crisis response.
During the 2024 floods in Valencia, false narratives online accused the Spanish Red Cross of diverting aid to migrants, which in turn fueled “xenophobic attacks on volunteers,” the IFRC said.
In South Sudan, rumors that humanitarian agencies were distributing poisoned food “caused people to avoid life-saving aid” and led to threats against Red Cross staff.
In Lebanon, false claims that volunteers were spreading Covid-19, favoring certain groups with aid and providing unsafe cholera vaccines eroded trust and endangered vulnerable communities, the IFRC said.
And in Bangladesh, during political unrest, volunteers faced “widespread accusations of inaction and political alignment,” leading to harassment and reputational damage, it added.
Similar events were registered by the IFRC in Sudan, Myanmar, Peru, the United States, New Zealand, Canada, Kenya and Bulgaria.
The report underlined that around 94 percent of disasters were handled by national authorities and local communities, without international intervention.
“However, while volunteers, local leaders and community media are often the most trusted messengers, they operate in increasingly hostile and polarized information environments,” the IFRC said.
The federation called on governments, tech firms, humanitarian agencies and local actors to recognize that reliable information “is a matter of life and death.”
“Without trust, people are less likely to prepare, seek help or follow life-saving guidance; with it, communities act together, absorb shocks and recover more effectively,” said Chapagain.
The organization urged technology platforms to prioritize authoritative information from trusted sources in crisis contexts, and transparently moderate harmful content.
And it said humanitarian agencies needed to make preparing to deal with disinformation “a core function” of their operations, with trained teams and analytics.