Rohingya refugees sue Facebook for $150 billion over Myanmar violence

Rohingya Muslim children refugees, who crossed over from Myanmar into Bangladesh, wait squashed against each other to receive food handouts at Thaingkhali refugee camp, Bangladesh on Oct. 21, 2017. (AP)
Updated 07 December 2021


  • Facebook has said it is protected from liability over content posted by users by a US Internet law known as Section 230, which holds that online platforms are not liable for content posted by third parties

CALIFORNIA: Rohingya refugees from Myanmar are suing Meta Platforms Inc, formerly known as Facebook, for $150 billion over allegations that the social media company did not take action against anti-Rohingya hate speech that contributed to violence.
A US class-action complaint, filed in California on Monday by law firms Edelson PC and Fields PLLC, argues that the company’s failures to police content and its platform’s design contributed to real-world violence faced by the Rohingya community. In a coordinated action, British lawyers also submitted a letter of notice to Facebook’s London office.
Facebook did not immediately respond to a Reuters request for comment about the lawsuit. The company has said it was “too slow to prevent misinformation and hate” in Myanmar and has said it has since taken steps to crack down on platform abuses in the region, including banning the military from Facebook and Instagram after the Feb. 1 coup.
Facebook has said it is protected from liability over content posted by users by a US Internet law known as Section 230, which holds that online platforms are not liable for content posted by third parties. The complaint says it seeks to apply Burmese law to the claims if Section 230 is raised as a defense.
Although US courts can apply foreign law to cases where the alleged harms and activity by companies took place in other countries, two legal experts interviewed by Reuters said they did not know of a successful precedent for foreign law being invoked in lawsuits against social media companies where Section 230 protections could apply.
Anupam Chander, a professor at Georgetown University Law Center, said that invoking Burmese law was not "inappropriate," but predicted the move was "unlikely to be successful," saying: "It would be odd for Congress to have foreclosed actions under US law but permitted them to proceed under foreign law."
More than 730,000 Rohingya Muslims fled Myanmar’s Rakhine state in August 2017 after a military crackdown that refugees said included mass killings and rape. Rights groups documented killings of civilians and burning of villages.
Myanmar authorities say they were battling an insurgency and deny carrying out systematic atrocities.
In 2018, UN human rights investigators said the use of Facebook had played a key role in spreading hate speech that fueled the violence. A Reuters investigation that year (https://www.reuters.com/investigates/special-report/myanmar-facebook-hate), cited in the US complaint, found more than 1,000 examples of posts, comments and images attacking the Rohingya and other Muslims on Facebook.
The International Criminal Court has opened a case into the accusations of crimes in the region. In September, a US federal judge ordered Facebook to release records of accounts connected to anti-Rohingya violence in Myanmar that the social media giant had shut down.
The new class-action lawsuit references claims by Facebook whistleblower Frances Haugen, who this year leaked a cache of internal documents (https://www.reuters.com/technology/facebook-whistleblower-says-transpare...), that the company does not police abusive content in countries where such speech is likely to cause the most harm.
The complaint also cites recent media reports, including a Reuters report last month (https://www.reuters.com/world/asia-pacific/information-combat-inside-fig...), that Myanmar's military was using fake social media accounts to engage in what is widely referred to in the military as "information combat."


AI fuels cyber threats but also offers new defenses, panel tells WEF

Updated 21 January 2026

  • Cyber threats surged in 2025, with Distributed Denial of Service attack records shattered 25 times and a staggering 1,400 percent rise in incidents involving AI-powered bots impersonating humans
  • Experts agreed that while AI has accelerated new and sophisticated threats, with phishing and impersonation on the rise, it has also improved solutions

DUBAI: Artificial intelligence is making cyberattacks more sophisticated and widespread, but it is also enhancing digital defenses, experts told the World Economic Forum on Wednesday, as they stressed the need for zero-trust systems and robust AI frameworks to reduce vulnerabilities.

Cyber threats surged in 2025, with Distributed Denial of Service (DDoS) attack records shattered 25 times and a staggering 1,400 percent rise in incidents involving AI-powered bots impersonating humans.

Experts agreed that while AI has accelerated new and sophisticated threats, with phishing and impersonation on the rise, it has also improved solutions.

Michelle Zatlyn, co-founder, president and COO of Cloudflare, pointed to modern solutions organizations can invest in. However, she warned of the digital divide between major financial institutions with robust cybersecurity measures and smaller organizations struggling with outdated security solutions.

This divide, she said, necessitates heightened awareness and adaptation to modern security technologies to prevent crises, especially during vulnerable times like weekends.

The panelists stressed international collaboration and intelligence sharing between government agencies, law enforcement and the private sector as the way to tackle cross-border threats and build more resilient societies.

Catherine de Bolle, executive director of Europol, said AI has transformed policing to the point where traditional methods no longer work. She pointed to Europol's extensive efforts to deepen collaboration with the private sector on tools to protect the digital ecosystem, enhance crypto tracing and strengthen financial security.

De Bolle said AI had enhanced the capabilities and outreach of organized crime groups “because it facilitates the business model where you only need a computer and some people who are technically schooled.”

“We predict that in the future, digital crime frauds will be much easier as you gain a lot of money and reach more people without the need of an infrastructure,” she added. Collaboration with the private sector, she said, helps ensure a secure ecosystem that maintains user trust in online platforms.

However, Michael Miebach, CEO of Mastercard, said that while AI can help defend against cyberattacks, trust must first be built among people if these technologies are to fulfill their promise of driving prosperity and growth.

“If we don’t build a trusted layer around these technologies, people will not use it,” he said, pointing out that cyber threats have impacted the geopolitical, societal and corporate aspects of life.

Hatem Dowidar, group CEO of e&, called for more intelligent networks to deploy AI agents that detect and isolate malicious behavior early on to protect digital ecosystems from highly disruptive cyberattacks.

“So you are in some sense more cognizant of malicious hardware being embedded in your system,” he said. However, he warned of the vulnerabilities created as more companies deploy agentic AI, which could expose their networks, and urged the building of zero-trust systems to keep new threats from entering through these technologies.

He also stressed the need to establish guardrails to monitor AI agents because they are “programmed in plain language and it’s very easy that the programming goes out of context.”

“We never could have relied 100 percent on a human agent to work if there is no supervision and that will hold true for AI,” said Dowidar.

Another challenge the panelists highlighted was the blurred lines between state and non-state actors, with states potentially using organized crime to execute cyber operations.

Europol’s de Bolle said this brings new challenges for traditional policing and necessitates joint efforts across intelligence, defense, and law enforcement sectors.

“State actors are using criminal groups for their own purposes to launch DDoS attacks,” she said, adding that the danger comes from the fact that “states can hide behind and criminals can hide after the state and they don’t have to make the investment because the structure is already there.”

She said such developments make it necessary to rethink the future of defense, policing and intelligence services, with law enforcement working closely with the private sector to tackle such dangers while respecting the boundaries of different agencies: “If we do not put the information and intelligence together to tackle this, we will never win the battle.”

Dowidar said information sharing needed to happen on national and international security levels. Nationally, there should be an entity that coordinates between the police, intelligence, network operators and the critical infrastructure companies.

Internationally, there should be security centers that immediately inform other like-minded organizations around the world of any new threat, along with sharing how the problem was solved or whether help is needed from other experts.

Meanwhile, de Bolle said it was the responsibility of the private and public sectors to build societal resilience, boost digital literacy, revamp the education system and develop the critical mindset of the young generation who will use these tools in the future.

Cloudflare’s Zatlyn urged business leaders to understand the basics of new technologies, beyond only relying on technical teams, to keep revenue flowing and minimize risks facing their networks.

She also stressed that CEOs and organizations must consider AI agents as an “extension” of their teams.

“Organizations are concerned that their data will leak with the use of new technologies, but this depends how to train the agents. These are all stoppable issues,” said Zatlyn.