Rohingya seek reparations from Facebook for role in massacre

Updated 29 September 2022

  • “Meta — through its dangerous algorithms and its relentless pursuit of profit — substantially contributed to the serious human rights violations perpetrated against the Rohingya,” the report says

With roosters crowing in the background as he speaks from the crowded refugee camp in Bangladesh that’s been his home since 2017, Maung Sawyeddollah, 21, describes what happened when violent hate speech and disinformation targeting the Rohingya minority in Myanmar began to spread on Facebook.
“We were good with most of the people there. But some very narrow-minded and very nationalist types escalated hate against Rohingya on Facebook,” he said. “And the people who were good, in close communication with Rohingya, changed their mind against Rohingya and it turned to hate.”
For years, Facebook, now called Meta Platforms Inc., pushed the narrative that it was a neutral platform in Myanmar that was misused by malicious people, and that despite its efforts to remove violent and hateful material, it unfortunately fell short. That narrative echoes its response to the role it has played in other conflicts around the world, whether the 2020 election in the US or hate speech in India.
But a new and comprehensive report by Amnesty International states that Facebook’s preferred narrative is false. The platform, Amnesty says, wasn’t merely a passive site with insufficient content moderation. Instead, Meta’s algorithms “proactively amplified and promoted content” on Facebook, which incited violent hatred against the Rohingya beginning as early as 2012.
Despite years of warnings, Amnesty found, the company not only failed to remove violent hate speech and disinformation against the Rohingya, it actively spread and amplified it until it culminated in the 2017 massacre. The timing coincided with the rising popularity of Facebook in Myanmar, where for many people it served as their only connection to the online world. That effectively made Facebook the Internet for much of Myanmar’s population.
More than 700,000 Rohingya fled into neighboring Bangladesh that year. Myanmar security forces were accused of mass rapes, killings and torching thousands of homes owned by Rohingya.
“Meta — through its dangerous algorithms and its relentless pursuit of profit — substantially contributed to the serious human rights violations perpetrated against the Rohingya,” the report says.
A spokesperson for Meta declined to answer questions about the Amnesty report. In a statement, the company said it “stands in solidarity with the international community and supports efforts to hold the Tatmadaw accountable for its crimes against the Rohingya people.”
“Our safety and integrity work in Myanmar remains guided by feedback from local civil society organizations and international institutions, including the UN Fact-Finding Mission on Myanmar; the Human Rights Impact Assessment we commissioned in 2018; as well as our ongoing human rights risk management,” Rafael Frankel, director of public policy for emerging markets, Meta Asia-Pacific, said in a statement.
Like Sawyeddollah, who is quoted in the Amnesty report and spoke with the AP on Tuesday, most of the people who fled Myanmar — about 80 percent of the Rohingya living in Myanmar’s western state of Rakhine at the time — are still staying in refugee camps. And they are asking Meta to pay reparations for its role in the violent repression of Rohingya Muslims in Myanmar, which the US declared a genocide earlier this year.
Amnesty’s report, out Wednesday, is based on interviews with Rohingya refugees, former Meta staff, academics, activists and others. It also relied on documents disclosed to Congress last year by whistleblower Frances Haugen, a former Facebook data scientist. It notes that digital rights activists say Meta has improved its civil society engagement and some aspects of its content moderation practices in Myanmar in recent years. In January 2021, after a violent coup overthrew the government, it banned the country’s military from its platform.
But critics, including some of Facebook’s own employees, have long maintained such an approach will never truly work. It means Meta is playing whack-a-mole trying to remove harmful material while its algorithms, designed to push “engaging” content that’s more likely to get people riled up, essentially work against it.
“These algorithms are really dangerous to our human rights. And what happened to the Rohingya and Facebook’s role in that specific conflict risks happening again, in many different contexts across the world,” said Pat de Brún, researcher and adviser on artificial intelligence and human rights at Amnesty.
“The company has shown itself completely unwilling or incapable of resolving the root causes of its human rights impact.”
After the UN’s Independent International Fact-Finding Mission on Myanmar highlighted the “significant” role Facebook played in the atrocities perpetrated against the Rohingya, Meta admitted in 2018 that “we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence.”
In the following years, the company “touted certain improvements in its community engagement and content moderation practices in Myanmar,” Amnesty said, adding that its report “finds that these measures have proven wholly inadequate.”
In 2020, for instance, three years after the violence in Myanmar killed thousands of Rohingya Muslims and displaced 700,000 more, Facebook investigated how a video by a leading anti-Rohingya hate figure, U Wirathu, was circulating on its site.
The probe revealed that over 70 percent of the video’s views came from “chaining” — that is, it was suggested to people who played a different video, showing what’s “up next.” Facebook users were not seeking out or searching for the video, but had it fed to them by the platform’s algorithms.
Wirathu had been banned from Facebook since 2018.
“Even a well-resourced approach to content moderation, in isolation, would likely not have sufficed to prevent and mitigate these algorithmic harms. This is because content moderation fails to address the root cause of Meta’s algorithmic amplification of harmful content,” Amnesty’s report says.
The Rohingya refugees are seeking unspecified reparations from the Menlo Park, California-based social media giant for its role in perpetuating genocide. Meta, which is the subject of twin lawsuits in the US and the UK seeking $150 billion for Rohingya refugees, has so far refused.
“We believe that the genocide against Rohingya was possible only because of Facebook,” Sawyeddollah said. “They communicated with each other to spread hate, they organized campaigns through Facebook. But Facebook was silent.”


Live video of man who set himself on fire outside court proves challenging for news organizations

Updated 20 April 2024

  • The man, who distributed pamphlets before dousing himself in an accelerant and setting himself on fire, was in critical condition
  • The incident tested how quickly the networks could react, and how they decided what would be too disturbing for their viewers to see

NEW YORK: Video cameras stationed outside the Manhattan courthouse where former President Donald Trump is on trial caught the gruesome scene Friday of a man who lit himself on fire and the aftermath as authorities tried to rescue him.

CNN, Fox News Channel and MSNBC were all on the air with reporters talking about the seating of a jury when the incident happened and other news agencies, including The Associated Press, were livestreaming from outside the courthouse. The man, who distributed pamphlets before dousing himself in an accelerant and setting himself on fire, was in critical condition.
The incident tested how quickly the networks could react, and how they decided what would be too disturbing for their viewers to see.
With narration from Laura Coates, CNN had the most extensive view of the scene. Coates at first incorrectly said it was a shooting situation, then narrated as the man was visible onscreen, enveloped in flames.
“You can smell burning flesh,” Coates, an anchor and CNN’s chief legal analyst, said as she stood at the scene with reporter Evan Perez.

The camera switched back and forth between Coates and what was happening in the park. Five minutes after the incident started, CNN posted the onscreen message “Warning: Graphic Content.”
Coates later said she couldn’t “overstate the emotional response of watching a human being engulfed in flames and to watch his body be lifted into a gurney.” She described it as an “emotional and unbelievably disturbing moment here.”
Fox’s cameras caught the scene briefly as reporter Eric Shawn talked, then the network switched to a courtroom sketch of Trump on trial.
“We deeply apologize for what has happened,” Shawn said.
On MSNBC, reporter Yasmin Vossoughian narrated the scene. The network showed smoke in the park, but no footage in which the body was visible.
“I could see the outline of his body inside the flames,” Vossoughian said, “which was so terrifying to see. As he went to the ground his knees hit the ground first.”
The AP had a camera with an unnarrated live shot stationed outside the courthouse, shown on YouTube and APNews.com. The cameras caught an extensive view, with the man lighting himself afire and later writhing on the ground before a police officer tried to douse the flames with a jacket.
The AP later removed its live feed from its YouTube channel and replaced it with a new one because of the graphic nature of the content.
The news agency distributed carefully edited clips to its video clients — not showing the moment the man lit himself on fire, for example, said executive producer Tom Williams.


Russian war correspondent for Izvestia killed in Ukraine

Updated 20 April 2024

  • Izvestia said Semyon Eremin, 42, died of wounds from a drone attack in Zaporizhzhia region
  • Eremin had reported for the Russian daily from the hottest battles in Ukraine during the 25-month-old war

Semyon Eremin, a war correspondent for the Russian daily Izvestia, was killed on Friday in a drone attack in southeastern Ukraine, the daily said.

Izvestia said Eremin, 42, died of wounds suffered when a drone made a second pass over the area where he was reporting in Zaporizhzhia region.
Izvestia said Eremin had sent reports from many of the hottest battles in Ukraine’s eastern regions during the 25-month-old war, including Mariupol, besieged by Russian troops for nearly three months in 2022.
He had also reported from Maryinka and Vuhledar, towns at the center of many months of heavy fighting.


WhatsApp being used to target Palestinians through Israel’s Lavender AI system

Updated 20 April 2024

  • Targets’ selection based on membership in certain WhatsApp groups, new report reveals
  • Accusation raises questions about app’s privacy and encryption claims

LONDON: WhatsApp is allegedly being used to target Palestinians through Israel’s contentious artificial intelligence system, Lavender, which has been linked to the deaths of Palestinian civilians in Gaza, recent reports have revealed.

Earlier this month, Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call published a report by journalist Yuval Abraham, exposing the Israeli army’s use of an AI system capable of identifying targets associated with Hamas or Palestinian Islamic Jihad.

This revelation, corroborated by six Israeli intelligence officers involved in the project, has sparked international outrage, as it suggested Lavender has been used by the military to target and eliminate suspected militants, often resulting in civilian casualties.

In a recent blog post, software engineer and activist Paul Biggar highlighted Lavender’s reliance on WhatsApp.

He pointed out how membership in a WhatsApp group containing a suspected militant can influence Lavender’s identification process, highlighting the pivotal role messaging platforms play in supporting AI targeting systems like Lavender.

“A little-discussed detail in the Lavender AI article is that Israel is killing people based on being in the same WhatsApp group as a suspected militant,” Biggar wrote. “There’s a lot wrong with this.”

He explained that users often find themselves in groups with strangers or acquaintances.

Biggar also suggested that WhatsApp’s parent company, Meta, may be complicit, whether knowingly or unknowingly, in these operations.

He accused Meta of potentially violating international humanitarian law and its own commitments to human rights, raising questions about the privacy and encryption claims of WhatsApp’s messaging service.

The revelation is the latest in a series of perceived attempts by Meta to silence pro-Palestinian voices.

Since before the beginning of the conflict, the Menlo Park giant has faced accusations of double standards favoring Israel.

In February, the Guardian revealed that Meta was considering the expansion of its hate speech policy to the term “Zionist.”

More recently, Meta quietly introduced a new feature on Instagram that automatically limits users’ exposure to what it deems “political” content, a decision criticized by experts as a means of systematically censoring pro-Palestinian content.

Responding to requests for comment, a WhatsApp spokesperson said that the company could not verify the accuracy of the report but assured that “WhatsApp has no backdoors and does not provide bulk information to any government.”


Eastern European mercenaries suspected of attacking Iranian journalist Pouria Zeraati

Updated 19 April 2024

  • UK security services believe criminal proxies with links to Tehran carried out London knife attack

LONDON: Police said on Friday that a group of Eastern European mercenaries is suspected to have carried out the knife attack on Iranian journalist Pouria Zeraati in late March.

Zeraati was stabbed repeatedly by three men in an attack outside his south London home.

The Iran International presenter lost a significant amount of blood and was hospitalized for several days. He has since returned to work, but is now living in a secure location.

Iran International and its staff have faced repeated threats, believed to be linked to the Iranian regime, which designated the broadcaster as a terrorist organization for its coverage of the 2022 protests.

Iran’s charge d’affaires, Seyed Mehdi Hosseini Matin, denied any government involvement in the attack on Zeraati.

Investigators revealed that the suspects fled the UK immediately after the incident, with reports suggesting they traveled to Heathrow Airport before boarding commercial flights to different destinations.

Police are pursuing leads in Albania as part of their investigation.

Counterterrorism units and Britain’s security services, which are leading the inquiry, believe that the attack is another instance of the Iranian regime employing criminal proxies to target its critics on foreign soil.

This method allows Tehran to maintain plausible deniability and avoids raising suspicions when suspects enter the country.

Zeraati was attacked on March 29 as he left his home to travel to work. His weekly show serves as a source of impartial and uncensored news for many Iranians at home and abroad.

In an interview with BBC Radio 4’s “Today” program this week, Zeraati said that while he is physically “much better,” mental recovery from the assault “will take time.”


Court orders release of prominent Palestinian professor suspected of incitement

Updated 19 April 2024

  • Nadera Shalhoub-Kevorkian was under investigation after questioning Hamas atrocities, criticizing Israel
  • Insufficient justification for arrest, says court
  • Detention part of a broader campaign, says lawyer

LONDON: Prominent Hebrew University of Jerusalem professor Nadera Shalhoub-Kevorkian was released on Friday after a court rejected the police’s grounds for her continued detention.

The criminologist and law professor was arrested the previous day on suspicion of incitement. She had been under investigation for remarks regarding the Oct. 7 attacks by Hamas and for saying Israelis were committing “genocidal crimes” in the Gaza Strip and should fear the consequences.

On Friday, the court dismissed a police request to extend her remand, citing insufficient justification for the arrest, according to Hebrew media reports.

Protesters gathered outside the courthouse to demonstrate against Shalhoub-Kevorkian’s arrest.

Israeli Channel 12, which first reported the news, did not specify where Shalhoub-Kevorkian was arrested, but her lawyer later confirmed she was apprehended at her home in the Armenian Quarter of Jerusalem.

“She’s not been in good health recently and was arrested in her home,” Alaa Mahajna said. “Police searched the house and seized her computer and cellphone, [Palestinian] poetry books and work-related papers.”

Mahajna described Shalhoub-Kevorkian’s arrest as part of a broader campaign against her, which has included numerous death threats and threats of violence.

The professor was suspended by her university last month after calling for the abolition of Zionism and suggesting that accounts of sexual assault during the Hamas-led attacks on Israel were fabricated.

The suspension was initially criticized by the university community as a blow to academic freedom in Israel. However, the decision was later reversed following an apology from Shalhoub-Kevorkian and an admission that sexual assaults took place.

Since hostilities began last year, numerous dissenting voices in Israel have faced arrest for expressing solidarity with victims of the bombardment in Gaza.

In October, well-known ultra-Orthodox Israeli journalist Israel Frey was forced into hiding following a violent attack on his home.

Bayan Khateeb, a student at the Technion-Israel Institute of Technology, was arrested last year for incitement after posting an Instagram story showing the preparation of a popular spicy egg dish with the caption: “We will soon be eating the victory shakshuka.”