Facebook dithered in curbing divisive user content in India

Facebook saw India as one of the most ‘at risk countries’ in the world and identified both Hindi and Bengali languages as priorities for ‘automation on violating hostile speech.’ (AFP)
Updated 24 October 2021

  • Communal and religious tensions in India have a history of boiling over on social media and stoking violence
  • Facebook has become increasingly important in politics, and India is no different

NEW DELHI: Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as the Internet giant’s own employees cast doubt over its motivations and interests.
Ranging from research produced as recently as March of this year to company memos dating back to 2019, the internal documents on India highlight Facebook’s constant struggle to quash abusive content on its platforms in the world’s biggest democracy and the company’s largest growth market. Communal and religious tensions in India have a history of boiling over on social media and stoking violence.
The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address the issues. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party are involved.
Across the world, Facebook has become increasingly important in politics, and India is no different.
Modi has been credited for leveraging the platform to his party’s advantage during elections, and reporting from The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP. Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at the Facebook headquarters.
The leaked documents include a trove of internal company reports on hate speech and misinformation in India that in some cases appeared to have been intensified by its own “recommended” feature and algorithms. They also include the company staffers’ concerns over the mishandling of these issues and their discontent over the viral “malcontent” on the platform.
According to the documents, Facebook saw India as one of the most “at risk countries” in the world and identified both Hindi and Bengali languages as priorities for “automation on violating hostile speech.” Yet, Facebook didn’t have enough local language moderators or content-flagging in place to stop misinformation that at times led to real-world violence.
In a statement to the AP, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali” which “reduced the amount of hate speech that people see by half” in 2021.
“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said.
This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions were obtained by a consortium of news organizations, including the AP.
Back in February 2019 and ahead of a general election when concerns of misinformation were running high, a Facebook employee wanted to understand what a new user in India saw on their news feed if all they did was follow pages and groups solely recommended by the platform itself.
The employee created a test user account and kept it live for three weeks, a period during which an extraordinary event shook India — a militant attack in disputed Kashmir had killed over 40 Indian soldiers, bringing the country close to war with rival Pakistan.
In the note, titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalistic Messages,” the employee whose name is redacted said they were “shocked” by the content flooding the news feed. The person described the content as having “become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”
Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.
The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.
One included a man holding the bloodied head of another man covered in a Pakistani flag, with an Indian flag partially covering it. The platform’s “Popular Across Facebook” feature showed a slew of unverified content related to the retaliatory Indian strikes into Pakistan after the bombings, including an image of a napalm bomb from a video game clip debunked by one of Facebook’s fact-check partners.
“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.
The report sparked deep concerns over what such divisive content could lead to in the real world, where local news outlets at the time were reporting on Kashmiris being attacked in the fallout.
“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.
The memo, circulated with other employees, did not answer that question. But it did expose how the platform’s own algorithms or default settings played a part in producing such objectionable content. The employee noted that there were clear “blind spots,” particularly in “local language content.” They said they hoped these findings would start conversations on how to avoid such “integrity harms,” especially for those who “differ significantly” from the typical US user.
Even though the research was conducted during three weeks that weren’t an average representation, the researcher acknowledged that it did show how such “unmoderated” and problematic content “could totally take over” during “a major crisis event.”
The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”
“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages,” the spokesperson said.


EU bans 4 more Russian media outlets from broadcasting in the bloc, citing disinformation

Updated 18 May 2024

  • The EU has already suspended Russia Today and Sputnik among several other outlets since February 2022

BRUSSELS: The European Union on Friday banned four more Russian media outlets from broadcasting in the 27-nation bloc for what it calls the spread of propaganda about the invasion of Ukraine and disinformation as the EU heads into parliamentary elections in three weeks.
The latest batch of broadcasters consists of Voice of Europe, RIA Novosti, Izvestia and Rossiyskaya Gazeta, all of which the EU says are under the control of the Kremlin. It said in a statement that the four are in particular targeting “European political parties, especially during election periods.”
Belgium last month opened an investigation into suspected Russian interference in June’s Europe-wide elections, saying its intelligence service had confirmed the existence of a network trying to undermine support for Ukraine.
The Czech government has imposed sanctions on a number of people after a pro-Russian influence operation was uncovered there. They are alleged to have approached members of the European Parliament and offered them money to promote Russian propaganda.
Since the war started in February 2022, the EU has already suspended Russia Today and Sputnik, among several other outlets.

Israeli soldiers post abusive videos despite army’s pledge to act: BBC analysis

Updated 17 May 2024

  • The BBC analyzed 45 photos and videos posted online by Israeli soldiers that showed Palestinian prisoners in the West Bank being abused and humiliated

LONDON: Israeli soldiers continue to post videos of abuse against Palestinian detainees despite a military pledge to take action against the perpetrators, analysis by the BBC has found.

The broadcaster said it had analyzed 45 photos and videos posted online by Israeli soldiers that showed Palestinian prisoners in the West Bank being abused and humiliated. Some were draped in Israeli flags. 

Experts say the footage and images, which showed Palestinians being stripped, beaten and blindfolded, could breach international law and amount to a war crime.

The Israel Defense Forces said some soldiers had been disciplined or suspended for “unacceptable behavior” but did not comment on the individual cases identified by the BBC.

The most recent investigation into social media misconduct by Israeli soldiers follows a previous inquiry in which BBC Verify confirmed Israeli soldiers had filmed Gazan detainees while beating them and then posted the material on social platforms.

The Israeli military has carried out arbitrary arrests across Gaza and the West Bank, including East Jerusalem, since the Hamas attack on Oct. 7. The number of Palestinian prisoners in the West Bank has since risen to more than 7,060, according to the Commission of Detainees’ Affairs and the Palestinian Prisoner Society.

Ori Givati, spokesperson for Breaking the Silence, a non-governmental organization for Israeli veterans working to expose wrongdoing in the IDF, told the BBC he was “far from shocked” to hear the misconduct was ongoing.

Blaming “current far-right political rhetoric in the country” for further encouraging the abuse, he added: “There are no repercussions. They [Israeli soldiers] get encouraged and supported by the highest ministers of the government.”

He said this played into a mindset already subscribed to by the military: “The culture in the military, when it comes to Palestinians, is that they are only targets. They are not human beings. This is how the military teaches you to behave.”

The BBC’s analysis found that the videos and photos it examined were posted by 11 soldiers of the Kfir Brigade, the largest infantry brigade in the IDF. None of them hid their identity.

The IDF did not respond when the BBC asked about the actions of the individual soldiers and whether they had been disciplined.

The BBC also attempted to contact the soldiers on social media. The organization was blocked by one, while none of the others responded.

Mark Ellis, executive director of the International Bar Association, urged an investigation into the incidents shown in the footage and called for the IDF to discipline those involved.

In response to the BBC’s investigation, the IDF said: “The IDF holds its soldiers to a professional standard … and investigates when behavior is not in line with the IDF’s values. In the event of unacceptable behavior, soldiers were disciplined and even suspended from reserve duty.

“Additionally, soldiers are instructed to avoid uploading footage of operational activities to social media networks.”

However, it did not acknowledge its pledge to act on BBC Verify’s earlier findings in Gaza, according to the broadcaster.


4 journalists killed in Gaza as death toll climbs above 100

Updated 17 May 2024

  • 104 Palestinian media workers reported dead, along with 3 Lebanese and 2 Israelis

LONDON: The Gaza Media Authority on Thursday said that four journalists had been killed in an Israeli airstrike, bringing the total number of journalists killed in the conflict to more than 100.

The victims were identified as Hail Al-Najjar, a video editor at the Al-Aqsa Media Network; Mahmoud Jahjouh, a photojournalist at the Palestine Post website; Moath Mustafa Al-Ghefari, a photojournalist at the Kanaan Land website and Palestinian Media Foundation; and Amina Mahmoud Hameed, a program presenter and editor at several media outlets, according to the Anadolu Agency.

The authority said the four were killed in an Israeli airstrike but did not provide additional details on the circumstances surrounding their deaths.

A total of 104 Palestinian journalists have been killed since the conflict began on Oct. 7. Two Israeli and three Lebanese media workers have also been killed.

The latest loss adds to the already heavy toll on media workers, with the Committee to Protect Journalists saying the Gaza conflict is the deadliest for journalists and media workers since it began keeping records.

Israel is continuing its offensive on Gaza despite a UN Security Council resolution demanding an immediate ceasefire.

On Thursday, South Africa, which has brought a case accusing Israel of genocide to the International Court of Justice, urged the court to order Israel to halt its assault on Rafah.

According to Gaza medical authorities, more than 35,200 Palestinians have been killed, mostly women and children, and over 79,200 have been injured since early October when Israel launched its offensive following an attack by Hamas.


Russia outlaws SOTA opposition news outlet

Updated 17 May 2024

  • Authorities said outlet tries to destabilize the socio-political situation in Russia
  • Move could criminalize SOTA content and puts its reporters at risk of arrest

LONDON: Russia declared opposition media outlet SOTA “undesirable” on Thursday, a move that could criminalize the sharing of its content and put its reporters at risk of arrest.
Authorities in Russia have declared dozens of news outlets, think tanks and non-profit organizations “undesirable” since 2015, a label rights groups say is designed to deter dissent.
In a statement, Russia’s Prosecutor General accused SOTA of “frank attempts to destabilize the socio-political situation in Russia” and “create tension and irritation in society.”
“Such activities, obviously encouraged by so-called Western inspirers, have the goal of undermining the spiritual and moral foundations of Russian society,” it said.
It also accused SOTA of co-operating with TV Rain and The Insider, two other independent Russian-language outlets based outside of the country that are linked to the opposition.
SOTA Project, which covers opposition protests and has been fiercely critical of the Kremlin, denied it had anything to do with TV Rain and The Insider and rejected the claims.
But it advised its followers in Russia to “remove reposts and links” to its materials to avoid the risk of prosecution. SOTA’s Telegram channel has around 137,000 subscribers.
“Law enforcement and courts consider publishing online to be a continuing offense. This means that you can be prosecuted for reposts from 2023, 2022, 2021,” it said.
SOTA Project was born out of a split with a separate news outlet called SOTAvision, which still covers the opposition but distanced itself from the prosecutors’ ruling on Thursday.
Since launching its offensive in Ukraine, Moscow has waged an unprecedented crackdown on dissent that rights groups have likened to Soviet-era mass repression.
Among other organizations labelled as “undesirable” in Russia are the World Wildlife Fund, Greenpeace, Transparency International and Radio Free Europe/Radio Liberty.


OpenAI strikes deal to bring Reddit content to ChatGPT

Updated 17 May 2024

  • Deal underscores Reddit’s attempt to diversify beyond its advertising business
  • Content will be used to train AI models

LONDON: Reddit has partnered with OpenAI to bring its content to popular chatbot ChatGPT, the companies said on Thursday, sending the social media platform’s shares up 12 percent in extended trade.
The deal underscores Reddit’s attempt to diversify beyond its advertising business, and follows its recent partnership with Alphabet to make its content available for training Google’s AI models.
Under the partnership, ChatGPT and other OpenAI products will use Reddit’s application programming interface, the means by which Reddit distributes its content.
OpenAI will also become a Reddit advertising partner, the company said.
Ahead of Reddit’s March IPO, Reuters reported that Reddit struck its deal with Alphabet, worth about $60 million per year.
Investors view selling Reddit’s data to train AI models as a key source of revenue beyond its advertising business.
The social media company earlier this month reported strong revenue growth and improving profitability in its first earnings report since its market debut, indicating that its Google deal and its push to grow its ads business were paying off.
Reddit’s shares rose 10.5 percent to $62.31 after the bell. As of Wednesday’s close, the stock was up nearly 12 percent since its market debut in March.