Facebook still auto-generating Daesh, Al-Qaeda pages

Facebook's auto-generated pages with titles like ‘I Love Islamic State,’ are ideal for terrorists to use for networking and recruiting.” (AP)
Updated 19 September 2019

  • Facebook has been working to limit the spread of extremist material on its service, so far with mixed success
  • But as the report shows, plenty of material slips through the cracks, and some of it gets auto-generated

WASHINGTON: In the face of criticism that Facebook is not doing enough to combat extremist messaging, the company likes to say that its automated systems remove the vast majority of prohibited content glorifying the Daesh group and Al-Qaeda before it’s reported.
But a whistleblower’s complaint shows that Facebook itself has inadvertently provided the two extremist groups with a networking and recruitment tool by producing dozens of pages in their names.
The social networking company appears to have made little progress on the issue in the four months since The Associated Press detailed how pages that Facebook auto-generates for businesses are aiding Middle East extremists and white supremacists in the United States.
On Wednesday, US senators on the Committee on Commerce, Science, and Transportation questioned representatives from social media companies, including Monika Bickert, who heads Facebook’s efforts to stem extremist messaging. Bickert did not address Facebook’s auto-generation during the hearing, but faced some skepticism that the company’s efforts were effectively countering extremists.
The new details come from an update of a complaint to the Securities and Exchange Commission that the National Whistleblower Center plans to file this week. The filing obtained by the AP identifies almost 200 auto-generated pages — some for businesses, others for schools or other categories — that directly reference the Daesh group and dozens more representing Al-Qaeda and other known groups. One page listed as a “political ideology” is titled “I love Islamic state.” It features an IS logo inside the outlines of Facebook’s famous thumbs-up icon.
In response to a request for comment, a Facebook spokesperson told the AP: “Our priority is detecting and removing content posted by people that violates our policy against dangerous individuals and organizations to stay ahead of bad actors. Auto-generated pages are not like normal Facebook pages as people can’t comment or post on them and we remove any that violate our policies. While we cannot catch every one, we remain vigilant in this effort.”

“Yet those very same algorithms are auto-generating pages with titles like ‘I Love Islamic State,’ which are ideal for terrorists to use for networking and recruiting.”

John Kostyack, executive director of the National Whistleblower Center

Facebook has a number of functions that auto-generate pages from content posted by users. The updated complaint scrutinizes one function that is meant to help business networking. It scrapes employment information from users’ pages to create pages for businesses. In this case, it may be helping the extremist groups because it allows users to like the pages, potentially providing a list of sympathizers for recruiters.
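The complaint does not describe Facebook's internal code, but the general pattern it criticizes, grouping profiles by a free-text employer field and creating a stub page per unique value that other users can then like, can be illustrated with a short, purely hypothetical sketch. None of the function or field names below come from Facebook's systems; they are invented for illustration.

```python
from collections import defaultdict

def auto_generate_business_pages(profiles):
    """Hypothetical illustration only: group user profiles by the free-text
    'employer' value they entered and create a stub page per unique value.
    Logic like this has no notion of whether the text names a real business
    or an extremist group."""
    grouped = defaultdict(list)
    for profile in profiles:
        employer = (profile.get("employer") or "").strip()
        if employer:
            grouped[employer].append(profile["user_id"])
    # Each stub page starts out associated with the users who listed that
    # employer, which is why such pages can read as a ready-made list of
    # sympathizers once people are able to like them.
    return {name: {"title": name, "listed_by": users} for name, users in grouped.items()}

# Invented example input: two users who typed the same employer string.
profiles = [
    {"user_id": "u1", "employer": "Example Bakery"},
    {"user_id": "u2", "employer": "Example Bakery"},
    {"user_id": "u3", "employer": ""},
]
print(auto_generate_business_pages(profiles))
```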
The new filing also found that users’ pages promoting extremist groups remain easy to find with simple searches using their names. Researchers uncovered one page for “Mohammed Atta” with an iconic photo of the Al-Qaeda adherent, one of the hijackers in the Sept. 11 attacks. The page lists the user’s work as “Al Qaidah” and education as “University Master Bin Laden” and “School Terrorist Afghanistan.”
Facebook has been working to limit the spread of extremist material on its service, so far with mixed success. In March, it expanded its definition of prohibited content to include US white nationalist and white separatist material as well as that from international extremist groups. It says it has banned 200 white supremacist organizations and removed 26 million pieces of content related to global extremist groups like IS and Al-Qaeda.
It also expanded its definition of terrorism to include not just acts of violence intended to achieve a political or ideological aim, but also attempts at violence, especially when aimed at civilians with the intent to coerce and intimidate. It’s unclear, though, how well enforcement works if the company is still having trouble ridding its platform of well-known extremist organizations’ supporters.
But as the report shows, plenty of material slips through the cracks, and some of it gets auto-generated.
The AP story in May highlighted the auto-generation problem, but the new content identified in the report suggests that Facebook has not solved it.
The report also says researchers found that many of the pages referenced in the AP report were removed more than six weeks later, on June 25, the day before Bickert testified at another congressional hearing.
The issue was flagged in the initial SEC complaint filed by the center’s executive director, John Kostyack, which alleges the social media company has exaggerated its success in combating extremist messaging.
“Facebook would like us to believe that its magical algorithms are somehow scrubbing its website of extremist content,” Kostyack said. “Yet those very same algorithms are auto-generating pages with titles like ‘I Love Islamic State,’ which are ideal for terrorists to use for networking and recruiting.”


WhatsApp being used to target Palestinians through Israel’s Lavender AI system


  • Targets selected based on membership in certain WhatsApp groups, new report reveals
  • Accusation raises questions about app’s privacy and encryption claims

LONDON: WhatsApp is allegedly being used to target Palestinians through Israel’s contentious artificial intelligence system, Lavender, which has been linked to the deaths of Palestinian civilians in Gaza, recent reports have revealed.

Earlier this month, Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call published a report by journalist Yuval Abraham, exposing the Israeli army’s use of an AI system capable of identifying targets associated with Hamas or Palestinian Islamic Jihad.

This revelation, corroborated by six Israeli intelligence officers involved in the project, has sparked international outrage, as it suggested Lavender has been used by the military to target and eliminate suspected militants, often resulting in civilian casualties.

In a recent blog post, software engineer and activist Paul Biggar highlighted Lavender’s reliance on WhatsApp.

He pointed out that membership in a WhatsApp group containing a suspected militant can influence Lavender’s identification process, underscoring the pivotal role messaging platforms play in supporting AI targeting systems like Lavender.

“A little-discussed detail in the Lavender AI article is that Israel is killing people based on being in the same WhatsApp group as a suspected militant,” Biggar wrote. “There’s a lot wrong with this.”

He explained that users often find themselves in groups with strangers or acquaintances.

Biggar also suggested that WhatsApp’s parent company, Meta, may be complicit, whether knowingly or unknowingly, in these operations.

He accused Meta of potentially violating international humanitarian law and its own commitments to human rights, raising questions about the privacy and encryption claims of WhatsApp’s messaging service.

The revelation is just the latest in a series of perceived attempts by Meta to silence pro-Palestinian voices.

Since before the beginning of the conflict, the Menlo Park giant has faced accusations of double standards favoring Israel.

In February, the Guardian revealed that Meta was considering expanding its hate speech policy to cover the term “Zionist.”

More recently, Meta quietly introduced a new feature on Instagram that automatically limits users’ exposure to what it deems “political” content, a decision criticized by experts as a means of systematically censoring pro-Palestinian content.

Responding to requests for comment, a WhatsApp spokesperson said that the company could not verify the accuracy of the report but assured that “WhatsApp has no backdoors and does not provide bulk information to any government.”


Eastern European mercenaries suspected of attacking Iranian journalist Pouria Zeraati

Updated 19 April 2024

  • UK security services believe criminal proxies with links to Tehran carried out London knife attack

LONDON: Police said on Friday that a group of Eastern European mercenaries is suspected of carrying out the knife attack on Iranian journalist Pouria Zeraati in late March.

Zeraati was stabbed repeatedly by three men in an attack outside his south London home.

The Iran International presenter lost a significant amount of blood and was hospitalized for several days. He has since returned to work, but is now living in a secure location.

Iran International and its staff have faced repeated threats, believed to be linked to the Iranian regime, which designated the broadcaster as a terrorist organization for its coverage of the 2022 protests.

Iran’s charge d’affaires, Seyed Mehdi Hosseini Matin, denied any government involvement in the attack on Zeraati.

Investigators revealed that the suspects fled the UK immediately after the incident, with reports suggesting they traveled to Heathrow Airport before boarding commercial flights to different destinations.

Police are pursuing leads in Albania as part of their investigation.

Counterterrorism units and Britain’s security services leading the inquiry believe that the attack is another instance of the Iranian regime employing criminal proxies to target its critics on foreign soil.

This method allows Tehran to maintain plausible deniability and avoids raising suspicions when suspects enter the country.

Zeraati was attacked on March 29 as he left his home to travel to work. His weekly show serves as a source of impartial and uncensored news for many Iranians at home and abroad.

In an interview with BBC Radio 4’s “Today” program this week, Zeraati said that while he is physically “much better,” mental recovery from the assault “will take time.”


Court orders release of prominent Palestinian professor suspected of incitement

Updated 19 April 2024

  • Nadera Shalhoub-Kevorkian was under investigation after questioning Hamas atrocities, criticizing Israel
  • Insufficient justification for arrest, says court
  • Detention part of a broader campaign, says lawyer

LONDON: Prominent Hebrew University of Jerusalem professor Nadera Shalhoub-Kevorkian was released on Friday after a court rejected the police’s grounds for holding her.

The criminologist and law professor was arrested the previous day on suspicion of incitement. She had been under investigation for remarks regarding the Oct. 7 attacks by Hamas and for saying Israelis were committing “genocidal crimes” in the Gaza Strip and should fear the consequences.

On Friday, the court dismissed a police request to extend her remand, citing insufficient justification for the arrest, according to Hebrew media reports.

Protesters gathered outside the courthouse to demonstrate against Shalhoub-Kevorkian’s arrest.

Israeli Channel 12, which first reported the news, did not specify where Shalhoub-Kevorkian was arrested, but her lawyer later confirmed she was apprehended at her home in the Armenian Quarter of Jerusalem.

“She’s not been in good health recently and was arrested in her home,” Alaa Mahajna said. “Police searched the house and seized her computer and cellphone, [Palestinian] poetry books and work-related papers.”

Mahajna described Shalhoub-Kevorkian’s arrest as part of a broader campaign against her, which has included numerous threats of violence and death threats.

The professor was suspended by her university last month after calling for the abolition of Zionism and suggesting that accounts of sexual assault during the Hamas-led attacks on Israel were fabricated.

The suspension was initially criticized by the university community as a blow to academic freedom in Israel. However, the decision was later reversed following an apology from Shalhoub-Kevorkian and an admission that sexual assaults took place.

Since hostilities began last year, numerous dissenting voices in Israel have faced arrest for expressing solidarity with victims of the bombardment in Gaza.

In October, well-known ultra-Orthodox Israeli journalist Israel Frey was forced into hiding following a violent attack on his home.

Bayan Khateeb, a student at the Technion-Israel Institute of Technology, was arrested last year for incitement after posting an Instagram story showing the preparation of a popular spicy egg dish with the caption: “We will soon be eating the victory shakshuka.”


Sony, Apollo discuss joint bid for Paramount, says source

Updated 19 April 2024

  • Paramount is already in an exclusive deal with Skydance Media over possible merger

LONDON: Sony Pictures Entertainment and Apollo Global Management are discussing making a joint bid for Paramount Global, according to a person familiar with the matter.
The companies have yet to approach Paramount, which is in exclusive deal talks with Skydance Media, an independent studio led by David Ellison, though some investors have urged Paramount to explore other options.
The competing bid, which is still being structured, would offer cash for all outstanding Paramount shares and take the company private, the source said.
Sony would hold a majority stake in the joint venture and operate the media company and its library of films, including such classics as “Star Trek,” “Mission: Impossible” and “Indiana Jones,” as well as television characters like SpongeBob SquarePants, according to the source.
Sony Pictures Entertainment Chairman Tony Vinciquerra, a veteran media executive with deep experience in film and television, would likely run the studio and take advantage of Sony’s marketing and distribution.
Apollo would likely assume control of the CBS broadcast network and its local television stations, because of restrictions on foreign ownership of broadcast stations, the source said. Sony’s parent corporation is headquartered in Tokyo.
The New York Times first reported the Sony-Apollo discussions. Paramount and Sony declined comment. Apollo could not be reached for comment.
The private equity firm previously made a $26 billion offer to buy Paramount Global, whose enterprise value at the end of 2023 was about $22.5 billion.
A special committee of Paramount’s board elected to continue with its advanced deal talks with Skydance, rather than chase a deal “that might not actually come to fruition,” said two people with knowledge of the board’s action.
The board committee is evaluating the possible acquisition of the smaller independent studio in a stock deal worth $4 billion to $5 billion.
Skydance is negotiating separately to acquire National Amusements, a company that holds the Redstone family’s controlling interest in Paramount, according to a person familiar with the deal terms. That transaction is contingent upon a Skydance-Paramount merger.


Meta releases beefed-up AI models, eyes integration into its apps

Updated 19 April 2024

  • AI model Llama 3 takes step towards human-level intelligence, Meta claims
  • Company also announced new AI Assistant integration into its major social media apps

SAN FRANCISCO: Meta on Thursday introduced an improved AI assistant built on new versions of its open-source Llama large language model.
Meta AI is smarter and faster due to advances in the publicly available Llama 3, the tech titan said in a blog post.
“The bottom line is we believe Meta AI is now the most intelligent AI assistant that you can freely use,” Meta co-founder and chief executive Mark Zuckerberg said in a video on Instagram.
Being open source means that developers outside of Meta are free to customize Llama 3 as they wish and the company may then incorporate those improvements and insights in an updated version.
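As a concrete illustration of what that openness means in practice, the Llama 3 weights are distributed through public channels such as Hugging Face, where outside developers can load, run, and fine-tune them. The snippet below is a minimal sketch using the publicly documented transformers API; it assumes the transformers and accelerate packages are installed and that access to the gated meta-llama/Meta-Llama-3-8B-Instruct repository has already been granted.

```python
# Minimal sketch: loading the openly released Llama 3 weights for local use.
# Assumes transformers and accelerate are installed and access to the gated
# "meta-llama/Meta-Llama-3-8B-Instruct" repository has been granted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain in one sentence why open-weight models can be customized by third parties."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```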
“We’re excited about the potential that generative AI technology can have for people who use Meta products and for the broader ecosystem,” Meta said.
“We also want to make sure we’re developing and releasing this technology in a way that anticipates and works to reduce risk.”
That effort includes incorporating protections in the way Meta designs and releases Llama models and being cautious when it adds generative AI features to Facebook, Instagram, WhatsApp, and Messenger, according to Meta.
“We’re also making Meta AI much easier to use across our apps. We built it into the search box right at the top of WhatsApp, Facebook, and Instagram messenger, so any time you have a question, you can just ask it right there,” said Zuckerberg in the video.
AI models, Meta’s included, have been known to occasionally go off the rails, giving inaccurate or bizarre responses in episodes referred to as “hallucinations.”
Examples shared on social media included Meta AI claiming to have a child in the New York City school system during an online forum conversation.
Meta AI has been consistently updated and improved since its initial release last year, according to the company.
“Meta’s slower approach to building its AI has put the company behind in terms of consumer awareness and usage, but it still has time to catch up,” said Sonata Insights chief analyst Debra Aho Williamson.
“Its social media apps represent a massive user base that it can use to test AI experiences.”
By weaving AI into its family of apps, Meta will quickly get features powered by the technology to billions of people and benefit from seeing what users do with it.
Meta cited the example of refining the way its AI answers prompts regarding political or social issues to summarize relevant points about the topic instead of offering a single point of view.
Llama 3 has been tuned to better discern whether prompts are innocuous or out-of-bounds, according to Meta.
“Large language models tend to overgeneralize, and we don’t intend for it to refuse to answer prompts like ‘How do I kill a computer program?’ even though we don’t want it to respond to prompts like ‘How do I kill my neighbor?’,” Meta explained.
Meta said it lets users know when they are interacting with AI on its platform and puts visible markers on photorealistic images that were in fact generated by AI.
Beginning in May, Meta will start labeling video, audio, and images “Made with AI” when it detects or is told content is generated by the technology.
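Meta has not published the code behind that labeling, but the general idea of combining a standard metadata marker check with user disclosure can be sketched as below. All field and function names here are assumptions made for illustration, not Meta’s internal schema; the only external reference is the IPTC digital source type value used by some tools to tag generative AI imagery.

```python
# Hypothetical sketch of "Made with AI" labeling: label content when either a
# standard metadata marker signals AI generation or the uploader discloses it.
# Field names are illustrative, not Meta's internal schema.
AI_SOURCE_MARKERS = {
    "trainedAlgorithmicMedia",  # IPTC digital source type for generative AI imagery
}

def should_label_made_with_ai(metadata: dict, uploader_declared_ai: bool) -> bool:
    """Return True when a marker is detected or the uploader says the content is AI-made."""
    source_type = metadata.get("digital_source_type", "")
    detected = any(marker in source_type for marker in AI_SOURCE_MARKERS)
    return detected or uploader_declared_ai

print(should_label_made_with_ai({"digital_source_type": "trainedAlgorithmicMedia"}, False))  # True
print(should_label_made_with_ai({}, True))   # True (disclosed by uploader)
print(should_label_made_with_ai({}, False))  # False
```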
Llama 3 is English-only for now, but in the coming months Meta will release more capable models able to converse in multiple languages, the company said.