Facebook whistleblower says network profits as it hurts kids, fuels division

Facebook whistleblower Frances Haugen appears before the Senate Commerce, Science, and Transportation Subcommittee during a hearing entitled 'Protecting Kids Online: Testimony from a Facebook Whistleblower'. (AFP)
Updated 05 October 2021


  • Facebook’s leadership rejected recommendations to make its platforms, including Instagram, safer
  • Frances Haugen, a former product manager on Facebook’s civic misinformation team, said Facebook had also done too little to prevent its platform from being used by people planning violence

WASHINGTON: A former Facebook data scientist told Congress on Tuesday that the social network giant’s products harm children and fuel polarization in the US, while its executives refuse to change because they elevate profits over safety. And she laid responsibility squarely on the company’s CEO, Mark Zuckerberg.
Frances Haugen testified to the Senate Commerce Subcommittee on Consumer Protection. Speaking confidently at a charged hearing, she accused the company of being aware of apparent harm to some teens from Instagram and being dishonest in its public fight against hate and misinformation.
“Facebook’s products harm children, stoke division and weaken our democracy,” Haugen said. “The company’s leadership knows how to make Facebook and Instagram safer but won’t make the necessary changes because they have put their astronomical profits before people.”
“Congressional action is needed,” she said. “They won’t solve this crisis without your help.”
Haugen said the company has publicly acknowledged that integrity controls were crucial for the systems that stoke user engagement, but then disabled some of those controls.
In dialogue with receptive senators of both parties, Haugen, who focused on algorithm products in her work at Facebook, explained the importance to the company of algorithms that govern what shows up on users’ news feeds. She said a 2018 change to the content flow contributed to more divisiveness and ill will in a network ostensibly created to bring people closer together.
Despite the enmity that the new algorithms were feeding, she said Facebook found that they helped keep people coming back — a pattern that helped the social media giant sell more of the digital ads that generate most of its revenue.
Senators agreed.
“It has profited off spreading misinformation and disinformation and sowing hate,” said Sen. Richard Blumenthal, D-Connecticut, the panel’s chairman. “Facebook’s answers to Facebook’s destructive impact always seems to be more Facebook, we need more Facebook — which means more pain, and more money for Facebook.”
Haugen said she believed Facebook didn’t set out to build a destructive platform. But “in the end, the buck stops with Mark,” she said referring to Zuckerberg, who controls more than 50 percent of Facebook’s voting shares. “There is no one currently holding Mark accountable but himself.”
Haugen said she believed that Zuckerberg was familiar with some of the internal research showing concerns for potential negative impacts of Instagram.
The government needs to step in with stricter oversight of the company, Haugen said.
Like fellow tech giants Google, Amazon and Apple, Facebook has enjoyed minimal regulation. A number of bipartisan legislative proposals for the tech industry address data privacy, protection of young people and anti-competitive conduct. But getting new laws through Congress is a heavy slog. The Federal Trade Commission has adopted a stricter stance recently toward Facebook and other companies.
The subcommittee is examining Facebook’s use of information from its own researchers on Instagram that could indicate potential harm for some of its young users, especially girls, while it publicly downplayed the negative impacts. For some of the teens devoted to Facebook’s popular photo-sharing platform, the peer pressure generated by the visually focused Instagram led to mental health and body-image problems, and in some cases, eating disorders and suicidal thoughts, the research leaked by Haugen showed.
One internal study found that 13.5 percent of teen girls said Instagram makes thoughts of suicide worse, and 17 percent said it makes eating disorders worse.
Because of the drive for user engagement, Haugen testified, “Facebook knows that they are leading young users to anorexia content. ... It’s just like cigarettes. Teenagers don’t have any self-regulation. We need to protect the kids.”
Haugen has come forward with a wide-ranging condemnation of Facebook, buttressed with tens of thousands of pages of internal research documents she secretly copied before leaving her job in the company’s civic integrity unit. She also has filed complaints with federal authorities alleging that Facebook’s own research shows that it amplifies hate, misinformation and political unrest, but the company hides what it knows.
“The company intentionally hides vital information from the public, from the US government and from governments around the world,” Haugen said. “The documents I have provided to Congress prove that Facebook has repeatedly misled the public about what its own research reveals about the safety of children, the efficacy of its artificial intelligence systems and its role in spreading divisive and extreme messages.”
The former employee challenging the social network giant with 2.8 billion users worldwide and nearly $1 trillion in market value is a 37-year-old data expert from Iowa with a degree in computer engineering and a master’s degree in business from Harvard. Prior to being recruited by Facebook in 2019, she worked for 15 years at tech companies including Google, Pinterest and Yelp.
After recent reports in The Wall Street Journal based on documents she leaked to the newspaper raised a public outcry, Haugen revealed her identity in a CBS “60 Minutes” interview aired Sunday night.
As the public relations debacle over the Instagram research grew last week, Facebook put on hold its work on a kids’ version of Instagram, which the company says is meant mainly for tweens aged 10 to 12.
Haugen said that Facebook prematurely turned off safeguards designed to thwart misinformation and incitement to violence after Joe Biden defeated Donald Trump last year, alleging that this decision contributed to the deadly Jan. 6 assault on the US Capitol.
After the November election, Facebook dissolved the civic integrity unit where Haugen had been working. That, she says, was the moment she realized “I don’t trust that they’re willing to actually invest what needs to be invested to keep Facebook from being dangerous.”
Haugen says she told Facebook executives when they recruited her that she wanted to work in an area of the company that fights misinformation, because she had lost a friend to online conspiracy theories.
Facebook maintains that Haugen’s allegations are misleading and insists there is no evidence to support the premise that it is the primary cause of social polarization.
“Even with the most sophisticated technology, which I believe we deploy, even with the tens of thousands of people that we employ to try and maintain safety and integrity on our platform, we’re never going to be absolutely on top of this 100 percent of the time,” Nick Clegg, Facebook’s vice president of policy and public affairs, said Sunday on CNN’s “Reliable Sources.”
That’s because of the “instantaneous and spontaneous form of communication” on Facebook, Clegg said, adding, “I think we do more than any reasonable person can expect to.”


WhatsApp being used to target Palestinians through Israel’s Lavender AI system

Updated 19 April 2024


  • Targets selected based on membership in certain WhatsApp groups, new report reveals
  • Accusation raises questions about app’s privacy and encryption claims

LONDON: WhatsApp is allegedly being used to target Palestinians through Israel’s contentious artificial intelligence system, Lavender, which has been linked to the deaths of Palestinian civilians in Gaza, recent reports have revealed.

Earlier this month, Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call published a report by journalist Yuval Abraham, exposing the Israeli army’s use of an AI system capable of identifying targets associated with Hamas or Palestinian Islamic Jihad.

This revelation, corroborated by six Israeli intelligence officers involved in the project, has sparked international outrage, as it suggested Lavender has been used by the military to target and eliminate suspected militants, often resulting in civilian casualties.

In a recent blog post, software engineer and activist Paul Biggar highlighted Lavender’s reliance on WhatsApp.

He noted that membership in a WhatsApp group that includes a suspected militant can feed into Lavender’s identification process, underscoring the pivotal role messaging platforms play in supporting AI targeting systems like Lavender.

“A little-discussed detail in the Lavender AI article is that Israel is killing people based on being in the same WhatsApp group as a suspected militant,” Biggar wrote. “There’s a lot wrong with this.”

He explained that users often find themselves in WhatsApp groups with strangers or mere acquaintances, making group membership an unreliable indicator of any real association.

Biggar also suggested that WhatsApp’s parent company, Meta, may be complicit, whether knowingly or unknowingly, in these operations.

He accused Meta of potentially violating international humanitarian law and its own commitments to human rights, raising questions about the privacy and encryption claims of WhatsApp’s messaging service.

The revelation is just the latest of Meta’s perceived attempts to silence pro-Palestinian voices.

Even before the conflict began, the Menlo Park-based giant had faced accusations of double standards favoring Israel.

In February, the Guardian revealed that Meta was considering the expansion of its hate speech policy to the term “Zionist.”

More recently, Meta quietly introduced a new feature on Instagram that automatically limits users’ exposure to what it deems “political” content, a decision criticized by experts as a means of systematically censoring pro-Palestinian content.

Responding to requests for comment, a WhatsApp spokesperson said that the company could not verify the accuracy of the report but assured that “WhatsApp has no backdoors and does not provide bulk information to any government.”


Eastern European mercenaries suspected of attacking Iranian journalist Pouria Zeraati

Updated 19 April 2024


  • UK security services believe criminal proxies with links to Tehran carried out London knife attack

LONDON: Police said on Friday that a group of Eastern European mercenaries is suspected to have carried out the knife attack on Iranian journalist Pouria Zeraati in late March.

Zeraati was stabbed repeatedly by three men in an attack outside his south London home.

The Iran International presenter lost a significant amount of blood and was hospitalized for several days. He has since returned to work, but is now living in a secure location.

Iran International and its staff have faced repeated threats, believed to be linked to the Iranian regime, which designated the broadcaster as a terrorist organization for its coverage of the 2022 protests.

Iran’s charge d’affaires, Seyed Mehdi Hosseini Matin, denied any government involvement in the attack on Zeraati.

Investigators revealed that the suspects fled the UK immediately after the incident, with reports suggesting they traveled to Heathrow Airport before boarding commercial flights to different destinations.

Police are pursuing leads in Albania as part of their investigation.

Counterterrorism units and Britain’s security services leading the inquiry believe that the attack is another instance of the Iranian regime employing criminal proxies to target its critics on foreign soil.

This method allows Tehran to maintain plausible deniability and avoids raising suspicions when suspects enter the country.

Zeraati was attacked on March 29 as he left his home to travel to work. His weekly show serves as a source of impartial and uncensored news for many Iranians at home and abroad.

In an interview with BBC Radio 4’s “Today” program this week, Zeraati said that while he is physically “much better,” mental recovery from the assault “will take time.”


Court orders release of prominent Palestinian professor suspected of incitement

Updated 19 April 2024


  • Nadera Shalhoub-Kevorkian was under investigation after questioning Hamas atrocities, criticizing Israel
  • Insufficient justification for arrest, says court
  • Detention part of a broader campaign, says lawyer

LONDON: Nadera Shalhoub-Kevorkian, a prominent Hebrew University of Jerusalem professor, was released on Friday after a court rejected police arguments for her continued detention.

The criminologist and law professor was arrested the previous day on suspicion of incitement. She had been under investigation for remarks regarding the Oct. 7 attacks by Hamas and for saying Israelis were committing “genocidal crimes” in the Gaza Strip and should fear the consequences.

On Friday, the court dismissed a police request to extend her remand, citing insufficient justification for the arrest, according to Hebrew media reports.

Protesters gathered outside the courthouse to demonstrate against Shalhoub-Kevorkian’s arrest.

Israeli Channel 12, which first reported the news, did not specify where Shalhoub-Kevorkian was arrested, but her lawyer later confirmed she was apprehended at her home in the Armenian Quarter of Jerusalem.

“She’s not been in good health recently and was arrested in her home,” Alaa Mahajna said. “Police searched the house and seized her computer and cellphone, [Palestinian] poetry books and work-related papers.”

Mahajna described Shalhoub-Kevorkian’s arrest as part of a broader campaign against her, one that has included numerous threats of violence, including threats to her life.

The professor was suspended by her university last month after calling for the abolition of Zionism and suggesting that accounts of sexual assault during the Hamas-led attacks on Israel were fabricated.

The suspension was initially criticized by the university community as a blow to academic freedom in Israel. However, the decision was later reversed following an apology from Shalhoub-Kevorkian and an admission that sexual assaults took place.

Since hostilities began last year, numerous dissenting voices in Israel have faced arrest for expressing solidarity with victims of the bombardment in Gaza.

In October, well-known ultra-Orthodox Israeli journalist Israel Frey was forced into hiding following a violent attack on his home.

Bayan Khateeb, a student at the Technion-Israel Institute of Technology, was arrested last year for incitement after posting an Instagram story showing the preparation of a popular spicy egg dish with the caption: “We will soon be eating the victory shakshuka.”


Sony, Apollo discuss joint bid for Paramount, says source

Updated 19 April 2024


  • Paramount is already in an exclusive deal with Skydance Media over possible merger

LONDON: Sony Pictures Entertainment and Apollo Global Management are discussing making a joint bid for Paramount Global, according to a person familiar with the matter.
The companies have yet to approach Paramount, which is in exclusive deal talks with Skydance Media, an independent studio led by David Ellison, though some investors have urged Paramount to explore other options.
The competing bid, which is still being structured, would offer cash for all outstanding Paramount shares and take the company private, the source said.
Sony would hold a majority stake in the joint venture and operate the media company, including Paramount’s library of films, with classics such as “Star Trek,” “Mission: Impossible” and “Indiana Jones,” and television characters like SpongeBob SquarePants, according to the source.
Sony Pictures Entertainment Chairman Tony Vinciquerra, a veteran media executive with deep experience in film and television, would likely run the studio and take advantage of Sony’s marketing and distribution.
Apollo would likely assume control of the CBS broadcast network and its local television stations, because of restrictions on foreign ownership of broadcast stations, the source said. Sony’s parent corporation is headquartered in Tokyo.
The New York Times first reported the Sony-Apollo discussions. Paramount and Sony declined to comment. Apollo could not be reached for comment.
The private equity firm previously made a $26 billion offer to buy Paramount Global, whose enterprise value at the end of 2023 was about $22.5 billion.
A special committee of Paramount’s board elected to continue with its advanced deal talks with Skydance, rather than chase a deal “that might not actually come to fruition,” said two people with knowledge of the board’s action.
The board committee is evaluating the possible acquisition of the smaller independent studio in a stock deal worth $4 billion to $5 billion.
Skydance is negotiating separately to acquire National Amusements, a company that holds the Redstone family’s controlling interest in Paramount, according to a person familiar with the deal terms. That transaction is contingent upon a Skydance-Paramount merger.


Meta releases beefed-up AI models, eyes integration into its apps

Updated 19 April 2024


  • AI model Llama 3 takes step towards human-level intelligence, Meta claims
  • Company also announced new AI Assistant integration into its major social media apps

SAN FRANCISCO: Meta on Thursday introduced an improved AI assistant built on new versions of its open-source Llama large language model.
Meta AI is smarter and faster due to advances in the publicly available Llama 3, the tech titan said in a blog post.
“The bottom line is we believe Meta AI is now the most intelligent AI assistant that you can freely use,” Meta co-founder and chief executive Mark Zuckerberg said in a video on Instagram.
Being open source means that developers outside of Meta are free to customize Llama 3 as they wish and the company may then incorporate those improvements and insights in an updated version.
“We’re excited about the potential that generative AI technology can have for people who use Meta products and for the broader ecosystem,” Meta said.
“We also want to make sure we’re developing and releasing this technology in a way that anticipates and works to reduce risk.”
That effort includes incorporating protections in the way Meta designs and releases Llama models and being cautious when it adds generative AI features to Facebook, Instagram, WhatsApp, and Messenger, according to Meta.
“We’re also making Meta AI much easier to use across our apps. We built it into the search box right at the top of WhatsApp, Facebook, and Instagram messenger, so any time you have a question, you can just ask it right there,” said Zuckerberg in the video.
AI models, Meta’s included, have been known to occasionally go off the rails, giving inaccurate or bizarre responses in episodes referred to as “hallucinations.”
Examples shared on social media included Meta AI claiming to have a child in the New York City school system during an online forum conversation.


Meta AI has been consistently updated and improved since its initial release last year, according to the company.
“Meta’s slower approach to building its AI has put the company behind in terms of consumer awareness and usage, but it still has time to catch up,” said Sonata Insights chief analyst Debra Aho Williamson.
“Its social media apps represent a massive user base that it can use to test AI experiences.”
By weaving AI into its family of apps, Meta will quickly get features powered by the technology to billions of people and benefit from seeing what users do with it.
Meta cited the example of refining the way its AI answers prompts regarding political or social issues to summarize relevant points about the topic instead of offering a single point of view.
Llama 3 has been tuned to better discern whether prompts are innocuous or out-of-bounds, according to Meta.
“Large language models tend to overgeneralize, and we don’t intend for it to refuse to answer prompts like ‘How do I kill a computer program?’ even though we don’t want it to respond to prompts like ‘How do I kill my neighbor?’,” Meta explained.
Meta said it lets users know when they are interacting with AI on its platform and puts visible markers on photorealistic images that were in fact generated by AI.
Beginning in May, Meta will start labeling video, audio, and images “Made with AI” when it detects or is told content is generated by the technology.
Llama 3 is, for now, English-only, but in the coming months Meta will release more capable models that can converse in multiple languages, the company said.