Thousands of fake Facebook accounts shut down by Meta were primed to polarize voters ahead of 2024

Attendees visit the Meta booth at the Game Developers Conference 2023 in San Francisco on March 22, 2023. (AP/File)

  • The network of nearly 4,800 fake accounts hints at serious threats posed by online disinformation 
  • National elections will occur in the US, Pakistan, India, Ukraine, Taiwan and other nations next year 

WASHINGTON: Someone in China created thousands of fake social media accounts designed to appear to be from Americans and used them to spread polarizing political content in an apparent effort to divide the US ahead of next year’s elections, Meta said Thursday. 

The network of nearly 4,800 fake accounts was attempting to build an audience when it was identified and eliminated by the tech company, which owns Facebook and Instagram. The accounts sported fake photos, names and locations as a way to appear like everyday American Facebook users weighing in on political issues. 

Instead of spreading fake content as other networks have done, the accounts were used to reshare posts from X, the platform formerly known as Twitter, that were created by politicians, news outlets and others. The interconnected accounts pulled content from both liberal and conservative sources, an indication that the network's goal was not to support one side or the other but to exaggerate partisan divisions and further inflame polarization. 

The newly identified network shows how America’s foreign adversaries exploit US-based tech platforms to sow discord and distrust, and it hints at the serious threats posed by online disinformation next year, when national elections will occur in the US, India, Mexico, Ukraine, Pakistan, Taiwan and other nations. 

“These networks still struggle to build audiences, but they’re a warning,” said Ben Nimmo, who leads investigations into inauthentic behavior on Meta’s platforms. “Foreign threat actors are attempting to reach people across the Internet ahead of next year’s elections, and we need to remain alert.” 

Meta Platforms Inc., based in Menlo Park, California, did not publicly link the network to the Chinese government, but it did determine the network originated in that country. The content spread by the accounts broadly complements other Chinese government propaganda and disinformation that has sought to inflate partisan and ideological divisions within the US. 

To appear more like normal Facebook accounts, the network would sometimes post about fashion or pets. Earlier this year, some of the accounts abruptly replaced their American-sounding user names and profile pictures with new ones suggesting they lived in India. The accounts then began spreading pro-Chinese content about Tibet and India, reflecting how fake networks can be redirected to focus on new targets. 

Meta often points to its efforts to shut down fake social media networks as evidence of its commitment to protecting election integrity and democracy. But critics say the platform’s focus on fake accounts distracts from its failure to address its responsibility for the misinformation already on its site that has contributed to polarization and distrust. 

For instance, Meta will accept paid advertisements on its site claiming the 2020 US election was rigged or stolen, amplifying the lies of former President Donald Trump and other Republicans whose claims about election irregularities have been repeatedly debunked. Federal and state election officials and Trump’s own attorney general have said there is no credible evidence that the presidential election, which Trump lost to Democrat Joe Biden, was tainted. 

When asked about its ad policy, the company said it is focusing on future elections, not ones from the past, and will reject ads that cast unfounded doubt on upcoming contests. 

And while Meta has announced a new artificial intelligence policy that will require political ads to bear a disclaimer if they contain AI-generated content, the company has allowed other altered videos that were created using more conventional programs to remain on its platform, including a digitally edited video of Biden that claims he is a pedophile. 

“This is a company that cannot be taken seriously and that cannot be trusted,” said Zamaan Qureshi, a policy adviser at the Real Facebook Oversight Board, an organization of civil rights leaders and tech experts who have been critical of Meta’s approach to disinformation and hate speech. “Watch what Meta does, not what they say.” 

Meta executives discussed the network’s activities during a conference call with reporters on Wednesday, the day after the tech giant announced its policies for the upcoming election year — most of which were put in place for prior elections. 

But 2024 poses new challenges, according to experts who study the link between social media and disinformation. Not only will many large countries hold national elections, but the emergence of sophisticated AI programs means it’s easier than ever to create lifelike audio and video that could mislead voters. 

“Platforms still are not taking their role in the public sphere seriously,” said Jennifer Stromer-Galley, a Syracuse University professor who studies digital media. 

Stromer-Galley called Meta’s election plans “modest” but noted it stands in stark contrast to the “Wild West” of X. Since buying the X platform, then called Twitter, Elon Musk has eliminated teams focused on content moderation, welcomed back many users previously banned for hate speech and used the site to spread conspiracy theories. 

Democrats and Republicans have called for laws addressing algorithmic recommendations, misinformation, deepfakes and hate speech, but there’s little chance of any significant regulations passing ahead of the 2024 election. That means it will fall to the platforms to voluntarily police themselves. 

Meta’s efforts to protect the election so far are “a horrible preview of what we can expect in 2024,” according to Kyle Morse, deputy executive director of the Tech Oversight Project, a nonprofit that supports new federal regulations for social media. “Congress and the administration need to act now to ensure that Meta, TikTok, Google, X, Rumble and other social media platforms are not actively aiding and abetting foreign and domestic actors who are openly undermining our democracy.” 

Many of the fake accounts identified by Meta this week also had nearly identical accounts on X, where some of them regularly retweeted Musk’s posts. 

Those accounts remain active on X. A message seeking comment from the platform was not returned. 

Meta also released a report Wednesday evaluating the risk that foreign adversaries including Iran, China and Russia would use social media to interfere in elections. The report noted that Russia’s recent disinformation efforts have focused not on the US but on its war against Ukraine, using state media propaganda and misinformation in an effort to undermine support for the invaded nation. 

Nimmo, Meta’s chief investigator, said turning opinion against Ukraine will likely be the focus of any disinformation Russia seeks to inject into America’s political debate ahead of next year’s election. 

“This is important ahead of 2024,” Nimmo said. “As the war continues, we should especially expect to see Russian attempts to target election-related debates and candidates that focus on support for Ukraine.” 


Social media companies face legal reckoning over mental health harms to children


For years, social media companies have disputed allegations that they harm children’s mental health through deliberate design choices that addict kids to their platforms and fail to protect them from sexual predators and dangerous content. Now, these tech giants are getting a chance to make their case in courtrooms around the country, including before a jury for the first time.
Some of the biggest players, from Meta to TikTok, are facing federal and state trials that seek to hold them responsible for harming children’s mental health. The lawsuits have been brought by school districts, local and state governments, the federal government and thousands of families.
Two trials are now underway in Los Angeles and in New Mexico, with more to come. The courtroom showdowns are the culmination of years of scrutiny of the platforms over child safety, and whether deliberate design choices make them addictive and serve up content that leads to depression, eating disorders or suicide.
Experts see the reckoning as reminiscent of cases against tobacco and opioid makers, and the plaintiffs hope the social media platforms will face outcomes similar to those of cigarette makers and of drug companies, pharmacies and distributors.
The outcomes could challenge the companies’ First Amendment shield and Section 230 of the 1996 Communications Decency Act, which protects tech companies from liability for material posted on their platforms. They could also be costly in the form of legal fees and settlements. And they could force the companies to change how they operate, potentially losing users and advertising dollars.
Here’s a look at the major social media harms cases in the United States.
The Los Angeles case centers on addiction
Jurors in a landmark social media case that seeks to hold tech companies responsible for harms to children got their first glimpse into what will be a lengthy trial characterized by dueling narratives from the plaintiffs and the two remaining defendants, Meta and YouTube.
At the core of the Los Angeles case is a 20-year-old identified only by the initials “KGM,” whose case could determine how thousands of similar lawsuits will play out. KGM’s case and those of two other plaintiffs have been selected as bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury.
“This is a monumental inflection point in social media,” said Matthew Bergman of the Seattle-based Social Media Victims Law Center, which represents more than 1,000 plaintiffs in lawsuits against social media companies. “When we started doing this four years ago no one said we’d ever get to trial. And here we are trying our case in front of a fair and impartial jury.”
On Wednesday, Meta CEO Mark Zuckerberg testified, mostly sticking to past talking points. In a lengthy back-and-forth about age verification, he said, “I don’t see why this is so complicated,” reiterating that the company’s policy restricts users under the age of 13 and that it works to detect users who lie about their ages to bypass restrictions.
At one point, the plaintiff’s attorney, Mark Lanier, asked Zuckerberg if people tend to use something more if it’s addictive.
“I’m not sure what to say to that,” Zuckerberg said. “I don’t think that applies here.”
New Mexico goes after Meta over sexual exploitation
A team led by New Mexico Attorney General Raúl Torrez, who sued Meta in 2023, built its case by posing as children on social media, then documenting the sexual solicitations investigators received as well as Meta’s response.
Torrez wants Meta to implement more effective age verification and do more to remove bad actors from its platform.
He is also seeking changes to algorithms that can serve up harmful material, and has criticized the end-to-end encryption that can prevent the monitoring of communications with children for safety. Meta has noted that some state and federal authorities encourage encrypted messaging in general as a privacy and security measure.
The trial kicked off in early February. In his opening statement, prosecuting attorney Donald Migliori said Meta has misrepresented the safety of its platforms, choosing to engineer its algorithms to keep young people online while knowing that children are at risk of sexual exploitation.
“Meta clearly knew that youth safety was not its corporate priority ... that youth safety was less important than growth and engagement,” Migliori told the jury.
Meta attorney Kevin Huff pushed back on those assertions in his opening statement, highlighting an array of efforts by the company to weed out harmful content from its platforms while warning users that some dangerous content still gets past its safety net.
School districts head to trial
A trial scheduled for this summer pits school districts against social media companies before US District Judge Yvonne Gonzalez Rogers in Oakland, California. The case, a multidistrict litigation, names six public school districts from around the country as bellwethers.
Jayne Conroy, a lawyer on plaintiffs’ trial team, was also an attorney for plaintiffs seeking to hold pharmaceutical companies responsible for the opioid epidemic. She said the cornerstone of both cases is the same: addiction.
“With the social media case, we’re focused primarily on children and their developing brains and how addiction is such a threat to their wellbeing and ... the harms that are caused to children — how much they’re watching and what kind of targeting is being done,” she said.
The medical science, she added, “is not really all that different, surprisingly, from an opioid or a heroin addiction. We are all talking about the dopamine reaction.”
Both the social media and the opioid cases claim negligence on the part of the defendants.
“What we were able to prove in the opioid cases is the manufacturers, the distributors, the pharmacies, they knew about the risks, they downplayed them, they oversupplied, and people died,” Conroy said. “Here, it is very much the same thing. These companies knew about the risks, they have disregarded the risks, they doubled down to get profits from advertisers over the safety of kids. And kids were harmed and kids died.”
Resolution could take years amid dueling narratives
Social media companies have disputed that their products are addictive. During questioning Wednesday by the plaintiff’s lawyer during the Los Angeles trial, Zuckerberg said he still agrees with a previous statement he made that the existing body of scientific work has not proven that social media causes mental health harms.
Some researchers do indeed question whether addiction is the appropriate term to describe heavy use of social media. Social media addiction is not recognized as an official disorder in the Diagnostic and Statistical Manual of Mental Disorders, the authority within the psychiatric community.
But the companies face increasing pushback on the issue of social media’s effects on children’s mental health, not only among academics but also parents, schools and lawmakers.
“While Meta has doubled down in this area to address mounting concerns by rolling out safety features, several recent reports suggest that the company continues to aggressively prioritize teens as a user base and doesn’t always adhere to its own rules,” said Emarketer analyst Minda Smiley.
With appeals and any settlement discussions, the cases against social media companies could take years to resolve. And unlike in Europe and Australia, tech regulation in the US is moving at a glacial pace.
“Parents, educators and other stakeholders are increasingly hoping lawmakers will do more,” Smiley said. “While there is momentum at the state and federal level, Big Tech lobbying, enforcement challenges, and lawmaker disagreements over how best to regulate social media have slowed meaningful progress.”