Facebook isn’t deleting the fake Pelosi video. Should it?

Updated 30 May 2019


  • Pelosi derided Facebook Wednesday for not taking down the video even though it knows it is false
  • Facebook has long resisted making declarations about the truthfulness of posts that could open it up to charges of censorship or political bias

SAN FRANCISCO: When a doctored video of House Speaker Nancy Pelosi — one altered to show the Democratic leader slurring her words — began making the rounds on Facebook last week, the social network didn’t take it down. Instead, it “downranked” the video, a behind-the-scenes move intended to limit its spread.
That outraged some people who believe Facebook should do more to clamp down on misinformation. Pelosi derided Facebook Wednesday for not taking down the video even though it knows it is false.
But the company and some civil libertarians warn that Facebook could evolve into an unaccountable censor if it’s forced to make judgment calls on the veracity of text, photos or videos.
Facebook has long resisted making declarations about the truthfulness of posts that could open it up to charges of censorship or political bias. It manages to get itself in enough trouble simply trying to enforce more basic rules in difficult cases, such as the time a straightforward application of its ban on nudity led it to remove an iconic Vietnam War photo of a naked girl fleeing a napalm attack. (It backed down after criticism from the prime minister of Norway, among others.)
But staying out of the line of fire is harder than it used to be, given Facebook’s size, reach and impact on global society. The social network can’t help but run into controversy given its 2.4 billion users and the sorts of decisions it must make daily — everything from which posts and links it highlights in your news feed to deciding what counts as hate speech to banning controversial figures or leaving them be.
Facebook has another incentive to keep its head down. The deeper it gets into editorial decisions, the more it looks like a publisher, which could tempt legislators to limit the liability shield it currently enjoys under federal law. In addition, making judgments about truth and falsity could quickly become one of the world’s biggest headaches.
For instance, Republican politicians and other conservatives, from President Donald Trump to Fox News personalities, have been trumpeting the charge that Facebook is biased against conservatives. That’s a “false narrative,” said Siva Vaidhyanathan, director of the Center for Media and Citizenship at the University of Virginia. But as a result, he said, “any effort to clean up Facebook now would spark tremendous fury.”
Twitter hasn’t removed the doctored Pelosi video, either, and declined comment on its handling of it. But YouTube yanked it down, pointing to community guidelines that prohibit spam, deceptive practices and scams. Facebook has a similar policy that prohibits the use of “misleading and inaccurate” information to gain likes, followers or shares, although it apparently decided not to apply it in this case.
None of these companies explicitly prohibit false news, although Facebook notes that it “significantly” reduces the distribution of such posts by pushing them lower in user news feeds.
The problem is that such downranking doesn’t quite work, Vaidhyanathan said. As of Wednesday, the video shared on Facebook by the group Politics Watchdog had been viewed nearly 3 million times and shared more than 48,000 times. By contrast, other videos posted by this group in the past haven’t had more than a few thousand views apiece.
Further complicating matters is the fact that Facebook is starting to de-emphasize the news feed itself. CEO Mark Zuckerberg has outlined a broad strategy that will emphasize private messaging over public sharing on Facebook. And Facebook groups, many of which are private, aren’t subject to downranking, Vaidhyanathan said.
Facebook didn’t respond to emailed questions about its policies and whether it is considering changes that would allow it to remove similar videos in the future. In an interview last week with CNN’s Anderson Cooper, Facebook’s head of global policy, Monika Bickert, defended the company’s decision, noting that users are “being told” that the video is false when they view or share it.
That might be a stretch. When an Associated Press reporter attempted to share the video as a test, a Facebook pop-up noted the existence of “additional reporting” on the video with links to fact-check articles, but didn’t directly describe the video as false or misleading.
Alex Stamos, Facebook’s former security chief, tweeted Sunday that few critics of the social network’s handling of the Pelosi video could articulate realistic enforcement standards beyond “take down stuff I don’t like.” Mass censorship of misleading speech on Facebook, he wrote, would be “a huge and dangerous increase in FB’s editorial power.”
Last year, Zuckerberg wrote on Facebook that the company focuses on downranking so-called “borderline content,” stuff that doesn’t violate its rules but is provocative, sensationalist, “click-bait or misinformation.”
While it’s true that Facebook could just change its rules around what is allowed — moving the line on acceptable material — Zuckerberg said this doesn’t address the underlying problem of incentive. If the line of what is allowed moves, those creating material would just push closer to that new line.
Facebook continuously grapples with the right way to deal with new forms of misinformation, Nathaniel Gleicher, the company’s head of cybersecurity policy, said in a February interview with the AP. The problem is far more complex than carefully manipulated “deepfake” videos that show people doing things they never did, or even crudely doctored videos such as the Pelosi clip.
Any consistent policy, Gleicher said, would have to account for edited images, ones presented out of context (such as a decade-old photo presented as current), doctored audio and more. He said it’s a huge challenge to accurately identify such items and decide what type of disclosure to require when they’re edited.


Social media app TikTok removes Daesh propaganda videos

Updated 22 October 2019


  • An employee at TikTok told AFP that about 10 accounts were removed for posting the videos
  • The videos featured corpses being paraded through streets and Daesh fighters with guns
BEIJING: Social media app TikTok has taken down accounts that were posting propaganda videos for the Daesh group, a company employee said Tuesday, in the latest scandal to hit the popular platform.

TikTok, which is owned by Chinese firm ByteDance, claimed some 500 million users globally last year, making it one of the most popular social apps.

An employee at TikTok told AFP that about 10 accounts were removed for posting the videos.

“Only one of those videos even had views that reached into double digits before being taken down,” said the staffer, who declined to be named.

The videos featured corpses being paraded through streets and Daesh fighters with guns, according to the Wall Street Journal, which first reported the story on Monday.

The Journal said the posts were from about two dozen accounts, which were identified by social media intelligence company Storyful.

“Content promoting terrorist organizations have absolutely no place on TikTok,” the company said in a statement emailed to AFP.

“We permanently ban any such accounts and associated devices as soon as identified, and we continuously develop ever-stronger controls to proactively detect suspicious activity,” it said.

Daesh’s self-declared “caliphate” in Iraq and Syria fell in March, but the group remains active in several countries in the Middle East, Africa and Asia, and continues to inspire jihadists through its online presence.

The TikTok platform, which allows users to create and share 15-second videos, is particularly popular with teenagers.

“Unlike other platforms, which are centered around users’ friends or communities, TikTok is based on engaging with a never-ending stream of new content,” said Darren Davidson, the editor-in-chief of Storyful.

“The Daesh postings violate TikTok’s policies, but the sheer volume of content makes it difficult for TikTok to police their platform and root out these videos,” he said.

The app has been marred by controversy in recent months. In April, TikTok was briefly banned by an Indian court over claims it was promoting pornography among children.

The app is banned in neighboring Bangladesh and was hit with a $5.7 million fine by the US Federal Trade Commission for illegally collecting personal information from children.

The company has denied the allegations, saying it abides by local privacy laws.

ByteDance has a version of TikTok in China called Douyin.