YouTube shooting shows how fake news spirals on social media

YouTube employees are seen walking away from YouTube headquarters following an active shooter situation in San Bruno, California, U.S., on Tuesday, April 3, 2018. (REUTERS)
Updated 04 April 2018

PARIS: Within minutes of the shooting at YouTube offices in California, social media was awash with conspiracy theories and images of the supposed “shooter” wearing a Muslim headscarf.
Some Facebook videos were quick to claim that it was a “false flag” attack, carried out to discredit the powerful US gun lobby in the wake of the Parkland high school massacre in Florida.
With wildly exaggerated accounts of the death toll circulating, several pictures of the purported attacker and some of the “victims” posted to Twitter Tuesday turned out to be of well-known YouTubers.
Other widely-shared posts speculated that the attacker had been provoked by YouTube censoring political content, and one Twitter user posted a picture of the suspect as Hillary Clinton in a headscarf.
His account was later suspended.
Hoaxers, too, took advantage of the situation, posting several pictures of the US comic Sam Hyde, who is known for Internet pranks.
None of which came as any surprise to researchers at the Massachusetts Institute of Technology, whose report last month found that false news spreads far faster on Twitter than real news — and by a substantial margin.
“We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information,” said Sinan Aral, a professor at the MIT Sloan School of Management.
They found that false political news reached more people faster and went deeper into their social networks than any other category of false information.

While Russian troll factories have got much of the blame for attempting to poison the political discourse in election campaigns across the US and Europe, the team from the MIT Media Lab found that fake news spreads not because of bots but from people retweeting inaccurate reports.
Researchers found that “false news stories are 70 percent more likely to be retweeted than true ones. It also takes true stories about six times as long to reach 1,500 people as it does for false stories.”
While real news stories are rarely retweeted by more than a thousand people, the most popular fake news items are regularly shared by up to 100,000.
Emma Gonzalez, one of the Parkland students who has become a leader of the #NeverAgain movement pushing for tougher gun control, has become a particular target for misinformation attacks in recent weeks.
A doctored picture of her ripping up the US constitution trended last week, exposing her to vicious online vitriol. She had actually been ripping up a gun target in a photo shoot for Teen Vogue magazine.

Another fake meme went viral showing Gonzalez allegedly attacking a gun supporter’s truck, when it was in fact an image of the then shaven-headed pop star Britney Spears in an infamous meltdown from 2007.
Rudy Reichstadt, of the Conspiracy Watch website, said disinformation feeds on the “shock and stupor” that traumatic events create.
“We now have conspiracy theory entrepreneurs who react instantly to these events and rewrite unfolding narratives to fit their conspiratorial alternative storytelling.”
He said US shock jock and Infowars founder Alex Jones, a prominent pro-gun activist, had set the template for generating fake news to fit a particular agenda.
He plays up “conspiracy theories every time there is a new shooting,” Reichstadt told AFP. “He is a prisoner of his own theories and is constantly trying to move the story on (with new elements) to keep the conspiracy alive.”
The France-based researcher said there was now a whole ecosystem of fake news manufacturers, from those who “use clickbait sensationalism to increase their advertising revenue to disinformation professionals and weekend conspiracy theorists who sound off on YouTube.”
The MIT study, which was inspired by the online rumors which circulated after the Boston marathon attack in 2013, focused on what it called “rumor cascades” — unbroken chains of retweets after a Twitter user makes a false claim.
Aral said they concluded that people are more likely to share fake news because “false news is more novel, and people are more likely to share novel information. Those who do are seen as being in the know.”


Malaysia, Indonesia become first to block Musk’s Grok over AI deepfakes

Updated 12 January 2026

  • Authorities in both countries acted over the weekend, citing concerns about non-consensual and sexual deepfakes
  • Regulators say existing controls cannot prevent fake pornographic content, especially involving women and minors

KUALA LUMPUR: Malaysia and Indonesia have become the first countries to block Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, after authorities said it was being misused to generate sexually explicit and non-consensual images.
The moves reflect growing global concern over generative AI tools that can produce realistic images, sound and text, while existing safeguards fail to prevent their abuse. The Grok chatbot, which is accessed through Musk’s social media platform X, has been criticized for generating manipulated images, including depictions of women in bikinis or sexually explicit poses, as well as images involving children.
Regulators in the two Southeast Asian nations said existing controls were not preventing the creation and spread of fake pornographic content, particularly involving women and minors. Indonesia’s government temporarily blocked access to Grok on Saturday, followed by Malaysia on Sunday.
“The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space,” Indonesia’s Communication and Digital Affairs Minister Meutya Hafid said in a statement Saturday.
The ministry said the measure was intended to protect women, children and the broader community from fake pornographic content generated using AI.
Initial findings showed that Grok lacks effective safeguards to stop users from creating and distributing pornographic content based on real photos of Indonesian residents, Alexander Sabar, director general of digital space supervision, said in a separate statement. He said such practices risk violating privacy and image rights when photos are manipulated or shared without consent, causing psychological, social and reputational harm.
In Kuala Lumpur, the Malaysian Communications and Multimedia Commission ordered a temporary restriction on Grok on Sunday after what it said was “repeated misuse” of the tool to generate obscene, sexually explicit and non-consensual manipulated images, including content involving women and minors.
The regulator said notices issued this month to X Corp. and xAI demanding stronger safeguards drew responses that relied mainly on user reporting mechanisms.
“The restriction is imposed as a preventive and proportionate measure while legal and regulatory processes are ongoing,” it said, adding that access will remain blocked until effective safeguards are put in place.
Launched in 2023, Grok is free to use on X. Users can ask it questions on the social media platform by tagging it in posts they have created or in replies to posts from other users. Last summer the company added an image generator feature, Grok Imagine, that included a so-called “spicy mode” capable of generating adult content.
The Southeast Asian restrictions come amid mounting scrutiny of Grok elsewhere, including in the European Union, Britain, India and France. Grok last week limited image generation and editing to paying users following a global backlash over sexualized deepfakes of people, but critics say it did not fully address the problem.