‘Tool for grifters’: AI deepfakes push bogus sexual cures

Updated 12 May 2025


WASHINGTON: Holding an oversized carrot, a brawny, shirtless man promotes a supplement he claims can enlarge male genitalia — one of countless AI-generated videos on TikTok peddling unproven sexual treatments.

The rise of generative AI has made it easy — and financially lucrative — to mass-produce such videos with minimal human oversight, often featuring fake celebrity endorsements of bogus and potentially harmful products.

In some TikTok videos, carrots serve as a euphemism for male genitalia, apparently to evade content moderation systems that police sexually explicit language.

“You would notice that your carrot has grown up,” the muscled man says in a robotic voice in one video, directing users to an online purchase link.

“This product will change your life,” the man adds, claiming without evidence that the herbs used as ingredients boost testosterone and send energy levels “through the roof.”

The video appears to be AI-generated, according to a deepfake detection service recently launched by the Bay Area-headquartered firm Resemble AI, which shared its results with AFP.

“As seen in this example, misleading AI-generated content is being used to market supplements with exaggerated or unverified claims, potentially putting consumers’ health at risk,” Zohaib Ahmed, Resemble AI’s chief executive and co-founder, told AFP.

“We’re seeing AI-generated content weaponized to spread false information.”

The trend underscores how rapid advances in artificial intelligence have fueled what researchers call an AI dystopia, a deception-filled online universe designed to manipulate unsuspecting users into buying dubious products.

They include everything from unverified — and in some cases, potentially harmful — dietary supplements to weight loss products and sexual remedies.

“AI is a useful tool for grifters looking to create large volumes of content slop for a low cost,” misinformation researcher Abbie Richards told AFP.

“It’s a cheap way to produce advertisements,” she added.

Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, has observed a surge of “AI doctor” avatars and audio tracks on TikTok that promote questionable sexual remedies.

Some of these videos, many with millions of views, peddle testosterone-boosting concoctions made from ingredients such as lemon, ginger and garlic.

More troublingly, rapidly evolving AI tools have enabled the creation of deepfakes impersonating celebrities such as actress Amanda Seyfried and actor Robert De Niro.

“Your husband can’t get it up?” Anthony Fauci, former director of the National Institute of Allergy and Infectious Diseases, appears to ask in a TikTok video promoting a prostate supplement.

But the clip is a deepfake, using Fauci’s likeness.

Many manipulated videos are created from existing ones, modified with AI-generated voices and lip-synced to match what the altered voice says.

“The impersonation videos are particularly pernicious as they further degrade our ability to discern authentic accounts online,” Mantzarlis said.

Last year, Mantzarlis discovered hundreds of ads on YouTube featuring deepfakes of celebrities — including Arnold Schwarzenegger, Sylvester Stallone, and Mike Tyson — promoting supplements branded as erectile dysfunction cures.

The rapid pace of generating short-form AI videos means that even when tech platforms remove questionable content, near-identical versions quickly reappear — turning moderation into a game of whack-a-mole.

Researchers say this creates unique challenges for policing AI-generated content, requiring novel solutions and more sophisticated detection tools.

AFP’s fact checkers have repeatedly debunked scam ads on Facebook promoting treatments — including erectile dysfunction cures — that use fake endorsements by Ben Carson, a neurosurgeon and former US cabinet member.

Yet many users still consider the endorsements legitimate, illustrating how persuasive deepfakes can be.

“Scammy affiliate marketing schemes and questionable sex supplements have existed for as long as the Internet and before,” Mantzarlis said.

“As with every other bad thing online, generative AI has made this abuse vector cheaper and quicker to deploy at scale.”


China’s national security agency in Hong Kong summons international media representatives


HONG KONG: China’s national security agency in Hong Kong summoned international media representatives for a “regulatory talk” on Saturday, saying some had spread false information and smeared the government in recent reports on a deadly fire and upcoming legislative elections.

Senior journalists from several major outlets operating in the city, including AFP, were summoned to the meeting by the Office for Safeguarding National Security (OSNS), which was opened in 2020 following Beijing’s imposition of a wide-ranging national security law on the city.

Through the OSNS, Beijing’s security agents operate openly in Hong Kong, with powers to investigate and prosecute national security crimes.

“Recently, some foreign media reports on Hong Kong have disregarded facts, spread false information, distorted and smeared the government’s disaster relief and aftermath work, attacked and interfered with the Legislative Council election, (and) provoked social division and confrontation,” an OSNS statement posted online shortly after the meeting said.

At the meeting, an official who did not give his name read out a similar statement to media representatives.

He did not give specific examples of coverage that the OSNS had taken issue with, and did not take questions.

The online OSNS statement urged journalists to “not cross the legal red line.”

“The Office will not tolerate the actions of all anti-China and trouble-making elements in Hong Kong, and ‘don’t say we didn’t warn you’,” it read.

For the past week and a half, news coverage in Hong Kong has been dominated by a deadly blaze on a residential estate which killed at least 159 people.

Authorities have warned against crimes that “exploit the tragedy” and have reportedly arrested at least three people for sedition in the fire’s aftermath.

Dissent in Hong Kong has been all but quashed since Beijing brought in the national security law, after huge and sometimes violent protests in 2019.

Hong Kong’s electoral system was revamped in 2021 to ensure that only “patriots” could hold office, and the upcoming poll on Sunday will select a second batch of lawmakers under those rules.