‘Tool for grifters’: AI deepfakes push bogus sexual cures

“AI is a useful tool for grifters looking to create large volumes of content slop for a low cost,” misinformation researcher Abbie Richards told AFP. (Reuters)
Updated 12 May 2025


  • The trend underscores how rapid advances in artificial intelligence have fueled what researchers call an AI dystopia: a deception-filled online world designed to push dubious products on unsuspecting users

WASHINGTON: Holding an oversized carrot, a brawny, shirtless man promotes a supplement he claims can enlarge male genitalia — one of countless AI-generated videos on TikTok peddling unproven sexual treatments.

The rise of generative AI has made it easy — and financially lucrative — to mass-produce such videos with minimal human oversight, often featuring fake celebrity endorsements of bogus and potentially harmful products.

In some TikTok videos, carrots are used as a euphemism for male genitalia, apparently to evade content moderation policing sexually explicit language.

“You would notice that your carrot has grown up,” the muscled man says in a robotic voice in one video, directing users to an online purchase link.

“This product will change your life,” the man adds, claiming without evidence that the herbs used as ingredients boost testosterone and send energy levels “through the roof.”

The video appears to be AI-generated, according to a deepfake detection service recently launched by the Bay Area-headquartered firm Resemble AI, which shared its results with AFP.

“As seen in this example, misleading AI-generated content is being used to market supplements with exaggerated or unverified claims, potentially putting consumers’ health at risk,” Zohaib Ahmed, Resemble AI’s chief executive and co-founder, told AFP.

“We’re seeing AI-generated content weaponized to spread false information.”

The trend underscores how rapid advances in artificial intelligence have fueled what researchers call an AI dystopia, a deception-filled online universe designed to manipulate unsuspecting users into buying dubious products.

They include everything from unverified — and in some cases, potentially harmful — dietary supplements to weight loss products and sexual remedies.

“AI is a useful tool for grifters looking to create large volumes of content slop for a low cost,” misinformation researcher Abbie Richards told AFP.


“It’s a cheap way to produce advertisements,” she added.

Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, has observed a surge of “AI doctor” avatars and audio tracks on TikTok that promote questionable sexual remedies.

Some of these videos, many with millions of views, peddle testosterone-boosting concoctions made from ingredients such as lemon, ginger and garlic.

More troublingly, rapidly evolving AI tools have enabled the creation of deepfakes impersonating celebrities such as actress Amanda Seyfried and actor Robert De Niro.

“Your husband can’t get it up?” Anthony Fauci, former director of the National Institute of Allergy and Infectious Diseases, appears to ask in a TikTok video promoting a prostate supplement.

But the clip is a deepfake, using Fauci’s likeness.

Many manipulated videos are created from existing ones, modified with AI-generated voices and lip-synced to match what the altered voice says.

“The impersonation videos are particularly pernicious as they further degrade our ability to discern authentic accounts online,” Mantzarlis said.

Last year, Mantzarlis discovered hundreds of ads on YouTube featuring deepfakes of celebrities — including Arnold Schwarzenegger, Sylvester Stallone, and Mike Tyson — promoting supplements branded as erectile dysfunction cures.

The rapid pace of generating short-form AI videos means that even when tech platforms remove questionable content, near-identical versions quickly reappear — turning moderation into a game of whack-a-mole.

Researchers say this creates unique challenges for policing AI-generated content, requiring novel solutions and more sophisticated detection tools.

AFP’s fact checkers have repeatedly debunked scam ads on Facebook promoting treatments — including erectile dysfunction cures — that use fake endorsements by Ben Carson, a neurosurgeon and former US cabinet member.

Yet many users still consider the endorsements legitimate, illustrating the appeal of deepfakes.

“Scammy affiliate marketing schemes and questionable sex supplements have existed for as long as the Internet and before,” Mantzarlis said.

“As with every other bad thing online, generative AI has made this abuse vector cheaper and quicker to deploy at scale.”


Amazon’s AWS reports outage after UAE datacenter struck by ‘objects’

Updated 02 March 2026


  • AWS confirmed sparks and fire after objects struck its UAE data center, causing disruptions in the UAE and Bahrain regions
  • Full recovery expected to “be many hours away”

LONDON: Amazon’s cloud-computing facilities in the Middle East faced power and connectivity issues on Monday after unidentified “objects” struck its data center in the United Arab Emirates.
The objects had triggered a fire on Sunday that forced authorities to eventually cut power to two clusters of Amazon data centers in the UAE, with restoration expected to take several more hours, according to Amazon Web Services’ (AWS) status page.
Localized power issues impacted AWS services in both the UAE and neighboring Bahrain, according to the page. Abu Dhabi Commercial Bank said its platforms and mobile app were unavailable due to a region-wide IT disruption, although it did not directly link the outage to the AWS incident.
While Amazon did not identify the objects, the incident happened on the same day Iran fired a barrage of drones and missiles at Gulf states in retaliation for US and Israeli strikes that killed Supreme Leader Ayatollah Ali Khamenei.
A strike on the AWS facility in the UAE, if confirmed, would mark the first time a major US tech company’s data center has been knocked offline by military action. It could also raise questions around Big Tech’s pace of expansion in the region.
US tech giants have been positioning the UAE as a regional hub for artificial intelligence computing needed to power services such as ChatGPT. Microsoft said in November it plans to bring its total investment in the UAE to $15 billion by the end of 2029 and will use Nvidia chips for its data centers there.
“In previous conflicts, regional adversaries such as Iran and its proxies targeted pipelines, refineries, and oil fields in Gulf partner states. In the compute era, these actors could also target data centers, energy infrastructure supporting compute, and fiber chokepoints,” Washington-based think tank Center for Strategic and International Studies said last week.
Microsoft as well as Google and Oracle — both of which also operate facilities in the UAE — did not immediately respond to Reuters requests for comment.
AWS said a full recovery from the issues was expected to “be many hours away” for both UAE and Bahrain.
The outage disrupted a dozen core cloud services, and the company advised customers to back up critical data and shift operations to servers in unaffected AWS regions.