‘AI president’: Trump deepfakes glorify himself, trash rivals

President Donald Trump boards Air Force One on November 05, 2025 at Joint Base Andrews, Maryland. (Getty Images via AFP)
Updated 06 November 2025

  • Trump, no stranger to conspiracy theories and unfounded claims, has used the content in his breathless social media commentary to glorify himself and skewer his critics 

WASHINGTON:  In a parallel reality, Donald Trump reigns as king, fighter pilot, and Superman, and his political opponents are cast as criminals and laughingstocks — an unprecedented weaponization of AI imagery by a sitting American president.
Trump has ramped up his use of artificial intelligence-generated content on his Truth Social channel since starting his second White House term, making his administration the first to deploy hyper-realistic fake visuals as a core communications strategy.
Trump, no stranger to conspiracy theories and unfounded claims, has used the content in his breathless social media commentary to glorify himself and skewer his critics — particularly during moments of national outrage.
Last month, he posted a fake video showing himself wearing a crown and flying a fighter jet labeled “King Trump” that dumps what appears to be excrement on crowds of protesters.




Meme of President Trump as "King Trump". (Truth Social)

The clip — accompanied by singer Kenny Loggins’s “Danger Zone” — was posted the same day as nationwide “No Kings” protests against what critics called his authoritarian behavior.
In another post, the White House depicted Trump as Superman amid fevered social media speculation about his health.
“THE SYMBOL OF HOPE,” the post said.
“SUPERMAN TRUMP.”

‘Distort reality’ 

Trump or the White House have similarly posted AI-made images showing the president dressed as the pope, roaring alongside a lion, and conducting an orchestra at the Kennedy Center, a venerable arts complex in the US capital.
The fabricated imagery has deceived social media users, some of whom questioned in comments whether they were authentic.


It was unclear whether the imagery was generated by Trump himself or his aides. The White House did not respond to AFP’s request for comment.
Wired magazine recently labeled Trump “America’s first generative AI president.”
“Trump peddles disinformation on and offline to boost his own image, attack his adversaries and control public discourse,” Nora Benavidez, senior counsel at the advocacy group Free Press, told AFP.
“For someone like him, unregulated generative AI is the perfect tool to capture people’s attention and distort reality.”
In September, the president triggered outrage after posting an apparent AI-generated video of himself promising every American access to all-healing “MedBed” hospitals.
MedBed, a widely debunked conspiracy theory popular among far-right circles, refers to an imaginary medical device equipped with futuristic technology. Adherents say it can cure any ailment, from asthma to cancer.
Trump’s phony clip — later deleted without any explanation — was styled as a Fox News segment and featured his daughter-in-law Lara Trump promoting a fictitious White House launch of the “historic new health care system.”

‘Campaigning through trolling’

“How do you bring people back to a shared reality when those in power keep stringing them along?” asked Noelle Cook, a researcher and author of “The Conspiracists: Women, Extremism, and the Lure of Belonging.”
Trump has reserved the most provocative AI posts for his rivals and critics, using them to rally his conservative base.
In July, he posted an AI video of former president Barack Obama being arrested in the Oval Office and appearing behind bars in an orange jumpsuit.
Later, he posted an AI clip of House minority leader Hakeem Jeffries — who is Black — wearing a fake mustache and a sombrero.
Jeffries slammed the image as racist.
“While it would in many ways be desirable for the president of the United States to stay above the fray and away from sharing AI images, Trump has repeatedly demonstrated that he sees his time in office as a non-stop political campaign,” Joshua Tucker, co-director of the New York University Center for Social Media and Politics, told AFP.
“I would see his behavior more as campaigning through trolling than actively trying to propagate the false belief that these images depict reality.”
Mirroring Trump’s strategy, California Governor Gavin Newsom on Tuesday posted an apparent AI video on X lampooning Republicans after Democrats swept key US elections.

 

The clip depicted wrestlers inside a ring with superimposed faces of Democratic leaders knocking down their Republican opponents, including Trump.
The post read: “Now that’s what we call a takedown.”
 


Keep it real: Tech giants urged to lead on safeguarding online privacy

Updated 09 December 2025

  • AI, deepfakes, misinformation under scrutiny at Bridge Summit
  • Media, tech professionals discuss how to keep users safe

ABU DHABI: As AI-generated deepfakes and bots grow more sophisticated, online privacy and identity protection have become urgent global concerns, especially for journalists, influencers and media professionals, whose lives unfold in the digital spotlight.

The growing threats of impersonation, character assassination and coordinated online abuse were at the center of a high-stakes conversation on the second day of the Bridge Summit in Abu Dhabi, where regional and international leaders from the technology and media fields tackled the complex risks surrounding digital safety, security and trust in an AI-powered world.

Adeline Hulin, chief of the media and information literacy unit at UNESCO, highlighted the risks that many people, in particular children and women, are facing online.

Although her work has long centered on promoting safe internet practices, she said that the onus of safeguarding online privacy and security rested primarily with technology companies — the only actors, she argued, capable of keeping pace with the rapid evolution of AI.

“It is going to be really important that instead of people constantly having to adapt to the technology, if the technology itself is more user-centric,” she told the summit.

“We can train people to recognize deepfakes, but technology can do that quicker.”

Major tech companies have come under fire in recent years for failing to tackle harassment and misinformation. This has prompted a wave of legislation as governments try to rein in a growing problem.

But some companies appear to be heeding the call. Erin Relford, senior privacy engineer at Google, said her company was working to embed privacy protections at the infrastructure level beneath its platforms.

“We want to give consumers the choice of how much they can share data-wise,” she said.

“The biggest challenge is making sure you have the right people in the room to create these privacy protection platforms.”

Privacy-enhancing technology would bring several new tools that let users understand how their data was being monetized and aggregated, Relford said.

Google had been working to improve parental controls and make it easier for users to understand their protections, she said, but admitted it was still difficult and more education was needed.

“Most of the power lies within the user. Consumers drive what is popular. In terms of organizations that protect your privacy, we want to encourage them and use their services rather than empowering websites that don’t,” she said.

Education is key 

Still, Relford argued that education was fundamental in rolling out privacy tools. Tech companies could only do so much if people did not increase their awareness online, she said.

“The better we educate people about privacy tools, the less harm we have from the ground up.”

Echoing similar sentiments, Hulin promoted the idea of including online literacy in school curricula. Even high-profile moves, like Australia’s recent headline-grabbing ban on under-16s using social media, would do little to reduce the risks without more education.

“Even if there is banning, it’s not going to change misinformation and disinformation. You still need to teach these kids about the information ecosystem,” she said.

“Parents need to be really interested in the news information that your children are consuming.”

Assel Mussagaliyeva-Tang, founder of Singapore-based startup EDUTech Future, said that the AI revolution demanded close collaboration between schools, universities and families to equip children with the skills to navigate new technologies safely and responsibly.

“We need to set up the guardrails and protection of the kids because they are not aware how the model will respond to their needs,” she said.

A UNESCO survey found that 62 percent of digital creators skip rigorous fact-checking, while a 2024 YouGov study showed only 27 percent of young adults feel confident about AI in education.

Mussagaliyeva-Tang said educators needed to focus on preparing and nurturing adults who were “ready for the world,” by integrating ethics, data literacy and critical thinking into curricula.

But she said that universities and the broader education system remained behind the curve in adapting to emerging technologies and equipping students with the skills needed for responsible digital engagement.

Likewise, tech companies needed to be transparent and inclusive in how they trained their models, using data that represented different cultures, she said.

While global regulations on AI remain fragmented, Dr. Luca Iando, dean and distinguished chair of the Collins College of Professional Studies at St. John's University, called on educational institutions to collaborate actively with technology platforms to shape educational content and mitigate the potential harm of AI to children as the technology continues to spread.

He warned of young people's overreliance on AI and said that, in the long term, educators needed to focus on developing "durable, human skills" in students and to transform the types of assignments and coursework to meet the new age of AI.

There needed to be guidelines for students on using AI responsibly, to prepare them for the workplace, he said.

Highlighting the skills gap between educational institutions and the modern workplace, Mussagaliyeva-Tang said: “Employers want professionals. They don’t have time and the budgets to retrain after the outdated curriculum of the university.”

The rise of AI demanded a rethinking of the true purpose of education to nurture individuals who strove to make a positive impact on a rapidly evolving world, she said.