WASHINGTON: False or misleading US election claims posted on X by Elon Musk have amassed nearly 1.2 billion views this year, a watchdog reported Thursday, highlighting the billionaire’s potential influence on the highly polarized White House race.
Ahead of the November election, researchers have raised alarm that X, formerly Twitter, is a hotbed of political misinformation.
They have also flagged that Musk, who purchased the platform in 2022 and is a vocal backer of Donald Trump, appears to be swaying voters by spreading falsehoods on his personal account.
Researchers from the Center for Countering Digital Hate (CCDH) identified 50 posts since January by Musk — who has more than 193 million followers on the social media site — with election claims debunked by independent fact-checkers.
None of the posts displayed a “Community Note,” a crowd-sourced moderation tool that X has promoted as the way for users to add context to posts, CCDH said, raising questions about the tool’s effectiveness in combating falsehoods.
“Elon Musk is abusing his privileged position as owner of a... politically influential social media platform to sow disinformation that generates discord and distrust,” warned CCDH chief executive Imran Ahmed.
“The lack of Community Notes on these posts shows that his business is failing woefully to contain the kind of algorithmically-boosted incitement that we all know can lead to real-world violence.”
The posts analyzed by CCDH carried widely debunked claims, such as that Democrats are encouraging illegal migration with the aim of “importing voters” or that the election is vulnerable to fraud. Both claims amassed hundreds of millions of views.
Last week, Musk faced a firehose of criticism for sharing with his followers an AI deepfake video featuring Trump’s Democratic rival, Vice President Kamala Harris.
In it, a voiceover mimicking Harris calls President Joe Biden senile before declaring that she does not “know the first thing about running the country.”
The video, viewed by millions, carried no indication that it was parody — save for a laughing emoji. Only later did Musk clarify that the video was meant as satire.
“Musk behaves as if he is beyond reproach despite growing evidence of the harmful role he is personally playing to fuel disinformation and division ahead of the US elections,” Nora Benavidez, from the advocacy group Free Press Action Fund, told AFP.
“As his behavior edges closer to election interference, it’s up to others — the public, regulatory agencies and advertisers — to hold him accountable for his anti-democratic behavior.”
Musk, who paid $44 billion for the platform, is facing growing scrutiny over his potential influence on voters.
On Monday, a bipartisan group of five US secretaries of state sent an open letter to Musk, urging him to fix X’s AI chatbot known as Grok after it produced election misinformation.
Hours after Biden stepped down from the presidential race last month and endorsed Harris as the Democratic nominee, Grok churned out false information about ballot deadlines, which was amplified by other platforms.
X — which also faced criticism for stoking tensions during recent far-right riots across England — has gutted its trust and safety teams and scaled back the content moderation efforts once used to tame misinformation, turning the platform into what researchers call a haven for disinformation.
X did not respond to an AFP request for comment.
Elon Musk’s misleading election posts viewed 1.2 billion times: study
Sudanese rebel fighters post war crime videos on social media
- Videos show Rapid Support Forces members glorifying destruction, torturing captives
- Footage could provide evidence for future accountability, says expert
LONDON: Rebel fighters from the Sudanese Rapid Support Forces have posted videos on social media that document their involvement in war crimes, according to a recent report by UK-based newspaper The Guardian.
The footage, which has been verified by the independent non-profit organization Centre for Information Resilience, shows fighters destroying properties, burning homes and torturing prisoners.
The films could serve as key evidence in potential war crime prosecutions by international courts.
Alexa Koenig, co-developer of the Berkeley Protocol, which sets standards for social media use in war crime investigations, told The Guardian: “It’s someone condemning themselves. It’s not the same as a guilty plea but in some ways, it is a big piece of the puzzle that war crimes investigators have to put together.”
The RSF has been locked in conflict with the Sudanese military since April 2023, bringing the country to the brink of collapse.
Some estimates suggest there have been up to 150,000 civilian casualties, and 12 million people have been displaced — which would make Sudan the country with the highest rate of internal displacement in the world, according to the UN.
In Darfur’s El Geneina, more than 10,000 people — mostly Masalit — were killed in 2023 during intense fighting. Mass graves, allegedly dug by RSF fighters, were discovered by a UN investigation.
One video posted on X by a pro-RSF account showed a fighter in front of the Masalit sultan’s house declaring: “There are no more Masalit … Arabs only.”
Other footage features fighters walking through streets lined with bodies, which they call “roadblocks,” and scenes of captives being abused and mocked. Some even took selfies with their victims.
The videos offer rare glimpses into the atrocities happening in Sudan, a region largely inaccessible to journalists and NGOs.
In August, Human Rights Watch accused both sides in Sudan’s ongoing conflict of committing war crimes, including summary executions and torture, after analyzing similar social media content.
Australia considering banning children from using social media
- Australia is the latest country to take action against these platforms
- Experts voiced concerns ban could fuel underground online activity
LONDON: The Australian government announced Tuesday it is considering banning children from using social media, in a move aimed at protecting young people from harmful online content.
The legislation, expected to pass by the end of the year, does not yet specify an exact age limit, though Prime Minister Anthony Albanese suggested it could be between 14 and 16 years.
“I want to see kids off their devices and onto the footy fields and the swimming pools and the tennis courts,” Albanese told the Australian Broadcasting Corp.
“We want them to have real experiences with real people because we know that social media is causing social harm,” he added, calling the impact a “scourge.”
Several countries in the Asia-Pacific region, including Malaysia, Singapore, and Pakistan, have recently taken action against social media platforms, citing concerns over addictive behavior, bullying, gambling, and cybercrime.
Introducing this legislation has been a key priority for the current Australian government. Albanese highlighted the need for a reliable age verification system before a final decision is made.
The proposal has sparked debate, with digital rights advocates warning that such restrictions might push younger users toward more dangerous, hidden online activity.
Experts voiced concerns during a Parliamentary hearing that the ban could inadvertently harm children by encouraging them to conceal their internet usage.
Meta, the parent company of Facebook and Instagram, which currently enforces a self-imposed minimum age of 13, said it aims to empower young people to benefit from its platforms while providing parents with the necessary tools to support them, rather than “just cutting off access.”
Rapid advancement in AI requires comprehensive reevaluation, careful use, say panelists at GAIN Summit
- KAUST’s president speaks of ‘amazing young talents’
RIYADH: The rapid advancement in artificial intelligence requires a comprehensive reevaluation of traditional educational practices and methodologies and careful use of the technology, said panelists at the Global AI Summit, also known as GAIN, which opened in Riyadh on Tuesday.
During the session “Paper Overdue: Rethinking Schooling for Gen AI,” the panelists delved into the transformative impact of AI on education — from automated essay generation to personalized learning algorithms — and encouraged a rethink of the essence of teaching and learning, speaking of the necessity of an education system that seamlessly integrated with AI advancement.
Edward Byrne, president of King Abdullah University of Science and Technology, said the next decade would be interesting with advanced AI enterprises.
He added: “We now have a program to individualize assessment and, as a result, we have amazing young talents. AI will revolutionize the education system.”
Byrne, however, advised proceeding with caution, advocating the need for a “carefully designed AI system” while stressing the “careful use” of AI for “assessment.”
Alain Le Couedic, senior partner at venture firm Artificial Intelligence Quartermaster, echoed the sentiment, saying: “AI should be used carefully in learning and assessment. It’s good when fairly used to gain knowledge and skills.”
Whether at school or university, students were embracing AI, said David Yarowsky, professor of computer science at Johns Hopkins University.
He added: “So, careful use is important as it’s important to enhance skills and not just use AI to leave traditional methods and be less productive. It (AI) should ensure comprehensive evaluation and fair assessment.”
Manal Abdullah Alohali, dean of the College of Computer and Information Science at Princess Nourah bint Abdulrahman University, underlined that AI was a necessity and not a luxury.
She said the university had recently introduced programs to leverage AI and was planning to launch a “massive AI program next year.”
She explained that the university encouraged its students to “use AI in an ethical way” and “critically examine themselves” while doing so.
In another session, titled “Elevating Spiritual Intelligence and Personal Well-being,” Deepak Chopra, founder of the Chopra Foundation and Chopra Global, explored how AI could revolutionize well-being and open new horizons for personal development.
He said AI had the potential to help create a more peaceful, just, sustainable, healthy, and joyful world as it could provide teachings from different schools of thought and stimulate ethical and moral values.
While AI could not duplicate human intelligence, it could vastly enhance personal and spiritual growth and intelligence through technologies such as augmented reality, virtual reality, and the metaverse, he added.
The GAIN Summit, which is organized by the Saudi Data and AI Authority, is taking place until Sept. 12 at the King Abdulaziz International Conference Center, under the patronage of Crown Prince Mohammed bin Salman.
The summit is focusing on one of today’s most pressing global issues — AI technology — and aims to find solutions that maximize the potential of these transformative technologies for the benefit of humanity.
Older generations more likely to fall for AI-generated fake news, Global AI Summit hears
- Semafor co-founder Ben Smith says he is ‘much more worried about Gen X and older people’ falling for misinformation than younger generations
RIYADH: Media experts are concerned that older generations are more susceptible to AI-generated deep fakes and misinformation than younger people, the audience at the Global AI Summit in Riyadh heard on Tuesday.
“I am so much more worried about Gen X (those born between 1965 and 1980) and older people,” Semafor co-founder and editor-in-chief Ben Smith said during a panel titled “AI and the Future of Media: Threats and Opportunities.”
He added: “I think that young people, for better and for worse, really have learned to be skeptical, and to immediately be skeptical, of anything they’re presented with — of images, of videos, of claims — and to try to figure out where they’re getting it.”
Smith was joined during the discussion, moderated by Arab News Editor-in-Chief Faisal Abbas, by the vice president and editor-in-chief of CNN Arabic, Caroline Faraj, and Anthony Nakache, the managing director of Google MENA.
They said that AI, as a tool, is too important not to be properly regulated. In particular they highlighted its potential for verification of facts and content creation in the media industry, but said educating people about its uses is crucial.
“We have always been looking at how we can build AI in a very safe and responsible way,” said Nakache, who added that Google is working with governments and agencies to figure out the best way to go about this.
The integration of AI into journalism requires full transparency, the panelists agreed. Faraj said the technology offers a multifunctional tool that can be used for several purposes, including data verification, transcription and translation. But to ensure a report contains the full and balanced truth, a journalist will still always be needed to confirm the facts using their professional judgment.
The panelists also agreed that AI would not take important jobs from humans in the industry, as it is designed to complete repetitive manual tasks, freeing up more of a journalist’s time to interact with people and their environment.
“Are you really going to use AI to go to a war zone and to the front line to cover stories? Of course not,” said Faraj.
Smith, who has written a book on news sites and viral content, warned about the unethical ways in which some media outlets knowingly use AI-generated content because they “get addicted” to the traffic such content can generate.
All of the panelists said that educating people is the key to finding the best way forward regarding the role of AI in the media. Nakache said Google has so far trained 20,000 journalists in the region to better equip them with knowledge of how to use digital tools, and funds organizations in the region making innovative use of technology.
“It is a collective effort and we are taking our responsibility,” he added.
The panelists also highlighted some of the methods that can be used to combat confusion and prevent misinformation related to the use of AI, including the use of digital watermarks and programs that can analyze content and inform users if it was AI-generated.
Asked how traditional media organizations can best teach their audiences how to navigate the flood of deep fakes and misinformation, while still delivering the kind of content they want, Faraj said: “You listen to them. We listen to our audience and we hear exactly what they wanted to do and how we can enable them.
“We enable them and equip them with the knowledge. Sometimes we offer training, sometimes we offer listening; but listening is a must before taking any action.”
Governance and regulation of AI is crucial, experts say at Saudi-hosted summit
- Panelists discuss UN initiatives and recommendations to support ethical governance of AI
RIYADH: Governance is crucial for artificial intelligence, said South Africa’s minister of science, technology, and innovation, Blade Nzimande, on Tuesday at the third Global AI Summit in Riyadh.
In a panel titled “Global Approach to Advance Ethical Governance of AI,” Nzimande announced South Africa’s collaboration with international partners to ensure full implementation of UNESCO’s recommendations on the governance of AI.
UNESCO released its first-ever global standard on AI ethics, titled “Recommendation on the Ethics of AI,” in 2021, and earlier this year launched the Global AI Ethics and Governance Observatory, a platform for knowledge, expert insights, and good practices on the ethics and governance of AI.
Nzimande said that UNESCO’s recommendations, if implemented, would help “address the racial and gender biases, which are often embedded in AI systems; safeguard against AI applications which violate human rights; and ensure that AI development does not contribute to climate degradation.”
He added: “We need to ensure that the governance of AI is truly inclusive, and not the self-claimed prerogative of a select few. UNESCO offers us this inclusive, globally representative platform, where the voices of all matter, and South Africa commits our resources to support the recommendation’s implementation, in Africa and elsewhere.”
Other panelists included Laurence Ndong, minister of information and communication technologies for Gabon; Mohammed Ali Al-Qaed, chief executive of the Information and eGovernment Authority for the Kingdom of Bahrain; Makara Khov, secretary of state at the Cambodian Ministry of Post and Telecommunications; Ali Al-Shidhani, undersecretary for communications and information technology for the Sultanate of Oman; German State Secretary for the Federal Ministry of Digital and Transport Stefan Schnorr; Miroslav Trajanovic, state secretary at the Serbian Ministry of Science, Technological Development and Innovation; and Aissatou Jeanne Ndiaye, Senegal’s director of information and communication technology.
During the session, each representative gave a run-down of their country’s commitment to ethical AI governance.
The rapid growth of AI has made its regulation a critical focus, with the topic informing another panel, titled “Efforts in Shaping Global AI Governance from the Roadmap for Digital Cooperation to the Global Digital Compact.”
Panelists included Nighat Dad, executive director of the Digital Rights Foundation; Amandeep Singh Gill, the secretary-general’s envoy for technology at the UN; Lattifa Al-Abdulkarim, member of the Shura Council and the UN High-Level Advisory Body on AI; Nazneen Rajani, founder and CEO of Collinear AI; and Philip Thigo, Kenya’s special envoy on technology.
The panelists analyzed the “Interim Report: Governing AI for Humanity” by the UN secretary-general’s AI advisory body, focusing on the role of the body in shaping global AI policy.
Rajani highlighted the issue of limited data availability for some countries or entities and the importance of data governance in line with UNESCO’s recommendation of member states developing data governance strategies.
“One way to bridge that gap is to think of data governance in a way where we can have a data trust; a marketplace of sharing anonymized, privacy preserving data,” she said.
The GAIN Summit, organized by the Saudi Data and AI Authority, is taking place from Sept. 10-12 at the King Abdulaziz International Conference Center, under the patronage of Crown Prince Mohammed bin Salman.