Australia threatens fines for social media giants enabling misinformation

Updated 12 September 2024

  • Breaches face fines up to 5 percent of global revenue
  • Bill seeks to prevent election, public health disinformation

SYDNEY: Australia said it will fine internet platforms up to 5 percent of their global revenue for failing to prevent the spread of misinformation online, joining a worldwide push to rein in borderless tech giants but angering free speech advocates.
The government said it would make tech platforms set codes of conduct governing how they stop dangerous falsehoods spreading, to be approved by a regulator. The regulator would set its own standard if a platform failed to do so, then fine companies for non-compliance.
The legislation, to be introduced in parliament on Thursday, targets false content that hurts election integrity or public health, calls for denouncing a group or injuring a person, or risks disrupting key infrastructure or emergency services.
The bill is part of a wide-ranging regulatory crackdown by Australia, where leaders have complained that foreign-domiciled tech platforms are overriding the country’s sovereignty, and comes ahead of a federal election due within a year.
Already Facebook owner Meta has said it may block professional news content if it is forced to pay royalties, while X, formerly Twitter, has removed most content moderation since being bought by billionaire Elon Musk in 2022.
“Misinformation and disinformation pose a serious threat to the safety and wellbeing of Australians, as well as to our democracy, society and economy,” said Communications Minister Michelle Rowland in a statement.
“Doing nothing and allowing this problem to fester is not an option.”
An initial version of the bill was criticized in 2023 for giving the Australian Communications and Media Authority too much power to determine what constituted misinformation and disinformation, the term for intentionally spreading lies.
Rowland said the new bill specified the media regulator would not have power to force the takedown of individual pieces of content or user accounts. The new version of the bill protected professional news, artistic and religious content, while it did not protect government-authorized content.
Some four-fifths of Australians wanted the spread of misinformation addressed, the minister said, citing the Australian Media Literacy Alliance.
Meta, which counts nearly nine in 10 Australians as Facebook users, declined to comment. Industry body DIGI, of which Meta is a member, said the new regime reinforced an anti-misinformation code it last updated in 2022, but many questions remained.
X was not immediately available for comment.
Opposition home affairs spokesman James Paterson said that while he had yet to examine the revised bill, “Australians’ legitimately-held political beliefs should not be censored by either the government, or by foreign social media platforms.”
The Australian Communications and Media Authority said it welcomed “legislation to provide it with a formal regulatory role to combat misinformation and disinformation on digital platforms.”


WEF report spotlights real-world AI adoption across industries

Updated 19 January 2026

DUBAI: A new report by the World Economic Forum, released Monday, highlights companies across more than 30 countries and 20 industries that are using artificial intelligence to deliver real-world impact.

Developed in partnership with Accenture, “Proof over Promise: Insights on Real-World AI Adoption from 2025 MINDS Organizations” draws on insights from two cohorts of MINDS (Meaningful, Intelligent, Novel, Deployable Solutions), a WEF initiative focused on AI solutions that have moved beyond pilot phases to deliver measurable performance gains.

As part of its AI Global Alliance, the WEF launched the MINDS program in 2025, announcing its first cohort that year and a second cohort this week. Cohorts are selected through an evaluation process led by the WEF’s Impact Council — an independent group of experts — with applications open to public- and private-sector organizations across industries.

The report found a widening gap between organizations that have successfully scaled AI and those still struggling, while underscoring how this divide can be bridged through real-world case studies.

Based on these case studies and interviews with selected MINDS organizations, the report identified five key insights distinguishing successful AI adopters from others.

It found that leading organizations are moving away from isolated, tactical uses of AI and instead embedding it as a strategic, enterprise-wide capability.

The second insight centers on people, with AI increasingly designed to complement human expertise through closer collaboration, rather than replace it.

The other insights focus on the systems needed to scale AI effectively, including strengthening data foundations and strategic data sources, as well as moving away from fragmented technologies toward unified AI platforms.

Lastly, the report underscores the need for responsible AI, with organizations strengthening governance, safeguards and human oversight as automated decision-making becomes more widespread.

Stephan Mergenthaler, managing director and chief technology officer at the WEF, said: “AI offers extraordinary potential, yet many organizations remain unsure about how to realize it.

“The selected use cases show what is possible when ambition is translated into operational transformation and our new report provides a practical guide to help others follow the path these leaders have set.”

Among the examples cited in the report is a pilot led by the Saudi Ministry of Health in partnership with AmplifAI, which used AI-enabled thermal imaging to support early detection of diabetic foot conditions.

The initiative reduced clinician time by up to 90 percent, cut treatment costs by as much as 80 percent, and delivered a tenfold increase in screening capacity. Following clinical trials, the solution has been approved by regulatory authorities in Saudi Arabia, the UAE and Bahrain.

The report also points to work by Fujitsu, which deployed AI across its supply chain to improve inventory management. The rollout helped cut inventory-related costs by $15 million, reduce excess stock by $20 million and halve operational headcount.

In India, Tech Mahindra scaled multilingual large language models capable of handling 3.8 million monthly queries with 92 percent accuracy, enabling more inclusive access to digital services across markets in the Global South.

“Trusted, advanced AI can transform businesses, but it requires organizing data and processes to achieve the best of technology and — this is key — it also requires human ingenuity to maximize returns on AI investments,” said Manish Sharma, chief strategy and services officer at Accenture.