Automation is testing journalism, not ending it

In many newsrooms today, a breaking statement can be transcribed, summarized, translated, and shaped into a publishable update before reporters even finish their first cup of coffee. The software does not hesitate. It simply produces.

This is not hypothetical. It is already an operational reality. Artificial intelligence now performs tasks that once consumed entire editorial shifts, from transcription to data sorting, trend monitoring, and structured briefings. What once required hours can now be completed in minutes. For media organizations navigating relentless digital cycles, that efficiency is transformative.

Yet the claim that AI will “replace journalism” misreads what is actually evolving.

This is, for now, a concern rather than a crisis. It is part of the broader question dominating discussions today: Will AI replace human intelligence? Will it replace humanity?

Although this question, and the debate surrounding it, have become somewhat rhetorical and exhausted, my answer is simple: We created AI, and it will always require human regulation and guidance.

From my perspective inside a newsroom navigating rapid digital acceleration, the tension is not about drafting speed but about defining value. Over time, parts of journalism drifted toward procedural production — rewriting statements, formatting updates, and generating summaries. These functions are necessary, but they are not the profession’s core. AI is not replacing journalism. It is replacing repetition.

Journalism’s enduring value lies in judgment: deciding what deserves prominence, what requires verification, what context is missing, and what should not be published at all. These decisions are rarely efficient. They involve debate, hesitation, and accountability.

An algorithm can generate a logical summary in seconds. It cannot evaluate whether publishing a sensitive detail in a fragile political environment might create unintended consequences. It can detect anomalies in public datasets, but it cannot determine whether amplifying them responsibly serves the public interest.

This distinction becomes particularly significant in regions undergoing rapid digital transformation.

In Saudi Arabia, where Vision 2030 has accelerated technological integration across sectors, media institutions are confronting the realities of AI adoption. Automation promises greater efficiency, multilingual reach, and faster data processing. But alongside opportunity comes responsibility: Who establishes editorial safeguards? Who audits algorithmic bias? Who defines the ethical boundaries of automated content generation?

The conversation is no longer about whether AI tools should be used. They already are. The strategic question is governance.

Without clear editorial frameworks, automation risks weakening credibility rather than strengthening productivity. Speed without oversight can amplify inaccuracies at scale. Efficiency without accountability can erode public trust.

Across the Middle East, approaches differ. Some organizations are rapidly integrating AI into investigative research and audience analytics. Others proceed cautiously, prioritizing editorial control. Both strategies carry trade-offs. Resistance may limit competitiveness. Unregulated adoption may undermine confidence.

The real risk is not technological displacement. It is insufficient governance.


Consider investigative reporting. AI systems can now scan thousands of documents in hours, identifying patterns that might otherwise remain hidden. That capacity enhances journalistic capability. Yet deciding which patterns matter — and understanding their social or political implications — still requires human discernment.

Technology accelerates processing. It does not internalize consequences.

This transformation demands new professional competencies. Journalists must understand how algorithmic systems function, where bias enters training data, and how automated outputs should be reviewed before publication. Supervising intelligent systems is becoming as critical as traditional reporting skills.

As automation expands, credibility becomes even more central.

Trust cannot be optimized through code. It is built through consistency, transparency, and responsibility. In highly connected information environments, trust, once diminished, is difficult to restore.

The narrative of “replacement” therefore oversimplifies a more complex recalibration.

Repetitive functions will continue to decline. Data-heavy processes will increasingly be automated. But the demand for contextual intelligence, ethical clarity, and analytical depth will intensify. In a landscape flooded with machine-generated content, sound judgment becomes increasingly rare — and therefore more valuable.

I do not see AI as a threat to journalism. I see it as a stress test. It forces institutions to clarify what cannot be automated: editorial judgment, cultural literacy, and accountability.

If journalism remains essential in the AI era, it will not be because it outpaces algorithms in speed. It will be because it assumes responsibility for impact — something no system, however advanced, can carry on its own.

Speed is easy. Governance and judgment are not. That is where the future of journalism will be decided.

Mai Anati is managing editor at The Jordan Times.