The real danger isn’t AI, it’s human stupidity
https://arab.news/vj2qc
By the time you read this, someone, somewhere, will have blamed artificial intelligence for something profoundly human.
A student used it to cheat. A company used it to cut corners. A government used it to surveil. A fraudster used it to deceive. The culprit, we are told, is the algorithm. But perhaps the real danger isn’t artificial intelligence. It is natural stupidity.
This is not a fashionable argument. In an era where headlines oscillate between utopia and apocalypse, nuance struggles for oxygen. AI is either our salvation or our extinction. Venture capitalists promise productivity miracles. Doomsayers warn of rogue systems plotting humanity’s demise. Panels are convened. Regulations drafted. Ethics boards assembled.
Yet amid all the noise, we risk misunderstanding the nature of the threat.
Artificial intelligence does not possess malice. It has no ego, no resentment, no ideology. It does not wake up offended, nor does it go to bed angry. It does not crave power or fear irrelevance. It does not spread misinformation because it prefers chaos to order. It does what it is designed and incentivized to do.
The uncomfortable truth is that the harm attributed to AI is, in most cases, a magnification of human flaws. Bias in algorithms mirrors bias in data, which mirrors bias in society. Disinformation campaigns scale because humans create and share falsehoods. Automation displaces workers because executives choose efficiency over reskilling. Surveillance expands because policymakers prioritize control over liberty. The machine may accelerate the impact. But the direction is set by us.
When recommendation algorithms amplify outrage because outrage drives engagement, is that AI run amok? Or a business model exploiting predictable human psychology?
Deepfakes can undermine trust. Automated bots can flood public discourse. AI-generated phishing emails can deceive more convincingly than their human-crafted predecessors. In the wrong hands, these tools can destabilize markets, reputations and even democracies. But the “wrong hands” belong to people.
Blaming AI for these outcomes is intellectually convenient. It externalizes responsibility. It suggests that the threat is an autonomous system beyond our control, rather than a reflection of our own incentives and governance failures.
Natural stupidity, on the other hand, is harder to regulate.
Stupidity in this context is not a lack of IQ. It is the failure to recognize limits of technology, of institutions, of ourselves.
It manifests as short-termism in boardrooms, chasing quarterly earnings at the expense of long-term resilience. It appears as regulatory paralysis, with policymakers either overreacting with blunt bans or underreacting with laissez-faire complacency. It shows up as digital illiteracy, with users sharing fabricated content without scrutiny. Most dangerously, it thrives on overconfidence.
The theologian Dietrich Bonhoeffer once suggested that stupidity is more dangerous than malice because it resists reason. A malicious person can be confronted; a stupid one is convinced of their righteousness. In the age of AI, this insight feels prescient.
If we truly wish to mitigate the dangers of AI, we must address the human conditions that shape its deployment.
First, incentives matter. Companies will build and deploy systems aligned with profit unless regulation and market demand reward responsibility. Ethical guidelines without enforcement are public relations exercises. Transparency without accountability is theater.
Second, education is essential. Digital literacy should not be an optional skill but a civic necessity. Citizens must understand not only how to use AI tools, but how they work — their limitations, their biases, their susceptibility to manipulation. A society that cannot critically evaluate information is vulnerable regardless of the technology involved.
Third, governance must evolve. This does not mean stifling innovation with fear-driven prohibitions. It means crafting adaptive frameworks that balance experimentation with safeguards. Policymakers should collaborate with technologists, ethicists and civil society, not react after crises erupt.
Finally, humility is indispensable.
We must resist both techno-optimism and techno-panic. AI will not solve all our problems, nor will it inevitably destroy us. It is a tool — extraordinarily powerful, yes, but still a tool. Its trajectory will be determined less by silicon and more by character.
If misinformation spreads faster with AI, we must ask why truth spreads so slowly. If automation displaces workers, we must question why safety nets lag behind innovation. If deepfakes erode trust, we must examine why trust was so fragile to begin with.
AI is a mirror held up to humanity. It reflects our brilliance and our blind spots. It amplifies our creativity and our carelessness. It exposes the gap between our values and our behavior.
The real danger is not that machines will become more like humans. It is that humans, empowered by machines, will refuse to become wiser.
In the end, the challenge of AI is not technical but moral. It demands better judgment, stronger institutions and a renewed commitment to responsibility. We do not need to outsmart our machines nearly as much as we need to outgrow our own stupidity. And that, unlike code, cannot be debugged overnight.
• Rafael Hernandez de Santiago, viscount of Espes, is a Spanish national residing in Saudi Arabia and working at the Gulf Research Center.