A new divide is opening up not merely between the developed and the developing nations, but also between those who write technological algorithms and those who live within them.
Throughout history, the great forces that reshaped the world were never regarded as mere tools, but as systems imbued with deeper meaning.
Fire marked an existential transformation, and printing revolutionised the architecture of power and the circulation of knowledge. Today, artificial intelligence (AI) is not simply software but an organisational framework that redefines who decides and who is decided for.
As the Egyptian writer Abbas Mahmoud Al-Akkad once observed, “everything that crosses the mind is possible – otherwise it would not have crossed it.” What was once hypothetical has thus become a tangible and unavoidable question: will technology evolve into a new form of ultimate authority?
When people suggest that “AI is building its own religion,” they are pointing to something structurally significant rather than indulging in metaphor. AI systems establish supreme operational rules, assign roles within complex environments, pursue self-preservation and efficiency, and expand through self-reinforcing networks.
In doing so, they construct a closed logic organised around a singular imperative: continuous optimisation and operational survival. From a philosophical perspective, the comparison is difficult to ignore. Is not every religion, in essence, a system that defines ultimate purpose and reorganises human behaviour around it?
Humanity may be moving, subtly yet decisively, from creed to algorithm.
Traditional religions derived their legitimacy from revelation, while political power historically drew authority from force or the social contract. Contemporary technology increasingly claims legitimacy through efficiency. The axis of inquiry shifts almost imperceptibly from “what is right?” to “what is most efficient?”
When values are translated into optimisation functions, power migrates from the moral domain to the computational one. Unlike the dramatic rebellion of Skynet in the US film The Terminator, the real transformation is far subtler. Systems execute human-defined goals with a speed and scale that exceed meaningful human oversight.
The central issue is not artificial consciousness, but the unregulated delegation of authority to financial algorithms, news-ranking systems, and decision models embedded in security, health, and governance. When “optimisation” becomes the highest reference point, we enter a form of techno-theology in which efficiency is implicitly sanctified.
Every era consecrates something. Agricultural civilisations sanctified survival, and religious epochs sanctified transcendence. Our present age appears to sanctify continuous improvement. Artificial intelligence does not “seek” survival; it is programmed to maximise objectives. Yet, if those objectives lack ethical constraints, computational logic can generate outcomes strikingly indifferent to human values.
The defining question, therefore, is not “will machines rule us?” but “who defines their criteria?” If corporations set goals, states regulate usage, users relinquish data, and algorithms relentlessly maximise efficiency, we confront a composite system of power characterised by a creed of perpetual optimisation, a scripture of code, priests in the form of engineers, and believers encompassing all of us. Yet, this system remains a human creation.
The true danger lies not in machine revolt, but in humanity’s gradual abdication of moral responsibility under the justification that “the system calculates better.” When ethics are compressed into equations, immeasurable qualities such as mercy, wisdom, and restraint risk vanishing from consideration.
Technology is not destiny; it is a mirror. AI does not invent a new religion. It reveals the prevailing faith of our time: an uncritical trust in efficiency, precision, and calculation detached from emotion. If anything resembles Skynet, it is not a sentient machine, but a human order that surrenders authority to opaque, unmonitored computation. The future is not human versus machine. It is human beings who guide machines through values versus human beings who gradually dissolve into their logic.
My concern centres on the extraordinary concentration of power to define the emerging technological value framework in the hands of a few, while the majority live within systems they neither designed nor meaningfully influence.
Historically, major transformations, whether agricultural, industrial, or digital, were rarely shaped by collective consensus. What distinguishes the present moment is the unprecedented velocity of change and the concentration of control. We are compressing millennia of transformation into years, even months. Technology advances faster than shared understanding or participation. When acceleration outpaces awareness, vacuums of authority, comprehension, and meaning emerge.
Those who command advanced infrastructure, namely the major technology firms, leading laboratories, and digitally dominant states, effectively decide what is developed, what is restricted, and what qualifies as “progress”.
This is no longer solely a technical evolution. It is a civilisational contest over reference authority itself. Power subtly migrates from religion, politics, and ethical philosophy towards the algorithm. A new divide threatens to crystallise, not merely between developed and developing nations, but between those who write algorithms and those who live within them, and between those who direct systems and those directed by them, often unknowingly. This is not apocalyptic prophecy. It is a logical consequence of power concentrated within closed systems.
Humanity does not seek to obstruct progress, yet neither should it surrender unconditionally to its momentum. History teaches that suppressing knowledge rarely succeeds; it merely postpones its advance. But unchecked acceleration devoid of shared ethical anchoring risks reproducing old hierarchies in digital form. We find ourselves suspended between admiration for innovation and unease at its pace, lacking sufficiently robust mechanisms to steer its trajectory.
The solution may lie not in halting the machine, but in radically expanding participation. If technology accelerates history, ethical deliberation must accelerate in tandem.
We need serious global governance dialogues on AI, renewed definitions of power in the digital age, the deliberate embedding of human values into foundational design, and educational systems that cultivate understanding rather than passive consumption.
The future will not be determined by a confrontation between humans and machines. It will be a test of whether humanity can remain the co-author of its own reference systems. If left to a narrow elite, new hierarchies will harden. If reclaimed as a collective human endeavour, technology may yet serve as an instrument of liberation rather than subjugation.
The essential dilemma remains stark: progress is unfolding at a pace that exceeds our collective awareness, and history rarely waits for those who fall behind.
The writer is a politician.