But these are nothing when compared to the frightful vision laid out in Frank Herbert’s masterful novel Dune. In Dune, humans built a civilisation so advanced they ceded control to thinking-machines—until those machines decided they no longer needed humans. This resulted in a devastatingly brutal war, called the Butlerian Jihad, after which future use of AI was forbidden and human rule was established once more.
Rational human thought is civilisation’s greatest treasure. But as AI now infiltrates every aspect of life by our own design, are we marching towards a reckoning similar to Dune’s?
Thinking-Machines
If AI were to take over, where would it start? At work seems the obvious answer. But would that be so terrible? If thinking-machines worked on our behalf while the State sustained us, life might take on a sunnier aspect. More time for leisure, sports, reading, music and family—in short, utopia for all but the staunchest cynic.
While AI replacing jobs might not be that terrible, the real danger lies in us losing what makes us human: freedom. Not just that of movement or speech but the fundamental ability to judge a situation.
But what if tomorrow, Justice—that ultimate, though imperfect, arbiter of what is right or wrong—was handed over to thinking-machines, with no human judgment even entering into it?
Imagine walking down the street, only for a drone to hover beside you, stating without emotion that you are under arrest for this, that, or the other. No trial, no chance to enter a plea: the algorithm now decides your fate.
Then again, since traditional justice systems are frequently inefficient and politicised, automated rulings could be seen as progress.
AI-driven justice, once unthinkable, may soon be embraced—not because it is inherently fairer, but because it simply works more efficiently. The blindfolded figure of Lady Justice risks being replaced by the cold, Kafkaesque precision of algorithms, where due process is reduced to data points and verdicts are calculated rather than weighed. When judgment is outsourced, what then remains of human dignity?
Advanced Judges
The point is perhaps moot, since AI is already transforming law enforcement; predictive policing, facial recognition, and automated surveillance programs have already been rolled out. Algorithms like PredPol lay out patrol strategies in U.S. cities, while Clearview AI’s vast facial recognition database has enabled nearly a million searches. Surveillance networks, from Singapore’s AI-assisted drones to China’s extensive monitoring system, are seeing global expansion.
In the judiciary, AI-driven risk assessments influence sentencing and parole, despite concerns over bias—COMPAS, for instance, mislabels Black defendants as high-risk nearly twice as often as white defendants. While Malaysia and China have experimented with AI prosecutors and sentencing tools, Western nations are imposing restrictions. The EU’s AI Act bans real-time biometric surveillance, while the US adopts a mixed approach using both AI and human judgment. Meanwhile, authoritarian regimes like China or Singapore wholeheartedly embrace AI-driven policing.
AI excels at predicting crime, at least on paper. Algorithms crunch historical data, flagging high-risk areas and individuals with mathematical precision. But can numbers predict guilt before the fact? What about the presumption of innocence? Humans are unpredictable creatures; someone poised to commit a crime might have a last-second change of heart.
Or perversely, knowing a machine has likely already marked them as guilty, they could commit the crime anyway, since it might not matter. AI doesn’t hesitate, doesn’t doubt, doesn’t empathise. It simply determines the statistically most likely outcome. But herein lies the rub: people, unlike algorithms, are messy, irrational and gloriously unpredictable.
The real question isn’t whether AI can make us safer; it’s whether we’re willing to trade freedom for the illusion of safety. Perhaps a little insecurity is the price of being human. Ask yourself if you would like to live in a padded environment: completely safe, yet without the thrill that freedom—and its concomitant, risk—offers.
Safety over Freedom
However, the promise of security is a seductive one: 93% of the French consider it one of the most pressing concerns in their daily lives.
But security, when entirely handed over to the State, carries risk: if the State assumes total responsibility for your safety, it inevitably assumes control over other parts of your life. Why stop at catching criminals after the act? Why not preemptively identify “problematic” individuals and assign scores measuring their potential for dissent through, say, a social credit system?
For obvious reasons, safety is an appealing notion. But when the State alone defines what constitutes a threat, even the most ordinary behavior can become suspect. Orwell’s nightmare of Big Brother would not just track down criminals but anyone who could conceivably question the system’s values and, indeed, authority. The line between maintaining order and oppression is razor-thin. Taking Dune as a warning, let’s not surrender judgment—and with it, our very humanity—to AI for the illusion of perpetual security.
Statement
AI in law enforcement promises efficiency, but at what cost? While there is always a trade-off between freedom and security, trends like predictive policing and algorithmic sentencing remove the human element entirely, and have the potential to inaugurate an AI-driven authoritarianism.