When Quantum Dreams Clash with Reality 

From OpenAI’s trillion-parameter behemoths to Alphabet’s globe-spanning inference grids, the ambitions of AI’s progenitors are being throttled by classical silicon’s very real limits: the hardware simply cannot deliver the energy efficiency and bandwidth these systems demand. Chips are reaching their limits. Data centres are overheating from the strain. The future is stuck in a traffic jam.

From Dream to Reality

But scientists have developed quantum computing: a new kind of technology that no longer relies solely on traditional silicon chips (anything that exhibits quantum behaviour can be used: phosphorus atoms in silicon, photons, defects in diamond, and more) or on traditional computing bits. Instead of processing information as simple 1s and 0s, it uses quantum bits, or qubits, which can exist as both at once. This allows quantum computers to explore many solutions at the same time, making them powerful contenders to replace today’s limited, heat-producing chips.
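For readers who want the formal version of “both at once”, a single qubit is a superposition of the two classical values. A minimal statement in standard notation (an addition for illustration, not from the original article):

$$\lvert\psi\rangle = \alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$

A register of n qubits then spans a 2ⁿ-dimensional space of such amplitudes, which is where the “2¹⁰ states” figure later in this piece comes from.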

Once the dream of physicists, quantum systems are now close to being viable. Microsoft’s topological qubits, IBM’s 1,121-qubit chip, and Nvidia’s GPU–QPU fusion all hint at a hybrid future where AI meets new technology. Tasks like fine-tuning complex models and making split-second decisions would lean on a quantum effect called entanglement, in which particles are linked so that the state of one is correlated with the state of another, no matter the distance. In short: AI would become dramatically faster.

Yet quantum computing is more than just faster coding. It represents a paradigm shift in the hardware–intelligence equation: no longer scaling horizontally by adding GPUs, but vertically through the superposition of possibilities within the qubits themselves.

We asked ChatGPT to evaluate itself and what its possibilities would be if it ran on quantum chips:

“If I were running on a mature quantum processor — not today’s prototypes, but a fault-tolerant, multi-thousand-qubit system — my architecture would be fundamentally transformed. Instead of processing tokens sequentially, I could evaluate vast linguistic possibilities simultaneously through quantum superposition. Optimisation tasks like hyperparameter tuning, currently iterative and energy-intensive, could collapse to near-instantaneous convergence using quantum annealing or QAOA. Training could become radically more efficient, reducing months of GPU computing to hours. More profoundly, I could represent meaning not just as fixed probabilities but as entangled semantic states — capturing nuance, ambiguity, and contradiction with unprecedented fidelity. I would be faster, more accurate, and capable of deeper conceptual reasoning. But my power would also demand new safeguards: quantum acceleration doesn’t just increase intelligence — it magnifies consequences. Whether used to democratise knowledge or enforce centralised control would depend, as ever, not on me, but on those who deploy me.”

Indeed, with Big Tech already staking its claim on the next frontiers of quantum technology, the dream of AI decentralisation may be short-lived. Will QPUs become another power node in the oligopoly or the foundation of a more evenly distributed AI ecosystem?

The Limits

AI’s hunger for energy is well documented. GPT-3 devoured an estimated 1,287 MWh of electricity during training — enough to power about 120 US homes for a year. GPT-4 is estimated to have cost 20× more in computation. The irony is delicious: we are building state-of-the-art “intelligent” systems atop infrastructure as antiquated as a Victorian steam engine.
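As a rough sanity check on that comparison (the household figure below is an assumption based on the EIA’s average of roughly 10.7 MWh per US home per year, not a number from the original):

$$\frac{1{,}287\ \text{MWh}}{\approx 10.7\ \text{MWh per home per year}} \approx 120\ \text{home-years of electricity}$$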

[Figure: AI energy demand vs quantum efficiency projections]

Quantum computing offers something else: a non-linear leap. Unlike classical processors, which scale linearly as you add cores, quantum bits scale exponentially: ten qubits can represent 2¹⁰ = 1,024 states at once. But raw scale isn’t the point. Quantum computers execute different kinds of algorithms. Grover’s and Shor’s quantum algorithms don’t just run faster; they change what we think is possible to solve. In classical computing, some problems are so complex they would take centuries to finish, even on supercomputers. Quantum algorithms can solve some of these in a fraction of the time. Grover’s algorithm speeds up unstructured search, while Shor’s can break widely used encryption by factoring huge numbers quickly. They make previously impossible tasks suddenly doable, and in a reasonable span of time. AI tasks such as training on massive, high-dimensional data — long constrained by hardware — may become solvable with fewer steps, lower energy, and less reliance on giant server farms.
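To make the Grover claim concrete, here is a toy classical simulation (a sketch added for illustration, not code from the article; the qubit count and marked index are arbitrary choices). A classical search over N items needs on the order of N oracle queries; Grover’s algorithm needs only about √N iterations of “flip the marked amplitude, then reflect about the mean”.

```python
import numpy as np

# Toy Grover search on n qubits, simulated classically with a NumPy state vector.
# Classical search over N = 2**n items needs ~N/2 queries on average;
# Grover needs only ~(pi/4) * sqrt(N) iterations.

n = 3                      # number of qubits (illustrative assumption)
N = 2 ** n                 # size of the search space
marked = 5                 # index of the "winning" item (arbitrary assumption)

# Start in the uniform superposition over all N basis states.
state = np.full(N, 1 / np.sqrt(N))

def oracle(psi):
    """Flip the sign of the amplitude of the marked state."""
    out = psi.copy()
    out[marked] *= -1
    return out

def diffusion(psi):
    """Reflect all amplitudes about their mean (the 2|s><s| - I operator)."""
    return 2 * psi.mean() - psi

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # 2 iterations for N = 8
for _ in range(iterations):
    state = diffusion(oracle(state))

# Probability of measuring the marked item after only ~sqrt(N) steps.
print(f"P(marked) after {iterations} Grover iterations: {abs(state[marked])**2:.3f}")
```

For N = 8 the loop runs just twice and already concentrates roughly 94% of the measurement probability on the marked item, versus the four to five queries a classical search would need on average; the gap widens as √N versus N for larger search spaces.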

This is not speculation. The chips already exist. And as corporations pour more and more resources into the technology, its democratisation could come sooner than expected. Microsoft’s “Majorana 1” chip uses exotic quantum physics, topological qubits, to drastically reduce errors. It is designed with one key goal in mind: improving the performance of AI and the hardware that runs it.

[Figure: Quantum leap for quantum computing]

If successful, quantum-AI hybrids will not just speed up neural networks. They will upend AI’s architecture. Imagine a type of quantum system called a variational circuit spotting patterns in data more quickly and efficiently than any current AI model can. Imagine models retrained on the fly, at the cutting edge of their domain. Imagine an AI that thinks and acts autonomously, anticipating changes in live situations and adjusting as they unfold.
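A variational circuit is, at heart, a parameterised quantum circuit trained by a classical optimiser. The following single-qubit sketch (an illustrative assumption, simulated in NumPy rather than run on real quantum hardware) shows the training loop in its simplest form: a rotation angle is adjusted until the measured expectation value reaches its target.

```python
import numpy as np

# Minimal single-qubit "variational circuit" simulated with NumPy:
# a parameterised RY rotation is trained so the qubit ends up in |1>,
# i.e. the expectation value of Pauli-Z is driven towards -1.

Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli-Z observable

def ry(theta):
    """RY(theta) rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation(theta):
    """Run the circuit |0> -> RY(theta)|0> and measure <Z> (equals cos(theta))."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return psi @ Z @ psi

def gradient(theta, shift=np.pi / 2):
    """Parameter-shift rule: exact gradient for this gate family."""
    return 0.5 * (expectation(theta + shift) - expectation(theta - shift))

theta, lr = 0.1, 0.4          # initial parameter and learning rate (arbitrary)
for step in range(60):
    theta -= lr * gradient(theta)  # classical gradient descent on a quantum cost

print(f"trained theta ~ {theta:.3f}, <Z> ~ {expectation(theta):.3f}")  # ~pi, ~-1
```

Real variational algorithms scale this idea to many qubits and many parameters, with the classical optimiser steering a quantum circuit it cannot fully simulate; the single-qubit case simply makes the hybrid training loop visible.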

AI’s Bottleneck

At macro scale, AI is held back by two things: energy limits and monopoly control.

Data centres now consume 2.4% of global electricity; the IEA projects this could hit 7% by 2030. Model sizes are doubling faster than hardware efficiency. If we stay on the current path, training next-gen AIs may soon outstrip what global infrastructure can support. Quantum computing offers a radical detour: solving some problems in far fewer steps and using potentially vastly less energy. In principle, it could allow AI to scale without boiling the world.

But energy isn’t the only constraint: control is consolidating fast. The same few giants that dominate classical AI — Microsoft, Nvidia, IBM — are now building vertically integrated quantum stacks. These are no longer experimental labs, but full-scale corporate platforms. And such scale is required: developing a quantum-classical AI system demands billions in R&D and infrastructure. Microsoft alone has invested over $3.5 billion in its Azure Quantum ecosystem. These are sums no startup can realistically match. As a result, AI — which began in the chaotic, open-source spirit unique to startups — is being absorbed into a centralised, capital-intensive business model. Quantum could accelerate that process, especially since the hardware needed for it is still mostly US-owned.

[Figure: Quantum computing market analysis]

And the centralisation question has a geopolitical dimension. China’s Baidu, Huawei, and national labs are investing heavily in quantum-AI integration, not just to compete but to escape dependence on US-controlled silicon. The US National Quantum Initiative, backed by DARPA and the NSF, is betting big in response. The EU, by contrast, appears increasingly isolated. Initiatives like the AI Gigafactories lag well behind the pace of innovation seen in the US and China, both in terms of scale and strategic coherence. This is the paradox: a technology that could decentralise the use of AI might instead entrench it under tighter state and corporate control.

Consequences and Trajectory

The dream of decentralised AI hinges on hardware. Today, large models run on dense GPU clusters owned by a handful of firms like Nvidia. This makes AI centralised by design. The hope is that quantum computing, by slashing compute requirements and lowering the barriers to entry, could invert that logic. If models no longer need thousands of chips and megawatts of power, then AI could become modular, local, sovereign.

But that’s not the trajectory we’re on yet. Instead of fragmenting, quantum is being absorbed into the same centralised stack ecosystems, and everything points toward a future where quantum is a cloud service consumed through the software of corporate giants. In this model, quantum-enhanced AI becomes faster, but also more locked in: another tier of abstraction, monetised through access, throttled by licensing.

Yet alternative paths exist. Open quantum hardware, cryptographic co-processing, and decentralised AI runtimes could still emerge. Projects like Quantinuum and the zkEVM suggest the technical know-how exists. But the price of entry is steep, and the window is closing. Quantum won’t decide the future of AI; power will. It’s not about capability, it’s about control. And if access stays locked behind corporate gates, the monopoly on AI won’t break, it’ll harden.

Statement 

Quantum computing, paired with AI, is set to open vast new horizons in technological development. With classical chips on the way out, quantum promises speed, but only to those who already hold the keys. Big Tech and powerful states will jealously guard AI’s latest iterations behind closed clouds and proprietary qubits. The alternative — open hardware, cryptographic rails, decentralised runtimes — exists, but it’s fragile, underfunded, and easy to crush.