When everyone rushes to mine gold, it is the sellers of shovels who reap the biggest rewards. That long-standing maxim neatly captures today’s artificial intelligence boom. The real profits are not flowing to the AI labs building sophisticated models, many of which remain loss-making, but to the companies supplying the chips, data centres, and infrastructure that power them.
Nvidia’s extraordinary financial performance illustrates this reality. Record earnings over the past year have propelled the chipmaker’s market valuation beyond $5 trillion, underscoring just how lucrative the AI infrastructure trade has become.
Yet a critical question looms: what if the world is investing in the wrong kind of shovel?
Analysts estimate that the global expansion of AI infrastructure — spanning data centres, semiconductors, power generation, and cooling systems — will cost several trillion dollars in the coming years. Much of this investment is being designed specifically to support large language models (LLMs) such as OpenAI’s ChatGPT and Anthropic’s Claude.

The risk is that this massive build-out assumes a single future for AI. If progress shifts towards smaller, more efficient models, or if technological breakthroughs reduce the need for energy-hungry chips, vast amounts of expensive infrastructure could become underutilised or even economically unviable. In effect, the global economy is hard-wiring one vision of AI into its foundations — placing all its bets on a single path.
Signs of this vulnerability are already emerging. Training cutting-edge models has become staggeringly expensive, yet each new generation is delivering diminishing returns. GPT-5, for example, reportedly required hundreds of thousands of Nvidia chips, but produced only incremental improvements in performance.
As costs soar and gains narrow, the question is no longer whether AI will reshape the world — but whether the world is investing wisely enough to adapt if that vision changes.
If the returns from ever-larger AI models continue to flatten, the world risks locking itself into an artificial intelligence system that may never earn back the immense cost of the hardware it depends on. Yet funding, talent, and research attention are becoming increasingly concentrated on large language models, almost all of which rely on the same underlying transformer architecture.
Smaller models and alternative approaches receive far less support. At precisely the moment when the field would benefit from greater diversity, it is narrowing instead. Research into promising areas such as liquid neural networks or neuro-symbolic AI risks slowing as capital and expertise continue to funnel toward transformer-based systems.

History offers a cautionary parallel. In the late 19th century, American railroad companies laid far more track than demand could justify. It was often more profitable to build new lines than to operate them — a speculative boom that ultimately ended in collapse.
A century later, telecoms firms repeated the pattern, spending billions to lay fibre-optic cables in anticipation of explosive internet growth. Much of that capacity remained unused for years, triggering bankruptcies and write-downs across the sector.
The question now is whether the AI boom is heading down a similar path of speculative overbuilding. Early signs of strain are already visible. When Google recently launched Gemini 3 — a chatbot widely viewed as surpassing OpenAI’s offerings — it trained the model on its own tensor processing units rather than Nvidia’s chips, triggering a sharp sell-off in Nvidia’s shares.
The episode highlighted how quickly the assumptions underpinning today’s AI infrastructure can change. Earlier this year, China’s DeepSeek demonstrated cutting-edge performance with its R1 model without relying on Nvidia’s most advanced, power-hungry processors.
If more companies adopt alternative hardware or more efficient model designs, much of the AI infrastructure being built today could become uneconomic. Data centres, chip fabrication plants, and power systems designed specifically for current LLMs would be difficult to repurpose. The trillions invested may not vanish overnight, but their returns could.
The danger extends beyond overbuilding. Power and capital are becoming increasingly concentrated in the hands of a few companies. Since late 2022, the AI rally has driven technology valuations to record highs, a surge that some observers warn is being fuelled as much by fear of missing out as by underlying fundamentals.
Those gains are strikingly concentrated. Eight of the ten largest companies in the S&P 500 are technology firms, together accounting for more than a third of the entire US stock market. Such concentration leaves investors vulnerable to sharp corrections if sentiment shifts.
This is a systemic risk. When so much capital, infrastructure, and market value are tied to a narrow set of companies — and to a single vision of AI — any disruption could ripple through the global economy.
That dependence is most visible at OpenAI. The ChatGPT maker has lined up agreements that could see it spend more than $1 trillion on computing power, financed largely by other technology giants. These partnerships are weaving a dense web of financial and technical dependencies that risk locking the AI industry into a limited set of suppliers and architectures.
At some point, the scale of this spending will have to justify itself. So far, the payoff remains uncertain. Research from MIT suggests that up to 95 per cent of AI projects deliver no measurable returns. Yet investment continues to surge, driven by the fear of being left behind in what many believe could be the next industrial revolution.
For now, tangible productivity gains from generative AI remain elusive. Most companies are still experimenting, even as development costs rise relentlessly. The question is not whether AI will transform the economy — but whether the world is investing in the right version of it.