A year later, there was indeed a Deepseek effect (and even two), but not the one you think

The arrival of Chinese AI has in no way discouraged huge investments in AI infrastructure, but it has facilitated new uses and the rise of China.

At the end of January 2025, the Chinese company DeepSeek burst into the world of AI like a bull in a china shop. With its economical AI, the Chinese start-up seemed to reshuffle the cards and call into question big tech's pharaonic investments in the infrastructure needed to train AI models. If it was possible to train a competitive model with limited hardware resources, what was the point of investing tens of billions of dollars in giant data centers packed with state-of-the-art Nvidia chips? Wall Street evidently shared that view: tech stocks suffered a mini-crash in the hours following the young Chinese startup's announcement.

However, the rest of the year in no way confirmed these fears, quite the contrary. Combined AI capital expenditure by Amazon, Microsoft, Alphabet, Oracle and Meta reached $443 billion in 2025, up 73% from the previous year, which was itself already up 63% from 2023. So was the DeepSeek moment a damp squib after all?

A windfall effect serving new uses

Rather than overestimated, the DeepSeek phenomenon was misinterpreted. The drop in the cost of tokens did not lead to a drop in infrastructure investment; on the contrary, it opened a window of opportunity to develop new use cases that were unthinkable at higher token prices, thereby driving investment up.

“At a time when we are introducing reasoning models everywhere, the orders of magnitude have radically changed. We are no longer dealing with hundreds of tokens per request, but with thousands, and probably soon millions. Thus, in a market where demand is unlimited, we spend everything we can afford to spend. In the same way, the drop in the cost of transistors did not shrink the semiconductor market, quite the contrary”, explains Antoine Chkaiban, consultant at New Street Research, a market intelligence firm.

By reducing inference costs by a factor of 20 to 50 compared with GPT-4, DeepSeek made AI affordable for new uses (video translation, agentic AI, etc.), prompting hyperscalers to go full throttle.

This is the famous Jevons paradox, or rebound effect: when technological progress increases the efficiency with which a resource is used, total demand increases, leading to more investment.
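The rebound effect can be made concrete with a little arithmetic. The sketch below is purely illustrative (the baseline price, volume and elasticity are hypothetical figures, not numbers from the article): with constant-elasticity demand and an elasticity above 1, dividing the token price by 20 increases total spend instead of shrinking it.

```python
# Illustrative sketch of the Jevons paradox. All numbers are hypothetical.
# If demand for tokens is price-elastic (elasticity > 1), a large price
# drop *raises* total spend, because usage grows faster than prices fall.

def total_spend(price: float, baseline_demand: float,
                baseline_price: float, elasticity: float) -> float:
    """Constant-elasticity demand: Q = Q0 * (P / P0) ** (-elasticity)."""
    demand = baseline_demand * (price / baseline_price) ** (-elasticity)
    return demand * price

P0 = 30.0      # hypothetical baseline: $30 per million tokens
Q0 = 1.0e6     # hypothetical baseline: one million requests

before = total_spend(P0, Q0, P0, elasticity=1.5)
after = total_spend(P0 / 20, Q0, P0, elasticity=1.5)  # price divided by 20

print(f"spend before: ${before:,.0f}")
print(f"spend after the 20x price drop: ${after:,.0f}")
print(f"spend ratio: {after / before:.2f}x")
```

With these toy numbers, total spend grows by a factor of about 4.5 even though each token costs 20 times less, which is the pattern the article describes at market scale.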

A step forward rather than a breaking point

Now that the dust has settled, DeepSeek looks more like a significant step toward greater AI efficiency than a true breaking point.

“DeepSeek’s strength consisted less in disruptive innovation as such than in the ability to bring together several existing optimization methods (and some already applied by companies like Mistral), such as model distillation, compression of attention matrices, mixture of experts (MoE), low-precision arithmetic, and to combine them to be more effective,” explains Antoine Chkaiban.
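Of the optimization methods listed in the quote, mixture of experts (MoE) is the easiest to illustrate in a few lines. The toy sketch below (hypothetical scalar "experts", not DeepSeek's actual code) shows the core idea: a router activates only the top-k experts per token, so compute per token is roughly k/n of the dense equivalent even as total parameters grow.

```python
import math

# Toy mixture-of-experts routing (illustrative only, not DeepSeek's code).
# A router scores each expert; only the top-k experts run per token, so
# active compute per token is k/num_experts of the dense equivalent.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, expert_weights, router_weights, k=2):
    """Route a scalar 'token' through the top-k of n scalar 'experts'."""
    scores = softmax([w * token for w in router_weights])
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    norm = sum(scores[i] for i in top)
    # Weighted sum over only the selected experts (sparse activation).
    out = sum(scores[i] / norm * expert_weights[i] * token for i in top)
    return out, top

experts = [0.5, -1.0, 2.0, 0.1, 1.5, -0.3, 0.9, 0.2]   # 8 "experts"
router = [0.3, -0.2, 0.8, 0.1, 0.5, -0.4, 0.6, 0.0]    # router scores
out, active = moe_forward(1.0, experts, router, k=2)
print(f"active experts: {active} of {len(experts)}")
```

Here only 2 of 8 experts run for the token, which is why MoE models can grow their parameter count without a proportional growth in inference cost.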

These methods have since inspired developer communities and hyperscalers: the AWS Bedrock, Azure AI and Vertex AI platforms deploy the DeepSeek-R1 and V3 models for inference.

The rise of Chinese open source

In another respect, however, DeepSeek did mark a fundamental milestone: the rise of Chinese open source AI models, which are now beginning to compete with American proprietary models.

The Chinese start-up Moonshot AI has just unveiled Kimi K2.5, which it claims offers video generation and agentic AI capabilities surpassing those of the three main American AI models: Claude (Anthropic), GPT (OpenAI) and Gemini (Google).

At the same time, e-commerce giant Alibaba announced its latest generative AI model, Qwen3-Max-Thinking, which it says outperforms the American models on “Humanity’s Last Exam”, a benchmark for large language models considered the most difficult to date. Hong Kong-listed shares of Chinese tech giant Baidu, meanwhile, soared to their highest level in nearly three years following the release of its latest generative AI model, Ernie 5.0. Baidu intends to accelerate the pace further with the IPO of its AI-chip subsidiary Kunlunxin, which aims to compete with America's Nvidia and AMD.

According to recent statements by Demis Hassabis, CEO of Google DeepMind, Chinese open source models are only “a few months” away from those developed in the United States.

The world of AI is thus confronted with an interesting paradox, in which the American capitalist power offers proprietary models, while the Chinese communist regime intends to compete with open models. For Antoine Chkaiban, China’s bet on open source follows a classic strategy deployed by players seeking to catch up in a given technological ecosystem. “Focusing on open source is the best way to develop your ecosystem when you are behind. AMD is doing the same thing to try to compete with Nvidia, which for its part offers closed technology.”

On the importance of the full stack

But beyond the question of whether Chinese models are more efficient than their American counterparts, the next open question in the AI market is monetization, and more specifically which part of the chain will capture the most added value: the semiconductor layer, the cloud layer, the models themselves, or the applications.

“In my opinion, the winners will be those able to position themselves at every level, across the full stack, because today's systems are so intertwined that progress requires innovating in concert at every layer.” Google, which designs its own chips and owns its cloud, its models and its applications, is well positioned in this regard, while the Chinese players suffer from lacking access to the best AI chip technology available, Nvidia's. This is why China is working hard to catch up on that front.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.