DeepSeek's New Model Bets on Domestic Chips to Strengthen AI Industry

DeepSeek's latest model, DeepSeek-V4, integrates domestic chips, marking a strategic shift in China's AI infrastructure and reducing reliance on U.S. technology.

DeepSeek’s New Model and Domestic Chips

On April 29, Reuters reported that following the release of DeepSeek-V4, major Chinese tech companies like ByteDance, Tencent, and Alibaba are rapidly acquiring Huawei’s domestic chips.

This marks a significant and irreversible collective shift towards domestic chips, with a computing foundation based on local hardware taking shape.

NVIDIA’s founder and CEO, Jensen Huang, has issued a warning. He stated in an interview that if DeepSeek’s latest generation of large models is released first on Huawei’s advanced chip platform and fully adapted to it, the blow to the U.S.’s strategic position in the global AI field would be catastrophic.

Huang’s real concern is that once China’s top large models are bound to domestic computing foundations, the long-standing U.S. chip blockade will lose its critical leverage.

A key link in this chain has now been established. The new DeepSeek-V4 model, launched on April 24, has included both Huawei’s Ascend chips and NVIDIA chips in its hardware validation list.

The newly adapted Huawei Ascend inference chip is priced at only a quarter of NVIDIA’s, yet its single-card computing power is 2.87 times that of NVIDIA’s China-specific variant, a significant cost-performance advantage.
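The cost-performance claim can be made concrete with simple arithmetic. In the sketch below, the absolute prices and compute figures are normalized placeholders, not real data; only the two ratios reported in the article (one-quarter the price, 2.87 times the compute) are taken as given.

```python
# Back-of-the-envelope cost-performance comparison.
# Only the ratios (1/4 price, 2.87x compute) come from the article;
# the normalized baseline values are illustrative, not real figures.

nvidia_price = 1.0            # normalized card price
nvidia_compute = 1.0          # normalized single-card compute

ascend_price = nvidia_price / 4          # "a quarter of NVIDIA's" price
ascend_compute = nvidia_compute * 2.87   # 2.87x single-card compute

# Compute delivered per unit of money spent
nvidia_perf_per_cost = nvidia_compute / nvidia_price
ascend_perf_per_cost = ascend_compute / ascend_price

advantage = ascend_perf_per_cost / nvidia_perf_per_cost
print(f"Cost-performance advantage: {advantage:.2f}x")  # 2.87 * 4 = 11.48x
```

Under these two ratios alone, the compute-per-yuan advantage multiplies out to roughly 11.5x, before accounting for power, cooling, or software-stack costs.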

This is a tested, high-performance solution of “national model + national chip,” with compelling cost and security benefits.

Not long ago, the shortage of chips was a core bottleneck, especially in the critical area of model training, where domestic chips were largely absent or only able to participate in marginal tasks.

Now, a turning point has been reached. Multiple large models in China have completed adaptations to domestic chips, and 2026 is being referred to as the “year of domestic AI chip training implementation” in the industry.

A question inevitably arises: can large models run stably and efficiently on domestic hardware?

DeepSeek admits that the capability level of the new model still lags behind its main competitors, with a development trajectory approximately 3 to 6 months behind leading closed-source models.

Rather than waiting for external criticism or glossing over the situation, DeepSeek proactively acknowledges its shortcomings and confronts the gap. This reveals a pragmatic logic: where a technological gap objectively exists, catching up humbly is far more valuable than pretending to lead.

Judging from core parameters and measured performance, the new model shows impressive breakthroughs. It features 16 trillion total parameters and a million-token ultra-long context as standard. In mathematics, hard science and technology, and competitive coding, the high-performance version of the new model has surpassed all publicly evaluated open-source models, standing shoulder to shoulder with mainstream closed-source models.

In agentic programming in particular, it has topped the open-source leaderboard and is hailed as a “coding powerhouse.”

While rationally acknowledging the overall technological gap, DeepSeek has achieved breakthroughs in specific areas and has opened a crushing gap on cost. The DeepSeek-V4-Pro API has launched a limited-time promotion cutting prices to 40% of the usual rate, with input prices starting at 0.25 yuan per million tokens. In contrast, the weighted average input price for GPT-5.5 Pro is $30 per million tokens, making DeepSeek-V4-Pro over 700 times cheaper at current exchange rates.
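The headline ratio can be sanity-checked with back-of-the-envelope arithmetic. The exchange rate below is an assumption for illustration (roughly 7.2 CNY/USD), not a figure from the article.

```python
# Sanity check of the pricing gap described above.
# The CNY/USD exchange rate is an assumed figure, not from the article.

deepseek_input_cny = 0.25   # yuan per million input tokens (promotional price)
gpt_input_usd = 30.0        # dollars per million input tokens (weighted average)
cny_per_usd = 7.2           # assumed exchange rate

gpt_input_cny = gpt_input_usd * cny_per_usd   # convert to yuan per million tokens
ratio = gpt_input_cny / deepseek_input_cny    # how many times cheaper DeepSeek is

print(f"GPT-5.5 Pro input ≈ {gpt_input_cny:.0f} yuan per million tokens")
print(f"DeepSeek-V4-Pro input is ≈ {ratio:.0f}x cheaper")  # ~864x at this rate
```

At this assumed rate the gap works out to roughly 800x, comfortably above the article's "over 700 times" figure; the exact multiple moves with the exchange rate and with which GPT pricing tier is used as the baseline.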

Looking at the mainstream international large models, such as Anthropic’s Claude Opus series, OpenAI’s GPT-5.4, and Google’s Gemini 3.1 Pro series, their prices are also quite high.

While performance is only 3 to 6 months behind, the cost has created a massive gap. An asymmetric competition has already begun.

This is not just a victory for a single chip, but the maturation of an entire domestic computing ecosystem. Actual test data shows that after breaking away from the NVIDIA ecosystem, the new model’s end-to-end latency is 35% lower than on the previous cluster.

This indicates that domestic computing has entered a stable and efficient “usable” stage.

Goldman Sachs’ latest research report states that with the large-scale supply of Huawei’s Ascend 950 in the second half of this year, the new model’s pricing will drop significantly. This not only strengthens DeepSeek’s cost competitiveness but also provides a strong endorsement for migrating China’s top large models to domestic computing.

Crucially, DeepSeek’s choice is not an isolated case.

Other leading Chinese players, such as Alibaba’s Tongyi Qianwen, Zhiyun Qingyan, Baichuan Intelligence, and ByteDance’s Doubao, are all simultaneously pursuing extreme cost performance, chasing frontier capability, and building open-source ecosystems.

Although each company’s path may differ, the direction is highly consistent: breaking free from external dependencies and solidifying the domestic industrial chain.

The procurement of Huawei chips by Chinese tech companies is not merely a sentimental choice but a rational decision that balances cost accounting, supply chain security, and industrial autonomy.

With domestic chips as the foundation, the continuously improving self-controlled computing base is gradually solidifying the long-term confidence of China’s AI industry.
