The emergence of DeepSeek's artificial intelligence (AI) models is giving Chinese chip manufacturers, including Huawei, a better chance to compete in the domestic market against more powerful U.S. processors.

For years, Huawei and its Chinese peers have struggled to match Nvidia in building the high-end chips used for model training, the process in which data is used to teach algorithms to make accurate decisions.

DeepSeek's models, however, emphasize "inference," the phase in which a trained AI model draws conclusions, and prioritize computational efficiency over raw processing power.

Analysts suggest that this approach may help narrow the performance gap between Chinese-made AI processors and their more powerful U.S. equivalents.

Recently, Huawei and other Chinese AI chip manufacturers, including Hygon, Tencent-backed EnFlame, Tsingmicro, and Moore Threads, have announced that their products will support DeepSeek models, although specific details remain scarce.

Huawei has chosen not to comment, and Moore Threads, Hygon, EnFlame, and Tsingmicro did not respond to inquiries from Reuters for additional information.

Industry leaders now predict that DeepSeek's open-source models and low fees could accelerate AI adoption and the development of practical applications, helping Chinese companies work around U.S. export restrictions on the most advanced chips.

Even before DeepSeek rose to prominence, customers such as ByteDance regarded products like Huawei's Ascend 910B as better suited to less computationally demanding inference tasks, the stage after training in which a model makes predictions or performs work, such as powering chatbots.

In China, numerous companies, ranging from automakers to telecommunications providers, have announced intentions to incorporate DeepSeek's models into their products and operations.

This development plays to the strengths of Chinese AI chipset vendors, according to Lian Jye Su, principal analyst at Omdia. While Chinese AI chips struggle to compete with Nvidia's GPUs in AI training, he said, inference workloads are a more accessible market, one that rewards deep localized and industry-specific expertise.

NVIDIA CONTINUES TO LEAD

Bernstein analyst Lin Qingyuan noted that while Chinese AI chips are competitively priced for inference tasks, their appeal is largely confined to the Chinese market, since Nvidia's offerings still outperform them even at inference.

Despite U.S. export restrictions preventing Nvidia's most advanced AI training chips from being sold in China, the company can still provide less powerful training chips that are suitable for inference applications.

On Thursday, Nvidia published a blog post describing rising inference-time compute as a new scaling law, arguing that its chips will be essential for making DeepSeek and other reasoning models more useful.

Beyond raw computing power, Nvidia's CUDA—a parallel computing platform that enables software developers to leverage Nvidia GPUs for a variety of computing tasks beyond just AI or graphics—has become a vital element of its market leadership.

Historically, most Chinese AI chip manufacturers have avoided challenging Nvidia head-on: rather than urging users to abandon CUDA, they have claimed their chips are compatible with it.

Huawei has been the most aggressive in pursuing independence from Nvidia, developing its own CUDA equivalent, the Compute Architecture for Neural Networks (CANN). Industry experts, however, point to the difficulty of persuading developers to migrate away from the established CUDA ecosystem.

Omdia's Su said Chinese AI chip companies still lag in software performance, pointing to CUDA's extensive libraries and mature tooling, which reflect years of sustained investment.