Baidu is stepping up its role in China’s artificial intelligence race with the introduction of two homegrown AI chips and a suite of large-scale computing systems designed to bolster domestic technological self-reliance amid tightening U.S. export controls.

At its annual Baidu World technology conference, the Chinese tech giant unveiled the M100 and M300 semiconductors, marking the latest phase in its long-running chip development program that began in 2011. The M100, optimized for AI inference tasks, is scheduled for release in early 2026, while the M300—capable of handling both training and inference workloads—is expected to follow in 2027.

AI training involves building models by analyzing massive datasets, while inference applies those models to real-world tasks such as generating text or processing user queries.
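The split described above can be sketched in miniature. The toy model below is purely illustrative (a hand-rolled least-squares fit, nothing resembling Baidu's stack): the `train` step learns parameters from data, and the `infer` step applies the frozen parameters to new inputs.

```python
# Hypothetical toy example of the training-vs-inference split.
# "Training" fits model parameters to data; "inference" applies
# the fitted model to inputs it has not seen before.

def train(xs, ys):
    """Training: estimate slope and intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def infer(model, x):
    """Inference: apply the already-trained model to a new input."""
    slope, intercept = model
    return slope * x + intercept

model = train([1, 2, 3, 4], [2, 4, 6, 8])  # learns y = 2x
print(infer(model, 10))                    # → 20.0
```

Chips like the M100 target only the second, cheaper step; the M300 is meant to handle both.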

Baidu also introduced two “supernode” systems designed to link multiple chips through high-speed networking, compensating for individual performance limits and creating more scalable AI infrastructure. One of these, the Tianchi 256, will integrate 256 of Baidu’s existing P800 chips and debut in the first half of 2026, followed by a 512-chip configuration in the second half of the year.
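The idea behind a supernode can be sketched with a simple data-parallel pattern (an illustrative Python analogy only; chip count, the `chip_infer` placeholder, and the threading model are all assumptions, not Baidu's design): when one accelerator caps out, work is sharded across many linked chips and the results are merged.

```python
# Illustrative sketch of supernode-style scaling: shard a batch across
# several "chips", process the shards in parallel, and merge the outputs.
from concurrent.futures import ThreadPoolExecutor

NUM_CHIPS = 4  # a real Tianchi node would link 256 or 512 chips

def chip_infer(shard):
    """Stand-in for one chip running inference on its shard of the batch."""
    return [x * 2 for x in shard]  # placeholder "model"

def supernode_infer(batch):
    """Split the batch across chips, run them in parallel, merge outputs."""
    shards = [batch[i::NUM_CHIPS] for i in range(NUM_CHIPS)]
    with ThreadPoolExecutor(max_workers=NUM_CHIPS) as pool:
        results = pool.map(chip_infer, shards)
    merged = [None] * len(batch)
    for i, shard_out in enumerate(results):
        merged[i::NUM_CHIPS] = shard_out  # reinterleave shard results
    return merged

print(supernode_infer([1, 2, 3, 4, 5, 6, 7, 8]))  # → [2, 4, 6, 8, 10, 12, 14, 16]
```

The aggregate throughput grows with the number of linked chips, which is why vendors compete on interconnect scale as much as on single-chip performance.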

The company’s latest hardware push comes as China’s tech sector accelerates efforts to produce advanced semiconductors domestically, following U.S. curbs on exports of Nvidia and other American AI chips. Competitor Huawei has already fielded its CloudMatrix 384 system—built on 384 Ascend 910C chips—which analysts say rivals top-tier U.S. offerings like Nvidia’s GB200 NVL72.

In addition to hardware, Baidu showcased a new iteration of its Ernie large language model, highlighting expanded capabilities in text, image, and video understanding.

Industry observers view Baidu’s latest announcements as part of a broader strategy to secure China’s AI supply chain and reduce dependence on foreign technology, positioning the firm as a key player in the country’s quest for computational sovereignty.