The new CPUs will use Nvidia's NVLink interconnect technology to ensure high-speed communication with Nvidia's GPUs, the cornerstone of today's AI computing infrastructure. The collaboration aligns with Nvidia's broader push into the CPU space, notably through its Arm-based "Grace" processors.
Qualcomm's return to the data center CPU segment follows an abandoned attempt in the 2010s, when it developed Arm-based chips in collaboration with Meta Platforms. Those efforts were halted due to cost pressures and legal disputes. However, since its 2021 acquisition of Nuvia, a startup founded by former Apple chip designers, Qualcomm has quietly reignited its ambitions in this area.
In addition to ongoing talks with Meta, Qualcomm recently confirmed a letter of understanding with Saudi Arabian AI company Humain to co-develop a custom data center CPU, signaling serious intent to scale its presence in the server market.
“With the ability to connect our custom processors to Nvidia’s rack-scale architecture, we’re advancing a shared vision of high-performance, energy-efficient computing in the data center,” Qualcomm CEO Cristiano Amon said in a statement.
The move reflects broader trends in the AI hardware ecosystem, where tight integration between CPUs and GPUs is critical for delivering efficient and scalable AI infrastructure. Qualcomm's reentry, backed by strategic partnerships and advanced design capabilities, could bring a new wave of competition and innovation to this rapidly evolving market.
