Intel has announced plans to launch a new artificial intelligence (AI) graphics chip for data centers next year, marking a fresh bid by the company to reclaim relevance in the fast-growing AI hardware market dominated by Nvidia and AMD.

The new chip, named Crescent Island, was unveiled on Tuesday at the Open Compute Summit in San Jose by Intel’s Chief Technology Officer, Sachin Katti. He described it as a GPU optimized for energy efficiency and AI inference, designed to deliver the “best performance per dollar” for enterprise and cloud customers.

“It emphasizes that focus that I talked about earlier—inference, optimized for AI, optimized for delivering the best token economics out there,” Katti said.

The move signals Intel’s renewed determination to compete in the booming AI chip industry, where it has struggled to gain traction. While rivals like Nvidia and AMD have seen billions in revenue from AI-driven demand, Intel’s earlier ventures—such as its Gaudi line of AI chips and Falcon Shores processor project—were effectively shelved.

Intel’s CEO, Lip-Bu Tan, has vowed to reignite the company’s AI ambitions, with Crescent Island serving as the first major step in that revival.

Technical Overview and Market Challenge

According to Intel, Crescent Island will feature 160 gigabytes of a memory type slower than the high-bandwidth memory (HBM) used in AMD’s and Nvidia’s leading AI chips. The new GPU will reportedly build on a design Intel has previously used for its consumer-grade graphics processors, though the company did not specify the manufacturing process or the foundry that will produce it.

The design prioritizes energy efficiency and inference workloads—running AI models after they have been trained—rather than large-scale model training.

Industry observers note that the chip’s specifications indicate Intel is focusing on affordable, scalable AI solutions rather than directly challenging Nvidia’s high-end GPUs like the H100, which dominate the market for training massive models such as OpenAI’s ChatGPT.

Annual Chip Release Plan

Katti said Intel plans to release new data center AI chips annually, aligning with the update cycles of Nvidia, AMD, and major cloud computing firms that design their own AI processors.

“Instead of trying to build for every workload out there, our focus is increasingly going to be on inference,” he explained.

The company is also adopting an open and modular strategy, allowing customers to mix and match chips from different vendors in their data centers—an approach designed to appeal to large cloud providers seeking flexibility in sourcing components.

Industry Context and Nvidia Partnership

The surge in demand for GPUs since the launch of OpenAI’s ChatGPT in November 2022 has led to global supply shortages and record-high chip prices. Nvidia remains the undisputed leader in AI hardware, powering the infrastructure behind nearly all major generative AI platforms.

In a notable development, Nvidia recently announced a $5 billion investment in Intel, acquiring about a 4% stake in the company. The partnership aims to co-develop future PC and data center chips, further integrating Intel’s CPUs with Nvidia’s AI systems.

Katti said the collaboration is part of Intel’s broader strategy to ensure its central processing units (CPUs) remain embedded in every major AI system deployed worldwide.

Despite trailing its rivals, Intel’s re-entry into the AI GPU market with Crescent Island reflects the company’s long-term ambition to diversify beyond traditional CPUs and re-establish itself as a major player in the next era of computing.