Surprising partnership highlights shifting dynamics in the AI arms race

Diversifying Beyond Microsoft: OpenAI Turns to Google for Compute Power

OpenAI has finalized a deal to tap into Google Cloud’s computing infrastructure, according to sources familiar with the matter. The move, sealed in May after months of negotiation, adds Google to OpenAI’s roster of infrastructure providers, reflecting the enormous demand for processing power needed to train and operate advanced artificial intelligence models.

Though OpenAI’s long-standing and high-profile relationship with Microsoft has been central to its rise — with Microsoft’s Azure cloud serving as its primary infrastructure provider — this latest deal underscores OpenAI’s intent to diversify and scale. The Google partnership follows earlier collaborations with Oracle, SoftBank, and CoreWeave, and is part of OpenAI’s broader strategy to reduce its reliance on any single supplier.

A Strategic Win for Google Amid Competitive Tensions

The partnership is an unexpected win for Google, particularly given the competitive relationship between the two AI giants. OpenAI’s ChatGPT is widely considered the most formidable challenger to Google’s search dominance, and both companies are racing to develop increasingly sophisticated AI systems for enterprise and consumer use.

By securing OpenAI as a cloud customer, Google bolsters its growing cloud division, which contributed $43 billion to Alphabet’s revenue in 2024 — roughly 12% of the company’s total. The agreement also allows Google to further monetize its custom-built tensor processing units (TPUs), a high-performance chip line previously reserved for internal projects, now increasingly offered to external clients such as Apple, Anthropic, and Safe Superintelligence.

Despite the potential conflict of interest, Google has positioned its cloud business as a neutral platform capable of supporting even direct competitors. This neutrality is becoming a critical differentiator in the battle against Microsoft Azure and Amazon Web Services for AI-intensive clients.

Compute Capacity: The Currency of the AI Era

The computing resources required to train large language models and operate them efficiently at scale — often referred to as “compute” — have become one of the most sought-after assets in the generative AI space. As models become larger and more complex, infrastructure demands balloon. OpenAI’s annualized revenue run rate recently hit $10 billion, reflecting soaring usage of tools like ChatGPT, and with that growth comes intensifying infrastructure needs.

This has driven the company to pursue high-cost, high-scale initiatives like the $500 billion Stargate project with Oracle and SoftBank, and its ongoing work to develop proprietary chips that could eventually reduce dependency on third-party suppliers.

Under Pressure: Google Balances AI Growth and Resource Constraints

Google’s decision to provide compute capacity to OpenAI may also complicate internal resource allocation. With Google’s own AI unit, DeepMind, competing directly with OpenAI and Anthropic, questions arise over how to balance capacity between internal development and external clients. Google CFO Anat Ashkenazi admitted in April that the company had already faced capacity shortages in meeting cloud demand during the previous quarter.

Additionally, Alphabet is under mounting pressure to justify its AI-related capital expenditures — which could reach $75 billion this year — while maintaining profitability and navigating ongoing antitrust scrutiny.

Selling infrastructure to OpenAI, therefore, is both an opportunity and a challenge. It demonstrates the financial returns of Alphabet’s vast investment in AI infrastructure but also raises strategic considerations, especially given the intertwined nature of its consumer and enterprise AI ambitions.

A Sign of Things to Come?

While none of OpenAI, Microsoft, or Google has commented publicly on the matter, the implications of this deal are significant. It signals a maturing phase in the AI industry, where competitive boundaries blur in the face of shared dependencies and astronomical infrastructure costs.

The partnership illustrates how the AI landscape is shifting from a zero-sum rivalry to one of strategic coexistence — at least at the infrastructure level — as even industry leaders find common ground in pursuit of scale, speed, and sustainability.

In the evolving race for AI dominance, cloud capacity may prove as decisive as algorithms themselves.