OpenAI says it won’t ramp up Google’s (NASDAQ:GOOG) TPUs despite early tests. The Microsoft-backed AI outfit confirmed it’s trialing some of Alphabet’s tensor processing units but has no plans to deploy the chips at scale, Reuters reported.
OpenAI continues to rely on Nvidia (NASDAQ:NVDA) GPUs and AMD (NASDAQ:AMD) AI accelerators to power its model training and inference workloads, citing proven performance and existing supply agreements.
Over the weekend, media outlets suggested OpenAI had begun using Google-made AI chips for certain tasks, but sources noted those were lower-tier TPU versions, while Google reserves its most advanced silicon, designed for its Gemini large language model, for internal use.
A recent deal with Google Cloud underpins OpenAI’s broader infrastructure needs, yet the company says it won’t shift significant compute to TPUs any time soon.
Investors and analysts had eyed a potential TPU pact as a sign of diversification beyond Nvidia; Morgan Stanley strategists even flagged such a move as a strong validation of Google’s hardware credentials.
But OpenAI’s comments underscore the stickiness of its current chip partners and the complexities of ramping up new hardware at hyperscale. With demand for AI compute still surging, OpenAI appears content to scale on its existing GPU infrastructure, keeping TPUs at the level of limited tests, rather than pivot wholesale to Google's chips.
Why It Matters: OpenAI’s chip roadmap signals to investors that Nvidia and AMD will remain its core suppliers, potentially limiting Google’s AI hardware market share gains despite its TPU advances.
Investors will watch OpenAI’s next infrastructure update and Google Cloud earnings for any shift in TPU utilization or fresh supplier diversification cues.
This article first appeared on GuruFocus.