OpenAI makes a multibillion-dollar bet on bespoke silicon as it doubles down on
AMD and eases its reliance on Nvidia.
The Energy Behind Intelligence
Let’s be blunt: artificial intelligence (AI) is an energy guzzler. Running large
models, especially inference at scale with real-time interaction, means enormous
floating-point throughput, memory bandwidth, and networking. As AI systems
proliferate, the power draw is no longer an afterthought. In
announcing its 10 GW deal with Broadcom, OpenAI said the deployment would
begin in the second half of 2026 and finish by 2029.
We’re partnering with Broadcom to deploy 10GW of chips designed by OpenAI. Building our own hardware, in addition to our other partnerships, will help all of us meet the world’s growing demand for AI. https://t.co/3vLZFPO0jF
— OpenAI Newsroom (@OpenAINewsroom) October 13, 2025
Ten gigawatts is not trivial. To put it in perspective, 10 GW
would be enough to power more than 8 million U.S. households.
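The households figure checks out with simple arithmetic. A quick sanity check, assuming an average U.S. household consumes roughly 10,500 kWh per year (a commonly cited EIA-style figure; the exact value is an assumption here):

```python
# Back-of-envelope check of the "more than 8 million households" claim.
# Assumption: ~10,500 kWh/year per average U.S. household.
HOURS_PER_YEAR = 24 * 365
avg_household_kw = 10_500 / HOURS_PER_YEAR  # ~1.2 kW continuous draw

deployment_gw = 10
households = deployment_gw * 1e6 / avg_household_kw  # 1 GW = 1e6 kW

print(f"{households / 1e6:.1f} million households")  # ~8.3 million
```

With those assumptions, 10 GW of continuous draw maps to a bit over 8 million households, consistent with the figure above.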
Why go custom? Because dependence on general-purpose AI accelerators
(hello, Nvidia) means you’re subject to supply, margin, and roadmap constraints.
Building your own gear (or co-designing) lets you tailor the stack: chip,
memory, interconnects, software. In the Broadcom deal, OpenAI will design the
accelerators; Broadcom will build and deploy them.
One more twist: Broadcom’s networking tech (Ethernet, etc.) is intended
to be integrated with this stack. This is, perhaps, an opportunity for OpenAI
and Broadcom to displace Nvidia’s InfiniBand technology.
When AMD and Broadcom Meet
OpenAI isn’t putting all its eggs in one chip basket. In early October
2025, it
struck a multi-year deal with AMD to deploy 6 GW of Instinct GPUs over
several generations. The first tranche, 1 GW, will begin deploying in the
second half of 2026.
That AMD arrangement includes an interesting wrinkle: AMD issued OpenAI
warrants to acquire up to 160 million shares (about 10% of the company) at a nominal price,
vesting as deployment and share-price milestones are met.
BREAKING: Broadcom stock, $AVGO, surges over +13% after signing a “multi-billion dollar chip deal” with OpenAI. Broadcom will build custom data center chips for OpenAI and the deal covers 10GW of compute capacity. Broadcom now up +$200 BILLION of market cap today. pic.twitter.com/NzktpbSmFo
— The Kobeissi Letter (@KobeissiLetter) October 13, 2025
Taken together, those agreements (Broadcom and AMD) suggest OpenAI is
diversifying its compute partnerships while retaining leverage in its stack.
It’s not abandoning Nvidia (which
recently pledged 10 GW of systems), but it is signaling it wants more
control.
If the math holds, OpenAI could control or influence some 16 GW of
compute across its custom accelerators and AMD GPUs, with Nvidia’s pledged
systems (plus third-party cloud and collaborative deals) on top. That level
of scale is not just ambitious, it’s borderline industrial.
Power, Scaling, and Risk
This isn’t a vanity project. AI compute is on a Moore’s-Law–lite
treadmill: more models, deeper memory hierarchies, fatter activation
traffic, bigger clusters. Compute demand and energy demand now scale
together.
Yet risks abound. Designing a chip is one thing. Executing yield,
software stack maturity, cooling/infrastructure, supply chain (memory,
packaging), and the ramp from prototype to volume are where dreams often die.
Just ask Intel.
OpenAI and Broadcom signed a multiyear agreement to collaborate on custom chips and networking equipment, planning to add 10 gigawatts’ worth of AI data center capacity. Caroline Hyde reports pic.twitter.com/QdYRMI4PSm
— Bloomberg TV (@BloombergTV) October 13, 2025
Also, the financing is staggering. Even at $50B–$60B per GW (a
benchmark that’s often cited when talking about AI infrastructure) the
Broadcom component can easily run into the hundreds of billions. OpenAI’s
revenue is orders of magnitude smaller today. That implies heavy leverage, pre-commitments,
and bet-the-future construction. Analysts have warned of the mismatch between
OpenAI’s spending commitments and its current cash flow. “What’s real about
this announcement is OpenAI’s intention of having its own custom chips,” Gil
Luria, head of technology research at D.A. Davidson, told AP in an interview.
“The rest is fantastical. OpenAI has made, at this point, approaching $1
trillion of commitments, and it’s a company that only has $15 billion of
revenue.”
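The mismatch Luria describes is easy to quantify. A rough sketch, treating the $50B–$60B-per-GW benchmark as an assumption rather than a quoted contract price:

```python
# Scale of the Broadcom build-out versus the revenue figure Luria cites.
# Assumption: $50B-$60B per GW, the benchmark mentioned in the article.
cost_per_gw_low, cost_per_gw_high = 50e9, 60e9
broadcom_gw = 10

low = broadcom_gw * cost_per_gw_low    # $500B
high = broadcom_gw * cost_per_gw_high  # $600B
revenue = 15e9  # the ~$15B revenue figure Luria cites

print(f"Broadcom build-out: ${low / 1e9:.0f}B-${high / 1e9:.0f}B")
print(f"Low estimate is {low / revenue:.0f}x the cited revenue")
```

Even the low end of that range is more than thirty times the revenue figure cited, which is the mismatch analysts are flagging.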
Still, for those who say “this is too much,” remember: AI is now as
much about infrastructure as algorithms. The winners will be those who master
both.
What this Means for the AI Arms Race
- Nvidia’s dominance is challenged, not toppled: custom chips rarely beat incumbents early.
- AMD gets a line into the compute mix by partnering rather than competing head-on.
- Broadcom gets elevated from networking parts to full AI silicon and stack supplier.
- OpenAI tightens control over its destiny: fewer black boxes, more custom layers.
The Broadcom deal is audacious. The AMD deal is clever. Combined,
they’re a bold wager: that compute is no longer a cost center, it’s the central
battlefield of AI.
If these deals succeed, OpenAI might well emerge not just as a model
house but a compute juggernaut. If they fail (in yield or funding), it could
crowd out liquidity and distract attention from model advances. Either way, AI
infrastructure just got a lot more interesting.
This article was written by Louis Parks at www.financemagnates.com.