Nvidia is entering into a non-exclusive licensing agreement for Groq’s AI inference technology.
Nvidia’s reportedly paying $20 billion for this deal, about three times Groq’s most recent valuation.
Groq founder and CEO Jonathan Ross — who will join Nvidia along with other Groq personnel — is widely considered the creator of Google’s TPU.
On Friday, artificial intelligence (AI) chip start-up Groq announced via a very brief press release that it “has entered into a non-exclusive licensing agreement with Nvidia (NASDAQ: NVDA) for Groq’s inference technology.”
The deal also includes Jonathan Ross, Groq’s founder and CEO, Sunny Madra, Groq’s president, and other members of the Groq team joining Nvidia to “help advance and scale the licensed technology.”
This was a smart move by Nvidia, in my view. Nvidia has tons of cash, and it makes sense to use it to accomplish two goals at once: eliminate a potential competitor and obtain a new chip technology to offer its customers.
Here’s what investors should know.
That Nvidia is not only entering into a non-exclusive license with Groq but also hiring its founder-CEO, its president, and reportedly key engineering talent makes this deal an “acqui-hire” that is about as close to a full-fledged acquisition as possible.
Granted, Groq will reportedly continue to operate, with its CFO stepping into the CEO role, and it will keep running its GroqCloud service. However, with the founder, the mastermind behind the company’s tech, leaving, it appears that future advancements in Groq’s technology will now be made under Nvidia.
No doubt, Nvidia structured the deal to avoid potential regulatory scrutiny. The company already dominates the AI chip space, so any acquisition that could potentially increase its current or future market share further would likely garner a very close look from regulators.
The deal’s size wasn’t disclosed by the companies involved, but one major financial outlet has reported it at $20 billion, which would be Nvidia’s largest deal to date, by far. Its prior largest deal was its $6.9 billion acquisition of high-performance networking specialist Mellanox Technologies in 2020. That acquisition proved to be extremely successful, as Nvidia’s networking business is booming.
If the $20 billion is accurate, it represents a huge premium over Groq’s most recent valuation. After a $750 million financing round in September, Groq’s valuation was $6.9 billion.
Nvidia tried to buy leading central processing unit (CPU) chip designer Arm Holdings in 2020, but that massive deal was called off due to significant antitrust concerns from regulators in the U.S. and elsewhere.
Groq’s chips are language processing units (LPUs) designed for AI inference. Inference is the second step in the two-step AI process, following training. Training involves using vast amounts of data to teach an AI model, while inference is the deployment of that trained model to generate outputs, such as answers to users’ questions, images, and more.
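For readers who want a more concrete picture of that two-step process, here is a minimal, purely illustrative sketch in Python. The toy one-parameter model below is a hypothetical example of the general idea, not a representation of Groq’s, Nvidia’s, or anyone else’s actual technology:

```python
# A toy illustration of the two-step AI process: training, then inference.
# The "model" is a single parameter w in the formula prediction = w * x.
# Real AI models have billions of parameters, but the two phases are the same.

# --- Step 1: Training (compute-intensive, done on lots of data up front) ---
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, target) pairs

w = 0.0                 # untrained parameter
learning_rate = 0.01
for _ in range(1000):   # repeatedly nudge w to reduce prediction error
    for x, target in data:
        error = w * x - target
        w -= learning_rate * error * x   # simple gradient-descent update

# --- Step 2: Inference (runs every time a user asks for an output) ---
def predict(x):
    """Apply the already-trained model to a brand-new input."""
    return w * x

print(f"Trained weight: {w:.2f}")
print(f"Prediction for input 5.0: {predict(5.0):.2f}")
```

The takeaway: training is done up front, while inference happens every time a model is used, which is why the article treats inference chips as a market of their own.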
Nvidia’s graphics processing units (GPUs) have long dominated the AI training stage. They’re also leaders in AI inference, but they face growing competition in this area. Competitors include Advanced Micro Devices’ (AMD) data center GPUs, as well as custom application-specific integrated circuits (ASICs) that Broadcom and Marvell Technology are making for big tech company customers. Until now, the big tech companies have just used their custom AI chips internally and, where applicable, also in their cloud computing services.
However, it was recently revealed that social media giant Meta Platforms is considering buying the custom AI chip designed by Alphabet’s Google, called a tensor processing unit (TPU), for inference purposes in its data centers.
The big tech companies are exploring alternatives to Nvidia’s GPUs for two reasons: to reduce costs and to diversify their supply chains. Relying on just a single supplier for anything can be risky.
Groq’s goal was for its LPUs to become a big player in the AI inference market. The company claims its technology is faster than alternatives for specific inference applications. Its plans included pricing its chips below Nvidia’s GPUs, and perhaps below competing offerings as well.
It makes good sense that Nvidia views Groq’s tech as potentially very valuable, and it evidently viewed the company as a significant potential future rival. Groq founder and CEO Jonathan Ross is widely considered the creator of Google’s TPU. Granted, he didn’t create the chip alone, but he was the driving force behind the effort to develop it.
Before you buy stock in Nvidia, consider this:
The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Nvidia wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.
Consider when Netflix made this list on December 17, 2004… if you invested $1,000 at the time of our recommendation, you’d have $509,470!* Or when Nvidia made this list on April 15, 2005… if you invested $1,000 at the time of our recommendation, you’d have $1,167,988!*
Now, it’s worth noting Stock Advisor’s total average return is 991% — a market-crushing outperformance compared to 196% for the S&P 500. Don’t miss the latest top 10 list, available with Stock Advisor, and join an investing community built by individual investors for individual investors.
See the 10 stocks »
*Stock Advisor returns as of December 22, 2025
Beth McKenna has positions in Nvidia. The Motley Fool has positions in and recommends Alphabet, Meta Platforms, and Nvidia. The Motley Fool recommends Broadcom and Marvell Technology. The Motley Fool has a disclosure policy.
Nvidia’s “Acqui-Hire” of Groq Eliminates a Potential Competitor and Marks Its Entrance Into the Non-GPU, AI Inference Chip Space was originally published by The Motley Fool