Buy Sell cards by Kelly Sikkema via Unsplash
Micron (MU) looked unstoppable just days ago, until Alphabet (GOOGL) broke the news that memory may no longer be in such extreme demand. Google revealed an AI efficiency algorithm called TurboQuant that reduces the compute and memory cost per query. That means AI could end up needing much less memory than previously thought.
Citi is pushing back on AI efficiency fears, however. The firm recently pointed out that cheaper technology historically expands usage rather than shrinking it. For example, the fear of DeepSeek upending the AI sector did not pan out the way the bears thought. AI companies instead learned from DeepSeek's techniques and used the efficiency gains to build even better and more powerful models. Thus, this event could actually lift memory stocks in the long run by helping AI companies develop models more sustainably.
At the same time, investors can't expect names like Micron to remain unaffected by this news. There will be at least some short-term disruption, which is exactly what we are seeing now.
Declining RAM Prices Are Spooking Analysts
DDR5 16GB DRAM spot prices have fallen roughly 6% since Micron's last earnings report. That alone was enough for analyst Atif Malik to trim his price target. Citi lowered its MU stock price target on near-term spot price softness, but kept a "Buy" rating and held its earnings forecasts steady, betting that long-term hyperscaler agreements and structural AI demand will win out. MU stock is now down more than 20% from its highs.
The decline looks like a correction rather than a full-blown panic rout, since MU has held above $300. Micron is even starting to stage a slight recovery as analysts digest the news, plot the company's future trajectory, and realize there is still room for Micron to grow.
Why Micron Isn’t Dead Just Yet
I believe the TurboQuant panic is a market overreaction to a narrow technical breakthrough whose real-world implications are far more nuanced than the headlines suggest. Let's first take a look at what this really is.
Google published TurboQuant on March 24 as a training-free compression algorithm that targets the key-value cache (KV cache) bottleneck inside large language models (LLMs). The KV cache is the part of an LLM's working memory that stores intermediate computations during inference, and it grows linearly with context length. At the long contexts modern models support, it becomes a memory wall quite quickly.
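To get a feel for why the KV cache becomes a wall, here is a minimal sizing sketch. The formula (two cached tensors per layer, keys and values) is the standard one; the model dimensions are assumptions loosely modeled on a large open-weight model, not anything specific to TurboQuant:

```python
# Illustrative KV-cache sizing for a hypothetical transformer.
# Model dimensions below are assumptions for illustration only.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem=2):
    """Memory for keys + values across all layers, for one sequence (fp16)."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

# An 80-layer configuration with 8 KV heads (grouped-query attention).
for ctx in (4_096, 32_768, 128_000):
    gib = kv_cache_bytes(80, 8, 128, ctx) / 2**30
    print(f"{ctx:>7} tokens -> {gib:6.2f} GiB per sequence")
```

At 4,096 tokens this configuration needs about 1.25 GiB per sequence; at a 128,000-token context, the same linear formula balloons to roughly 39 GiB, which is why long-context serving runs into memory limits so quickly.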
TurboQuant works in two stages. First, it uses PolarQuant to convert standard Cartesian vector representations into polar coordinates, a representation that can be quantized to far fewer bits per value. Then it applies Quantized Johnson-Lindenstrauss (QJL) as an error-correction pass to clean up residual inaccuracies from the first stage.
You don't need to dig into the mathematical specifics. What matters is the result: a 6x reduction in KV cache memory and up to 8x faster attention computation, without accuracy loss or retraining.
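For rough intuition about how low-bit KV-cache compression works in general, here is a toy per-vector integer quantizer. To be clear, this is an illustrative sketch of the generic store-low-bit-codes-plus-a-scale idea, not Google's TurboQuant algorithm, whose polar and QJL stages are far more sophisticated:

```python
# Toy per-vector integer quantization of a "KV cache" vector -- a minimal
# sketch of the general idea (store low-bit codes plus one float scale),
# NOT Google's TurboQuant algorithm.
import random

def quantize(vec, bits=4):
    """Map floats to signed ints in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in vec) / qmax or 1.0  # avoid zero scale
    codes = [round(v / scale) for v in vec]
    return codes, scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

random.seed(0)
vec = [random.gauss(0, 1) for _ in range(64)]
codes, scale = quantize(vec, bits=4)
recon = dequantize(codes, scale)
err = max(abs(a - b) for a, b in zip(vec, recon))
print(f"compression vs fp16: {16 / 4:.0f}x, max abs error: {err:.3f}")
```

Storing 4-bit codes instead of 16-bit floats gives a 4x memory reduction at the cost of small rounding errors; schemes like TurboQuant push further by choosing smarter representations and then correcting the residual error.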
The Jevons Paradox
AI startups have been built around exponential growth from the very beginning. For them, more efficient memory is an excuse to keep making more powerful models and pushing boundaries, not a reason to consolidate and cancel orders; they will put the freed-up memory to use.
It's called the Jevons paradox.
As steam engines grew more efficient in the 19th century, most people assumed Britain would burn through its coal more slowly. The opposite happened: cheaper steam power triggered an explosion in coal demand.
When efficiency improves, less of a resource is needed per unit of output, which effectively lowers its price. Lower prices, in turn, encourage people to use the resource more often, find entirely new applications for it, and build new industries around it.
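The arithmetic behind the paradox is simple: total resource demand rises whenever usage grows faster than per-unit efficiency improves. The numbers below are made up purely for illustration:

```python
# Back-of-the-envelope Jevons paradox. If efficiency cuts the memory needed
# per AI query by 6x but usage grows 12x, total memory demand still doubles.
# All figures here are invented for illustration, not market data.

def total_memory(queries, gb_per_query):
    return queries * gb_per_query

before = total_memory(queries=1_000_000, gb_per_query=6.0)    # baseline
after  = total_memory(queries=12_000_000, gb_per_query=1.0)   # 6x cheaper, 12x usage
print(f"before: {before:,.0f} GB, after: {after:,.0f} GB")
```

The bull case for memory suppliers rests on exactly this: usage growing faster than efficiency gains shrink per-query needs.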
I see this phenomenon happening with memory and Micron.
Should You Buy or Sell MU Stock Now?
Despite all the panic about this new technology, MU stock is still a buy. A memory breakthrough that makes things more efficient will not, given the psychology of AI companies, lead them to cut back on RAM/DRAM orders in response. If anything, they're going to press down on the accelerator.
MU stock should recover just fine in due time. Citi analyst Atif Malik lowered the valuation benchmark from 6x to 5x trough price-to-earnings (P/E) and derived the new $425 target from projected peak earnings per share (EPS) for 2027, while keeping his core long-run EPS forecasts intact. I expect the short-term slowdown is already priced in.
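The target itself is simple arithmetic on the reported figures. This back-of-the-envelope check is mine, not Citi's actual model:

```python
# Reverse-engineering the target from the figures in the article:
# a 5x "trough" P/E multiple on projected peak EPS yields the $425 target.
trough_pe = 5
price_target = 425
implied_peak_eps = price_target / trough_pe
print(f"implied peak 2027 EPS: ${implied_peak_eps:.2f}")  # -> $85.00

# Since the EPS forecasts were held steady, the same EPS at the prior 6x
# multiple shows how much multiple compression alone cost the target.
old_target_at_6x = 6 * implied_peak_eps
print(f"same EPS at the prior 6x multiple: ${old_target_at_6x:.0f}")  # -> $510
```

In other words, the cut reflects a lower multiple on unchanged earnings power, not a downgrade of Micron's earnings outlook.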
The average analyst price target for MU stock sits at $494.81, while the highest target is $750. If Micron proves that orders are still flooding in and growth stays unaffected, the share price should surge back up in earnest.
On the date of publication, Omor Ibne Ehsan did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes. For more information please view the Barchart Disclosure Policy here.