Memory Chip Stocks Drop 6% as Google Unveils AI Efficiency Algorithm

This article first appeared on GuruFocus.

Memory chip stocks extended their slide on Thursday as investors reacted to new research from Alphabet Inc.’s Google (NASDAQ:GOOG) pointing to a potential shift in how artificial intelligence systems use memory. In Seoul, Samsung Electronics (SSNLF) and SK Hynix (HXSCL) each fell at least 6%, while in the US, Micron Technology (NASDAQ:MU), Western Digital (NASDAQ:WDC) and Sandisk (NASDAQ:SNDK) declined at least 5%, adding to losses from the prior session. The pullback follows a strong rally tied to AI-driven shortages: Samsung and SK Hynix had climbed more than 50% this year through Wednesday, and Kioxia Holdings had more than doubled, supported by tight supply and rising prices.

The shift in sentiment appears linked to Google’s TurboQuant algorithm, which the company said can reduce the memory required to run large language models by at least a factor of six, potentially lowering the cost of training and operating AI systems. While the research was originally released last year, it was newly publicized this week, prompting investors to reassess whether improved efficiency could ease demand pressure on memory components used across data centers and consumer devices. This comes as hyperscalers, led by Amazon.com and Google, are expected to spend about $650 billion this year on data center buildouts, including purchases of accelerators from Nvidia alongside memory chips.
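The article does not describe how TurboQuant achieves its memory savings, but the general mechanism behind such claims is quantization: storing model weights or cached activations in low-bit integer formats instead of 16-bit floats. As a generic, hypothetical sketch (not Google's actual method; the function names and the simple absmax scheme here are illustrative assumptions), packing values into 4 bits already yields a 4x reduction over 16-bit storage, and a factor-of-six figure would typically combine low-bit formats with further compression:

```python
import numpy as np

# Hypothetical illustration of absmax quantization from 16-bit floats
# to signed 4-bit integers. This is NOT TurboQuant's method (the
# article gives no detail); it only shows why low-bit storage cuts
# memory requirements by large factors.

def quantize_absmax_4bit(x):
    """Map float values to signed 4-bit integers in [-7, 7]."""
    x = np.asarray(x, dtype=np.float32)
    scale = float(np.abs(x).max()) / 7.0   # one scale for the whole tensor
    q = np.round(x / scale).astype(np.int8)  # packed two per byte in practice
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the 4-bit codes."""
    return q.astype(np.float32) * scale

x = np.random.randn(1024).astype(np.float16)
q, scale = quantize_absmax_4bit(x)

fp16_bytes = x.size * 2    # 16 bits per value
int4_bytes = x.size // 2   # 4 bits per value, two values per byte
print(f"compression: {fp16_bytes / int4_bytes:.0f}x")  # prints: compression: 4x
```

The trade-off is reconstruction error: with a per-tensor absmax scale, each value is recovered to within half a quantization step, which is why efficiency research of this kind focuses on keeping that error small enough not to hurt model quality.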

Morgan Stanley analyst Shawn Kim suggested the development could be more constructive over time, noting that improving the efficiency of the key-value (KV) cache used for inference could reduce the cost per query and support more profitable AI deployment. This view aligns with the Jevons Paradox, also referenced by JPMorgan Chase and Citigroup, which suggests that greater efficiency may ultimately drive higher usage. While some investors may be taking profits after a strong run, analysts indicated there may be no near-term threat to memory demand given ongoing supply constraints, with SK Group Chairman Chey Tae-won previously stating that the memory chip crunch could last until 2030.
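The KV cache Kim refers to stores the attention keys and values for every token a model has processed, so its size grows with model depth and context length, and any memory saved there translates directly into cheaper serving. A back-of-envelope sketch, using purely illustrative model dimensions (not any specific model's), shows the scale involved and what a sixfold reduction would mean per request:

```python
# Back-of-envelope KV-cache sizing. All dimensions below are
# illustrative assumptions, not any specific model's configuration.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_value):
    # 2 tensors per layer (keys and values), one entry per token
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

layers, kv_heads, head_dim, seq_len = 32, 8, 128, 8192
fp16 = kv_cache_bytes(layers, kv_heads, head_dim, seq_len, 2)
print(f"fp16 KV cache per request: {fp16 / 2**30:.2f} GiB")   # 1.00 GiB
print(f"after a 6x reduction:      {fp16 / 6 / 2**30:.2f} GiB")  # 0.17 GiB
```

At data-center scale, where thousands of such requests are served concurrently, this is the arithmetic behind both readings of the news: less memory needed per query, but potentially many more queries if costs fall.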