
The great 2025 LLM vibe shift

The landscape around large language models shifted rapidly and unexpectedly in 2025 as investors, researchers, and industry leaders reassessed their assumptions.

Thank you for tuning in to week 217 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, “The great 2025 LLM vibe shift.”

Vibe shifts come and go. People are certainly adding the word vibe to all sorts of things as its original meaning has, ironically, faded. Casey Newton, in the industry-standard-setting Platformer newsletter, wrote about a big Silicon Valley vibe shift back in 2022 [1]. It was a big thing, until it wasn’t. The really big, completely surreal LLM shift happened toward the tail end of 2025. We went from extreme AI bubble talk to very clear, rational, and thoughtful perspectives on how LLMs won’t realize the promises that have been made. Keep in mind that market fears of an AI bubble are a separate question from whether LLMs are the technology that ultimately wins. All of the spending in the marketplace and the academic argument may get reconciled at some point, but that did not happen in 2025.

The downstream effects of that walk-back in expected technological progress may not have been fully felt just yet, but the overall sentiment has shifted. The ship has indeed sailed. Let that sink in for a moment and consider just how big a shift in sentiment that really is and how it just sort of happened. As OpenAI and Anthropic move toward seemingly inevitable IPOs, that shift will certainly change things. Maybe the single best-written explanation of this is from Benjamin Riley, who wrote a piece for The Verge called, “Large language mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it” [2]. I owe a hat tip to Nilay Patel for recommending and helping surface that piece of writing.

I was skeptical at first, but then realized it was a really interesting and well-reasoned read. I’ll admit that around the same time I was also reading a 52-page paper from the Google Research team, “Nested Learning: The Illusion of Deep Learning Architectures,” which made for an interesting paired reading assignment [3]. More to come on that paper and what it means in a later post. I’m still digesting its deeper implications.

Maybe to really sell the shift you could take a moment and listen to some of the recent words from OpenAI cofounder Ilya Sutskever. I’m still a little shocked at the casual way Ilya described how we moved from research and the great AI winter, to the age of scaling, and finally back to the age of research again. The idea that scaling based on compute or the size of the corpus won’t win the LLM race is a very big shift, and Ilya delivers it pretty casually during this video.

You will notice I have set the video to play about 1882 seconds into the conversation:

Maybe a video with a really sharp-looking classic Red Hat Linux fedora in the background, featuring a conversation between Nilay Patel and IBM CEO Arvind Krishna, can help explain things. Don’t panic when you realize that the CEO of IBM argues very clearly, with some back-of-the-envelope math, that all of the data center investment has no realistic path to paying off in practical terms or producing an actual return on investment. Try not to flinch when he describes how, within 3 to 5 years, the same data centers could be built at a fraction of the current cost. Technology does just keep getting better. The argument makes sense. It is no less shocking given the billions being spent.

I set the video to start playing 502 seconds into the conversation.
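To make the flavor of that back-of-the-envelope argument concrete, here is a minimal sketch of the kind of arithmetic involved. The numbers below (today’s build cost, the annual decline in the cost per unit of compute, the 3 and 5 year horizons) are purely illustrative assumptions of mine, not figures from the conversation.

```python
# Illustrative sketch of the "data centers get cheaper to build" argument.
# All numbers are hypothetical assumptions, not figures from the interview.

def future_build_cost(cost_today: float, annual_cost_decline: float, years: int) -> float:
    """Cost to build the same compute capacity after `years`, assuming the
    cost per unit of compute falls by `annual_cost_decline` each year."""
    return cost_today * (1 - annual_cost_decline) ** years

cost_today = 50e9            # assumed build cost today, in dollars
annual_cost_decline = 0.30   # assumed 30% yearly drop in cost per unit of compute

for years in (3, 5):
    later = future_build_cost(cost_today, annual_cost_decline, years)
    print(f"Year {years}: same capacity for ~${later / 1e9:.1f}B "
          f"({later / cost_today:.0%} of today's cost)")
```

Under those assumed rates, the same capacity costs roughly a third of today’s price in three years and under a fifth in five, which is the general shape of the “fraction of the current cost” point being made, whatever the real decline rate turns out to be.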

The argument I probably prefer in the long run is that quantum computing is going to change the entire scaling and compute landscape [4]. The long-term argument that may end up mattering the most suggests that quantum computing will transform the economics of scale and ultimately reset expectations about what is computationally feasible. Former Intel CEO Pat Gelsinger recently framed quantum as the force likely to deflate the AI bubble by altering the fundamental relationship between compute and capability, a claim that is gaining analytical support across the research community. We may see it become an effective counter to the billions being spent on data centers, whether for a late mover willing to make a prominent investment in the space or for Alphabet, which is already highly invested in both TPUs and quantum chips [5].

What’s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!

Footnotes:

[1] Newton, C. (2022). The vibe shift in Silicon Valley. Platformer. https://www.platformer.news/the-vibe-shift-in-silicon-valley/

[2] Riley, B. (2025). Large language mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it. The Verge. https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

[3] Behrouz, A., Razaviyayn, M., Zhong, P., & Mirrokni, V. (2025). Nested learning: The illusion of deep learning architectures. In The Thirty-ninth Annual Conference on Neural Information Processing Systems. https://abehrouz.github.io/files/NL.pdf

[4] Shrivastava, H. (2025). Quantum computing will pop the AI bubble, claims ex-Intel CEO Pat Gelsinger. Wccftech. https://wccftech.com/quantum-computing-will-pop-the-ai-bubble-claims-ex-intel-ceo-pat-gelsinger/

[5] Yahoo Finance. Alphabet CEO just said quantum computing could be close to a breakthrough. https://finance.yahoo.com/news/alphabet-ceo-just-said-quantum-155229893.html
