Understanding the next generation of models
That leads us to the big question for today: what exactly will the next generation of models bring?
Thank you for tuning in to week 225 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, “Understanding the next generation of models.”
Last week I ended up watching Project Hail Mary on the XL, the larger movie screen at an AMC theater. It’s a great movie and extremely well done in terms of adapting a very complex book to the screen.
During the 224th issue of the Lindahl Letter we dug into orchestration overload and how moats vanish. We are still waiting to find out exactly what happens with the leaked Claude harness code. Something is going to appear from that leak at some point. That deep dive was a great way to kick the tires on some newly minted words shared back to you at the end of the day on a Friday.

This week everybody has been talking about the Sam Altman profile entitled “Sam Altman May Control Our Future—Can He Be Trusted? New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI,” published in The New Yorker magazine on April 6, 2026 [1]. The audio version is a 103-minute listen, which should give you an idea of just how long a profile Ronan Farrow and Andrew Marantz wrote. Those two have been all over the podcast circuit since the article came out, so you have probably seen leakage from it all over. Sam Altman is a complex character in the modern Silicon Valley story. Without any shadow of doubt, Sam Altman is one of the best fundraising venture capitalists in history. Full stop. You can get a sense of that by reading what is effectively now a time capsule of 71 of Sam’s posts on the Y Combinator blog [2]. Sam was prolific as a fundraiser.
People are openly questioning OpenAI in general these days. They made a practical business decision to shut down Sora recently, which was burning tons of tokens on throwaway video creation [3]. Sam Altman and OpenAI raised $122 billion in committed capital during the March 2026 funding round [4]. Translating that funding, and all the previous funding rounds, into a go-forward product strategy is where things will be decided. Maybe it’s Codex or something else they have in the pipeline. Releasing something novel would make things interesting.

Without question, the biggest product release from OpenAI was and still remains ChatGPT. That product broke out and was widely used by more than 10% of the adult population, placing the user numbers in the billions. However, Google and others have eroded that first mover advantage by building AI into search results and other AI modes that are now pervasive. Inside the arms race to have the best model, we are now seeing Anthropic limit access to the Mythos model [5]. Currently, Anthropic teams are working with 40 companies to try to limit potential cybersecurity vulnerabilities the model might expose [6]. They did release the Opus 4.7 model on April 16, 2026, but that is not the Mythos model [7]. I have not had a chance to use that one just yet, so I’ll withhold judgment on the new model’s capabilities.
That leads us to the big question for today: what exactly will the next generation of models bring? It’s pretty clear that Anthropic thinks we are on the verge of models that can find security exploits very quickly and create havoc. We are certainly nearing a place where the coding side of models has improved enough to engage meaningfully in enterprise settings. It’s an unlock for a lot of people who would not have been able to code something before and can now make code appear with a prompt. Low-code building was great, but this is a leap beyond that type of structured development. It’s a trajectory that only becomes more powerful with the types of agentic actions being built around what was OpenClaw, now being featurized into other platforms and released.

We are still a long way from being able to conversationally tell a computer to do things for us, but the pending release of the next generation of models brings us closer. We have seen model ability plateau and be targeted at specific uses. The next set of things models are targeted at is going to show us the path forward to where asking for agentic action will be possible. Certainly, the two largest user bases where we will first see this happen are the Google and Apple ecosystems. Interestingly enough, those might end up being based on the same model system given the partnership announced in January 2026 [8].
What’s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!
Footnotes:
[1] https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted
[2] https://www.ycombinator.com/blog/author/sam-altman
[3] https://techcrunch.com/2026/03/29/why-openai-really-shut-down-sora/
[4] https://openai.com/index/accelerating-the-next-phase-ai/
[5] https://fortune.com/2026/04/10/anthropic-mythos-ai-driven-cybersecurity-risks-already-here/
[7] https://www.anthropic.com/news/claude-opus-4-7
[8] https://blog.google/company-news/inside-google/company-announcements/joint-statement-google-apple/