<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Lindahl Letter]]></title><description><![CDATA[Weekly insights at the intersection of technology, artificial intelligence, and modernity—exploring how innovation shapes our world every Friday.]]></description><link>https://www.nelsx.com</link><image><url>https://substackcdn.com/image/fetch/$s_!fmn0!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35fb1684-9185-4a56-b118-e5ba1b08f151_1280x1280.png</url><title>The Lindahl Letter</title><link>https://www.nelsx.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 04 May 2026 15:54:52 GMT</lastBuildDate><atom:link href="https://www.nelsx.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Dr. Nels Lindahl]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[nelslindahl@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[nelslindahl@substack.com]]></itunes:email><itunes:name><![CDATA[Dr. Nels Lindahl]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dr. Nels Lindahl]]></itunes:author><googleplay:owner><![CDATA[nelslindahl@substack.com]]></googleplay:owner><googleplay:email><![CDATA[nelslindahl@substack.com]]></googleplay:email><googleplay:author><![CDATA[Dr. 
Nels Lindahl]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Understanding the next generation of models]]></title><description><![CDATA[That leads us to the big question for today revolving around what exactly the next generation of models will bring forward.]]></description><link>https://www.nelsx.com/p/understanding-the-next-generation</link><guid isPermaLink="false">https://www.nelsx.com/p/understanding-the-next-generation</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Fri, 17 Apr 2026 23:01:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8b801306-11b1-43c9-8f2a-bfd99b33cda8_2816x1536.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 225 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;Understanding the next generation of models.&#8221;</p><p>Last week I ended up watching Project Hail Mary on the XL, which is the larger movie screen at an AMC theater. It&#8217;s a great movie and extremely well done in terms of adapting a very complex book to a movie format.</p><p>During the 224th issue of the Lindahl Letter we dug into orchestration overload and how moats vanish. We are still waiting to find out exactly what happens with the leaked Claude harness code. Something is going to appear from that leak at some point. That deep dive was a great way to kick the tires on some newly minted words being shared back to you at the end of the day on a Friday. This week everybody has been talking about that Sam Altman profile entitled &#8220;Sam Altman May Control Our Future&#8212;Can He Be Trusted? New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI&#8221; that was published in The New Yorker magazine on April 6, 2026 [1]. It&#8217;s a 103-minute listen.
That should give you an idea of just how long a profile Ronan Farrow and Andrew Marantz wrote. Those two have been all over the podcast circuit since the article was published, so you have probably started to see leakage from the article all over. Sam Altman is a complex character in the modern Silicon Valley story. Without question or any shadow of doubt, Sam Altman is one of the best fundraising venture capitalists in history. Full stop. You can get a sense of that by reading what is effectively now a time capsule that includes 71 of Sam&#8217;s posts on the Y Combinator blog [2]. Sam was prolific as a fundraiser.</p><p>People are openly questioning OpenAI these days. They made a practical business decision to shut down Sora recently, which was burning tons of tokens on throwaway video creation [3]. Sam Altman and OpenAI raised $122 billion in committed capital during the March 2026 funding round [4]. Translating that funding and all the previous funding rounds into a go-forward product strategy is where things will be decided. Maybe it&#8217;s Codex or something else they have in the pipeline. Releasing something novel would make things interesting. Without question the biggest product release from OpenAI was and still remains ChatGPT. That product broke out and was widely used by more than 10% of the adult population, placing the user numbers in the billions. However, Google and others have eroded that first-mover advantage by building AI into search results and other AI modes that are now pervasive. Inside the arms race to have the best model, we are now seeing Anthropic limit access to the Mythos model [5]. Currently, Anthropic teams are working with 40 companies to try to limit potential cybersecurity vulnerabilities the model might expose [6]. They did release the Opus 4.7 model on April 16, 2026, but that is not the Mythos model [7].
I have not had a chance to use that one just yet, so I&#8217;ll withhold any judgement on the new model&#8217;s capabilities.</p><p>That leads us to the big question for today revolving around what exactly the next generation of models will bring forward. It&#8217;s pretty clear that Anthropic thinks we are on the verge of models that can find security exploits very quickly and create havoc. We certainly are nearing a place where the coding part of models has improved and is now able to engage meaningfully in enterprise settings. It&#8217;s an unlock for a lot of people who would not have been able to code something before and are now able to make code appear with a prompt. Low-code building was great, but this is a leap beyond that type of structured development. It&#8217;s a trajectory that only becomes more powerful with the types of agentic actions being built around what was OpenClaw and is now being featurized into other platforms and released. We are still a long way from being able to conversationally tell a computer to do things for us, but we are now closer to that with the pending release of the next generation of models. We have seen model ability plateau and be targeted for specific uses. That next set of targeted uses is going to end up showing us the path forward where asking for agentic action will be possible. Certainly, the largest two user bases where we will first see this happen are within the Google and Apple ecosystems. Interestingly enough, that might end up being based on the same model system given the partnership announced in January 2026 [8].</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing.
Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Footnotes:</p><p>[1] <a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted</a></p><p>[2] <a href="https://www.ycombinator.com/blog/author/sam-altman">https://www.ycombinator.com/blog/author/sam-altman</a></p><p>[3] <a href="https://techcrunch.com/2026/03/29/why-openai-really-shut-down-sora/">https://techcrunch.com/2026/03/29/why-openai-really-shut-down-sora/</a></p><p>[4] <a href="https://openai.com/index/accelerating-the-next-phase-ai/">https://openai.com/index/accelerating-the-next-phase-ai/</a></p><p>[5] <a href="https://fortune.com/2026/04/10/anthropic-mythos-ai-driven-cybersecurity-risks-already-here/">https://fortune.com/2026/04/10/anthropic-mythos-ai-driven-cybersecurity-risks-already-here/</a></p><p>[6] <a href="https://www.nytimes.com/2026/04/07/technology/anthropic-claims-its-new-ai-model-mythos-is-a-cybersecurity-reckoning.html">https://www.nytimes.com/2026/04/07/technology/anthropic-claims-its-new-ai-model-mythos-is-a-cybersecurity-reckoning.html</a></p><p>[7] <a href="https://www.anthropic.com/news/claude-opus-4-7">https://www.anthropic.com/news/claude-opus-4-7</a></p><p>[8] <a href="https://blog.google/company-news/inside-google/company-announcements/joint-statement-google-apple/">https://blog.google/company-news/inside-google/company-announcements/joint-statement-google-apple/</a></p>]]></content:encoded></item><item><title><![CDATA[The exhaustion with algorithmic performance]]></title><description><![CDATA[This is week 224 of the Lindahl Letter publication.]]></description><link>https://www.nelsx.com/p/the-exhaustion-with-algorithmic-performance</link><guid isPermaLink="false">https://www.nelsx.com/p/the-exhaustion-with-algorithmic-performance</guid><dc:creator><![CDATA[Dr. 
Nels Lindahl]]></dc:creator><pubDate>Fri, 10 Apr 2026 23:00:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d9d6f0d5-5f53-4e57-ac65-26880423bf58_2752x1536.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>This is week 224 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;The exhaustion with algorithmic performance.&#8221;</p><p>This week I&#8217;m listening to the Penguin Random House audiobook version of the recently published book, &#8220;The Infinity Machine: Demis Hassabis, DeepMind and the Quest for Superintelligence,&#8221; by Sebastian Mallaby, released on March 31, 2026. I&#8217;ll share more about that after I finish listening to the book.</p><p>Here we go. It&#8217;s Saturday morning during the time I reserved to think deeply about the world and research new topics. The ongoing writing has started again. Those notes will continue to be shared on Friday afternoons. I just finished a brief note about how that old school internet was different [1], sharing nostalgic thoughts about the internet before social networking really started. We are starting to see the edge of integrated orchestration within the systems we are using every day. Our access to models has moved from single-serving chat frameworks to more complex orchestration across systems.</p><p>All the walls that separated things are now either falling or getting thinner and thinner. Probably the best example of that was the rapid rise in popularity of OpenClaw [2]. Peter Steinberger even ended up working for OpenAI while OpenClaw moved to an independent foundation [3]. We are hitting a point where something shows up and people can learn about it and then recreate it very rapidly. As we learned back in 2023, ideas are not effective moats [4]. We even witnessed the Anthropic Claude code harness leak this week [5].
Somebody at some point is going to have OpenClaw attempt to frankenstein a path forward from that harness and other models. We did see Anthropic move to effectively ban third-party harnesses starting April 4, 2026 [6]. We can now see the situation reverse thanks to that code leak. While the infrastructure Anthropic possesses and the weights it protects within its models might be the secret sauce to delivery, things could very well change rapidly in that space going forward.</p><p>Maybe as a result of all this rapid change, I&#8217;m writing about the exhaustion we feel with algorithmic performance. It&#8217;s something I have been thinking about and trying to synthesize into a meaningful block of content. Maybe the answer is to consider two elements of dealing with agentic systems: first, the fatigue people feel from working with them constantly, and second, the tipping point we are facing from rapid integration. We will probably figure out ways to handle or deal with the fatigue related to working with these types of agents for prolonged periods of time. Finding a firm footing to deal with the rapid change will center on how foundational change occurs. My contention is that dealing with algorithmic performance has changed both ideation and operationalization. It&#8217;s changing the foundation of decision making. It&#8217;s changing the frameworks deployed to make decisions and ultimately the expectations of what is possible.</p><p>We have so much content to synthesize these days. Content flooding changes our ability to operationalize. The exhaustion we feel isn&#8217;t just from the pace of change we are seeing; it&#8217;s from the fundamental shift in how we ideate and operationalize.
Algorithmic performance is no longer a tool we use to get things done and accomplish tasks; it is becoming the framework through which all professional expectations and decisions are filtered.</p><p>I&#8217;m going to conclude the Lindahl Letter this week with two key takeaways:</p><p>The Orchestration Overload: We are moving past the single-serving prompt era into a world of integrated orchestration. It&#8217;s no longer one prompt and go forward. While tools like OpenClaw promise seamless efficiency and carry a lot of security risk, the cognitive load of managing these multi-agent systems is creating a new form of digital fatigue I was trying to describe in terms of algorithmic performance today.</p><p>The Vanishing Moat: The recent Claude Code harness leak and the rapid democratization of agentic frameworks prove that ideas and code are no longer durable moats. In a world where anyone can frankenstein a path forward from a leak, shared idea, or reverse-engineered software, the only real competitive advantage is operational execution and the protected weights of the models themselves.</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing.
Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Footnotes:</p><p>[1] That old school internet was different </p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:193165203,&quot;url&quot;:&quot;https://www.nelslindahl.com/p/that-old-school-internet-was-different&quot;,&quot;publication_id&quot;:5275742,&quot;publication_name&quot;:&quot;Nels Lindahl &#8212; Functional Journal&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Pz_h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3480860-225f-4eef-9db6-d2ff754ad257_960x960.png&quot;,&quot;title&quot;:&quot;That old school internet was different &quot;,&quot;truncated_body_text&quot;:&quot;Right now, at this very moment, I&#8217;m feeling nostalgic about the original weblog movement sparked from Movable Type and the initial WordPress communities from yesteryear. We blogged and read blogs. It was so asynchronous and delightful. Back during high school, which in context was during the previous millennium, I read every physically printed news peri&#8230;&quot;,&quot;date&quot;:&quot;2026-04-04T14:31:38.566Z&quot;,&quot;like_count&quot;:0,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:14578726,&quot;name&quot;:&quot;Dr. Nels Lindahl&quot;,&quot;handle&quot;:&quot;nelslindahl&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5fa9fa71-d2c5-4291-b0f1-7cfc1149d81d_1748x1458.jpeg&quot;,&quot;bio&quot;:&quot;Technology builder. Avid writer. Occasional speaker. Doctor of Philosophy. Treadmill enthusiast. 
#GoAvsGo&quot;,&quot;profile_set_up_at&quot;:&quot;2021-09-18T17:00:30.831Z&quot;,&quot;reader_installed_at&quot;:&quot;2022-11-01T23:39:09.022Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:246432,&quot;user_id&quot;:14578726,&quot;publication_id&quot;:271589,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:271589,&quot;name&quot;:&quot;The Lindahl Letter&quot;,&quot;subdomain&quot;:&quot;nelslindahl&quot;,&quot;custom_domain&quot;:&quot;www.nelsx.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Weekly insights at the intersection of technology, artificial intelligence, and modernity&#8212;exploring how innovation shapes our world every Friday.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/35fb1684-9185-4a56-b118-e5ba1b08f151_1280x1280.png&quot;,&quot;author_id&quot;:14578726,&quot;primary_user_id&quot;:14578726,&quot;theme_var_background_pop&quot;:&quot;#9D6FFF&quot;,&quot;created_at&quot;:&quot;2021-01-27T00:44:44.784Z&quot;,&quot;email_from_name&quot;:&quot;Nels Lindahl from The Lindahl Letter&quot;,&quot;copyright&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false,&quot;logo_url_wide&quot;:null}},{&quot;id&quot;:5381397,&quot;user_id&quot;:14578726,&quot;publication_id&quot;:5275742,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:5275742,&quot;name&quot;:&quot;Nels Lindahl &#8212; Functional Journal&quot;,&quot;subdomain&quot;:&quot;functionaljournal&quot;,&quot;custom_domain&quot;:&quot;www.nelslindahl.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;A weblog created by Dr. Nels Lindahl featuring writings and thoughts&#8230;&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3480860-225f-4eef-9db6-d2ff754ad257_960x960.png&quot;,&quot;author_id&quot;:14578726,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-06-08T22:08:48.622Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false,&quot;logo_url_wide&quot;:null}},{&quot;id&quot;:5450843,&quot;user_id&quot;:14578726,&quot;publication_id&quot;:5343721,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:5343721,&quot;name&quot;:&quot;Civic Honors&quot;,&quot;subdomain&quot;:&quot;civichonors&quot;,&quot;custom_domain&quot;:&quot;www.civichonors.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Graduation with Civic Honors: Unlock the Power of Community Opportunity&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f61161c9-1a76-45eb-8fad-86a4e866e99e_1024x1024.png&quot;,&quot;author_id&quot;:14578726,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-06-15T13:44:52.518Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false,&quot;logo_url_wide&quot;:null}}],&quot;twitter_screen_name&quot;:&quot;nelslindahl&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:null,&quot;paidPublicationIds&quot;:[],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.nelslindahl.com/p/that-old-school-internet-was-different?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!Pz_h!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3480860-225f-4eef-9db6-d2ff754ad257_960x960.png" loading="lazy"><span class="embedded-post-publication-name">Nels Lindahl &#8212; Functional Journal</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">That old school internet was different </div></div><div class="embedded-post-body">Right now, at this very moment, I&#8217;m feeling nostalgic about the original weblog movement sparked from Movable Type and the initial WordPress communities from yesteryear. We blogged and read blogs. It was so asynchronous and delightful. 
Back during high school, which in context was during the previous millennium, I read every physically printed news peri&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">a month ago &#183; Dr. Nels Lindahl</div></a></div><p>[2] <a href="https://github.com/openclaw/openclaw">https://github.com/openclaw/openclaw</a></p><p>[3] <a href="https://steipete.me/posts/2026/openclaw">https://steipete.me/posts/2026/openclaw</a></p><p>[4] Google "We Have No Moat, And Neither Does OpenAI"</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:175660942,&quot;url&quot;:&quot;https://newsletter.semianalysis.com/p/google-we-have-no-moat-and-neither&quot;,&quot;publication_id&quot;:6349492,&quot;publication_name&quot;:&quot;SemiAnalysis&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!II4V!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88ad87ad-b5c5-4687-b13e-672f72725795_501x501.png&quot;,&quot;title&quot;:&quot;Google \&quot;We Have No Moat, And Neither Does OpenAI\&quot;&quot;,&quot;truncated_body_text&quot;:&quot;The text below is a very recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. We have verified its authenticity. The only modifications are formatting and removing links to internal web pages. The document is only the opinion of a Google employee, not the entire firm. We do not agree with what is written below, nor do other researchers we asked, but we will publish our opinions on this in a separate piece for subscribers. 
We simply are a vessel to share this document which raises some very interesting points.&quot;,&quot;date&quot;:&quot;2023-05-04T10:07:13.244Z&quot;,&quot;like_count&quot;:708,&quot;comment_count&quot;:10,&quot;bylines&quot;:[{&quot;id&quot;:21783302,&quot;name&quot;:&quot;Dylan Patel&quot;,&quot;handle&quot;:&quot;semianalysis&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/adcf9d53-769e-4d9e-8982-30c3dc8488dc_501x527.png&quot;,&quot;bio&quot;:&quot;Bridging the gap between business and the worlds most important industry.&quot;,&quot;profile_set_up_at&quot;:&quot;2021-07-02T16:10:19.044Z&quot;,&quot;reader_installed_at&quot;:&quot;2022-10-13T20:39:24.094Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:6510210,&quot;user_id&quot;:21783302,&quot;publication_id&quot;:6349492,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:6349492,&quot;name&quot;:&quot;SemiAnalysis&quot;,&quot;subdomain&quot;:&quot;semianalysis&quot;,&quot;custom_domain&quot;:&quot;newsletter.semianalysis.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Bridging the gap between the world's most important industry, semiconductors, and business.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/88ad87ad-b5c5-4687-b13e-672f72725795_501x501.png&quot;,&quot;author_id&quot;:21783302,&quot;primary_user_id&quot;:21783302,&quot;theme_var_background_pop&quot;:&quot;#67BDFC&quot;,&quot;created_at&quot;:&quot;2025-09-22T15:54:12.958Z&quot;,&quot;email_from_name&quot;:&quot;SemiAnalysis&quot;,&quot;copyright&quot;:&quot;Dylan Patel&quot;,&quot;founding_plan_name&quot;:&quot;Founding 
Member&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false,&quot;logo_url_wide&quot;:&quot;https://substackcdn.com/image/fetch/$s_!EaOc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba5e7f06-f479-4a16-9bb2-4f4ab2164824_6251x2084.png&quot;}}],&quot;twitter_screen_name&quot;:&quot;dylan522p&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:1000,&quot;status&quot;:{&quot;bestsellerTier&quot;:1000,&quot;subscriberTier&quot;:10,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:1000},&quot;paidPublicationIds&quot;:[892409,816241,48206,1425942,4220,69345,883883,1781836,2072443,2908560,3447,1980737,6001468,3086440,2033567,2244049,470017,2065897,1421308,3163767,2003179,19378,12889,5308801,3281011],&quot;subscriber&quot;:null}},{&quot;id&quot;:112610384,&quot;name&quot;:&quot;Afzal Ahmad&quot;,&quot;handle&quot;:&quot;afzalahmad&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!zpdA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64252422-2fee-4c48-aaf0-5d30a0deac8e_501x527.png&quot;,&quot;bio&quot;:null,&quot;profile_set_up_at&quot;:&quot;2022-11-23T09:32:35.528Z&quot;,&quot;reader_installed_at&quot;:&quot;2023-03-06T10:49:59.041Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:2915825,&quot;user_id&quot;:112610384,&quot;publication_id&quot;:2868656,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:2868656,&quot;name&quot;:&quot;Afzal 
Ahmad&quot;,&quot;subdomain&quot;:&quot;afzalahmad&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;&quot;,&quot;logo_url&quot;:null,&quot;author_id&quot;:112610384,&quot;primary_user_id&quot;:112610384,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2024-08-09T23:19:29.306Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Afzal Ahmad&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;profile&quot;,&quot;is_personal_mode&quot;:true,&quot;logo_url_wide&quot;:null}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:null,&quot;paidPublicationIds&quot;:[],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://newsletter.semianalysis.com/p/google-we-have-no-moat-and-neither?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!II4V!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88ad87ad-b5c5-4687-b13e-672f72725795_501x501.png" loading="lazy"><span class="embedded-post-publication-name">SemiAnalysis</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Google "We Have No Moat, And Neither Does OpenAI"</div></div><div class="embedded-post-body">The text below is a very 
recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. We have verified its authenticity. The only modifications are formatting and removing links to internal web pages. The document is only the opinion of a Google employee, not the entire firm. We do not agree with what is written below, nor do other researchers we asked, but we will publish our opinions on this in a separate piece for subscribers. We simply are a vessel to share this document which raises some very interesting points&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">3 years ago &#183; 708 likes &#183; 10 comments &#183; Dylan Patel and Afzal Ahmad</div></a></div><p>[5] <a href="https://www.zscaler.com/blogs/security-research/anthropic-claude-code-leak">https://www.zscaler.com/blogs/security-research/anthropic-claude-code-leak</a></p><p>[6] <a href="https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban">https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban</a></p>]]></content:encoded></item><item><title><![CDATA[Social network fragmentation is real]]></title><description><![CDATA[The exhaustion with "algorithmic performance" and the retreat into smaller, more intentional spaces defines the zeitgeist of 2026.]]></description><link>https://www.nelsx.com/p/social-network-fragmentation-is-real</link><guid isPermaLink="false">https://www.nelsx.com/p/social-network-fragmentation-is-real</guid><dc:creator><![CDATA[Dr. 
Nels Lindahl]]></dc:creator><pubDate>Sat, 24 Jan 2026 00:00:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/82e3a49d-25c0-4bc8-9978-19647aa623f4_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>A brief editorial note: This whole Substack writing project started back on January 29, 2021. Five years of missives are now in the books. For those of you who have been along for the journey, thank you. My cadence for this Substack is probably going to move to bi-weekly or monthly, so don&#8217;t panic if your Friday routine and mine are a little different going forward.</p><p>Thank you for tuning in to week 223 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;Social network fragmentation is real.&#8221;</p><p>All of the old internet is falling apart. We don&#8217;t really have a place where people gather in the aggregate anymore. Big events like the college football championship game and the Super Bowl gain a ton of attention. The big game Monday night was on ESPN, which is a cable network with a strong streaming presence. That cultural touchstone probably brought together a community of 20 million people or more for a few hours. Most of the events that bring people together are passing. Robert Putnam wrote a book called Bowling Alone and published it back in 2000 [1]. That treatise on the American community really questioned the fall of civic and social clubs. Perhaps the better question is what exactly people are doing with their time instead of coming together within that civic context. It would be very easy to just queue up some deep research tasks to figure out what people replaced civic and social clubs with and if it even matters. Right now I have Gemini working on that in the background. I remember reading Bowling Alone and then wanting to know more about possible solutions.
Maybe the breakdown in social networks and the promise they had for connection is the next chapter in the story.</p><p>None of the suggestions from Gemini were very interesting and that deep research was really just a wash; it was not very good reading in the end. Most of those deep research results are maybe informative, but otherwise devoid of wisdom and meaningful dialogue. As we enter 2026, the social landscape is moving toward what might be considered a more community-first era defined by a retreat from mass broadcast platforms into the digital world of smaller, niche enclaves like WhatsApp Communities, Discord, and Substack that prioritize unfiltered, human-curated dialogue over algorithmic performance. This shift coincides with a broader digital reckoning in which overall social media use is contracting, particularly among the youngest and oldest Americans, while the public sphere that remains becomes increasingly polarized and downright hard to even read these days due to the disproportionate activity of highly partisan users. It is like everybody in the middle got crowded out and people are just throwing things around.</p><p>A lot of research about thick and thin social capital exists. Thin social capital describes transactional and casual connections. Thicker social capital involves deeper, multi-layered relationships that are typically longer lasting and would be described as less passing in nature. Social media, for example, is a very passing type of connection. Some of that research is really quite thought provoking. Instead of leaning into that segment of academic research I&#8217;m going to focus on some of the practical, on-the-ground things that are happening. To counteract the isolation of these digital spaces, there is a growing return to physical, &#8220;in-real-life&#8221; (IRL) engagement, with brands and organizers hosting run clubs and interest-based meetups to foster authentic human connection that leads to thick social capital.
However, while these private enclaves successfully build social capital, the sociological glue connecting like-minded people may ultimately reinforce social silos rather than creating the capital required to unify a diverse society.</p><p>I gave Threads, the social network built by Meta, a chance when it started out, then migrated over to Bluesky. Over the last year, I even tried the notes function in Substack. Recently, I signed up for a Mastodon server set up by the folks who produce Hard Fork, the very popular podcast from the New York Times. None of these social networks really feels like the world showed up in real-time. That was the part of the old Twitter I missed the most. It was so timely and immediate. Things happened and people opined. I know that a community is built from place, interest, and circumstance. None of these triangulations of community helped describe how social media changes dynamics. The directionality of the connection is very asymmetrical within most social media or online connections. That is probably in the end what Putnam described, but it does not really describe what is next. That is the phase of community that will become defining within the next era of civil discourse, civility, and broader social fabric.</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Footnotes:</p><p>[1] Putnam, R. D. (2000). Bowling alone: The collapse and revival of American community. Simon &amp; Schuster.</p>]]></content:encoded></item><item><title><![CDATA[The year of orchestration and ecosystems]]></title><description><![CDATA[It&#8217;s pretty obvious now that last year was the year of scale based announcements.
That strategy has been questioned deeply based on the future of language models.]]></description><link>https://www.nelsx.com/p/the-year-of-orchestration-and-ecosystems</link><guid isPermaLink="false">https://www.nelsx.com/p/the-year-of-orchestration-and-ecosystems</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Sat, 17 Jan 2026 00:00:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9429dd70-edd3-4bc9-8bb4-3e770ce70564_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 222 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;The year of orchestration and ecosystems.&#8221;</p><p>Let&#8217;s distill some complex topics into more approachable views. Last year we were all overwhelmed by the excessive data center planning news that hyperscalers shared and kept sharing with us throughout the year. It&#8217;s pretty obvious now that last year was the year of scale based announcements. Compute was the new oil. That strategy has been questioned deeply based on the future of language models. Some people went as far as explaining that language is not intelligence. Billions of dollars are going to be spent or at least were planned to be spent. Plans have been made and a direction was set. We will see how much of this actually gets shovels into the ground. Data centers had zero cool years ago and executives made huge pushes to celebrate that the cloud was king. The pendulum swung back in a big way during 2025.</p><p>To that end the data centers and compute have to feed something going forward in terms of real delivery. That future compute spend is probably going to be smartphone driven. This is the ultimate hardware pivot that recognizes the $1,000 device in everyone&#8217;s pocket is the ultimate gateway, regardless of how much was spent on H100s last year. 
Desktop computers peaked around 2022 and the parts are just getting more and more expensive. Memory manufacturers are just leaning into the AI boom and abandoning consumer sales. I have been thinking deeply about the ubiquity of smartphones and to some extent tablets and how much of that stockpiled compute is going to end up being spent on requests generated in those ecosystems. They are the form factor that is going to drive all the AI usage. A lot of people thought home speakers were going to kickstart the revolution years ago, but that promise never delivered anything beyond timers and music. Certainly some players think glasses are going to be part of the computing experience as well. We are seeing a bunch of smartglasses start to show up. All that is happening, but the one thing I am singularly focused on at the moment is the latest announcement that Apple will be using Gemini, which means the ecosystem of smartphone AI will be dominated by Google [1]. It&#8217;s probably the last really big protected moat that both companies are carefully guarding.</p><p>Previously, I actually thought Apple would end up using a locally run model setup on the phone, which was inherently very private and kept your data localized. They were just going to have the more advanced calls go external in a private way and keep that overall usage minimized. Something must have changed in the overall expectations for Apple as they are now clearly going to call the cloud to get Gemini access. All the other players fighting for attention are now ultimately fighting against a true ecosystem moat protected by some pretty high and impenetrable walls. That is why I think this is the year orchestration displaced raw capability as the dominant source of advantage in the ecosystem of technology systems. The most important competitive outcomes are no longer determined by who trains the largest model, builds the fastest chip, or announces the highest qubit count.
They are determined by who can coordinate hardware, software, capital, policy, and distribution into a coherent, evolving system.</p><p>That is why this Apple and Google synergy will end up relegating all other competition outside the moat. A moat built on agreement and ecosystem. Really the best plan the other competitors in the space have is to get a browser or an app that can run on these devices. However, based on the inherent level of orchestration and deep integration that Apple has with its own products and Google has with the Android ecosystem, every other player is just an outsider looking into someone else&#8217;s walled-off ecosystem. I know people can download and use other apps, but the deep integration with the ecosystem is now off the table.</p><p>Across AI, cloud computing, semiconductors, quantum research, and energy infrastructure, the winners are increasingly those who control interfaces and sequencing. What is emerging now is not a collection of breakthroughs, but a new phase of technological power built on orchestration. People are going to see what can be done with the technology and how that extends to getting things done for them in real time. Yes, orchestrated ecosystems can form a moat, and in practice they are becoming one of the few durable moats. The critical distinction is that orchestration is not a single asset. It is a system of constraints, dependencies, and incentives that compounds over time. This is why companies that appear dominant at the model or product layer often discover that their position is far more fragile than expected.</p><p>A true orchestration moat emerges when an actor, probably Apple or Google, controls sequencing rather than capability. Capability can be copied, rented, or leapfrogged. Sequencing is harder to dislodge because it determines what gets built next, who gets paid, and which paths are economically viable.
When a company or coalition controls interfaces, standards, and timing across multiple layers, competitors may match individual components but still fail to displace the system.</p><p>This is where OpenAI illustrates the problem. OpenAI has strong leverage at the model layer, but that leverage is thin without full ecosystem control. Models are increasingly substitutable, distribution is mediated by partners, and infrastructure is rented rather than owned. OpenAI wishes it had a moat, but its value concentrates in a narrow slice of the stack that others can route around. That is not orchestration. That is a dependency.</p><p>By contrast, hyperscalers such as Microsoft and Google approach AI as an orchestration problem. They align chips, data centers, software platforms, pricing models, enterprise contracts, and regulatory posture into a single evolving system. Even when individual models underperform or become obsolete, the ecosystem persists. The moat is not intelligence. It is coordination.</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing.
Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Footnotes:</p><p>[1] <a href="https://www.theverge.com/ai-artificial-intelligence/860989/apple-google-gemini-siri-ai-deal-what-it-means">https://www.theverge.com/ai-artificial-intelligence/860989/apple-google-gemini-siri-ai-deal-what-it-means</a></p>]]></content:encoded></item><item><title><![CDATA[This space is flooded]]></title><description><![CDATA[My biggest posts in terms of popularity have been about being focused and my inherent distaste for social media.]]></description><link>https://www.nelsx.com/p/this-space-is-flooded</link><guid isPermaLink="false">https://www.nelsx.com/p/this-space-is-flooded</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Sat, 10 Jan 2026 00:01:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4eb13afd-57b7-411e-afdc-99d3f2dc3a69_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 221 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;This space is flooded.&#8221;</p><p>Apparently, Substack now boasts over 17,000 professional writers on the platform [1]. That many people making some income from Substack is actually rather amazing. It&#8217;s an ecosystem of professional writers for sure. My biggest posts in terms of popularity have been about being focused and my inherent distaste for social media. That feels about right. I have been sharing research notes here with the Lindahl Letter for 1,805 days spanning January 29, 2021 to January 8, 2026. That includes more than 200 missives. Later this month we will hit the 5 year mark from my very first Substack post. That feels like forever ago. Some of those early posts were actually pretty good. A lot of them were based around talks or ideas for talks.
Since that time, Substack itself has grown into a vast ecosystem of writers and readers that is thriving. It&#8217;s good to find a forum where people can share an interest in actually reading and writing. My favorite thing about the Substack ecosystem is that it has never really developed into anything beyond the basics of reading and writing.</p><p>ChatGPT and other chat based interfaces have changed our relationship with what was the open internet. A lot of people have opined about how the open internet is dying. We all pretty much feel that the open internet is in active decay. Instead of just Googling things and finding random parts of the internet the interaction is now more structured and only in one place or more accurately one interface that never extends to specific domains anymore. Even a basic Google search will typically reduce the overall search experience to an AI summary. We are now starting to get more than just text in those previous text only interfaces. Some of it is interactive and multimedia based. As a researcher, I have been super focused on how a knowledge graph could be used to check or augment this type of interaction. My guess is that within the next 6 months the interface will have entirely transformed, yet again, and the way people connect with the internet will be fundamentally different. It will be agent forward, probably browser based, and far more personally tailored.</p><p>Instead of building a knowledge graph to encompass the entire world in real time with an understanding of history it will instead be very individualized. That may change the way advertising and tracking get data about us in a very real way. It will change how data collection and databrokering occurs. We will see if that change ends up being a net positive or if the clearinghouse for tracking just changes locations. 
Not unlike the central premise of the television show Severance, our work and personal data ecosystems will end up being fundamentally segregated and unaware of each other. Smartphones were a window into the world around us and social media gained a ton of steam and then faltered. We are now moving toward a new framework of online engagement that is going to be interesting going forward. We are here at the start of something that is going to be a wild ride.</p><p>This week three stories caught my attention that I thought were worth sharing.</p><p>&#8220;Yann LeCun calls Alexandr Wang &#8216;inexperienced&#8217; and predicts more Meta AI employee departures&#8221; <a href="https://finance.yahoo.com/news/yann-lecun-calls-alexandr-wang-182614902.html">https://finance.yahoo.com/news/yann-lecun-calls-alexandr-wang-182614902.html</a></p><p>&#8220;Manus Joins Meta for Next Era of Innovation&#8221; <a href="https://manus.im/blog/manus-joins-meta-for-next-era-of-innovation">https://manus.im/blog/manus-joins-meta-for-next-era-of-innovation</a></p><p>&#8220;Groq and Nvidia Enter Non-Exclusive Inference Technology Licensing Agreement to Accelerate AI Inference at Global Scale&#8221;</p><p><a href="https://groq.com/newsroom/groq-and-nvidia-enter-non-exclusive-inference-technology-licensing-agreement-to-accelerate-ai-inference-at-global-scale">https://groq.com/newsroom/groq-and-nvidia-enter-non-exclusive-inference-technology-licensing-agreement-to-accelerate-ai-inference-at-global-scale</a></p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing.
Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Footnotes:</p><p>[1] <a href="https://backlinko.com/substack-users">https://backlinko.com/substack-users</a></p>]]></content:encoded></item><item><title><![CDATA[Welcome to 2026 and beyond]]></title><description><![CDATA[Listen now | Let&#8217;s establish the theoretical home base of this writing enterprise for 2026 which will be set on the foundation of digging into the edge of realized technology]]></description><link>https://www.nelsx.com/p/welcome-to-2026-and-beyond</link><guid isPermaLink="false">https://www.nelsx.com/p/welcome-to-2026-and-beyond</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Sat, 03 Jan 2026 00:00:40 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/183140124/a3a80b059e13a183120a42366fdda713.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 220 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;Welcome to 2026 and beyond.&#8221;</p><p>This last week has been about being a reflective practitioner and thinking about where we have been throughout the last year of the Lindahl Letter. This last year we covered research notes numbered from week 175 to 219. Back in June I did acknowledge a 56 day posting break in 2025 which is interesting to look back on now as an opportunity to reflect and build something substantial going forward. Toward the end of the year we got back into the groove of quality weekly missives which is good and something to continue. My focus on quantum, robotics, and AI seems to hold true to my roots of being generally interested in technology.</p><p>Overall, my general interest in technology is what drives my interest in lifelong continuous learning. 
With that context set, it is probably easy enough to set the expectation that in 2026 and beyond the Lindahl Letter will be targeted toward the production of weekly research notes that are accessible, targeted, and focused. These missives will require less than 10 minutes of a reader&#8217;s time and should be a clear value add in terms of gaining knowledge, understanding, and context for complex technical content.</p><p>Let&#8217;s establish the theoretical home base of this writing enterprise for 2026, which will be set on the foundation of digging into the edge of realized technology. That topic might sound familiar from week 212 of the Lindahl Letter. During that writing project we took a look at what technology is likely to be realized in the next 30 years. That coverage included looking at the metaverse, robotics, climate tech, space economy, biotech, synthetic biology, neurotech, and even fusion. I do believe that we will see quantum, robotics, and some AI mixed into that soup of potentially realized technology.</p><p>All of that technology will see advancement and it will certainly be moving toward the edge of becoming realized technology. That is fundamentally where it goes from being exploratory and research-driven to being in production out in the wild, where it will eventually become commoditized unless a clear winner breaks away and can hold onto a real advantage. I&#8217;m pretty skeptical about any of these technologies having a clear moat that allows that advantage. For the most part, once a group of people know how to do these things the technology will be realized and break out into wider use.</p><p>My primary weekly writing focus will be the Lindahl Letter and this is the place you will be able to find out what topics grab my attention and I consider to be worth sharing. My focus in the last 90 days has been heavily on quantum computing, which is understandable due to how close it is getting to be a realized technology.
We are on the edge of people figuring out how to demonstrate quantum supremacy for use cases and building these things into data centers as a clear value add for corporate customers and research labs that can afford to be a part of the journey. Outside of that, most of the major quantum computers that will be part of the early wave demonstrating the technology will be tied to either a research lab or corporate R&amp;D group.</p><p>Those early systems are starting to scale up, focusing on specific advances in the quantum space. My research project in that space helped me to focus on open-access nanofabs, national laboratories, commercial foundry services, and captive industrial fab sites. Each of those groups has different advantages and research interests. We will see where the ultimate breakthroughs end up coming from as the story unfolds toward realized quantum technology.</p><p>That is where we are heading throughout 2026. Thank you for being here for the journey and I look forward to learning more about technology and digging into the frontier of what will be realized this year. Overall the state of the Lindahl Letter is strong and we should be able to continue moving forward on our weekly journey of exploration into technology.</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!</p>]]></content:encoded></item><item><title><![CDATA[2025 End of Year Recap]]></title><description><![CDATA[Listen now | This week I&#8217;m just sharing my top 5 posts from 2025 and a brief note of happy holidays]]></description><link>https://www.nelsx.com/p/2025-end-of-year-recap</link><guid isPermaLink="false">https://www.nelsx.com/p/2025-end-of-year-recap</guid><dc:creator><![CDATA[Dr. 
Nels Lindahl]]></dc:creator><pubDate>Sat, 27 Dec 2025 00:01:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/181515940/bd0b2d75767850c8be387c45dfe1c96e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 219 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;2025 End of Year Recap.&#8221;</p><p>Thank you for being here! The Lindahl Letter this week started out as a Merry Christmas and Happy Holidays post and ended up just being an end of year recap. As the year comes to a close, I am taking a brief pause from publishing this week to spend time with family, recharge, and reflect on the remarkable conversations and ideas we have explored together throughout the year. If you are reading this one, then you certainly learned about AI/ML/AGI, robotics, and quantum computing this year. The Lindahl Letter will return to its regular schedule next year, and I am grateful for your continued readership, curiosity, and engagement. I wish you and yours a happy holiday season and a thoughtful, restorative start to the new year.</p><p>My top 5 posts of 2025 included:</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;0cc2f2d9-4b7e-40d0-b35b-445ddff6e6f8&quot;,&quot;caption&quot;:&quot;Thank you for being a part of the journey. This is week 186 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, &#8220;Living Intentionally: Your Blueprint for a Focused Life.&#8221;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Living Intentionally: Your Blueprint for a Focused Life&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:14578726,&quot;name&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;bio&quot;:&quot;Technology builder. Avid writer. Occasional speaker. Doctor of Philosophy. Treadmill enthusiast. #GoAvsGo&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5fa9fa71-d2c5-4291-b0f1-7cfc1149d81d_1748x1458.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-03-21T22:01:00.189Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8d53a290-1148-4a5d-8c48-82076bb8b656_1024x1024.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.nelsx.com/p/living-intentionally-your-blueprint&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:153590553,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:0,&quot;publication_id&quot;:271589,&quot;publication_name&quot;:&quot;The Lindahl Letter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!fmn0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35fb1684-9185-4a56-b118-e5ba1b08f151_1280x1280.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;c218d0f2-8fc2-4637-97c0-1813d9ecee5a&quot;,&quot;caption&quot;:&quot;Thank you for being a part of the journey. This is a special bonus edition of The Lindahl Letter publication. A new edition normally arrives every Friday. 
This week the topic under consideration for this special bonus edition The Lindahl Letter is, &#8220;Using the Manus AI agent to update a GitHub repo special edition.&#8221;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Using the Manus AI agent to update a GitHub repo special edition&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:14578726,&quot;name&quot;:&quot;Dr. Nels Lindahl&quot;,&quot;bio&quot;:&quot;Technology builder. Avid writer. Occasional speaker. Doctor of Philosophy. Treadmill enthusiast. #GoAvsGo&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5fa9fa71-d2c5-4291-b0f1-7cfc1149d81d_1748x1458.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-03-16T01:29:51.802Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/732d498b-112e-46d9-bd15-b8c4bc83feb9_1024x1024.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.nelsx.com/p/using-the-manus-ai-agent-to-update&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:159149745,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:3,&quot;comment_count&quot;:0,&quot;publication_id&quot;:271589,&quot;publication_name&quot;:&quot;The Lindahl Letter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!fmn0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35fb1684-9185-4a56-b118-e5ba1b08f151_1280x1280.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;dcff21d1-5361-4ae5-ad2d-133c6f049ed1&quot;,&quot;caption&quot;:&quot;Thank you for 
tuning in to week 209 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;Social media stopped being social.&#8221;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Social media stopped being social&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:14578726,&quot;name&quot;:&quot;Dr. Nels Lindahl&quot;,&quot;bio&quot;:&quot;Technology builder. Avid writer. Occasional speaker. Doctor of Philosophy. Treadmill enthusiast. #GoAvsGo&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5fa9fa71-d2c5-4291-b0f1-7cfc1149d81d_1748x1458.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-10-17T23:00:23.694Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d0ba7ad9-3442-4d65-8be5-668fca2b6f41_1536x1024.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.nelsx.com/p/social-media-stopped-being-social&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:175895583,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:5,&quot;comment_count&quot;:2,&quot;publication_id&quot;:271589,&quot;publication_name&quot;:&quot;The Lindahl Letter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!fmn0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35fb1684-9185-4a56-b118-e5ba1b08f151_1280x1280.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;9fc6a500-c6bf-46d9-a1fc-ef1b6ec85b05&quot;,&quot;caption&quot;:&quot;Thank you for tuning 
in to week 210 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;AI Is Burning Through Graphics Cards.&#8221;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI Is Burning Through Graphics Cards&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:14578726,&quot;name&quot;:&quot;Dr. Nels Lindahl&quot;,&quot;bio&quot;:&quot;Technology builder. Avid writer. Occasional speaker. Doctor of Philosophy. Treadmill enthusiast. #GoAvsGo&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5fa9fa71-d2c5-4291-b0f1-7cfc1149d81d_1748x1458.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-10-24T23:01:25.092Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/89605c4c-e574-4d3c-91f5-448a07a87364_1536x1024.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.nelsx.com/p/ai-is-burning-through-graphics-cards&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:176418989,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:3,&quot;comment_count&quot;:1,&quot;publication_id&quot;:271589,&quot;publication_name&quot;:&quot;The Lindahl Letter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!fmn0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35fb1684-9185-4a56-b118-e5ba1b08f151_1280x1280.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;3abe7231-d0ce-45c7-a274-c81e3b1bc515&quot;,&quot;caption&quot;:&quot;Thank you for 
tuning in to this audio-only podcast presentation. This is week 196 of the Lindahl Letter publication. A new edition arrives every Friday. This week, the topic under consideration for the Lindahl Letter is, &#8220;Is quantum computing becoming an establishment play?&#8221;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Is quantum computing becoming an establishment play?&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:14578726,&quot;name&quot;:&quot;Dr. Nels Lindahl&quot;,&quot;bio&quot;:&quot;Technology builder. Avid writer. Occasional speaker. Doctor of Philosophy. Treadmill enthusiast. #GoAvsGo&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5fa9fa71-d2c5-4291-b0f1-7cfc1149d81d_1748x1458.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-07-18T23:00:48.827Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3c49d573-7214-4fe2-ae31-333e7182582f_1536x1024.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.nelsx.com/p/is-quantum-computing-becoming-an&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:168217122,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:3,&quot;comment_count&quot;:0,&quot;publication_id&quot;:271589,&quot;publication_name&quot;:&quot;The Lindahl Letter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!fmn0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35fb1684-9185-4a56-b118-e5ba1b08f151_1280x1280.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>What&#8217;s next for the Lindahl Letter? 
New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!</p>]]></content:encoded></item><item><title><![CDATA[Nested learning and the illusion of depth]]></title><description><![CDATA[Listen now | Recent theoretical work argues that much of what is attributed to depth in modern neural networks can be explained by nested optimization dynamics and challenging assumptions]]></description><link>https://www.nelsx.com/p/nested-learning-and-the-illusion</link><guid isPermaLink="false">https://www.nelsx.com/p/nested-learning-and-the-illusion</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Sat, 20 Dec 2025 00:00:55 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/181515822/437b1b91bf5525225c97412580b692be.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 218 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;Nested learning and the illusion of depth.&#8221;</p><p>Just for fun with this nested learning paper we are evaluating today, I downloaded the 52 page PDF and uploaded it to my Google Drive to have Gemini create an audio overview of the paper. That is just a one button request these days. We have reached a point where we can easily listen to a paper recap with very little friction. It&#8217;s actually harder to get a complete reading of the PDF as an audio file. I had tried the Adobe Acrobat read aloud feature and I don&#8217;t really like the robotic output. Sometimes, I would rather listen to a paper than read it when I am trying to really think deeply about something. The 5 minutes of podcast audio Gemini spit out about the paper are embedded below. 
It&#8217;s interesting, to say the least, how quickly Gemini turned that paper into a short podcast. It&#8217;s entirely possible that my analysis might be less entertaining than the podcast Gemini created on the fly. You will be the judge of that one. </p><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;1ef96d1c-e5e7-44c7-a7ff-e614562f6191&quot;,&quot;duration&quot;:286.2498,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>This is a paper I actually printed out 2 pages per sheet using the double sided setting. That is how I used to read papers during graduate school. This paper had a few color elements, which is something my graduate school papers never really had. They were all monochromatic. I had to put on my reading glasses and hold the paper a little closer than I used to with the 2 pages per sheet printing. I&#8217;ll have to remember to just print one page per sheet next time around. I really only print out papers I want to keep in my stack of stuff. This one certainly fits that criterion.</p><p>Trying to make content that is accessible is one of the reasons that I have been recording audio for the Lindahl Letter. Sometimes listening to something is a great unlock. Other times, due to complexity and the diagrams included, you just have to read academic papers. I try to bring things forward without complex charts in a highly consumable way. My take on research notes is that they need to be generally understandable and communicate a clear take on whatever topic is being covered. The content has to be condensed into something that can be considered in 5-10 minutes. 
To that end, I&#8217;m going to do my best to bring this paper on nested learning to life today.</p><p>This paper matters, it really does, because the research presented undermines one of the core assumptions driving modern AI investment and the endless LLM building and training that has been occurring, namely that stacking more layers reliably produces qualitatively better intelligence [1]. The mantra to just keep scaling may fade away. If many so-called deep models collapse into shallow equivalents during training, then reported gains attributed to architectural depth may instead be artifacts of data scale, regularization, or optimization heuristics rather than true representational progress.</p><p>This has direct implications for benchmarking, since comparisons that reward parameter count or depth risk overstating advances that do not translate into more robust reasoning or generalization. It also affects hardware and infrastructure strategy, because enormous resources are being allocated to support depth that may not deliver proportional returns. At a deeper level, the result forces a reconsideration of what meaningful learning progress actually looks like, shifting attention from surface complexity toward mechanisms that introduce genuinely new inductive structure and adaptive behavior.</p><p>The long term impact of this call out is likely to be gradual rather than abrupt, but it meaningfully shifts the intellectual ground beneath current AI narratives [1]. The paper in question provides a formal vocabulary for a concern many researchers have held intuitively: that architectural depth has become a proxy metric for progress rather than a principled design choice. 
Over time, this reframing may influence how serious research groups evaluate models, placing more weight on identifiably distinct learning mechanisms, training dynamics, and robustness properties instead of raw scale.</p><p>It is unlikely to immediately change the minds of investors or vendors whose incentives favor larger systems, but it can shape academic norms, reviewer expectations, and eventually benchmark construction. Historically, results like this matter most not because they halt a paradigm, but because they constrain it, narrowing the space of credible claims and forcing future advances to justify themselves on grounds other than appearance and size.</p><p>This argument intersects directly with my broader concerns about interpretability and generalization. I am still curious about creating a combiner model, but this might change the mechanics of how that might ultimately work. If performance gains arise primarily from optimization dynamics rather than architectural expressivity, then claims about learned representations should be treated with caution. Apparent abstraction may not correspond to stable semantic structure but to transient equilibria shaped by training order, learning rates, and implicit regularization. This aligns with growing skepticism about whether large models truly learn hierarchical concepts or merely approximate them through iterative adjustment [2].</p><p>The implications extend beyond theory. Nested learning reframes debates about model scaling, architectural novelty, and transfer learning. It suggests that progress may come less from ever deeper networks and more from better understanding and controlling learning dynamics. This has practical consequences for reproducibility, safety, and deployment, since nested optimization can introduce path dependence and sensitivity to training regimes that are difficult to observe or audit.</p><p>In the broader context of the AI marketplace, this work reinforces a recurring theme. 
Fluency and performance do not necessarily imply understanding. As with recent neuroscience critiques of language models, nested learning highlights how impressive outputs can emerge from mechanisms that lack stable, interpretable internal structure [3]. That gap matters when systems are deployed in high stakes environments where reliability, robustness, and reasoning are essential.</p><p>We will see how this plays out in 2026 and what new research will ultimately shift the landscape.</p><p>Footnotes:</p><p>[1] Behrouz, A., Razaviyayn, M., Zhong, P., &amp; Mirrokni, V. &#8220;Nested learning: The illusion of deep learning architectures.&#8221; Advances in Neural Information Processing Systems 39 (2025). <a href="https://abehrouz.github.io/files/NL.pdf">https://abehrouz.github.io/files/NL.pdf</a></p><p>[2] Raghu, M., Poole, B., Kleinberg, J., Ganguli, S., &amp; Sohl-Dickstein, J. &#8220;On the expressive power of deep neural networks.&#8221; Proceedings of the 34th International Conference on Machine Learning (2017). <a href="https://arxiv.org/abs/1606.05336">https://arxiv.org/abs/1606.05336</a></p><p>[3] Riley, B. &#8220;Large language mistake: Cutting edge research shows language is not the same as intelligence.&#8221; The Verge (2025). <a href="https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems">https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems</a></p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. 
Make sure to stay curious, stay informed, and enjoy the week ahead!</p>]]></content:encoded></item><item><title><![CDATA[The great 2025 LLM vibe shift]]></title><description><![CDATA[Listen now | The landscape around large language models experienced a rapid and unexpected shift in 2025 as investors, researchers, and industry leaders collectively reassessed assumptions]]></description><link>https://www.nelsx.com/p/the-great-2025-llm-vibe-shift</link><guid isPermaLink="false">https://www.nelsx.com/p/the-great-2025-llm-vibe-shift</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Sat, 13 Dec 2025 00:00:47 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/181039481/a65ff235d31cf2fb47aae411fc9f2c54.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 217 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;The great 2025 LLM vibe shift.&#8221;</p><p>Vibe shifts came and went. People are certainly adding the word vibe to all sorts of things as the initial meaning has ironically faded. Casey Newton in the industry-standard-setting Platformer newsletter wrote about a big Silicon Valley vibe shift in 2022 [1]. It was a big thing, until it wasn&#8217;t. The really big, completely surreal LLM vibe shift happened toward the tail end of 2025. We went from extreme AI bubble talk to very clear, rational, and thoughtful perspectives on how LLMs won&#8217;t realize the promises that have been made. Keep in mind that market fears of an AI bubble are a separate question from whether LLMs are the technology that ultimately wins. 
All of the spending in the marketplace and the academic argument may get reconciled at some point, but we have not seen that happen in 2025.</p><p>The backward linkages of this regression in expected technological progress may not have been felt just yet, but the overall sentiment has shifted. The ship has indeed sailed. Let that sink in for a moment and consider just how big a shift in sentiment that really is, and how it just sort of happened. As OpenAI and Anthropic move toward inevitable IPOs, that shift will certainly change things. Maybe the single best written explanation of this is from Benjamin Riley, who wrote a piece for The Verge called, &#8220;Large language mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it&#8221; [2]. I owe a hat tip to Nilay Patel for recommending and helping surface that piece of writing.</p><p>I was skeptical at first, but then realized it was a really interesting and well reasoned read. I&#8217;ll admit I was also reading a 52 page paper from the Google Research team, &#8220;Nested Learning: The Illusion of Deep Learning Architectures,&#8221; around the same time, which made for an interesting paired reading assignment [3]. More to come on that paper and what it means in a later post. I&#8217;m still digesting the deeper implications of that paper.</p><p>Maybe to really sell the shift you could take a moment and listen to some of the recent words from OpenAI cofounder Ilya Sutskever. I&#8217;m still a little shocked by the casual way Ilya described how we moved from research and the great AI winter, to the age of scaling, and finally back to the age of research again. 
The idea that scaling based on compute or size of corpus won&#8217;t win the LLM race is a very big shift, and Ilya makes the point pretty casually during this video.</p><p>You will notice I have set the video to play about 1882 seconds into the conversation:</p><div id="youtube2-aR20FWCCjAs" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;aR20FWCCjAs&quot;,&quot;startTime&quot;:&quot;1882s&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/aR20FWCCjAs?start=1882s&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Maybe a video with a really sharp looking classic Linux Red Hat fedora in the background, featuring a conversation between Nilay Patel and IBM CEO Arvind Krishna, can help explain things. Don&#8217;t panic when you realize that the CEO of IBM, with some back of the envelope math, very clearly argues that all the data center investment has no real way to pay off in practical terms or to deliver an actual return on investment. Try not to flinch when he describes how, within 3-5 years, the same data centers could be built at a fraction of the current cost. Technology does just keep getting better. The argument makes sense. 
It is no less shocking given the billions being spent.</p><p>I set the video to start playing 502 seconds into the conversation.</p><div id="youtube2-iZgdGg8-T0M" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;iZgdGg8-T0M&quot;,&quot;startTime&quot;:&quot;502&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/iZgdGg8-T0M?start=502&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>The argument that I probably prefer in the long run is how quantum computing is going to change the entire scaling and compute landscape [4]. That long-term argument, which may end up mattering the most, suggests that quantum computing will transform the economics of scale and ultimately reset expectations about what is computationally feasible. Former Intel CEO Pat Gelsinger recently framed quantum as the force likely to deflate the AI bubble by altering the fundamental relationship between compute and capability, a claim that is gaining analytical support across the research community. It may prove an effective counter to the billions being spent on data centers for a late mover willing to make a prominent investment in the space, or the beneficiary could just end up being Alphabet, which is heavily invested in both TPUs and quantum chips [5].</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Footnotes:</p><p>[1] Newton, C. (2022). The vibe shift in Silicon Valley. Platformer. 
<a href="https://www.platformer.news/the-vibe-shift-in-silicon-valley/">https://www.platformer.news/the-vibe-shift-in-silicon-valley/</a></p><p>[2] Riley, B. (2025). Large language mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it. The Verge. <a href="https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems">https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems</a></p><p>[3] Behrouz, A., Razaviyayn, M., Zhong, P., &amp; Mirrokni, V. (2025). Nested learning: The illusion of deep learning architectures. In The Thirty-ninth Annual Conference on Neural Information Processing Systems. <a href="https://abehrouz.github.io/files/NL.pdf">https://abehrouz.github.io/files/NL.pdf</a></p><p>[4] Shrivastava, H. (2025). Quantum computing will pop the AI bubble, claims ex-Intel CEO Pat Gelsinger. Wccftech. 
<a href="https://wccftech.com/quantum-computing-will-pop-the-ai-bubble-claims-ex-intel-ceo-pat-gelsinger/">https://wccftech.com/quantum-computing-will-pop-the-ai-bubble-claims-ex-intel-ceo-pat-gelsinger/</a></p><p>[5] Yahoo Finance, &#8220;Alphabet CEO just said quantum computing could be close to a breakthrough,&#8221; <a href="https://finance.yahoo.com/news/alphabet-ceo-just-said-quantum-155229893.html">https://finance.yahoo.com/news/alphabet-ceo-just-said-quantum-155229893.html</a></p>]]></content:encoded></item><item><title><![CDATA[The 5 biggest unsolved problems in quantum computing]]></title><description><![CDATA[Listen now | This week&#8217;s analysis focuses on the five most critical problems that must be solved for quantum computing to reach fault tolerant, economically meaningful operation.]]></description><link>https://www.nelsx.com/p/the-5-biggest-unsolved-problems-in</link><guid isPermaLink="false">https://www.nelsx.com/p/the-5-biggest-unsolved-problems-in</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Sat, 06 Dec 2025 00:27:19 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/180443646/8137618a9f2a3500f6d7a793ec61a4a5.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 216 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;The biggest unsolved problems in quantum computing.&#8221;</p><p>The field of quantum computing has accelerated rapidly during the last decade, yet its most important breakthroughs remain incomplete. The core research challenges that stand between today&#8217;s prototypes and large scale, industrially relevant systems are now visible with unusual clarity. I think we are on the path to seeing this technology realized. 
These challenges are increasingly framed not as incremental milestones but as structural bottlenecks that shape the entire trajectory of the field. This week&#8217;s analysis focuses on the five most critical problems that must be solved for quantum computing to reach fault tolerant, economically meaningful operation. These gaps define where research investment, national strategy, and competitive advantage will be determined in the coming decade.</p><p>1. A fully fault tolerant logical qubit with logical error rates below threshold</p><p>The first and most fundamental problem is the absence of a fully fault tolerant logical qubit. I know, I know, people are getting close, but this technology is not fully realized just yet. Theoretical thresholds for fault tolerance are well studied, and progress has been reported through surface codes, low density parity check codes, and recent advances in magic state distillation. Several groups have demonstrated logical qubits whose performance exceeds their underlying physical qubits, and some trapped-ion experiments now show better than break-even behavior under repeated rounds of error correction. However, no team has yet realized a logical qubit that maintains below-threshold logical error rates in a fully integrated setting that combines encoding, stabilizer measurement, real time decoding, and continuous correction across arbitrarily deep circuits. Experiments such as the University of Osaka&#8217;s zero level magic state distillation results and Quantinuum&#8217;s recent logical circuit demonstrations illustrate meaningful progress, yet a complete fault tolerant logical qubit build rolling off the assembly line has not been achieved [1]. This missing element prevents reliable execution of deep circuits and stands as the central research challenge of the field. I am also tracking a leaderboard of efforts aimed at increasing the number and stability of logical qubits as new systems emerge [2].</p><p>2. 
A scalable and manufacturable quantum architecture that supports thousands of high fidelity qubits</p><p>The second unsolved problem is the absence of a scalable, manufacturable quantum architecture capable of supporting thousands of high fidelity qubits. Superconducting platforms continue to face wiring congestion, cross talk, and fabrication variability across large wafers, which limits reproducibility at scale. Trapped-ion systems achieve some of the highest gate fidelities reported, but their physical footprint, control volume, and relatively slow gate speeds constrain system growth. Neutral atom arrays offer large qubit counts, yet they have not demonstrated uniform, high fidelity two qubit gates across arrays large enough to support fault tolerant codes. Photonic and spin qubits continue to advance but remain earlier in their development for universal, gate based architectures. Across all platforms, the transition from laboratory systems to repeatable, wafer scale manufacturing has not occurred. Most resource estimates indicate that tens of thousands of physical qubits will be required for practically useful, error corrected applications, and no architecture is yet positioned to deliver this scale with consistent fidelity. I am tracking universal gate based physical qubit leaders closely, and I expect to see significant shifts in 2026 as fabrication strategies evolve [3].</p><p>3. Integrated cryogenic classical control systems capable of real time decoding at scale</p><p>The third unsolved problem concerns the integration of classical control systems capable of operating efficiently at cryogenic temperatures. Quantum processors rely on classical electronics to generate precise control pulses, read measurement outcomes, and perform real time decoding. As devices grow, these classical requirements become a dominant engineering bottleneck. 
Current systems depend on extensive room temperature hardware and thousands of coaxial lines, an approach that is not viable for scaling beyond a few hundred qubits. Research into cryogenic CMOS, multiplexed readout architectures, and fast low noise routing has shown meaningful progress, and prototype decoders have demonstrated sub microsecond performance. However, the field still lacks a fully integrated classical to quantum control stack that can operate near the device, support large scale decoding throughput, and eliminate the wiring overhead required for million channel systems. Solving this challenge is as essential as improving qubit fidelity, because fault tolerant computation will require tightly coupled classical and quantum subsystems functioning in real time at cryogenic depths.</p><p>4. A modular, networked quantum architecture with reliable chip to chip entanglement</p><p>The fourth major unsolved problem involves modularity and quantum networking. Large scale quantum computers will not be monolithic systems. They will require distributed architectures in which multiple chips or modules exchange entanglement to support error corrected computation across larger systems. Research groups have demonstrated chip to chip photonic links, heralded entanglement generation, and short range coupling between trapped-ion and superconducting devices, but these demonstrations remain small scale and experimental. No team has yet produced a modular architecture capable of sustaining reliable inter module entanglement rates, routing operations, and error corrected logical circuits across networked components. A practical quantum interconnect, whether photonic or microwave based, would redefine system design by enabling large logical qubit counts without relying on a single monolithic wafer. 
Developing these networked architectures is now seen as one of the highest value targets for national research programs, because modularity is likely the only viable path to systems with millions of physical qubits.</p><p>5. A verified quantum advantage tied to a real scientific or industrial workload</p><p>The fifth unsolved problem is the absence of a widely accepted, independently verified quantum advantage tied to a real scientific or industrial workload. Quantum supremacy experiments have demonstrated that certain random circuit sampling tasks are exceptionally difficult for classical systems to simulate, but these tasks do not translate into chemistry, materials, optimization, or cryptography workloads. Several vendors have recently reported domain specific quantum advantages, including applications in quantum navigation and narrow optimization tasks, but these demonstrations have not yet achieved broad community validation or independent replication under strict verification and resource accounting. A robust demonstration of advantage requires a computation that is infeasible for classical systems within realistic time and energy constraints, produces an output that can be meaningfully verified, and operates using real hardware error rates rather than idealized gates. Achieving this milestone would mark a decisive shift in the strategic landscape of the field and would accelerate commercial investment into fault tolerant platforms.</p><p>Together, these five problems outline the most important open questions I&#8217;m tracking in quantum computing today. This is based on my research interests. Please feel free to let me know if something else jumps out when you read this list. Each topic represents an opportunity for technical leadership, research investment, and industrial strategy. That does not mean my list is complete. It&#8217;s directionally accurate for late 2025, but things in the quantum computing space are changing rapidly. 
The elements called out here also define the hurdles that stand between early laboratory demonstrations and the large-scale quantum platforms required for transformative scientific progress.</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Links I&#8217;m sharing this week!</p><p>Watching Linus Torvalds build a computer may not have been on your watch list for 2025, but I&#8217;m sharing that link anyway. I truly enjoyed watching this video.</p><div id="youtube2-mfv0V1SxbNA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;mfv0V1SxbNA&quot;,&quot;startTime&quot;:&quot;&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/mfv0V1SxbNA?start=&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>This video made me chuckle several times and was delightful.</p><div id="youtube2-kZ5Jq2Is888" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;kZ5Jq2Is888&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/kZ5Jq2Is888?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Footnotes:</p><p>[1] Itogawa, T., Takada, Y., Hirano, Y., &amp; Fujii, K. (2024). 
Even more efficient magic state distillation by zero-level distillation. arXiv preprint arXiv:2403.03991. <a href="http://arxiv.org/pdf/2403.03991">http://arxiv.org/pdf/2403.03991</a></p><p>[2] Top quantum computers by logical qubit</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:178276434,&quot;url&quot;:&quot;https://www.nels.ai/p/top-quantum-computers-by-logical&quot;,&quot;publication_id&quot;:1958641,&quot;publication_name&quot;:&quot;nels.ai | Research Lab&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!-L0K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F454f8bd8-0b29-4411-8d7f-f122001b9f10_1024x1024.png&quot;,&quot;title&quot;:&quot;Top quantum computers by logical qubit &quot;,&quot;truncated_body_text&quot;:&quot;Yesterday, we looked at the physical gate-based qubit leaderboard that I have been tracking for the last few months [1]. Today, as promised we are pivoting to look into the largest logical qubit based systems. This updated view reframes what it means to be the &#8220;largest&#8221; quantum computer. I&#8217;m still more interested in who will run Shor&#8217;s algorithm and dem&#8230;&quot;,&quot;date&quot;:&quot;2025-11-07T15:06:41.729Z&quot;,&quot;like_count&quot;:0,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:14578726,&quot;name&quot;:&quot;Dr. Nels Lindahl&quot;,&quot;handle&quot;:&quot;nelslindahl&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5fa9fa71-d2c5-4291-b0f1-7cfc1149d81d_1748x1458.jpeg&quot;,&quot;bio&quot;:&quot;Technology builder. Avid writer. Occasional speaker. Doctor of Philosophy. Treadmill enthusiast. 
#GoAvsGo&quot;,&quot;profile_set_up_at&quot;:&quot;2021-09-18T17:00:30.831Z&quot;,&quot;reader_installed_at&quot;:&quot;2022-11-01T23:39:09.022Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:246432,&quot;user_id&quot;:14578726,&quot;publication_id&quot;:271589,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:271589,&quot;name&quot;:&quot;The Lindahl Letter&quot;,&quot;subdomain&quot;:&quot;nelslindahl&quot;,&quot;custom_domain&quot;:&quot;www.nelsx.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Weekly insights at the intersection of technology, artificial intelligence, and modernity&#8212;exploring how innovation shapes our world every Friday.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/35fb1684-9185-4a56-b118-e5ba1b08f151_1280x1280.png&quot;,&quot;author_id&quot;:14578726,&quot;primary_user_id&quot;:14578726,&quot;theme_var_background_pop&quot;:&quot;#9D6FFF&quot;,&quot;created_at&quot;:&quot;2021-01-27T00:44:44.784Z&quot;,&quot;email_from_name&quot;:&quot;Nels Lindahl from The Lindahl Letter&quot;,&quot;copyright&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:1950363,&quot;user_id&quot;:14578726,&quot;publication_id&quot;:1958641,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:1958641,&quot;name&quot;:&quot;nels.ai | Research Lab&quot;,&quot;subdomain&quot;:&quot;nelsai&quot;,&quot;custom_domain&quot;:&quot;www.nels.ai&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Research notes on quantum, robotics, and AI systems.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/454f8bd8-0b29-4411-8d7f-f122001b9f10_1024x1024.png&quot;,&quot;author_id&quot;:14578726,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#67BDFC&quot;,&quot;created_at&quot;:&quot;2023-09-17T23:51:13.435Z&quot;,&quot;email_from_name&quot;:&quot;nels.ai&quot;,&quot;copyright&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;founding_plan_name&quot;:&quot;Founding Member&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;paused&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:5381397,&quot;user_id&quot;:14578726,&quot;publication_id&quot;:5275742,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:5275742,&quot;name&quot;:&quot;Nels Lindahl &#8212; Functional Journal&quot;,&quot;subdomain&quot;:&quot;functionaljournal&quot;,&quot;custom_domain&quot;:&quot;www.nelslindahl.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;A weblog created by Dr. Nels Lindahl featuring writings and thoughts&#8230;&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3480860-225f-4eef-9db6-d2ff754ad257_960x960.png&quot;,&quot;author_id&quot;:14578726,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-06-08T22:08:48.622Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:5450843,&quot;user_id&quot;:14578726,&quot;publication_id&quot;:5343721,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:5343721,&quot;name&quot;:&quot;Civic Honors&quot;,&quot;subdomain&quot;:&quot;civichonors&quot;,&quot;custom_domain&quot;:&quot;www.civichonors.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Graduation with Civic Honors: Unlock the Power of Community Opportunity&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f61161c9-1a76-45eb-8fad-86a4e866e99e_1024x1024.png&quot;,&quot;author_id&quot;:14578726,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-06-15T13:44:52.518Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;nelslindahl&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:null,&quot;paidPublicationIds&quot;:[],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.nels.ai/p/top-quantum-computers-by-logical?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!-L0K!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F454f8bd8-0b29-4411-8d7f-f122001b9f10_1024x1024.png" loading="lazy"><span class="embedded-post-publication-name">nels.ai | Research Lab</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Top quantum computers by logical qubit </div></div><div class="embedded-post-body">Yesterday, we looked at the physical gate-based qubit leaderboard that I have been tracking for the last few months [1]. Today, as promised we are pivoting to look into the largest logical qubit based systems. This updated view reframes what it means to be the &#8220;largest&#8221; quantum computer. 
I&#8217;m still more interested in who will run Shor&#8217;s algorithm and dem&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">6 months ago &#183; Dr. Nels Lindahl</div></a></div><p>[3] Updating my top 10 quantum computer leaderboard </p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:178187215,&quot;url&quot;:&quot;https://www.nels.ai/p/updating-my-top-10-quantum-computer&quot;,&quot;publication_id&quot;:1958641,&quot;publication_name&quot;:&quot;nels.ai | Research Lab&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!-L0K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F454f8bd8-0b29-4411-8d7f-f122001b9f10_1024x1024.png&quot;,&quot;title&quot;:&quot;Updating my top 10 quantum computer leaderboard&quot;,&quot;truncated_body_text&quot;:&quot;Back in July, which feels like a long time ago with the pace of quantum industry press releases, I produced a top-10 quantum computer leaderboard to catalog the leading systems in operation [1]. Some of these builds are prototypes or experimental, but they collectively demonstrate what is currently possible. In that list, I limited inclusion to universa&#8230;&quot;,&quot;date&quot;:&quot;2025-11-06T15:37:07.660Z&quot;,&quot;like_count&quot;:1,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:14578726,&quot;name&quot;:&quot;Dr. Nels Lindahl&quot;,&quot;handle&quot;:&quot;nelslindahl&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5fa9fa71-d2c5-4291-b0f1-7cfc1149d81d_1748x1458.jpeg&quot;,&quot;bio&quot;:&quot;Technology builder. Avid writer. Occasional speaker. Doctor of Philosophy. Treadmill enthusiast. 
#GoAvsGo&quot;,&quot;profile_set_up_at&quot;:&quot;2021-09-18T17:00:30.831Z&quot;,&quot;reader_installed_at&quot;:&quot;2022-11-01T23:39:09.022Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:246432,&quot;user_id&quot;:14578726,&quot;publication_id&quot;:271589,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:271589,&quot;name&quot;:&quot;The Lindahl Letter&quot;,&quot;subdomain&quot;:&quot;nelslindahl&quot;,&quot;custom_domain&quot;:&quot;www.nelsx.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Weekly insights at the intersection of technology, artificial intelligence, and modernity&#8212;exploring how innovation shapes our world every Friday.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/35fb1684-9185-4a56-b118-e5ba1b08f151_1280x1280.png&quot;,&quot;author_id&quot;:14578726,&quot;primary_user_id&quot;:14578726,&quot;theme_var_background_pop&quot;:&quot;#9D6FFF&quot;,&quot;created_at&quot;:&quot;2021-01-27T00:44:44.784Z&quot;,&quot;email_from_name&quot;:&quot;Nels Lindahl from The Lindahl Letter&quot;,&quot;copyright&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:1950363,&quot;user_id&quot;:14578726,&quot;publication_id&quot;:1958641,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:1958641,&quot;name&quot;:&quot;nels.ai | Research Lab&quot;,&quot;subdomain&quot;:&quot;nelsai&quot;,&quot;custom_domain&quot;:&quot;www.nels.ai&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Research notes on quantum, robotics, and AI systems.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/454f8bd8-0b29-4411-8d7f-f122001b9f10_1024x1024.png&quot;,&quot;author_id&quot;:14578726,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#67BDFC&quot;,&quot;created_at&quot;:&quot;2023-09-17T23:51:13.435Z&quot;,&quot;email_from_name&quot;:&quot;nels.ai&quot;,&quot;copyright&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;founding_plan_name&quot;:&quot;Founding Member&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;paused&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:5381397,&quot;user_id&quot;:14578726,&quot;publication_id&quot;:5275742,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:5275742,&quot;name&quot;:&quot;Nels Lindahl &#8212; Functional Journal&quot;,&quot;subdomain&quot;:&quot;functionaljournal&quot;,&quot;custom_domain&quot;:&quot;www.nelslindahl.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;A weblog created by Dr. Nels Lindahl featuring writings and thoughts&#8230;&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3480860-225f-4eef-9db6-d2ff754ad257_960x960.png&quot;,&quot;author_id&quot;:14578726,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-06-08T22:08:48.622Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:5450843,&quot;user_id&quot;:14578726,&quot;publication_id&quot;:5343721,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:5343721,&quot;name&quot;:&quot;Civic Honors&quot;,&quot;subdomain&quot;:&quot;civichonors&quot;,&quot;custom_domain&quot;:&quot;www.civichonors.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Graduation with Civic Honors: Unlock the Power of Community Opportunity&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f61161c9-1a76-45eb-8fad-86a4e866e99e_1024x1024.png&quot;,&quot;author_id&quot;:14578726,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-06-15T13:44:52.518Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Dr. 
Nels Lindahl&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;nelslindahl&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:null,&quot;paidPublicationIds&quot;:[],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.nels.ai/p/updating-my-top-10-quantum-computer?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!-L0K!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F454f8bd8-0b29-4411-8d7f-f122001b9f10_1024x1024.png" loading="lazy"><span class="embedded-post-publication-name">nels.ai | Research Lab</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Updating my top 10 quantum computer leaderboard</div></div><div class="embedded-post-body">Back in July, which feels like a long time ago with the pace of quantum industry press releases, I produced a top-10 quantum computer leaderboard to catalog the leading systems in operation [1]. Some of these builds are prototypes or experimental, but they collectively demonstrate what is currently possible. 
In that list, I limited inclusion to universa&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">6 months ago &#183; 1 like &#183; Dr. Nels Lindahl</div></a></div>]]></content:encoded></item><item><title><![CDATA[Process capture and the future of knowledge management]]></title><description><![CDATA[Listen now | The rise of agentic AI and workflow-integrated assistants alters the knowledge landscape by making it possible to synthesize procedural knowledge in real time.]]></description><link>https://www.nelsx.com/p/process-capture-and-the-future-of</link><guid isPermaLink="false">https://www.nelsx.com/p/process-capture-and-the-future-of</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Sat, 29 Nov 2025 00:00:45 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/179368726/e179ae712b26f62c0c2603675f7b0834.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 215 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;Process capture and the future of knowledge management.&#8221;</p><p>The history of knowledge management has been shaped by repeated attempts to store, retrieve, and reuse organizational insight. So much institutional knowledge gets lost and discarded as organizations change and people shift roles or exit. People within organizations learn through the everyday practice of getting things done. It&#8217;s only recently that systems have begun augmenting and sometimes automating those processes. Early systems focused on document repositories, and later platforms emphasized collaboration, tagging, and collective intelligence. We now find ourselves in a period where knowledge management converges with automated workflows and computational assistants that can observe, extract, and generalize decision patterns. 
We are seeing a major change in the ability to observe and capture processes, with systems now able to capture and catalog what is happening as work gets done. This creates an interesting inflection point: the system may store the knowledge, but the users of that knowledge become dependent on the system, and capturing a process does not mean it is understood in terms of the big why question. Scholars have noted that the operational layer of organizational memory is often lost because it resides in informal practices rather than formal documentation. The shift toward embedded and automated capture offers a remedy to that problem.</p><p>The rise of agentic AI and workflow-integrated assistants alters the knowledge landscape by making it possible to synthesize procedural knowledge in real time. Instead of relying on teams to manually update wikis or define operating procedures, modern systems can extract key steps from repeated actions, identify dependencies, and flag anomalies that deviate from observed patterns. This transforms knowledge management from a static library into a dynamic computational environment. What exactly happens to this store of knowledge over time is something to consider going forward. Supervising the repository will require deep knowledge of the systems that now maintain it. Maintaining and refining it will be the difference between sustained institutional knowledge and temporary model advantages that fade with the next update. Recent studies on digital trace data argue that high-fidelity observational streams can significantly improve the accuracy of organizational models. When this data flows into agents capable of modeling tasks, predicting outcomes, and recommending actions, the role of knowledge management shifts from storage to orchestration.</p><p>Process capture also introduces new opportunities for long-horizon learning systems. This is the part I&#8217;m really interested in understanding. 
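</p><p>The extraction of key steps from repeated actions and the flagging of anomalies described above is close to what the process-mining literature calls discovering a directly-follows graph from an event log. Here is a minimal sketch of that idea in Python; the event log, case identifiers, and activity names below are all invented for illustration and do not come from any real system:</p>

```python
from collections import defaultdict

# Hypothetical event log: (case_id, activity) pairs, already ordered by
# timestamp within each case. These records are illustrative only.
events = [
    ("c1", "open_ticket"), ("c1", "triage"), ("c1", "resolve"),
    ("c2", "open_ticket"), ("c2", "triage"), ("c2", "escalate"), ("c2", "resolve"),
    ("c3", "open_ticket"), ("c3", "resolve"),  # skips triage: a deviation
]

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b in a case."""
    traces = defaultdict(list)
    for case, activity in log:
        traces[case].append(activity)
    counts = defaultdict(int)
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return dict(counts)

def deviations(log, min_support=2):
    """Flag transitions observed fewer than min_support times as potential anomalies."""
    return [pair for pair, n in directly_follows(log).items() if n < min_support]

print(directly_follows(events))  # ("open_ticket", "triage") appears twice
print(deviations(events))        # rare paths such as ("open_ticket", "resolve")
```

<p>Real workflow-integrated assistants would work from far richer observational traces, but the core move of turning repeated observed actions into a structured process model follows this shape.</p><p>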
The orchestration layer has to have some background learning and storage that runs periodically. When workflows are automatically translated into structured representations, organizations can run simulations, perform optimization, and enable higher levels of task autonomy. These capabilities begin to resemble continuous improvement environments that merge human judgment with machine-refined operational insight. Researchers have observed that structured process models can improve downstream automation and decision support, particularly in complex enterprise settings where procedures evolve rapidly. This suggests that the next phase of knowledge management will involve systems that not only store information but also refine it through computational analysis and real world feedback. It&#8217;s in that refinement that the magic might happen in terms of real knowledge management.</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. 
Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Links I&#8217;m sharing this week!</p><p><a href="https://www.computerworld.com/article/4094557/the-world-is-split-between-ai-sloppers-and-stoppers.html">https://www.computerworld.com/article/4094557/the-world-is-split-between-ai-sloppers-and-stoppers.html</a></p><div id="youtube2-d95J8yzvjbQ" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;d95J8yzvjbQ&quot;,&quot;startTime&quot;:&quot;&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/d95J8yzvjbQ?start=&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div id="youtube2-aR20FWCCjAs" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;aR20FWCCjAs&quot;,&quot;startTime&quot;:&quot;1880&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/aR20FWCCjAs?start=1880&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>This video is a super interesting look at a number we don&#8217;t normally question on a daily basis. The delivery style is a bit bombastic, but the fact check on the argument is interesting. You know I enjoy numbers and was really curious how this was calculated. 
</p><div id="youtube2-m-TA1cBh2o4" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;m-TA1cBh2o4&quot;,&quot;startTime&quot;:&quot;&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/m-TA1cBh2o4?start=&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>That video referenced this widely shared analysis from Michael W. Green on Substack. </p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:179492574,&quot;url&quot;:&quot;https://www.yesigiveafig.com/p/part-1-my-life-is-a-lie&quot;,&quot;publication_id&quot;:1272022,&quot;publication_name&quot;:&quot;Yes, I give a fig... thoughts on markets from Michael Green&quot;,&quot;publication_logo_url&quot;:null,&quot;title&quot;:&quot;Part 1: My Life Is a Lie&quot;,&quot;truncated_body_text&quot;:&quot;We&#8217;re going to largely skip markets again, because the sweater is rapidly unraveling in other areas as I pull on threads. Suffice it to say that the market is LARGELY unfolding as I had expected &#8212; credit stress is rising, particularly in the tech sector. Many are now pointing to the rising CDS for Oracle as the deterioration in &#8220;AI&#8221; balance sheets accelerates. CDS was also JUST introduced for META &#8212; it traded at 56, slightly worse than the aggregate IG CDS at 54.5 (itself up from 46 since I began discussing this topic):&quot;,&quot;date&quot;:&quot;2025-11-23T14:48:52.646Z&quot;,&quot;like_count&quot;:2629,&quot;comment_count&quot;:140,&quot;bylines&quot;:[{&quot;id&quot;:36903231,&quot;name&quot;:&quot;Michael W. 
Green&quot;,&quot;handle&quot;:&quot;michaelwgreen&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7eef165c-d741-477a-a7f6-9c9996dd4a4a_310x356.jpeg&quot;,&quot;bio&quot;:&quot;Michael is Chief Strategist and Portfolio Manager for Simplify Asset Management. Michael has been noted for his work as a market theoretician and financial media participant. He is a graduate of the University of Pennsylvania and a CFA holder.&quot;,&quot;profile_set_up_at&quot;:&quot;2022-08-29T15:17:41.986Z&quot;,&quot;reader_installed_at&quot;:&quot;2023-01-03T14:18:53.323Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:1229786,&quot;user_id&quot;:36903231,&quot;publication_id&quot;:1272022,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:1272022,&quot;name&quot;:&quot;Yes, I give a fig... thoughts on markets from Michael Green&quot;,&quot;subdomain&quot;:&quot;michaelwgreen&quot;,&quot;custom_domain&quot;:&quot;www.yesigiveafig.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Thoughts on financial markets and economics from Simplify's Chief Strategist&quot;,&quot;logo_url&quot;:null,&quot;author_id&quot;:36903231,&quot;primary_user_id&quot;:36903231,&quot;theme_var_background_pop&quot;:&quot;#FF0000&quot;,&quot;created_at&quot;:&quot;2022-12-29T22:13:37.444Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Michael W. 
Green&quot;,&quot;founding_plan_name&quot;:&quot;Institutional Subscriber&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;profplum99&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:1000,&quot;status&quot;:{&quot;bestsellerTier&quot;:1000,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:1000},&quot;paidPublicationIds&quot;:[1021975],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.yesigiveafig.com/p/part-1-my-life-is-a-lie?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><span></span><span class="embedded-post-publication-name">Yes, I give a fig... thoughts on markets from Michael Green</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Part 1: My Life Is a Lie</div></div><div class="embedded-post-body">We&#8217;re going to largely skip markets again, because the sweater is rapidly unraveling in other areas as I pull on threads. Suffice it to say that the market is LARGELY unfolding as I had expected &#8212; credit stress is rising, particularly in the tech sector. Many are now pointing to the rising CDS for Oracle as the deterioration in &#8220;AI&#8221; balance sheets accelerates. 
CDS was also JUST introduced for META &#8212; it traded at 56, slightly worse than the aggregate IG CDS at 54.5 (itself up from 46 since I began discussing this topic&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">5 months ago &#183; 2629 likes &#183; 140 comments &#183; Michael W. Green</div></a></div>]]></content:encoded></item><item><title><![CDATA[The great manufacturing reset]]></title><description><![CDATA[Listen now | We are on the verge of the next great realized technology where robotics and fabrication are intersecting. Filament based 3D printers are now ubiquitous and we are starting to see humanoid robots.]]></description><link>https://www.nelsx.com/p/the-great-manufacturing-reset</link><guid isPermaLink="false">https://www.nelsx.com/p/the-great-manufacturing-reset</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Sat, 22 Nov 2025 00:01:07 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/179255016/f27c2505cbc14c18689c348a086be4ef.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 214 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;The great manufacturing reset.&#8221;</p><p>Boston Dynamics captured public imagination when they introduced Spot the dog-like robot back in 2016. Things have changed. Robots that walk around are beginning to enter the commercial landscape, and new entrants continue to appear. A humanoid robot product from Russia built by the company Idol surfaced last week [1]. Other companies such as Agility Robotics (USA), Figure AI (USA), Boston Dynamics (USA), UBTECH (China), and 1X Technologies (Norway/USA) are all working toward delivering humanoid robots. 
Optimus, the Tesla bot introduced conceptually in 2021 and now in its third-generation prototype, is also part of the conversation, although it remains an internal program that has not yet reached commercial deployment.</p><p>The stage is now set, and we are at a point where robotics, autonomous fabrication systems, and advanced materials are converging into a new industrial baseline. The last decade brought low-cost filament printers into hobbyist and commercial spaces at massive scale, and the next decade is poised to move far beyond that early wave. Industrial additive manufacturing has already expanded into metals, composites, and high-performance polymers, with global revenue expected to accelerate over the coming years. At the same time, the field is absorbing rapid advancements in AI-enabled calibration, defect detection, and real-time optimization, allowing machinery to tune production parameters autonomously. That capability shifts what it means to operate a modern fabrication workflow. Things are changing rapidly.</p><p>Alongside these developments, humanoid and semi-autonomous industrial robots are transitioning from research demonstrations to contract manufacturing deployments. Several builders are scaling up pilot programs in which general-purpose robots support assembly, materials handling, and repetitive manufacturing tasks. These systems benefit from advances in reinforcement learning, enhanced sensors, and cloud-based model updates. Industrial robotics shipments are increasing rapidly, driven by global demand for flexible production lines and labor-augmentation strategies. The supply side of robotics is not only expanding but also becoming modular and more interoperable across fabrication environments.</p><p>The most significant shift may come from the emergence of machines that build machines. That is a topic I&#8217;m focused on understanding. Historically, tooling design required long lead times, significant manual labor, and specialized expertise. 
Today, automated CAM pipelines, printable tooling, adaptive CNC systems, and robotically tended fabrication cells allow factories to generate and regenerate their own production processes. Some aerospace and automotive facilities already deploy these closed-loop systems to create fixtures, jigs, and replacement components internally. This form of self-manufacturing reduces dependency on external suppliers and removes friction from engineering iteration cycles. We are moving toward a world where design, testing, and tooling are all integrated within an AI-guided, robotics-driven feedback loop. That integration is the foundation of the great manufacturing reset.</p><p>For the United States, these technologies open a realistic path to reshoring custom and small-batch manufacturing in ways that were not economically viable during the offshoring wave of the late twentieth century. Rising labor costs in traditional manufacturing hubs, geopolitical risk, and supply chain disruptions have already encouraged firms to reconsider where they build things. Additive manufacturing and flexible robotics change the cost structure by reducing reliance on large minimum-order quantities, expensive hard tooling, and long logistics chains. A factory that can print tooling on demand, deploy modular robots, and run AI-optimized production scheduling can serve shorter runs and more specialized designs while remaining geographically close to end customers. In effect, the United States can replace scale-driven arbitrage with speed, customization, and resilience. That is why we are at the inflection point for the great manufacturing reset.</p><p>Policy and infrastructure are beginning to support this transition. Federal programs such as Manufacturing USA and its associated network of advanced manufacturing institutes are working to diffuse next-generation production technologies across domestic firms and regions [2]. 
Investments in semiconductor fabrication, battery plants, and clean-energy hardware have already catalyzed billions of dollars in new onshore manufacturing commitments. The same capabilities that support large facilities can extend to mid-market and smaller manufacturers through shared tooling libraries, regional robotics integrators, and standardized digital design pipelines. Universities and community colleges can align curricula with this reset by emphasizing mechatronics, robotics programming, and design-for-additive principles that translate directly to a modern factory floor.</p><p>If the United States leans into this transition, the great manufacturing reset will not simply re-create legacy industrial capacity. It will establish a distributed network of automated, digitally coordinated micro-factories specializing in custom work, rapid prototyping, and short-run production. The strategic advantage will be the ability to move from concept to physical part in days instead of months, while retaining critical capabilities within domestic borders. The risk is that other regions may scale faster and capture the integrator role that coordinates robots, additive systems, and AI platforms across global supply chains. The next few years will determine whether the United States treats these technologies as incremental enhancements or as foundational infrastructure for a new manufacturing baseline. 
Ideally, this reset will create conditions for a new wave of startups delivering smaller manufacturing runs, bespoke development cycles, and entirely new product categories.</p><p>Things to consider:</p><ul><li><p>The economics of reshoring depend as much on automation and design speed as on wage differentials.</p></li><li><p>Policy support for advanced manufacturing will matter most where it connects directly to tooling, robotics, and workforce upskilling.</p></li><li><p>Custom, short-run production could become a core competitive advantage for regions that adopt additive and robotics early.</p></li><li><p>The integrators that connect robots, printers, and AI software may end up more powerful than any single hardware vendor.</p></li><li><p>Manufacturing resilience will increasingly be measured by how quickly domestic systems can reconfigure to new designs and shocks.</p></li></ul><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. 
Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Links I&#8217;m sharing this week!</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:178928392,&quot;url&quot;:&quot;https://www.a16z.news/p/you-can-just-read-sci-fi-25-books&quot;,&quot;publication_id&quot;:13145,&quot;publication_name&quot;:&quot;a16z&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!2PP_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a3f797-76cd-4cf2-80c5-92829b700f5a_256x256.png&quot;,&quot;title&quot;:&quot;You can just read 25 sci-fi books&quot;,&quot;truncated_body_text&quot;:&quot;A few weeks ago, we sent out our inaugural &#8220;You can just read 25 books&#8221; recommendation list, and today we&#8217;re back with another one. This one is from the a16z Infra team, and true to form, it also exists on Github, where you can contribute your own PRs&quot;,&quot;date&quot;:&quot;2025-11-18T15:03:13.150Z&quot;,&quot;like_count&quot;:101,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:15630273,&quot;name&quot;:&quot;Matt Bornstein&quot;,&quot;handle&quot;:&quot;mattbornstein&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e80741d4-1d28-4b29-adf4-ed86bf61e9aa_1000x1000.png&quot;,&quot;bio&quot;:&quot;a16z partner and AI enthusiast&quot;,&quot;profile_set_up_at&quot;:&quot;2025-11-14T21:56:41.913Z&quot;,&quot;reader_installed_at&quot;:null,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:null,&quot;paidPublicationIds&quot;:[],&quot;subscriber&quot;:null},&quot;primaryPublicationId&quot;:6968612,&quot;primaryPublicationName&quot;:&quot;Matt 
Bornstein&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://mattbornstein.substack.com&quot;,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://mattbornstein.substack.com/subscribe?&quot;}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.a16z.news/p/you-can-just-read-sci-fi-25-books?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!2PP_!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a3f797-76cd-4cf2-80c5-92829b700f5a_256x256.png" loading="lazy"><span class="embedded-post-publication-name">a16z</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">You can just read 25 sci-fi books</div></div><div class="embedded-post-body">A few weeks ago, we sent out our inaugural &#8220;You can just read 25 books&#8221; recommendation list, and today we&#8217;re back with another one. 
This one is from the a16z Infra team, and true to form, it also exists on Github, where you can contribute your own PRs&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">6 months ago &#183; 101 likes &#183; Matt Bornstein</div></a></div><div id="youtube2-K0_DUhg62e4" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;K0_DUhg62e4&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/K0_DUhg62e4?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Footnotes:</p><p>[1] Mesa, J. (2025, November 11). Russia &#8216;human&#8217; robot falls on stage during debut. Newsweek. <a href="https://www.newsweek.com/russia-human-robot-falls-stage-during-debut-11031104">https://www.newsweek.com/russia-human-robot-falls-stage-during-debut-11031104</a></p><p>[2] Manufacturing USA. (n.d.). Home. <a href="https://www.manufacturingusa.com/">https://www.manufacturingusa.com/</a></p>]]></content:encoded></item><item><title><![CDATA[Why a “combiner model” might someday work]]></title><description><![CDATA[Listen now | Thank you for tuning in to week 213 of the Lindahl Letter publication. A combiner model represents a critical shift away from the assumption that AI progress requires ever-larger single systems. Instead of training another trillion-parameter monolith, we can learn to combine many smaller, specialized models into a coherent whole.]]></description><link>https://www.nelsx.com/p/why-a-combiner-model-might-someday</link><guid isPermaLink="false">https://www.nelsx.com/p/why-a-combiner-model-might-someday</guid><dc:creator><![CDATA[Dr. 
Nels Lindahl]]></dc:creator><pubDate>Sat, 15 Nov 2025 00:00:57 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/178505668/28616d09643155bd5d16cdf784b7ebdd.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 213 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;Why a &#8216;combiner model&#8217; might someday work.&#8221;</p><p>Open models abound. Every week, new open-weight large language models appear on Hugging Face, adding to a massive archive of fine-tuned variants and experimental checkpoints. Together, they form a kind of digital wasteland of stranded intelligence. These models aren&#8217;t all obsolete; they&#8217;re simply sidelined because the community lacks effective open source tools to combine their specialized insights efficiently. The concept of a &#8220;combiner model&#8221; offers one powerful path to reclaim this lost potential. Millions of hours of training, billions of dollars in compute, and so much electricity have been spent. Sure, you can use distillation to capture outputs from one model in another, but a combiner model would be different: it overlays knowledge instead of extracting it.</p><p>A combiner model represents a critical shift away from the assumption that AI progress requires ever-larger single systems. Instead of training another trillion-parameter monolith, we can learn to combine many smaller, specialized models into a coherent whole. The central challenge lies in making these models truly interoperable: how do you merge or align their parameters, embeddings, or reasoning traces without degrading performance? The combiner model would act as a meta-learner, adapting, weighting, and reconciling information across independently trained systems, unlocking the latent knowledge already encoded in thousands of open weights.
Somebody at some point is going to make an agent that works on this problem and grows stronger by essentially eating other models.</p><p>This vision can be realized through at least three technical routes. The first involves weight-space merging. Techniques such as Model Soups and Mergekit show that when models share a common base, their weights can be effectively averaged or blended. More advanced methods, such as TIES-Merging, resolve sign conflicts between task vectors, and learned layer-wise coefficients can turn model blending into a trainable optimization process rather than a static recipe. In this view, the combiner model becomes a universal optimizer for reuse, synthesizing the gradients of many past experiments into a single, functioning network.</p><p>The second approach focuses on latent-space alignment. When models differ in architecture or tokenizer, their internal representations diverge. Even so, a smaller alignment bridge can learn to translate between their embedding spaces, creating a shared semantic layer, or semantic superposition. This allows, for example, a legal-domain model and a biomedical model to exchange information while their original knowledge weights remain frozen. The combiner learns the translation rules, effectively building a common interlingua for neural representations that connects thousands of isolated domain experts.</p><p>The third approach treats the combiner not as a merger but as a controller or orchestrator. In this design, the combiner dynamically decides which expert model to invoke, evaluates their outputs, and fuses the results through its own learned inference layer. This idea already appears in robust multi-agent frameworks. A true combiner model, or maybe a combiner agent, would internalize this orchestration as a core part of its reasoning process. Instead of running one model at a time, it would simultaneously select and synthesize outputs from many experts, producing complex, context-aware intelligence assembled on demand.
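</p><p>To make that controller route concrete, here is a minimal sketch of a combiner that selects experts and fuses their outputs. Everything in it is an illustrative placeholder invented for this sketch: the expert functions stand in for independently trained models, the keyword table stands in for a learned router, and the string join stands in for a learned fusion layer.</p>

```python
# Toy sketch of a combiner-as-controller: select experts, invoke them,
# and fuse their answers. All names here are invented for the sketch.
from typing import Callable

# Stub experts standing in for independently trained models.
EXPERTS: dict[str, Callable[[str], str]] = {
    "legal": lambda q: f"[legal view] {q}",
    "biomed": lambda q: f"[biomed view] {q}",
    "general": lambda q: f"[general view] {q}",
}

# Crude relevance signal; a real combiner would learn this routing.
KEYWORDS = {
    "legal": {"contract", "liability", "statute"},
    "biomed": {"protein", "dosage", "clinical"},
}

def route(query: str) -> list[str]:
    """Select every expert whose keywords appear, else fall back to general."""
    words = set(query.lower().split())
    chosen = [name for name, kws in KEYWORDS.items() if words & kws]
    return chosen or ["general"]

def combine(query: str) -> str:
    """Invoke the selected experts and fuse their answers."""
    return " | ".join(EXPERTS[name](query) for name in route(query))

print(combine("dosage limits written into a contract"))
```

<p>A learned version of this loop would swap the keyword table for a trained routing model and the join for an output-synthesis step, but the control flow stays the same: select, invoke, fuse.</p><p>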
This approach is the most immediately viable and is already being used in sophisticated production systems today.</p><p>If such systems mature, the economics of AI will fundamentally change. Rather than concentrating resources on a few massive, proprietary models, research will shift toward modular ecosystems built from reusable parts. Each fine-tuned checkpoint on Hugging Face will become a potential building block, not an obsolete artifact. The combiner would turn the open-weight landscape into an evolving lattice of knowledge, where specialization and reuse replace the endless cycle of frontier retraining. This vision is demanding, but the promise remains compelling: a world where intelligence is assembled, not hoarded; where the fragments of past experiments contribute directly to future understanding. The combiner model might not exist yet, but its underlying logic already dictates the future of open source AI.</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Links I&#8217;m sharing this week!</p><p>This is the episode with Sam Altman that everybody was talking about.   
</p><div id="youtube2-Gnl833wXRz0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Gnl833wXRz0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Gnl833wXRz0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div>]]></content:encoded></item><item><title><![CDATA[The edge of realized technology]]></title><description><![CDATA[Listen now | Don&#8217;t panic, we are still covering technology including quantum, robotics, and artificial intelligence. I&#8217;ll be writing about the intersection of technology and modernity until the singularity.]]></description><link>https://www.nelsx.com/p/the-edge-of-realized-technology</link><guid isPermaLink="false">https://www.nelsx.com/p/the-edge-of-realized-technology</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Sat, 08 Nov 2025 00:00:31 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/177745720/3b7cadc995d7e9b5eeecff5bd0c0f67e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 212 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;The edge of realized technology.&#8221;</p><p>Welcome to the start of season 5. Don&#8217;t panic, we are still covering advancing technology including quantum, robotics, and artificial intelligence within the Lindahl Letter. I&#8217;ll be writing about the intersection of technology and modernity until the singularity. For better or worse, modernity&#8217;s shadow will continue to be the edge of realized technology. We are on the path to seeing a bunch of different technologies end up being realized in the not so distant future. 
That is why I&#8217;m so focused on the path toward realizing robotics, quantum, and agentic. That is where season 5 of the Lindahl Letter is going to pick up and start to dig into those topics at the edge of realized technology. To that end, I started to make a graphic of the timeline of major financial bubbles and extended it out to emerging technologies expected to deliver before 2045 [1]. You can modify the Python visualization code for this one if you want, I shared an executable version of it on GitHub.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1ED2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0fa459b-17e8-4ade-873f-efe529bbae6e_1600x808.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1ED2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0fa459b-17e8-4ade-873f-efe529bbae6e_1600x808.png 424w, https://substackcdn.com/image/fetch/$s_!1ED2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0fa459b-17e8-4ade-873f-efe529bbae6e_1600x808.png 848w, https://substackcdn.com/image/fetch/$s_!1ED2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0fa459b-17e8-4ade-873f-efe529bbae6e_1600x808.png 1272w, https://substackcdn.com/image/fetch/$s_!1ED2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0fa459b-17e8-4ade-873f-efe529bbae6e_1600x808.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!1ED2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0fa459b-17e8-4ade-873f-efe529bbae6e_1600x808.png" width="1456" height="735" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e0fa459b-17e8-4ade-873f-efe529bbae6e_1600x808.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:735,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1ED2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0fa459b-17e8-4ade-873f-efe529bbae6e_1600x808.png 424w, https://substackcdn.com/image/fetch/$s_!1ED2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0fa459b-17e8-4ade-873f-efe529bbae6e_1600x808.png 848w, https://substackcdn.com/image/fetch/$s_!1ED2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0fa459b-17e8-4ade-873f-efe529bbae6e_1600x808.png 1272w, https://substackcdn.com/image/fetch/$s_!1ED2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0fa459b-17e8-4ade-873f-efe529bbae6e_1600x808.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Within that visualization I started to sketch out the next 10 most likely technologies we will see realized. Within each path toward realization is where private investment and ultimately retail investors will crowd into the market before it gets commoditized to the point where the initial leaders in the space have no first mover advantage and some type of bubble ensues. That does not mean these things won&#8217;t be game changing. I&#8217;m just expecting some type of financial crowding followed by pressure against expected profits that won&#8217;t be realized. Resulting from that would be some type of financial bubble which might very well be led by a huge windfall of some sort. 
People made money on tulips and pepper before those markets crashed out.</p><ol><li><p>2026: AI Bubble (Tech)</p></li><li><p>2028: Metaverse and XR Bubble (Tech/Speculative)</p></li><li><p>2029: Robotics Bubble (Tech)</p></li><li><p>2031: Climate Tech Bubble (Climate Tech)</p></li><li><p>2032: Space Economy Bubble (Space Economy)</p></li><li><p>2033: Biotech and Longevity Bubble (Biotech/Longevity)</p></li><li><p>2034: Synthetic Biology and Food Tech Bubble (Synthetic Bio/Food Tech)</p></li><li><p>2035: Quantum Bubble (Tech)</p></li><li><p>2035: Neurotech and BCI Bubble (Neurotech/BCI)</p></li><li><p>2040: Fusion Energy Bubble (Energy)</p></li></ol><p>These edges of technology realization might not be in the right order or tied to exactly the right year, but directionally I think this list will prove to be an accurate prediction of when each technology will be achieved and we will see meaningful changes to modernity. Futurist considerations abound for what might end up happening. This was my swing at predicting what&#8217;s next. Only time will tell whether it was an accurate swing or whether some other emerging technology will disrupt it.</p><p>Going forward, you are going to see my weekly writing efforts split into four distinct buckets. My general weekly think pieces will stay here within the relative safety of the standard Lindahl Letter publication; writing about civics, civility, and civil society will live over on the <a href="https://www.civichonors.com/">Civic Honors</a> domain; blogging will happen within the <a href="https://www.nelslindahl.com/">Functional Journal</a>; and my hope is to resume daily posting back over on the <a href="https://www.nels.ai">nels.ai</a> domain.
Ideally, enough content will be generated in the major domains that only a small amount of blogging will occur. Going forward, it is far better to produce meaningful work than to complete passages of extended navel-gazing. Sure, being a reflective practitioner and blogging has its place, but sometimes all that writing about the process ends up being more circular than forward looking.</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Links I&#8217;m sharing this week!</p><div id="youtube2-4KzuHeuPsaI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;4KzuHeuPsaI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/4KzuHeuPsaI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Footnotes:</p><p>[1] <a href="https://github.com/nelslindahlx/Data-Analysis/blob/master/TimelineofMajorFinancialBubbles.ipynb">https://github.com/nelslindahlx/Data-Analysis/blob/master/TimelineofMajorFinancialBubbles.ipynb</a></p>]]></content:encoded></item><item><title><![CDATA[Spooky Halloween edition: When Satoshi-Era Wallets Wake Up]]></title><description><![CDATA[Listen now | Thank you for tuning in to week 211 of the Lindahl Letter publication.]]></description><link>https://www.nelsx.com/p/when-satoshi-era-wallets-wake-up</link><guid isPermaLink="false">https://www.nelsx.com/p/when-satoshi-era-wallets-wake-up</guid><dc:creator><![CDATA[Dr. 
Nels Lindahl]]></dc:creator><pubDate>Fri, 31 Oct 2025 23:01:20 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/177204865/9fedcdcc0e4d23671b14f0640487354a.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Happy Halloween everybody! Thank you for tuning in to week 211 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;When Satoshi-Era Wallets Wake Up.&#8221;</p><p>Seriously, Bitcoin is weird. It has an enigmatic and anonymous founder. The origin story of how this cryptocurrency came to be is pretty much ineffable. Roughly a third of all bitcoin has never moved [1]. These dormant or maybe abandoned coins shape both the scarcity and the psychology of the network. Now, some of those early wallets are coming alive again, and their reawakening reveals a deeper story about profit, security, and the bleeding edge of quantum cryptography. Maybe some of these cutting-edge quantum computers are being used to run Shor&#8217;s algorithm and derive the private keys behind some of these older wallets [2]. That seems more likely to me than somebody remembering they had some old bitcoin after a decade and moving it around. We could write a really spooky short story about people waking up to old bitcoin wallets getting cracked by quantum computers running Shor&#8217;s algorithm. That is the type of short story that could move from fiction to non-fiction with one scientific breakthrough. It&#8217;s even possible it has already started to happen. And by possible, I mean it probably already is happening.</p><p>Speculation aside, it&#8217;s true that an estimated thirty percent of all mined bitcoin has been untouched for more than five years [3]. That is shocking. About seventeen percent of bitcoins have not moved in a decade [4].
Those figures mean that even as mining nears completion, a huge fraction of the network&#8217;s supply remains functionally absent or potentially abandoned. This long-term dormancy amplifies Bitcoin&#8217;s scarcity, turning lost or forgotten coins into a silent deflationary force. Yet in 2025, something shifted. Several ancient wallets, first active during Bitcoin&#8217;s infancy, have begun to stir after twelve to fourteen years of silence. Their movements are rare, deliberate, and full of meaning.</p><p>Some of these wallets trace back to 2010 and 2011, a time when bitcoin traded for less than a dollar. In July, eight early addresses moved roughly eighty thousand bitcoin in a coordinated set of transfers [5]. That is wealth that once totaled a few thousand dollars but is now worth billions. Somebody made some shocking profits. Later, a miner-era wallet from 2010 moved four hundred bitcoin after twelve years of dormancy [6]. In October, an early 2011 wallet that had accumulated four thousand bitcoin sent a small test transaction of 150 coins before going quiet again [7]. None of these events caused market disruption, but each drew immediate attention. Every time an ancient wallet moves, it feels like a fragment of Bitcoin&#8217;s early history is stepping into the present.</p><p>Why are these early coins moving now? The first reason is straightforward economics. With bitcoin surpassing one hundred thousand dollars, even small transfers yield generational wealth. Another reason is technological maturity. Over the past decade, wallet recovery methods have improved, and holders who once misplaced keys or old software backups can now retrieve them. Security has also evolved. Many early wallets were built with primitive address types that expose their public keys, leaving them theoretically vulnerable to a future cryptographic breakthrough. This leads to the third and most forward-looking motivation: the quantum threat. That is the part I&#8217;m super curious about. 
Some of the larger quantum systems that I shared in my leaderboard could be active here, but we don&#8217;t really know.</p><p>Quantum computing is still developing, but progress is steady. Bitcoin relies on elliptic-curve digital signatures that would be mathematically vulnerable to sufficiently powerful quantum machines. The earliest wallets used formats that make this risk more immediate, because they reveal public keys on-chain once a transaction occurs. If quantum computing advances far enough, those exposed keys could allow attackers to derive private keys and spend the coins. Experts estimate that a quarter of all existing bitcoin resides in such legacy formats. That reality has not escaped early holders. Some of the recent awakenings may reflect quiet migrations: cold coins from classic wallets being moved to SegWit, multi-signature, or even post-quantum-resistant wallets to protect them from future compromise. These reactivations might not be about profit at all. They could be acts of defensive foresight from people who understand how close technology may be to challenging the foundations of digital security.</p><p>There are also practical motivations. Estate planning, custodial audits, and consolidation are all normal parts of managing large digital holdings. After more than a decade, early miners are updating their records, creating inheritance plans, and transferring assets to institutional custodians. The act of moving coins from an old address is sometimes less a financial maneuver and more a generational effort to ensure those digital fortunes survive their original owners.</p><p>Each time these wallets awaken, the community reacts with fascination and unease. The first question is always the same: could this be Satoshi Nakamoto? So far, none of the reactivated wallets match known Satoshi mining patterns, but the mythology persists. Beyond the curiosity, there&#8217;s the market anxiety that large moves might signal selling pressure.
Yet most transfers have not flowed into exchanges. They seem measured, intentional, and quiet. In a sense, this is the opposite of panic: the calm movement of old wealth into modern systems that have better security.</p><p>What we are witnessing is also a potential generational handoff. The early experimenters who mined coins on laptops are now confronting questions of succession and security that mirror those of traditional wealth. Their coins, once symbols of rebellion against institutions, are being integrated into structured estates, custodial frameworks, and long-term trusts. As these coins move, they pass through new layers of infrastructure and oversight, becoming part of a global financial fabric that looks very different from the anarchic beginnings of Bitcoin.</p><p>As quantum computing advances and Bitcoin&#8217;s price continues to rise, more early wallets are likely to move. Some of those transfers will be tests or migrations; others may represent the quiet liquidation of immense fortunes. Watching these awakenings provides a rare link between the network&#8217;s origin story and its future. 
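</p><p>That exposure question ultimately comes down to script formats, which can be roughly triaged from an address string alone. The sketch below is a simplified illustration I put together for this point, not a real analysis tool: true quantum exposure depends on whether a public key is actually visible on-chain, and original pay-to-public-key outputs have no conventional address at all, so a prefix check cannot catch them.</p>

```python
# Rough triage of Bitcoin mainnet address strings by script family.
# Simplified illustration: real exposure analysis needs on-chain data
# about spent outputs and revealed public keys, which this ignores.

def address_type(addr: str) -> str:
    """Classify a mainnet address string by its prefix."""
    if addr.startswith("bc1p"):
        return "taproot (P2TR): output public key is on-chain by design"
    if addr.startswith("bc1"):
        return "native segwit (P2WPKH/P2WSH): key hidden until first spend"
    if addr.startswith("3"):
        return "script hash (P2SH): script hidden until first spend"
    if addr.startswith("1"):
        return "legacy (P2PKH): key hidden until first spend"
    return "unknown format"

# The genesis block's coinbase address is a well-known legacy example.
for addr in ("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa", "bc1pexample", "bc1qexample"):
    print(addr, "->", address_type(addr))
```

<p>The ordering matters: the bc1p check has to run before the general bc1 check, since Taproot addresses share the bech32 prefix with native SegWit.</p><p>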
The early holders are not gone. Some of them may have abandoned their bitcoins or lost access, but others are simply preparing for the next phase of digital permanence, one where Bitcoin must prove its resilience against both time and the very real likelihood of advancements in quantum-based technology.</p><p>Summary of the key points on the quantum threat to bitcoin:</p><ul><li><p>Quantum computing is advancing, and although no system has yet cracked the relevant cryptography used by Bitcoin, experts estimate that within the next 5-10 years some wallets using legacy address types could become vulnerable.</p></li><li><p>A significant fraction of Bitcoin&#8217;s supply (in some estimates around 25%) is held in addresses whose public key has been exposed (or in older formats such as pay-to-public-key) and is thus considered more at risk from a &#8220;Q-day&#8221; style attack.</p></li><li><p>The network is already responding: developers have floated proposals (e.g., a draft BIP&#8209;360) to freeze coins in vulnerable legacy addresses and force migration to quantum-resistant formats, with multi-phase transition plans.</p></li><li><p>The fact that early &#8220;Satoshi-era&#8221; wallets are waking up now may reflect not just profit or estate-planning motives but also pre-emptive security behavior by holders who recognize the quantum risk and wish to migrate coins to safer custody.</p></li><li><p>From a scarcity and supply-dynamics perspective, the quantum threat adds another layer of complexity: dormant coins may not just be inert; they may be targeted or moved due to security fears, altering how one thinks about long-term supply, holder behavior, and concentration risk.</p></li></ul><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing.
Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Links I&#8217;m sharing this week!</p><div id="youtube2-lONyteDR4XE" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;lONyteDR4XE&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/lONyteDR4XE?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div id="youtube2-I8VUN141MjU" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;I8VUN141MjU&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/I8VUN141MjU?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Footnotes:</p><p>[1] Van Straten, J. (2023, November 15). <em>Record 70% of Bitcoin supply lies dormant for a year or more</em>. CryptoSlate. <a href="https://cryptoslate.com/insights/record-70-of-bitcoin-supply-lies-dormant-for-a-year-or-more/">https://cryptoslate.com/insights/record-70-of-bitcoin-supply-lies-dormant-for-a-year-or-more/</a></p><p>[2] Chojecki, P. (2025, May 20). <em>Quantum computers threat to Bitcoin: Q-Day, post-quantum cryptography and Bitcoin</em>. Medium. <a href="https://pchojecki.medium.com/quantum-computers-threat-to-bitcoin-e1b57b0da2aa">https://pchojecki.medium.com/quantum-computers-threat-to-bitcoin-e1b57b0da2aa</a></p><p>[3] AInvest. (2025, July 5). <em>30.4% of Bitcoin supply dormant for over five years</em>. 
<a href="https://www.ainvest.com/news/30-4-bitcoin-supply-dormant-years-2507/">https://www.ainvest.com/news/30-4-bitcoin-supply-dormant-years-2507/</a></p><p>[4] Crypto News. (2025, June 18). <em>Over 3.4 million BTC, more than 17% of the total supply, have not moved in at least a decade</em>. <a href="https://crypto.news/bitcoin-dormant-supply-growth-outpaces-issuance-2025/">https://crypto.news/bitcoin-dormant-supply-growth-outpaces-issuance-2025/</a></p><p>[5] Malwa, S. (2025, July 5). <em>Eight Bitcoin wallets move 80,000 BTC in largest ever &#8216;Satoshi-era&#8217; transfers</em>. CoinDesk. <a href="https://www.coindesk.com/markets/2025/07/05/eight-bitcoin-wallets-move-80000-btc-in-largest-ever-satoshi-era-transfers">https://www.coindesk.com/markets/2025/07/05/eight-bitcoin-wallets-move-80000-btc-in-largest-ever-satoshi-era-transfers</a></p><p>[6] Kumari, I. (2025, September 29). <em>Bitcoin address from miner era reactivates to shift 400 BTC &#8211; Report</em>. AMBCrypto. <a href="https://ambcrypto.com/bitcoin-address-from-miner-era-reactivates-to-shift-400-btc-report/">https://ambcrypto.com/bitcoin-address-from-miner-era-reactivates-to-shift-400-btc-report/</a></p><p>[7] Van Straten, J. (2025, October 24). <em>Dormant Bitcoin Whale With $442 M Awakens for First Time in 14 Years Amid Quantum Fears</em>. CoinDesk. 
<a href="https://www.coindesk.com/markets/2025/10/24/dormant-bitcoin-whale-with-usd442m-awakens-for-first-time-in-14-years-amid-quantum-fears">https://www.coindesk.com/markets/2025/10/24/dormant-bitcoin-whale-with-usd442m-awakens-for-first-time-in-14-years-amid-quantum-fears</a></p>]]></content:encoded></item><item><title><![CDATA[AI Is Burning Through Graphics Cards]]></title><description><![CDATA[Listen now | The clock is ticking on graphics cards being used for AI inference.]]></description><link>https://www.nelsx.com/p/ai-is-burning-through-graphics-cards</link><guid isPermaLink="false">https://www.nelsx.com/p/ai-is-burning-through-graphics-cards</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Fri, 24 Oct 2025 23:01:25 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/176418989/742d48f5fd19927a749b3fa83a6a0214.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 210 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;AI Is Burning Through Graphics Cards.&#8221;</p><p>Generational wealth is being invested into data centers for AI. It&#8217;s so prevalent that you hear about it on the nightly news and municipalities are dealing with the power demands. The clock is ticking on graphics cards being used for AI inference. The current generation of GPUs was never designed to run around the clock under inference loads. These chips were originally built for bursts of rendering, not continuous model execution at scale. What we are seeing now is an industry trying to stretch gaming hardware into a role it was never meant to fill. The result is heat, power consumption, and a ticking clock based on the inevitable wear.</p><p>Each graphics card has a limited operational lifespan. These are not like bricks being used to build a house; they are just expensive computer hardware. 
The more intensive the workloads, the shorter that lifespan becomes. Fans fail, thermal paste dries out, and the silicon itself begins to degrade. Inference tasks, particularly when stacked across large fleets of GPUs, magnify this effect. The relentless pace of AI workloads accelerates the failure curve, turning once-premium cards into temporary consumables. I&#8217;m actually really curious what is going to happen to all of them at the end of this cycle. A secondary market does exist for these used devices, and companies like Iron Mountain will help data centers with secure disposal.</p><p>By most reasonable estimates, there are now between 3.5 and 4.5 million NVIDIA data-center GPUs actively deployed in production environments. Hyperscalers such as Meta, Microsoft, and Google each operate hundreds of thousands of units, while smaller data centers fill out the rest of the global total. Each GPU represents a remarkable amount of compute density, but also a constant thermal and economic liability. Even with optimized cooling, sustained inference loads drive high thermal stress and power draw that shorten component life. These systems were never meant to run 24 hours a day, 365 days a year.</p><p>Under heavy duty cycles, many GPUs experience significant degradation within one to three years of continuous operation. The warranties often match that window, which reflects a design expectation rather than coincidence. Silicon aging and persistent thermal cycling both take their toll. Even when the hardware technically survives longer, it becomes economically obsolete as new architectures quickly double efficiency and throughput. The pace of improvement ensures that by 2027 or 2028, most of today&#8217;s fleet will either be retired, resold, or relegated to low-priority inference tasks. Right now, TSMC would have to manufacture new chips to replenish this fleet of GPUs, which would be outrageously expensive. 
Manufacturing teams at both NVIDIA and TSMC could be looking at a huge impending need for production, or a shift to a new type of technology.</p><p>That replacement cycle has massive implications. The cost of refreshing millions of GPUs every few years is enormous, and the environmental impact of manufacturing and disposing of that much silicon is even harder to ignore. As AI inference continues to scale, this churn becomes unsustainable. Companies are already exploring purpose-built accelerators, ASICs, and FPGAs that can deliver better efficiency and longer service life. These designs aim to handle continuous inference without the same thermal or aging limitations that plague graphics cards.</p><p>Sustainability will define the next phase of AI infrastructure. The transition away from general-purpose GPUs is underway, but what comes after silicon remains uncertain. Research into photonic computing, quantum processors, and neuromorphic architectures offers glimpses of what a post-GPU world might look like. Each of these alternatives seeks to break free from the limits of traditional chips while extending useful lifespans. The next leap in AI hardware will not be measured by sheer speed, but by how well it can endure the relentless demands of inference at scale.</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. 
Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Links I&#8217;m sharing this week!</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:176777618,&quot;url&quot;:&quot;https://garymarcus.substack.com/p/is-vibe-coding-dying&quot;,&quot;publication_id&quot;:888615,&quot;publication_name&quot;:&quot;Marcus on AI&quot;,&quot;publication_logo_url&quot;:null,&quot;title&quot;:&quot;Is vibe coding dying?&quot;,&quot;truncated_body_text&quot;:&quot;Remember how in October and in March I told you that vibe coding &#8212; in the sense of amateurs using large language models to write code to &#8220;build products that would have previously required teams of engineers&#8221; &#8212; would never be remotely reliable? And that such tools were fine for demos but not for complex apps in the real world? And that the code they wro&#8230;&quot;,&quot;date&quot;:&quot;2025-10-22T11:23:33.664Z&quot;,&quot;like_count&quot;:258,&quot;comment_count&quot;:123,&quot;bylines&quot;:[{&quot;id&quot;:14807526,&quot;name&quot;:&quot;Gary Marcus&quot;,&quot;handle&quot;:&quot;garymarcus&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Ka51!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F8fb2e48c-be2a-4db7-b68c-90300f00fd1e_1668x1456.jpeg&quot;,&quot;bio&quot;:&quot;Scientist, author and entrepreneur, known as a leading voice in AI. 
Six books including The Algebraic Mind, Rebooting AI, and Taming Silicon Valley; NYU Professor Emeritus.&quot;,&quot;profile_set_up_at&quot;:&quot;2022-05-14T14:01:17.198Z&quot;,&quot;reader_installed_at&quot;:&quot;2022-05-14T13:59:03.190Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:830179,&quot;user_id&quot;:14807526,&quot;publication_id&quot;:888615,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:888615,&quot;name&quot;:&quot;Marcus on AI&quot;,&quot;subdomain&quot;:&quot;garymarcus&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;\&quot;Marcus has become one of our few indispensable public intellectuals. The more people read him, the better our actions in shaping Al will be.\&quot;\n- Kim Stanley Robinson, author of Ministry for the Future&quot;,&quot;logo_url&quot;:null,&quot;author_id&quot;:14807526,&quot;primary_user_id&quot;:14807526,&quot;theme_var_background_pop&quot;:&quot;#EA410B&quot;,&quot;created_at&quot;:&quot;2022-05-14T14:09:01.903Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Gary Marcus&quot;,&quot;founding_plan_name&quot;:&quot;Founding 
Member&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:null,&quot;is_personal_mode&quot;:false}}],&quot;twitter_screen_name&quot;:&quot;GaryMarcus&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:1000,&quot;status&quot;:{&quot;bestsellerTier&quot;:1000,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;bestseller&quot;,&quot;tier&quot;:1000},&quot;paidPublicationIds&quot;:[],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://garymarcus.substack.com/p/is-vibe-coding-dying?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><span></span><span class="embedded-post-publication-name">Marcus on AI</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Is vibe coding dying?</div></div><div class="embedded-post-body">Remember how in October and in March I told you that vibe coding &#8212; in the sense of amateurs using large language models to write code to &#8220;build products that would have previously required teams of engineers&#8221; &#8212; would never be remotely reliable? And that such tools were fine for demos but not for complex apps in the real world? 
And that the code they wro&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">6 months ago &#183; 258 likes &#183; 123 comments &#183; Gary Marcus</div></a></div><div id="youtube2-t74ClffSUW0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;t74ClffSUW0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/t74ClffSUW0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Social media stopped being social]]></title><description><![CDATA[Listen now | Thank you for tuning in to week 209 of the Lindahl Letter publication.]]></description><link>https://www.nelsx.com/p/social-media-stopped-being-social</link><guid isPermaLink="false">https://www.nelsx.com/p/social-media-stopped-being-social</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Fri, 17 Oct 2025 23:00:23 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/175895583/8e4cc3c151246338eb205c84bd77dae7.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 209 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;Social media stopped being social.&#8221;</p><p>Before we get going this week. I need to provide an update about last week&#8217;s post. I take full responsibility, as the principal writer here, that last week my writing efforts were just not up to par within the 208th Lindahl Letter publication. You have come to expect better from me and last week I just delivered a dud of a post. 
It&#8217;s the first post in a long time that actively drove people to leave the Lindahl Letter. It&#8217;s pretty easy to see the signal within the noise when something was bad enough to drive people away, and I take responsibility for delivering that subpar effort.</p><p>That being noted, let&#8217;s pivot back to the main topic at hand related to social media.</p><p>I&#8217;m not sure if social media was ever really about togetherness and being social. Those are qualities I want to ascribe to it after the fact. Let&#8217;s blame it on nostalgia. Communities tend to align with place, interest, or circumstance. Certainly, highly focused online communities targeted at a distinct audience probably work. Later, in a different essay, it might be worth digging into the pockets of working online communities. That side of the coin, however, is not the focus of this missive.</p><p>Things were different back when Twitter arrived in 2006 and ultimately became popular during South by Southwest in 2007. During the initial development and discovery of these social media sharing applications, things were different, and maybe that newness is now something to be nostalgic about. Social media today is fragmented; it stopped being social the moment algorithms learned to predict what would hold our attention better than we could direct it ourselves.</p><p>What started the social media ball rolling as a digital gathering of friends slowly transformed into a system of engineered consumption. The feed no longer reflects our relationships. It reflects what the platform believes will keep us scrolling. In the process, the human layer was optimized out of existence. I am hoping the Substack experience ends up being different. Right now, Substack is really my only active social media platform. It&#8217;s full of actual readers and writers for the most part. 
I&#8217;m trying to get into the swing of using Substack Notes, but that just seems to be an ongoing process of figuring it out. Previously, I tried to get into posting on Bluesky, and I&#8217;ll admit that during Colorado Avalanche games it did feel like some level of community existed. Outside of game time, I just never really got much out of the Bluesky experience.</p><p>Let&#8217;s take a step back from where we are now to consider history for a moment. Things were different for the first-wave adopters. The first generation of social networks was built around connection. You followed people you knew, saw what they were doing, and commented because you cared. The platforms of today are not built for connection; instead of being organized around community, they are built for amplification. The more content flows, the more data moves, and the more ads get served. The mechanics of community were replaced by the logic of engagement.</p><p>That shift changed the culture. Ultimately, it spawned the influencer movement. Maybe it&#8217;s a moment, or it could be a watershed change away from public intellectuals toward something more product-centric. People began curating identities instead of sharing moments. Every post became a performance. Every response was an opportunity for algorithmic reinforcement. What once felt like a conversation now feels like an audition. Social validation metrics turned communication into competition. The ultimate winners are the people who ended up making a career within this new flow of attention online.</p><p>As that dynamic took hold, the real social behavior moved into the shadows. Private group chats, invite-only communities, and niche networks quietly took over the role that public timelines once held. The visible web is now dominated by content farms and brand influencers. The meaningful conversations happen elsewhere, often out of reach of recommendation systems. 
What used to feel like a town square has become a noisy digital strip mall.</p><p>Social networks have become media networks. In some ways they are just the next generation of broadcast television or radio, only more targeted and often a lot more divisive. They are not spaces for dialogue but for distribution. Every interaction is mediated through a system that values attention over authenticity. That is why the average user now feels less connected than ever, even as they scroll through an endless feed of &#8220;content.&#8221; The core function of social media has inverted. It no longer connects people directly; instead, it connects people to platforms.</p><p>We may be entering a post-social era online. Connection is returning to smaller spaces: group chats, email lists, federated platforms, and direct exchanges. The large-scale, public-facing feed is collapsing under the weight of its own incentives. Maybe that&#8217;s the natural end of a system built on attention rather than empathy. What comes next may not look like social media at all. It might look more like correspondence. The strange part is that most users know this. We can feel the shift. We see fewer updates from friends, fewer real conversations, and more noise disguised as engagement. The feedback loop is obvious, but breaking free from it is hard. Every design choice keeps us tethered to the cycle. The system runs on our participation, but not our connection.</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!</p><p>Links I&#8217;m sharing this week!</p><p>White, M. (2025, October 17). <em>Anatomy of a crypto meltdown</em>. Citation Needed. 
Retrieved from <a href="https://www.citationneeded.news/anatomy-of-a-crypto-meltdown/">https://www.citationneeded.news/anatomy-of-a-crypto-meltdown/</a></p><p>The Vergecast. (2024, October 17). <em>AI can&#8217;t even turn on the lights | The Vergecast</em> [Video]. YouTube.</p><div id="youtube2-Voqk5FZpaZk" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Voqk5FZpaZk&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Voqk5FZpaZk?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>SearchParty. (2024, April 12). <em>The big flaw in Trump&#8217;s AI plan</em> [Video]. YouTube.</p><div id="youtube2-Nq-Faw_ENEQ" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Nq-Faw_ENEQ&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Nq-Faw_ENEQ?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Nathan Labenz &amp; Erik Torenberg. (2024, March 8). <em>Is AI slowing down? Nathan Labenz on GPT-5, progress and predictions</em> [Video]. 
YouTube.</p><div id="youtube2-nkmPNvAU49Q" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;nkmPNvAU49Q&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/nkmPNvAU49Q?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div>]]></content:encoded></item><item><title><![CDATA[Building with constant model churn]]></title><description><![CDATA[Listen now | Thank you for tuning in to week 208 of the Lindahl Letter publication.]]></description><link>https://www.nelsx.com/p/building-with-constant-model-churn</link><guid isPermaLink="false">https://www.nelsx.com/p/building-with-constant-model-churn</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Fri, 10 Oct 2025 23:00:46 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/175754018/345f085c5b42efc66e592e3ce1e74548.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<blockquote><p>Day after release update: I guess it was the 208th post where we hit the proverbial wall with a dud of a post. This post, in retrospect, turned out to be one of my weaker efforts. I thought it was a strong take about dealing with the rate of change in model development, but it was just not focused or targeted enough to deliver quality insights.</p></blockquote><p>Thank you for tuning in to week 208 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;Building with constant model churn.&#8221;</p><p>Developers have historically spent a lot of time patching software in response to vulnerabilities, edge cases, and performance issues. 
All this vibe-coded content, and the things built on top of models, are not getting patches to make them better going forward. You may get a new release or a new model, but a patch to save you from vulnerabilities is not being developed and is not on the way. That is the nature of this moment in modern development. The ecosystem of dependencies is real, and the pace of model development has created an unusual environment for anyone trying to build durable systems.</p><p>You cannot really hot-swap models within production systems. That just does not work. In the last five years, we have seen large language model releases from OpenAI, Anthropic, Google, Meta, Mistral, Cohere, and several open-source groups. Each iteration has been faster, larger, and sometimes more efficient than the one before. What has not been stable is the interface between models and the systems people build around them. Even seemingly small changes in context window size, output quality, or API availability ripple outward and cause redesigns, migrations, and sudden pivots. Sometimes these changes happen with no warning whatsoever.</p><p>For builders, this creates a paradox. The potential upside of adopting a newer model is undeniable: better reasoning, lower costs, and expanded capabilities. At the same time, the risk of betting on an API or framework that may be deprecated in months is a constant concern. Some developers chase every release, weaving the newest model into their applications as quickly as possible. Others step back, building abstractions and wrappers that allow for switching models without disrupting core workflows. Neither path offers complete insulation from this wave of almost continuous churn.</p><p>The history of technology offers parallels. Software engineers have long had to deal with shifting operating systems, frameworks, and libraries. What makes this moment different is the velocity of change and the sheer dependency of emerging applications on model behavior. 
The model is not just another dependency, it is the foundation of the system. When that foundation shifts, everything built on top of it must be reconsidered.</p><p>There is also a deeper strategic question. Should builders lean into constant change and accept churn as a feature of the landscape? Or should they try to design in ways that minimize dependency, focusing more on proprietary data pipelines, unique integrations, and distinctive user experiences? Both strategies reflect an awareness that stability is not guaranteed in this ecosystem. The companies that endure will be the ones that treat churn not as an annoyance but as a design constraint.</p><p>Things to consider:</p><ul><li><p>The lack of patching for AI models makes long-term maintenance difficult.</p></li><li><p>Model churn introduces structural instability into modern systems.</p></li><li><p>Abstraction layers help, but they cannot prevent cascading change.</p></li><li><p>Treating churn as a core design constraint is a pragmatic approach.</p></li><li><p>Builders must balance innovation speed with long-term stability.</p></li></ul><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!</p>]]></content:encoded></item><item><title><![CDATA[Enforcing AI standards without exception]]></title><description><![CDATA[Listen now | Thank you for tuning in to week 207 of the Lindahl Letter publication.]]></description><link>https://www.nelsx.com/p/enforcing-ai-standards-without-exception</link><guid isPermaLink="false">https://www.nelsx.com/p/enforcing-ai-standards-without-exception</guid><dc:creator><![CDATA[Dr. 
Nels Lindahl]]></dc:creator><pubDate>Fri, 03 Oct 2025 23:00:44 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/174942629/df975c03853d0215130b427ceaa35322.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 207 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, &#8220;Enforcing AI standards without exception.&#8221;</p><p>Standards are something we need to spend more time talking about. That is a general statement and not a special argument. Years ago, I actually witnessed a physical desk sign at an office that said, &#8220;We either have standards or we don&#8217;t.&#8221; It&#8217;s not a great mystery how that particular leader felt about standards. That type of adherence to standards is not all that common. In our LLM-sponsored, prompt-first-and-ask-questions-later world, people just keep prompting. Allowing models to just keep generating without standards is how we ended up where we are right now. Those tokens are being burnt at prodigious rates. All of those burnt tokens yield nothing reusable or effectively carried forward. Mostly they are highly siloed outputs to an audience of one. They are all spent, and the electricity and compute used will never be recovered. They are just an expense on somebody else&#8217;s balance sheet.</p><p>Everything about the open web is pretty much in rapid decline. I would argue that enforcing standards without exception is the only way the end user can truly control the agenda or hope to manage the ultimate outcome when working with AI. It might even help us save the internet. That cause, however, might have already been lost. One of the great ironies of generative AI is that it demands more discipline from the human interacting with it to get quality outputs, not less. 
Sure, prompt engineering has become a hands-on-the-keyboard kind of sport, but my best guess is that everything ends up being more conversational in the end. You would expect a machine to be the enforcer of rules, to deliver outputs with mechanical precision. Instead, the responsibility ultimately falls back on the end user to enforce standards at every turn. The system will generate endlessly, but unless you control the agenda, it will wander away from the very standards that define your work. A lot of people are also just creating AI slop and, potentially worse, AI-generated workslop.</p><p>This is not a trivial annoyance. It is the defining challenge of using AI effectively. You might tell a system: no em dashes, strict numeric citations, Substack-compatible footnotes. And for a moment, it will comply. Then, in the next draft, it slips back into its defaults. Suddenly the citations are misplaced, the formatting is broken, or the output is square when you clearly require 14:10. It doesn&#8217;t matter how many times you&#8217;ve said it; for some reason, the system&#8217;s memory for discipline is shallow. If you do not enforce the standard without exception, the drift takes over. For an organization, that can mean tens or even thousands of drifting lines of argument and fragmented results.</p><p>That is why the end user must step into a role that looks less like automation&#8217;s promise and more like quality assurance. You are not simply a writer or a collaborator. You are the auditor, the rule enforcer, the one who stops the drift. We either have standards or we don&#8217;t. Allow one exception, and you have taught the system that exceptions are acceptable. Enforce the standard every time, and you create a boundary strong enough to shape consistent results.</p><p>This relentless enforcement becomes the core of collaboration. 
Without it, the system defaults to &#8220;plausible&#8221; instead of &#8220;correct,&#8221; &#8220;close enough&#8221; instead of &#8220;aligned.&#8221; You cannot rely on the machine to protect the integrity of your work or really even to produce solid, consistent outputs. That responsibility is yours. The human must guard the agenda with vigilance and insistence. Outside of ruthlessly enforcing standards without exception, the path forward is just full of slop.</p><p>Over time, this process builds more than consistency. It builds identity. A body of work that holds together across hundreds of posts or thousands of outputs does so because the user enforced the standards that give it coherence. We may very well look at the internet archives from before all the LLM training as untainted and view everything after that point with skepticism. I&#8217;m not arguing that everything in that first tranche of content was high quality or even accurate, but it was before the models. Without that enforcement, the work would fracture into a mix of styles, structures, and shortcuts. Enforcing standards without exception is exhausting, but it is also the only way to produce work that reflects your agenda rather than the system&#8217;s defaults.</p><p>Things to consider:</p><ul><li><p>AI will always drift back toward its defaults unless the user enforces rules consistently.</p></li><li><p>The promise of automation is inverted: the human enforces discipline, not the machine.</p></li><li><p>Exceptions teach the system the wrong lesson and erode consistency.</p></li><li><p>Vigilant enforcement is what turns scattered outputs into a coherent body of work.</p></li><li><p>Control of the agenda belongs to the end user, or it is lost altogether.</p></li></ul><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. 
If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!</p>]]></content:encoded></item><item><title><![CDATA[The Great Tokenapocalypse]]></title><description><![CDATA[Listen now | Why Gemini Can&#8217;t Scale and Apple Won&#8217;t Try]]></description><link>https://www.nelsx.com/p/the-great-tokenapocalypse</link><guid isPermaLink="false">https://www.nelsx.com/p/the-great-tokenapocalypse</guid><dc:creator><![CDATA[Dr. Nels Lindahl]]></dc:creator><pubDate>Fri, 26 Sep 2025 23:00:49 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/174277347/38858d67cae2a8b0067ce7da1421022f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Thank you for tuning in to week 206 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration is &#8220;The Great Tokenapocalypse.&#8221;</p><p>As large language models reach deeper into consumer devices, the cost of running them becomes the real bottleneck. So many tokens get burned with no ROI or use case for the company burning them; it&#8217;s really out of control. Almost as out of control as the sunk cost of data centers that will probably be regretted at some point in the next 5 years. It&#8217;s the unspoken reality of an arms race: companies are building data centers that simply depreciate and spending compute resources without any plan for recovering the cost. This week explores how token economics is silently shaping the deployment strategies of Google and Apple.</p><p>You may have noticed something strange about the rollout of generative AI: despite Google&#8217;s global reach and technical infrastructure, Gemini is not yet present on every device. It isn&#8217;t quietly running in the background on your Nest Hub, it doesn&#8217;t summarize content on your Pixel Watch, and it hasn&#8217;t taken over the always-on interactions that dominate the smart home experience. 
On paper, Gemini could power all of this, but in practice, it doesn&#8217;t. The reasons are not technical, but economic.</p><p>It&#8217;s the tokens. Each time a large language model like Gemini processes a prompt or generates a response, it consumes tokens, which are effectively units of computation that translate directly into cost. This cost is not abstract. It is real-time and metered, and at scale, with enough uses, it becomes a continuous, compounding expense. When you ask Gemini to summarize an email or rewrite a paragraph, you&#8217;re triggering a live cloud inference cycle that draws directly on Google&#8217;s TPU infrastructure. At a small scale, these requests are manageable. But when deployed across millions of devices, in billions of micro-interactions, the financial and infrastructure burden becomes extreme. What looks like product restraint is actually cost containment. Google is avoiding what could become a tokenapocalypse: a runaway escalation of inference demand that outpaces both compute supply and operating budget.</p><p>Gemini was designed for centralized, high-performance environments. It was not optimized for low-power edge devices or offline operation. Its rollout has been concentrated in strategic, high-leverage use cases: Workspace productivity, Pixel exclusives, and experimental features inside Search Labs. These are high-value zones where the cost per token can be justified. Gemini has not been deployed ambiently in the wild on smart speakers, in Android Auto, or on lightweight wearables, mostly because those endpoints offer little to no margin against token cost. The model cannot run constantly without triggering exponential cloud expenditure. Until inference becomes drastically cheaper or edge-native Gemini variants emerge, Google is likely to continue rationing its deployment to protect against economic overextension.</p><p>Apple, by contrast, has chosen an entirely different path forward. 
It elected a path that avoids the token problem from the outset. Its 2024 rollout of &#8220;Apple Intelligence&#8221; emphasized a local-first architecture built around on-device models. Instead of sending every prompt to the cloud, Apple routes the vast majority of inference through its A-series and M-series silicon. This strategy means that users can rewrite notes, summarize messages, or interact with Siri entirely offline, with zero token cost to Apple. When tasks exceed the capability of local models, they are sent to Apple&#8217;s &#8220;Private Cloud Compute&#8221; system, but this fallback is used selectively, with strict privacy and latency guarantees.</p><p>Apple&#8217;s approach isn&#8217;t just a branding play. It reflects a fundamental architectural decision to avoid the economics of inference altogether. Apple doesn&#8217;t operate a hyperscale public cloud business, so it has no incentive to absorb or monetize cloud-based generative AI usage. Its profits come from hardware margins and platform services. This gives Apple the freedom to constrain usage, limit interaction complexity, and push AI to the edge. It is a strategy Apple can get away with, ultimately without incurring the compounding costs that Google faces. It&#8217;s a token-avoidant strategy, and it may prove to be the more sustainable one.</p><p>Where Google builds outward from a full-stack cloud foundation, Apple builds inward from a controlled edge. Google&#8217;s strategy scales across models and modalities, but each expansion amplifies cost. Apple&#8217;s strategy constrains functionality but keeps economics stable. Both are reacting to the same underlying pressure: token costs are rising faster than monetization models can support. The more embedded the model becomes, the more tokens flow. A stark reality emerges: it becomes ever more urgent to rethink deployment patterns. This isn&#8217;t just a question of technical feasibility. 
It&#8217;s a matter of financial survivability.</p><p>The race to deploy generative AI at scale is quickly becoming a race to control token exposure. Inference cost, not model quality, may be the key determinant of which platforms can sustainably integrate AI across the stack. If cloud economics don&#8217;t shift, and if token optimization doesn&#8217;t advance, then ambient LLMs may remain a luxury reserved for premium endpoints and enterprise tasks. The real future of ubiquitous AI may depend less on how powerful models become, and more on how efficiently they run in the wild.</p><p>Things to consider:</p><ul><li><p>Google&#8217;s restraint in deploying Gemini across its device ecosystem likely reflects real-time token cost constraints rather than technical limits.</p></li><li><p>Every cloud-based Gemini interaction consumes metered compute, making global deployment economically unstable without stronger monetization.</p></li><li><p>Apple avoids these problems by designing for on-device inference and constraining AI functionality to remain token-light.</p></li><li><p>Token economics are now shaping the strategic posture of every major platform, defining where and how AI appears in consumer workflows.</p></li><li><p>Sustained deployment of generative models may depend less on breakthrough architecture and more on advances in inference efficiency and local compute.</p></li></ul><p>As the tokenapocalypse looms, we&#8217;ll be watching how companies respond. That response could come through model compression, edge acceleration, hybrid routing, and new monetization strategies. In the coming weeks, we&#8217;ll explore how these constraints are shaping research priorities, ecosystem fragmentation, and what it means to run AI sustainably across global networks. If you see an AI endpoint that should exist but doesn&#8217;t, it may be because someone, somewhere, did the token math.</p><p>What&#8217;s next for the Lindahl Letter? New editions arrive every Friday. 
If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!</p>]]></content:encoded></item></channel></rss>