The Lindahl Letter

Enforcing AI standards without exception

Thank you for tuning in to week 207 of the Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for the Lindahl Letter is, “Enforcing AI standards without exception.”

Standards are something we need to spend more time talking about. That is a general statement and not a special argument. Years ago, I witnessed a physical desk sign at an office that said, “We either have standards or we don’t.” It’s not a great mystery how that particular leader felt about standards. That type of adherence to standards is not all that common. In our LLM-sponsored, prompt-first-and-ask-questions-later world, people just keep prompting. Allowing models to just keep generating without standards is how we ended up where we are right now. Those tokens are being burnt at prodigious rates, and all of those burnt tokens yield nothing reusable or even effectively carried forward. Mostly they are highly siloed outputs for an audience of one. They are all spent, and the electricity and compute used will never be recovered. They are just an expense on somebody else’s balance sheet.

Everything about the open web is pretty much in rapid decline. I would argue that enforcing standards without exception is the only way the end user can truly control the agenda or hope to manage the ultimate outcome when working with AI. It might even help us save the internet, though that cause might already be lost. One of the great ironies of generative AI is that it demands more discipline from the human interacting with it to get quality outputs, not less. Sure, prompt engineering has become a hands-on-the-keyboard sport, but my best guess is that everything ends up being more conversational in the end. You would expect a machine to be the enforcer of rules, to deliver outputs with mechanical precision. Instead, the responsibility ultimately falls back on the end user to enforce standards at every turn. The system will generate endlessly, but unless you control the agenda, it will wander away from the very standards that define your work. A lot of people are also just creating AI slop and, potentially worse, AI-generated workslop.

This is not a trivial annoyance. It is the defining challenge of using AI effectively. You might tell a system: no em dashes, strict numeric citations, Substack-compatible footnotes. And for a moment, it will comply. Then, in the next draft, it slips back into its defaults. Suddenly the citations are misplaced, the formatting is broken, or the output is square when you clearly require 14:10. It doesn’t matter how many times you’ve said it; for some reason, the system’s memory for discipline is shallow. If you do not enforce the standard without exception, the drift takes over. For an organization, that can mean tens of thousands of drifting lines of argument and fragmented results.
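
A minimal sketch of what enforcement without exception can look like in practice: a small validation pass that rejects a draft whenever it violates the stated rules, rather than letting one slip through. The rule set and the sample draft below are hypothetical illustrations based on the example standards above, not part of any real pipeline or API.

```python
import re

def check_standards(draft: str) -> list[str]:
    """Return a list of standard violations found in a generated draft."""
    violations = []
    if "\u2014" in draft:  # U+2014 is the em dash
        violations.append("contains an em dash")
    # Strict numeric citations look like [1], [2], ...; flag anything
    # else that appears inside square brackets.
    if re.search(r"\[(?!\d+\])[^\]]*\]", draft):
        violations.append("contains a non-numeric citation")
    return violations

# Enforce without exception: any violation means the draft is rejected
# and regenerated, never waved through as close enough.
draft = "Standards matter [citation needed] \u2014 or so we hope."
for problem in check_standards(draft):
    print(f"REJECT: {problem}")
```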

That is why the end user must step into a role that looks less like automation’s promise and more like quality assurance. You are not simply a writer or a collaborator. You are the auditor, the rule enforcer, the one who stops the drift. We either have standards or we don’t. Allow one exception, and you have taught the system that exceptions are acceptable. Enforce the standard every time, and you create a boundary strong enough to shape consistent results.

This relentless enforcement becomes the core of collaboration. Without it, the system defaults to “plausible” instead of “correct,” “close enough” instead of “aligned.” You cannot rely on the machine to protect the integrity of your work or even to deliver solid, consistent outputs. That responsibility is yours. The human must guard the agenda with vigilance and insistence. Outside of ruthlessly enforcing standards without exception, the path forward is just full of slop.

Over time, this process builds more than consistency. It builds identity. A body of work that holds together across hundreds of posts or thousands of outputs does so because the user enforced the standards that give it coherence. We may very well come to regard the internet archives from before all the LLM training as untainted and treat everything after that point with skepticism. I’m not arguing that everything in that first tranche of content was high quality or even accurate, but it predates the models. Without that enforcement, the work would fracture into a mix of styles, structures, and shortcuts. Enforcing standards without exception is exhausting, but it is also the only way to produce work that reflects your agenda rather than the system’s defaults.

Things to consider:

  • AI will always drift back toward its defaults unless the user enforces rules consistently.

  • The promise of automation is inverted: the human enforces discipline, not the machine.

  • Exceptions teach the system the wrong lesson and erode consistency.

  • Vigilant enforcement is what turns scattered outputs into a coherent body of work.

  • Control of the agenda belongs to the end user, or it is lost altogether.

What’s next for the Lindahl Letter? New editions arrive every Friday. If you are still listening at this point and enjoyed this content, then please take a moment and share it with a friend. If you are new to the Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!
