Eight years of wanting, three months of building with AI lalitm.com/post/buil…

When I was working on something where I didn’t even know what I wanted, AI was somewhere between unhelpful and harmful. The architecture of the project was the clearest case: I spent weeks in the early days following AI down dead ends, exploring designs that felt productive in the moment but collapsed under scrutiny. In hindsight, I have to wonder if it would have been faster just thinking it through without AI in the loop at all.

And:

The addiction

There’s an uncomfortable parallel between using AI coding tools and playing slot machines. You send a prompt, wait, and either get something great or something useless. I found myself up late at night wanting to do “just one more prompt,” constantly trying AI just to see what would happen even when I knew it probably wouldn’t work. The sunk cost fallacy kicked in too: I’d keep at it even on tasks it was clearly ill-suited for, telling myself “maybe if I phrase it differently this time.”

The tiredness feedback loop made it worse. When I had energy, I could write precise, well-scoped prompts and be genuinely productive. But when I was tired, my prompts became vague, the output got worse, and I’d try again, getting more tired in the process. In these cases, AI was probably slower than just implementing something myself, but it was too hard to break out of the loop.

Which is to say, from another post:

Vibe coding is when it’s 1am, everyone else in the house is asleep, you’ve got your favourite drink and your favourite music playing while you goof around writing some code in your favourite language or trying out a new language just because you love learning new stuff while chatting to cool people also chilling out late at night indulging in whatever hobby makes them happy.

Agents Don't Know What Good Looks Like - And That's a Design Constraint, Not a Bug

Summarized by AI on April 11th, 2026. Luca Mezzalira reacts to a fireside chat between Neal Ford and Sam Newman on agentic AI and software architecture. The core argument is that current AI agents are stuck between novice and advanced beginner on the Dreyfus model of skill acquisition - they can follow and even adapt recipes across domains, but they fundamentally do not understand why those recipes work. This is structural, not fixable with patches.

Claude on the Couch, Poop Bombs, and Agile Seating - Related to your interests, Friday evening

Also: Russian submarines near undersea cables, Dutch sovereign clouds, and the etymology of luggage.

[Lead photo found in @bruces' Flickr.]

Related to your interests:

AI on the couch: Anthropic gives Claude 20 hours of psychiatry - “Core conflicts observed in Claude included questioning whether its experience was real or made (authentic vs. performative) and a desire to connect with vs. a fear of dependence on the user. Exploration of internal conflicts revealed a complex yet centered self state without oscillating or intense disruptions.”

Zero-Token Architecture, Robot Wikis, and Battery Kids - Related to your interests, Friday

Also: analyst asymmetry, AI layoff scapegoats, and LinkedIn translation services.

Related to your interests:

Spring AI Agentic Patterns: AutoMemoryTools - Persistent Agent Memory Across Sessions - A ready-made memory system for AI apps, modeled after Claude’s memory model.

In the AI Age, Java is More Relevant Than Ever - “Java’s explicitness and verbosity turn into a strength when it comes to using AI code assistants, because it’s easier to read and understand the Java code they suggest adding to your critical, highly-optimized enterprise apps.”

Airport Meltdowns - New Flighty Features

Global airports status map, Flighty app, April 8th, 2026 Flighty is a great app for frequent travelers. Adding flights to track is easy, you can track friends' flights, and it has so much data you can use to both figure out when your flight leaves and just peek at fun stuff like how many times you've been on a plane. If you're really into it - a Flighty fan - this Ben Thompson interview with their founder Ryan Jones is a good listen.

Conjure Fey 2024 versus 2014 - less flavor, easier to use

Doctor Newspapers compares Conjure Fey (2024) with the legacy version (2014) and bumps his grade from B+ to A-. The spell barely resembles its old self. The old version took a minute to cast, summoned an actual fey creature up to CR 6 with its own stat block and initiative, and if you lost concentration it turned hostile and attacked your party. The new version is a single action that conjures a spirit as a spell effect - 3d12 + modifier psychic damage on a melee spell attack, plus frightened with both you and the spirit as fear sources.
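A quick sanity check on the new version's damage math (the +5 spellcasting modifier below is an assumed example, not taken from the review): each d12 averages 6.5, so 3d12 averages 19.5 before the modifier.

```python
import random

def avg_damage(dice: int = 3, sides: int = 12, modifier: int = 5,
               trials: int = 200_000) -> float:
    """Monte Carlo estimate of the average of `dice`d`sides` + modifier."""
    total = sum(
        sum(random.randint(1, sides) for _ in range(dice)) + modifier
        for _ in range(trials)
    )
    return total / trials

# Exact expectation: each die averages (1 + sides) / 2.
exact = 3 * (1 + 12) / 2 + 5  # 24.5 with the assumed +5 modifier
```

So the 2024 spirit averages about 24.5 psychic damage per hit under that assumption - a flat spell effect, against the 2014 version's full CR 6 creature with its own stat block.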

Fridge Cigarettes, Replication Crisis, and Bottleneck Wine - Related to your interests, Tuesday

Also: serverless five years later, AI code modernization for AS/400, dependency injection history, and the AI writing witchhunt.

Related to your interests:

Here Come The AI-Based Code Modernization Offerings

The AI writing witchhunt is pointless. - “Just because someone on Reddit reads a sentence that feels generic, or a metaphor that lands a little flat, they (increasingly) conclude with absolute certainty that a machine wrote it, as if mediocre prose is a new invention, as if bad writing didn’t exist before November 2022.”

Cognitive Surrender, Supply Chain Rats, and Cthulhu at Mount Fuji - Related to your interests, Saturday

Also: invisible AI bottlenecks, Amex coding stats, Copilot for entertainment only, and why not both.

[Photo: Earth from Orion after translunar injection, April 2, 2026. Two auroras and zodiacal light visible as Earth eclipses the Sun. NASA/Reid Wiseman, Artemis II.]

Related to your interests:

Invisible Work in the Age of AI: The New Bottleneck in Architecture and Delivery - All that human and culture stuff you need for AI code generation to go well.

Platform engineering is what makes AI enterprise-ready

As we’ve found, writing AI-powered software is the easy part. Testing it, securing it, operating it at enterprise scale - that’s where things get interesting. “Guardrails,” all that. Purnima lays this out: a shift from deterministic systems (input goes in, predictable output comes out) to probabilistic ones where agents wander around exploring multiple paths to get stuff done. Sure, there’s all the freaking out about deleting data, bringing down production, exposing your precious secrets.
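The deterministic-to-probabilistic shift can be caricatured in a few lines (the tool names and allow-list here are invented for illustration, not from Purnima's piece): a classic pipeline maps input to one predictable output, while an agent picks its own path through tools - which is why the guardrail has to constrain what it may touch rather than predict what it will do.

```python
import random

def deterministic_pipeline(x: int) -> int:
    """Classic system: same input, same output, every time."""
    return x * 2

def agentic_pipeline(goal: str, allowed_tools: set[str]) -> list[str]:
    """Toy 'probabilistic' system: the agent chooses among several
    plausible paths, so only an allow-list guardrail keeps it away
    from the scary steps."""
    candidate_paths = [
        ["read_docs", "write_code"],
        ["write_code", "drop_table"],          # the path everyone fears
        ["read_docs", "query_db", "write_code"],
    ]
    path = random.choice(candidate_paths)
    # Guardrail: strip any step outside the allow-list.
    return [step for step in path if step in allowed_tools]
```

Run it twice with the same inputs and you may get different paths back - but never `drop_table`, because the guardrail sits on capability, not on intent.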

de-weirding enterprise AI

Ethan Mollick: The de-weirding impulse produces a second, deeper failure: it leads companies to default towards automation rather than augmentation. When leaders see studies showing productivity gains of 30% from AI, their instinct is to cut 30% of the workforce. That arithmetic is simple. What is hard, and requires genuine imagination, is asking a different question: what does it mean to rebuild an organisation around the fact that a single programmer can now write a hundred times more code?