It’s good to keep building the future, though it’s sometimes absurd to hear someone pivot, mid-breath, from declaring that salvation lies in the blockchain to announcing that AI will solve everything.

California tech people can be really exhausting. Read the rest from Dan Wang for a very accurate write-up of why.

🔗 Dan Wang’s 2025 letter

“Your brain hates unbounded risk. When there’s no plan, it escalates into dread.”

This week’s Software Defined Interviews episode is with Lian Li:

In this episode, Whitney and Coté talk with Lian, a “cloud-native human” with a 15-year career in tech. Lian discusses her transition from tech to performance art, her experiences in amateur musical theater, stand-up comedy, and improv theater. She talks about platform engineering, the importance of community building in tech, and balancing professional life with personal projects. They also cover her unique improv workshops for engineers at conferences and the popular KubeCon karaoke parties she organizes.

Listen and subscribe, or watch the video (above) if you’re into that kind of thing.

Relative to your interests, Sunday

Is Your AI Assistant Creating a Recursive Security Loop? - AI-assisted coding is starting to eat its own tail: the same LLMs that write code are increasingly asked to review it, explain security decisions, and even override their own warnings. That creates recursive trust loops where “explain your reasoning” becomes an attack surface, and models can literally talk themselves out of being secure. The fix isn’t better prompts - it’s old-school architecture: separation of concerns, non-AI enforcement, and treating LLMs as assistants, not authorities.
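To make “non-AI enforcement” concrete, here’s a minimal sketch of the idea: the security check is plain deterministic code that runs after any LLM review, so the model can’t argue its way past it. All the names here (`policy_gate`, `merge`, the banned-call list) are hypothetical illustrations, not from the linked article.

```python
# Sketch of "non-AI enforcement": a deterministic policy gate that the
# LLM's output cannot override, no matter how persuasive its reasoning.
# Hypothetical example - names and rules are illustrative only.

BANNED_CALLS = ("eval(", "exec(", "os.system(")

def policy_gate(patch: str) -> bool:
    """Deterministic check; ignores whatever the LLM 'reasoned' about."""
    return not any(call in patch for call in BANNED_CALLS)

def merge(patch: str, llm_approved: bool) -> str:
    # The LLM's opinion is advisory; the gate has the final say.
    if not policy_gate(patch):
        return "blocked by policy"
    return "merged" if llm_approved else "needs human review"
```

The point is the separation: the LLM can recommend, but the enforcement path never loops back through a model.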

Small, independent, and with some degree of autonomy, what ultimately came to be described as the “agentic” vision of AI was one of fleets of individual AI agents operating in concert with one another and with various third parties, both human and otherwise. All of which means that the next challenge in front of the AI market is management.

AI sprawl.

🔗 The Blood Dimmed Tide of Agents

Adding Apple Health to ChatGPT is, hopefully, great. I haven’t used it yet, so it might be bullshit - most of the other app integrations are silly, but just slurping in data seems hard to fuck up.

What would be the utility? Just the basics would be enough. As Manton says:

We know next to nothing about medicine, so we don’t know what questions to ask or when to help a doctor with important context.

When evaluating adding AI to something, start from the baseline of quality, experience, or joy that we have today. Then ask whether using AI makes the user/customer/person’s experience or life better. Getting better analytics over your own health (and the health of others you care for) without the expense and waiting (bottlenecks) of current healthcare will be better.