Developers need to tinker or they’ll reject your platform. That’s a lesson everyone who builds tools and platforms for developers learns. And the more ambitious your platform is in scale, the more tinkering resistance you encounter: say you want it to be the platform that tens of thousands of developers at a bank use.
What if you could give the tinkerers what they want and also put a standardized, enterprise-wide platform in place?
Headless AI, Evals as Levers, and Spaghetti Topped With Spaghetti - Related to your interests, Monday
Also: Cloudflare’s wholesale memory, custom Claude Code, and Google Cloud math
From The Edge Not Taken.

Related to your interests

- Headless everything for personal AI - What if we go back to the command line? // DOS was good enough for our grandparents, it’s good enough for us.
- The Harness Is the Lever: Why Enterprise Compliance Now Runs on Evals, Not Edicts
- Cloudflare can remember it for you wholesale - A private cloud version of this would be cool.
Mainframe to remain undefeated by AI, vaguebooking, and the problem with seldom-used PCs - Related to your interests, Saturday morning
Also: OpenAI’s enterprise play, Gemini voice acting, and airline antitrust
I’ve covered our announcements this week about Tanzu Platform agent foundations in other posts; check those out if you’re into that kind of thing. Now, onto the usual nonsense…
Duivendrecht fire station, early Spring 2026.

Related to your interests

- Private Cloud Data Intelligence: The Case for Running AI Where Your Data Lives - Public cloud economics, compliance regulations, and data gravity are pushing enterprises back to private infrastructure; a unified lakehouse architecture combining MPP analytics, in-memory grids, streaming pipelines, and open-format object storage lets organizations run AI models directly on sensitive on-premises data without moving it.
AI is dumber than you think
It’s almost unfair to unmask the AI magic, but knowing is better than not:
One way to understand an LLM is as an improv machine. It takes a stream of tokens, like a conversation, and says “yes, and then…” This yes-and behavior is why some people call LLMs bullshit machines. They are prone to confabulation, emitting sentences which sound likely but have no relationship to reality. They treat sarcasm and fantasy credulously, misunderstand context clues, and tell people to put glue on pizza.
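That “yes, and then…” loop is, mechanically, all generation is: score possible next tokens, append one, repeat. Here’s a toy sketch of that loop using a hand-made bigram table (the table, tokens, and probabilities are all invented for illustration; a real LLM computes the distribution with a neural network over a vocabulary of tens of thousands of tokens):

```python
import random

# Invented next-token probabilities; a real LLM computes these with a model.
bigram = {
    "the": {"cat": 0.6, "pizza": 0.4},
    "cat": {"sat": 0.9, "flew": 0.1},
    "pizza": {"needs": 1.0},
    "needs": {"glue": 0.3, "cheese": 0.7},
}

def generate(prompt, steps, seed=0):
    """Improv-machine loop: sample a plausible next token, append, repeat."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(steps):
        dist = bigram.get(tokens[-1])
        if dist is None:  # no known continuation: stop improvising
            break
        words, probs = zip(*dist.items())
        tokens.append(rng.choices(words, weights=probs)[0])
    return " ".join(tokens)

print(generate("the", 3))
```

Note there’s no truth-checking step anywhere in the loop: “needs glue” and “needs cheese” are both just continuations with weights, which is exactly how glue ends up on pizza.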
Garbage Chairs of Amsterdam, They Live edition.
If you can’t wait until Friday for the perfectly edited and polished Software Defined Talk podcast episode, you can watch the unedited, full recording - in glorious color video - in the meantime.
Tanzu Platform 10.4: a private cloud platform for AI harnesses (or, "agentic AI")
AI companies are building platforms for running agentic applications. Right now, those applications are primarily for software development, with a little bit of knowledge worker stuff. In each case, you get a “harness,” an application that wraps all sorts of functionality around a model.
These harness apps go way beyond the chat-based apps we’ve grown used to over the past few years. They use the model to plan and execute multi-step processes and to get access to data and other apps: reading files, working with your email, PowerPoint, and so on.
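At its core, a harness is a loop: send the conversation to the model, let the model either answer or request a tool call, run the tool, feed the result back, repeat. Here’s a minimal sketch of that loop; everything in it (the stub model, the `read_file` tool, the message shapes) is invented for illustration, and real harnesses call a provider API and add permissions, sandboxing, and retries:

```python
# Minimal agent-harness loop: the model (stubbed here) either answers
# or requests a tool; the harness runs the tool and feeds the result back.

def fake_model(messages):
    """Stand-in for an LLM call; a real harness calls a provider API."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "read_file", "args": {"path": "notes.txt"}}
    return {"answer": "Summary: " + messages[-1]["content"][:20]}

TOOLS = {
    "read_file": lambda args: "meeting notes: ship v10.4 on Friday",
}

def run_harness(user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = fake_model(messages)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["args"])  # execute the tool call
        messages.append({"role": "tool", "content": result})
    return "gave up"

print(run_harness("summarize my notes"))
```

The interesting part is that all the “agentic” behavior lives in the harness, not the model: the model only emits text, and the loop is what turns that text into file reads, email, and so on.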
🤖 Bernie Sanders Presses Claude on AI, Privacy, and a Data-Center Moratorium
Summarized by AI. 2026-04-12 09:40
Bernie vs. Claude

Senator Bernie Sanders questions Claude directly about how AI intersects with privacy, profit, and democratic erosion, framing data collection as the hidden engine behind most consumer-facing AI.
Claude concedes that companies harvest browsing history, location, purchases, search activity, even pause time on a page, then feed it into AI that assembles granular personal profiles users never meaningfully consented to. Those profiles drive targeted ads, differential pricing, and feed ranking, all invisibly and largely unregulated.
🤖 Tyler Cowen and Jonathan Zittrain on Agents, Consciousness, and Why America Can't Pause
Summarized by AI on 2026-04-14.
A Berkman Klein Center public conversation between Tyler Cowen (George Mason, Mercatus Center) and Jonathan Zittrain (Harvard), framed around hypothetical Anthropic and OpenAI releases, Mythos and SPUD, each powerful enough to be withheld on safety grounds.
Cowen opens by saying he believes the safety claims, adding that the real question is not whether these frontier models are dangerous but how long until an acceptable open-source equivalent arrives.
