
Infrastructure self-harm

That wasn’t what we hoped would happen when nearly 10 years ago we and others came up with these new automated technologies around Kubernetes. We thought we would make things easier, more automated, safer, more compliant. And instead, people seem to be getting more and more stuck. And that’s partly because systems have grown. We’re the victims of our own success. … A platform so easy to use that it needs another platform to make it usable…

“Extrapolating these results to the economy, current generation AI models could increase annual US labor productivity growth by 1.8% over the next decade. This would double the annual growth the US has seen since 2019, and places our estimate towards the upper end of recent estimates.”

🔗 Estimating AI productivity gains, Anthropic

Make it so the robots can use your shit, or you might become irrelevant. Or, at least, less relevant.

Platforms, tools or frameworks that are hard for large language models (LLMs) and agents to use will start feeling less powerful and require more manual intervention. In contrast, tools that are simple for agents to integrate with and well suited for the strengths and constraints of LLMs will quickly become vastly more capable, efficient and popular.

Do so by asking:

Is it simple for an Agent to get access to operating a platform on behalf of a user? Are there clean, well described APIs that agents can operate? Is there machine-ready documentation and context for LLMs and agents to properly use the available platform and SDKs? Addressing the distinct needs of agents through better AX will improve their usefulness for the benefit of the human user.

🔗 Introducing AX: Why Agent Experience Matters
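To ground the “clean, well described APIs” point above: many agent toolkits derive the tool schema an LLM sees from little more than a function’s name, type hints, and docstring, so that text effectively becomes the interface. A hypothetical before-and-after sketch, not any particular platform’s API:

```python
def _restart(name: str, wait: int) -> str:
    # Stand-in for a real platform call.
    return "running"

# Agent-hostile: the model has to guess what this does and what to pass it.
def handle(x, y=None):
    return _restart(x, y or 30)

# Agent-friendly: intent, parameters, and return value are machine-readable.
def restart_service(service_name: str, wait_seconds: int = 30) -> str:
    """Restart the named service and return its resulting status,
    "running" or "failed". Waits up to `wait_seconds` for a healthy
    status before giving up."""
    return _restart(service_name, wait_seconds)
```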

Claude Skills are at the top of my list of “important things no one is talking about” for this year. They’re both an AIPaaS and a showcase for a new programming model and mindset. The educational angle (“View Source”) is a good take.

🔗 What MCP and Claude Skills Teach Us About Open Source for AI

The original is long, so I finished reading it with a summary from one of the discussed robots:

🤖 MCP, Skills, and the Architecture of Participation in Open Source AI

Summarized by AI.

Open source AI is not just about releasing model weights. True innovation comes from an architecture of participation, where developers can inspect, modify, and share small, composable components. Historical breakthroughs like Unix, Linux, and the early web succeeded because they allowed modular contributions—viewing source, remixing, and building on others’ work—rather than requiring deep engagement with the most complex layers of the system.

Anthropic’s MCP (Model Context Protocol) and Claude Skills embody this participatory model. MCP servers let developers give AI systems new capabilities via simple, inspectable interfaces to data, APIs, and tools. Skills are atomic, shareable instructions—bundled expertise that can be read, forked, and adapted. This is the opposite of OpenAI’s GPT “apps,” which live in a closed, app-store-like ecosystem where internals can’t be inspected or reused. Skills and MCP servers are components, not products, and their openness allows a collaborative ecosystem to flourish.

The long-term potential lies in creating “fuzzy function calls”—reusable, human-readable instructions that formalize what LLMs already understand. Just as early compilers and UI toolkits let developers move “up the stack,” MCP and skills will let participants focus on architecture and composition rather than raw code generation. This evolution could preserve mass participation even as layers of abstraction and complexity emerge, as the web did with HTML, CSS, and JavaScript frameworks.

The economic stakes are high. Today’s AI market is extractive: training data is used without recognition, value capture is concentrated in a few companies, and improvement loops are largely closed. MCP and skills could enable participatory markets, where contributions are visible, attributable, and shareable. To reach this future, the AI community must embrace open protocols, inspectable artifacts, new licensing models, and mechanism design that fairly rewards contributors and encourages ecosystem growth.

The future of open source AI will be decided at the interface layer, where ordinary developers and even non-programmers can create reusable skills leveraging their own expertise. If AI development mirrors the open web instead of proprietary app stores, it could become a generative ecosystem that expands opportunity rather than consolidating power.

🤖 What MCP and Claude Skills Teach Us About Open Source for AI - Explores how MCP and Claude Skills could enable a participatory, open-source AI ecosystem similar to the early web, contrasting it with closed, app-store-like approaches.

Summarized by ChatGPT on Dec 3, 2025 at 7:04 AM.
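To make the summary’s “components, not products” point a bit more concrete, here is a minimal sketch of an MCP server, assuming the official Python MCP SDK (`pip install mcp`) and its FastMCP helper; the server name, the glossary data, and the `define` tool are hypothetical illustrations, not anything from the article. The point is that the whole component fits on one screen and can be read, forked, and adapted, which is the architecture of participation the post is arguing for.

```python
# Hypothetical MCP server sketch; assumes the official Python MCP SDK
# (`pip install mcp`) and its FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("team-glossary")

# A toy, inspectable data source standing in for "data, APIs, and tools".
GLOSSARY = {
    "AX": "Agent Experience: how easily agents can operate a platform on a user's behalf.",
    "MCP": "Model Context Protocol: a standard way to expose tools and context to models.",
}

@mcp.tool()
def define(term: str) -> str:
    """Return this team's definition of a term, or say it is unknown."""
    return GLOSSARY.get(term, f"No definition recorded for {term!r}.")

if __name__ == "__main__":
    # Defaults to stdio transport, so any MCP-capable client can attach.
    mcp.run()
```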

Getting ready for an AI app influx. Did we learn from the digital transformation era?

Will IT get ahead of the chaotic introduction of a new technology, AI, into their organizations? Probably not; they rarely do, creating Shadow Whatever-the-New-Tech-Is. But for those that do, here are Tony’s and my recommendations: platform engineering, etc. This is our second Tanzu Talk livestream. We do it weekly on Tuesdays at 4pm Amsterdam time/10am Eastern time.

🔗 Getting ready for an AI app influx. Did we learn from the digital transformation era?

A fantastic summary of what it feels like to read most executive-level (marketing/comms) content, on any topic:

If you want to hear some corporate gibberish, OpenAI interviewed executives at companies like Philips and Scania about their use of ChatGPT, but I do not know what I gleaned from either interview – something about experimentation and vague stuff about people being excited to use it, I suppose. It is not very compelling to me. I am not in the C-suite, though.

Good blog post overall, ending with:

It turns out A.I. is not magic dust you can sprinkle on a workforce to double their productivity. CEOs might be thrilled by having all their email summarized, but the rest of us do not need that. We need things like better balance of work and real life, good benefits, and adequate compensation. Those are things a team leader cannot buy with a $25-per-month-per-seat ChatGPT business license.

🔗 A Questionable A.I. Plateau