Posts in "links"

Claude Skills are at the top of my list of “important things no one is talking about” for this year. They’re both an AI PaaS and a demonstration of a new programming model and mindset. The educational angle (“View Source”) is a good take.

🔗 What MCP and Claude Skills Teach Us About Open Source for AI

The original is long, so I finished reading it with a summary from one of the discussed robots:

🤖 MCP, Skills, and the Architecture of Participation in Open Source AI

Summarized by AI.

Open source AI is not just about releasing model weights. True innovation comes from an architecture of participation, where developers can inspect, modify, and share small, composable components. Historical breakthroughs like Unix, Linux, and the early web succeeded because they allowed modular contributions—viewing source, remixing, and building on others’ work—rather than requiring deep engagement with the most complex layers of the system.

Anthropic’s MCP (Model Context Protocol) and Claude Skills embody this participatory model. MCP servers let developers give AI systems new capabilities via simple, inspectable interfaces to data, APIs, and tools. Skills are atomic, shareable instructions—bundled expertise that can be read, forked, and adapted. This is the opposite of OpenAI’s GPT “apps,” which live in a closed, app-store-like ecosystem where internals can’t be inspected or reused. Skills and MCP servers are components, not products, and their openness allows a collaborative ecosystem to flourish.

The long-term potential lies in creating “fuzzy function calls”—reusable, human-readable instructions that formalize what LLMs already understand. Just as early compilers and UI toolkits let developers move “up the stack,” MCP and skills will let participants focus on architecture and composition rather than raw code generation. This evolution could preserve mass participation even as layers of abstraction and complexity emerge, as the web did with HTML, CSS, and JavaScript frameworks.

The economic stakes are high. Today’s AI market is extractive: training data is used without recognition, value capture is concentrated in a few companies, and improvement loops are largely closed. MCP and skills could enable participatory markets, where contributions are visible, attributable, and shareable. To reach this future, the AI community must embrace open protocols, inspectable artifacts, new licensing models, and mechanism design that fairly rewards contributors and encourages ecosystem growth.

The future of open source AI will be decided at the interface layer, where ordinary developers and even non-programmers can create reusable skills leveraging their own expertise. If AI development mirrors the open web instead of proprietary app stores, it could become a generative ecosystem that expands opportunity rather than consolidating power.

🤖 What MCP and Claude Skills Teach Us About Open Source for AI - Explores how MCP and Claude Skills could enable a participatory, open-source AI ecosystem similar to the early web, contrasting it with closed, app-store-like approaches.

Summarized by ChatGPT on Dec 3, 2025 at 7:04 AM.
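To make the summary’s “simple, inspectable interfaces” point concrete, here’s a minimal sketch of an MCP server using the FastMCP helper from the official Python SDK. The server name and tool are made up for illustration; a real server would call an actual API or data source.

```python
# Minimal MCP server sketch: exposes one tool over stdio so an AI client
# can discover and call it. Install the SDK with: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

# Hypothetical server name, for illustration only.
mcp = FastMCP("release-notes")

@mcp.tool()
def summarize_release(version: str) -> str:
    """Return a one-line summary of a release (placeholder logic)."""
    # A real server would fetch from an issue tracker or changelog here.
    return f"Release {version}: placeholder summary."

if __name__ == "__main__":
    # Runs the server over stdio, the default transport for local clients.
    mcp.run()
```

The whole thing is a short, readable file you can fork and adapt, which is the “components, not products” point.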

Getting ready for an AI app influx. Did we learn from the digital transformation era?

Will IT get ahead of the chaotic introduction of a new technology, AI, into their organization? Probably not; they rarely do, creating Shadow Whatever-the-New-Tech-Is. But for those that do, here are Tony’s and my recommendations: platform engineering, etc. This is our second Tanzu Talk livestream. We do it weekly on Tuesdays at 4pm Amsterdam time/10am Eastern time. 🔗 Getting ready for an AI app influx. Did we learn from the digital transformation era?

A fantastic summary of what it feels like to read most executive-level (marketing/comms) content, on any topic:

If you want to hear some corporate gibberish, OpenAI interviewed executives at companies like Philips and Scania about their use of ChatGPT, but I do not know what I gleaned from either interview – something about experimentation and vague stuff about people being excited to use it, I suppose. It is not very compelling to me. I am not in the C-suite, though.

Good blog post overall, ending with:

It turns out A.I. is not magic dust you can sprinkle on a workforce to double their productivity. CEOs might be thrilled by having all their email summarized, but the rest of us do not need that. We need things like better balance of work and real life, good benefits, and adequate compensation. Those are things a team leader cannot buy with a $25-per-month-per-seat ChatGPT business license.

🔗 A Questionable A.I. Plateau

“in the era of the working class teen, you could get a job at a video store and still afford a car and drive around with your friends and feel free. The sense I had, my friends had, that the world we lived in was temporary, fading fast, was not unique to us, to the working class teens of Buffalo and Rochester and Detroit and Grand Rapids.” // A glimpse of Gen-X nostalgia to come (“Back in my day…”), but a sort of culture plan too. // Big All the Real Girls vibes.

🔗 the last working class teens

“In software development, we have 18,000 developers at the company that use coding agents today to optimize our development process,” says Hari Gopalkrishnan. “We’ve already seen 20% productivity [boosts] coming out of those parts of the lifecycle, which we are now reinvesting next year into new growth programs.”

🔗 Bank of America runs 270 AI models across operations

Execs have little knowledge of how things actually work, giving them false hopes about how AI can improve things and replace workers:

“In our recent survey of 1,400 U.S.-based employees, 76% of executives reported that their employees feel enthusiastic about AI adoption in their organization. But the view from the bottom up is less sunny: Just 31% of individual contributors expressed enthusiasm about adopting AI. That means leaders are more than two times off the mark.” And: “This disconnect is a symptom of a broader executive blind spot: They’re not especially attuned to what employees think, and they don’t realize it.”