Coté

How to code agentic AI tools in Java with goblins

The goblins get into agentic AI.

The above video is exciting for me: it’s me relearning programming, playing D&D with the robot, and coming up with a new type of way I can help out at work. In this introductory video I go over the basics of making a tool (an “MCP Server”) for Claude.

This tool is a very simple oracle that will answer yes/no questions. Oracles are a core part of solo role playing: they introduce unknown twists and turns, help you come up with adventures on the fly, and so forth. I’m not sure an AI like Claude needs an oracle; it might be good enough at picking random results that lead to different adventure paths. Or maybe it’s not! AIs work by typing out the next word (yeah, nerds: token) that logically comes next, so maybe they’re very much not good at random results!

Anyhow, this oracle is super basic, but it shows how to make these tools with Spring AI. Java is used by millions of developers, especially enterprise developers, and Spring is used by many (“most all”?) of them. Spring AI makes creating these agentic tools really easy. As you’ll see in the video, setting up Claude is more difficult than making the actual tool!
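For the curious, the heart of such an oracle tool is tiny. Here’s a hedged sketch in plain Java (the class and method names are my own, not from the video; the Spring AI bits, like the @Tool annotation, are noted in comments so the sketch stays dependency-free):

```java
// A minimal yes/no oracle, the kind of thing you'd expose as an MCP tool.
// With Spring AI, you'd annotate askOracle with @Tool(description = "...")
// and register the bean so Claude can call it over MCP.
import java.util.List;
import java.util.Random;

public class OracleTool {

    // A classic solo-RPG oracle answer table; tweak to taste.
    static final List<String> ANSWERS = List.of(
        "Yes", "Yes, but...", "Yes, and...",
        "No", "No, but...", "No, and...");

    private final Random random = new Random();

    // The oracle ignores the question and returns a random answer;
    // the question parameter is there because the tool schema wants one.
    public String askOracle(String question) {
        return ANSWERS.get(random.nextInt(ANSWERS.size()));
    }
}
```

The whole point is that the tool itself is dumb and deterministic-ish; the AI supplies the interpretation of “No, but...” in the context of your adventure.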

I’m going to put out more videos, making more complex (and useful) tools. I’ll also explore running this on your own machine - I’m about to try on a 10-hour flight, so I hope I can figure it out.

You can see the basic tools I’m making for this video series in my EasyChatDM project and my “real” tools in the ChatDM project.

Relative to your interests

  • DATEV Accelerates Tax and Finance Software Innovations with VMware Tanzu Platform - “We recently patched application platforms that host 18,000 containers four times in one week. That wouldn’t have been possible without all the automation we have with Tanzu Platform.” // DATEV eG modernized its application development and deployment pipeline using VMware Tanzu solutions, enabling rapid development and deployment of tax, finance, and business administration applications.

  • Three Methods for Channeling Shadow IT’s Energy - “For CIOs, the question shouldn’t be about whether to eliminate shadow IT but how to harness its potential while mitigating its dangers.” // Or fine them: “what I like to call DARC (dangerous, awfully conceived, redundant, or costly) solutions, they should face financial consequences.”

  • On Da Vinci and Boredom - “More than anything, observe a brilliant person for whom both the prospect and opportunity of boredom led him to follow his curiosity into whatever intellectual avenues it wanted to pursue, and then turning his imagination into product manifest in text and on canvas.”

  • Running MCP-Based Agents (Clients & Servers) on AWS - Running MCP Servers on AWS, with Spring AI.

  • Researchers suggest OpenAI trained AI models on paywalled O’Reilly books - “GPT-4o [likely] recognizes, and so has prior knowledge of, many non-public O’Reilly books published prior to its training cutoff date,” wrote the co-authors. // I mean, we know that the AI people are doing regulatory arbitrage, breaking the law to get competitive advantage.

  • Picking Apart An Ebullient GenAI Spending Forecast - “Lovelock says that companies will be looking for third party software providers to add AI features rather than try to do all of this GenAI creation and integration themselves, which reflects the attitude that most IBM i shops have had thus far. Straw polling we have seen shows that companies are waiting for either IBM or application software makers to give them the tools to have AI functions in their applications.” // And, a lot of the big-ass estimate is AI PCs, so you can probably throw that out.

  • JFrog Survey Surfaces Limited DevSecOps Gains - All that DevSecOps marketing from a couple years ago didn’t stick. // “A global survey of 1,402 application developers, cybersecurity and IT operations professionals finds 71% work for organizations that, despite any potential vulnerabilities, still allow developers to download packages directly from the internet.”

  • The Strategy Behind MCP - Good theory on the business strategy for MCP: it keeps LLMs at the center of value. // “Anthropic is not positioned as a consumer brand. Everything in their actions and intent thus far has been positioned towards enterprises or power users.” // “you need the tools to disintermediate themselves. That, at its core, is the goal of MCP and in my opinion a core strategy for Anthropic.”

Wastebook

  • “No one hate-listens to a podcast.” Here.

  • “Face with bags under eyes.” New emoji for recent parents.

  • “Featurecide: the slow killing of a product’s soul in pursuit of every trend that moves the needle on engagement metrics, no matter how disconnected it is from the original mission.” Here.

  • “We’ll do four hours of shit-talking nonsense.” Noah.

Logoff

I composed and sent this on my phone. You can tell the Substack product managers were like “sure, let’s make it work on iPhones, but don’t spend a lot of time on it.” Which is fine, really. I imagine there aren’t many people who want to sit down to a breakfast of typing a newsletter on their phone.

“Pageants of minor chaos”

Just wastebook and links this episode.

Wastebook

  • “So what is a critic for? This is the second quote that’s in my notebook. It’s in every notebook because I always write it on the first page: ‘Find a subject you care about and which you in your heart feel others should care about. It is this genuine caring, and not your games with language, which will be the most compelling and seductive element in your style.’ Kurt Vonnegut.” Found by that guy Russell.

  • When you’re talking on a podcast, it’s like you’re talking with your friends. When you publish in social media or blogs (probably YouTube), it’s like you’re talking to strangers.

  • “[I]t’s hard to explain to the French that Americans are much more afraid of each other than they are of Russia. Conflict in the United States is usually an internal convulsion, a civil matter.” boom boom paris.

  • “sitting in the buzzfeed offices just clickin' on this off tweetdeck.” Good times.

  • “The best time to estab­lish alternative, non-algorithmic net­works of com­mu­ni­ca­tion & affinity was five years ago. The second best time is today!” Robin Sloan.

  • And: “pageants of minor chaos.”

  • “I think that [parent’s] resilience. Or, their resilience at work is an incredibly important quality to transfer [to their children] and this might be one way to do that. Ooo! Looks like I had a thought!” On bringing your kids to work, having them see you work, etc. - John Dickerson on the Political Gabfest bonus episode, March 13th, 2025.

  • A lot of lunch and learn sessions, weekly meetings, and other collaborative activities focus on building and maintaining a network of knowledge rather than just learning the specific topic covered in the meeting. These activities involve sharing information and establishing connections with others to enhance your understanding and access to a wider range of knowledge.

  • “toyetic.” Here.

  • “During that unplanned and somewhat chaotic scene, Jensen walked up the set and asked, ‘Did somebody order Denny’s?’ He then started serving Nvidia Breakfast Bytes to everyone at the table while talking about his time at the diner.” Alumni.

  • “All the Micheladas You Must Sip in Austin.” One of the most 2000s Austin headlines ever, from 2025.

  • “Late night chemist.” Shoreditch side street.

  • And now I find myself in the absurd position of having to put together a talk about playing D&D with AI at the last minute, sitting in a hotel room in Kensington. This is not the first time this type of thing has happened.

  • “I recalled Hegel’s adage that governments based on voodoo religion were bound to be unstable.” Tyler.

  • “So I waited in my car in a supermarket parking lot. If this Signal chat was real, I reasoned, Houthi targets would soon be bombed.” Buck-wild.

  • An infinite hold my beer regress.

  • “Design Fiction.” bruces.

  • Related: ‘I enjoy probing the domestic “limits of everyday weirdness.”’

  • “1950s and 1960s Little Golden Books purchased at the Hinky Dinky supermarket down the street.” Chris Ware.

  • “decision-makers can remain irrational longer than you can remain solvent.” Career advice in 2025.

  • “a concept as antiquated as intent.” NYTimes.

  • “a felicitous remove.” Spicy.

  • “Significant improvement but still issues.” Oxide and Friends.

Relative to your interests

Logoff

Here’s the video of an interview I did a couple weeks ago with ITQ - always fun folks. I gave two talks this week - on the same day! One at SREDay London (on private cloud platform engineering), another at Monki Gras. The second was the first go at a talk about learning agentic AI by playing D&D. Next time I give it, I want to have at least a recording of coding some tools. We’ll see!

“Significant improvement but still issues.”

Not much today.

Found at the ITQ offices.

Wastebook

Relative to your interests

Logoff

I’ve been re-learning Spring with Dan Vega’s Spring Boot crash course. It’s great, and encouraging. So much has changed since 2005, but the thrill of learning and doing little iterations is fun. After this, there’s his Spring AI course. I hope to get skilled enough to make some D&D AI tools/MCP servers, whatever.

Once I can get over the (to this older Java coder) Rails-like feel of Java and Spring (where there’s so much going on in the background that it gets confusing to know what to do: it’s so simple, I have no idea where to start), it’s pretty quick and interesting.

Also, since the effects and outcomes of Spring (along with Tanzu Platform/Cloud Foundry) are what I talk about at work all the time, it’ll be good to have more first-hand experience than the “reporting” I do on it.

The Illiterate Corporation

I’m the guest on this week’s When Shit Hits the Fan podcast. You can hear about two of my fan shattings. Here’s the podcast in Apple, Spotify, and Overcast.

Favor documents over slides

Slides are an oral culture, not a written culture. Imagine civilization without writing: that’s what organizations relying on slides instead of documents are like.

There are workarounds, and they tend to prove the comparison. Often, you will see a slide with a lot of words, and the presenter will apologize that there’s too much text. That’s because the slides should have been a document.

Slides are not good at text, they’re good at visuals. Slides are good for enhancing spoken communication: showing examples, visualizing data (charts), even giving a written outline of the topics covered, major conclusions, and suggested actions. McKinsey titles are great for all of that. The right slides will make your talk better, more memorable, more “actionable.”

Slides are a terrible way to share, archive, and “document” your decisions and reasoning. For example, slides are terrible at strategy. Have you ever asked for the plans, the strategy, an overview of what a product does, and been sent slides? They’re usually not good. You’re usually left with many questions, especially when it comes to why and how. That’s because these types of things should be documents.

There’s an old maxim of keynote slide design: for your audience to understand the slides, you should need to be there giving the talk. The slides should not be able to stand alone. A document can stand alone, a document can be re-read, sent to people who weren’t in the room.

You can also collaborate on a document. You can suggest changes, you can ask questions in comments, you can update it. You can track changes on a document. A document is, somewhat ironically, more of a living document than slides. In contrast, have you ever tried to track changes and collaborate with slides? It’s a mess.

I use slides all the time for presentations, both public and internal ones. For internal collaborations and work, however, I start with a document and try to “force” the people I’m working with to use the document as well. Eventually, in most of the corporate cultures I’ve worked with, I have to switch to slides pretty early on. But, at least the document is there to serve as the source of truth.

Most corporations are illiterate. From what I can tell, people avoid reading in large organizations. People don’t make the time to read; it’s faster to flip through slides. It’s faster to edit slides.

Guess what else: all this generative AI stuff is really good at text. If you think it’s hard to write, and that most people won’t be able to do it, even the simplest AI can help. You can even take a recording of your presentation of slides and ask the AI to convert it to a document.

This is an opportunity for management. If it seems like people aren’t “getting it,” that ideas aren’t trickling down from management, that you keep getting the same questions over and over…maybe you should switch mediums from slides to text. Try something different. Slides are a poor way to run a company, and switching to documents is an easy, no-cost way to boost productivity.

Relative to your interests

Wastebook

  • “We’ve entered the ‘tamale layaway stage’ of late Capitalism.” Chris.

Conferences

Events I’ll either be speaking at or just attending.

  • SREday London, March 27th to 28th, speaking.

  • Monki Gras, London, March 27th to 28th, speaking.

  • CF Day US, Palo Alto, CA, May 14th, speaking.

  • NDC Oslo, May 21st to 23rd, speaking.

  • SREDay Cologne, June 12th, speaking.

Discounts: 10% off SREDay London with the code LDN10.

Logoff

There's a huge, great line-up of topics and people at Cloud Foundry Day this year, May 14th in Palo Alto, hosted by my work, Tanzu. Come check it out - Cloud Foundry is the most proven, mature platform as a service I know of, used for over a decade in the biggest, mission-critical organizations, and beloved by developers and operators.

The agentic AI hype-cycle is nearly done

To Have and Have Not, Christopher Still. Found in Leiden thrift.

Wastebook

  • “We live in the age of ‘fuck around and find out’ - of iteration and experimentation.” 8Ball wisdom.

  • “15 years ago, the internet was an escape from the real world. Now, the real world is an escape from the internet.” Noah Smith, via Ibid.

  • “everything is optimized for engagement instead of meaning,” the Curmudgeon Era of life.

  • If the likely outcome is the same, you might as well do a good job, so long as that’s fun too. If it’s not fun, do a bad job, if any at all.

  • Never let a slide template tell you how to live your life.

  • “The key to sales relationships, in my experience, is accountability. Take responsibility for mistakes. If you commit to something, do it. If you aren’t sure you can do something, don’t commit to it. This is harder than it looks.” James Dillard.

  • “the death of the author and the return of pleasure” promises of Barthes-lore.

  • “I blew off everything on Thursday and sat in a local place reading Ionesco with a glass of wine and an excellent coffee made with beans roasted in Naples.” Warren Ellis.

  • I always want to know the “therefore”: What it means, what effect it has, how to think different, what to do next. Knowing the diagnosis is helpful, knowing what to do next is more important.

  • “déformation professionnelle.” Found in Russell Davies’ neck of the woods.

  • And: “The Nobel laureate Alexis Carrel has observed that ‘[e]very specialist, owing to a well-known professional bias, believes that he understands the entire human being, while in reality he only grasps a tiny part of him.’”

  • “that’s what might occlude our catsup.”

  • “tweakments”

  • “To read - and announce oneself as having read - literature in translation is to be tasteful and intelligent, a latter-day cosmopolitan in an age of blighted provincialism.” Bros of literary criticism.

  • “obscurantism” Ibid.

  • “I don’t think that knowing anything helps. I don’t think there is anything to know.” - Rick Rubin.

  • Anything she says is correct. She just don’t talk much.

  • “To under­stand the San Joaquin Valley, or any pro­duc­tive ag region, as “rural” misses the point. This is a vast, open-air fac­tory floor, totally wired up, care­fully monitored. I say that with appre­ci­a­tion bor­dering on awe.” Robin Sloan.

Apelles painting Campaspe, Willem van Haecht, ~1630. Found in Leiden thrift store.

Relative to your interests

The agentic AI hype-cycle is nearly done

Good news, everyone. We can start doing “useful things” with agentic AI.

Gartner, November 11, 2024.

The hype cycle for agentic AI has been one of the fastest ever. And I think we’re almost through it without very many people actually having written agentic AI code.

Let’s say Soaked McConaughey (Super Bowl, Feb 9th, 2025) was the peak. And the beginning was sometime around October 2024, when an NVIDIA blog asked “What is Agentic AI?”

Now, everything I read about agentic AI from a technical person is all like “this is just hooking up genAI to your database.”

One sample:

An “AI Agent” is just a model with access to tools like “escalate ticket”, “run SQL query”, or “draw an image”. The rest of the hype comes from fitting it into existing workloads like ETL nonsense with MuleSoft or something banal like that. This is really what all the hype is about: hooking AI models up to existing infrastructure so that they can do “useful things.”
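That quote’s “model with access to tools” idea fits in a few lines of code. A toy sketch in Java (all names are mine, and the “model” is faked with an if-statement; in a real app an LLM would pick the tool and arguments):

```java
// Toy illustration of "an agent is just a model plus named tools."
import java.util.Map;
import java.util.function.Function;

public class ToyAgent {

    // Tools are just named functions the model is allowed to call.
    static final Map<String, Function<String, String>> TOOLS = Map.of(
        "escalate_ticket", id -> "Ticket " + id + " escalated",
        "run_sql_query", sql -> "Rows returned for: " + sql
    );

    // Stand-in for the model: a real LLM would emit a tool name
    // and arguments based on the user's request.
    static String[] fakeModelDecision(String userRequest) {
        if (userRequest.contains("ticket")) {
            return new String[] { "escalate_ticket", "42" };
        }
        return new String[] { "run_sql_query", "SELECT 1" };
    }

    // The "agent loop," minus the loop: ask the model, dispatch the tool.
    static String handle(String userRequest) {
        String[] call = fakeModelDecision(userRequest);
        return TOOLS.get(call[0]).apply(call[1]);
    }
}
```

Everything else in the hype - MCP, frameworks, gateways - is plumbing around that dispatch step.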

And MCP has made Richard Seroter’s list every week, sometimes daily, over the past three or so weeks, so we’re hopefully climbing out of the trough toward the Plateau of Productivity. Hopefully this means we’ll finally start doing “useful things” at scale in enterprises.

That said, the odd thing is I don’t think very many people have even written and deployed agentic AI apps yet.

(I put together a small timeline here, minus the Super Bowl part.)

Conferences

Events I’ll either be speaking at or just attending.

  • VMUG NL, Den Bosch, March 12th, speaking.

  • SREday London, March 27th to 28th, speaking.

  • Monki Gras, London, March 27th to 28th, speaking.

  • CF Day US, Palo Alto, CA, May 14th.

  • NDC Oslo, May 21st to 23rd, speaking.

  • SREDay Cologne, June 12th, speaking.

Discounts: 10% off SREDay London with the code LDN10.

Logoff

I should re-learn programming. It’d make my marketing work a lot easier and truer. A programmer who can market is a good place to be.

For enterprise AI, avoid repeating the wasted DIY year

You're about to waste at least a year and millions of dollars on enterprise AI projects. Here's how to avoid it.

Here’s a new article on enterprise AI from me and a co-worker. As with most maturity models, it’s 2/3 prescriptive and 1/3 “here’s some ideas that might help.” A bit of map and territory.

With AI, we’re seeing a familiar anti-pattern, but this time flavor-injected with generative AI: the board charters a tiger team and gives them a budget. The team hires consultants because their developers don’t know Python. Consultants identify AI use cases, write some Python apps, and many of the pilots show good business value.

AI-related spending is set to reach $371.6 billion by 2027 as businesses shift from experimentation to targeted AI investments. IDC, February, 2025.

Then the board says, “OK, now do that 300 more times.” But the team hasn’t built a sustainable strategy. They haven’t used that time and money to add new organizational capabilities. Your developers still aren’t skilled in AI. So now adding AI to those hundreds of other applications isn’t easy. And that’s also when they discover day two AI operations: you have to run this stuff! Now you’ve just got piles of Python without real in-house capability. Day two AI operations isn’t cheap, it’s often underestimated, if planned for at all.

More toil for developers

Developers already spend 60% of their time on non-application work.1 Add AI infrastructure, and, I’d guess, that’ll climb another 10–15%.

This is exactly what happened with Kubernetes. Based on the Google brand, excellent devrel’ing, and keynotes, companies assumed it was easy - until developers were drowning in YAML. Perhaps we’ll hear an AI-centric quote like this in a few years:

Well, I don't know how many of you have built Kubernetes-based apps. But one of the key pieces of feedback that we get is that it's powerful. But it can be a little inscrutable for folks who haven't grown up with a distributed systems background. The initial experience, that 'wall of yaml,' as we like to say, when you configure your first application can be a little bit daunting. And, I'm sorry about that. We never really intended folks to interact directly with that subsystem. It's more or less developed a life of its own over time. Craig McLuckie, SpringOne 2021

AI is heading the same way. You’ll spend 12 months building your own platform. It’ll barely work - if at all - and cost ~$2 million in staffing. Developers won’t use it like you expected. There’s no ROI, it delivers a third of what you promised, and you’re not really sure how to run and manage it long-term. And, yet, it still costs a lot of money.

CIOs have found generative AI goals harder to attain than first anticipated. Technology leaders are blaming data challenges, technical debt and poor strategy or execution for slowing down progress. Nearly half of CIOs say AI has not yet met ROI expectations, according to Gartner research. Reporting from the Gartner Symposium, October, 2024.

That original team, now AI experts, will leave, either to work at an AI tech company or to help do it all over again at a new enterprise. You’ll be stuck with an unsupported, incomprehensible mess. No one else understands it. You’ve locked yourself into an opaque platform, wasted years, and landed back where you started - with a sprawl of shadow platforms and tech debt. But, don’t worry: the next CIO will launch a new initiative to fix it.

Two new challenges

With AI, you’ll have two more problems.

First, AI evolves monthly, sometimes weekly. New models, new techniques (“agentic AI”). You need to keep up, or you’ll be stuck on outdated tech, losing competitive advantage. The best way to handle this? Just like any other service you provide (e.g., databases): centralize those AI services, then you can upgrade and certify once, enterprise-wide, and give developers a single, enterprise-y source for AI models. The alternative is to find all the AI model usage across your hundreds of applications and enterprise-y them up one by one. You’ll quickly fall behind - just look at the versions of software you’re currently running; I bet many of them are three, five versions behind…especially whatever Kubernetes stacks you built on your own.

Second, if you use the same models as everyone else, you’ll get the same answers. Asking the AI “How do I optimize my pipe-fitting supply chain?” will yield the same response as your competitors get. The real advantage is adding your own data. That’s hard work, needing serious budget and time. And once you figure it out, you’ll need to scale it across teams, which means centralizing AI access, just as with model usage and updating above.

Enterprise AI needs a platform. And what we learned over the past decade is: building your own platform is a terrible idea. This is especially true in private cloud, which I reckon is where about 50% of the apps in the world run, probably much more in large organizations.

Instead, improve your existing stack. Don’t rip and replace it.

If you’re like most enterprises, you have a lot of Java developers using Spring. Use Spring AI as your AI framework to connect to models. The Spring AI developers have been working quickly over the past year to add in the baseline APIs you’d need and adapt to new innovations. For example, the agentic AI framework Model Context Protocol came out in November, and Spring AI is now the official Java implementation for MCP.

And if you’re like a lot of other larger organizations, you already have a strong platform in place, Cloud Foundry. You can add a full-on model service gateway to host or broker AI models. You can host those models yourself if you’re freaked out about public cloud AI, use public cloud AI, or, more than likely, do both! Most importantly, you’ll be able to keep up with model innovations, providing them to all your developers as quickly as your enterprise red tape will allow.

GenAI on Tanzu Platform System Architecture

Your platform team can manage AI services like any other - security, compliance, cost tracking. Since it serves OpenAI-compatible endpoints, you can even still use those Python apps, but now your operations team can secure and manage them instead of whatever consultant-built Python stack you got stuck with.
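Concretely, because the gateway speaks OpenAI-compatible endpoints, repointing a Spring AI app at it is mostly configuration. A sketch, assuming Spring AI’s OpenAI starter and a hypothetical internal gateway URL:

```properties
# application.properties - point the OpenAI client at your internal gateway
# (the URL and key variable here are made-up examples)
spring.ai.openai.base-url=https://models.internal.example.com
spring.ai.openai.api-key=${AI_GATEWAY_KEY}
```

Those consultant-built Python apps can do the same trick by setting the OpenAI client’s base URL to the gateway, which is how the platform team gets them under management without rewriting them.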

So, the plan: (1) get developers using Spring AI to start embedding AI into their apps. (2) Work on integrating your own secret/proprietary data into the AI app workflow. (3) When they’re ready, add AI services to your platform so production deployment and management is seamless. You’ll have day two AI operations covered. And because it’s centralized in a platform, you can now roll it out to those 300 more apps the board asked for.

Then, you can execute a proven playbook: developers should be shipping AI-powered features every two weeks, experimenting and refining based on real usage. I outlined this approach in Monolithic Transformation, with more executive-level insights in Changing Mindset and The Business Bottleneck.

You know, as always - try to avoid repeating the anti-patterns of the past.


Wastebook

  • Bathos, n. – The fine art of inspiring deep emotion, only to trip over a misplaced joke, a clumsy metaphor, or an unfortunate mention of flatulence.

  • Alacrity, n. – The suspiciously eager willingness to do something, often confused with competence but more often a sign that the asker failed to mention how much work it actually involves. (Found in The Economist, explained by the robot, previous as well.)

  • Screwbernetes.

  • “Forward Deployed Engineers.” SEs with commit access.

  • “She glared at me from across the road and shooed me off because I couldn’t stop laughing.” Sting.

  • “Tapestries around her neck.” RotL, #570.

  • “Ms Adichie’s viral TED talk on feminism received an even more impressive accolade: Beyoncé sampled its lines.” The Economist World in Brief.

  • “bantam self-confidence.” Tina Brown.

Relative to your interests


Read by the robot

I don’t read everything, sometimes I have the robot read it for me. Beware that the robot sometimes makes things up. Summaries are for entertainment purposes only.

AI firms raced to shrink large models into cheaper, faster alternatives, ensuring even small companies can now afford to hallucinate at scale. IBM expanded its AI strategy, embedding intelligence into products and unleashing 160,000 consultants to generate AI-powered assistants. AI, once set to replace lawyers, now helps them work faster—though not quite enough to stop them from citing fake cases in court.

The fragrance industry, once built on the power of scent, now thrives on TikTok, where influencers sell perfumes their followers will never have to smell - arguably the best possible scenario for everyone involved.

Is neoliberalism truly in decline? Despite its failures - rising inequality, social fragmentation - it remains the dominant economic framework with no clear replacement. Meanwhile: it doesn't matter if you saw a rabbit, a vase, or an old woman because a study debunked the idea that optical illusions reveal personality traits.

Conferences

Events I’ll either be speaking at or just attending.

  • VMUG NL, Den Bosch, March 12th, speaking.

  • SREday London, March 27th to 28th, speaking.

  • Monki Gras, London, March 27th to 28th, speaking.

  • CF Day US, Palo Alto, CA, May 14th.

  • NDC Oslo, May 21st to 23rd, speaking.

Discounts: 10% off SREDay London with the code LDN10.

Devil in the Daytime, Kyle Dunn, 2024.

Logoff

I’ve been running the above, uh, screed in my mind for a few weeks now. Perhaps I’ll use it as the basis for my VMUG Netherlands talk next week. It’s not exactly the topic of the talk, but good talks, as delivered, often are.

1

60% comes from this IDC survey, where I added up security, implementing CI/CD, app and inf. monitoring and management, deploying code - the rest are definitely things I'd want developers doing. Full citation and background: IDC #US53204725 (February 2025) Source: IDC, Modern Software Development Survey 2024 (N=613) and Developer View 2023 (N=2500).

What we love is good for us, sometimes

What we love is good for us

David Lynch on smoking:

But that said, he admitted smoking played a huge part in his life. “I don’t regret it. It was important to me. I wish what every addict wishes for: that what we love is good for us.”

He went on: “A big important part of my life was smoking. I loved the smell of tobacco, the taste of tobacco. I loved lighting cigarettes. It was part of being a painter and a filmmaker for me.”

Wastebook

  • “Reams of founderspeak floated up into the warm breeze.” Tyler Cowen profile.

  • And: “‘I’m not very interested in the meaning of life, but I’m very interested in collecting information on what other people think is the meaning of life.’ And it’s not entirely a joke.”

  • “I feel that writing about the topic will make me stupider.” Also Tyler.

  • Gruber on Skype’s EoL: “I don’t think it’s an exaggeration to say that if not for Skype, podcasting would’ve been set back several years.”

  • “My toaster has developed self-awareness, which is concerning because its only purpose is to burn things.” AI IoT FUD.

  • “Vibe working is using AI to turn fuzzy thoughts into structured outputs through iteration.” Azeem Azhar.

  • “Stove touching.” Move fast and touch things.

  • “a beautifully crafted digital fortress.” Ian.

  • “far left government computer office.” DOGE-slang.

  • “Who’s gonna help us? …nobody’s coming.” Noah.

Relative to your interests

Logoff

I’m not contributing to the AI slop problem. I’ve been posting some of the explainer queries I’ve had with ChatGPT (see the AskTheRobot category). I both like them, like easy content, and am curious if they draw incoming traffic. // It’s getting warm and sunny again in the Netherlands. I hope that means I’ll get my ass on the bike and get back to fiets-flâneuring.

How to find waste with the robot

My first law of enterprise AI: if you end up having two robots talk with each other to complete a task, that task was bullshit in the first place, and you should probably eliminate it rather than automate it.

For example, if AI is used on both sides of B2B procurement (enterprise software sales), then much of the process is probably bullshit in the first place. There is so much that is weird and ancient in procurement, on both sides, that it’s clearly a poorly done process and an entrenched part of enterprise IT culture.

Nobody likes this, and we all know there’s a high degree of waste to it:

The average software sourcing process involves 28 stakeholders and takes six months. That’s six months of manual research, vendor meetings, demos, internal debates, and ultimately, a decision that still may not be fully informed.

Several years ago, Luke Kanies outlined his frustration and experience with that culture. When the buyers and the users are different people and the deal size goes up, beware: you run the risk of sailing in a sea of bullshit. Those selling (vendors) can bullshit a lot, but those buying can bullshit a lot too. It’s a perfect example of using my first law of enterprise AI to find and remove waste.

The Fantod Pack (1995): a Gorey take on the Tarot deck

“Agentic AI” just means “AI middleware”

Related: this is a great industry analyst overview of the enterprise IT category “agentic AI,” from Jason Andersen:

Conceptually, “AI Development Framework” is a type of middleware technology. To be more specific, it's a layer of shared services that provides a set of APIs and integrations for practitioners (not just developers) to build AI applications, particularly agents. The benefit of this is twofold. First, developers don't have to sacrifice too much flexibility while gaining the potential to work more efficiently. Second, the enterprise also gets a more uniform set of standards to drive better governance and sustainability.

It’s middleware, platform, and operations stuff: all the usual develop, operate, optimize. What’s slightly missing is day two operations, but we’ll all re-discover that soon enough when people try to ship version 2.0 of their (agentic) AI apps.

My advice: from now on, when you hear the phrase “agentic AI” just think “AI middleware used to add AI features to apps.”

Anything else is a bit much. The phrase is fine, just keep a realistic and pragmatic definition for enterprise AI in your head.
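If you want that mental model in code: here’s a minimal, hypothetical Java sketch of what the middleware layer does. This is not Spring AI’s actual API (the class and method names are made up for illustration); it just shows the core idea of a registry of named tools sitting between the model and the app, dispatching tool calls, text in, text out.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of "agentic AI as middleware": a registry of named
// tools sits between the model and the application. The model asks for a
// tool by name, the middleware dispatches the call, and everything is
// text in, text out.
public class ToolMiddleware {
    private final Map<String, Function<String, String>> tools = new HashMap<>();

    // Register a tool under a name the model can refer to.
    public void register(String name, Function<String, String> tool) {
        tools.put(name, tool);
    }

    // Dispatch a tool call requested by the model; unknown tools
    // come back as an error string rather than an exception, since
    // the model only consumes text anyway.
    public String dispatch(String name, String input) {
        Function<String, String> tool = tools.get(name);
        if (tool == null) {
            return "error: unknown tool '" + name + "'";
        }
        return tool.apply(input);
    }

    public static void main(String[] args) {
        ToolMiddleware mw = new ToolMiddleware();
        // A trivial yes/no oracle tool, in the spirit of the D&D oracle above.
        mw.register("oracle",
            question -> question.hashCode() % 2 == 0 ? "yes" : "no");
        // A real model would emit something like: call oracle("Is the door trapped?")
        System.out.println(mw.dispatch("oracle", "Is the door trapped?"));
    }
}
```

Frameworks like Spring AI wrap exactly this kind of plumbing (plus schemas, protocol handling, and governance), which is why “AI middleware” is the honest name for the category.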

There is a distinction between “simple AI” and “agentic AI,” but once you poke at it, agentic AI is doing what you assume AI was doing in the first place. But that won’t last. Once “agentic AI” becomes mainstream (and cheap enough), people won’t really be doing “simple AI” anymore. Eventually (two years from now?) we’ll just drop the “agentic” and go back to calling it AI.

Pepper grinder collection and clown, Severance, s2e6.

Focus on individual productivity, let the econ-nerds and rich worry about GDP

Here’s a recent Goldman newsletter (PDF) throwing cold water on AI hype-heat:

We first speak with Daron Acemoglu, Institute Professor at MIT, who's skeptical. He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. And he doesn't take much comfort from history that shows technologies improving and becoming less costly over time, arguing that AI model advances likely won't occur nearly as quickly--or be nearly as impressive--as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are "not a law of nature." So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.

Yes, and…this is why I think individuals will be the ones who benefit from AI usage most.1 Each individual person using even just AI chat apps to get their daily work done.

AI will benefit individuals by reducing the time it takes to do knowledge worker toil, making their work less tedious, and also raising the quality of their work. This means they’ll be able to do their tasks faster, be less bored, and likely get better quality work-product. This gives individuals more time and energy.

You then need to think like a company does: how do you use that extra resource for The Corporation of You?2 You can then choose two strategies:

  1. Up your own productivity: do more work, hoping your employer compensates you more (good luck!), or,

  2. Work less: get the same pay, upping your personal productivity profit margin.
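Strategy two is just arithmetic. A tiny sketch of that “personal productivity profit margin,” with made-up numbers (the 40-to-30-hour figures are hypothetical, not from any survey):

```java
// Hypothetical arithmetic for strategy two: if AI trims a 40-hour week of
// knowledge-worker toil down to 30 hours for the same output and the same
// pay, the "personal productivity profit margin" is the share of paid time
// you've freed up.
public class ProductivityMargin {
    static double margin(double hoursBefore, double hoursAfter) {
        return (hoursBefore - hoursAfter) / hoursBefore;
    }

    public static void main(String[] args) {
        // 40 hours of work now takes 30: a 25% margin to reinvest or keep.
        System.out.printf("margin: %.0f%%%n", margin(40, 30) * 100); // prints "margin: 25%"
    }
}
```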

Either way, people who use AI for their work will see big benefits.

Wastebook

  • “There’s no particular reason 64-year-old alumni should be able go wherever they like. But there’s definitely a different feel.” The dr.

  • “The faux cocaine mirrors are so hard to keep - they’re hard to get, and they’re stolen all the time.” #PalmSpringsLife, used to reflect on “luxury beliefs.”

  • I missed this when I linked to Charles Betz’s simple definition of a platform, but he mentions that he uses the term “application” instead of the Team Topologies term “stream-aligned” team. “I have not seen the term ‘stream-aligned’ get traction in portfolio management,” he says. Checks out for me.

“Business sketch on concrete wall,” Who is Danny, Adobe Express.

Relative to your interests

Conferences

Events I’ll either be speaking at or just attending.

  • VMUG NL, Den Bosch, March 12th, speaking.

  • SREday London, March 27th to 28th, speaking.

  • Monki Gras, London, March 27th to 28th, speaking.

  • CF Day US, Palo Alto, CA, May 14th.

  • NDC Oslo, May 21st to 23rd, speaking.

Discounts: 10% off SREDay London with the code LDN10.

Logoff

See y’all next time.

1

That idea isn’t original to me. It’s probably from Ben Thompson, but I don’t recall.

2

I cover more of how to think like a company and run The Corporation of You in my thriving and surviving in bigco’s pedantry-fest; part 2 and part 9 are especially applicable here.

What AI is good at, or, please don't fuck up my job and ETFs

I’m clearly a big fan of AI and believe it’s helpful in many ways.

I feel comfortable with that because I’ve used it for over two years now and rely on it daily for a wide variety of tasks, both work- and personal-related. That means I know exactly what it’s capable of, what it’s good at, and what it’s not good at. Me and the robot have a good relationship: we know how to work with each other.

From Katerina Kamprani's The Uncomfortable collection.

Generative AI is good at text

Right now, generative AI is only good at working with text. It generates text—if you can reduce audio to text, it excels at that, and if you can convert text to audio, it’s equally proficient.1

Text can take many forms, and generative AI handles them well. As others have noted, if you want to shorten text, it’s almost amazing at that. If you want to summarize text, it’s pretty good at that. And if you need a summary to help decide whether to read the full text, it’s fantastic.

Learning and strategy

If you want to learn and understand a topic, part of that process involves condensing large amounts of text into a shorter, more digestible form—and it’s pretty good at that. All of the consumer agentic things are going out there and searching the web for you to find that text and then summarizing it all. If what you want to learn and understand is well documented on the World Wide Web, it is good at that. If you want to get insights into secret, obscure, poorly documented things - stuff that has little public text - the AI’s Deep Research is going to be shallow bullshit.

Even when it’s good, with learning and understanding, you need a finely tuned bullshit detector. Once you detect the bullshit, you can ask again or go search on your own. But really, you need a bullshit detector in all of life—robot or meat-sack. If you don’t have one, build one quickly. The benefits you get from that will last longer and far outweigh the benefits you’ll get from AI.

This aspect of learning means it’s not so great at company strategy. If you and your competitors are all using the same public text, you’re all going to get the same answer. There will be no competitive advantage. What’s even worse now is that it’s effortless for your competitors to understand your strategy and predict the strategies you’d come up with…if you only base it on public text. You have to figure out how to get your secret text into there. As with all interactions with the robot, you have to bring a lot to the chat window. The quality of what you bring will determine the quality you get from the robot. Garbage in, garbage out. Which is to say: nothing valuable in, nothing valuable out.

Back to pedantry: It’s proving to be good at teaching-by-modeling: it shows you what The Big Report could look like, explains what the fuck those 80 slides your teacher gave you are asking you to do in your essay, and serves as an additional tutor and instructor when you can’t afford to hire one.

From Katerina Kamprani's The Uncomfortable collection.

Writing & creating

The robot is also effective as a co-writer. In other words, you become a co-writer with the robot. It can generate text endlessly, and if you collaborate with it, you’ll get great results. Just as you would with any co-writer (especially a ghostwriter or a co-author whose name appears in smaller print on the book cover), you need to get to know each other, learn how to work together, and figure out the style you want. Claude is great in this regard—it has a simple tool for learning and refining your style. If you haven’t spent time teaching the robot your style, you should do so.

You can reduce videos, podcasts, scripts, even small talk, to text. Recall, AI is good at text, so it will be OK at that.

It’s okay at imagining. I play D&D with it, and it has gotten a lot better at handling mechanics over the past two years, but it still remains rather boring and predictable when it comes to unassisted imagination. If you feed it published D&D adventures, it does okay. But just try having it come up with ten dwarf names—they’ll all be variations on “Iron Shield” or “Rock Breaker” and the like.

It’s really good at writing code. And guess why? Code is text. Is it good at creating multi-system applications and workflows used to, say, coordinate how an entire bank works? Probably not—very few people are even good at that. And then there’s the whole process of getting it into production and keeping it running. If you think the robot can do that now—or ever—¡vaya con Dios! Please report back if you survive.

The AI is bad at perfection

What about tasks like optimizing supply chains? Maybe one day the robot will be good at other B2B tasks, but I suspect that for many years good old fashioned machine learning will keep doing just fine there.

Don’t use AI for tasks where being 100% correct is important. If the consequences of being wrong are dire, you’re going to get fucked. Worse, someone else is going to get fucked.

But, if you’re using the robot for a system that tolerates—or even thrives on—variety (errors), it’s great. “Anti-fragile” systems? I don’t really know what that means, but: sure. Are you brainstorming, whiteboarding, and exploring? Yup, great at that. Using it for therapy? It’s fascinatingly good at that.

You get the idea: if you’re using generative AI for something where you can recover from errors quickly, there is “no right answer,” and the task is text-based, then yes, it is great for that—and you need to start using it now.

From Katerina Kamprani's The Uncomfortable collection.

Thirty days to defuse the time bomb of false expectations

Let’s build up to my concern:

  1. Text is all generative AI is currently good at.

  2. Most people have not used AI for 30 days, let alone 12 months, let alone two-plus years. I’m just guessing here. Surveys tell dramatically different stories. But most surveys show only a small amount of use, and only recently.

  3. So, I don’t trust that most people yet understand what AI is good at—they often imagine it’s capable of far more. You have to use it to know it, and learning by doing is a lot of effort and usually takes longer than your ROI model’s horizon.

That’s “hype,” sure, but it’s more like misunderstood inexperience. It’s classic diffusion of innovation (ask Chatty-G to tell you about that concept; I bet it’ll be pretty good). Sure, that diffusion has been getting faster, but if humans are involved, we’re still talking decades—at least one decade.

My concern here is that once we collectively set expectations beyond reality, the fall is bigger, and the cost of recovery becomes too great. Worse yet, people waste a lot of time chasing AI fantasies. They thought there’d be 100x returns when, in reality, there were only 10% or even 25% returns. You fire employees, take on investment and risk to overhaul your business, and spend time on those AI fantasies instead of pursuing other strategies. And then, when you learn what AI is truly/only good at, you’ve invested everything—only to find that your assumptions, ROI models, and, thus, investment were a fantasy. Plus, once you build it, you now own it forever, no matter how shit it is. Plus, you played a game of chicken with opportunity cost, and opportunity cost won.

So, don’t do that. Don’t bet the farm on something you haven’t used firsthand for at least 30 days, and certainly don’t stake our jobs or our index funds on it.

Wastebook

  • “I was the man of my dreams,” Peter on Peter.

  • “the unexampled,” on Gary Snyder.

  • And, from Gary: “this romantic view of crazy genius is just another reflection of the craziness of our times… I aspire to and admire a sanity from which, as in a climax ecosystem, one has spare energy to go on to even more challenging – which is to say more spiritual and more deeply physical – things”

  • “Mandatory Commute policy,” synonym for RTO.

  • “autogolpe,” self-harm.

  • “If you change it, you own it,” if only.

  • “monomaniacal dork squads,” power-up.

  • “a steaming pile of, um, DOGEshit,” deep analysis.

  • “Our Son of a Bitch,” various.

  • “You can’t sell a sandwich with secret mayo,” Noah’s quest continues.

  • There’s a first time to forget everything.

  • “rhapsode,” https://en.wikipedia.org/wiki/Rhapsode.

  • “The Deeply Spiced Meatballs That Call Back to Haiti.”

  • “Features of the future,” a CF Day topic.

  • When submitting a conference talk and given the option to select “audience level,” I’ve started always selecting “intermediate.” I don’t know why, or what that means, but it’s some kind of fun.

  • “LLM aka Large Legal Mess,” don’t use the robot for lawyer-shit.

  • “inspo,” AI hair.

  • “If I’d wanted chatGPT to answer, I’d have asked chatGPT” @byronic.bsky.social.

  • "My leather jacket tailor never flinched, so I'm not sure what's wrong with all the Finance Bros."

  • Deep is the new plus.

Relative to your interests

Predictably, a bunch of AI stuff of late.

  • The reality of long-term software maintenance - “In the long run maintenance is a majority of the work for any given feature, and responsibility for maintenance defaults to the project maintainers.” Related:

  • Top EDI Processes You Should Automate With API - Tech never dies. Helpful consequence: take care of it before it takes care of you.

  • How’s that open source licensing coming along? - ”The takeaway is that forks from relicensing tend to have more organizational diversity than the original projects. In addition, projects that lean on a community of contributors run the risk of that community going elsewhere when relicensing occurs.”

  • Key insights on analytical AI for streamlined enterprise operations - ”The big issue, whether it’s generative or analytical AI, has always been how to we get to production deployments. It’s easy to do a proof of concept, a pilot or a little experiment — but putting something into production means you have to train the people who will be using this system. You have to integrate it with your existing technology architecture; you have to change the business process into which it fits. It’s getting better, I think, with analytical AI.” // It’s always been about day two.

  • Why I think AI take-off is relatively slow - My summary: humans resisting change is a bottleneck; also, humans not knowing what to do with AI; current economic models can’t model an AI-driven paradigm shift, so we can’t measure the change; in general, technology adoption takes decades, 20 for the internet, 40 for electricity. // AI is a technology and is prey to the usual barriers and bottlenecks to mass-adoption.

  • GenAI Possibilities Become Reality When Leaders Tackle The Hard Work First - Like any other tool, people have to learn how to use it: “Whatever communication, enablement, or change management efforts you think you’ll need, plan on tripling them.” // Also, garbage in, garbage out: “GenAI can’t deliver real business value if a foundation is broken. Too many B2B organizations are trying to layer genAI on top of scattered, siloed, and outdated technologies, data, and processes. As a result, they can’t connect the right insights, automations stall, and teams are unsure of how to apply genAI beyond basic tasks.”

  • A.I. Is Changing How Silicon Valley Builds Start-Ups - ”Before this A.I. boom, start-ups generally burned $1 million to get to $1 million in revenue, Mr. Jain said. Now getting to $1 million in revenue costs one-fifth as much and could eventually drop to one-tenth, according to an analysis of 200 start-ups conducted by Afore.” // Smoke 'em if you got 'em…

  • The AI Experience - What’s Next For Technology Marketing - Back up the truck and dump the enterprise marketing slop: “Did you consider that soon you may be marketing to GenAI agents of your customers?” // And: “While the term “Account Based Marketing” or ABM is still floating around, less marketers are focused on continuing to enable personalized marketing for a subset of the customer and prospect base.” // Instead of having to craft the personalized content, you have the robot do it. Then the marketing skills you need go back to the mechanics of running campaigns. // Yes, and, this is an example of my “bad things are bad” principle. If the slop you get is bad, it will be bad. But it can also be good, in which case, it will be good.

  • How Ikea approaches AI governance - ”Around 30,000 employees have access to an AI copilot, and the retailer is exploring tailoring AI assistants to add more value. Ikea is also exploring AI-powered supply chain optimization opportunities, such as minimizing delivery times and enhancing loading sequences for shipments to minimize costs. AI in CX mostly targets personalization. // ‘“I’m not just talking about generative AI,” Marzoni said. “There’s some old, good machine learning models that are still absolutely delivering a lot of value, if not the majority of the value to date.”’

  • U.S. Economy Being Powered by the Richest 10% of Americans - One estimate: in the US, “spending by the top 10% alone accounted for almost one-third of gross domestic product." // Never mind the, like, morals?…doesn’t seem very anti-fragile. // “Those consumers now account for 49.7% of all spending, a record in data going back to 1989, according to an analysis by Moody’s Analytics. Three decades ago, they accounted for about 36%.”

  • Why it’s nice to compete against a large, profitable company - Because they can’t lower prices on their core products lest Wall Street freak-the-fuck out.

Logoff

See y’all next time! Gotta go run a few ideas by my pal, the robot.

1

It can kind of convert text to images, but only if you like the same people over and over or are an anime fan. If you like a perfectly chiseled chin, AI generated images are for you. You can put a lot of work into getting your text in shape to produce something unique that looks real. In this respect, it gives a tool to people who can’t do graphics (like me!) which is actually pretty amazing. But it can only go so far. Just try to create a “realistic” looking person instead of a perfect fashion model. It is near impossible. Of course, this is because it’s not trained on enough images yet, I guess.

Using AI for HR - management and workers

Enterprises pouring money into GenAI and CEOs treating AI agents like cheap labor - yet only 25% see ROI right now. Vibes: “Europe’s long holiday from history is over.” Also: IBM does RTO, predictions about DOGE layoffs, the term “platform” remains a favorite excuse for overcomplicated tech, and “autonomous killer robots.”

AI comes for HR

What to make of using AI to automate HR processes? Melody Brue and Patrick Moorhead look at Oracle’s work there:

The agents are designed to support several key facets of the employee experience, including hiring, onboarding, career planning, performance reviews and the management of compensation and benefits.

Yes, and…

(1) If it’s bullshit work (“busy work”), eliminate it, don’t automate it. The thinking here promises to automate bullshit work like manually formatting performance reviews, copy/pasting boilerplate onboarding checklists, clicking through timecard approvals, writing job descriptions from scratch, and filling out endless HR forms. Yes, and…these are tasks that should probably just be eliminated or drastically simplified rather than lovingly preserved in AI amber. I’ve written job descriptions several times and there is something wrong-feeling about the process and the results. The same goes for performance reviews, from both sides of the review. If you feel like you’re doing bullshit work and you get excited about automating it with AI, why not eliminate it instead? Or, you know, fix it.

(2) How could workers use similar AI stuff to maximize their advantage versus management? In a heavily bureaucratic HR system, reports and analysis are important: you need to prove that you deserve a promotion, more money, whatever. You’re often weighed against relative metrics: how much people get paid in a region, how you performed versus other people on a bell curve (or ranking), etc. Putting together those reports is tedious and your managers may not put in the effort. Have the AI do it for you. You could also have it look at those wordy job descriptions to extract what your role is responsible for doing. And when you need to come up with annual MBO/KPI/OKR/whatever the next TLA is for “goals,” have the AI look at the goals trickle-down and come up with yours. Then have it track what you should be doing. AI could be useful for negotiating salary too: how much should you even be asking for, what is your BATNA? What is their BATNA?

(3) Could you run the robot on, say, the last 5 years of reviews and then compare it to what the human evaluators did? Is the robot better (less bias, giving feedback that improves worker performance, finding low performers, etc.), or is it worse (wrong analysis leads to a less performant workforce)? As a worker, though you might not actually have access to full reports, you could try to find out what the real performance measures are. Load in job descriptions, get an overview of what highly rewarded people did, and then see what attributes and actions get rewarded. Never mind what the official metrics are; target those.

There’s a general theory for all AI use here as well: if what your AI produces is something that can just be consumed and used by another AI, it’s probably bullshit work that you can reduce to a quick email or can be eliminated entirely.

***

For him, of course, it was a business opportunity. He was part of what I would come to see as a savvy minority of people and companies capitalizing on AI fatigue.

Meanwhile, this is a fantastic piece on the state of HR tech from the worker’s perspective. There’s plenty of AI talk in it. It’s also fun to see what tech conferences and marketing looks like to (I presume) outside eyes. We are such dorks and, often, tasteless:

While the word people was plastered everywhere as both a noun and an adjective, the workers of the exhibit hall's collective imagination were not real, three-dimensional people. They were shadows without substantive interests or worries beyond the success of their companies. That was the only way these products could be pitched as win-wins. But, come on. We were in Las Vegas - everyone here knew the real money comes from making sure enough people are losing.

Fresh Podcasts

There are new episodes of two of my podcasts, listen to ‘em!

Classroom History, 1938. Philip Evergood.

Relative to your interests

  • AI Agents: Why Workflows Are the LLM Use Case to Watch - The agentic app revolution isn’t a transformation story. It’s a modernization story; a chance to solve small problems with the team you already have.

  • AI Agents and the CEOs - “At the risk of saying the quiet part out loud, the way CEOs are talking about agents sure sounds like how they talk about employees–only cheaper!” // “Companies are dedicating significant spend to AI–approximately 5% of the revenue of large enterprises (revenues over $500 million) according to one survey by Boston Consulting Group, and yet only 25% claim they are seeing value from their AI investment.”

  • To avoid being replaced by LLMs, do what they can’t.

  • Learning from examples: AI assistance can enhance rather than hinder skill development - Could be that AI use makes you better. // “Decades before the advent of generative AI, the legendary UCLA baseball coach John Wooden declared that the four laws of learning are explanation, demonstration, imitation, and repetition (31). Few learners have access to the best human teachers, coaches, and mentors, but generative AI now makes it possible to learn from personalized, just-in-time demonstrations tailored to any domain. In doing so, AI has the potential not only to boost productivity but also to democratize opportunities to build human capital at scale.” // Also, some prompts used to evaluate writing quality. The one rating “easy responding” is interesting: how easy is it to (know how to) respond? Maybe good for CTAs.

  • Gartner Survey Reveals Over a Quarter of Marketing Organizations Have Limited or No Adoption of GenAI for Marketing Campaigns - ”Nearly half (47%) report a large benefit from adopting GenAI for evaluation and reporting in their campaigns.” // The reverse number is more interesting: 77% of surveyed marketing people say they’re using generative AI for marketing stuff. Related:

  • OpenAI reaches 400M weekly active users, doubles enterprise customer base - “The ChatGPT developer currently has 2 million paying enterprise users, twice as many as in September.” With “400 million active weekly users, a 33% increase from December.” And: “The New York Times reported in September that the company was expecting to end 2024 with a $5 billion loss on sales of $3.7 billion.”

  • 2025 is the breakthrough year for Generative Enterprise — and partnering with a capable services partner is critical - “[S]pending on GenAI is rising (HFS data suggests enterprise investment is rising by more than 25% on average into 2025), we start from a low base. We estimate enterprise spending on GenAI in 2024 accounted for less than 1% of global IT services spending. This is just one illustration of how far we still have to go.” // Plus, a whole bunch of commentary in enterprise AI.

  • Data is very valuable, just don’t ask leaders to measure it - AI ROI is difficult: “in a survey of chief data and analytics (D&A) officers, only 22 percent had defined, tracked, and communicated business impact metrics for the bulk of their data and analytics use cases… It is difficult, though: 30 percent of respondents say their top challenge is the inability to measure data, analytics and AI impact on business outcomes”

  • A Simple Definition Of “Platform” - “a product that supports the creation and/or delivery of other products.”

  • IBM co-location program described as worker attrition plan - From the RTO-as-not-so-stealthy-layoff files.

  • YouTube (GOOGL) Plans Lower-Priced, Ad-Free Version of Paid Video Tier.

  • On European Defence, Energy and Growth - Imagining big changes in European priorities: changing policy to get more energy, more emphasis on militaries.

  • No Rules Are Implicit Rules - The European view on enlightened American management policy: “Greg, I hate to bring it to you, but working for ten fucking hours a day is not the normal hour. I don’t care if you live in America or not. The section continues with other “grand” examples of managers taking “up to” 14 days a year off to show their employees they should to so too. Let’s assume the best here: 14 workdays are almost three weeks. A year. The statutory minimum for full-time employees working a forty-hour week is 20 (thus 4 weeks) in Belgium. Oops.”

  • Rage Against the Machine - Perceptive: “They’re going to try two or three things they think will solve everything, which will be thrown out in court. I assume the first thing they’ll do is some kind of hiring freeze, and then, after three months, they’ll realize agencies have started to figure out ways to get around it. And then they’ll try to stop that, and they won’t be able to do that. Then they’ll try to make people come to work five days a week, and that’s going to be difficult because a lot of these agencies don’t have offices for these people anymore. I think it’s going to be one thing after another, and maybe after four years the number of employees will be down 2 percent—maybe.” // The layoff playbook DOGE is working comes from the tech world, and it sort of works there. But that’s because tech companies can die, be acquired, or be reborn. In a tech company, you rarely starve the beast (or amputate parts of it) and have it survive. Do we want the same outcomes with government?

Read by the robot

I don’t read everything, sometimes I have the robot read it for me. Beware that the robot sometimes makes things up. Summaries are for entertainment purposes only.

Kelsey Hightower declined to join the AI gold rush, advocating instead for a glossary of tech jargon to remind everyone that AI is not new, just rebranded.

Platform engineering teetered between breakthrough and bust, with some heralding it as the savior of DevOps while others braced for its descent into Gartner’s “trough of disillusionment.” Several years ago (February, 2023) Sam Newman insisted that calling something a “platform” is often just an excuse to overcomplicate things, suggesting “Delivery Enablement” as a rebrand.

Meanwhile, IBM Consulting offered enterprises a guided tour of “Agentic AI,” a term that likely needs its own entry in Hightower’s proposed glossary.

Wastebook

  • “effortful,” AI study.

  • “Topological qubits,” MSFT.

  • “Deliberately they don’t give a shit,” Emily, Political Gabfest, February 20th, 2025.

  • And: “chaos entrepreneur,” John.

  • “Europe’s long holiday from history is over,” John Naughton.

  • “This [Trump] administration cares about weapon systems and business systems and not ‘technologies.’ We’re not going to be investing in ‘artificial intelligence’ because I don’t know what that means. We’re going to invest in autonomous killer robots.” Fund the outcomes, not the tech.

From Dead Motels, USA.

Conferences

Events I’ll either be speaking at or just attending.

  • VMUG NL, Den Bosch, March 12th, speaking.

  • SREday London, March 27th to 28th, speaking.

  • Monki Gras, London, March 27th to 28th, speaking.

  • CF Day US, Palo Alto, CA, May 14th.

  • NDC Oslo, May 21st to 23rd, speaking.

Discounts: 10% off SREDay London with the code LDN10.

Logoff

Nothing to report today.

@cote@hachyderm.io, @cote@cote.io, @cote, https://proven.lol/a60da7, @cote@social.lol