Not much today.
“decision-makers can remain irrational longer than you can remain solvent.” Career advice in 2025.
“a concept as antiquated as intent.” NYTimes.
“a felicitous remove.” Spicy.
“Significant improvement but still issues.” Oxide and Friends.
A Conversation Algorithm I Cribbed From Clinical Psychologists - What does “open ended question” even mean? Here are some examples, and a conversational framework built around them. This is probably also good for sales and marketing.
The Product Engineer - “If you build Enterprise Software, you need product managers.” // If consumer, you need developers who use the product.
The good times in tech are over - “If you were an engineer who loved working on your company’s open-source libraries, it’s probably sensible to confront the fact that the company never really cared about it that much.”
Refactoring to understand and “vibe coding” - “Code is not the most valuable artifact. Your understanding of the codebase is.”
I’ve been re-learning Spring with Dan Vega’s Spring Boot crash course. It’s great, and encouraging. So much has changed since 2005, but the thrill of learning and doing little iterations is fun. After this, there’s his Spring AI course. I hope to get skilled enough to make some D&D AI tools/MCP servers, whatever.
Once I can get over the (to this older Java coder) Rails-like feel of Java and Spring (where there’s so much going on in the background that it gets confusing to know what to do - it’s so simple, I have no idea where to start), it’s pretty quick and interesting.
Also, since the effects and outcomes of Spring (along with Tanzu Platform/Cloud Foundry) are what I talk about at work all the time, it’ll be good to have more first hand experience than the “reporting” I do on it.
I’m the guest on this week’s When Shit Hits the Fan podcast. You can hear about two of my fan shattings. Here’s the podcast in Apple, Spotify, and Overcast.
Slides are an oral culture, not a written culture. Imagine civilization without writing: that’s what organizations relying on slides instead of documents are like.
There are workarounds, and they tend to prove the comparison. Often, you will see a slide with a lot of words, and the presenter will apologize that there’s too much text. That’s because the slides should have been a document.
Slides are not good at text, they’re good at visuals. Slides are good for enhancing spoken communication: showing examples, visualizing data (charts), even giving a written outline of the topics covered, major conclusions, and suggested actions. McKinsey titles are great for all of that. The right slides will make your talk better, more memorable, more “actionable.”
Slides are a terrible way to share, archive, and “document” your decisions and reasoning. For example, slides are terrible at strategy. Have you ever asked for the plans, the strategy, an overview of what a product does, and been sent slides? They’re usually not good. You’re usually left with many questions, especially when it comes to why and how. That’s because these types of things should be documents.
There’s an old maxim of keynote slide design: for your audience to understand the slides, you should need to be there giving the talk. The slides should not be able to stand alone. A document can stand alone, a document can be re-read, sent to people who weren’t in the room.
You can also collaborate on a document. You can suggest changes, you can ask questions in comments, you can update it. You can track changes on a document. A document is, somewhat ironically, more of a living document than slides. In contrast, have you ever tried to track changes and collaborate with slides? It’s a mess.
I use slides all the time for presentations, both public and internal ones. For internal collaborations and work, however, I start with a document and try to “force” the people I’m working with to use the document as well. Eventually, in most of the corporate cultures I’ve worked with, I have to switch to slides pretty early on. But, at least the document is there to serve as the source of truth.
Most corporations are illiterate. From what I can tell, people avoid reading in large organizations. People don’t make the time to read, it’s faster to flip through slides. It’s faster to edit slides.
Guess what else: all this generative AI stuff is really good at text. If you think it’s hard to write, and that most people won’t be able to do it, even the simplest AI can help. You can even take a recording of your presentation of slides and ask the AI to convert it to a document.
This is an opportunity for management. If it seems like people aren’t “getting it,” that ideas aren’t trickling down from management, that you keep getting the same questions over and over…maybe you should switch mediums from slides to text. Try something different. Slides are a poor way to run a company, and switching to documents is an easy, no-cost way to boost productivity.
Data Looks Better Naked - Good advice on formatting charts and data tables. The examples are incremental, so you can choose to go all the way, or just apply some of the design changes.
Canadians’ Health Data Needs Safeguarding Against Our Increasingly Hostile Neighbor - Maybe Trump drives a lot of sovereign cloud.
Something Is Rotten in the State of Cupertino - When your lover lets you down. // Also, how to detect vaporware. With AI, building up your bullshit detector is very important. A lot of it, at best, is just hopes and dreams. And a lot of it is just vaporware.
Most Externalities are Solved with Technology, Not Coordination - “Economics should emphasize the importance of technology as a solution to externality problems and focus less on social coordination.” // Does this apply to IT, where we often say “technology is easy, culture is hard”?
Ironies of Agentic AI - “[R]ather than removing human dependencies, automation often shifts and amplifies them.”
Sensitive Information Disclosure in LLMs: Privacy and Compliance in Generative AI - Sensitive information in, sensitive information out. Also, make sure to have access control to your models.
What Are Agentic Workflows? Patterns, Use Cases, Examples, and More
Agentic AI Is The Next Competitive Frontier - “CEOs must architect the autonomous enterprise.” // Changing culture, org structure, and how work is done day-to-day. That’s a big ask. It rarely works. // It’s a much better strategy to just figure out how to use AI to improve how things are currently done.
Prompts for management communication - here’s commentary on it.
Monster, Maiden, Madonna, Medusa - Avoid using legs to lure dice-nerds.
The Hypercuriosity Theory of ADHD - “Hypercuriosity is related to ADHD in several ways: individuals with ADHD often demonstrate heightened novelty-seeking behaviors, show intense focus on topics of interest, and experience stronger urges to explore new information and experiences. Beyond all this experimental data, this connection is supported by qualitative research suggesting that ADHDers relate their curiosity to their tendencies toward both impulsivity and distraction.”
Revenge font - The tagger who did this must be so happy. And, way to turn a frown upside down!
The Secret History of the Manicule - “The Little Hand that’s Everywhere.”
“So what is a critic for? This is the second quote that’s in my notebook. It’s in every notebook because I always write it on the first page: ‘Find a subject you care about and which you in your heart feel others should care about. It is this genuine caring, and not your games with language, which will be the most compelling and seductive element in your style.’ Kurt Vonnegut.” Found by that guy Russell.
“it’s hard to explain to the French that Americans are much more afraid of each other than they are of Russia. Conflict in the United States is usually an internal convulsion, a civil matter.” boom boom paris.
“sitting in the buzzfeed offices just clickin on this off tweetdeck.” Good times.
“The best time to establish alternative, non-algorithmic networks of communication & affinity was five years ago. The second best time is today!” Robin Sloan.
And: “pageants of minor chaos.”
“I think that [parent’s] resilience. Or, their resilience at work is an incredibly important quality to transfer [to their children] and this might be one way to do that. Ooo! Looks like I had a thought!” On bringing your kids to work, having them see you work, etc. - John Dickerson on the Political Gabfest bonus episode, March 13th, 2025.
A lot of lunch and learn sessions, weekly meetings, and other collaborative activities focus on building and maintaining a network of knowledge rather than just learning the specific topic covered in the meeting. These activities involve sharing information and establishing connections with others to enhance your understanding and access to a wider range of knowledge.
“We’ve entered the ‘tamale layaway stage’ of late Capitalism.” Chris.
“toyetic.” Here.
Events I’ll either be speaking at or just attending.
SREday London, March 27th to 28th, speaking. Monki Gras, London, March 27th to 28th, speaking. CF Day US, Palo Alto, CA, May 14th, speaking. NDC Oslo, May 21st to 23rd, speaking. SREDay Cologne, June 12th, speaking.
Discounts: 10% off SREDay London with the code LDN10.
There’s a huge, great line-up of topics and people at Cloud Foundry Day this year, May 14th in Palo Alto, hosted by my work, Tanzu. Come check it out - Cloud Foundry is the most proven, mature platform as a service I know of, used for over a decade in the biggest, mission-critical organizations, and beloved by developers and operators.
“We live in the age of ‘fuck around and find out’ - of iteration and experimentation.” 8Ball wisdom.
“15 years ago, the internet was an escape from the real world. Now, the real world is an escape from the internet.” Noah Smith, via Ibid.
“everything is optimized for engagement instead of meaning,” the Curmudgeon Era of life.
If the likely outcome is the same, you might as well do a good job, so long as that’s fun too. If it’s not fun, do a bad job, if any at all.
Never let a slide template tell you how to live your life.
“The key to sales relationships, in my experience, is accountability. Take responsibility for mistakes. If you commit to something, do it. If you aren’t sure you can do something, don’t commit to it. This is harder than it looks.” James Dillard.
“the death of the author and the return of pleasure” promises of Barthes-lore.
“I blew off everything on Thursday and sat in a local place reading Ionesco with a glass of wine and an excellent coffee made with beans roasted in Naples.” Warren Ellis.
I always want to know the “therefore”: What it means, what effect it has, how to think different, what to do next. Knowing the diagnosis is helpful, knowing what to do next is more important.
“déformation professionnelle.” Found in Russell Davies’ neck of the woods.
And: “The Nobel laureate Alexis Carrel has observed that ‘[e]very specialist, owing to a well-known professional bias, believes that he understands the entire human being, while in reality he only grasps a tiny part of him.’”
“that’s what might occlude our catsup.”
“tweakments”
“To read - and announce oneself as having read - literature in translation is to be tasteful and intelligent, a latter-day cosmopolitan in an age of blighted provincialism.” Bros of literary criticism.
“obscurantism” Ibid.
“I don’t think that knowing anything helps. I don’t think there is anything to know.” -Rick Rubin.
Anything she says is correct. She just don’t talk much.
“To understand the San Joaquin Valley, or any productive ag region, as “rural” misses the point. This is a vast, open-air factory floor, totally wired up, carefully monitored. I say that with appreciation bordering on awe.” Robin Sloan.
Google’s Sergey Brin Asks Workers to Spend More Time In the Office - The 60 hour work week. // '“A number of folks work less than 60 hours and a small number put in the bare minimum to get by,” he wrote. “This last group is not only unproductive but also can be highly demoralizing to everyone else.” // These rich guys just can’t read the room. They seem to have failed to surround themselves with humane people and entered some bizarre land of productivity. Theory: they turned this hobby into their job and can’t imagine people who don’t do the same. They don’t even realize they did it. Cynical theory: they’re just crude capitalists. // Meanwhile:
The Work from Home Divide: Insights from Six US Surveys - A study mapping WFH potential vs. actual adoption but sidestepping the bigger question of productivity.
Build vs. Buy: Compare Your Kubernetes Platform Options - Don’t build your own platform. And especially don’t build your own Kubernetes-based platform. It’s not going to turn out well.
Developers spend most of their time not coding - Developers spending something like 50% to 60% of their time on stuff that should be automated and built into the process. // And, now, we’re about to pile a bunch of new stuff on them:
AI Adoption: Why Businesses Struggle to Move from Development to Production - Day Two AI Operations: “This interchangeability means the real differentiators lie elsewhere: how you integrate your company data, design safety and guardrails, and adapt your development processes.”
Model Context Protocol Bridges LLMs to the Apps They Need - The idea of having the AI sort out which tools to use is cool. What’s also cool is that you use natural language to tell the AI what the tools do and when to use them; it then sorts them out. Also cool: Spring AI is the official Java implementation.
Affording your AI chatbot friends - ‘An “AI Agent” is just a model with access to tools like “escalate ticket”, “run SQL query”, or “draw an image”. The rest of the hype comes from fitting it into existing workloads like ETL nonsense with MuleSoft or something banal like that. This is really what all the hype is about: hooking AI models up to existing infrastructure so that they can do “useful things”.’
The Big Shrink in LLMs - AI is shifting towards smaller, more efficient language models to address sustainability and data quality issues.
Build a Campaign-Unique Faction List - D&D stuff: “faction list turn our world’s lore to specific things the characters interact with during the game. Faction lists turns fuzzy concepts into a practical list we can use in the next game we run.”
The Hobby’s Cult of Personality - Recap of the problematic stuff in the history of RPGs.
Tyler Cowen, the man who wants to know everything - Good if you like Cowen’s work - or, rather, the style of his work, his “production function.”
Even Our Sex Scandals Are Sexless - Modern “sex scandals” often lack explicit sexual content, reflecting changing cultural and technological norms.
Good news, everyone. We can start doing “useful things” with agentic AI.
The hype cycle for agentic AI has been one of the fastest ever. And I think we’re almost through it without very many people actually having written agentic AI code.
Let’s say Soaked McConaughey (Super Bowl, Feb 9th, 2025) was the peak. And the beginning was sometime around October 2024, when an NVIDIA blog asked “What is Agentic AI?”
Now, each thing I read about agentic AI, from a technical person, is all like “this is just hooking up genAI to your database.”
An “AI Agent” is just a model with access to tools like “escalate ticket”, “run SQL query”, or “draw an image”. The rest of the hype comes from fitting it into existing workloads like ETL nonsense with MuleSoft or something banal like that. This is really what all the hype is about: hooking AI models up to existing infrastructure so that they can do “useful things.”
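To make that “model with access to tools” idea concrete, here’s a minimal sketch. The tool names echo the quote above; the keyword-matching “model” is a stand-in for a real LLM’s tool-calling API, and everything else (ticket IDs, queries) is made up for illustration:

```python
# Sketch of an "AI agent": a model picks a tool from natural-language
# descriptions, and the agent runs it. The fake_model_choose function is a
# stand-in for a real LLM's tool/function-calling step.

def escalate_ticket(ticket_id: str) -> str:
    return f"ticket {ticket_id} escalated"

def run_sql_query(query: str) -> str:
    return f"ran: {query}"

# Tools are described in plain English; a real model reads these
# descriptions to decide which tool fits the request.
TOOLS = {
    "escalate_ticket": (escalate_ticket, "Escalate a support ticket by id."),
    "run_sql_query": (run_sql_query, "Run a read-only SQL query."),
}

def fake_model_choose(request: str) -> tuple[str, str]:
    """Stand-in for the LLM: map a request to (tool_name, argument)."""
    if "ticket" in request:
        return "escalate_ticket", "TCKT-42"
    return "run_sql_query", "SELECT count(*) FROM orders"

def agent(request: str) -> str:
    tool_name, arg = fake_model_choose(request)
    tool_fn, _description = TOOLS[tool_name]
    return tool_fn(arg)

print(agent("please escalate my ticket"))   # ticket TCKT-42 escalated
print(agent("how many orders this week?"))  # ran: SELECT count(*) FROM orders
```

That’s the whole trick: the “agentic” part is just the model choosing which existing function to call.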
And MCP has made Richard Seroter’s list every week, sometimes daily, in the past three or so weeks, so we’re hopefully climbing out of the trough toward the Plateau of Productivity. Hopefully this means we’ll finally start doing “useful things” at scale in enterprises.
That said, the odd thing is I don’t think very many people have even written and deployed agentic AI apps yet.
(I put together a small timeline here, minus the Super Bowl part.)
Events I’ll either be speaking at or just attending.
VMUG NL, Den Bosch, March 12th, speaking. SREday London, March 27th to 28th, speaking. Monki Gras, London, March 27th to 28th, speaking. CF Day US, Palo Alto, CA, May 14th. NDC Oslo, May 21st to 23rd, speaking. SREDay Cologne, June 12th, speaking.
Discounts: 10% off SREDay London with the code LDN10.
I should re-learn programming. It’d make my marketing work a lot easier and truer. A programmer who can market is a good place to be.
Here’s a new article on enterprise AI from me and a co-worker. As with most maturity models, it’s 2/3 prescriptive and 1/3 “here’s some ideas that might help.” A bit of map and territory.
With AI, we’re seeing a familiar anti-pattern, but this time flavor-injected with generative AI: the board charters a tiger team, gives them budget. The team hires consultants because their developers don’t know Python. Consultants identify AI use cases, write some Python apps, and many of the pilots show good business value.
AI-related spending is set to reach $371.6 billion by 2027 as businesses shift from experimentation to targeted AI investments. IDC, February, 2025.
Then the board says, “OK, now do that 300 more times.” But the team hasn’t built a sustainable strategy. They haven’t used that time and money to add new organizational capabilities. Your developers still aren’t skilled in AI. So now adding AI to those hundreds of other applications isn’t easy. And that’s also when they discover day two AI operations: you have to run this stuff! Now you’ve just got piles of Python without real in-house capability. Day two AI operations isn’t cheap, it’s often underestimated, if planned for at all.
Developers already spend 60% of their time on non-application work.1 Add AI infrastructure, and, I’d guess, that’ll climb another 10–15%.
This is exactly what happened with Kubernetes. Based on the Google brand and excellent devrel’ing and keynotes, companies assumed it was easy - until developers were drowning in YAML. Perhaps we’ll hear an AI-centric quote like this in a few years:
Well, I don't know how many of you have built Kubernetes-based apps. But one of the key pieces of feedback that we get is that it's powerful. But it can be a little inscrutable for folks who haven't grown up with a distributed systems background. The initial experience, that 'wall of yaml,' as we like to say, when you configure your first application can be a little bit daunting. And, I'm sorry about that. We never really intended folks to interact directly with that subsystem. It's more or less developed a life of its own over time. Craig McLuckie, SpringOne 2021
AI is heading the same way. You’ll spend 12 months building your own platform. It’ll barely work - if at all - and cost ~$2 million in staffing. Developers won’t use it like you expected. There’s no ROI, it delivers a third of what you promised, and you’re not really sure how to run and manage it long-term. And, yet, it still costs a lot of money.
CIOs have found generative AI goals harder to attain than first anticipated. Technology leaders are blaming data challenges, technical debt and poor strategy or execution for slowing down progress. Nearly half of CIOs say AI has not yet met ROI expectations, according to Gartner research. Reporting from the Gartner Symposium, October, 2024.
That original team, now AI experts, will leave, either to work at an AI tech company or to help do it over again at a new enterprise. You’ll be stuck with an unsupported, incomprehensible mess. No one else understands it. You’ve locked yourself into an opaque platform, wasted years, and landed back where you started - with a sprawl of shadow platforms and tech debt. But, don’t worry: the next CIO will launch a new initiative to fix it.
With AI, you’ll have two more problems.
First, AI evolves monthly, sometimes weekly. New models, new techniques (“agentic AI”). You need to keep up, or you’ll be stuck on outdated tech, losing competitive advantage. The best way to handle this? Just like any other service you provide (e.g., databases). Centralize those AI services, then you can upgrade and certify once, enterprise-wide, and give developers a single, enterprise-y source for AI models. The alternative is to find all the AI model usage across your hundreds of applications and enterprise-y them up one by one. You’ll quickly fall behind - just look at the versions of software you’re currently running. I bet many of them are three, five versions behind…especially whatever Kubernetes stacks you built on your own.
Second, if you use the same models as everyone else, you’ll get the same answers. Asking the AI “How do I optimize my pipe-fitting supply chain?” will yield the same response as your competitors get. The real advantage is adding your own data. That’s hard work, needing serious budget and time. And once you figure it out, you’ll need to scale it across teams, which means centralizing AI access, just as we saw above with AI model usage and updating.
Enterprise AI needs a platform. And what we learned over the past decade is: building your own platform is a terrible idea. This is especially true in private cloud, which I reckon is where about 50% of the apps in the world run, probably much more in large organizations.
Instead, improve your existing stack. Don’t rip and replace it.
If you’re like most enterprises, you have a lot of Java developers using Spring. Use Spring AI as your AI framework to connect to models. The Spring AI developers have been working quickly over the past year to add in the baseline APIs you’d need and adapt to new innovations. For example, the agentic AI framework Model Context Protocol came out in November, and Spring AI is now the official Java implementation for MCP.
And if you’re like a lot of other larger organizations, you already have a strong platform in place, Cloud Foundry. You can add a full-on model service gateway to host or broker AI models. You can host those models yourself if you’re freaked out about public cloud AI, use public cloud AI, or, more than likely, do both! Most importantly, you’ll be able to keep up with model innovations, providing them to all your developers as quickly as your enterprise red tape will allow.
Your platform team can manage AI services like any other - security, compliance, cost tracking. Since it serves OpenAI-compatible endpoints, you can even still use those Python apps, but now your operations team can secure and manage them instead of whatever consultant-built Python stack you got stuck with.
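Because the gateway speaks the OpenAI wire format, “switching” an app to your centrally managed service is mostly a matter of pointing it at a different base URL. A sketch of that idea - the internal hostname and model names are hypothetical, and for simplicity this just builds the request rather than sending it:

```python
# Sketch: the same OpenAI-style chat-completions payload works against any
# OpenAI-compatible endpoint; only the base URL (and credentials) change.
# "ai-gateway.internal.example.com" is a made-up in-house gateway host.
import json

def chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Build the HTTP request an OpenAI-compatible client would send."""
    return {
        "url": f"{base_url}/v1/chat/completions",
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Public provider vs. your platform's gateway: same shape, different URL.
public = chat_request("https://api.openai.com", "gpt-4o-mini", "hello")
internal = chat_request("https://ai-gateway.internal.example.com",
                        "approved-model-v3", "hello")
```

Since the payload shape doesn’t change, those consultant-built Python apps keep working; the platform team just controls what sits behind the URL.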
So, the plan: (1) get developers using Spring AI to start embedding AI into their apps; (2) work on integrating your own secret/proprietary data into the AI app workflow; (3) when they’re ready, add AI services to your platform so production deployment and management are seamless. You’ll have day two AI operations covered. And because it’s centralized in a platform, you can now roll it out to those 300 more apps the board asked for.
Then, you can execute a proven playbook: developers should be shipping AI-powered features every two weeks, experimenting and refining based on real usage. I outlined this approach in Monolithic Transformation, with more executive-level insights in Changing Mindset and The Business Bottleneck.
You know, as always - try to avoid repeating the anti-patterns of the past.
Bathos, n. – The fine art of inspiring deep emotion, only to trip over a misplaced joke, a clumsy metaphor, or an unfortunate mention of flatulence.
Alacrity, n. – The suspiciously eager willingness to do something, often confused with competence but more often a sign that the asker failed to mention how much work it actually involves. (Found in The Economist, explained by the robot, previous as well.)
Screwbernetes.
“Forward Deployed Engineers.” SEs with commit access.
“She glared at me from across the road and shooed me off because I couldn’t stop laughing.” Sting.
“Ms Adichie’s viral TED talk on feminism received an even more impressive accolade: Beyoncé sampled its lines.” The Economist World in Brief.
“bantam self-confidence.” Tina Brown.
Measuring Productivity: All Models Are Wrong, But Some Are Useful - “Measure Speed, Ease, and Quality Different facets of productivity warrant the use of different metrics. We typically think of productivity as a balance among speed, ease, and quality.”
Buy an expensive “AI Gateway”? Thanks, we’ll just build and open-source it, says Bloomberg - Should we be calling these “gateways”? Probably better than “broker.” // Also crazy to think we’ll just be recycling the same old patterns. Maybe it’s because they work! But you have to strip away all the marketing-talk from it. And: building your own commodity infrastructure is a bad idea. It’s great for the team that does it and then quits to found a startup or go work at a higher-paying tech company, so, I guess: more like capitalists being accidentally generous to employees.
Four Marketing Principles That Redefine Markets from Klaviyo’s CMO - ‘“Creating fear never works, because in the immediate, you can probably prompt people to take action because they’re like, ‘Oh my! I must do something,’ but it leaves a negative perception in their mind.” “You don’t become a beloved brand over a period of time.”’
Skype is dead. What happened? - Ode to Skype, and complaining about Microsoft having no imagination to evolve it. It’d be helpful to read a detailed analysis of how and why.
What I learned from one month without social media - “Literally, it makes no difference what I abstain from, I will always find a way to procrastinate.”
I’m Tired of Pretending Tech is Making the World Better - Try to avoid tools that require you to acquire new tools to use the first tools.
America Voted For Chaos. The Markets Are Feeling the Punch. - Dumb disruption. Also: masters of the universe hubris.
One “Bad Apple” Correct Interpretation - On the use and mis-use of “bad apples.”
Why Skyscrapers Became Glass Boxes - “Ultimately, it was economics (or at least perceived economics) that drove developers to embrace this style. Glass curtain wall buildings were cheaper to erect than their masonry predecessors, and they allowed developers to squeeze more rentable space from the same building footprint. Ornate, detailed exteriors were increasingly seen as something tenants didn’t particularly care about, making it harder to justify spending money on them. And once this style had taken hold, rational risk aversion encouraged developers and builders to keep using it.”
Yoon Suin and Orientalism - an example of a “woke” look at a D&D setting.
The Lights of My Life - Accent lighting and lamps used by one photographer.
The difficulty level of children - “It also runs the other direction. If you have two kids, and one kid is away (with a grandparent), it feels like having zero kids.”
I don’t read everything, sometimes I have the robot read it for me. Beware that the robot sometimes makes things up. Summaries are for entertainment purposes only.
AI firms raced to shrink large models into cheaper, faster alternatives, ensuring even small companies can now afford to hallucinate at scale. IBM expanded its AI strategy, embedding intelligence into products and unleashing 160,000 consultants to generate AI-powered assistants. AI, once set to replace lawyers, now helps them work faster—though not quite enough to stop them from citing fake cases in court.
The fragrance industry, once built on the power of scent, now thrives on TikTok, where influencers sell perfumes their followers will never have to smell - arguably the best possible scenario for everyone involved.
Is neoliberalism truly in decline? Despite its failures - rising inequality, social fragmentation - it remains the dominant economic framework with no clear replacement. Meanwhile: it doesn't matter if you saw a rabbit, a vase, or an old woman because a study debunked the idea that optical illusions reveal personality traits.
Events I’ll either be speaking at or just attending.
VMUG NL, Den Bosch, March 12th, speaking. SREday London, March 27th to 28th, speaking. Monki Gras, London, March 27th to 28th, speaking. CF Day US, Palo Alto, CA, May 14th. NDC Oslo, May 21st to 23rd, speaking.
Discounts: 10% off SREDay London with the code LDN10.
I’ve been running the above, uh, screed in my mind for a few weeks now. Perhaps I’ll use it as the basis for my VMUG Netherlands talk next week. It’s not exactly the topic of the talk, but good talks, as delivered, often are.
60% comes from this IDC survey, where I added up security, implementing CI/CD, app and inf. monitoring and management, deploying code - the rest are definitely things I'd want developers doing. Full citation and background: IDC #US53204725 (February 2025) Source: IDC, Modern Software Development Survey 2024 (N=613) and Developer View 2023 (N=2500).
But that said, he admitted smoking played a huge part in his life. “I don’t regret it. It was important to me. I wish what every addict wishes for: that what we love is good for us.”
He went on: “A big important part of my life was smoking. I loved the smell of tobacco, the taste of tobacco. I loved lighting cigarettes. It was part of being a painter and a filmmaker for me.”
“Reams of founderspeak floated up into the warm breeze.” Tyler Cowen profile.
And: “‘I’m not very interested in the meaning of life, but I’m very interested in collecting information on what other people think is the meaning of life.’ And it’s not entirely a joke.”
“I feel that writing about the topic will make me stupider.” Also Tyler.
Gruber on Skype’s EoL: “I don’t think it’s an exaggeration to say that if not for Skype, podcasting would’ve been set back several years.”
“My toaster has developed self-awareness, which is concerning because its only purpose is to burn things.” AI IoT FUD.
“Vibe working is using AI to turn fuzzy thoughts into structured outputs through iteration.” Azeem Azhar.
“Stove touching.” Move fast and touch things.
“a beautifully crafted digital fortress.” Ian.
“far left government computer office.” DOGE-slang.
“Who’s gonna help us? …nobody’s coming.” Noah.
macOS Tips & Tricks - lots of keyboard shortcuts. See also their iOS tips and macOS command line tips.
Agentic definition from Azeem Azhar and Nathan Warren - Good simple AI vs. agentic AI framing: “Some view AI as a tool – a system that passively performs functions, while others see it as an agent – a system that actively pursues objectives, makes independent decisions and may develop instrumental goals such as self-preservation.”
ServiceNow’s newest AI agents bring intelligent automation to telecommunications firms - ServiceNow’s AI agents analyze network data to diagnose and resolve issues, predict disruptions, and provide real-time explanations for unusual usage patterns, improving customer service and reducing complaints.
How AI ‘Reasoning’ Models Will Change Companies and the Economy - Lots of good thinking here, not least of which is a very delightful writing style. There are about five quips in there that are fun. // Also notice that the benefits are accruing to the individual here; the enterprises have yet to figure it out.
AI’s productivity paradox: how it might unfold more slowly than we think - The case that productivity effects of AI in the macro economy will be slow. Micro-economy (individuals) still looks good.
State of Java Survey Confirms Majority of Enterprise Apps Are Built on Java Performance, Superior Java Support - ”Typically, when the topic of developing AI or ML functionality arises, people think of Python. Perhaps surprisingly, the survey found that Java was used more often than either Python or other languages. In fact, 50% of organizations use Java to code AI functionality.”
I’m not contributing to the AI slop problem. I’ve been posting some of the explainer queries I’ve had with ChatGPT (see the AskTheRobot category). I both like them, like easy content, and am curious if they draw incoming traffic. // It’s getting warm and sunny again in the Netherlands. I hope that means I’ll get my ass on the bike and get back to fiets-flâneuring.
My first law of enterprise AI: if you end up having two robots talk with each other to complete a task, that task was bullshit in the first place, and you should probably eliminate it rather than automate it.
For example, if AI is used in both sides of B2B procurement (enterprise software sales), then much of the process is probably bullshit in the first place. There is so much weird and ancient in procurement, on both sides, that it’s clearly a poorly done process and part of enterprise IT culture.
Nobody likes this, and we all know there’s a high degree of waste to it:
The average software sourcing process involves 28 stakeholders and takes six months. That’s six months of manual research, vendor meetings, demos, internal debates, and ultimately, a decision that still may not be fully informed.
Several years ago, Luke Kanies outlined his frustration and experience with that culture. When the buyers and the users are different people, and deal size goes up, beware: you run the risk of sailing in a sea of bullshit. Those selling (vendors) can bullshit a lot, but those buying can bullshit a lot too. A perfect example of using my first law of enterprise AI to remove waste.
Related: this is a great industry analyst overview of the enterprise IT category “agentic AI,” from Jason Andersen:
Conceptually, “AI Development Framework” is a type of middleware technology. To be more specific, it's a layer of shared services that provides a set of APIs and integrations for practitioners (not just developers) to build AI applications, particularly agents. The benefit of this is twofold. First, developers don't have to sacrifice too much flexibility while gaining the potential to work more efficiently. Second, the enterprise also gets a more uniform set of standards to drive better governance and sustainability.
It’s middleware, platform, and operations stuff: all the usual develop, operate, optimize. What’s slightly missing is day two operations, but we’ll all re-discover that soon enough when people try to ship version 2.0 of their (agentic) AI apps.
My advice: from now on, when you hear the phrase “agentic AI” just think “AI middleware used to add AI features to apps.”
Anything else is a bit much. The phrase is fine, just keep a realistic and pragmatic definition for enterprise AI in your head.
There is a distinction between “simple AI” and “agentic AI,” but once you poke at it, agentic AI is doing what you assume AI was doing in the first place. But that won’t last. Once “agentic AI” becomes mainstream (and cheap enough), people won’t really be doing “simple AI” anymore. Eventually (two years from now?) we’ll just drop the “agentic” and go back to calling it AI.
Here’s a recent Goldman newsletter (PDF) throwing cold water on AI hype-heat:
We first speak with Daron Acemoglu, Institute Professor at MIT, who's skeptical. He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. And he doesn't take much comfort from history that shows technologies improving and becoming less costly over time, arguing that AI model advances likely won't occur nearly as quickly--or be nearly as impressive--as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are "not a law of nature." So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.
Yes, and…this is why I think individuals will be the ones who benefit from AI usage most.1 Each individual person using even just AI chat apps to get their daily work done.
AI will benefit individuals by reducing the time it takes to do knowledge worker toil, making their work less tedious, and also raising the quality of their work. This means they’ll be able to do their tasks faster, be less bored, and likely get better quality work-product. This gives individuals more time and energy.
You then need to think like a company does: how do you use that extra resource for The Corporation of You?2 You can then choose between two strategies:
Up your productivity - do more work, hoping your employer compensates you more (good luck!), or,
Work less - get the same pay, upping your personal productivity profit margin.
Either way, people who use AI for their work will see big benefits.
“There’s no particular reason 64-year-old alumni should be able to go wherever they like. But there’s definitely a different feel.” The dr.
“The faux cocaine mirrors are so hard to keep - they’re hard to get, and they’re stolen all the time.” #PalmSpringsLife, used to reflect on “luxury beliefs.”
I missed this when I linked to Charles Betz’s simple definition of a platform, but he mentions that he uses the term “application” instead of the Team Topologies term “stream-aligned” team. “I have not seen the term ‘stream-aligned’ get traction in portfolio management,” he says. Checks out for me.
Lack of AI-Ready Data Puts AI Projects at Risk - If you’ve let your data lakes turn into data swamps your AI projects are going to go poorly. // “Gartner predicts that through 2026, organizations will abandon 60% of AI projects unsupported by AI-ready data.”
AI Essentials for Tech Executives - Good tables translating AI-tech-speak to business outcomes.
Paul Millerd on AI and writing - This is the response a very pragmatic writer had to AI.
5 Questions to Help Your Team Make Better Decisions - (1) What Would Happen if We Did Nothing? (2) What Could Make Us Regret This Decision? (3) What Alternatives Did We Overlook? (4) How Will We Know If This Was the Right Decision? (5) Is This Decision Reversible?
Drive Scale And Speed With The Platform Org Model - 59% of respondents have made using one platform, instead of a whole bunch of different platforms, a priority. // Enterprises want the benefits of centralized, standardized IT stacks. Always. DIY platforms and shadow platforms (a sprawl of accidental platforms) are often a bad idea. Your platform needs aren’t special; your app needs are. If you focus on platforms, you’ll steal mojo and budget away from those app needs.
A theory of Elons - If you can get away with breaking regulations and laws, you can gain competitive advantage over those who don’t.
The big idea: what do we really mean by free speech? - What “freedom of speech” means to the asshats: “what they actually want is freedom from the consequences of broadcasting their views.”
Vision and Distortion in Cézanne’s ‘Still Life with Plaster Cupid’ - Good example of art criticism, “how to see,” all that.
Old Media Finally Wakes Up from a Coma - “Hey, guys”-style getting more mainstream. // Also, long form podcasts.
Dutch people concerned with U.S., Russia, Ukraine developments; More support EU army
Bill Skarsgård transformed beyond recognition for Robert Eggers’ Nosferatu, a film in which the vampire is still the scariest thing on screen.
Issues for His Prose Style - Picking the right noun as a deep cut mechanic, to show authenticity, and build up a mythos of yourself, and for yourself. // Hemingway’s letters.
Events I’ll either be speaking at or just attending.
VMUG NL, Den Bosch, March 12th, speaking. SREday London, March 27th to 28th, speaking. Monki Gras, London, March 27th to 28th, speaking. CF Day US, Palo Alto, CA, May 14th. NDC Oslo, May 21st to 23rd, speaking.
Discounts: 10% off SREday London with the code LDN10.
See y’all next time.
That idea isn’t original to me. It’s probably from Ben Thompson, but I don’t recall.
I cover more of how to think like a company to run The Corporation of You in my thriving and surviving in bigco’s pedantry-fest; part 2 and part 9 are especially applicable here.
I’m clearly a big fan of AI and believe it’s helpful in many ways.
I feel comfortable with that because I’ve used it for over two years now and rely on it daily for a wide variety of tasks, both work- and personal-related. That means I know exactly what it’s capable of, what it’s good at, and what it’s not good at. Me and the robot have a good relationship: we know how to work with each other.
Right now, generative AI is only good at working with text. It generates text—if you can reduce audio to text, it excels at that, and if you can convert text to audio, it’s equally proficient.1
Text can take many forms, and generative AI handles them well. As others have noted, if you want to shorten text, it’s almost amazing at that. If you want to summarize text, it’s pretty good at that. And if you need a summary to help decide whether to read the full text, it’s fantastic.
If you want to learn and understand a topic, part of that process involves condensing large amounts of text into a shorter, more digestible form—and it’s pretty good at that. All of the consumer agentic things are going out there and searching the web for you to find that text and then summarizing it all. If what you want to learn and understand is well documented on the World Wide Web, it is good at that. If you want to get insights into secret, obscure, poorly documented things - stuff that has little public text - the AI’s Deep Research is going to be shallow bullshit.
Even when it’s good, with learning and understanding, you need a finely tuned bullshit detector. Once you detect the bullshit, you can ask again or go search on your own. But really, you need a bullshit detector in all of life—robot or meat-sack. If you don’t have one, build one quickly. The benefits you get from that will last longer and far outweigh the benefits you’ll get from AI.
This aspect of learning means it’s not so great at company strategy. If you and your competitors are all using the same public text, you’re all going to get the same answer. There will be no competitive advantage. What’s even worse now is that it’s effortless for your competitors to understand your strategy and predict the ones you’d come up with…if you only based it on public text. You have to figure out how to get your secret text into there. With all interactions with the robot, you have to bring a lot to the chat window. The quality of what you bring will determine the quality you get from the robot. Garbage in, garbage out. Which is to say: nothing valuable in, nothing valuable out.
Back to pedantry: It’s proving to be good at teaching-by-modeling: it shows you what The Big Report could look like, explains what the fuck those 80 slides your teacher gave you are asking you to do in your essay, and serves as an additional tutor and instructor when you can’t afford to hire one.
The robot is also effective as a co-writer. In other words, you become a co-writer with the robot. It can generate text endlessly, and if you collaborate with it, you’ll get great results. Just as you would with any co-writer (especially a ghostwriter or a co-author whose name appears in smaller print on the book cover), you need to get to know each other, learn how to work together, and figure out the style you want. Claude is great in this regard—it has a simple tool for learning and refining your style. If you haven’t spent time teaching the robot your style, you should do so.
You can reduce videos, podcasts, scripts, even small talk, to text. Recall, AI is good at text, so it will be OK at that.
It’s okay at imagining. I play D&D with it, and it has gotten a lot better at handling mechanics over the past two years, but it still remains rather boring and predictable when it comes to unassisted imagination. If you feed it published D&D adventures, it does okay. But just try having it come up with ten dwarf names—they’ll all be variations on “Iron Shield” or “Rock Breaker” and the like.
It’s really good at writing code. And guess why? Code is text. Is it good at creating multi-system applications and workflows used to, say, coordinate how an entire bank works? Probably not—very few people are even good at that. And then there’s the whole process of getting it into production and keeping it running. If you think the robot can do that now—or ever—¡vaya con Dios! Please report back if you survive.
What about tasks like optimizing supply chains? Maybe one day the robot will be good at other B2B tasks, but I suspect that for many years good old-fashioned machine learning will keep doing just fine there.
Don’t use AI for tasks where being 100% correct is important. If the consequences of being wrong are dire, you’re going to get fucked. Worse, someone else is going to get fucked.
But, if you’re using the robot for a system that tolerates—or even thrives on—variety (errors), it’s great. “Anti-fragile” systems? I don’t really know what that means, but: sure. Are you brainstorming, whiteboarding, and exploring? Yup, great at that. Using it for therapy? It’s fascinatingly good at that.
You get the idea: if you’re using generative AI for something where you can recover from errors quickly, there is “no right answer,” and the task is text-based, then yes, it is great for that—and you need to start using it now.
Let’s build up to my concern:
Text is all generative AI is currently good at.
Most people have not used AI for 30 days, let alone 12 months, let alone two-plus years. I’m just guessing here. Surveys tell dramatically different stories. But most surveys show only a small amount of use, and just recently.
So, I don’t trust that most people yet understand what AI is good at—they often imagine it’s capable of far more. You have to use it to know it, and learning by doing is a lot of effort and usually takes longer than your ROI model’s horizon.
That’s “hype,” sure, but it’s more like misunderstood inexperience. It’s classic diffusion of innovation (ask Chatty-G to tell you about that concept; I bet it’ll be pretty good). Sure, that diffusion has been getting faster, but if humans are involved, we’re still talking decades—at least one decade.
My concern here is that once we collectively set expectations beyond reality, the fall is bigger, and the recovery becomes that much harder. Worse yet, people waste a lot of time chasing AI fantasies. They thought there’d be 100x returns when, in reality, there were only 10% or even 25% returns. You fire employees, take on investment and risk to overhaul your business, and spend time on those AI fantasies instead of pursuing other strategies. And then, when you learn what AI is truly/only good at, you’ve invested everything—only to find that your assumptions, ROI models, and, thus, investment were a fantasy. Plus, once you build it, you now own it forever, no matter how shit it is. Plus, you played a game of chicken with opportunity cost, and opportunity cost won.
So, don’t do that. Don’t bet the farm on something you haven’t used firsthand for at least 30 days, and certainly don’t stake our jobs or our index funds on it.
“I was the man of my dreams," Peter on Peter.
“the unexampled,” on Gary Snyder.
And, from Gary: “this romantic view of crazy genius is just another reflection of the craziness of our times… I aspire to and admire a sanity from which, as in a climax ecosystem, one has spare energy to go on to even more challenging – which is to say more spiritual and more deeply physical – things”
“Mandatory Commute policy,” synonym for RTO.
“autogolpe,” self-harm.
“If you change it, you own it,” if only.
”monomaniacal dork squads,” power-up.
“a steaming pile of, um, DOGEshit,” deep analysis.
“Our Son of a Bitch,” various.
“You can’t sell a sandwich with secret mayo,” Noah’s quest continues.
There’s a first time to forget everything.
“[rhapsode](https://en.wikipedia.org/wiki/Rhapsode).”
“Features of the future,” a CF Day topic.
When submitting a conference talk and given the option to select “audience level,” I’ve started always selecting “intermediate.” I don’t know why, or what that means, but it’s some kind of fun.
“LLM aka Large Legal Mess,” don’t use the robot for lawyer-shit.
“inspo,” AI hair.
“If I’d wanted chatGPT to answer, I’d have asked chatGPT” @byronic.bsky.social.
"My leather jacket tailor never flinched, so I'm not sure what's wrong with all the Finance Bros."
Predictably, a bunch of AI stuff of late.
The reality of long-term software maintenance - “In the long run maintenance is a majority of the work for any given feature, and responsibility for maintenance defaults to the project maintainers.” Related:
Top EDI Processes You Should Automate With API - Tech never dies. Helpful consequence: take care of it before it takes care of you.
How’s that open source licensing coming along? - ”The takeaway is that forks from relicensing tend to have more organizational diversity than the original projects. In addition, projects that lean on a community of contributors run the risk of that community going elsewhere when relicensing occurs.”
Key insights on analytical AI for streamlined enterprise operations - ”The big issue, whether it’s generative or analytical AI, has always been how do we get to production deployments. It’s easy to do a proof of concept, a pilot or a little experiment — but putting something into production means you have to train the people who will be using this system. You have to integrate it with your existing technology architecture; you have to change the business process into which it fits. It’s getting better, I think, with analytical AI.” // It’s always been about day two.
Why I think AI take-off is relatively slow - My summary: humans resisting change is a bottleneck; also, humans not knowing what to do with AI; current economic models can’t model an AI-driven paradigm shift, so we can’t measure the change; in general, technology adoption takes decades, 20 for the internet, 40 for electricity. // AI is a technology and is prey to the usual barriers and bottlenecks to mass-adoption.
GenAI Possibilities Become Reality When Leaders Tackle The Hard Work First - Like any other tool, people have to learn how to use it: “Whatever communication, enablement, or change management efforts you think you’ll need, plan on tripling them.” // Also, garbage in, garbage out: “GenAI can’t deliver real business value if a foundation is broken. Too many B2B organizations are trying to layer genAI on top of scattered, siloed, and outdated technologies, data, and processes. As a result, they can’t connect the right insights, automations stall, and teams are unsure of how to apply genAI beyond basic tasks.”
A.I. Is Changing How Silicon Valley Builds Start-Ups - ”Before this A.I. boom, start-ups generally burned $1 million to get to $1 million in revenue, Mr. Jain said. Now getting to $1 million in revenue costs one-fifth as much and could eventually drop to one-tenth, according to an analysis of 200 start-ups conducted by Afore.” // Smoke 'em if you got 'em…
The AI Experience - What’s Next For Technology Marketing - Back up the truck and dump the enterprise marketing slop: “Did you consider that soon you may be marketing to GenAI agents of your customers?” // And: “While the term “Account Based Marketing” or ABM is still floating around, less marketers are focused on continuing to enable personalized marketing for a subset of the customer and prospect base.” // Instead of having to craft the personalized content, you have the robot do it. Then the marketing skills you need go back to the mechanics of running campaigns. // Yes, and, this is an example of my “bad things are bad” principle. If the slop you get is bad, it will be bad. But it can also be good, in which case, it will be good.
How Ikea approaches AI governance - ”Around 30,000 employees have access to an AI copilot, and the retailer is exploring tailoring AI assistants to add more value. Ikea is also exploring AI-powered supply chain optimization opportunities, such as minimizing delivery times and enhancing loading sequences for shipments to minimize costs. AI in CX mostly targets personalization. // ‘“I’m not just talking about generative AI,” Marzoni said. “There’s some old, good machine learning models that are still absolutely delivering a lot of value, if not the majority of the value to date.”’
U.S. Economy Being Powered by the Richest 10% of Americans - One estimate: in the US, “spending by the top 10% alone accounted for almost one-third of gross domestic product." // Never mind the, like, morals?…doesn’t seem very anti-fragile. // “Those consumers now account for 49.7% of all spending, a record in data going back to 1989, according to an analysis by Moody’s Analytics. Three decades ago, they accounted for about 36%.”
Why it’s nice to compete against a large, profitable company - Because they can’t lower prices on their core products lest Wall Street freak-the-fuck out.
See y’all next time! Gotta go run a few ideas by my pal, the robot.
It can kind of convert text to images, but only if you like the same people over and over or are an anime fan. If you like a perfectly chiseled chin, AI generated images are for you. You can put a lot of work into getting your text in shape to produce something unique that looks real. In this respect, it gives a tool to people who can’t do graphics (like me!) which is actually pretty amazing. But it can only go so far. Just try to create a “realistic” looking person instead of a perfect fashion model. It is near impossible. Of course, this is because it’s not trained on enough images yet, I guess.
Enterprises pouring money into GenAI and CEOs treating AI agents like cheap labor - yet only 25% see ROI right now. Vibes: “Europe’s long holiday from history is over.” Also: IBM does RTO, predictions about DOGE layoffs, the term “platform” remains a favorite excuse for overcomplicated tech, and “autonomous killer robots.”
What to make of using AI to automate HR processes? Melody Brue and Patrick Moorhead look at Oracle’s work there:
The agents are designed to support several key facets of the employee experience, including hiring, onboarding, career planning, performance reviews and the management of compensation and benefits.
Yes, and…
(1) If it’s bullshit work (“busy work”), eliminate it, don’t automate it. The thinking here promises to automate bullshit work like manually formatting performance reviews, copy/pasting boilerplate onboarding checklists, clicking through timecard approvals, writing job descriptions from scratch, and filling out endless HR forms. Yes, and…aren’t these tasks that should probably just be eliminated or drastically simplified rather than lovingly preserved in AI amber? I’ve written job descriptions several times and there is something wrong-feeling about the process and the results. The same with performance reviews, from both sides of the review. If you feel like you’re doing bullshit work and you get excited about automating it with AI, why not eliminate it instead? Or, you know, fix it.
(2) How could workers use similar AI stuff to maximize their advantage versus management? In a heavily bureaucratic HR system, reports and analysis are important: you need to prove that you deserve a promotion, more money, whatever. You’re often weighed against relative metrics: how much do people get paid in a region, how did you perform versus other people on a bell curve (or ranking), etc. Putting together those reports is tedious and your managers may not put in the effort. Have the AI do it for you. You could also look at those wordy job descriptions to extract what your role is responsible for doing. And when you need to come up with annual MBO/KPI/OKR/whatever the next TLA is for “goals,” have the AI look at the goals trickle-down and come up with yours. Then have it track what you should be doing. AI could be useful for negotiating salary too: how much should you even be asking for, what is your BATNA? What is their BATNA?
(3) Could you run the robot on, say, the last 5 years of reviews and then compare it to what the human evaluators did? Is the robot better (less bias, giving feedback that improves worker performance, finding low performers, etc.), or is it worse (wrong analysis leading to a less performant workforce)? As a worker, though you might not actually have access to full reports, you could try to find out what the real performance measures are. Load in job descriptions, give an overview of what highly rewarded people did, and then see what attributes and actions get rewarded. Never mind what the official metrics are; target those.
There’s a general theory for all AI use here as well: if what your AI produces is something that can just be consumed and used by another AI, it’s probably bullshit work that you can reduce to a quick email or can be eliminated entirely.
***
For him, of course, it was a business opportunity. He was part of what I would come to see as a savvy minority of people and companies capitalizing on AI fatigue.
Meanwhile, this is a fantastic piece on the state of HR tech from the worker’s perspective. There’s plenty of AI talk in it. It’s also fun to see what tech conferences and marketing looks like to (I presume) outside eyes. We are such dorks and, often, tasteless:
While the word people was plastered everywhere as both a noun and an adjective, the workers of the exhibit hall's collective imagination were not real, three-dimensional people. They were shadows without substantive interests or worries beyond the success of their companies. That was the only way these products could be pitched as win-wins. But, come on. We were in Las Vegas - everyone here knew the real money comes from making sure enough people are losing.
Fresh Podcasts
There are new episodes of two of my podcasts, listen to ‘em!
Software Defined Interviews #94: Adding more condiments to the 7 layer networking burrito, with Marino Wijay - Why do we keep adding new layers and frameworks instead of just fixing the ones we have? They also talk about the challenges of platform engineering, the importance of empathy in tech, the difficulties of integrating multiple layers in tech stacks, the essential role of effective communication and prioritization, and EmpathyOps.
Software Defined Talk #507: Battery of Potential - This week, we discuss how banks beat PayPal with Zelle, what the Wiz survey says about AI usage, and whether you can really “disagree and commit.” Plus, are multitools actually useful?
AI Agents: Why Workflows Are the LLM Use Case to Watch - The agentic app revolution isn’t a transformation story. It’s a modernization story; a chance to solve small problems with the team you already have.
AI Agents and the CEOs - “At the risk of saying the quiet part out loud, the way CEOs are talking about agents sure sounds like how they talk about employees–only cheaper!” // “Companies are dedicating significant spend to AI–approximately 5% of the revenue of large enterprises (revenues over $500 million) according to one survey by Boston Consulting Group, and yet only 25% claim they are seeing value from their AI investment.”
Learning from examples: AI assistance can enhance rather than hinder skill development - Could be that AI use makes you better. // “Decades before the advent of generative AI, the legendary UCLA baseball coach John Wooden declared that the four laws of learning are explanation, demonstration, imitation, and repetition (31). Few learners have access to the best human teachers, coaches, and mentors, but generative AI now makes it possible to learn from personalized, just-in-time demonstrations tailored to any domain. In doing so, AI has the potential not only to boost productivity but also to democratize opportunities to build human capital at scale.” // Also, some prompts used to evaluate writing quality. The one rating “easy responding” is interesting: how easy is it to (know how to) respond? Maybe good for CTAs.
Gartner Survey Reveals Over a Quarter of Marketing Organizations Have Limited or No Adoption of GenAI for Marketing Campaigns - ”Nearly half (47%) report a large benefit from adopting GenAI for evaluation and reporting in their campaigns.” // The reverse number is more interesting: 77% of surveyed marketing people say they’re using generative AI for marketing stuff. Related:
OpenAI reaches 400M weekly active users, doubles enterprise customer base - “The ChatGPT developer currently has 2 million paying enterprise users, twice as many as in September.” With “400 million active weekly users, a 33% increase from December.” And: “The New York Times reported in September that the company was expecting to end 2024 with a $5 billion loss on sales of $3.7 billion.”
2025 is the breakthrough year for Generative Enterprise — and partnering with a capable services partner is critical - “[S]pending on GenAI is rising (HFS data suggests enterprise investment is rising by more than 25% on average into 2025), we start from a low base. We estimate enterprise spending on GenAI in 2024 accounted for less than 1% of global IT services spending. This is just one illustration of how far we still have to go.” // Plus, a whole bunch of commentary in enterprise AI.
Data is very valuable, just don’t ask leaders to measure it - AI ROI is difficult: “in a survey of chief data and analytics (D&A) officers, only 22 percent had defined, tracked, and communicated business impact metrics for the bulk of their data and analytics use cases… It is difficult, though: 30 percent of respondents say their top challenge is the inability to measure data, analytics and AI impact on business outcomes”
A Simple Definition Of “Platform” - “a product that supports the creation and/or delivery of other products.”
IBM co-location program described as worker attrition plan - From the RTO-as-not-so-stealthy-layoff files.
YouTube (GOOGL) Plans Lower-Priced, Ad-Free Version of Paid Video Tier.
On European Defence, Energy and Growth - Imagining big changes in European priorities: changing policy to get more energy, more emphasis on militaries.
No Rules Are Implicit Rules - The European view on enlightened American management policy: “Greg, I hate to bring it to you, but working for ten fucking hours a day is not the normal hour. I don’t care if you live in America or not. The section continues with other ‘grand’ examples of managers taking ‘up to’ 14 days a year off to show their employees they should do so too. Let’s assume the best here: 14 workdays are almost three weeks. A year. The statutory minimum for full-time employees working a forty-hour week is 20 (thus 4 weeks) in Belgium. Oops.”
Rage Against the Machine - Perceptive: “They’re going to try two or three things they think will solve everything, which will be thrown out in court. I assume the first thing they’ll do is some kind of hiring freeze, and then, after three months, they’ll realize agencies have started to figure out ways to get around it. And then they’ll try to stop that, and they won’t be able to do that. Then they’ll try to make people come to work five days a week, and that’s going to be difficult because a lot of these agencies don’t have offices for these people anymore. I think it’s going to be one thing after another, and maybe after four years the number of employees will be down 2 percent—maybe.” // The layoff playbook DOGE is working comes from the tech world, and it sort of works there. But that’s because tech companies can die, be acquired, or be reborn. In a tech company, you rarely starve the beast (or amputate parts of it) and have it survive. Do we want the same outcomes with government?
I don’t read everything, sometimes I have the robot read it for me. Beware that the robot sometimes makes things up. Summaries are for entertainment purposes only.
Kelsey Hightower declined to join the AI gold rush, advocating instead for a glossary of tech jargon to remind everyone that AI is not new, just rebranded.
Platform engineering teetered between breakthrough and bust, with some heralding it as the savior of DevOps while others braced for its descent into Gartner’s “trough of disillusionment.” Several years ago (February, 2023) Sam Newman insisted that calling something a “platform” is often just an excuse to overcomplicate things, suggesting “Delivery Enablement” as a rebrand.
Meanwhile, IBM Consulting offered enterprises a guided tour of “Agentic AI,” a term that likely needs its own entry in Hightower’s proposed glossary.
“effortful,” AI study.
“Topological qubits,” MSFT.
“Deliberately they don’t give a shit,” Emily, Political Gabfest, February 20th, 2025.
And: “chaos entrepreneur,” John.
“Europe’s long holiday from history is over,” John Naughton.
"This [Trump] administration cares about weapon systems and business systems and not ‘technologies. We're not going to be investing in ‘artificial intelligence’ because I don’t know what that means. We're going to invest in autonomous killer robots." Fund the outcomes, not the tech.
Events I’ll either be speaking at or just attending.
VMUG NL, Den Bosch, March 12th, speaking. SREday London, March 27th to 28th, speaking. Monki Gras, London, March 27th to 28th, speaking. CF Day US, Palo Alto, CA, May 14th. NDC Oslo, May 21st to 23rd, speaking.
Discounts: 10% off SREDay London with the code LDN10.
Nothing to report today.
In this episode: AI eschatology, assology, and a deep, intellectual commitment to hating mayonnaise. Tariff trouble, security panic, and NVIDIA shrugging off DeepSeek. Young voters shift rightward, no one agrees on ‘medium roast,’ and Hollywood still relies on glue to critique its own youth obsession.
“immanentize the AI eschaton,” Charlie Stross.
“The ass is a very strong symbol of how our body is not neutral in the public space. How our body is constantly scrutinized, has been shaped to please the man’s eyes, has been seen as a body part that was objectified, that was detached from the person who was simply bearing it.” Assology. See related boobology below.
“This is the number one YouTube channel about hating mayonnaise.” Noah.
“LLMs are good at the things that computers are bad at, and bad at the things that computers are good at,” Slides Benedict.
“If I live, I must fully accept the game; I must have the most beautiful life. I don’t know why I am here, but since I remain here, I will construct a beautiful edifice.” A young Simone de Beauvoir.
What is AI Middleware, and Why You Need It to Safely Deliver AI Applications - AI middleware is the glue that holds your AI-driven apps together, making sure models, data, and existing systems actually talk to each other instead of breaking everything. It saves developers from reinventing the wheel, adds security layers, and keeps AI projects from becoming yet another unmaintainable mess.
Software development is… - “Software development is holistic.”
Finding Energy to Learn & Build When Burnt Out - "How can [management] support you and your team, shield you and the team, and provide clear direction when they’re barely holding it together?"
Moving on from 18F. - when you no longer agree with your employer’s culture.
How Liberty Mutual was able to jump into generative AI thanks to a clear data strategy and FinOps
The CrowdStrike outage crashed 8.5 million devices, wiped out $5.4 billion, and forced IT leaders to admit that 84% had no real incident response plan. In response, Adaptavist found that 99.5% of companies are now hiring security personnel, diversifying vendors, and possibly sleeping in their data centers for luck.
Trump proposed a 25% tariff on imported semiconductors to force chipmakers back to the U.S., despite most advanced chips being made overseas. Corporate America may be souring on his policies, as erratic tariffs threaten supply chains. Financial analysts determined that economic indicators are surprisingly bad at predicting democratic collapse. Maybe we should blame video games again? The Atlantic reported that young voters have shifted rightward due to pandemic distrust, economic stagnation, and too much time online. Hopefully, those tariffs won’t make their damn video game consoles and vaporwave-colored lights more expensive.
Related: NVIDIA’s share price is already within 1% of its pre-DeepSeek drop, showing that while the market can be extremely efficient, it’s not always efficient at thinking things through.
A UC Davis research center revealed that no one agrees on what a “medium roast” is, despite years of artisanal posturing. Kieran Healy warned that your iPhone knows more about your life than your best friend, your partner, or your mom—and it’s probably judging you for it. And all the fitness tracking in the world still wasn’t enough for the perfect boobs required for The Substance, a satire on Hollywood’s obsession with youth: “Unfortunately, there is no magic boob potion,” Margaret Qualley said, “so we had to glue those on.”
Off to get a haircut today. I hate getting haircuts, that’s why my hair and beard get wild.
Meanwhile, we’re one away from 900 subscribers. Tell you what, if you’re one of the first several new people to sign up, I’ll send you a bundle of my books.
Lots of links and stuff this episode: AI isn’t a coworker, it’s just automation wrapped in hype. Tech moves fast, but nothing lasts—except bad takes, questionable business models, and the creeping realization that managers just want fewer humans to manage. Meanwhile, we live like kings and don’t even notice.
Good episode of Software Defined Talk this week, especially the opening moment of absurdity where we, yet again, try to solve Europe’s ice problem. Take a listen, or watch the unedited recording.
“Layered, polished mix: As expected, Dre’s meticulous production work ensures that every instrument sits perfectly in the mix, making for a cold, calculated vibe.” Respect. (The robot comments in “Big Egos.”)
"razvedka boyem –reconnaissance through battle: You push and you see what happens, and then you change your position."
Long skim content.
“Everything affects everything else,” Julia Evans. // I mean, I think she just cracked the code to, like, reality there, you know, everything.
“[Sorry, ugly people with good ideas.]” // Alternative funding source.
“A Cup of Coffee in Hell,” not cold, but helpful.
“If it moves, it’s probably alive,” logic.
“Cannabis, crypto or half of North Dakota?” Buttonwood.
“Sen. Mitch McConnell (R-KY), a polio survivor, was the lone Republican to vote against him.” Oophff. When you’ve got that guy voting against you, you know your head is full of bologna.
Making smaller containerized apps - Smaller, more secure, and faster to deploy–because nobody wants a 500MB container just to run “Hello, World.”
The “AI Agent As Coworker” Narrative Is Nonsense - Against the agentic hype: “You have to admire Benioff’s chutzpah in defining digital labor as some brand-new massive market opportunity. But to many, it just sounds like automation. Like every other phase of automation since the beginning of the industrial age, this phase is also about doing more with fewer human resources.” // Meanwhile, the counter case from Seth Marrs.
New estimates have ChatGPT using 10x less power than previously thought - ”it would actually be more energy efficient for you to have an LLM turn off your furnace than to walk across the house to manually turn the dial.”
The danger of relying on OpenAI’s Deep Research - Some valid critiques of Deep Research. Though, none of them really amount to “it’s not good.” To sum up: it can’t do complex research, let alone come up with original ideas or cover obscure topics. It can only tell you what the Internet knows. This is actually not fully accurate: you can also upload your own files and put in your own knowledge. For me, the main problem is the readability of the reports. While they are long and detailed, they’re not written in an engaging way that makes them easy to read. I have a pile of them that I’ve yet to fully pick through. // Yeah, these robots have little creativity and original thought and, further on, they can only do the predictable. But, man, they sure can do a lot of it. // There is an annoying “buyer beware” nature of all this AI stuff. If you’ve used it for years, or even a few months, you de-hype it a lot. You know its limits and to treat it like a dumb tool. But that is not how it is sold at all, and it’s not how people who don’t use it think of it.
All hat, no cowboy - A bicycle for your hands: “Becoming a good programmer takes time, so does becoming an artist. What if all the people with ideas but no time or skills or persistence or real interest could participate and _turn their ideas into the thing?_ Surely non-musicians have great ideas for songs that they could turn into great songs if it weren’t for the inconvenience of musical instruments.” Yes, and: “One way to look at this – not a charitable way, but a view that feels true to me – is that managers view all need for human labor as an inconvenience. In part because they rarely get to experience what it’s like to be closer to a creative process, but also because they constantly experience the inconvenience of checking on deadlines and paying invoices. They would simply rather manage a robot than a human, so the only other people they have to interact with are other executives. Peak economic efficiency.”
One Year With the Vision Pro - Basically, not enough ROI for $3,500.
The Great AI UI Unification - What’s going on here is a classic power user versus normal user UX problem. I’m probably more power user than normal user. I don’t mind the UX; what I find annoying is the lack of easy access to docs that explain the features. For example, try to find a deep explanation of what’s currently in ChatGPT Pro. There really isn’t one. Even more so, last I looked the help page doesn’t list new features like Deep Research. And most ironically of all, if you ask ChatGPT itself, the answers are not great, or accurate. E.g., I asked about using its reminders and it didn’t even know it had them until I fed it a blog post on the topic. The naming of things is not helpful either. // Tech companies are terrible about documentation. While obscure, Apple Shortcuts is a great example: docs for it are terrible, usually non-existent.
Tech continues to be political - ”I don’t know how to attend conferences full of gushing talks about the tools that were designed to negate me. That feels so absurd to say. I don’t have any interest in trying to reverse-engineer use-cases for it, or improve the flaws to make it ‘better,’ or help sell it by bending it to new uses.”
Internal Product Management, Forrester.
AI Alone Won’t Drive Revenue - What Are You Missing? - Some light ROI talk.
I don’t know, despite this being from the UK (or maybe that makes the point): newsflash, Europe is expensive to live in, mostly by design as far as I can tell.
The Tyranny of Now - ”What Innis saw is that some media are particularly good at transporting information across space, while others are particularly good at transporting it through time. Some are space-biased while others are time-biased. Each medium’s temporal or spatial emphasis stems from its material qualities. Time-biased media tend to be heavy and durable. They last a long time, but they are not easy to move around. Think of a gravestone carved out of granite or marble. Its message can remain legible for centuries, but only those who visit the cemetery are able to read it. Space-biased media tend to be lightweight and portable. They’re easy to carry, but they decay or degrade quickly. Think of a newspaper printed on cheap, thin stock. It can be distributed in the morning to a large, widely dispersed readership, but by evening it’s in the trash.”
Learning from my mistakes… - It’s tough to monetize content that has near zero value or originality and can be easily pirated. This is especially true if the price is wrong. That sort of applies to every product. // “In the end though, you can’t optimize your way out of a black hole, the gravity is too heavy. We were marketing a product at a price point that was material to our customers, and giving them content which was largely available from our competitors for free. All the tweaks in the world couldn’t change that.”
Why are big tech companies so slow? - Because they build, sell, and support a lot of features.
How to add a directory to your PATH - Computers are easy, they said. You just need to read the manual, they said. It’s so intuitive!
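For the impatient, here’s a minimal sketch of the usual bash/zsh incantation (the `~/bin` directory and the startup file names are illustrative defaults, not from the linked post; check your own shell’s docs):

```shell
# Append a directory (here ~/bin, adjust to taste) to PATH for the
# current shell session only:
export PATH="$PATH:$HOME/bin"

# To make it stick, add that export line to your shell's startup file:
# ~/.bashrc for interactive bash, ~/.zshrc for zsh. Then reload with
# e.g. `source ~/.bashrc` (or just open a new terminal).

# Verify the directory now appears as an entry on PATH:
echo "$PATH" | tr ':' '\n' | grep "$HOME/bin"
```

The snag the linked post gets at is that none of this is obvious: which startup file is read depends on your shell and on whether the session is a login shell, which is exactly the kind of thing that makes “computers are easy” a joke.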
I don’t read everything, sometimes I have the robot read it for me. Here are its summaries.
AI agents are not coworkers, according to Forrester analyst Anthony McPartlin, who argued that the idea is little more than a marketing ploy. It’s just automation. His colleague Seth Marrs disagreed, predicting AI will become an indispensable workplace collaborator, though perhaps without an HR complaint line.
Meanwhile, most CFOs planned to increase tech budgets in 2025.
I’m guessing this dude isn’t meaning to be associated with them, but here’s a little insight into how TheTechBros.gov think, which might explain their batshit take on how to run a railroad.
Jack Crosbie mourned the decline of professional dress, noting that executives and tech billionaires get to dress however they want while the rest of us are left to wonder whether wearing Hoka running shoes to work signals liberation or quiet surrender. This, of course, is only a problem if you don’t already own half of North Dakota.
Samir Varma declared free will both an illusion and a practical reality in a post that argued no one—not even you—can predict what you will do next. The brain, it turns out, is deterministic but computationally irreducible, which is a fancy way of saying that you can only know what you’ll eat for dinner tomorrow by waiting for tomorrow. Until then, just assume it’s chicken.
Good overview from Bryan on the changes people often don’t make when they want to do the whole platform engineering thing:
Platform teams can have a difficult time convincing their management of the importance of developer experience, instead being pushed toward traditional governance and control measures. While these measures might satisfy IT audit requirements, they can severely impact development team velocity. The result is predictable: development teams, under pressure to deliver business outcomes quickly, create workarounds or turn to "shadow IT" solutions.
Yes, and…
It feels like he’s suggesting either (1) it’s possible to do too much governance, security, controls, etc., and thus platform teams don’t have enough time for, or stop doing, customer work (focusing on developer needs first, security/etc. needs second), or (2) that the governance, security, control, etc. measures aren’t needed (as much). Of course, us platform vendors would say, (3) if you buy our products, our platform will automate a lot of the governance, security, and controls so the platform team can focus on the customers, the developers.
I don’t hear enough multi-year, enterprise success stories about platform engineering. It’s been three (four?) years since Humanitec declared DevOps dead and ushered in the idea for their IDP (back when “P” meant portal, not platform) product. Backstage was some kind of gas on the fire to all that. And, yes, here we are. It feels like a similar oddity with Kubernetes: lots of talking, then lots of figuring out how to adopt it, and only a few big enterprise success stories. There are stories, but are there enough to justify having destroyed the progress we made with PaaS 5+ years ago? Something is wonky.
What is missing from all of this? Year after year, on this topic, it’s the same conversation.
There’s a digital transformation paradox here too: we’re always on about the urgency of needing to change, then we say there’s not enough change, and yet everything seems to be running just fine. Maybe it could be running even more fine!
One theory: because of the place I work, I don’t see all the success, just hear about the slogging from the people who want help. People who don’t need help don’t ask for it. Coupled with: thought leaders don’t talk about everything being fine; that isn’t the job. Few people talk about ongoing success, so all I see is struggling.
//
This week the kids are out from school, so I’m trying to figure out vacationing.