Here’s a new article on enterprise AI from me and a co-worker. As with most maturity models, it’s 2/3 prescriptive and 1/3 “here’s some ideas that might help.” A bit of map and territory.
With AI, we’re seeing a familiar anti-pattern, but this time flavor-injected with generative AI: the board charters a tiger team, gives them budget. The team hires consultants because their developers don’t know Python. Consultants identify AI use cases, write some Python apps, and many of the pilots show good business value.
AI-related spending is set to reach $371.6 billion by 2027 as businesses shift from experimentation to targeted AI investments. IDC, February 2025.
Then the board says, “OK, now do that 300 more times.” But the team hasn’t built a sustainable strategy. They haven’t used that time and money to add new organizational capabilities. Your developers still aren’t skilled in AI. So now adding AI to those hundreds of other applications isn’t easy. And that’s also when they discover day two AI operations: you have to run this stuff! Now you’ve just got piles of Python without real in-house capability. Day two AI operations isn’t cheap; it’s often underestimated, if planned for at all.
Developers already spend 60% of their time on non-application work.1 Add AI infrastructure, and, I’d guess, that’ll climb another 10–15%.
This is exactly what happened with Kubernetes. On the strength of the Google brand, excellent devrel’ing, and keynotes, companies assumed it was easy - until developers were drowning in YAML. Perhaps we’ll hear an AI-centric quote like this in a few years:
Well, I don't know how many of you have built Kubernetes-based apps. But one of the key pieces of feedback that we get is that it's powerful. But it can be a little inscrutable for folks who haven't grown up with a distributed systems background. The initial experience, that 'wall of yaml,' as we like to say, when you configure your first application can be a little bit daunting. And, I'm sorry about that. We never really intended folks to interact directly with that subsystem. It's more or less developed a life of its own over time. Craig McLuckie, SpringOne 2021
AI is heading the same way. You’ll spend 12 months building your own platform. It’ll barely work - if at all - and cost ~$2 million in staffing. Developers won’t use it like you expected. There’s no ROI, it delivers a third of what you promised, and you’re not really sure how to run and manage it long-term. And, yet, it still costs a lot of money.
CIOs have found generative AI goals harder to attain than first anticipated. Technology leaders are blaming data challenges, technical debt and poor strategy or execution for slowing down progress. Nearly half of CIOs say AI has not yet met ROI expectations, according to Gartner research. Reporting from the Gartner Symposium, October 2024.
That original team, now AI experts, will leave, either to work at an AI tech company or to help do it all over again at a new enterprise. You’ll be stuck with an unsupported, incomprehensible mess. No one else understands it. You’ve locked yourself into an opaque platform, wasted years, and landed back where you started - with a sprawl of shadow platforms and tech debt. But, don’t worry: the next CIO will launch a new initiative to fix it.
With AI, you’ll have two more problems.
First, AI evolves monthly, sometimes weekly. New models, new techniques (“agentic AI”). You need to keep up, or you’ll be stuck on outdated tech, losing competitive advantage. The best way to handle this? Just like any other service you provide (e.g., databases). Centralize those AI services, then you can upgrade and certify once, enterprise-wide, and give developers a single, enterprise-y source for AI models. The alternative is to find all the AI model usage across your hundreds of applications and enterprise-y them up one by one. You’ll quickly fall behind - just look at the versions of software you’re currently running. I bet many of them are three, five versions behind…especially whatever Kubernetes stacks you built on your own.
Second, if you use the same models as everyone else, you’ll get the same answers. Asking the AI “How do I optimize my pipe-fitting supply chain?” will yield the same response your competitors get. The real advantage is adding your own data. That’s hard work, needing serious budget and time. And once you figure it out, you’ll need to scale it across teams, which means centralizing AI access, just as with model upgrades above.
Enterprise AI needs a platform. And what we learned over the past decade is: building your own platform is a terrible idea. This is especially true in private cloud, which I reckon is where about 50% of the apps in the world run, probably much more in large organizations.
Instead, improve your existing stack. Don’t rip and replace it.
If you’re like most enterprises, you have a lot of Java developers using Spring. Use Spring AI as your AI framework to connect to models. The Spring AI developers have been working quickly over the past year to add the baseline APIs you’d need and to adapt to new innovations. For example, the agentic AI protocol Model Context Protocol (MCP) came out in November, and Spring AI is now the official Java implementation for MCP.
And if you’re like a lot of other larger organizations, you already have a strong platform in place, Cloud Foundry. You can add a full-on model service gateway to host or broker AI models. You can host those models yourself if you’re freaked out about public cloud AI, use public cloud AI, or, more than likely, do both! Most importantly, you’ll be able to keep up with model innovations, providing them to all your developers as quickly as your enterprise red-tape allows.
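As a sketch of what that centralization looks like on the app side: with Spring AI’s OpenAI-compatible client, pointing an app at an internal gateway instead of a public API can be a configuration change (the gateway URL below is hypothetical; the property names are Spring AI’s standard OpenAI client properties):

```properties
# application.properties — hypothetical internal gateway values
spring.ai.openai.base-url=https://ai-gateway.internal.example.com
spring.ai.openai.api-key=${AI_GATEWAY_TOKEN}
```

Upgrading or swapping the model behind that URL then happens in the gateway, once, rather than app by app.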
Your platform team can manage AI services like any other - security, compliance, cost tracking. Since it serves OpenAI-compatible endpoints, you can even still use those Python apps, but now your operations team can secure and manage them instead of whatever consultant-built Python stack you got stuck with.
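To make that concrete, here’s a minimal sketch of why those consultant-built Python apps can keep working: an OpenAI-style chat request is the same shape regardless of who serves it, so moving an app behind a centrally managed gateway is mostly a base-URL change. The gateway URL and model name below are hypothetical, and the request is only built here, not sent:

```python
import json

# Hypothetical internal gateway; only this base URL changes when an app
# moves from a public AI API to a centrally managed, OpenAI-compatible one.
GATEWAY_BASE_URL = "https://ai-gateway.internal.example.com/v1"

def chat_completion_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat completion request (constructed, not sent)."""
    return {
        "url": f"{GATEWAY_BASE_URL}/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        },
    }

req = chat_completion_request(
    "internal-llm", "How do I optimize my pipe-fitting supply chain?"
)
print(json.dumps(req, indent=2))
```

The point isn’t the three lines of Python; it’s that the platform team now sits between every app and every model, which is where the security, compliance, and cost tracking happen.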
So, the plan: get developers using Spring AI to start embedding AI into their apps. Work on integrating your own secret/proprietary data into the AI app workflow. When they’re ready, add AI services to your platform so production deployment and management are seamless. You’ll have day two AI operations covered. And because it’s centralized in a platform, you can now roll it out to those 300 more apps the board asked for.
Then, you can execute a proven playbook: developers should be shipping AI-powered features every two weeks, experimenting and refining based on real usage. I outlined this approach in Monolithic Transformation, with more executive-level insights in Changing Mindset and The Business Bottleneck.
You know, as always - try to avoid repeating the anti-patterns of the past.
Bathos, n. – The fine art of inspiring deep emotion, only to trip over a misplaced joke, a clumsy metaphor, or an unfortunate mention of flatulence.
Alacrity, n. – The suspiciously eager willingness to do something, often confused with competence but more often a sign that the asker failed to mention how much work it actually involves. (Found in The Economist, explained by the robot, as before.)
Screwbernetes.
“Forward Deployed Engineers.” SEs with commit access.
“She glared at me from across the road and shooed me off because I couldn’t stop laughing.” Sting.
“Ms Adichie’s viral TED talk on feminism received an even more impressive accolade: Beyoncé sampled its lines.” The Economist World in Brief.
“bantam self-confidence.” Tina Brown.
Measuring Productivity: All Models Are Wrong, But Some Are Useful - “Measure Speed, Ease, and Quality: Different facets of productivity warrant the use of different metrics. We typically think of productivity as a balance among speed, ease, and quality.”
Buy an expensive “AI Gateway”? Thanks, we’ll just build and open-source it, says Bloomberg - Should we be calling these “gateways”? Probably better than “broker.” // Also crazy to think we’ll just be recycling the same old patterns. Maybe it’s because they work! But, you have to strip away all the marketing-talk from it. And: building your own commodity infrastructure is a bad idea. It’s great for the team that does it and then quits to found a startup or go work at a higher-paying tech company, so, I guess: more like capitalists being accidentally generous to employees.
Four Marketing Principles That Redefine Markets from Klaviyo’s CMO - ‘“Creating fear never works, because in the immediate, you can probably prompt people to take action because they’re like, ‘Oh my! I must do something,’ but it leaves a negative perception in their mind.” “You don’t become a beloved brand over a period of time.”’
Skype is dead. What happened? - Ode to Skype, and complaining about Microsoft having no imagination to evolve it. It’d be helpful to read a detailed analysis of how and why.
What I learned from one month without social media - “Literally, it makes no difference what I abstain from, I will always find a way to procrastinate.”
I’m Tired of Pretending Tech is Making the World Better - Try to avoid tools that require you to acquire new tools to use the first tools.
America Voted For Chaos. The Markets Are Feeling the Punch. - Dumb disruption. Also: masters of the universe hubris.
One “Bad Apple” Correct Interpretation - On the use and mis-use of “bad apples.”
Why Skyscrapers Became Glass Boxes - “Ultimately, it was economics (or at least perceived economics) that drove developers to embrace this style. Glass curtain wall buildings were cheaper to erect than their masonry predecessors, and they allowed developers to squeeze more rentable space from the same building footprint. Ornate, detailed exteriors were increasingly seen as something tenants didn’t particularly care about, making it harder to justify spending money on them. And once this style had taken hold, rational risk aversion encouraged developers and builders to keep using it.”
Yoon Suin and Orientalism - an example of a “woke” look at a D&D setting.
The Lights of My Life - Accent lighting and lamps used by one photographer.
The difficulty level of children - “It also runs the other direction. If you have two kids, and one kid is away (with a grandparent), it feels like having zero kids.”
I don’t read everything, sometimes I have the robot read it for me. Beware that the robot sometimes makes things up. Summaries are for entertainment purposes only.
AI firms raced to shrink large models into cheaper, faster alternatives, ensuring even small companies can now afford to hallucinate at scale. IBM expanded its AI strategy, embedding intelligence into products and unleashing 160,000 consultants to generate AI-powered assistants. AI, once set to replace lawyers, now helps them work faster—though not quite enough to stop them from citing fake cases in court.
The fragrance industry, once built on the power of scent, now thrives on TikTok, where influencers sell perfumes their followers will never have to smell - arguably the best possible scenario for everyone involved.
Is neoliberalism truly in decline? Despite its failures - rising inequality, social fragmentation - it remains the dominant economic framework with no clear replacement. Meanwhile: it doesn't matter if you saw a rabbit, a vase, or an old woman because a study debunked the idea that optical illusions reveal personality traits.
Events I’ll either be speaking at or just attending.
VMUG NL, Den Bosch, March 12th, speaking. SREday London, March 27th to 28th, speaking. Monki Gras, London, March 27th to 28th, speaking. CF Day US, Palo Alto, CA, May 14th. NDC Oslo, May 21st to 23rd, speaking.
Discounts: 10% off SREDay London with the code LDN10.
I’ve been running the above, uh, screed in my mind for a few weeks now. Perhaps I’ll use it as the basis for my VMUG Netherlands talk next week. It’s not exactly the topic of the talk, but good talks, as delivered, often are.
60% comes from this IDC survey, where I added up security, implementing CI/CD, app and infra monitoring and management, and deploying code - the rest are definitely things I'd want developers doing. Full citation and background: IDC #US53204725 (February 2025). Source: IDC, Modern Software Development Survey 2024 (N=613) and Developer View 2023 (N=2500).
"But that said, he admitted smoking played a huge part in his life. “I don’t regret it. It was important to me. I wish what every addict wishes for: that what we love is good for us.”
He went on: “A big important part of my life was smoking. I loved the smell of tobacco, the taste of tobacco. I loved lighting cigarettes. It was part of being a painter and a filmmaker for me.”
“Reams of founderspeak floated up into the warm breeze.” Tyler Cowen profile.
And: “‘I’m not very interested in the meaning of life, but I’m very interested in collecting information on what other people think is the meaning of life.’ And it’s not entirely a joke.”
“I feel that writing about the topic will make me stupider.” Also Tyler.
Gruber on Skype’s EoL: “I don’t think it’s an exaggeration to say that if not for Skype, podcasting would’ve been set back several years.”
“My toaster has developed self-awareness, which is concerning because its only purpose is to burn things.” AI IoT FUD.
“Vibe working is using AI to turn fuzzy thoughts into structured outputs through iteration.” Azeem Azhar.
“Stove touching.” Move fast and touch things.
“a beautifully crafted digital fortress.” Ian.
“far left government computer office,” DOGE-slang.
“Who’s gonna help us? …nobody’s coming.” Noah.
macOS Tips & Tricks - lots of keyboard shortcuts. See also their iOS tips, and macOS command line tips.
Agentic definition from Azeem Azhar and Nathan Warren - Good simple AI vs. agentic AI framing: “Some view AI as a tool – a system that passively performs functions, while others see it as an agent – a system that actively pursues objectives, makes independent decisions and may develop instrumental goals such as self-preservation.”
ServiceNow’s newest AI agents bring intelligent automation to telecommunications firms - ServiceNow’s AI agents analyze network data to diagnose and resolve issues, predict disruptions, and provide real-time explanations for unusual usage patterns, improving customer service and reducing complaints.
How AI ‘Reasoning’ Models Will Change Companies and the Economy - Lots of good thinking here, not least of which is a very delightful writing style. There are about five quips in there that are fun. // Also notice that the benefits are accruing to the individual here; the enterprises have yet to figure it out.
AI’s productivity paradox: how it might unfold more slowly than we think - The case that productivity effects of AI in the macro economy will be slow. Micro-economy (individuals) still looks good.
State of Java Survey Confirms Majority of Enterprise Apps Are Built on Java Performance, Superior Java Support - “Typically, when the topic of developing AI or ML functionality arises, people think of Python. Perhaps surprisingly, the survey found that Java was used more often than either Python or other languages. In fact, 50% of organizations use Java to code AI functionality.”
I’m not contributing to the AI slop problem. I’ve been posting some of the explainer queries I’ve had with ChatGPT (see the AskTheRobot category). I like them, they’re easy content, and I’m curious if they draw incoming traffic. // It’s getting warm and sunny again in the Netherlands. I hope that means I’ll get my ass on the bike and get back to fiets-flâneuring.
My first law of enterprise AI: if you end up having two robots talk with each other to complete a task, that task was bullshit in the first place, and you should probably eliminate it rather than automate it.
For example, if AI is used on both sides of B2B procurement (enterprise software sales), then much of the process is probably bullshit in the first place. There is so much weird and ancient practice in procurement, on both sides, that it’s clearly a poorly done process and part of enterprise IT culture.
Nobody likes this, and we all know there’s a high degree of waste to it:
The average software sourcing process involves 28 stakeholders and takes six months. That’s six months of manual research, vendor meetings, demos, internal debates, and ultimately, a decision that still may not be fully informed.
Several years ago, Luke Kanies outlined his frustration and experience with that culture. When the buyers and the users are different people, and deal size goes up, beware: you run the risk of sailing in a sea of bullshit. Those selling (vendors) can bullshit a lot, but those buying can bullshit a lot too. A perfect example of using the first law of enterprise AI to remove waste.
Related: this is a great industry analyst overview of the enterprise IT category “agentic AI,” from Jason Andersen:
Conceptually, “AI Development Framework” is a type of middleware technology. To be more specific, it's a layer of shared services that provides a set of APIs and integrations for practitioners (not just developers) to build AI applications, particularly agents. The benefit of this is twofold. First, developers don't have to sacrifice too much flexibility while gaining the potential to work more efficiently. Second, the enterprise also gets a more uniform set of standards to drive better governance and sustainability.
It’s middleware, platform, and operations stuff. All the usual develop, operate, optimize. What’s slightly missing is day two operations, but we’ll all re-discover that soon enough when people try to ship version 2.0 of their (agentic) AI apps.
My advice: from now on, when you hear the phrase “agentic AI” just think “AI middleware used to add AI features to apps.”
Anything else is a bit much. The phrase is fine, just keep a realistic and pragmatic definition for enterprise AI in your head.
There is a distinction between “simple AI” and “agentic AI,” but once you poke at it, agentic AI is doing what you assume AI was doing in the first place. But that won’t last. Once “agentic AI” becomes mainstream (and cheap enough), people won’t really be doing “simple AI” anymore. Eventually (two years from now?) we’ll just drop the “agentic” and go back to calling it AI.
Here’s a recent Goldman newsletter (PDF) throwing cold water on AI hype-heat:
We first speak with Daron Acemoglu, Institute Professor at MIT, who's skeptical. He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. And he doesn't take much comfort from history that shows technologies improving and becoming less costly over time, arguing that AI model advances likely won't occur nearly as quickly--or be nearly as impressive--as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are "not a law of nature." So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.
Yes, and…this is why I think individuals will be the ones who benefit from AI usage most.1 Each individual person using even just AI chat apps to get their daily work done.
AI will benefit individuals by reducing the time it takes to do knowledge worker toil, making their work less tedious, and also raising the quality of their work. This means they’ll be able to do their tasks faster, be less bored, and likely get better quality work-product. This gives individuals more time and energy.
You then need to think like a company does: how do you use that extra resource for The Corporation of You?2 You can then choose between two strategies:
Upping your own productivity - doing more work, hoping your employer compensates you more (good luck!), or,
Working less - getting the same pay, upping your personal productivity profit margin.
Either way, people who use AI for their work will see big benefits.
“There’s no particular reason 64-year-old alumni should be able to go wherever they like. But there’s definitely a different feel.” The dr.
“The faux cocaine mirrors are so hard to keep - they’re hard to get, and they’re stolen all the time.” #PalmSpringsLife, used to reflect on “luxury beliefs.”
I missed this when I linked to Charles Betz’s simple definition of a platform, but he mentions that he uses the term “application” instead of the Team Topologies term “stream-aligned” team. “I have not seen the term ‘stream-aligned’ get traction in portfolio management,” he says. Checks out for me.
Lack of AI-Ready Data Puts AI Projects at Risk - If you’ve let your data lakes turn into data swamps your AI projects are going to go poorly. // “Gartner predicts that through 2026, organizations will abandon 60% of AI projects unsupported by AI-ready data.”
AI Essentials for Tech Executives - Good tables translating AI-tech-speak to business outcomes.
Paul Millerd on AI and writing - This is the response a very pragmatic writer had to AI.
5 Questions to Help Your Team Make Better Decisions - (1) What Would Happen if We Did Nothing? (2) What Could Make Us Regret This Decision? (3) What Alternatives Did We Overlook? (4) How Will We Know If This Was the Right Decision? (5) Is This Decision Reversible?
Drive Scale And Speed With The Platform Org Model - 59% of respondents have made using a single platform, instead of a whole bunch of different platforms, a priority. // Enterprises want the benefits of centralized, standardized IT stacks. Always. DIY platforms and shadow platforms (a sprawl of accidental platforms) are often a bad idea. Your platform needs aren’t special; your app needs are. If you focus on platforms, you’ll steal mojo and budget away from those app needs.
A theory of Elons - If you can get away with breaking regulations and laws, you can gain competitive advantage over those who don’t.
The big idea: what do we really mean by free speech? - What “freedom of speech” means to the asshats: “what they actually want is freedom from the consequences of broadcasting their views.”
Vision and Distortion in Cézanne’s ‘Still Life with Plaster Cupid’ - Good example of art criticism, “how to see,” all that.
Old Media Finally Wakes Up from a Coma - “Hey, guys”-style getting more mainstream. // Also, long form podcasts.
Dutch people concerned with U.S., Russia, Ukraine developments; More support EU army
Bill Skarsgård transformed beyond recognition for Robert Eggers’ Nosferatu, a film in which the vampire is still the scariest thing on screen.
Issues for His Prose Style - Picking the right noun as a deep cut mechanic, to show authenticity, and build up a mythos of yourself, and for yourself. // Hemingway’s letters.
See y’all next time.
That idea isn’t original to me. It’s probably from Ben Thompson, but I don’t recall.
I cover more of how to think like a company to run The Corporation of You in my thriving-and-surviving-in-bigcos pedantry-fest; part 2 and part 9 are especially applicable here.
I’m clearly a big fan of AI and believe it’s helpful in many ways.
I feel comfortable with that because I’ve used it for over two years now and rely on it daily for a wide variety of tasks, both work- and personal-related. That means I know exactly what it’s capable of, what it’s good at, and what it’s not good at. Me and the robot have a good relationship: we know how to work with each other.
Right now, generative AI is only good at working with text. It generates text—if you can reduce audio to text, it excels at that, and if you can convert text to audio, it’s equally proficient.1
Text can take many forms, and generative AI handles them well. As others have noted, if you want to shorten text, it’s almost amazing at that. If you want to summarize text, it’s pretty good at that. And if you need a summary to help decide whether to read the full text, it’s fantastic.
If you want to learn and understand a topic, part of that process involves condensing large amounts of text into a shorter, more digestible form—and it’s pretty good at that. All of the consumer agentic things are going out there and searching the web for you to find that text and then summarizing it all. If what you want to learn and understand is well documented on the World Wide Web, it is good at that. If you want to get insights into secret, obscure, poorly documented things - stuff that has little public text - the AI’s Deep Research is going to be shallow bullshit.
Even when it’s good, with learning and understanding, you need a finely tuned bullshit detector. Once you detect the bullshit, you can ask again or go search on your own. But really, you need a bullshit detector in all of life—robot or meat-sack. If you don’t have one, build one quickly. The benefits you get from that will last longer and far outweigh the benefits you’ll get from AI.
This aspect of learning means it’s not so great at company strategy. If you and your competitors are all using the same public text, you’re all going to get the same answer. There will be no competitive advantage. What’s even worse now is that it’s effortless for your competitors to understand your strategy and predict the ones you’d come up with…if you only based it on public text. You have to figure out how to get your secret text in there. With all interactions with the robot, you have to bring a lot to the chat window. The quality of what you bring will determine the quality you get from the robot. Garbage in, garbage out. Which is to say: nothing valuable in, nothing valuable out.
Back to pedantry: It’s proving to be good at teaching-by-modeling: it shows you what The Big Report could look like, explains what the fuck those 80 slides your teacher gave you are asking you to do in your essay, and serves as an additional tutor and instructor when you can’t afford to hire one.
The robot is also effective as a co-writer. In other words, you become a co-writer with the robot. It can generate text endlessly, and if you collaborate with it, you’ll get great results. Just as you would with any co-writer (especially a ghostwriter or a co-author whose name appears in smaller print on the book cover), you need to get to know each other, learn how to work together, and figure out the style you want. Claude is great in this regard—it has a simple tool for learning and refining your style. If you haven’t spent time teaching the robot your style, you should do so.
You can reduce videos, podcasts, scripts, even small talk, to text. Recall, AI is good at text, so it will be OK at that.
It’s okay at imagining. I play D&D with it, and it has gotten a lot better at handling mechanics over the past two years, but it still remains rather boring and predictable when it comes to unassisted imagination. If you feed it published D&D adventures, it does okay. But just try having it come up with ten dwarf names—they’ll all be variations on “Iron Shield” or “Rock Breaker” and the like.
It’s really good at writing code. And guess why? Code is text. Is it good at creating multi-system applications and workflows used to, say, coordinate how an entire bank works? Probably not—very few people are even good at that. And then there’s the whole process of getting it into production and keeping it running. If you think the robot can do that now—or ever—¡vaya con Dios! Please report back if you survive.
What about tasks like optimizing supply chains? Maybe one day the robot will be good at other B2B tasks, but I suspect that for many years good old fashioned machine learning will keep doing just fine there.
Don’t use AI for tasks where being 100% correct is important. If the consequences of being wrong are dire, you’re going to get fucked. Worse, someone else is going to get fucked.
But, if you’re using the robot for a system that tolerates—or even thrives on—variety (errors), it’s great. “Anti-fragile” systems? I don’t really know what that means, but: sure. Are you brainstorming, whiteboarding, and exploring? Yup, great at that. Using it for therapy? It’s fascinatingly good at that.
You get the idea: if you’re using generative AI for something where you can recover from errors quickly, there is “no right answer,” and the task is text-based, then yes, it is great for that—and you need to start using it now.
Let’s build up to my concern:
Text is all generative AI is currently good at.
Most people have not used AI for 30 days, let alone 12 months, let alone two-plus years. I’m just guessing here. Surveys tell dramatically different stories. But most surveys show only a small amount of use, and only recently.
So, I don’t trust that most people yet understand what AI is good at—they often imagine it’s capable of far more. You have to use it to know it, and learning by doing is a lot of effort and usually takes longer than your ROI model’s horizon.
That’s “hype,” sure, but it’s more like misunderstood inexperience. It’s classic diffusion of innovation (ask Chatty-G to tell you about that concept; I bet it’ll be pretty good). Sure, that diffusion has been getting faster, but if humans are involved, we’re still talking decades—at least one decade.
My concern here is that once we collectively set expectations beyond reality, the fall is bigger, and the cost of recovery becomes too great. Worse yet, people waste a lot of time chasing AI fantasies. They thought there’d be 100x returns when, in reality, there were only 10% or even 25% returns. You fire employees, take on investment and risk to overhaul your business, and spend time on those AI fantasies instead of pursuing other strategies. And then, when you learn what AI is truly/only good at, you’ve invested everything—only to find that your assumptions, ROI models, and, thus, investments were a fantasy. Plus, once you build it, you now own it forever, no matter how shit it is. Plus, you played a game of chicken with opportunity cost, and opportunity cost won.
So, don’t do that. Don’t bet the farm on something you haven’t used firsthand for at least 30 days, and certainly don’t stake your jobs or your index funds on it.
“I was the man of my dreams,” Peter on Peter.
“the unexampled,” on Gary Snyder.
And, from Gary: “this romantic view of crazy genius is just another reflection of the craziness of our times… I aspire to and admire a sanity from which, as in a climax ecosystem, one has spare energy to go on to even more challenging – which is to say more spiritual and more deeply physical – things”
“Mandatory Commute policy,” synonym for RTO.
“autogolpe,” self-harm.
“If you change it, you own it,” if only.
”monomaniacal dork squads,” power-up.
“a steaming pile of, um, DOGEshit,” deep analysis.
“Our Son of a Bitch,” various.
“You can’t sell a sandwich with secret mayo,” Noah’s quest continues.
There’s a first time to forget everything.
“[rhapsode](https://en.wikipedia.org/wiki/Rhapsode).”
“Features of the future,” a CF Day topic.
When submitting a conference talk and given the option to select “audience level,” I’ve started always selecting “intermediate.” I don’t know why, or what that means, but it’s some kind of fun.
“LLM aka Large Legal Mess,” don’t use the robot for lawyer-shit.
“inspo,” AI hair.
“If I’d wanted chatGPT to answer, I’d have asked chatGPT” @byronic.bsky.social.
"My leather jacket tailor never flinched, so I'm not sure what's wrong with all the Finance Bros."
Predictably, a bunch of AI stuff of late.
The reality of long-term software maintenance - “In the long run maintenance is a majority of the work for any given feature, and responsibility for maintenance defaults to the project maintainers.” Related:
Top EDI Processes You Should Automate With API - Tech never dies. Helpful consequence: take care of it before it takes care of you.
How’s that open source licensing coming along? - ”The takeaway is that forks from relicensing tend to have more organizational diversity than the original projects. In addition, projects that lean on a community of contributors run the risk of that community going elsewhere when relicensing occurs.”
Key insights on analytical AI for streamlined enterprise operations - ”The big issue, whether it’s generative or analytical AI, has always been how do we get to production deployments. It’s easy to do a proof of concept, a pilot or a little experiment — but putting something into production means you have to train the people who will be using this system. You have to integrate it with your existing technology architecture; you have to change the business process into which it fits. It’s getting better, I think, with analytical AI.” // It’s always been about day two.
Why I think AI take-off is relatively slow - My summary: humans resisting change is a bottleneck; also, humans not knowing what to do with AI; current economic models can’t model an AI-driven paradigm shift, so we can’t measure the change; in general, technology adoption takes decades, 20 for the internet, 40 for electricity. // AI is a technology and is prey to the usual barriers and bottlenecks to mass-adoption.
GenAI Possibilities Become Reality When Leaders Tackle The Hard Work First - Like any other tool, people have to learn how to use it: “Whatever communication, enablement, or change management efforts you think you’ll need, plan on tripling them.” // Also, garbage in, garbage out: “GenAI can’t deliver real business value if a foundation is broken. Too many B2B organizations are trying to layer genAI on top of scattered, siloed, and outdated technologies, data, and processes. As a result, they can’t connect the right insights, automations stall, and teams are unsure of how to apply genAI beyond basic tasks.”
A.I. Is Changing How Silicon Valley Builds Start-Ups - ”Before this A.I. boom, start-ups generally burned $1 million to get to $1 million in revenue, Mr. Jain said. Now getting to $1 million in revenue costs one-fifth as much and could eventually drop to one-tenth, according to an analysis of 200 start-ups conducted by Afore.” // Smoke 'em if you got 'em…
The AI Experience - What’s Next For Technology Marketing - Back up the truck and dump the enterprise marketing slop: “Did you consider that soon you may be marketing to GenAI agents of your customers?” // And: “While the term “Account Based Marketing” or ABM is still floating around, less marketers are focused on continuing to enable personalized marketing for a subset of the customer and prospect base.” // Instead of having to craft the personalized content, you have the robot do it. Then the marketing skills you need go back to the mechanics of running campaigns. // Yes, and, this is an example of my “bad things are bad” principle. If the slop you get is bad, it will be bad. But it can also be good, in which case, it will be good.
How Ikea approaches AI governance - ”Around 30,000 employees have access to an AI copilot, and the retailer is exploring tailoring AI assistants to add more value. Ikea is also exploring AI-powered supply chain optimization opportunities, such as minimizing delivery times and enhancing loading sequences for shipments to minimize costs. AI in CX mostly targets personalization. // ‘“I’m not just talking about generative AI,” Marzoni said. “There’s some old, good machine learning models that are still absolutely delivering a lot of value, if not the majority of the value to date.”’
U.S. Economy Being Powered by the Richest 10% of Americans - One estimate: in the US, “spending by the top 10% alone accounted for almost one-third of gross domestic product." // Never mind the, like, morals?…doesn’t seem very anti-fragile. // “Those consumers now account for 49.7% of all spending, a record in data going back to 1989, according to an analysis by Moody’s Analytics. Three decades ago, they accounted for about 36%.”
Why it’s nice to compete against a large, profitable company - Because they can’t lower prices on their core products lest Wall Street freak-the-fuck out.
See y’all next time! Gotta go run a few ideas by my pal, the robot.
It can kind of convert text to images, but only if you like the same people over and over or are an anime fan. If you like a perfectly chiseled chin, AI generated images are for you. You can put a lot of work into getting your text in shape to produce something unique that looks real. In this respect, it gives a tool to people who can’t do graphics (like me!) which is actually pretty amazing. But it can only go so far. Just try to create a “realistic” looking person instead of a perfect fashion model. It is near impossible. Of course, this is because it’s not trained on enough images yet, I guess.
Enterprises pouring money into GenAI and CEOs treating AI agents like cheap labor - yet only 25% see ROI right now. Vibes: “Europe’s long holiday from history is over.” Also: IBM does RTO, predictions about DOGE layoffs, the term “platform” remains a favorite excuse for overcomplicated tech, and “autonomous killer robots.”
What to make of using AI to automate HR processes? Melody Brue and Patrick Moorhead look at Oracle’s work there:
The agents are designed to support several key facets of the employee experience, including hiring, onboarding, career planning, performance reviews and the management of compensation and benefits.
Yes, and…
(1) If it’s bullshit work (“busy work”), eliminate it, don’t automate it. The thinking here promises to automate bullshit work like manually formatting performance reviews, copy/pasting boilerplate onboarding checklists, clicking through timecard approvals, writing job descriptions from scratch, and filling out endless HR forms. Yes, and…these are tasks that should probably just be eliminated or drastically simplified rather than lovingly preserved in AI amber. I’ve written job descriptions several times and there is something wrong-feeling about the process and the results. The same goes for performance reviews, from both sides of the review. If you feel like you’re doing bullshit work and you get excited about automating it with AI, why not eliminate it instead? Or, you know, fix it.
(2) How could workers use similar AI stuff to maximize their advantage versus management? In a heavily bureaucratic HR system, reports and analysis are important: you need to prove that you deserve a promotion, more money, whatever. You’re often weighed against relative metrics: how much do people get paid in a region, how did you perform versus other people on a bell curve (or ranking), etc. Putting together those reports is tedious and your managers may not put in the effort. Have the AI do it for you. You could also look at those wordy job descriptions to extract what your role is responsible for doing. And when you need to come up with annual MBO/KPI/OKR/whatever the next TLA is for “goals,” have the AI look at the goals trickle-down and come up with yours. Then have it track what you should be doing. AI could be useful for negotiating salary too: how much should you even be asking for? What is your BATNA? What is theirs?
(3) Could you run the robot on, say, the last 5 years of reviews and then compare it to what the human evaluators did? Is the robot better (less bias, giving feedback that improves worker performance, finding low performers, etc.), or is it worse (wrong analysis leads to a less performant workforce)? As a worker, though you might not actually have access to full reports, you could try to find out what the real performance measures are. Load in job descriptions, give an overview of what highly rewarded people did, and then see what attributes and actions get rewarded. Never mind what the official metrics are, target those.
There’s a general theory for all AI use here as well: if what your AI produces is something that can just be consumed and used by another AI, it’s probably bullshit work that you can reduce to a quick email or can be eliminated entirely.
***
For him, of course, it was a business opportunity. He was part of what I would come to see as a savvy minority of people and companies capitalizing on AI fatigue.
Meanwhile, this is a fantastic piece on the state of HR tech from the worker’s perspective. There’s plenty of AI talk in it. It’s also fun to see what tech conferences and marketing look like to (I presume) outside eyes. We are such dorks and, often, tasteless:
While the word people was plastered everywhere as both a noun and an adjective, the workers of the exhibit hall's collective imagination were not real, three-dimensional people. They were shadows without substantive interests or worries beyond the success of their companies. That was the only way these products could be pitched as win-wins. But, come on. We were in Las Vegas - everyone here knew the real money comes from making sure enough people are losing.
Fresh Podcasts
There are new episodes of two of my podcasts, listen to ‘em!
Software Defined Interviews #94: Adding more condiments to the 7 layer networking burrito, with Marino Wijay - Why do we keep adding new layers and frameworks instead of just fixing the ones we have? They also talk about the challenges of platform engineering, the importance of empathy in tech, the difficulties of integrating multiple layers in tech stacks, the essential role of effective communication and prioritization, and EmpathyOps.
Software Defined Talk #507: Battery of Potential - This week, we discuss how banks beat PayPal with Zelle, what the Wiz survey says about AI usage, and whether you can really “disagree and commit.” Plus, are multitools actually useful?
AI Agents: Why Workflows Are the LLM Use Case to Watch - The agentic app revolution isn’t a transformation story. It’s a modernization story; a chance to solve small problems with the team you already have.
AI Agents and the CEOs - “At the risk of saying the quiet part out loud, the way CEOs are talking about agents sure sounds like how they talk about employees–only cheaper!” // “Companies are dedicating significant spend to AI–approximately 5% of the revenue of large enterprises (revenues over $500 million) according to one survey by Boston Consulting Group, and yet only 25% claim they are seeing value from their AI investment.”
Learning from examples: AI assistance can enhance rather than hinder skill development - Could be that AI use makes you better. // “Decades before the advent of generative AI, the legendary UCLA baseball coach John Wooden declared that the four laws of learning are explanation, demonstration, imitation, and repetition (31). Few learners have access to the best human teachers, coaches, and mentors, but generative AI now makes it possible to learn from personalized, just-in-time demonstrations tailored to any domain. In doing so, AI has the potential not only to boost productivity but also to democratize opportunities to build human capital at scale.” // Also, some prompts used to evaluate writing quality. The one rating “easy responding” is interesting: how easy is it to (know how to) respond? Maybe good for CTAs.
Gartner Survey Reveals Over a Quarter of Marketing Organizations Have Limited or No Adoption of GenAI for Marketing Campaigns - ”Nearly half (47%) report a large benefit from adopting GenAI for evaluation and reporting in their campaigns.” // The reverse number is more interesting: 77% of surveyed marketing people say they’re using generative AI for marketing stuff. Related:
OpenAI reaches 400M weekly active users, doubles enterprise customer base - “The ChatGPT developer currently has 2 million paying enterprise users, twice as many as in September.” With “400 million active weekly users, a 33% increase from December.” And: “The New York Times reported in September that the company was expecting to end 2024 with a $5 billion loss on sales of $3.7 billion.”
2025 is the breakthrough year for Generative Enterprise — and partnering with a capable services partner is critical - “[S]pending on GenAI is rising (HFS data suggests enterprise investment is rising by more than 25% on average into 2025), we start from a low base. We estimate enterprise spending on GenAI in 2024 accounted for less than 1% of global IT services spending. This is just one illustration of how far we still have to go.” // Plus, a whole bunch of commentary in enterprise AI.
Data is very valuable, just don’t ask leaders to measure it - AI ROI is difficult: “in a survey of chief data and analytics (D&A) officers, only 22 percent had defined, tracked, and communicated business impact metrics for the bulk of their data and analytics use cases… It is difficult, though: 30 percent of respondents say their top challenge is the inability to measure data, analytics and AI impact on business outcomes”
A Simple Definition Of “Platform” - “a product that supports the creation and/or delivery of other products.”
IBM co-location program described as worker attrition plan - From the RTO-as-not-so-stealthy-layoff files.
YouTube (GOOGL) Plans Lower-Priced, Ad-Free Version of Paid Video Tier.
On European Defence, Energy and Growth - Imagining big changes in European priorities: changing policy to get more energy, more emphasis on militaries.
No Rules Are Implicit Rules - The European view on enlightened American management policy: “Greg, I hate to bring it to you, but working for ten fucking hours a day is not the normal hour. I don’t care if you live in America or not. The section continues with other “grand” examples of managers taking “up to” 14 days a year off to show their employees they should to so too. Let’s assume the best here: 14 workdays are almost three weeks. A year. The statutory minimum for full-time employees working a forty-hour week is 20 (thus 4 weeks) in Belgium. Oops.”
Rage Against the Machine - Perceptive: “They’re going to try two or three things they think will solve everything, which will be thrown out in court. I assume the first thing they’ll do is some kind of hiring freeze, and then, after three months, they’ll realize agencies have started to figure out ways to get around it. And then they’ll try to stop that, and they won’t be able to do that. Then they’ll try to make people come to work five days a week, and that’s going to be difficult because a lot of these agencies don’t have offices for these people anymore. I think it’s going to be one thing after another, and maybe after four years the number of employees will be down 2 percent—maybe.” // The layoff playbook DOGE is working comes from the tech world, and it sort of works there. But that’s because tech companies can die, be acquired, or be reborn. In a tech company, you rarely starve the beast (or amputate parts of it) and have it survive. Do we want the same outcomes with government?
I don’t read everything, sometimes I have the robot read it for me. Beware that the robot sometimes makes things up. Summaries are for entertainment purposes only.
Kelsey Hightower declined to join the AI gold rush, advocating instead for a glossary of tech jargon to remind everyone that AI is not new, just rebranded.
Platform engineering teetered between breakthrough and bust, with some heralding it as the savior of DevOps while others braced for its descent into Gartner’s “trough of disillusionment.” Several years ago (February, 2023) Sam Newman insisted that calling something a “platform” is often just an excuse to overcomplicate things, suggesting “Delivery Enablement” as a rebrand.
Meanwhile, IBM Consulting offered enterprises a guided tour of “Agentic AI,” a term that likely needs its own entry in Hightower’s proposed glossary.
“effortful,” AI study.
“Topological qubits,” MSFT.
“Deliberately they don’t give a shit,” Emily, Political Gabfest, February 20th, 2025.
And: “chaos entrepreneur,” John.
“Europe’s long holiday from history is over,” John Naughton.
"This [Trump] administration cares about weapon systems and business systems and not ‘technologies. We're not going to be investing in ‘artificial intelligence’ because I don’t know what that means. We're going to invest in autonomous killer robots." Fund the outcomes, not the tech.
Events I’ll either be speaking at or just attending.
VMUG NL, Den Bosch, March 12th, speaking. SREday London, March 27th to 28th, speaking. Monki Gras, London, March 27th to 28th, speaking. CF Day US, Palo Alto, CA, May 14th. NDC Oslo, May 21st to 23rd, speaking.
Discounts: 10% off SREDay London with the code LDN10.
Nothing to report today.
In this episode: AI eschatology, assology, and a deep, intellectual commitment to hating mayonnaise. Tariff trouble, security panic, and NVIDIA shrugging off DeepSeek. Young voters shift rightward, no one agrees on ‘medium roast,’ and Hollywood still relies on glue to critique its own youth obsession.
“immanentize the AI eschaton,” Charlie Stross.
“The ass is a very strong symbol of how our body is not neutral in the public space. How our body is constantly scrutinized, has been shaped to please the man’s eyes, has been seen as a body part that was objectified, that was detached from the person who was simply bearing it.” Assology. See related boobology below.
“This is the number one YouTube channel about hating mayonnaise.” Noah.
“LLMs are good at the things that computers are bad at, and bad at the things that computers are good at,” Slides Benedict.
“If I live, I must fully accept the game; I must have the most beautiful life. I don’t know why I am here, but since I remain here, I will construct a beautiful edifice.” A young Simone de Beauvoir.
What is AI Middleware, and Why You Need It to Safely Deliver AI Applications - AI middleware is the glue that holds your AI-driven apps together, making sure models, data, and existing systems actually talk to each other instead of breaking everything. It saves developers from reinventing the wheel, adds security layers, and keeps AI projects from becoming yet another unmaintainable mess.
Software development is… - “Software development is holistic.”
Finding Energy to Learn & Build When Burnt Out - "How can [management] support you and your team, shield you and the team, and provide clear direction when they’re barely holding it together?"
Moving on from 18F. - when you no longer agree with your employer’s culture.
How Liberty Mutual was able to jump into generative AI thanks to a clear data strategy and FinOps
I don’t read everything, sometimes I have the robot read it for me. Beware that the robot sometimes makes things up. Summaries are for entertainment purposes only.
The CrowdStrike outage crashed 8.5 million devices, wiped out $5.4 billion, and forced IT leaders to admit that 84% had no real incident response plan. In response, Adaptavist found that 99.5% of companies are now hiring security personnel, diversifying vendors, and possibly sleeping in their data centers for luck.
Trump proposed a 25% tariff on imported semiconductors to force chipmakers back to the U.S., despite most advanced chips being made overseas. Corporate America may be souring on his policies, as erratic tariffs threaten supply chains. Financial analysts determined that economic indicators are surprisingly bad at predicting democratic collapse. Maybe we should blame video games again? The Atlantic reported that young voters have shifted rightward due to pandemic distrust, economic stagnation, and too much time online. Hopefully, those tariffs won’t make their damn video game consoles and vaporwave-colored lights more expensive.
Related: NVIDIA’s share price is already within 1% of its pre-DeepSeek drop, showing that while the market can be extremely efficient, it’s not always efficient at thinking things through.
A UC Davis research center revealed that no one agrees on what a “medium roast” is, despite years of artisanal posturing. Kieran Healy warned that your iPhone knows more about your life than your best friend, your partner, or your mom—and it’s probably judging you for it. And all the fitness tracking in the world still wasn’t enough for the perfect boobs required for The Substance, a satire on Hollywood’s obsession with youth: “Unfortunately, there is no magic boob potion,” Margaret Qualley said, “so we had to glue those on.”
Events I’ll either be speaking at or just attending.
VMUG NL, Den Bosch, March 12th, speaking. SREday London, March 27th to 28th, speaking. Monki Gras, London, March 27th to 28th, speaking. CF Day US, Palo Alto, CA, May 14th. NDC Oslo, May 21st to 23rd, speaking.
Discounts: 10% off SREDay London with the code LDN10.
Off to get a haircut today. I hate getting haircuts, that’s why my hair and beard get wild.
Meanwhile, we’re one away from 900 subscribers. Tell you what: if you’re one of the first several new people to sign up, I’ll send you a bundle of my books.
Lots of links and stuff this episode: AI isn’t a coworker, it’s just automation wrapped in hype. Tech moves fast, but nothing lasts—except bad takes, questionable business models, and the creeping realization that managers just want fewer humans to manage. Meanwhile, we live like kings and don’t even notice.
Good episode of Software Defined Talk this week, especially the opening moment of absurdity where we, yet again, try to solve Europe’s ice problem. Take a listen, or watch the unedited recording.
“Layered, polished mix: As expected, Dre’s meticulous production work ensures that every instrument sits perfectly in the mix, making for a cold, calculated vibe.” Respect. (The robot comments in “Big Egos.”)
"razvedka boyem –reconnaissance through battle: You push and you see what happens, and then you change your position."
Long skim content.
“Everything affects everything else,” Julia Evans. // I mean, I think she just cracked the code to, like, reality there, you know, everything.
“[Sorry, ugly people with good ideas.]” // Alternative funding source.
“A Cup of Coffee in Hell,” not cold, but helpful.
“If it moves, it’s probably alive,” logic.
“Cannabis, crypto or half of North Dakota?” Buttonwood.
“Sen. Mitch McConnell (R-KY), a polio survivor, was the lone Republican to vote against him.” Oophff. When you’ve got that guy voting against you, you know your head is full of bologna.
Making smaller containerized apps - Smaller, more secure, and faster to deploy–because nobody wants a 500MB container just to run “Hello, World.”
The “AI Agent As Coworker” Narrative Is Nonsense - Against the agentic hype: “You have to admire Benioff’s chutzpah in defining digital labor as some brand-new massive market opportunity. But to many, it just sounds like automation. Like every other phase of automation since the beginning of the industrial age, this phase is also about doing more with fewer human resources.” // Meanwhile, the counter case from Seth Marrs.
New estimates have ChatGPT using 10x less power than previously thought - ”it would actually be more energy efficient for you to have an LLM turn off your furnace than to walk across the house to manually turn the dial.”
The danger of relying on OpenAI’s Deep Research - Some valid critiques of Deep Research. Though, none of them really amount to “it’s not good.” To sum up: it can’t do complex research, let alone come up with original ideas or cover obscure topics. It can only tell you what the Internet knows. This is actually not fully accurate: you can also upload your own files and put in your own knowledge. For me, the main problem is the readability of the reports. While they are long and detailed, they’re not written in an engaging way that makes them easy to read. I have a pile of them that I’ve yet to fully pick through. // Yeah, these robots have little creativity and original thought and, further on, they can only do the predictable. But, man, they sure can do a lot of it. // There is an annoying “buyer beware” nature to all this AI stuff. If you’ve used it for years, or even a few months, you de-hype it a lot. You know its limits and to treat it like a dumb tool. But that is not how it is sold at all, and it’s not how people who don’t use it think of it.
All hat, no cowboy - A bicycle for your hands: “Becoming a good programmer takes time, so does becoming an artist. What if all the people with ideas but no time or skills or persistence or real interest could participate and _turn their ideas into the thing?_ Surely non-musicians have great ideas for songs that they could turn into great songs if it weren’t for the inconvenience of musical instruments.” Yes, and: “One way to look at this – not a charitable way, but a view that feels true to me – is that managers view all need for human labor as an inconvenience. In part because they rarely get to experience what it’s like to be closer to a creative process, but also because they constantly experience the inconvenience of checking on deadlines and paying invoices. They would simply rather manage a robot than a human, so the only other people they have to interact with are other executives. Peak economic efficiency.”
One Year With the Vision Pro - Basically, not enough ROI for $3,500.
The Great AI UI Unification - What’s going on here is a classic power user versus normal user UX problem. I’m probably more power user than normal user. I don’t mind the UX; it’s the lack of easy access to docs that explain features that I find annoying. For example, try to find a deep explanation of what’s currently in ChatGPT Pro. There really isn’t one. Even more so, last I looked the help page doesn’t list new features like Deep Research. And most ironically of all, if you ask ChatGPT itself, the answers are not great, or accurate. E.g., I asked about using its reminders and it didn’t even know it had them until I fed it a blog post on the feature. The naming of things isn’t helpful either. // Tech companies are terrible about documentation. While obscure, Apple Shortcuts is a great example. Docs for it are terrible, usually non-existent.
Tech continues to be political - ”I don’t know how to attend conferences full of gushing talks about the tools that were designed to negate me. That feels so absurd to say. I don’t have any interest in trying to reverse-engineer use-cases for it, or improve the flaws to make it ‘better,’ or help sell it by bending it to new uses.”
Internal Product Management, Forrester.
AI Alone Won’t Drive Revenue - What Are You Missing? - Some light ROI talk.
I don’t know, despite this being from the UK (or maybe that makes the point): newsflash, Europe is expensive to live in, mostly by design as far as I can tell.
The Tyranny of Now - ”What Innis saw is that some media are particularly good at transporting information across space, while others are particularly good at transporting it through time. Some are space-biased while others are time-biased. Each medium’s temporal or spatial emphasis stems from its material qualities. Time-biased media tend to be heavy and durable. They last a long time, but they are not easy to move around. Think of a gravestone carved out of granite or marble. Its message can remain legible for centuries, but only those who visit the cemetery are able to read it. Space-biased media tend to be lightweight and portable. They’re easy to carry, but they decay or degrade quickly. Think of a newspaper printed on cheap, thin stock. It can be distributed in the morning to a large, widely dispersed readership, but by evening it’s in the trash.”
Learning from my mistakes… - It’s tough to monetize content that has near zero value or originality and can be easily pirated. This is especially true if the price is wrong. That sort of applies to every product. // “In the end though, you can’t optimize your way out of a black hole, the gravity is too heavy. We were marketing a product at a price point that was material to our customers, and giving them content which was largely available from our competitors for free. All the tweaks in the world couldn’t change that.”
Why are big tech companies so slow? - Because they build, sell, and support a lot of features.
How to add a directory to your PATH - Computers are easy, they said. You just need to read the manual, they said. It’s so intuitive!
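For the record, the basic move, which I can never remember either, looks something like this (a minimal sketch, assuming bash or zsh; your rc file and directory will vary, `~/bin` is just my stand-in):

```shell
# Add a directory of your own scripts to PATH for the current session.
# ~/bin is an assumption -- point it at wherever your scripts live.
export PATH="$PATH:$HOME/bin"

# To make it permanent, append that same line to your shell's rc file
# (~/.bashrc for bash, ~/.zshrc for zsh), then reload it:
#   echo 'export PATH="$PATH:$HOME/bin"' >> ~/.bashrc
#   source ~/.bashrc
```

The linked post covers all the ways even that little bit of config goes sideways.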
I don’t read everything, sometimes I have the robot read it for me. Here are its summaries.
AI agents are not coworkers, according to Forrester analyst Anthony McPartlin, who argued that the idea is little more than a marketing ploy. It’s just automation. His colleague Seth Marrs disagreed, predicting AI will become an indispensable workplace collaborator, though perhaps without an HR complaint line.
Meanwhile, most CFOs planned to increase tech budgets in 2025.
I’m guessing this dude isn’t meaning to be associated with them, but here’s a little insight into how TheTechBros.gov think, which might explain their batshit take on how to run a railroad.
Jack Crosbie mourned the decline of professional dress, noting that executives and tech billionaires get to dress however they want while the rest of us are left to wonder whether wearing Hoka running shoes to work signals liberation or quiet surrender. This, of course, is only a problem if you don’t already own half of North Dakota.
Samir Varma declared free will both an illusion and a practical reality in a post that argued no one—not even you—can predict what you will do next. The brain, it turns out, is deterministic but computationally irreducible, which is a fancy way of saying that you can only know what you’ll eat for dinner tomorrow by waiting for tomorrow. Until then, just assume it’s chicken.
Events I’ll either be speaking at or just attending.
VMUG NL, Den Bosch, March 12th, speaking. SREday London, March 27th to 28th, speaking. Monki Gras, London, March 27th to 28th, speaking. CF Day US, Palo Alto, CA, May 14th. NDC Oslo, May 21st to 23rd, speaking.
Discounts: 10% off SREDay London with the code LDN10.
Good overview from Bryan on the changes people often don’t make when they want to do the whole platform engineering thing:
Platform teams can have a difficult time convincing their management of the importance of developer experience, instead being pushed toward traditional governance and control measures. While these measures might satisfy IT audit requirements, they can severely impact development team velocity. The result is predictable: development teams, under pressure to deliver business outcomes quickly, create workarounds or turn to "shadow IT" solutions.
Yes, and…
It feels like he’s suggesting either (1) it’s possible to do too much governance, security, controls, etc., such that platform teams don’t have enough time for customer work (focusing on developer needs first, security/etc. needs second) or stop doing it altogether, or (2) that the governance, security, control, etc. measures aren’t needed (as much). Of course, us platform vendors would say (3) if you buy our products, our platform will automate a lot of the governance, security, and controls so the platform team can focus on the customers, the developers.
I don’t hear enough multi-year, enterprise success stories about platform engineering. It’s been three (four?) years since Humanitec declared DevOps dead and ushered in the idea for their IDP product (back when “P” meant portal, not platform). Backstage was some kind of gas on the fire to all that. And, yes, here we are. It feels like the similar oddity with Kubernetes: lots of talking, then lots of figuring out how to adopt it, and only a few big enterprise success stories. There are stories, but not enough to justify having destroyed the progress we made with PaaS 5+ years ago. Something is wonky.
What is missing from all of this? Year after year, on this topic, it’s the same conversation.
There’s a digital transformation paradox here too: we’re always on about the urgency of needing to change, then we say there’s not enough change, and yet everything seems to be running just fine. Maybe it could be running even more fine!
One theory: because of the place I work, I don’t see all the success, I just hear about the slogging from the people who want help. People who don’t need help don’t ask for it. Coupled with: thought leaders don’t talk about everything being fine, that isn’t the job. Few people talk about ongoing success, so all I see is struggling.
//
This week the kids are out from school, so I’m trying to figure out vacationing.
Catch-up: yesterday, I went over everything you need for tech strategy and marketing.
I find the restrictions on using public AI chat things baffling given the potential, even obvious, benefit. But I don’t know the CISO perspective and way of thinking. What am I missing?
Yes:
But:
My theories:
It’s just too new and unknown; we don’t even know the risks and (is this a layman’s term?) attack vectors (e.g., the Wiz findings). Better to lock it down and let others fuck around and find out (sidenote: I didn’t realize that we’d standardized on “FAFO” for that in polite conversation, which is lovely to know. Son of YOLO!).
The restrictions on AI use are more about costs and control/ambiguity of work product.
IP. If an employee pays for their own chat things, who owns the IP? With AI image generation, in the US at least, you have no (defendable) copyright on the generated images and video. I’m no lawyer, but it seems like that’d be easy to extend to text and code.
Costs. “We don’t want to pay $5 to $20 more a seat/month - what, in this economy?”
Yes, and…so many work functions could get at least a 3x to 5x boost in “productivity” (or whatever figures du jour, I’m just swig-swigging those numbers). Or, maybe not. Then again, maybe yes! Me: If it’s good enough for tutoring, it’s probably good enough for knowledge workers.
My theory: I think CISOs just don’t trust it because there are so many unknowns. Which is reasonable: there hasn’t been enough time to learn.
Plus, with Altman and Musk involved, you have batshit crazy people who are unpredictable driving the industry. But, you could just use Microsoft, AWS, and Anthropic. If you can get compute cheap enough, have enough ROI to do the capex and opex spend, or can profit from lower-powered/slower AI models, you could host it on your own and get the benefit.
Yes, but…isn’t part of CISO risk modeling balancing out business benefit versus zeroing out benefits/potential growth by clamping down? Over the next two years, if competing firms have looser policy, and they profit without tanking (or are able to pay for/live through risks and still profit/keep share prices high), don’t you lose anyways because [insert the software-is-eating-the-world digital transformation tub-thumping we all used in the late 2010s]?
(I hope you either (a) know me well enough, or (b) are intellectually wise enough, to realize I’m not, at all, saying that security isn’t a big deal. The point is to discuss the reaction and resulting strategy.)
“defiant jazz,” forever in our hearts.
“welcome our newest colleagues and look forward to the smell of Axe Body Spray in our elevators.” NTEU.
“technomancers,” The new arms dealers.
Emerging GenAI Use Cases and Spotlight on Secure Content Generation - If your AI stuff is using the same pool of knowledge as your competitors, you won’t get much competitive advantage. You need to add your own secret info. // “A common challenge, however, when employees use public generative AI tools or foundation models, is a lack of organizational specificity.”
Is Fine-Tuning or Prompt Engineering the Right Approach for AI? - As it says.
Stuck in the pilot phase: Enterprises grapple with generative AI ROI - “More than 90% of leaders expressed concern about generative AI pilots proceeding without addressing problems uncovered by previous initiatives, according to the Informatica report. Nearly 3 in 5 respondents admitted to facing pressure to move projects along faster.”
Extending AI chat with Model Context Protocol (and why it matters) - Adding plugins to the AIs, the hope being that a wide community of developers will form, extending the functionality of the AIs. I’ve seen this in practice with Spring AI and Claude, and it’s very promising, and easy.
Do Marketers Need To Be Writing for AI? - SEO for AI model training. Yup, better start doing that. The good news is, all those SEO-trap pages that you generated (those long ones you never actually show to users/customers) would probably work here…are working here. But it’s likely a good idea to start doing more of this ongoing.
Moderne raises $30M to solve technical debt across complex codebases - ”A quick peek at Moderne’s customer base is telling of who is most likely to benefit from its technology — companies like Walmart and insurance giant Allstate. Its investor base includes names from the enterprise world such as American Express and Morgan Stanley, which, while unconfirmed, is safe to assume have invested strategically.” // From what I’ve seen and heard, seems like good stuff.
Context-switching is the main productivity killer for developers - #1 way to improve developer productivity, 30+ years running: stop interrupting them while they’re coding. // ”Research from UC Irvine shows that developers need an average of 23 minutes to rebuild their focus after an interruption fully.”
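On that Model Context Protocol item above: under the hood, MCP is essentially JSON-RPC with agreed-upon method names like `tools/list` and `tools/call`, so an AI chat client can discover and invoke plugins. Here's a minimal, illustrative sketch of that dispatch idea in Python - not the real SDK or full protocol (the actual spec adds initialization, tool schemas, and transport framing), and the `get_weather` tool is a made-up example:

```python
import json

# Hypothetical tool registry. A real MCP server would also advertise
# input schemas for each tool; this sketch just maps names to functions.
TOOLS = {
    "get_weather": lambda args: f"Sunny in {args['city']}",
}

def handle(request_json: str) -> str:
    """Dispatch a JSON-RPC-style request to a tool, MCP-fashion."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        # Let the AI client discover which plugins exist.
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif req["method"] == "tools/call":
        # Invoke the named tool with the model-supplied arguments.
        params = req["params"]
        result = {"content": TOOLS[params["name"]](params["arguments"])}
    else:
        result = {"error": "unknown method"}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

resp = handle(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Oslo"}},
}))
print(resp)
```

The point of the protocol is exactly this separation: the model decides *which* tool to call and with what arguments, while the server owns the actual implementation - which is why a wide ecosystem of third-party tool servers can form.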
I often ask the robot to summarize articles for me that look interesting…but that I don’t want to read. Below are not the full summaries; rather, I asked it to write a Harper’s Weekly Review style summary for you, lightly edited by a meat-sack, with said meat-sack’s comments in italicized brackets.
Russ Vought quietly reinstated a CFPB procedure essential to mortgage markets, ensuring that banks could continue pricing loans without improvising their own math.
Economists warned that high stock valuations may lead to a decade of low returns, an insight that Wall Street will process just in time to act surprised when it happens. [I don’t really get this one, but that’s the case with most long-term investor “logic,” or lack thereof.]
Some commentary on Infrastructure as Code found that most companies are still doing it wrong, proving once again that automation is only as good as the humans failing to implement it.
DeepSeek spent $1.6 billion on AI infrastructure, amassing 50,000 Nvidia GPUs in a move that may or may not justify the hype surrounding its capabilities. Investors watched as a $2 trillion AI market correction erased valuations faster than a chatbot dodging a direct question. [See above on investor’s “logic.”]
Microsoft, Meta, Alphabet, and Amazon continued their spending spree, ensuring that AI-driven margin compression remains a long-term feature rather than a short-term bug.
Spring AI promises to make generative AI accessible for Java developers, proving that some traditions—like running Java in the enterprise—never die.
Cobus Greyling declared that the future belongs to “agentic workflows,” a phrase that sounds revolutionary but, to my meat-sack friend, mostly means workflows with slightly more AI in them, in a good way.
A debate over AI optimization raged [seems a little strong?] between fine-tuning and prompt engineering, though most developers [or their corporate penny-pinchers] will likely choose whichever option is cheaper that day.
Related: Adam Van Buskirk warned that in a world where all frontiers have been settled, destruction may be the only remaining path forward—an insight that AI companies and their burn rates appear to have already embraced. [See vintage novel and wastebook yes/and above.]
Events I’ll either be speaking at or just attending.
VMUG NL, Den Bosch, March 12th, speaking.
SREday London, March 27th to 28th, speaking.
Monki Gras, London, March 27th to 28th, speaking.
CF Day US, Palo Alto, CA, May 14th.
NDC Oslo, May 21st to 23rd, speaking.
Discounts: 10% off SREDay London with the code LDN10.
I need to think about this a lot more but if you (a) want to see some examples of ChatGPT Deep Research in action, and/or, (b) are interested in industry analyst strategy and M&A scenarios (here, with Gartner), check out these two reports I ran on Gartner’s business and strategy, in the SDT Slack. I printed out the whole chat session, so you can see my prompting, questions it asked, the first report, some back and forth, and then the second report. You can find it in the SDT Slack, or, you could just check it out here:
I did not actually read all the pages, nor did I fact check it. Pretty interesting to see this kind of output though. I’ve used Deep Research for interview prep once so far: it wasn’t very impressive, but maybe that’s because I’d already done all the research myself, and the public info was slim.
Meanwhile, despite headwinds, IT seems to have done OK so far:
In tech product management and marketing, there are three phases of your “story” and execution: strategy, planning, and doing (“execution”). I think a lot of people mix up these phases, talk too much about strategy, don’t do enough planning, often poorly communicate the plans to staff, and are not “throw it all at the wall” enough with doing. I’ve worked in this area for, I don’t know, 20 years. Here’s my latest organized brain-dump from watching people from afar and close-up at many places.
This first phase is about figuring it out: researching the market, observing what others are doing, conducting classic competitive analysis (Porter’s Five Forces or however it gets rolled into the strategy airport book meat-loaf du jour). You’re deciding what to do and making the case to your leadership for why you should do it: getting budget, resources, and permission to work on this for the next 12 months.
For example, if you’re creating a generative AI application:
You’d research generative AI in general.
Find underserved markets you can target with your unique advantages.
Define a product area.
Build the corporate strategy case (market sizing, trends, maybe industry surveys like those from the prestigious “Studies Center of Toronto” to wave around in front of the CFO).
Let’s say you identify that there’s a big market for solo roleplaying with generative AI. You already have street cred in the gaming community. You also have developers who are familiar with coding text-based gaming apps with easy access to agentic AI tools.
Then you do a lot of dogs and cows work: figure out likely buyers (individuals, large organizations, industries, geographies, etc.), budget needed and projected ROI, getting over the IRR hurdle, and a stack of slides to SWOT away any doubts.
This is the stuff the Bain interns will rework into 7 or 18 sub-slide decks—complete with stock images of triumphant businesspeople (tastefully mixed between all the attributes of humanity) in suits shaking hands—and then proudly present to the SVP who definitely didn’t read them and has lots of questions about the executive summary slide, not getting what you’re politely trying to tell them with answers like “Yes, we cover that on slide 43” or “We’ll address it in Section 2” or “Interesting—well, backup slide 193 actually covers this.” And then there’s always that one executive in the meeting who suggests that, instead, you should acquire that software company from Iowa that’s somehow been puttering along for 34 years. They’ve done this so many times (likely 6) that you’ve finally prepared a back-up slide on that topic - will you suggest how that’s a great idea and would lead to synergies if you did that along with your plan, or maybe you’ll just show them the 9-point font table that shows that, sadly, regulatory concerns bring on too much risk due to the recent changes in Brussels?
Yup.
And then you pass that down to all the groups in your company to, like, actually go do.
The second phase is about figuring out how to operationalize things and how you’ll sell it (go-to-market, GTM).
Sorry—I know: “operationalize.” Let’s call it “your plan and a realistic way of how you’ll do it.”
This includes features, scheduling feature releases, choosing platforms and languages, training sales staff, refining your pitch, thinking about marketing campaigns, and all manner of actual things you’ll be doing.
For tech marketing, you’re figuring out the basics of, among other things:
Personas: the types of people, and the roles they have, that will use the product.
Buyers/decision makers, if they’re different than the users.
Your pitch structure, and actual pitch: this could be pointing out a problem someone has, or an opportunity…and showing that your product fixes it.
The marketing basics of messaging, positioning, and value props - probably adapted to different personas and phases of buying.
Content for all of this, including different phases of the buying cycle.
Part of this phase is working on how to sell this product - will you sell directly to individuals, rely on other people and “channels” (re-sellers and VARs) to sell it, sell to large organizations or small ones, etc. If you’re doing something like a Product-Led Growth (PLG) model, you might mix together product features with these marketing and GTM things. PLG relies on frequently making product and UI changes to encourage purchasing, upselling, and preventing churn.
This second part is largely internal facing: it’s your plans for what to do.
In our example, the solo roleplaying with generative AI app, your strategy has identified that selling to individuals is best.
How will you go about doing that? Well:
You’ll need ads, more than likely. Maybe you’ll try to partner with Hasbro to latch onto their D&D franchise (partner synergy!) as a channel/partner. Do you need to talk to gift card companies to make sure you show up on those last-minute gift end-caps at Albert Heijn?
Maybe you need some thought-leadership to build up attention and brand, and/or you could coast off influencers - get those YouTube people to talk about it.
Should you open source parts of it, or add in free tiers to get a really wide start of the funnel and then work on upselling?
Can you start to add in little tweaks and features each week to encourage that upgrading and retain people?
If you’re selling to enterprises, you’ll need a different angle. You still leverage high-volume marketing, but you absolutely must appeal to the executive who signs the checks.
This means a different type of thought leadership and marketing: you want to reach those executives who have a problem (or dreams) that their budget can solve.
As one difference, instead of just influencers looking like they all just smelled a fart in their YouTube thumbnails, you’ll also want industry analysts (probably the ones who farted) to say you’re great, or, at the very least, know you exist and bring you up in the conversations they have with your buyers every week.
Let’s play around with an example: you’ve decided to sell your solo roleplaying with generative AI solution to large organizations. Maybe there’s the pre-Trump era desire to nurture employee mental health and wellness because you believe it makes you more money and, you know, more human. So you, the buyer, want to provide a fun/wellness service: playing D&D during breaks!
So, you put together some white papers and sponsored posts (your own blog and social media, maybe you can get something on TheNewTHAC0.com) about the need for happy employees. After all, could I share with you a recent report from the prestigious Human Resources Management University Studies Center of Toronto that found happy employees are 34% more productive?
Then you connect creating happiness to playing D&D as a way to make them happy. And, hey presto, you get unlimited trips to the TCO-ROI hot food bar.
You’ve found a need that helps you make money: happy employees are more productive employees and, thus, make you more money.
You have a way of satisfying that need: play D&D with generative AI.
You have the tools and conversations happening to convince people of it.
You can find and engage with the buyers.
You can even do some market-segmentation to max your take. For example, you could offer additional features like single sign-on (SSO) integration at a higher price.
And, remember that study from the Institut Parisien de l’Étoile pour l’Étude du Jeu Fictionnel en Milieu Professionnel (IPEEJFMP) which found long-term character development yields 14% more day-to-day productivity (n=300, presumably gathered from a basement next to the catacombs). Isn’t it worth it to pay a little bit more per seat/month to persist sessions across plays?
And, for long-term employees, surely you need the ability to keep those sessions past 12 months, right? Now, you might be thinking: but what about that new EU regulation? Don’t worry, you get that with the Suite - check out the six column pricing page and fill out the contact form in column six, the one labeled the “Enterprise of Many Solutions Suite.”
And so on.
The third phase is execution: creating all the deliverables you plotted out in your plan. Slides, landing pages, blog posts, pitch decks, product demos—this is the “keep the sausage factory running” level of detail.
How do you pitch to customers? What are the discussions you have in sales meetings? You also need to put together the actual “content” and work product. What’s our content schedule? What do the slides look like? We need a feedback loop to hear objections customers have (so we can counter them), feed in what they respond to (so we can do more), and do some competitive research. We also need to produce actual thought leadership and demos of the system. Let’s engage with those influencers to get them to write reviews and recommend it.
Perhaps we should arrange some dinners in major cities where we invite Ed Greenwood to come speak for 20 minutes on how to come up with engaging D&D adventures, especially focused on long-term story arcs (remember that IPEEJFMP study?), and during the main course (wagyu steak, heritage potatoes, and, for vegans [or those eating healthier], the wild mushroom Wellington in puff pastry with thyme-shallot polenta) we go over how we've adapted that advice into our product.
You also need a company story—the “why” stack of increasingly large fried eggs. Some brand identity: are you a kindly sage, a disciplined archer, or a swashbuckling gambler forging new frontiers? Develop a perspective on your market problem that is true, utilitarian, and stands out. Maybe you coin a tagline like “Roll a natural twenty every quarter.” (The stack of fried eggs is a bonus if you do it well because it will motivate employees as well.)
Now, what’s important is to just go for it. Try as many things as possible, do at least some analysis of whether they work, and adjust if needed. You start throwing everything against the wall and narrow down to what works, maybe revisiting every six months. Eventually you’ll figure out what works, but only if you pay attention to what does and doesn’t.
Sadly, whiteboarding sessions turned into slides or even Miro boards aren’t enough. You have to ford the swim-lanes to get to the isle of operationalization. Too many people suffer from pipeline constipation in this phase. If you find that you’re not publishing regularly, take a big swig of quality-through-quantity, stick to the BRAT diet, and plan to stay near your Google Analytics dashboard for the next 48 hours.
Now, I don’t have much to say about how you run and manage your business. Sales plans, career paths, what a “staff principal senior engineer” does, when to have that SW-EMEA QBR in Cologne, do you use sticky notes or Google Docs? But, you know, you need all that. I couldn’t comment on it: I haven’t ever worked on that part of the meat-loaf.
Execution is important, but the toolbox of what to do is well known. You’ll need to try a lot of things and track what works and what doesn’t.
I think planning is the most important part and, often, suffers from three things:
Not enough of it, and it's not taken seriously. If you're doing this kind of work on an annual basis, in a tech company, it will probably get stale as the year goes on. You need to revisit and refresh it.
This means you need it to be lightweight, spend less time on each rev of it so that you can rev it frequently. I realize I'm saying in (1) to be more comprehensive, but then saying to be lightweight. The point is to figure out how you can structure the work (and work product) to make it easy to revisit and revise frequently.
Individuals and teams don't understand what they should do. This could be because it's not clear, it's buried in an 80-page slide deck called "Copy 2H2028FY Strategy Track FINAL - Mort-230413 COPY - v10.4.b.pptx," or your plan isn't actionable - but, most often, it's because it's under-communicated.
The third one is worth focusing on because I think it's the most frequent problem for planning stuff.
Here, if you mix too much strategy into planning, you'll be too vague and high-level. No matter how many slides you produce and polish proving that it's a good idea, and why your organization is well positioned (competitive advantage) to profit from it…if you don't detail plans - how to do it - people won’t know what to do. You can't execute strategy, you can only execute plans.
No time for links and wastebook today. I’m working on v10.4.g of the deck for the rest of the day while I stay close to my dashboards, eating this toast.
This episode: AI is coming for your software job, or at least for the parts of it you actually enjoyed. Meanwhile, businesses are still stuck in pilot purgatory with generative AI, IT leaders remain unconvinced of AI’s ROI, and Java is apparently coming for Python’s AI crown. The economy may be changing not because of interest rates or labor shortages, but because everyone is drinking more water and eating fewer snacks. Also: MP3s are free, remote workers may be getting pay cuts, Kubernetes vs. Serverless, and a reminder that laws are now a gentlemen’s agreement, and we are not ruled by gentlemen.
You Didn’t Notice MP3 Is Now Free - The MP3 format, once a staple for digital audio, is now free due to expired licensing. However, its significance has diminished with the rise of streaming services and faster internet speeds, making file sizes less of a concern. While this change is notable for developers, the general population is largely unaffected by the shift away from MP3.
When will remote workers see their pay cut? - This is sort of like paying people in different regions different salaries for cost of living. Of course, it doesn’t address the actual question: does WFH vs. RTO actually have an effect on business success?
LinkedIn revenue: ”Microsoft’s bottom line — the division delivered $16 billion in revenue in 2024, more than The New York Times, Zoom, and Docusign put together.” Sherwood.
Reflecting on the ROI of marketing efforts I’ve done recently - “Reflecting on the ROI of marketing efforts I’ve done recently: Print isn’t that useful unless it’s with a writer with a voice (e.g., a substack). Audio and video really make an impact. You want to be inside someone’s EarPods. Speaking at trade shows is helpful in expanding your network”
After 30 years of code, Java remains an enterprise cornerstone - ”Nearly 7 in 10 respondents reported more than half of their organization’s applications run on Java. Roughly half are now leveraging the programming language to build AI applications.”
2025 Is the Last Year of Python Dominance in AI: Java Comin’ - Asked if he believed Java could overtake Python for leadership in AI development, Arnal Dayaratna, an analyst at IDC, told The New Stack: “Yes, definitely, this could happen, especially since Java is unparalleled for the development of enterprise-grade, mission critical applications at scale.”
How real-world businesses are transforming with AI - with 50 new stories
Stuck in the pilot phase: Enterprises grapple with generative AI ROI - “More than 90% of leaders expressed concern about generative AI pilots proceeding without addressing problems uncovered by previous initiatives, according to the Informatica report. Nearly 3 in 5 respondents admitted to facing pressure to move projects along faster.”
IT decision makers unconvinced of returns from AI investment - “Nearly half of respondents have yet to adopt AI at all, with 36 percent indicating they plan to start using it within the next 12 months, while a further 13 percent are still at the stage of considering or evaluating it but have no plans yet.”
70s Sci-Fi Art - Good newsletter, lots of great styles.
“It’s bedtime again in America.” Among many more clever phrases.
“Broligarchs.” Brooke Harrington.
“The aphoristic rule of Washington meetings is: The more you know about what happened in it, probably the less fruitful it was.” Politico.
Plastic straws are a shibboleth.
“They used to say that the sun will never set on the British Empire because God doesn’t trust the bastards in the dark.” Warren Ellis.
“Immediately, we were cocktailed to the max. Some of us more than others.” David Plotz, Political Gabfest for February 6th, 2025.
“If you work from home, you may go several days without speaking to another human being, but there are also disadvantages.” Laura Manach.
“laws have become a gentlemen’s agreement and we are not ruled by gentlemen.” jenn schiffer.
Great headline: “Bill Gates Says He Donated $100 Billion Of His Wealth For Charitable Causes, But He ‘Didn’t Order Less Hamburgers Or Less Movies.’”
I often ask the robot to summarize articles for me that look interesting…but that I don’t want to read. These are not the summaries, but I asked it to write a Harper’s Weekly Review style summary for you. What do you think?
Sam Altman claimed that AI intelligence scales logarithmically with compute and that costs are falling tenfold each year, which, if true, means AI will soon be as cheap and omnipresent as tap water. Tim O’Reilly argued that AI will not replace programmers but instead turn them into managers of digital workers, much like software has done to factory labor, and that those who fail to embrace AI assistance will be the first to fall behind. A group of researchers suggested that AI-assisted development is most effective when structured prompts are used, such as API simulators that let engineers refine interfaces before writing any actual code.
John Cochrane noted that eliminating taxes on tips may have little impact, since most low-income workers already pay little to no federal income tax, unless payroll taxes are also exempted. Martin Weitzman (1974) explained that when costs are highly uncertain, tax-based regulations, such as carbon pricing, are often more efficient than hard quotas, which may force companies into inefficient or unnecessarily expensive compliance.
John Ganz observed that Silicon Valley billionaires, having built platforms that fueled left-wing activism, turned reactionary when they realized their own workers and user bases were using those tools to organize against them. A DIY survival guide advised that the most effective forms of resistance against an authoritarian regime are often mundane—delaying bureaucracy, documenting history, and making sure that those fleeing oppression have somewhere to sleep.
Cloud modernization efforts continued as businesses struggled with “creaky cloud infrastructures”, with Computer Weekly reporting that legacy IT assets remain a major obstacle to data-driven innovation. The importance of modernization, they noted, isn’t just about performance—it’s also about trust, security, and ensuring that businesses can actually use the data they collect instead of just hoarding it.
Meanwhile, new data suggested that GLP-1 weight loss drugs are rewiring consumer spending habits, with James Dillard noting that purchases of fitness trackers are up 183%, water filtration systems up 28%, and skincare products up 12%. Alcohol sales are down, dried meat snacks are down, and refrigerated salad dressings—perhaps the unspoken victims of shifting metabolic priorities—are down nearly 20%. The new economy may be built not on cheap credit or AI-fueled productivity but on fewer late-night snacks and an obsession with hydration.
Treasury Secretary Scott Bessent, now acting director of the Consumer Financial Protection Bureau (CFPB), issued a directive halting the agency’s supervision of non-bank entities, notably shielding Elon Musk’s X from regulatory oversight. This move aligns with Musk’s vision of transforming X into an “everything app,” reminiscent of China’s WeChat. Meanwhile, Amazon announced plans to invest over $100 billion in artificial intelligence infrastructure for its cloud division, Amazon Web Services (AWS), in 2025. This investment nearly matches AWS’s annual revenue, highlighting the company’s commitment to AI advancement.
In the political arena, President Donald Trump proposed extending the 2017 tax cuts, expanding the State and Local Tax (SALT) deduction, and eliminating taxes on tips, overtime pay, and Social Security benefits. These initiatives could reduce federal revenue by $5 to $11 trillion over the next decade, potentially increasing the national debt to between 132% and 149% of GDP by 2035. Reflecting on societal shifts, an essay in The Point Magazine observed that we are told, with increasing frequency, that we are living in a post-feminist age.
In a candid blog post, designer Elizabeth Pape of Elizabeth Suzann discussed the challenges of scaling a self-funded fashion business, touching on topics like pricing, consumption, and the complexities of ethical production.
I did a lot of AI stuff above, maybe too much and too long. But, we’ll see. Sorry if it pissed you off. Tell me if you liked it (and want more) or do not like it and want less/none:
I signed up for ChatGPT Pro this weekend. Ben Thompson’s overview of it in last week’s Sharp Tech made me very interested. Since I live in the EU, I can get a refund within 14 days, so that $200/month price isn’t a barrier to trying it. So far the Deep Research thing is OK/good. Since it’s going to be moving into the Plus tier (right?), even if it’s limited to some number of uses a month, I don’t think I need to pay $200 a month. Still, it’s been great, especially for making a parent’s guide to helping our kids out with homework.