Coté

What AI is good at, or, please don't fuck up my job and ETFs

I’m clearly a big fan of AI and believe it’s helpful in many ways.

I feel comfortable with that because I’ve used it for over two years now and rely on it daily for a wide variety of tasks, both work- and personal-related. That means I know exactly what it’s capable of, what it’s good at, and what it’s not good at. Me and the robot have a good relationship: we know how to work with each other.

From Katerina Kamprani's The Uncomfortable collection.

Generative AI is good at text

Right now, generative AI is only good at working with text. It generates text—if you can reduce audio to text, it excels at that, and if you can convert text to audio, it’s equally proficient.1

Text can take many forms, and generative AI handles them well. As others have noted, if you want to shorten text, it’s almost amazing at that. If you want to summarize text, it’s pretty good at that. And if you need a summary to help decide whether to read the full text, it’s fantastic.

Learning and strategy

If you want to learn and understand a topic, part of that process involves condensing large amounts of text into a shorter, more digestible form—and it’s pretty good at that. All of the consumer agentic things are going out there and searching the web for you to find that text and then summarizing it all. If what you want to learn and understand is well documented on the World Wide Web, it is good at that. If you want to get insights into secret, obscure, poorly documented things - stuff that has little public text - the AI’s Deep Research is going to be shallow bullshit.

Even when it’s good, with learning and understanding, you need a finely tuned bullshit detector. Once you detect the bullshit, you can ask again or go search on your own. But really, you need a bullshit detector in all of life—robot or meat-sack. If you don’t have one, build one quickly. The benefits you get from that will last longer and far outweigh the benefits you’ll get from AI.

This aspect of learning means it’s not so great at company strategy. If you and your competitors are all using the same public text, you’re all going to get the same answer. There will be no competitive advantage. What’s even worse now is that it’s effortless for your competitors to understand your strategy and predict the ones you’d come up with…if you only based it on public text. You have to figure out how to get your secret text in there. As with all interactions with the robot, you have to bring a lot to the chat window. The quality of what you bring will determine the quality you get from the robot. Garbage in, garbage out. Which is to say: nothing valuable in, nothing valuable out.

Back to pedantry: It’s proving to be good at teaching-by-modeling: it shows you what The Big Report could look like, explains what the fuck those 80 slides your teacher gave you are asking you to do in your essay, and serves as an additional tutor and instructor when you can’t afford to hire one.

From Katerina Kamprani's The Uncomfortable collection.

Writing & creating

The robot is also effective as a co-writer. In other words, you become a co-writer with the robot. It can generate text endlessly, and if you collaborate with it, you’ll get great results. Just as you would with any co-writer (especially a ghostwriter or a co-author whose name appears in smaller print on the book cover), you need to get to know each other, learn how to work together, and figure out the style you want. Claude is great in this regard—it has a simple tool for learning and refining your style. If you haven’t spent time teaching the robot your style, you should do so.

You can reduce videos, podcasts, scripts, even small talk, to text. Recall that AI is good at text, so it will be OK at those too.

It’s okay at imagining. I play D&D with it, and it has gotten a lot better at handling mechanics over the past two years, but it still remains rather boring and predictable when it comes to unassisted imagination. If you feed it published D&D adventures, it does okay. But just try having it come up with ten dwarf names—they’ll all be variations on “Iron Shield” or “Rock Breaker” and the like.

It’s really good at writing code. And guess why? Code is text. Is it good at creating multi-system applications and workflows used to, say, coordinate how an entire bank works? Probably not—very few people are even good at that. And then there’s the whole process of getting it into production and keeping it running. If you think the robot can do that now—or ever—¡vaya con Dios! Please report back if you survive.

The AI is bad at perfection

What about tasks like optimizing supply chains? Maybe one day the robot will be good at other B2B tasks, but I suspect that for many years good old-fashioned machine learning will keep doing just fine there.

Don’t use AI for tasks where being 100% correct is important. If the consequences of being wrong are dire, you’re going to get fucked. Worse, someone else is going to get fucked.

But, if you’re using the robot for a system that tolerates—or even thrives on—variety (errors), it’s great. “Anti-fragile” systems? I don’t really know what that means, but: sure. Are you brainstorming, whiteboarding, and exploring? Yup, great at that. Using it for therapy? It’s fascinatingly good at that.

You get the idea: if you’re using generative AI for something where you can recover from errors quickly, there is “no right answer,” and the task is text-based, then yes, it is great for that—and you need to start using it now.

From Katerina Kamprani's The Uncomfortable collection.

Thirty days to defuse the time bomb of false expectations

Let’s build up to my concern:

  1. Text is all generative AI is currently good at.

  2. Most people have not used AI for 30 days, let alone 12 months, let alone two-plus years. I’m just guessing here. Surveys tell dramatically different stories. But most surveys show only a small amount of use, and only recently.

  3. So, I don’t trust that most people yet understand what AI is good at—they often imagine it’s capable of far more. You have to use it to know it, and learning by doing is a lot of effort and usually takes longer than your ROI model’s horizon.

That’s “hype,” sure, but it’s more like misunderstood inexperience. It’s classic diffusion of innovation (ask Chatty-G to tell you about that concept; I bet it’ll be pretty good). Sure, that diffusion has been getting faster, but if humans are involved, we’re still talking decades—at least one decade.

My concern here is that once we collectively set expectations beyond reality, the fall is bigger, and the cost of recovery becomes too great. Worse yet, people waste a lot of time chasing AI fantasies. They thought there’d be 100x returns when, in reality, there were only 10% or even 25% returns. You fire employees, take on investment and risk to overhaul your business, and spend time on those AI fantasies instead of pursuing other strategies. And then, when you learn what AI is truly/only good at, you’ve invested everything—only to find that your assumptions, ROI models, and, thus, investment were a fantasy. Plus, once you build it, you now own it forever, no matter how shit it is. Plus, you played a game of chicken with opportunity cost, and opportunity cost won.

So, don’t do that. Don’t bet the farm on something you haven’t used firsthand for at least 30 days, and certainly don’t stake our jobs or our index funds on it.

Wastebook

  • “I was the man of my dreams,” Peter on Peter.

  • “the unexampled,” on Gary Snyder.

  • And, from Gary: “this romantic view of crazy genius is just another reflection of the craziness of our times… I aspire to and admire a sanity from which, as in a climax ecosystem, one has spare energy to go on to even more challenging – which is to say more spiritual and more deeply physical – things”

  • “Mandatory Commute policy,” synonym for RTO.

  • “autogolpe,” self-harm.

  • “If you change it, you own it,” if only.

  • “monomaniacal dork squads,” power-up.

  • “a steaming pile of, um, DOGEshit,” deep analysis.

  • “Our Son of a Bitch,” various.

  • “You can’t sell a sandwich with secret mayo,” Noah’s quest continues.

  • There’s a first time to forget everything.

  • “[rhapsode](https://en.wikipedia.org/wiki/Rhapsode).”

  • “The Deeply Spiced Meatballs That Call Back to Haiti.”

  • “Features of the future,” a CF Day topic.

  • When submitting a conference talk and given the option to select “audience level,” I’ve started always selecting “intermediate.” I don’t know why, or what that means, but it’s some kind of fun.

  • “LLM aka Large Legal Mess,” don’t use the robot for lawyer-shit.

  • “inspo,” AI hair.

  • “If I’d wanted chatGPT to answer, I’d have asked chatGPT” @byronic.bsky.social.

  • "My leather jacket tailor never flinched, so I'm not sure what's wrong with all the Finance Bros."

  • Deep is the new plus.

Relative to your interests

Predictably, a bunch of AI stuff of late.

  • The reality of long-term software maintenance - “In the long run maintenance is a majority of the work for any given feature, and responsibility for maintenance defaults to the project maintainers.” Related:

  • Top EDI Processes You Should Automate With API - Tech never dies. Helpful consequence: take care of it before it takes care of you.

  • How’s that open source licensing coming along? - “The takeaway is that forks from relicensing tend to have more organizational diversity than the original projects. In addition, projects that lean on a community of contributors run the risk of that community going elsewhere when relicensing occurs.”

  • Key insights on analytical AI for streamlined enterprise operations - “The big issue, whether it’s generative or analytical AI, has always been how do we get to production deployments. It’s easy to do a proof of concept, a pilot or a little experiment — but putting something into production means you have to train the people who will be using this system. You have to integrate it with your existing technology architecture; you have to change the business process into which it fits. It’s getting better, I think, with analytical AI.” // It’s always been about day two.

  • Why I think AI take-off is relatively slow - My summary: humans resisting change is a bottleneck; also, humans not knowing what to do with AI; current economic models can’t model an AI-driven paradigm shift, so we can’t measure the change; in general, technology adoption takes decades, 20 for the internet, 40 for electricity. // AI is a technology and is prey to the usual barriers and bottlenecks to mass-adoption.

  • GenAI Possibilities Become Reality When Leaders Tackle The Hard Work First - Like any other tool, people have to learn how to use it: “Whatever communication, enablement, or change management efforts you think you’ll need, plan on tripling them.” // Also, garbage in, garbage out: “GenAI can’t deliver real business value if a foundation is broken. Too many B2B organizations are trying to layer genAI on top of scattered, siloed, and outdated technologies, data, and processes. As a result, they can’t connect the right insights, automations stall, and teams are unsure of how to apply genAI beyond basic tasks.”

  • A.I. Is Changing How Silicon Valley Builds Start-Ups - “Before this A.I. boom, start-ups generally burned $1 million to get to $1 million in revenue, Mr. Jain said. Now getting to $1 million in revenue costs one-fifth as much and could eventually drop to one-tenth, according to an analysis of 200 start-ups conducted by Afore.” // Smoke 'em if you got 'em…

  • The AI Experience - What’s Next For Technology Marketing - Back up the truck and dump the enterprise marketing slop: “Did you consider that soon you may be marketing to GenAI agents of your customers?” // And: “While the term “Account Based Marketing” or ABM is still floating around, less marketers are focused on continuing to enable personalized marketing for a subset of the customer and prospect base.” // Instead of having to craft the personalized content, you have the robot do it. Then the marketing skills you need go back to the mechanics of running campaigns. // Yes, and, this is an example of my “bad things are bad” principle. If the slop you get is bad, it will be bad. But it can also be good, in which case, it will be good.

  • How Ikea approaches AI governance - ”Around 30,000 employees have access to an AI copilot, and the retailer is exploring tailoring AI assistants to add more value. Ikea is also exploring AI-powered supply chain optimization opportunities, such as minimizing delivery times and enhancing loading sequences for shipments to minimize costs. AI in CX mostly targets personalization. // ‘“I’m not just talking about generative AI,” Marzoni said. “There’s some old, good machine learning models that are still absolutely delivering a lot of value, if not the majority of the value to date.”’

  • U.S. Economy Being Powered by the Richest 10% of Americans - One estimate: in the US, “spending by the top 10% alone accounted for almost one-third of gross domestic product." // Never mind the, like, morals?…doesn’t seem very anti-fragile. // “Those consumers now account for 49.7% of all spending, a record in data going back to 1989, according to an analysis by Moody’s Analytics. Three decades ago, they accounted for about 36%.”

  • Why it’s nice to compete against a large, profitable company - Because they can’t lower prices on their core products lest Wall Street freak-the-fuck out.

Logoff

See y’all next time! Gotta go run a few ideas by my pal, the robot.

1

It can kind of convert text to images, but only if you like the same people over and over or are an anime fan. If you like a perfectly chiseled chin, AI generated images are for you. You can put a lot of work into getting your text in shape to produce something unique that looks real. In this respect, it gives a tool to people who can’t do graphics (like me!) which is actually pretty amazing. But it can only go so far. Just try to create a “realistic” looking person instead of a perfect fashion model. It is near impossible. Of course, this is because it’s not trained on enough images yet, I guess.

Using AI for HR - management and workers

Enterprises pouring money into GenAI and CEOs treating AI agents like cheap labor - yet only 25% see ROI right now. Vibes: “Europe’s long holiday from history is over.” Also: IBM does RTO, predictions about DOGE layoffs, the term “platform” remains a favorite excuse for overcomplicated tech, and “autonomous killer robots.”

AI comes for HR

What to make of using AI to automate HR processes? Melody Brue and Patrick Moorhead look at Oracle’s work there:

The agents are designed to support several key facets of the employee experience, including hiring, onboarding, career planning, performance reviews and the management of compensation and benefits.

Yes, and…

(1) If it’s bullshit work (“busy work”), eliminate it, don’t automate it. The thinking here promises to automate bullshit work like manually formatting performance reviews, copy/pasting boilerplate onboarding checklists, clicking through timecard approvals, writing job descriptions from scratch, and filling out endless HR forms. Yes, and…these are tasks that should probably just be eliminated or drastically simplified rather than lovingly preserved in AI amber. I’ve written job descriptions several times and there is something wrong-feeling about the process and the results. The same with performance reviews, from both sides of the review. If you feel like you’re doing bullshit work and you get excited about automating it with AI, why not eliminate it instead? Or, you know, fix it.

(2) How could workers use similar AI stuff to maximize their advantage versus management? In a heavily bureaucratic HR system, reports and analysis are important: you need to prove that you deserve a promotion, more money, whatever. You’re often weighed against relative metrics: how much do people get paid in a region, how did you perform versus other people on a bell curve (or ranking), etc. Putting together those reports is tedious and your managers may not put in the effort. Have the AI do it for you. You could also look at those wordy job descriptions to extract what your role is responsible for doing. And when you need to come up with annual MBO/KPI/OKR/whatever the next TLA is for “goals,” have the AI look at the goals trickling down from above and come up with yours. Then have it track what you should be doing. The AI could be useful for negotiating salary too: how much should you even be asking for? What is your BATNA? What is theirs?
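For the tedious report-assembly part, even a little scripting can do the gathering before you hand anything to the robot. A minimal sketch of that idea, in Python (the function name, data shapes, and numbers here are all made up for illustration; pipe the resulting prompt into whatever chat model you use):

```python
# Hypothetical sketch: assemble a promotion-case prompt from your own records,
# then hand the prompt to an LLM. Everything here is illustrative, not a real API.

def build_review_prompt(goals, accomplishments, peer_salary_range):
    """Combine tracked goals, logged evidence, and market data into one prompt."""
    lines = ["Draft a performance self-review arguing for a promotion.", ""]
    lines.append("Goals and outcomes:")
    for goal in goals:
        # Tie each goal to whatever evidence you logged during the year.
        done = accomplishments.get(goal, "no evidence logged")
        lines.append(f"- {goal}: {done}")
    low, high = peer_salary_range
    lines.append("")
    lines.append(f"Peer salary range for this role: {low}-{high}.")
    lines.append("Cite each accomplishment against its goal.")
    return "\n".join(lines)

prompt = build_review_prompt(
    goals=["Ship the Q3 migration", "Mentor two juniors"],
    accomplishments={"Ship the Q3 migration": "shipped two weeks early"},
    peer_salary_range=(90000, 120000),
)
print(prompt)
```

The point isn’t the code, it’s the habit: you keep the records, you control what goes into the report, and the robot just does the wordy part at the end.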

(3) Could you run the robot on, say, the last 5 years of reviews and then compare it to what the human evaluators did? Is the robot better (less bias, giving feedback that improves worker performance, finds low performers, etc.), or is it worse (wrong analysis leads to a less performant workforce)? As a worker, though you might not actually have access to full reports, you could try to find out what the real performance measures are. Load in job descriptions, give an overview of what highly rewarded people did, and then see what attributes and actions get rewarded. Never mind what the official metrics are, target those.

There’s a general theory for all AI use here as well: if what your AI produces is something that can just be consumed and used by another AI, it’s probably bullshit work that you can reduce to a quick email or can be eliminated entirely.

***

For him, of course, it was a business opportunity. He was part of what I would come to see as a savvy minority of people and companies capitalizing on AI fatigue.

Meanwhile, this is a fantastic piece on the state of HR tech from the worker’s perspective. There’s plenty of AI talk in it. It’s also fun to see what tech conferences and marketing look like to (I presume) outside eyes. We are such dorks and, often, tasteless:

While the word people was plastered everywhere as both a noun and an adjective, the workers of the exhibit hall's collective imagination were not real, three-dimensional people. They were shadows without substantive interests or worries beyond the success of their companies. That was the only way these products could be pitched as win-wins. But, come on. We were in Las Vegas - everyone here knew the real money comes from making sure enough people are losing.

Fresh Podcasts

There are new episodes of two of my podcasts, listen to ‘em!

Classroom History, 1938. Philip Evergood.

Relative to your interests

  • AI Agents: Why Workflows Are the LLM Use Case to Watch - The agentic app revolution isn’t a transformation story. It’s a modernization story; a chance to solve small problems with the team you already have.

  • AI Agents and the CEOs - “At the risk of saying the quiet part out loud, the way CEOs are talking about agents sure sounds like how they talk about employees–only cheaper!” // “Companies are dedicating significant spend to AI–approximately 5% of the revenue of large enterprises (revenues over $500 million) according to one survey by Boston Consulting Group, and yet only 25% claim they are seeing value from their AI investment.”

  • To avoid being replaced by LLMs, do what they can’t.

  • Learning from examples: AI assistance can enhance rather than hinder skill development - Could be that AI use makes you better. // “Decades before the advent of generative AI, the legendary UCLA baseball coach John Wooden declared that the four laws of learning are explanation, demonstration, imitation, and repetition (31). Few learners have access to the best human teachers, coaches, and mentors, but generative AI now makes it possible to learn from personalized, just-in-time demonstrations tailored to any domain. In doing so, AI has the potential not only to boost productivity but also to democratize opportunities to build human capital at scale.” // Also, some prompts used to evaluate writing quality. The one rating “easy responding” is interesting: how easy is it to (know how to) respond? Maybe good for CTAs.

  • Gartner Survey Reveals Over a Quarter of Marketing Organizations Have Limited or No Adoption of GenAI for Marketing Campaigns - “Nearly half (47%) report a large benefit from adopting GenAI for evaluation and reporting in their campaigns.” // The reverse number is more interesting: 77% of surveyed marketing people say they’re using generative AI for marketing stuff. Related:

  • OpenAI reaches 400M weekly active users, doubles enterprise customer base - “The ChatGPT developer currently has 2 million paying enterprise users, twice as many as in September.” With “400 million active weekly users, a 33% increase from December.” And: “The New York Times reported in September that the company was expecting to end 2024 with a $5 billion loss on sales of $3.7 billion.”

  • 2025 is the breakthrough year for Generative Enterprise — and partnering with a capable services partner is critical - “[S]pending on GenAI is rising (HFS data suggests enterprise investment is rising by more than 25% on average into 2025), we start from a low base. We estimate enterprise spending on GenAI in 2024 accounted for less than 1% of global IT services spending. This is just one illustration of how far we still have to go.” // Plus, a whole bunch of commentary in enterprise AI.

  • Data is very valuable, just don’t ask leaders to measure it - AI ROI is difficult: “in a survey of chief data and analytics (D&A) officers, only 22 percent had defined, tracked, and communicated business impact metrics for the bulk of their data and analytics use cases… It is difficult, though: 30 percent of respondents say their top challenge is the inability to measure data, analytics and AI impact on business outcomes”

  • A Simple Definition Of “Platform” - “a product that supports the creation and/or delivery of other products.”

  • IBM co-location program described as worker attrition plan - From the RTO-as-not-so-stealthy-layoff files.

  • YouTube (GOOGL) Plans Lower-Priced, Ad-Free Version of Paid Video Tier.

  • On European Defence, Energy and Growth - Imagining big changes in European priorities: changing policy to get more energy, more emphasis on militaries.

  • No Rules Are Implicit Rules - The European view on enlightened American management policy: “Greg, I hate to bring it to you, but working for ten fucking hours a day is not the normal hour. I don’t care if you live in America or not. The section continues with other “grand” examples of managers taking “up to” 14 days a year off to show their employees they should do so too. Let’s assume the best here: 14 workdays are almost three weeks. A year. The statutory minimum for full-time employees working a forty-hour week is 20 (thus 4 weeks) in Belgium. Oops.”

  • Rage Against the Machine - Perceptive: “They’re going to try two or three things they think will solve everything, which will be thrown out in court. I assume the first thing they’ll do is some kind of hiring freeze, and then, after three months, they’ll realize agencies have started to figure out ways to get around it. And then they’ll try to stop that, and they won’t be able to do that. Then they’ll try to make people come to work five days a week, and that’s going to be difficult because a lot of these agencies don’t have offices for these people anymore. I think it’s going to be one thing after another, and maybe after four years the number of employees will be down 2 percent—maybe.” // The layoff playbook DOGE is working comes from the tech world, and it sort of works there. But that’s because tech companies can die, be acquired, or be reborn. In a tech company, you rarely starve the beast (or amputate parts of it) and have it survive. Do we want the same outcomes with government?

Read by the robot

I don’t read everything, sometimes I have the robot read it for me. Beware that the robot sometimes makes things up. Summaries are for entertainment purposes only.

Kelsey Hightower declined to join the AI gold rush, advocating instead for a glossary of tech jargon to remind everyone that AI is not new, just rebranded.

Platform engineering teetered between breakthrough and bust, with some heralding it as the savior of DevOps while others braced for its descent into Gartner’s “trough of disillusionment.” Several years ago (February, 2023) Sam Newman insisted that calling something a “platform” is often just an excuse to overcomplicate things, suggesting “Delivery Enablement” as a rebrand.

Meanwhile, IBM Consulting offered enterprises a guided tour of “Agentic AI,” a term that likely needs its own entry in Hightower’s proposed glossary.

Wastebook

  • “effortful,” AI study.

  • “Topological qubits,” MSFT.

  • “Deliberately they don’t give a shit,” Emily, Political Gabfest, February 20th, 2025.

  • And: “chaos entrepreneur,” John.

  • “Europe’s long holiday from history is over,” John Naughton.

  • "This [Trump] administration cares about weapon systems and business systems and not ‘technologies.’ We're not going to be investing in ‘artificial intelligence’ because I don’t know what that means. We're going to invest in autonomous killer robots." Fund the outcomes, not the tech.

From Dead Motels, USA.

Conferences

Events I’ll either be speaking at or just attending.

VMUG NL, Den Bosch, March 12th, speaking. SREday London, March 27th to 28th, speaking. Monki Gras, London, March 27th to 28th, speaking. CF Day US, Palo Alto, CA, May 14th. NDC Oslo, May 21st to 23rd, speaking.

Discounts: 10% off SREDay London with the code LDN10.

Logoff

Nothing to report today.

Semiconductors, Security, and the DeepSeekFreak, along with Ass Semiotics

In this episode: AI eschatology, assology, and a deep, intellectual commitment to hating mayonnaise. Tariff trouble, security panic, and NVIDIA shrugging off DeepSeek. Young voters shift rightward, no one agrees on ‘medium roast,’ and Hollywood still relies on glue to critique its own youth obsession.

Wastebook

  • “immanetize the AI eschaton,” Charlie Stross.

  • “The ass is a very strong symbol of how our body is not neutral in the public space. How our body is constantly scrutinized, has been shaped to please the man’s eyes, has been seen as a body part that was objectified, that was detached from the person who was simply bearing it.” Assology. See related boobology below.

  • “This is the number one YouTube channel about hating mayonnaise.” Noah.

  • “LLMs are good at the things that computers are bad at, and bad at the things that computers are good at,” Slides Benedict.

  • “If I live, I must fully accept the game; I must have the most beautiful life. I don’t know why I am here, but since I remain here, I will construct a beautiful edifice.” A young Simone de Beauvoir.

Read by the robot

The CrowdStrike outage crashed 8.5 million devices, wiped out $5.4 billion, and forced IT leaders to admit that 84% had no real incident response plan. In response, Adaptavist found that 99.5% of companies are now hiring security personnel, diversifying vendors, and possibly sleeping in their data centers for luck.

Trump proposed a 25% tariff on imported semiconductors to force chipmakers back to the U.S., despite most advanced chips being made overseas. Corporate America may be souring on his policies, as erratic tariffs threaten supply chains. Financial analysts determined that economic indicators are surprisingly bad at predicting democratic collapse. Maybe we should blame video games again? The Atlantic reported that young voters have shifted rightward due to pandemic distrust, economic stagnation, and too much time online. Hopefully, those tariffs won’t make their damn video game consoles and vaporware-colored lights more expensive.

Related: NVIDIA’s share price is already within 1% of its pre-DeepSeek drop, showing that while the market can be extremely efficient, it’s not always efficient at thinking things through.

A UC Davis research center revealed that no one agrees on what a “medium roast” is, despite years of artisanal posturing. Kieran Healy warned that your iPhone knows more about your life than your best friend, your partner, or your mom—and it’s probably judging you for it. And all the fitness tracking in the world still wasn’t enough for the perfect boobs required for The Substance, a satire on Hollywood’s obsession with youth: “Unfortunately, there is no magic boob potion,” Margaret Qualley said, “so we had to glue those on.”


Somewhere in Leiden, when there was still sun and warmth, September, 2024.

Logoff

Off to get a haircut today. I hate getting haircuts, that’s why my hair and beard get wild.

Meanwhile, we’re one away from 900 subscribers. Tell you what, if you’re one of the first several new people to sign up, I’ll send you a bundle of my books.

A head full of bologna

Lots of links and stuff this episode: AI isn’t a coworker, it’s just automation wrapped in hype. Tech moves fast, but nothing lasts—except bad takes, questionable business models, and the creeping realization that managers just want fewer humans to manage. Meanwhile, we live like kings and don’t even notice.

Put it on ice

Good episode of Software Defined Talk this week, especially the opening moment of absurdity where we, yet again, try to solve Europe’s ice problem. Take a listen, or watch the unedited recording.

Wastebook

  • “Layered, polished mix: As expected, Dre’s meticulous production work ensures that every instrument sits perfectly in the mix, making for a cold, calculated vibe.” Respect. (The robot comments in “Big Egos.”)

  • "razvedka boyem – reconnaissance through battle: You push and you see what happens, and then you change your position."

  • Long skim content.

  • “Everything affects everything else,” Julia Evans. // I mean, I think she just cracked the code to, like, reality there, you know, everything.

  • “[Sorry, ugly people with good ideas.]” // Alternative funding source.

  • “A Cup of Coffee in Hell,” not cold, but helpful.

  • “If it moves, it’s probably alive,” logic.

  • “Cannabis, crypto or half of North Dakota?” Buttonwood.

  • “Sen. Mitch McConnell (R-KY), a polio survivor, was the lone Republican to vote against him.” Oophff. When you’ve got that guy voting against you, you know your head is full of bologna.

Relative to your interests

  • Making smaller containerized apps - Smaller, more secure, and faster to deploy–because nobody wants a 500MB container just to run “Hello, World.”

  • The “AI Agent As Coworker” Narrative Is Nonsense - Against the agentic hype: “You have to admire Benioff’s chutzpah in defining digital labor as some brand-new massive market opportunity. But to many, it just sounds like automation. Like every other phase of automation since the beginning of the industrial age, this phase is also about doing more with fewer human resources.” // Meanwhile, the counter case from Seth Marrs.

  • New estimates have ChatGPT using 10x less power than previously thought - “it would actually be more energy efficient for you to have an LLM turn off your furnace than to walk across the house to manually turn the dial.”

  • The danger of relying on OpenAI’s Deep Research - Some valid critiques of Deep Research, though none of them really amount to “it’s not good.” To sum up: it can’t do complex research, let alone come up with original ideas or cover obscure topics. It can only tell you what the Internet knows. That’s actually not fully accurate: you can also upload your own files and put in your own knowledge. For me, the main problem is the readability of the reports. While they are long and detailed, they’re not written in an engaging way that makes them easy to read. I have a pile of them that I’ve yet to fully pick through. // Yeah, these robots have little creativity and original thought and, furthermore, they can only do the predictable. But, man, they sure can do a lot of it. // There is an annoying “buyer beware” nature to all this AI stuff. If you’ve used it for years, or even a few months, you de-hype it a lot. You know its limits and to treat it like a dumb tool. But that is not how it is sold at all, and it’s not how people who don’t use it think of it.

  • All hat, no cowboy - A bicycle for your hands: “Becoming a good programmer takes time, so does becoming an artist. What if all the people with ideas but no time or skills or persistence or real interest could participate and turn their ideas into the thing? Surely non-musicians have great ideas for songs that they could turn into great songs if it weren’t for the inconvenience of musical instruments.” Yes, and: “One way to look at this – not a charitable way, but a view that feels true to me – is that managers view all need for human labor as an inconvenience. In part because they rarely get to experience what it’s like to be closer to a creative process, but also because they constantly experience the inconvenience of checking on deadlines and paying invoices. They would simply rather manage a robot than a human, so the only other people they have to interact with are other executives. Peak economic efficiency.”

  • One Year With the Vision Pro - Basically, not enough ROI for $3,500.

  • The Great AI UI Unification - What’s going on here is a classic power user versus normal user UX problem. I’m probably more power user than normal user. I don’t mind the UX; what I find annoying is the lack of easy access to docs that explain the features. For example, try to find a deep explanation of what’s currently in ChatGPT Pro. There really isn’t one. Even more so, last I looked the help page doesn’t list new features like Deep Research. And most ironically of all, if you ask ChatGPT itself, the answers are not great, or accurate. E.g., I asked about using its reminders and it didn’t even know it had them until I fed it a blog post on the topic. The naming of things is not helpful either. // Tech companies are terrible about documentation. While obscure, Apple Shortcuts is a great example. Docs for that are terrible, usually non-existent.

  • Tech continues to be political - “I don’t know how to attend conferences full of gushing talks about the tools that were designed to negate me. That feels so absurd to say. I don’t have any interest in trying to reverse-engineer use-cases for it, or improve the flaws to make it ‘better,’ or help sell it by bending it to new uses.”

  • Internal Product Management, Forrester.

  • AI Alone Won’t Drive Revenue - What Are You Missing? - Some light ROI talk.

  • I don’t know, despite this being from the UK (or maybe that makes the point): newsflash, Europe is expensive to live in, mostly by design as far as I can tell.

  • The Tyranny of Now - “What Innis saw is that some media are particularly good at transporting information across space, while others are particularly good at transporting it through time. Some are space-biased while others are time-biased. Each medium’s temporal or spatial emphasis stems from its material qualities. Time-biased media tend to be heavy and durable. They last a long time, but they are not easy to move around. Think of a gravestone carved out of granite or marble. Its message can remain legible for centuries, but only those who visit the cemetery are able to read it. Space-biased media tend to be lightweight and portable. They’re easy to carry, but they decay or degrade quickly. Think of a newspaper printed on cheap, thin stock. It can be distributed in the morning to a large, widely dispersed readership, but by evening it’s in the trash.”

  • Learning from my mistakes… - It’s tough to monetize content that has near zero value or originality and can be easily pirated. This is especially true if the price is wrong. That sort of applies to every product. // “In the end though, you can’t optimize your way out of a black hole, the gravity is too heavy. We were marketing a product at a price point that was material to our customers, and giving them content which was largely available from our competitors for free. All the tweaks in the world couldn’t change that.”

  • Why are big tech companies so slow? - Because they build, sell, and support a lot of features.

  • How to add a directory to your PATH - Computers are easy, they said. You just need to read the manual, they said. It’s so intuitive!
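If you'd rather skip the manual, the gist of adding a directory to your PATH looks something like this (a minimal sketch assuming a POSIX-ish shell and bash's ~/.bashrc; the ~/bin directory is just an example — swap in your own paths and ~/.zshrc for zsh):

```shell
# Make the directory you want on PATH (here, ~/bin -- an assumption).
mkdir -p "$HOME/bin"

# For the current session only: prepend it to PATH.
export PATH="$HOME/bin:$PATH"

# To make it stick across sessions, append the same line to your shell's
# startup file and reload it (bash shown; zsh uses ~/.zshrc).
echo 'export PATH="$HOME/bin:$PATH"' >> "$HOME/.bashrc"
. "$HOME/.bashrc"

# Verify: the directory should now appear as one of the PATH entries.
echo "$PATH" | tr ':' '\n' | grep "$HOME/bin"
```

Prepending (rather than appending) means your directory wins when a command exists in both places, which is usually what you want and occasionally what bites you.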

Read by the robot

I don’t read everything; sometimes I have the robot read it for me. Here are its summaries.

AI agents are not coworkers, according to Forrester analyst Anthony McPartlin, who argued that the idea is little more than a marketing ploy. It’s just automation. His colleague Seth Marrs disagreed, predicting AI will become an indispensable workplace collaborator, though perhaps without an HR complaint line.

Meanwhile, most CFOs planned to increase tech budgets in 2025.

I’m guessing this dude isn’t meaning to be associated with them, but here’s a little insight into how TheTechBros.gov think that might explain their batshit take on how to run a railroad.

Jack Crosbie mourned the decline of professional dress, noting that executives and tech billionaires get to dress however they want while the rest of us are left to wonder whether wearing Hoka running shoes to work signals liberation or quiet surrender. This, of course, is only a problem if you don’t already own half of North Dakota.

Samir Varma declared free will both an illusion and a practical reality in a post that argued no one—not even you—can predict what you will do next. The brain, it turns out, is deterministic but computationally irreducible, which is a fancy way of saying that you can only know what you’ll eat for dinner tomorrow by waiting for tomorrow. Until then, just assume it’s chicken.

Conferences

Events I’ll either be speaking at or just attending.

  • VMUG NL, Den Bosch, March 12th, speaking.

  • SREday London, March 27th to 28th, speaking.

  • Monki Gras, London, March 27th to 28th, speaking.

  • CF Day US, Palo Alto, CA, May 14th.

  • NDC Oslo, May 21st to 23rd, speaking.

Discounts: 10% off SREDay London with the code LDN10.

Logoff

Good overview from Bryan on the changes people often don’t make when they want to do the whole platform engineering things:

Platform teams can have a difficult time convincing their management of the importance of developer experience, instead being pushed toward traditional governance and control measures. While these measures might satisfy IT audit requirements, they can severely impact development team velocity. The result is predictable: development teams, under pressure to deliver business outcomes quickly, create workarounds or turn to "shadow IT" solutions.

Yes, and…

It feels like he’s suggesting either (1) it’s possible to do too much governance, security, controls, etc., such that platform teams don’t have enough time for, or stop doing, customer work (focusing on developer needs first, security/etc. needs second), or (2) that the governance, security, control, etc. measures aren’t needed (as much). Of course, us platform vendors would say (3) if you buy our products, our platform will automate a lot of the governance, security, and controls so the platform team can focus on the customers, the developers.

I don’t hear enough multi-year, enterprise success stories about platform engineering. It’s been three (four?) years since Humanitec declared DevOps dead and ushered in the idea for their IDP (back when “P” meant portal, not platform) product. Backstage was some kind of gas on the fire to all that. And, yes, here we are. It feels like a similar oddity to Kubernetes: lots of talking, then lots of figuring out how to adopt it, and only a few big enterprise success stories. There are stories, but not enough to justify having destroyed the progress we made with PaaS 5+ years ago. Something is wonky.

What is missing from all of this? Year after year, on this topic, it’s the same conversation.

There’s a digital transformation paradox here too: we’re always on about the urgency of needing to change, then we say there’s not enough change, and yet everything seems to be running just fine. Maybe it could be running even more fine!

One theory: because of the place I work, I don’t see all the success; I just hear about the slogging from the people who want help. People who don’t need help don’t ask for it. Coupled with: thought leaders don’t talk about everything being fine, that isn’t the job. Few people talk about ongoing success, so all I see is struggling.

//

This week the kids are out from school, so I’m trying to figure out vacationing.

The risk of shadow AI, an example of using ChatGPT Deep Research

Catch-up: yesterday, I went over everything you need for tech strategy and marketing.

What do I know about security: limiting AI use in enterprises

I find the restrictions on using public AI chat things baffling given the potential, obvious benefits. But I don’t know the CISO perspective and way of thinking. What am I missing?

Yes:

But:

My theories:

  1. It’s just too new and unknown; we don’t even know the risks and (is this a layman’s term?) attack vectors (e.g., the Wiz findings). Better to lock it down and let others fuck around and find out (side note: I didn’t realize that we’d standardized on “FAFO” for that in polite conversation, which is lovely to know. Son of YOLO!).

  2. The restrictions on AI use are more about costs and control/ambiguity of work product.

    1. IP. If an employee pays for their own chat things, who owns the IP? With AI image generation, in the US at least, you have no (defendable) copyright on the generated images and video. I’m no lawyer, but it seems like that’d be easy to extend to text and code.

    2. Costs. “We don’t want to pay $5 to $20 more a seat/month - what, in this economy?”

  3. Yes, and…so many work functions could get at least a 3x to 5x boost in “productivity” (or whatever figures du jour, I’m just swig-swigging those numbers). Or, maybe not. Then again, maybe yes! Me: If it’s good enough for tutoring, it’s probably good enough for knowledge workers.

  4. My theory: I think CISOs just don’t trust it because there are so many unknowns. Which is reasonable: there hasn’t been enough time to learn.

  5. Plus, with Altman and Musk involved, you have batshit crazy people who are unpredictable driving the industry. But you could just use Microsoft, AWS, and Anthropic. If you can get compute cheap enough, have enough ROI to justify the capex and opex spend, or can profit from lower-powered/slower AI models, you could host it on your own and get the benefit.

  6. Yes, but…isn’t part of CISO risk modeling balancing business benefit against zeroing out benefits/potential growth by clamping down? Over the next two years, if competing firms have looser policies and they profit without tanking (or are able to pay for/live through risks and still profit/keep share prices high), don’t you lose anyway because [insert the software-is-eating-the-world digital transformation tub-thumping we all used in the late 2010s]?

  7. (I hope you either (a) know me well enough, or (b) are intellectually wise enough, to realize I’m not, at all, saying that security isn’t a big deal. The point is to discuss the reaction and resulting strategy.)

Image
Shared by Adam Van Buskirk.

Wastebook

Shared by Bruce Sterling.

Relative to your interests

  • Emerging GenAI Use Cases and Spotlight on Secure Content Generation - If your AI stuff is using the same pool of knowledge as your competitors, you won’t get much competitive advantage. You need to add your own secret info. // “A common challenge, however, when employees use public generative AI tools or foundation models, is a lack of organizational specificity.”

  • Is Fine-Tuning or Prompt Engineering the Right Approach for AI? - As it says.

  • Stuck in the pilot phase: Enterprises grapple with generative AI ROI - “More than 90% of leaders expressed concern about generative AI pilots proceeding without addressing problems uncovered by previous initiatives, according to the Informatica report. Nearly 3 in 5 respondents admitted to facing pressure to move projects along faster.”

  • Extending AI chat with Model Context Protocol (and why it matters) - Adding plugins to the AIs, the hope being that a wide community of developers will form, extending the functionality of the AIs. I’ve seen this in practice with Spring AI and Claude, and it’s very promising, and easy.

  • Do Marketers Need To Be Writing for AI? - SEO for AI model training. Yup, better start doing that. The good news is, all those SEO-trap pages that you generated (those long ones you never actually show to users/customers) would probably work here…are working here. But it’s likely a good idea to start doing more of this ongoing.

  • Moderne raises $30M to solve technical debt across complex codebases - ”A quick peek at Moderne’s customer base is telling of who is most likely to benefit from its technology — companies like Walmart and insurance giant Allstate. Its investor base includes names from the enterprise world such as American Express and Morgan Stanley, which, while unconfirmed, is safe to assume have invested strategically.” // From what I’ve seen and heard, seems like good stuff.

  • Context-switching is the main productivity killer for developers - #1 way to improve developer productivity, 30+ years running: stop interrupting them while they’re coding. // “Research from UC Irvine shows that developers need an average of 23 minutes to fully rebuild their focus after an interruption.”

What the robot read

I often ask the robot to summarize articles for me that look interesting…but that I don’t want to read. Below are not the full summaries; I asked it to write a Harper’s Weekly Review style summary for you, lightly edited by a meat-sack, with my comments in italicized brackets.

  1. Russ Vought quietly reinstated a CFPB procedure essential to mortgage markets, ensuring that banks could continue pricing loans without improvising their own math.

  2. Economists warned that high stock valuations may lead to a decade of low returns, an insight that Wall Street will process just in time to act surprised when it happens. [I don’t really get this one, but that’s the case with most long-term investor “logic,” or lack thereof.]

  3. Some commentary on Infrastructure as Code found that most companies are still doing it wrong, proving once again that automation is only as good as the humans failing to implement it.

  4. DeepSeek spent $1.6 billion on AI infrastructure, amassing 50,000 Nvidia GPUs in a move that may or may not justify the hype surrounding its capabilities. Investors watched as a $2 trillion AI market correction erased valuations faster than a chatbot dodging a direct question. [See above on investor’s “logic.”]

  5. Microsoft, Meta, Alphabet, and Amazon continued their spending spree, ensuring that AI-driven margin compression remains a long-term feature rather than a short-term bug.

  6. Spring AI promises to make generative AI accessible for Java developers, proving that some traditions—like running Java in the enterprise—never die.

  7. Cobus Greyling declared that the future belongs to “agentic workflows,” a phrase that sounds revolutionary but, to my meat-sack friend, mostly means workflows with slightly more AI in them, in a good way.

  8. A debate over AI optimization raged [seems a little strong?] between fine-tuning and prompt engineering, though most developers [or their corporate penny-pinchers] will likely choose whichever option is cheaper that day.

  9. Related: Adam Van Buskirk warned that in a world where all frontiers have been settled, destruction may be the only remaining path forward—an insight that AI companies and their burn rates appear to have already embraced. [See vintage novel and wastebook yes/and above.]

Conferences

Events I’ll either be speaking at or just attending.

  • VMUG NL, Den Bosch, March 12th, speaking.

  • SREday London, March 27th to 28th, speaking.

  • Monki Gras, London, March 27th to 28th, speaking.

  • CF Day US, Palo Alto, CA, May 14th.

  • NDC Oslo, May 21st to 23rd, speaking.

Discounts: 10% off SREDay London with the code LDN10.

Logoff

I need to think about this a lot more, but if you (a) want to see some examples of ChatGPT Deep Research in action, and/or (b) are interested in industry analyst strategy and M&A scenarios (here, with Gartner), check out these two reports I ran on Gartner’s business and strategy, in the SDT Slack. I printed out the whole chat session, so you can see my prompting, questions it asked, the first report, some back and forth, and then the second report. You can find it in the SDT Slack, or you could just check it out here:

Chatgpt Deep Research On Gartner Strategy
255KB ∙ PDF file
Download

I did not actually read all the pages, nor did I fact check it. Pretty interesting to see this kind of output though. I’ve used Deep Research for interview prep once so far: it wasn’t very impressive, but maybe that’s because I’d already done all the research myself, and the public info was slim.

Meanwhile, despite headwinds, IT seems to have done OK so far:

TradingView chart
Created with TradingView

How to market the Enterprise of Many Solutions Suite - A Ready to Go Plan for Enterprise Role Playing with Generative AI, Complete with Executive Dinner Meal Options

In tech product management and marketing, there are three phases of your “story” and execution: strategy, planning, and doing (“execution”). I think a lot of people mix up these phases, talk too much about strategy, don’t do enough planning, often poorly communicate the plans to staff, and are not “throw it all at the wall” enough with doing. I’ve worked in this area for, I don’t know, 20 years. Here’s my latest organized brain-dump from watching people from afar and close-up at many places.

(1) Strategy

This first phase is about figuring it out: researching the market, observing what others are doing, conducting classic competitive analysis (Porter’s Five Forces or however it gets rolled into the strategy airport book meat-loaf du jour). You’re deciding what to do and making the case to your leadership for why you should do it: getting budget, resources, and permission to work on this for the next 12 months.

For example, if you’re creating a generative AI application:

  • You’d research generative AI in general.

  • Find underserved markets you can target with your unique advantages.

  • Define a product area.

  • Build the corporate strategy case (market sizing, trends, maybe industry surveys like those from the prestigious “Studies Center of Toronto” to wave around in front of the CFO).

Let’s say you identify that there’s a big market for solo roleplaying with generative AI. You already have street cred in the gaming community. You also have developers who are familiar with coding text-based gaming apps with easy access to agentic AI tools.

Then you do a lot of dogs and cows work: figure out likely buyers (individuals, large organizations, industries, geographies, etc.), budget needed and projected ROI, getting over the IRR hurdle, and a stack of slides to SWOT away any doubts.

The slides are done! (Adobe Express stock.)

This is the stuff the Bain interns will rework into 7 or 18 sub-slide decks—complete with stock images of triumphant businesspeople (tastefully mixed between all the attributes of humanity) in suits shaking hands—and then proudly present to the SVP who definitely didn’t read them and has lots of questions about the executive summary slide, not getting what you’re politely trying to tell them with answers like “Yes, we cover that on slide 43” or “We’ll address it in Section 2" or “Interesting—well, backup slide 193 actually covers this.” And then there’s always that one executive in the meeting who suggests that, instead, you should acquire that software company from Iowa that’s somehow been puttering along for 34 years. They’ve done this so many times (likely 6) that you’ve finally prepared a back-up slide on that topic - will you suggest how that’s a great idea and would lead to synergies if you did that along with your plan, or maybe you’ll just show them the 9 point font table that shows that, sadly, regulatory concerns bring on too much risk due to the recent changes in Brussels?

Yup.

And then you pass that down to all the groups in your company to, like, actually go do.

(2) Planning

The second phase is about figuring out how to operationalize things and how you’ll sell it (go-to-market, GTM).

Sorry—I know: “operationalize.” Let’s call it “your plan and a realistic way of how you’ll do it.”


This includes features, scheduling feature releases, choosing platforms and languages, training sales staff, refining your pitch, thinking about marketing campaigns, and all manner of actual things you’ll be doing.

For tech marketing, you’re figuring out the basics of, among other things:

  • Personas, the types of people and the roles they have that will use the product,

  • Buyers/decision makers if they’re different than the users

  • Your pitch structure, and actual pitch: this could be pointing out a problem someone has, or an opportunity…and showing that your product fixes it,

  • The marketing basics of messaging, positioning, and value props - probably adapted to different personas and phases of buying.

  • Content for all of this, including different phases of the buying cycle.

Part of this phase is working out how to sell this product - will you sell directly to individuals, rely on other people and “channels” (re-sellers and VARs) to sell it, sell to large organizations or small ones, etc. If you’re doing something like a Product-Led Growth (PLG) model, you might mix product features together with these marketing and GTM things. PLG relies on frequently making product and UI changes to encourage purchasing and upselling, and to prevent churn.

This second part is largely internal facing: it’s your plans for what to do.

In our example, the solo roleplaying with generative AI app, your strategy has identified that selling to individuals is best.

How will you go about doing that? Well:

  1. You’ll need ads, more than likely; maybe you’ll try to partner with Hasbro to latch onto their D&D franchise (partner synergy!) as a channel/partner. Do you need to talk to gift card companies to make sure you show up on those last-minute gift end-caps at Albert Heijn?

  2. Maybe you need some thought-leadership to build up attention and brand, and/or you could coast off influencers - get those YouTube people to talk about it.

  3. Should you open source parts of it, or add in free tiers to get a really wide start of the funnel and then work on upselling?

  4. Can you start to add in little tweaks and features each week to encourage that upgrading and retain people?

If you’re selling to enterprises, you’ll need a different angle. You still leverage high-volume marketing, but you absolutely must appeal to the executive who signs the checks.

This means a different type of thought leadership and marketing: you want to reach those executives who have a problem (or dreams) that their budget can solve.

As one difference, instead of just influencers looking all like they just smelled a fart in their YouTube thumbnails, you also will want industry analysts (probably the ones who farted) to say you’re great, at the very least know you exist and bring you up in the conversations they have with your buyers every week.

Let’s play around with an example: you’ve decided to sell your solo roleplaying with generative AI solution to large organizations. Maybe there’s the pre-Trump era desire to nurture employee mental health and wellness because you believe it makes you more money and, you know, more human. So you, the buyer, want to provide a fun/wellness service: playing D&D during breaks!

So, you put together some white papers and sponsored posts (on your own blog and social media; maybe you can get something on TheNewTHAC0.com) about the need for happy employees. After all, could I share with you a report from the Prestigious Human Resources Management University Studies Center of Toronto that found happy employees are 34% more productive?

Then you connect creating happiness to playing D&D as a way to make them happy. And, hey presto, you get unlimited trips to the TCO-ROI hot food bar.

  • You’ve found a need that helps you make money: happy employees are more productive employees and, thus, make you more money,

  • You have a way of satisfying that need: play D&D with generative AI,

  • You have the tools and conversations happening to convince people of it,

  • You can find and engage with the buyers.

You can even do some market-segmentation to max your take. For example, you could offer additional features like single sign-on (SSO) integration at a higher price.

And, remember that study from the Institut Parisien de l’Étoile pour l’Étude du Jeu Fictionnel en Milieu Professionnel (IPEEJFMP) which found long-term character development yields 14% more day-to-day productivity (n=300, presumably gathered from a basement next to the catacombs). Isn’t it worth it to pay a little bit more per seat/month to persist sessions across plays?

And, for long-term employees, surely you need the ability to keep those sessions past 12 months, right? Now, you might be thinking: but what about that new EU regulation? Don’t worry, you get that with the Suite - check out the six-column pricing page and fill out the contact form in column six, the one labeled the “Enterprise of Many Solutions Suite.”

And so on.

(3) Execution and “Content”

The third phase is execution: creating all the deliverables you plotted out in your plan. Slides, landing pages, blog posts, pitch decks, product demos—this is the “keep the sausage factory running” level of detail.

How do you pitch to customers? What are the discussions you have in sales meetings? You also need to put together the actual “content” and work product. What’s our content schedule? What do the slides look like? We need a feedback loop to hear objections customers have (so we can counter them), feed in what they respond to (so we can do more), and do some competitive research. We also need to produce actual thought leadership and demos of the system. Let’s engage with those influencers to get them to write reviews and recommend it.

Perhaps we should arrange some dinners in major cities where we invite Ed Greenwood to come speak for 20 minutes on how to come up with engaging D&D adventures, especially focused on long-term story arcs (remember that IPEEJFMP study?), and during the main course (wagyu steak, heritage potatoes, and, for vegans [or those eating healthier], the wild mushroom Wellington in puff pastry with thyme-shallot polenta) we go over how we've adapted that advice into our product.

You also need a company story—the “why” stack of increasingly large fried eggs. Some brand identity: are you a kindly sage, a disciplined archer, or a swashbuckling gambler forging new frontiers? Develop a perspective on your market problem that is true, utilitarian, and stands out. Maybe you coin a tagline like “Roll a natural twenty every quarter.” (The stack of fried eggs is a bonus if you do it well because it will motivate employees as well.)

Now, what’s important is to just go for it. Try as many things as possible, do at least some analysis of whether it works, and adjust if needed. You start throwing everything against the wall and narrow down to what works, maybe revisiting every six months. Eventually you’ll figure out what works, but only if you pay attention to what works and what doesn’t work.

Sadly, whiteboarding sessions turned into slides or even Miro boards aren’t enough. You have to ford the swim-lanes to get to the isle of operationalization. Too many people suffer from pipeline constipation in this phase. If you find that you’re not publishing regularly, take a big swig of quality-through-quantity, stick to the BRAT diet, and plan to stay near your Google Analytics dashboard for the next 48 hours.

Side-bar: Run the Business

Now, I don’t have much to say about how you run and manage your business. Sales plans, career paths, what a “staff principal senior engineer” does, when you have that SW-EMEA QBR in Cologne, whether you use sticky notes or Google Docs. But, you know, you need all that. I couldn’t comment on it: I haven’t ever worked on that part of the meat-loaf.

Planning is the most important and most neglected

Execution is important, but the toolbox of what to do is well known. You’ll need to try a lot of things and track what works and what doesn’t.

I think planning is the most important part and, often, suffers from three things:

  1. Not enough of it, and it's not taken seriously. If you're doing this kind of work on an annual basis in a tech company, it will probably get stale as the year goes on. You need to revisit and refresh it.

  2. This means you need it to be lightweight, spend less time on each rev of it so that you can rev it frequently. I realize I'm saying in (1) to be more comprehensive, but then saying to be lightweight. The point is to figure out how you can structure the work (and work product) to make it easy to revisit and revise frequently.

  3. Individuals and teams don't understand what they should do. This could be because it's not clear, it's buried in an 80-page slide deck called "Copy 2H2028FY Strategy Track FINAL - Mort-230413 COPY - v10.4.b.pptx," or your plan isn't actionable - but most often it's because it's under-communicated.

The third one is worth focusing on because I think it's the most frequent problem for planning stuff.

Here, if you mix too much strategy into planning, you'll be too vague and high-level. No matter how many slides you produce and polish proving that it's a good idea, and why your organization is well positioned (competitive advantage) to profit from it…if you don't detail plans - how to do it - people won't know what to do. You can't execute strategy; you can execute plans.

Logoff

No time for links and wastebook today. I’m working on v10.4.g of the deck for the rest of the day while I stay close to my dashboards, eating this toast.

"...but there are also disadvantages"

This episode: AI is coming for your software job, or at least for the parts of it you actually enjoyed. Meanwhile, businesses are still stuck in pilot purgatory with generative AI, IT leaders remain unconvinced of AI’s ROI, and Java is apparently coming for Python’s AI crown. The economy may be changing not because of interest rates or labor shortages, but because everyone is drinking more water and eating fewer snacks. Also: MP3s are free, remote workers may be getting pay cuts, Kubernetes vs. Serverless, and a reminder that laws are now a gentlemen’s agreement, and we are not ruled by gentlemen.

Relative to your interests

Unknown Artist, Ernest Hemingway, 1950s.

Wastebook

  • “It’s bedtime again in America.” Among many more clever phrases.

  • “Broligarchs.” Brooke Harrington.

  • “The aphoristic rule of Washington meetings is: The more you know about what happened in it, probably the less fruitful it was.” Politico.

  • Plastic straws are a shibboleth.

  • “They used to say that the sun will never set on the British Empire because God doesn’t trust the bastards in the dark.” Warren Ellis.

  • “Immediately, we were cocktailed to the max. Some of us more than others.” David Plotz, Political Gabfest for February 6th, 2025.

  • “If you work from home, you may go several days without speaking to another human being, but there are also disadvantages.” Laura Manach.

  • “laws have become a gentlemen’s agreement and we are not ruled by gentlemen.” jenn schiffer.

  • Great headline: “Bill Gates Says He Donated $100 Billion Of His Wealth For Charitable Causes, But He ‘Didn’t Order Less Hamburgers Or Less Movies.’”

This feels a little desperate, no?

What the robot read

I often ask the robot to summarize articles for me that look interesting…but that I don’t want to read. These are not the summaries, but I asked it to write a Harper’s Weekly Review style summary for you. What do you think?

Sam Altman claimed that AI intelligence scales logarithmically with compute and that costs are falling tenfold each year, which, if true, means AI will soon be as cheap and omnipresent as tap water. Tim O’Reilly argued that AI will not replace programmers but instead turn them into managers of digital workers, much like software has done to factory labor, and that those who fail to embrace AI assistance will be the first to fall behind. A group of researchers suggested that AI-assisted development is most effective when structured prompts are used, such as API simulators that let engineers refine interfaces before writing any actual code.

John Cochrane noted that eliminating taxes on tips may have little impact, since most low-income workers already pay little to no federal income tax, unless payroll taxes are also exempted. Martin Weitzman (1974) explained that when costs are highly uncertain, tax-based regulations, such as carbon pricing, are often more efficient than hard quotas, which may force companies into inefficient or unnecessarily expensive compliance.

John Ganz observed that Silicon Valley billionaires, having built platforms that fueled left-wing activism, turned reactionary when they realized their own workers and user bases were using those tools to organize against them. A DIY survival guide advised that the most effective forms of resistance against an authoritarian regime are often mundane—delaying bureaucracy, documenting history, and making sure that those fleeing oppression have somewhere to sleep.

Cloud modernization efforts continued as businesses struggled with “creaky cloud infrastructures”, with Computer Weekly reporting that legacy IT assets remain a major obstacle to data-driven innovation. The importance of modernization, they noted, isn’t just about performance—it’s also about trust, security, and ensuring that businesses can actually use the data they collect instead of just hoarding it.

Meanwhile, new data suggested that GLP-1 weight loss drugs are rewiring consumer spending habits, with James Dillard noting that purchases of fitness trackers are up 183%, water filtration systems up 28%, and skincare products up 12%. Alcohol sales are down, dried meat snacks are down, and refrigerated salad dressings—perhaps the unspoken victims of shifting metabolic priorities—are down nearly 20%. The new economy may be built not on cheap credit or AI-fueled productivity but on fewer late-night snacks and an obsession with hydration.

Treasury Secretary Scott Bessent, now acting director of the Consumer Financial Protection Bureau (CFPB), issued a directive halting the agency’s supervision of non-bank entities, notably shielding Elon Musk’s X from regulatory oversight. This move aligns with Musk’s vision of transforming X into an “everything app,” reminiscent of China’s WeChat. Meanwhile, Amazon announced plans to invest over $100 billion in artificial intelligence infrastructure for its cloud division, Amazon Web Services (AWS), in 2025. This investment nearly matches AWS’s annual revenue, highlighting the company’s commitment to AI advancement.

In the political arena, President Donald Trump proposed extending the 2017 tax cuts, expanding the State and Local Tax (SALT) deduction, and eliminating taxes on tips, overtime pay, and Social Security benefits. These initiatives could reduce federal revenue by $5 to $11 trillion over the next decade, potentially increasing the national debt to between 132% and 149% of GDP by 2035. Reflecting on societal shifts, an essay in The Point Magazine observed that we are told, with increasing frequency, that we are living in a post-feminist age.

In a candid blog post, designer Elizabeth Pape of Elizabeth Suzann discussed the challenges of scaling a self-funded fashion business, touching on topics like pricing, consumption, and the complexities of ethical production.

Speaking at cfgmgmtcamp, Feb 4th, 2025.

Logoff

I did a lot of AI stuff above, maybe too much and too long. But, we’ll see. Sorry if it pissed you off. Tell me if you liked it (and want more) or do not like it and want less/none:

I signed up for ChatGPT Pro this weekend. Ben Thompson’s overview of it in last week’s Sharp Tech made me very interested. Since I live in the EU, I can get a refund within 14 days, so the $200/month price isn’t a barrier to trying it. So far the Deep Research thing is OK/good. Since it’s going to be moving into the Plus tier (right?), even if it’s limited to some number a month, I don’t think I need to pay $200 a month. Still, it’s been great, especially for making a parent’s guide to helping our kids out with homework.

How I get ChatGPT and Claude to help me write and write like me

Writing with ChatGPT and Claude

I’ve been using ChatGPT and Claude a lot for writing recently. I had a long conversation in the car ride between Ghent and Amsterdam with one of my old DevOps pals and they described their AI writing process. It’s best described as “layering.” Well, actually, it’s just how writing is always done: incrementally at first, and then iteratively until you run out of time.

Here’s the technique. I’m going to mix together what they told me they do with what I do:

Setup

One does not simply start writing with ChatGPT and Claude; you need to do a certain amount of setup:

  • It often starts with asking gpt-researcher to do a big report on whatever the topic is. This gathers all the background and, you know, research in an organized report.

  • I don’t use gpt-researcher. Instead what I do is use projects in ChatGPT and Claude and load up as many relevant documents as I can find. I’ll ask ChatGPT to find things on the web - Claude can’t search the web(!). A key part of these projects is also putting in a prompt that says what you’re working on. That way, you don’t have to start from scratch each time.

  • For writing style and tone, you can create and set a style in Claude. You upload a bunch of your writing (or whatever writing you want it to create a style for) and it figures it out. You can tune the style too, manually or by chatting with it.1

  • ChatGPT doesn’t have an easy way to create and drive the style, but there’s a trick you can do. Once you have the Claude style, you can go in and manually edit the style guide it wrote. All you do, then, is copy and paste it into ChatGPT!

  • There are three ways you can take that style guide to use in ChatGPT: (1) just paste it inline in your chat window as you’re working. Say something like “For our writing, use this style and tone: [paste it].” (2) You can set it in the project prompt like “We’re working on writing a guide to using internal developer platform marketing. [Further background]. For our writing, use this style and tone [paste style guide].” Since you’re using a project, whenever you start a new chat in the project ChatGPT (as far as I can tell!) starts with that project prompt. (3) You can also tell ChatGPT to remember the writing style by telling it in any chat “remember when we’re writing about technical topics I want you to use this style and tone: [paste in style guide].” That seems to work, so far.
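If you ever drive this from the API instead of the ChatGPT UI, option (1) is really just a system message carrying the style guide along with every request. A minimal sketch, assuming OpenAI’s chat-completions message format (the helper function name is mine, not an API):

```python
def style_messages(style_guide: str, user_prompt: str) -> list[dict]:
    """Wrap a Claude-written style guide as a system message (hypothetical helper)."""
    return [
        {"role": "system",
         "content": f"For our writing, use this style and tone:\n{style_guide}"},
        {"role": "user", "content": user_prompt},
    ]

# You'd pass this list as the `messages` argument to a chat-completions call.
msgs = style_messages("Direct, first-person, no buzzwords.",
                      "Draft an intro paragraph about platform engineering.")
```

The point is the same as the copy-paste trick: the style guide rides along with the work, it’s just automated.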

Here’s how you find Claude’s style guide:

Writing

Now, with that setup, here’s the writing layering-loop:

  • They take the resulting report, in markdown, and give it to ChatGPT to create an outline with key points. There’s some refinement and adding things in. Here, you could use Canvas for some editing. It’s pretty good as we talk about in this week’s Software Defined Talk (video).

  • You then take the outline and the text and go to Claude and have it write with you. At the moment, this is key: Claude is creative and ChatGPT is analytical.

  • Then you take that, make a lot of your own edits, and switch back to ChatGPT for some copyediting. You want ChatGPT to do the “engineering” part of writing, work on the structure and mechanics of it.

  • When you’re working with the AIs throughout this, you’re also giving them feedback about what you like and don’t like. “Put in a better example,” “write in the first person,” “don’t use LinkedIn language,” “shorten that list.”

  • And, you can go through this again and again adding layer and layer.

All throughout this, you’re using Google Docs as both a buffer and the working draft. One thing about this is that Google Docs silently keeps versions of your doc - most people don’t seem to know this.2

This lets you edit in a proper editor, of course. Also, because Google Docs keeps numerous different versions of your document, you won’t lose the various stages. Sometimes the layering-cycle of going through AI edits, doing your own edits, and so forth really washes out your original text and you want to go back, or, at least compare the new text with older versions.

How to be my editor

After the initial drafting, most of what ChatGPT is doing for you is what an editor and copyeditor would do. I don’t like to admit this, but writing gets better with editing and copyediting.3 When I write, I hate waiting for this part: I always want to click publish right away. Worse, if the editor doesn’t know my style, they tend to edit out “me.” The first few times I work with an editor, we usually have to go back and forth a bit before they “get” my style.

Well, you know where this is going: generative AI is a great editor. It’s fast, it responds to what you ask, and it’s good.

After a round of thinking about, outlining, drafting, rewriting, editing, drafting, polishing, rewriting…you get the idea…with Claude and ChatGPT, I put the final version into Claude and asked it for advice to give future editors and copyeditors. I used this prompt:

OK, this is what I've ended up with that I like [I pasted in the final text]. I want you to look at it, compare to the work we've been doing here, the original, etc. and write some commentary and advice for a future editor and copy editor (like yourself) for working with me. What kind of text do I like, what do I need help with (to get to what I like and make up for my flaws in writing), etc.

Here’s what it wrote back. It seems accurate for my style and how to edit it:

Editorial Guide for Working with Michael Coté

Writing Style & Preferences

Michael prefers:

• Direct, experience-based narratives

• Clear technical substance without marketing fluff

• Conversational but professional tone

• Real examples from his work with enterprises

• Structured content that builds logical arguments

• Precise technical terminology without buzzwords

• Occasional humor and cultural references (often self-deprecating)

Key Patterns in His Writing

1 Opening Structure

• Starts with a real problem he’s observed

• Uses first-person narrative to establish credibility

• Quickly moves to practical solutions

2 Technical Depth

• Balances technical detail with accessibility

• Prefers concrete examples over abstract concepts

• Likes to cite real organizations and cases

3 Organization

• Uses clear section headings

• Builds arguments systematically

• Alternates between principles and examples

Areas Where He Needs Editorial Support

1 Structure Refinement

• Help tighten opening paragraphs

• Ensure consistent section transitions

• Maintain focus on key arguments

2 Language Polishing

• Remove redundancies while preserving conversational tone

• Tighten technical explanations

• Maintain consistent voice across sections

3 Examples & Citations

• Help integrate citations naturally

• Ensure examples support main points

• Balance between personal experience and external sources

Editorial Approach

When editing his work:

1 Preserve his direct, experienced-based voice

2 Maintain technical precision

3 Tighten structure while keeping conversational flow

4 Keep his cultural references and humor

5 Help integrate sources naturally

Common Fixes Needed

1 Opening paragraphs often need tightening

2 Transitions between sections can be smoother

3 Technical explanations sometimes need more focus

4 Citations could be more naturally integrated

What to Avoid

• Don’t add marketing language or buzzwords

• Don’t remove personal experience elements

• Don’t over-formalize the tone

• Don’t eliminate cultural references or humor

This guide should help editors maintain Michael’s authentic voice while improving clarity and impact.

I haven’t used that yet, but it looks like it’d be helpful.

From the 40th anniversary edition of Neuromancer. Illustrations by Marco Luna, found here.

Wastebook

  • If you don’t know what time it is, assume you’re late, and start moving faster.

  • “Has anyone ever had an orgasm eating mayonnaise?” Noah.

  • “When selling a hotdog and soda for $1.50 can’t unite Americans, perhaps nothing can.” The Economist on CostCo.

  • “The work isn't producing the output, that's a tiny fraction of it. The work is thinking, taking time to understand the problem you're trying to solve, making sure it's the right one, communicating and collaborating around it within an organisational and economic system, it's.. complex. What we call "knowledge work" is about as far from being on a production line as you can get and yet here we still are dealing with this nonsense.” Sue Smith.

Made in Midjourney.

Relative to your interests

  • Gartner Survey Shows Supply Chain GenAI Productivity Gains at Individual Level, While Creating New Complications for Organizations - The robot summarizes what’s happening here with theory of constraints, clever! // Robot: “Alice now processes orders faster, but Bob and Charlie still work at their original pace. If Alice completes 20% more orders, but Bob can only arrange transport for the same number as before, then Alice’s extra work sits idle–no overall productivity gain for the team.”

  • Ten Trends That Shaped the Cloud Market in 2024 - ”Around 60% of cloud buyers told IDC’s 3Q24 Cloud Pulse Survey October 2024 that their business’ IT or digital infrastructure currently requires major transformation, and 82% said their cloud also required modernization.” // Good summary of a very expensive analyst report.

  • The Peter Principle - by Nicholas Decker - “Promotions are used as incentives, even if they don’t always match future competence.”

  • Trumpian policy as cultural policy - Using culture-blitz to incrementally change society.

  • Apple Invites - This sounds ridiculous.

  • Deep Research - “I do not currently see signs of originality, but the level of accuracy and clarity is stunning, and it can write and analyze at any level you request. The work also shows the model can engage in a kind of long-term planning, and that will generalize to some very different contexts and problems as well – that is some of the biggest news associated with this release.”

Image from The Illuminated Codex

Logoff

Did you want to see the Coté Tech Professional 2 style Claude came up with? Here it is:

Write technical articles about enterprise software development using the Minto Pyramid Principle: start with clear conclusions and key takeaways, then provide structured reasoning and evidence. Use a direct, first-person narrative style with occasional humor and cultural references. Prioritize substantive technical insights over marketing language. Include precise citations, links to sources, and empirical evidence supporting claims. Structure content with clear section headings. In conclusions, introduce one novel conceptual insight that doesn't require extensive defense. Maintain a professional yet conversational tone that balances technical depth with strategic clarity. Avoid hyperbole, buzzwords, and unsubstantiated claims.

<userExamples>

# Revolutionizing Enterprise Software Delivery: The Power of Developer Platforms

In today's rapidly evolving technology landscape, organizations are discovering transformative approaches to software development that go beyond traditional methodologies. Developer platforms represent a strategic evolution in how companies enable technological innovation.

Key Strategic Insights:

- Shift from controlling to enabling development teams

- Create standardized yet flexible technological ecosystems

- Accelerate innovation through intelligent platform design

The modern enterprise doesn't just implement technology—it cultivates a dynamic environment where developers can rapidly translate business needs into robust solutions. By providing comprehensive toolsets and clear architectural guidance, organizations can dramatically reduce friction in the software development lifecycle.

Technical Foundations:

1. Centralized discovery mechanisms

2. Self-service deployment capabilities

3. Comprehensive observability frameworks

Case studies consistently demonstrate that well-designed developer platforms can reduce onboarding time, improve security compliance, and increase overall team productivity by removing unnecessary technological barriers.

</userExamples>

As with all things AI: (1) it used pseudo-XML, and, (2) the more I refine this, the better it will get…probably.

As you might have seen in the demo, I’ve made several styles.

One I’m interested in messing around with more is the D&D Adventure one. You see, with that, I’m not trying to get a writing style for it, I’m trying to get an interaction style, a conversation style. That might even be a way to slip in some more systems-ish prompting.

Maybe you could use this style for conversation style, not writing style. So, when you’re playing D&D with the robot, you could use a style guide to say “be this kind of DM.” Get transcripts of your favorite DM - Chris Perkins has hours and hours online4 - and get Claude to make a style. Would it drive mechanics, or just what’s written in responses? Is there a difference?

1

You can also do styles in Midjourney, though it’s a bit more clunky. You gather together a bunch of images in the style you want and tell it to use that style. You can create multiple of these groups, so you can build up a library of styles. This is how I get consistency for a lot of my D&D image generation. If you look through them you can pick up on a few different styles, including when I don’t specify one, resulting in the Midjourney-generic style. If you’re making these styles, I find it good to start with painters (Velázquez is good, Da Vinci gets interesting results, and while he’s not a “painter,” my favorite is Albrecht Dürer's woodcuts and engravings) or clusters of painters (the Dutch Masters).

2

If you don’t know about this feature, learn and use it. It’ll change how you write, making you more bold about edits and deletions, giving you the ability to go backwards and forwards.

3

You can tell that no one edits my newsletter.

4

He has a very fluid, rule-of-cool style - adapting the adventure to the players - that relies on a lot of theater of the mind.

Hee-haw cars and slinky kinks

Today it’s all wastebook.

Garbage Chairs of Amsterdam, June Bug edition.

Wastebook

  • “gizmocrat” and ”gizmocrats,” new govt IT outsourcing term?

  • “Sen. Chuck Schumer (D-NY) was tweeting about guacamole prices over the weekend as Musk took over key functions of the government.” Two weeks in.

  • “Anything threatening to be a subculture is commodified before it can walk,” attributed to William Gibson.

  • “Tariffying,” The Economist.

  • “There’s probably a couple of kinks in that slinky,” Sen. Thom Tillis (R-N.C.).

  • “power the West to its obvious innate superiority.” #PalantirLife.

  • “It’s cheap, so it’s not a vice.” Jason.

  • “Hee-haw cars.” Roderick.

  • “I want to believe that the JP Morgan analyst tried to get compliance to let them call the note ‘Fuck this bullshit.’” Slides Benedict.

  • “BYOAI.”

  • ”You cannot be a software startup and just be a sustainable business; the ecosystem will reject you.” Here.

  • “Once your code is out there, it can’t be controlled by a MEGACORP,” overheard at cfgmgmtcamp 2025.

Making an early tiny video.

Logoff

I don’t know if I’m uninspired, if everything is lost in the DeepSeek funk, or what, but I haven’t found much worth linking to yet. We’ll see what the second half of the week brings.

Links and fun finds for February 3rd, 2025

Hello. How are you today?

Wastebook

A line drawing of the IBM 701's
Captured by Ken Shirriff.

Relative to your interests

  • D&D: Some Kind Of Monster (Type) - This is a great pondering of monster types and their, like, existential essence. Lots to think on! // “The convoluted agreements common to many Fey are an aspect of their adherence to a particular vector of existence. If they agree to something, everyone has to agree that their perspective is the perspective through which the fey are experiencing reality.” // Ponderous! This seems derived from the idea that Fey take oaths very seriously. On the other side, this is similar to fiends (devils) being bound by contracts.

  • ChatGPT vs. Claude vs. DeepSeek: The Battle to Be My AI Work Assistant - Good overview of what Claude is good for and what ChatGPT is good for. // Some “yes, but” accompaniment here.

  • Dell ends hybrid work policy despite remote work pledge - “'Overall, our results do not support the argument that managers impose these mandates to increase firm values,” the authors state in their paper. ‘Instead, these findings are consistent with managers using RTO mandates to reassert control over employees and blame employees as a scapegoat for bad firm performance.’" // RTO is a dress code.

  • Purely AI-generated art can’t get copyright protection, says Copyright Office - If you can’t control it, you can’t copyright it. Aside from terms of service (which, I think, don’t like this), this means you can freely use all AI generated images…I guess? I think there’ll be some court-cases defining how much control someone needs to have or have not.

  • Clouded Judgement 1.30.25 - The Year of AI Applications - Finance-nerd explanation of the “this is actually great!” case for last week: “What’s happened over the last 12 months is the cost per API call (ie the cost for inference) for these models has plummeted. Open source models like Llama, R1 from DeepSeek, etc have all contributed to this. It’s become even more clear the model calls themselves are commoditizing quickly. And this is great! If the variable ‘COGS’ component of marginal API call approaches zero, many of the questions listed above start to go away. You don’t worry about your margins shrinking, changing the pricing doesn’t become a must, etc. This in turn leads to a LOT more experimenting with AI features / functionalities. The radius of complexity shrinks.”

Conferences

Events I’ll either be speaking at or just attending.

  • cfgmgmtcamp, Ghent, Feb 3rd to 5th, speaking.

  • VMUG NL, Den Bosch, March 12th, speaking.

  • SREday London, March 27th to 28th, speaking.

  • Monki Gras, London, March 27th to 28th, speaking.

  • CF Day US, Palo Alto, CA, May 14th.

  • NDC Oslo, May 21st to 23rd, speaking.

  • KubeCon EU, April 1st to 4th, London.

Discounts: 10% off SREDay London with the code LDN10.

Logoff

I’m at cfgmgmtcamp today, speaking tomorrow. Here’s a preview of the slides if you’re curious. There’s a livestream for today, tomorrow’s will probably be here. It’s nice to have a “the conference I go to every year” conference. It usually falls on my wife’s birthday, which is annoying. But, it does mean I get to give her one of her favorite foods for her birthday: genuine Belgian waffles.

//

I’ve been experimenting with the ChatDM more; that is, using chat AIs to solo play Dungeons & Dragons. It’s getting a lot better than two years ago. Claude is probably superior, but ChatGPT seems to operate better - it can certainly handle more project files. I think of what I’m doing as “meatsack agentic AI.” I have to do all the workflow and memory management that a real agentic AI framework would do for you.

The results are…OK. It still lacks imagination and (no pun intended) initiative. The AIs refuse over and over to move to the next step. For example, in “Mazfroth's Mighty Digressions,” you encounter a book that transforms into a hungry blob that wants to eat you. Per the adventure, after you read the book it’s disguised as, it is supposed to attack you immediately. The AI just can’t do it…it stays in a constant state of “and then…and then…and then…”

Now, of course, this is all about prompting and coaxing. You can get it to do exactly what you want if you tell it to do it. But that is not the point: the point is that the chat AIs should be good at DM’ing, know what normally happens, and do it.

My agentic hopes are that I could put together some guidance for actions to take in encounters. Whenever an encounter happens, whatever the agentic framework is would go use this little “tool” and it could push the main chat AI to, like, do something.
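That little encounter “tool” could be as dumb as a lookup table the framework consults whenever an encounter fires, returning the scripted beat the chat AI keeps refusing to take. A hypothetical sketch - the trigger names and table are made up, not from any real agentic framework:

```python
# Hypothetical encounter-action table an agentic DM framework could consult
ENCOUNTER_ACTIONS = {
    "mimic_book_read": "The book transforms into the blob and attacks immediately. Roll initiative.",
}

def next_dm_action(trigger: str) -> str:
    # Nudge the chat AI past "and then...and then..." with the adventure's scripted beat
    return ENCOUNTER_ACTIONS.get(trigger, "Continue free-form narration.")
```

The framework would detect the trigger (the player reads the book) and inject the returned line into the main chat, forcing the scene forward.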

We’ll see!

@cote@hachyderm.io, @cote@cote.io, @cote, https://proven.lol/a60da7, @cote@social.lol