Are you at a large organization doing platform engineering? Have you been building and/or using a platform? How are you introducing product management in your operations group?
I want to test a theory that’s come up in my conversations a lot this summer: introducing product management into ops and infrastructure organizations is too difficult. It won’t work. There are teams here and there that can do it, and they show up at conferences. But, when you're proposing that you're going to "change the culture" of thousands of large organizations, it's an impossible task. These detractors cite DevOps, even agile: after all these years, have we really done much, or have we just experienced the bad parts of Larman’s Laws?
That sentiment is pretty bleak!
If you're working in one of these large organizations, how do you start up the product management practices and roles for your platform? Is it working? A further filter is: how many apps are you running on your platform, following platform engineering practices?
I really would like to hear from you, even if it's just references to other people talking about this. But, if you're working on platform engineering in a large organization, I especially want to hear from you. Hopefully, I can get enough responses to write up how people are introducing and doing the product management part of platform engineering.
Now, here’s a long explanation of why I’d like to hear from you:
I'm focusing on product management because I believe that practicing product management is what separates platform engineering from "what we're already doing." And, over the past year, this feels like it's emerged as the consensus. Once you make product management part of platform engineering, the phrase moves past being a marketing-driven buzzword, a relabeling that just gets people to buy new bottles for their old wine.
Product management wasn't always part of platform engineering. A few years ago when the thought leadership around platform engineering started, "platform engineering" just meant putting an internal developer portal in place (Backstage and friends). Then the platform engineering thought leadership train loaded up "making Kubernetes easier to use for developers." This caused a lot of existential angst from us DevOps and PaaS people, especially when Humanitec declared that DevOps was dead. We were all left wondering: how is this different than what we've been talking about for 15+ years? That is: what we’re already doing.
Once the 10+ year old idea of "platform as a product" was reintroduced into the platform conversation, "platform engineering" became a new enough thing that it was worthy of having its own name. It became a thing.
In my world of pre-Kubernetes PaaS, I've worked with large organizations who've been practicing platform as a product for many years, using Cloud Foundry as their platform. You could say that platform engineering is “just” a re-labeling of platform as a product, but I think it’s ended up being more than that. Platform engineering wants to do platform as a product with Kubernetes, not with existing PaaSes.1
This is the second aspect of platform engineering that I think makes it a real thing: it means using Kubernetes as the basis for your platform. I don't think this is good, and I’d rather we change it so that it doesn't matter what CaaS/IaaS you use. But, that’s currently platform engineering as she is spoken.
Like I said, for 10+ years, there've been a lot of big enterprises that have used, and continue to use, Cloud Foundry to run thousands upon thousands of real-world applications. And, there are other platforms out there, not to mention all of the VM-based ways of running apps that seem to be the majority of how people run apps. There is always a platform, whether you know it or not. And if you don't know it, it usually means it's an accidental platform, and that there are hundreds of them in your organization, which is very much not good.
But, the people building and talking about platforms now want Kubernetes, it seems. They just assume there is nothing else. This has been a 7 year distraction from improving how organizations develop and run software, and a great example of how us tech people get too focused on using new and interesting tools for their own sake. And, in that way, a triumph of thought leadership and devrel.
Over those years, that re-focusing on the CaaS/IaaS layer, after we'd finally focused everyone on the PaaS layer, has sacrificed the never-ending task of improving the "business outcomes" of better developer productivity and improving all the -ilities in production. You know: becoming an "elite performer" in DORA terms.2
(Alright. I've tried to write this post a few times and it always goes dark and negative. So I'll stop it with the you kids get off my lawn existential crap.)
So, that's where we are with platform engineering: it's applying product management to building the platform, and building the platform with Kubernetes.
There's all sorts of people working on solving the Kubernetes problem(s): that it's complex, you don't want to expose developers to it, and you have to go to the buffet, assemble, and then care for your own platform. Kubernetes is not a ready-to-use, out-of-the-box platform. Indeed, when you look at the CNCF cloud native platform definition, Kubernetes doesn’t even show up!
Following historic examples, the problem with building a platform from Kubernetes will resolve itself in two ways:
There will be a few winners in the "wrap Kubernetes/whatever in a layer to hide it from the users" approach. This is what we're trying to do with the App Engine/Spaces framework, and, as I shallowly understand it, it's what Syntasso/Kratix is trying to do. You’re essentially saying “the APIs and config for Kubernetes aren’t tuned for platform engineers’ needs, so we’re going to make the ones that are, and do all the glue work to integrate that back to Kubernetes.” This is one of the most popular patterns in computering: adding an abstraction layer to make it easier to use the wrapped layer.
This platform building pattern is trying to give users (platform engineers) the ability to customize the platform to their special needs3 while still avoiding building everything from scratch, the “DIY platform.” To use PaaS-talk, this approach allows you to form your own opinions rather than (be forced to) use the opinions of your pre-built platform.
Platform engineers define the "API" of the platform components and can then build the platform out of those components. You can also throw in promise theory, contracts, and some aspects of negotiated platforms and SLOs from SRE-think. Colin Humphreys made a good pitch for this approach recently, and I’ve been dragging my feet on interviewing the Tanzu people who’re working on this approach.
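To make that abstraction-layer pattern concrete, here's a minimal Python sketch of the idea (the spec fields and function are hypothetical, not any real Kratix, Tanzu, or App Engine/Spaces API): the platform team defines a small, opinionated app spec and owns the glue code that expands it into raw Kubernetes objects.

```python
# A toy illustration of the platform-as-abstraction-layer pattern:
# the platform team owns the translation from a small, opinionated
# developer-facing spec down to raw Kubernetes manifests.
# All field names here are hypothetical, for illustration only.

def expand_app_spec(app: dict) -> list[dict]:
    """Expand a simple platform-level app spec into Kubernetes objects."""
    name = app["name"]
    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": app.get("instances", 2),  # platform default: 2
            "template": {
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": app["image"],
                    }]
                }
            },
        },
    }
    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {"ports": [{"port": app.get("port", 8080)}]},
    }
    return [deployment, service]

# A developer only sees the small spec; the platform fills in the rest.
objects = expand_app_spec({"name": "billing", "image": "registry.example.com/billing:1.2"})
```

The point is where the opinions live: developers see three or four fields, and the platform team product-manages which fields those are and what the defaults should be.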
We've seen what this ends up looking like, good and bad, in historic examples, so we can start to think through some long-horizon strategy moves.
DevOps 1.0 was all about creating an abstraction layer for the mess of configuration and release management with Chef, Puppet, Ansible, Salt, etc. Years later, DevOps got bogged down in solving ops tooling and culture problems (which is great!) and rarely got up to the application developer layer - but, the intention was (usually) always there!
Then there's our current example of Kubernetes. The point of Kubernetes was to displace AWS as the standard IaaS model by creating an open standard for how infrastructure was represented, managed, and used. To use the Kubernetes term, to create a new “API” for IaaS. It worked!
Like DevOps, Kubernetes also stalled out on the way to delivering a better developer experience. And, in fact, the Kubernetes creators eventually backed off from that ever having been a goal.
For both Kubernetes and DevOps, making the “inner loop” (if you remember that term) better was, perhaps, never the point, and thinking otherwise resulted in over-inflated expectations.
I think what the people working on the platform as abstraction layer approach want is what we hoped the Kubernetes API would be: much higher up the stack, even developer facing, and including a governance framework. Essentially, a developer-ready system with all the enterprise grade blah-blah tools.
This is great! It'll be fantastic if it works!
What’s important is that you have to product manage all of this to figure out (1) what those overlays are, (2) how you customize the overlays for your environment, (3) how you pick and choose which overlays to assemble into a platform, and (4) when you add in new functionality.
The vendor and open source community around the overlay will help with some of that, especially number one and four. The vendor/community will gladly tell you what it thinks the defaults should be and even provide you an out of the box, ready to use enterprise grade blah blah platform (see the next section) based on those overlays.
But, the whole point of the platform as abstraction layer is customizing the platform to the user’s needs, so, like, the user needs to do that. And product management is how you do that.
Since 2007 we've been through several cycles of people trying to build their own Heroku.
It goes like this:
A new platform comes out that makes it easy for developers to build their app, connect the app to a database, etc., deploy the app to production, and scale it for performance needs all on their own, self-service, using the latest frameworks and services.
It only runs in the public cloud.
Large organizations initially reject it for two reasons: (a) it actually will not scale. (b) it needs to be on-premises for very important enterprise reasons. "We want that awesome developer experience and velocity," the large organizations say, "but, uh," looks down at notes, "I was told that we work in a 'highly regulated industry.'"
This brings in phase two of the cycle: vendors make on-premises platforms. Strangely, maybe even heroically, Heroku never entered this phase. But, you saw it with things like the container wars of the 2010's, which drove on-premises platforms like Cloud Foundry.
Our attention goes back down the stack to infrastructure instead of the fully built PaaS. PaaS is no longer cool. The last phase of the cycle is usually caused by a disruptive technology coming along and pulling the user and buyer's attention away from the now boring platform that just works. Docker was the first to disrupt PaaSes in the late 2010s, and then Kubernetes came in. In both cases, the user assumption was that each was a viable PaaS replacement. But, as capital-D Disruption, in each case, neither was a full-on replacement for all the enterprise grade blah blah. And yet, these less feature-ful disruptors drove organizations away from the PaaSes that they once loved. E.g., Heroku and its children.
I've made a satire out of that cycle and the thousands of highly paid professionals who make those decisions along the way.
But, I mean: I'm not sure why people don't just use Heroku or its many on-premises focused descendants like Cloud Foundry.
There's "fashion." People wanting to use something new. I've heard this sentiment many times recently: "Our PaaS is great, but if I don't learn Kubernetes, it'll be harder to find a new job. So, we need to migrate to Kubernetes." What with enterprise Kubernetes build-out being just at the beginning, that sentiment is optimized for an individual, but not for the organization that already has something that works in place.
Other than "fashion," I think what drives people away from PaaSes is:
Pricing - The first grumblings about Heroku were that it was awesome, but expensive. This has been a common sentiment about all PaaSes. Showing the value of any platform is hard, so it always looks expensive when you start running thousands of applications. All you see is a price, and linking it to the revenue that those thousands of apps drive is not a normal way for traditional organizations to think. They treat IT as a cost center, not a part of the business. So when someone comes and says "we could build that platform ourselves and remove however many millions in licensing/cloud cost," management gets excited.
Difficulty in customizing or "swapping out" components, and lack of new features. The last is what introducing product management into platforms is trying to solve: you're actually supposed to talk with developers and deliver new platform features that solve their problems and make them happier.
In this part of the cycle, the Kubernetes problem is solved by hiding or even removing Kubernetes. It doesn’t matter what CaaS/IaaS is at the bottom of the PaaS. Maybe there’s even two, or three! You might even introduce a totally new compute, uh, “paradigm.” Maybe serverless will finally fulfill those Wardley-dreams! Unikernels, WASM - whatever! Just a few weeks ago my mind was thoroughly exfoliated in the sauna with the idea of Isolate Cloud. ANYTHING COULD HAPPEN.
The role of product management here is different. If you’re using a PaaS, you’re not given a lot of tools to do all of your own customization. You rely more on the PaaS vendor and community to do much of that work. The overall community does most of the product managering, which means they need to do a lot of it. It also means you need to upgrade the PaaS when there are new versions. That is a big challenge. People at large organizations don’t like upgrading their stacks.
Maybe one way of looking at it is that a PaaS outsources platform product management. It’s probably more that what you’re product managing is (like the platform as abstraction layer) the selection and assembly of pre-built components.
This is why I've typed this far: the long-term success of platforms relies on product management. Once you stop adding new features to the platform, people look for new options. If you can't customize it to your needs (real or just made-up enterprise blah blah), you'll start the platform cycle all over again. This means you don't get the full benefits of many years of platforming. You'll start neglecting what you have, focus on migrating your existing apps to new platforms, and experience the ups and downs in benefits that show up in surveys of new tech usage. ROI depends on years of payback, and if you restart in the middle of that period, the math doesn't work.
So! That's what’s driving my interest in learning how organizations are introducing and sustaining product management in their platform groups.
How's it going for you?
Colophon
Three things got me thinking on the above, which I want to call out:
First, this from Figma’s write-up of migrating to Kubernetes:
Having users define services directly in YAML can be confusing. Instead, we worked to define a golden path for users and allow customization for special cases. By being explicit about what users can and should customize—and otherwise enforcing consistency by default—you’ll save users time and energy while also simplifying maintenance and future changes.
That seems like a pretty compact explanation of the developer-facing goals of platform engineering.
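As a sketch of what "explicit about what users can and should customize" might look like in practice, here's a toy golden-path merge in Python (the field names and defaults are made up, not Figma's actual tooling): user overrides are checked against a whitelist, and everything else is enforced by platform defaults.

```python
# A toy golden-path sketch: users may only customize an explicit
# whitelist of fields; everything else is enforced by platform defaults.
# Field names and defaults are made up for illustration.

PLATFORM_DEFAULTS = {
    "instances": 2,
    "memory": "512Mi",
    "health_check_path": "/healthz",
    "log_format": "json",  # enforced: not user-customizable
}

USER_CUSTOMIZABLE = {"instances", "memory", "health_check_path"}

def golden_path(overrides: dict) -> dict:
    """Merge user overrides onto platform defaults, rejecting anything off the path."""
    unknown = set(overrides) - USER_CUSTOMIZABLE
    if unknown:
        raise ValueError(f"not customizable: {sorted(unknown)}")
    return {**PLATFORM_DEFAULTS, **overrides}

# A special case stays on the path; an off-path override fails loudly.
config = golden_path({"instances": 4})
```

The design choice is that consistency is the default and deviation is an explicit, named escape hatch, which is what saves both users and maintainers time later.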
Second, as referenced many times above, Colin’s excellent piece of “let’s make better mistakes this time” for platform engineering enthusiasts. I like that he managed to sneak in a link to an old version of the Pivotal paper on platform engineering.
Third, Betty announced that she’s the new CMO at Heroku. When I was looking around at new job opportunities this summer, almost every person told me I should talk to the new people at Heroku. They’re trying to get the band back together, etc. I don’t really know anyone there, and I decided that my current job is just fine. But, it’s exciting to think that Heroku will start doing more in the platform area. I think the timing is right for a public PaaS only play - you’d be cutting out a lot of enterprise customers, but over the next five years, I think all that enterprise blah blah will be less of a barrier. I mean, that’s what all the CIOs are saying: in three years they’re planning to almost double the amount of workloads in public cloud. Maybe they’ll actually do it this time.
While writing this, I came across this huge write-up of platform engineering ahead of releasing a new book on the topic. It’s great! I’m looking forward to that book.
Also, we have our big annual conference next week. I’m thinking we’ll have some news about the above. You can watch some of it as a live stream, or scrounge for coverage and videos later. I’ll write up anything relevant here. Probably.
There’s something I chopped out of my initial, thankfully thrown away draft of all this: we don’t really talk about “operator experience” in platform engineering. It’s very developer focused. This is fine! But, more than likely (as with DevOps), the story will turn inward to making the tools and lives of platform engineers better. Put another way, we’ve long had the 12 factor app manifesto, but what is the other side of that, the 12 factor platform manifesto?
I realize this distinction is weird and fuzzy. In my version of things, if Kubernetes hadn’t disrupted the 2010’s PaaS cycle, those PaaSes would have persisted and we’d never have introduced “platform engineering” as a concept. Instead, we’d just go along calling it “platform as a product” (which is circa the late 2010s as estimated by one of the people who worked on it back then). Anyhow, let’s see where this weird fuzzy assertion takes us.
Here, I have the feeling I’m throwing out some baby with the bathwater. I don’t have first-hand experience with the baby to comment on it. As I get into above, I think Kubernetes did what it set out to do. But, my theory is that the users made a mess out of the bathwater because they thought the baby did more than it actually set out to do. At this point, it doesn’t matter anymore. That’s all yesterday’s shit-posting.
…which I think are mostly made up - "customization" is a synonym for "tech debt." When you talk with hundreds of enterprises who tell you about their custom needs, you soon learn that everyone has the same custom needs and, thus, they are not custom.
One of my co-workers is starting a podcast and had some questions. Here are my answers. As with most “how do I do this?” sessions, it focuses a lot on gear, which I scoot away from in my answers, you know, following the cliche that the tools matter less than what you do with them. Also, the tools are tedious, but easy to learn.
This is all for recording remotely, over the Internet. For recording in person, there’s a different set of gear and practices.
For SoftwareDefinedTalk.com we use Restream. When I’m recording an interview, I use SquadCast which is now bundled in Descript subscriptions.
I suggest using one of those services so that you can record separate audio tracks and get video (whether you want the video or not). You don’t need to livestream your recordings.
Before those services existed, I’d use Audio Hijack Pro to record the incoming audio and outgoing (my) audio. But that’s a lot of work compared to just using one of the above. You can also use Zoom. I think it’s pretty good, actually, but I haven’t checked it out recently.
Recording
The main thing each does is get a video chat going and record separate audio tracks. Most of them, now, will record the audio locally (on your and your guests’ laptops) as well. So, if the connection is bad, you’ll still get good audio in the final recording. These services usually will give you recordings of each participant and a recording with them all together.
Editing
For Software Defined Talk, we used Hindenburg for many years (ten?!). It’s at the sweet spot between lots of functionality and being easy to learn and use. It’s built around editing voice, has good, basic audio filters and effects (leveling, noise reduction, etc.), and good settings for exporting podcast MP3s (setting cover art, putting in chapters, etc.). It looks like it now has editing by transcript.
Nowadays, when I edit podcasts, I do it in Descript. You can edit the video (or just audio) and export an audio-only MP3. It has all of the export things that Hindenburg does.
If you want a free option, Audacity is probably pretty good. That’s what I used long ago in the 2000s and it was just fine.
Distributing
Here, I assume you mean hosting a podcast: uploading it somewhere so that it makes an RSS feed that people can subscribe to.
In addition to hosting the MP3 and creating the RSS feed, the service should create a basic podcast website for you. You want an index page that lists each episode and then each episode should have its own page, the show notes. You want to be able to edit those show notes to put whatever you want in there. The service should allow you to automate all of this each time you upload an episode.
Each episode should have a URL you can go to. The better services will automatically create a URL based on the episode number, for example: https://www.softwaredefinedtalk.com/480.
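If you're curious what the hosting service is actually generating under the hood, here's a toy Python sketch of a podcast RSS feed (the show name and URLs are made up): each episode becomes an RSS item with a link to its show-notes page and an enclosure pointing at the MP3, which is how podcast apps find the audio.

```python
# A toy sketch of what a podcast hosting service generates for you:
# an RSS 2.0 feed listing each episode with its MP3 URL and
# show-notes page. The show name and URLs are made up.
import xml.etree.ElementTree as ET

def build_feed(title: str, episodes: list[dict]) -> str:
    """Build a minimal podcast RSS feed as an XML string."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for ep in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep["title"]
        # link points at the episode's show-notes page
        ET.SubElement(item, "link").text = ep["page"]
        # enclosure is how podcast apps find the actual MP3
        ET.SubElement(item, "enclosure", url=ep["mp3"], type="audio/mpeg")
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("Example Show", [
    {"title": "Episode 480", "page": "https://example.com/480",
     "mp3": "https://example.com/480.mp3"},
])
```

You'd never hand-write this in practice; the point is just that "hosting" boils down to serving the MP3s, this feed, and the show-notes pages the feed links to.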
These services should also have metrics for episode downloads. You can’t tell much from podcast downloads: just the number of downloads and geography. Some of the nicer services will pull in more detailed metrics from Spotify and Apple Podcasts.
I’d also look for a platform that manages your podcast listing in Spotify and Apple Podcasts. You can actually do that on your own, but many of the services will do that for you.
We use fireside.fm. My co-workers use podbean.com and transistor.fm. I haven’t used it in a long time, but libsyn.com is one of the original hosters and probably good.
A lot of big name, highly professional podcasts (the NPR style below) have terrible show notes. It can be hard to even find where the show notes are. This is baffling. Have show notes and good ones. Check out the example podcasts I cite below, they all have good show note sites…except the most professional one! Here is the test: if a person wanted to link to an episode, is it easy for them to find the web page (does it even exist?!) and get a URL that they can share?
I don’t edit the Software Defined Talk ones anymore, one of my co-hosts (Brandon) does. But I used to for 10+ years. Plus, I edit the Tanzu ones and one-off ones I do. Brandon is a much better editor than I am.
Learning the basics of editing takes a little bit of time, but it’s basically chopping up your audio file into chunks, arranging the chunks in the order you want, and deleting the chunks you don’t. For most podcasts that are interviews, you won’t be re-ordering the chunks. Instead, you’ll delete parts you don’t want. Most podcast editing is that: deleting parts you don’t want.
You’ll know you’ve gotten good at podcast editing when you start to use the keyboard shortcuts, and when you can imagine the edits you’d make as you’re talking in the podcast. As you listen to other podcasts, you’ll also be able to detect when they’ve done edits.
You can put music and standard bumpers in the intro and outro. I’m not a huge fan of this as a listener or maker. If you do have music, make it brief and use the same music each time. This will help listeners slide into the comfortable sameness of each episode.
I would decide on the style of show you want and then edit accordingly.
Do you want (1) the full This American Life/NPR style podcast with summing up intro and hook, music in the background (“ducked”), narrator spliced in with interviewee comments, edited segments, and music? Or do you want (2) the cold open, we just recorded two people talking style? There’s an in-between that most people do where (3) they have some intro music that fades out, and maybe they do some editing of segments.
I do the second type because as a listener I don’t like the others and it’s the quickest and easiest to make.
Deciding the format and style you want is important. Once you have that, you can use it to guide what you do, like editing and even selecting guests. The style you choose will determine how you structure and run the podcast, and also what you’ll do during it.
If you’re doing the NPR-style, your goal is to get other people to do all the talking and edit together a story. NPR-style podcasts aren’t really podcasts at all: they’re professional radio shows that are distributed as podcasts.
If you’re doing the other two (“real podcasts”), you have to remember that listeners are as interested in you as they are in the guests and topics.
The success of “real” podcasts relies on establishing a parasocial relationship with listeners, often lasting many years. Listeners want to hear the topics and learn about things, but they also establish a friendship with the hosts. Listeners look forward to hearing what their friends are up to each week - hanging out with them each week.
This means that if you’re doing this style, if you’re interviewing someone you should interject what you think instead of just asking the guest questions. And if you’re discussing some topic, you should offer lots of opinions and “thinking out loud,” not just cover “the facts.”
Also, this means you should be consistent in your release period (weekly, every two weeks, monthly) and post it on the same day. For example, we post a Software Defined Talk episode every Friday at 7:30am Amsterdam time. I know the release days for all of my favorite podcasts and look forward to them.
Here’s another distinction in types of podcasts: guests and no guests.
Having guests is having (mostly) different people on each episode to interview them about whatever - in tech, usually something they’re an expert in, or at least knowledgeable about.
In these cases, you want to get the guest talking most of the time, but you should also think of yourself as a proxy for the audience to ask and clarify things that the guest is saying. It’s also good to rephrase things from time-to-time, and then, as always, offer up your own thoughts.
People have a lot of opinions about length. You should ignore that advice and figure out what you like and what works for your style. I like long podcasts, some other people like short ones. If someone tells you to target ten minutes because “people don’t listen to long podcasts,” chances are this person doesn’t listen to podcasts. This comes up a lot when you’re doing an official company podcast. Only take advice about podcast length from people who actually listen to a lot of podcasts.
There is no magic number for length. Start with having it be slightly longer than it needs to be, and back up from there if you want to.
Keep in mind that people won’t always listen to a podcast beginning to end in one sitting. They might pause it and come back to it. Sometimes it takes me two or more days to fully listen to a podcast.
Again, your listeners are there to hang-out with you each week - they’ll appreciate you spending more time with them rather than less. And, if it takes them several days to listen to it, they will because they’re interested in hanging out with you.
(4 hours might be too long, but I’ve listened to many enjoyable 3 hour podcasts over the past 20 years, and hundreds of good 2 hour ones.)
As with writing, the other thing you should do is listen to a lot of other podcasts in the style you like. They can be in your topic area or a completely different one. For example, even if you’re doing a tech podcast, you can learn a lot about podcasting from The Flop House and Blank Check.
Once you’ve made three or so podcasts and edited them, you’ll be able to pick apart how good podcasters operate and start learning from them. And, also, you’ll just learn the format, and what you like. With this you can figure out the podcasting style you like and start to build - and refine/evolve - your style from that.
Here are some more podcast examples:
The Political Gabfest - this is a very professionally done “the same three people talk about the news of the week” podcast. You can see that they stick to a format - there are usually three topics; the host intros/summarizes the topic and then asks the other two hosts what they think; the host will say what they think too. Also, at the end of each episode each host recommends something. I stole that idea for Software Defined Talk when we started our podcast. This is the podcast that has terrible show notes - see how frustrating it is to look at their page?
Software Defined Talk - this is my podcast, so it’s obviously the best in the world. We patterned it off The Political Gabfest, but after 10 years it’s developed its own style. I would call this style “the mess that works.” It’s the same three people each week discussing a selection of tech news and any side topics. Here, the format is: (1) Coté’s inane cold open, (2) one to two tech topics we discuss, (3) listener feedback and interesting conferences, (4) recommendations from each host, (5) usually an after show with some goofy outtakes or an extended, off-topic conversation. A more professional version of this is Sharp Tech, and the Cloudcast News of the Month.
Oxide and Friends - this is another “mess that works” example. It is very much a personality-based podcast, built around Bryan Cantrill. The format here is the classic: his co-host Adam is the straight man and Bryan is the goofball. This is also an interesting podcast to study if you’re making tech podcasts, especially an official company podcast. Oxide is on an impossible mission: making on-premises hardware! So, part of their marketing is embracing the absurdity of that business scheme and somehow converting it to tech-cult energy, to hope and belief that on-premises hardware is cool and interesting. It’s like the early days of Kubernetes where Kelsey would run around doing demos. This podcast does a lot more than that, of course, but part of it is feeding that tech-cult energy.
The Changelog - I don’t listen to this podcast much (it’s usually too technical for me), but it’s very popular. Compared to other podcasts it’s high-production.
Finally, as with most of my advice about content, my number one recommendation is to just do it and publish. The moment you hesitate because you think you should edit more, polish it up, or (worst of all) re-do it, you should instantly click publish.
If you’re doing this right, you’ll always think the episode could be better; you might even think the episode is bad. You might even regret questions you asked, or think it’s boring. Just keep publishing. It’ll take time to discover your style. Eventually, the audience will find you. But if you engineer it too much you get the equivalent of “data sheet” PDFs and shallow enterprise software web pages. Something that looks perfect, but says nothing.
Publishing the imperfect is how you get around that. As I keep emphasizing, a “real” podcast is about establishing a very intimate relationship between the hosts and listeners.
By doing it weekly and publishing it you start learning how to choreograph that all.
Getting good audio from guests can be tough. The first thing is to get them to use headphones. Then have them use something other than a laptop mic. If they have a headset with a mic, that’s good. And iPhone headphones with a mic are actually pretty good; you just have to ask them to hold the mic away from their beard if they have a beard.
Very importantly: make sure they check their audio settings to make sure they’re using the good mic. If their audio sounds off, ask them to double check. I’ve been doing podcasts for about 20 years, and I mess this up as a host and guest at least once a year.
Also, experiment with using Continuity Camera with an iPhone. The mic can be pretty good, and the placement of the phone on the top of their laptop screen will position the mic well, pointed at their face.
And beyond audio quality, you need to learn how to steer your guests. This is one of the things you’ll learn from listening to other podcasts. The best steerer is Tyler Cowen, though I could see how some people would find his style insulting and cringe-y: too controlling.
And while not a guest, if you’re doing the multi-host thing, you need to learn how to be the host that runs the show and moderates it (let’s call it the active-host) but also how to be the other hosts (the passive hosts).
For the active host, on our podcast, Brandon is good at this (which happens in the episodes I’m not on), and Plotz is good at this in The Political Gabfest.
The important thing about being a passive host is that you have to stop talking when the active host changes the topic. You’ll get tuned to each other, and even pick up on each other’s natural rhythms. You’ll know about ten seconds ahead of time when a person is wrapping up, so you can prepare to go next.
A lot of the conventions about how to be a “good listener” in life don’t apply to podcasting. You have to unlearn polite conversation habits, and do almost all the things that people tell you are offensive and rude in meatspace talk. Interrupt each other, talk over each other, say what you think. Or not: it depends on the style you’re doing. But don’t bring all the “how to be a good listener” and nonviolent communication stuff to the table right away: evolve to that if that ends up being your style.
In summary, what I’m saying is: finding and tuning your style of podcasting is more important than gear. Part of that is creating your podcasting character and being conscious of playing that character. It might be a lot different than the character you are in other parts of your life. Your listeners want to hear that character; they’ll become friends with it. Your listeners won’t show up just for that character: they want the actual interesting content and guests! But that character is what will make your listeners keep listening, subscribe, and fill in the gaps between impossibly awesome and what you actually published. So, you have to cultivate that character.
Why Every Java Developer Should Attend SpringOne - the online version is free to watch on August 26th to 28th.
Travel Spending On Track To Return To Pre-Pandemic Levels By End Of 2024 - ”Global travel spending is roaring back and will fully recover to pre-pandemic levels by the end of 2024, surpassing $2 trillion.”
Walmart used AI to crunch 850M product data points and improve CX - “The primary in-store improvement is that associates’ can use in-store technology like mobile tools to quickly locate inventory and get items on shelves or to waiting customers — a significant upgrade from the ‘treasure hunt’ of finding items in years past” A few more details here.
Apax to take consultancy firm Thoughtworks private in $1.75 bln deal - Summary from Justin Warren: “Software consulting company Thoughtworks is getting bought by private equity firm Apax Partners for about $1.75 billion. Thoughtworks has been struggling a bit lately, with revenue down 12.4% YoY to $251.7 million and the stock down 87% since 2022. It’s already done a bunch of cost-reduction restructuring, and it looks like more is on the way with 630+ staff to go” out of 10,500 staff.
“billboards festooned with three-dimensional meat bearing messages such as ‘You never sausage a place!’” Here.
It’s hard to forget a bad idea, and you have to work hard to remember good ones.
Talks I’m giving, places I’ll be, and other plans.
This year, SpringOne is free to attend and watch online. Check out Josh’s pitch for the event. There’s an on-site conference as well at Explore if you’re interested. But, for those who can’t, now you can watch all the fun!
SpringOne/VMware Explore US, August 26–29. DevOpsDays Antwerp, 15th anniversary, speaking, September 4th-5th. SREday London 2024, speaking, September 19th to 20th. Cloud Foundry Day EU, Karlsruhe, Oct 9th. VMware Explore Barcelona, speaking, Nov 4th to 7th.
Discounts! SREDay London: 20% off with the code SRE20DAY. Cloud Foundry Day 20% off with the code CFEU24VMW20.
I’ve spent a lot of time “researching” and learning this week, not much producing content. It’s input versus output. This is difficult for me, mentally. I only value publishing, so if I’m not publishing, I don’t think I’ve done much work. This is not healthy!
I try to get around that by thinking about what other creators do. How often do they publish? How much time do they spend “puttering around,” building up to the ideas they eventually publish? How much stuff do they work on that never gets published?
When I’m in this learning mode, I’m creating lots of content: I’m taking notes, trying to rewrite things. I’m also talking with other people a lot, testing out my understanding of the “input,” getting them to explain it to me, thinking about things we could do as output. It’s sort of like a podcast that never gets published.
//
Part of that “input mode” is listening to lots of videos. I can’t watch videos at my desk: I’ll get distracted with other work, and then I’m no longer paying attention. This is a good opportunity for long dog walks. So, this week, I’ve found several new Garbage Chairs of Amsterdam.
Just fun finds and links today.
“FWD: RE: radioactive fungus email from grandma” Here.
“I don’t know about you, but I think a campaign setting ruled by evil angels and their witch-wives, populated by giants (perhaps not 3,500 metres tall) who eat one another and human beings, and who have sex with animals to produce many weird varieties of beastman, is one that somebody could do a lot with.” Here.
”AI enables action without thinking.” Here.
“Agent Double-O Soul, baby.” Edwin Starr.
How We Migrated onto K8s in Less Than 12 months - Always hide the yaml: “Having users define services directly in YAML can be confusing. Instead, we worked to define a golden path for users and allow customization for special cases. By being explicit about what users can and should customize—and otherwise enforcing consistency by default—you’ll save users time and energy while also simplifying maintenance and future changes.”
5 Lessons For Building a Platform as a Product - They’re doing a good job trying to evolve the Pivotal Cloud Foundry philosophy of platforms. // “I talked to a CTO at one of the world’s top banks, who explained that he loved what Cloud Foundry could do but wondered what would work for the other 99% of workloads he had responsibility for.”
Who uses LLM prompt injection attacks? Job seekers, trolls - ‘“At present,” Kaspersky concludes, “this threat is largely theoretical due to the limited capabilities of existing LLM systems.”’
AI or bust? Only one part of US tech economy growing - “Assuming the bubble does not burst, S&P forecasts global AI spending to grow by more than 20 percent through 2028, when it is estimated to account for 14 percent of total global IT spending, up from 6 percent in 2023.”
How to go-to-market: Measuring Marketing Value - “The key areas that your marketing team can drive impact for the business are in Awareness, Engagement, and Pipeline.”
Some local under-the-bridge graffiti:
Creating ROI models for platform engineering is difficult. Here are three examples of approaches I’ve come across recently.
You’re trying to convince your organization to put an app platform in place (probably either buying one or building one on top of Kubernetes), to shift your ops team to platform engineering (just after HR finally changed titles from “Systems Analyst II” to “DevOps Engineer”!), or, if you’re like me, sell people a platform.
“Yeah, but what’s the ROI for it?” the Director of No responds. What they mean by that is “convince me that this change is going to have benefits that we can represent as money, either saved or gained.” A variation is “show me that what you’re proposing is cheaper than the alternatives, including the alternative of doing nothing.” That’s probably more of a “Total Cost of Ownership” (TCO) analysis. Indeed, ROI and TCO models are often used the same way, if not the same spreadsheets. This kind of analysis is also often called a “business case.”1
This is especially true in the post-ZIRP world. When money was “free” and G2000 companies were deathly afraid of Tech Companies, they’d change how they operated based on the capabilities they gained, not just on an Excel spreadsheet filled with cash-numbers. Those were good times!
Showing the ROI of a platform is difficult. I haven’t really come across any models that I like, and I’ve seen many of them.
The problem is that platforms don’t generate money directly, so you have to come up with a convincing model that shows how platforms contribute to making money.
Let’s start with the benefits of platforms, and see if we can stick some money to them.
The benefits of platforms are explained in terms of either:
Developer productivity - which leads to improving how an organization can use software to run.
Operations productivity - removing the “toil” of day-to-day management and diagnosis of production, but also reducing the amount of time (and, thus, people) needed to manage the platform.
“Enterprise grade” capabilities - ensuring security, compliance, scalability - all the other “ilities.”
There’s a fourth category when a platform is a tool in an overall program: usually migrating from on-premises to public cloud or modernizing applications. We’ll call this the “enabler.”
These are valuable things, but I’m frustrated with them because they don’t link directly to business outcomes, things like: making more money (customer-facing), performing a government service in a reasonable way (low cost and good citizen experience), or running a company better (internal operations).
That’s because platforms are just “enablers” of applications. And it’s the applications that directly create those benefits, that “make the money.”
Here are three approaches I’ve come across recently that are representative of doing ROI for, really, any “enabling” technology.
In the paper “Measuring the Value Of Your Internal Developer Platform Investments,” Sridhar Kotagiri and Ajay Chankramath (both from ThoughtWorks)2 propose three metrics and an overall way of thinking through platform ROI. This is the most thought-provoking, nuanced/complex/comprehensive, intellectually juicy, and thus all around useful ROI model of the three I’ll go over.
First, they have this excellent chart of linking platform capabilities to business outcomes:
A chart like this is great because it does its primary goal (showing how platform capabilities link up to business benefits) and also defines what a platform does. Here, the three things that directly give you ROI are CX (“customer experience,” I assume, which I’d call “good apps”), innovation (introducing new features, ways of working, ways of solving jobs to be done, and, thus, selling products and services), and cost efficiencies (spending less money).
Cost efficiency is something you can achieve directly with a platform. It could cost you less in licensing and cloud fees, it could consume less underlying SaaS, it could require fewer people. The first two are fine and provable. The third is where ROI models get weird.
If you’re doing an ROI model based on people working more efficiently (“productivity”), the assumption you’re making is that you’re going to get rid of those people, reducing the amount of money you spend on staff. But are you? Maybe long-term you’ll consolidate apps and platforms and then, a year or so out, lay off a bunch of people, realizing that benefit. If this is your goal, you’ll also need to contend with those future-fired employees reading the writing on the wall and saying “why would I tie my own noose?” and deploying enterprise-asymmetric psyops counter-measures.
Historically, the idea that automation is going to reduce staff costs has been dicey. You encounter the Jevons Paradox: the cheaper it is to do something, the more of it people will do, often in excess.3
Thus, the more clever thing to do with productivity is to talk about how you can now do “more with the same.” You can give developers more time to work on more features, driving “innovation” and “CX.” Your operations people can now support more apps. Your cost of adding new stuff is cheaper. When you add ten more apps, you don’t need to add another operator or more developers because your existing staff now have more time available.
But, then you’re back to the problem of platform ROI: you’re talking about capabilities you get. And, until those capabilities are “realized,” you won’t know if your platform was useful. Also, there are so many things that could go wrong - or right! - that might be the more direct cause of success.
Nonetheless, I think the framing of “we never have enough time to do everything the business wants, right? If we had a platform, we would!” is pretty good. Instead of ROI, you’re directly addressing a problem, and a problem that’s stressful and probably keeps people up at night.
The paper encourages the use of three formulas to track your platform’s value. You could use them to predict the platform’s ROI, but that would rely on you believing the input numbers you, uh, made up ahead of time.
Value to Cost Ratio (VCR): VCR = (Projected Value / Projected Costs) * 100.
Innovation Adoption Rate (IAR): IAR = ((Adoption in the current year - Adoption last year) / Adoption last year) * 100.
Developer Toil Ratio (DTR): DTR = (Total Time on Toil / Total Time on Feature Development) * 100.
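To make the three formulas concrete, here’s a minimal sketch of them in Python. All the input numbers are made up for illustration; the formulas themselves are straight from the paper.

```python
def vcr(projected_value, projected_costs):
    """Value to Cost Ratio, as a percentage."""
    return projected_value / projected_costs * 100

def iar(adoption_this_year, adoption_last_year):
    """Innovation Adoption Rate: year-over-year adoption growth, as a percentage."""
    return (adoption_this_year - adoption_last_year) / adoption_last_year * 100

def dtr(time_on_toil, time_on_features):
    """Developer Toil Ratio: toil time relative to feature time, as a percentage."""
    return time_on_toil / time_on_features * 100

# Made-up inputs: $2.4M projected value on $1.5M projected costs, 40 -> 55
# apps on the platform year-over-year, and 300 toil hours against 1,200
# feature-development hours.
print(vcr(2_400_000, 1_500_000))  # 160.0
print(iar(55, 40))                # 37.5
print(dtr(300, 1_200))            # 25.0
```

The arithmetic is trivial; as the paper (and the rest of this section) points out, the hard part is trusting the inputs you feed it.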
Here, you encounter one of the basic problems with any platform metrics: how do you collect those numbers?4
VCR - this is what most people are after with “ROI.” However, how do you figure out those numbers? Proving the “Projected Value” of a platform is the whole problem!
IAR - counting the apps on your platform versus all the apps in your organization is achievable, more or less. People struggle with accurately counting IT assets: most people don’t trust what’s in their CMDB, let alone have one, or, worse, even know what a CMDB is. But I think most people can do some helpful app counting. This metric tracks how much your platform is used. It assumes that usage is beneficial, though, which, for me, de-links it from ROI.
DTR - this is the productivity metric, and a good one. Collecting those two numbers, though, is tough. It’s probably best to stick with the “just ask the developers” method that DX encourages. That is, don’t waste your time trying to automate the collection of quantitative metrics; instead, survey developers to get their sentiment of “toil versus coding.” What I’d add to this is that you should also consider an OTR: Operator Toil Ratio. How much time are your operations people spending on toil versus more valuable things? In the context of platform engineering, that more valuable work would be product managing the platform: talking with developers and adding in new features and services that help them and other stakeholders.
I like this paper, and I think it creates a good model for even thinking about making the case for a platform and doing some portfolio management of platform engineering. Linking up platform functions all the way up to business outcomes (the big chart above) is great, and in many cases just using that big chart to explain the role platforms play in the business is probably very helpful when you’re talking with the Director of No. If that chart grabs their attention, the next conversation is talking about each of the boxes, what they do, and why doing it in a platform engineering way is better, more reliable, and “cheaper” in the “do more with the same” sense.
The second model uses a large spreadsheet that tracks common developer activities, the cost of operations problems, and staff costs to show platform ROI. If you’re lucky, these are large spreadsheets with upwards of 50 numbers you need to input: salaries, the cost of hourly downtime, the number of applications running on the platform, the benefits of improving apps, and so on.
Once you “plug in” all these numbers, a chart with two intersecting lines usually shows up: one line is cost, and the other is benefit. At first, you’re operating in the red, with the cost line way up there. Within a year or two, the lines cross, and you’re profitable.
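The two-lines-crossing picture is just cumulative arithmetic: add up costs and benefits year by year and see where benefit overtakes cost. Here’s a sketch with entirely made-up numbers, standing in for the 50-odd inputs a real spreadsheet would have.

```python
# Hypothetical inputs, all in $k: a one-time build/migration cost, a
# recurring run cost, and a benefit that ramps up as more apps onboard.
upfront_cost = 900
yearly_run_cost = 250
yearly_benefit = [150, 450, 700, 800, 800]

cum_cost, cum_benefit = upfront_cost, 0
for year, benefit in enumerate(yearly_benefit, start=1):
    cum_cost += yearly_run_cost
    cum_benefit += benefit
    marker = "<-- in the black" if cum_benefit >= cum_cost else ""
    print(f"year {year}: cost {cum_cost}, benefit {cum_benefit} {marker}")
```

With these numbers, the lines cross in year four, which is exactly the kind of multi-year payback horizon that creates the executive-ownership problem discussed below.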
Gartner has a pretty good one for platforms which, of course, I can’t share. Here’s another example from Bartek Antoniak:
One line item I don’t see often is one-time-ish costs, like the cost of migrating apps to the new platform and training people. Even cooler - but hard to quantify - would be the future cost of tech debt in the existing platform and app model.
Getting all of the input numbers is the problem, once again. How do you measure "increased speed of software delivery" and "mitigated security and compliance risk," or something like "optimized use of infrastructure due to self-service and developer portals"? How do you trust those measurements, or even the more straightforward ones?
There's a good trick here though: if it's difficult to gather those numbers, chances are you have no idea what the ROI of your current platform is (the "do nothing" option when it comes to introducing platform engineering). I suspect this is how most organizations are. The Director of No is saying platform engineering is a bad idea...but has no idea how to quantify how well, or poorly, the current "platform" is doing.5
Filling out the giant ROI spreadsheet will probably drive how you think of and decide on platform ROI.6 Tactically, this means that you want to be the first one to introduce a complex model like this if you're in a competition to get a platform in place. This could be if you're battling internal competition (some other group has an opposing platform and/or the option is to do nothing), or you're a vendor selling a platform.
Whoever introduces the ROI model first gets to define the ROI model.
Like canonical ROI calculations, these models show you return over time, usually in three- to five-year terms. This can introduce an executive ownership problem. While the average tenure of CIOs is actually longer than most people joke about - four or five years, depending on the industry and geography - people move around between projects and groups in IT.
A positive ROI model assumes you’ll see it through to the end without changing it. So, if the “owner” of the model has shifted and given ownership to someone else, you may not stick to the original plan. There’s also the chance that people will just forget what the point of the ROI model is and, more importantly, the plans that go with it. Pretty soon, you’re making new ROI models. A good test here is to see how quickly you can find the current ROI model (or “business case”) that you’re operating with.
Instead of making a template for your ROI spreadsheet, you can aggregate the outcomes from several organizations. You still have The Big Spreadsheet in the previous example, but the point of the aggregate ROI is to show that the platform has worked in other organizations. The aggregate ROI is trying to convince you that the platform benefits are real and achievable.
Vendors like using these, of course, aggregating their customers. We put one of these out recently, done by ESG.
As ever, the problem with using this type of ROI is getting your input numbers. However, I think aggregate ROIs are good for both figuring out a model and figuring out a baseline for what to expect. Because it’s based on what other organizations have done, you have some “real world” numbers to start with. When vendors do it, these types of studies often contain quotes and testimonials from those customers as well.
You can hire Forrester Consulting to do their “Total Economic Impact” studies. Here’s a very detailed one from 2019 on Pivotal Cloud Foundry (now called Tanzu Platform for Cloud Foundry, or tPCF for short). Because they do these for multiple vendors, it’d be cool if they somehow aggregated all the aggregates. And I wonder if they use the same models for the same technologies?
You notice how I typed Forrester Consulting? That’s because it’s not “Forrester the industry analysts you’re thinking of.” Because you’re commissioning people to work on these TEIs (and other aggregate ROIs), it’s easy to carelessly dismiss them as paid for.
Sure, there’s certainly selection bias in these studies - you don’t hire them to analyze an aggregate of failures. But, these aggregate ROIs are still useful for proving that the platform works. That old TEI report interviewed four companies and based their model and report on them, same for the newer one. As with all the ROI examples, here, the aggregate ROI is also showing you an ROI model for platforms.
Us vendors have an obvious use for these PDFs: to show that our stuff is great! If you’re not one of us vendors, and you’re using these kinds of ROIs to get past the Director of No, I’d suggest looking at PDFs from rival companies and doing a sort of “aggregate of aggregates.” You’re looking to:
Prove the concept of platform engineering and the worth of platforms.
Show that it’s achievable at similar organizations - it’s not just something that Google or Spotify can do instead of “normals.”
Establish a baseline for results - we need to achieve results like these four other companies for it to make sense.
Create/steal a model - as with the last two ROI models, just having a model to start with is useful.
All of this started because someone asked me to help them put together a developer survey to show the value of platforms. A couple years ago I helped get the developer toil survey out. That survey doesn’t really address the value of platforms. You could use it to track ongoing improvement in your development organization, but attributing that to platforms, AI, or just better snacks in the corporate kitchen isn’t possible. I’d still like to know good survey questions that platform engineers would send out to application developers to gauge ongoing value.
Logoff
That’s enough for today! I’m already late for a call (tangentially on this topic!) so I didn’t even proof read the above. NEWSLYLETTERSSSSS!
In my experience, “ROI” in these conversations is not the strict definition of Return on Investment. It’s not like the ROI of an investment, or even the ROI of, say, moving physical infrastructure to virtual infrastructure, or moving on-premises apps to SaaS. Instead, as in this scenario, it’s something more like “convince me that we should change, using the language of money in an enterprise.” That’s why terms like “outcomes” and “value” are thrown around in ROI conversations. They add to the business bullshit poetry.
Before reading it, I had no idea this paper was sponsored by my work, VMware Tanzu. Fun!
There’s an interesting take on “efficiency” in this long pondering on why there’s now less ornamentation in architecture than the past. In theory, since it’s cheaper to produce building ornamentation due to, you know, factories and robots, it should be cheaper to put them on buildings. And yet, we don’t! The author more or less says it’s due to fashion and fancy, driven by “a young Swiss trained as a clockmaker and a small group of radical German artists.” This is pretty amazing when used as an analogy to tech trends. Major shifts in tech usage can often seem irrational and poorly proved - you’re usually going from more functionality and reliability, to less functionality and reliability…because the developers think it’s cool, or just doing “resume driven development.”
DORA metrics also have this problem, especially when you scale up to hundreds or, worse, thousands of applications. You’d think you could automate a lot of the basic collection, but there’s a certain - I don’t know - do metrics tell you what’s happening, or does measuring the metric make what’s happening happen? I’m not a quantum physicist or 20th-century management guru, so I don’t know what I’m talking about, I guess.
There’s a related thing you can do when the Director of No doesn’t know the ROI for doing nothing. You can do an end-to-end mapping of how software goes from idea to production, mapping out a pipeline, value stream, flow: whatever. Often, very few people know every step that happens, let alone how long each step takes or the wait-time between each step. Coupled with a general feeling that their app and ops teams are not doing enough or “working smart” enough, this analysis often motivates them to do something different.
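That mapping exercise reduces to two sums: the active time spent in each step, and the wait time between steps. A tiny sketch, with hypothetical steps and hours, shows why the result is usually so motivating: most of the lead time is waiting.

```python
# Hypothetical value-stream map: (step, active_hours, wait_hours_after_step).
steps = [
    ("write code",            40,   2),
    ("code review",            3,  16),
    ("security sign-off",      2,  80),
    ("change-board approval",  1, 120),
    ("deploy",                 1,   0),
]

active = sum(a for _, a, _ in steps)      # hands-on-keyboard time
waiting = sum(w for _, _, w in steps)     # time stuck in queues
lead_time = active + waiting              # idea-to-production total

print(f"active: {active}h, waiting: {waiting}h, lead time: {lead_time}h")
print(f"flow efficiency: {active / lead_time:.0%}")
```

With these made-up numbers, less than a fifth of the lead time is actual work, which is the kind of chart that makes a Director of No reconsider the “do nothing” option.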
There’s that observer effect problem again!
Just links and wastebook today.
Spring Boot 3.3 Boosts Performance, Security, and Observability - All these years later, Spring is still in wide use and still evolving.
‘You are a helpful mail assistant,’ and other Apple Intelligence instructions - Not that many, but interesting.
Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025 - ”At least 30% of generative AI (GenAI) projects will be abandoned after proof of concept by the end of 2025, due to poor data quality, inadequate risk controls, escalating costs or unclear business value, according to Gartner, Inc. ” // Also a chart with rough estimates for initial and ongoing costs: input for an ROI model. // The other take here is that you need to start slow and small with enterprise AI: no one knows what will work, what good business cases are, what customers (and channels/suppliers/employees) will rebel against, etc.
Nike: An Epic Saga of Value Destruction - The risks of shifting to a direct, Internet-based go-to-market. And, also, of focusing only on growing revenue from existing customers instead of also getting new customers: 'Obviously, the former CMO had decided to ignore “How Brands Grow” by Byron Sharp, Professor of Marketing Science, Director of the Ehrenberg-Bass Institute, University of South Australia. Otherwise, he would have known that: 1) if you focus on existing consumers, you won’t grow. Eventually, your business will shrink (as it is “surprisingly” happening right now). 2) Loyalty is not a growth driver. 3) Loyalty is a function of penetration. If you grow market penetration and market share, you grow loyalty (and usually revenues). 4) If you try to grow only loyalty (and LTV) of existing consumers (spending an enormous amount of money and time to get something that is very difficult and expensive to achieve), you don’t grow penetration and market share (and therefore revenues). As simple as that… ' // A little more commentary here.
Where Facebook’s AI Slop Comes From - So cyberpunk! // ”the YouTuber who was scrolling through images of rotting old people with bugs on them.” // Related: ‘we don’t need the term “slop”. Consumers have decided that “AI” in its entirety is bullshit.’
Teaching to the Test. Why IT Security Audits Aren’t Making Stuff Safer - Bullshit Work in enterprise security. // Plus, why not start with basics before going advanced: ‘The world would be better off if organizations stopped wasting so much time and money on these vendor solutions and instead stuck to much more basic solutions. Perhaps if we could just start with “have we patched all the critical CVEs in our organization” and “did we remove the shared username and password from the cloud database with millions of call records”, then perhaps AFTER all the actual work is done we can have some fun and inject dangerous software into the most critical parts of our employees devices.’
The Six Five: Advancing DevOps: Infrastructure as Code, Platform Engineering and Gen AI - “we see in our latest research that 24% of organizations are looking to release code on an hourly basis, yet only 8% are able to do so.”
Why Is Demand Marketing An Obstacle To Its Own Success? - ‘Too many marketing subfunctions (demand/ABM, field marketing, customer marketing, digital, and events) create strategies unique to their function, independent from the others. Marketers often say that they have a “unified” plan, but it’s more like a PowerPoint deck with “chapters” for each team’s individual plan. This approach prevents marketers from orchestrating programs to reduce overlap and waste, and what’s worse, it has a direct, negative impact on buyers and customers.’
The Prompt Warrior - Posts on prompts for all sorts of things.
fabric/patterns - Whole bunch of prompts on a range of topics, even D&D!
Why CSV is still king - We have a running joke on the podcast that every (enterprise) app needs CSV export. It’s only 10% a joke.
Epic corporate jargon alternatives - Poetic alternatives to business bullshit jargon.
2025 Demand and ABM Budget Planning Guide: Do Better With Less - Enterprise software marketing budgets mostly flat, if not less: ”On the surface, it may appear as if most budgets are increasing, as 82% of global B2B marketing decision-makers report their budgets being increased by 1% or more. But once you adjust for inflation, it’s the same old story, as only 35% of organizations will see a real increase in budgets (with 31% of the 35% saying that the increases would be in the 5–10% range and 4% saying that their budgets would increase by 10% or more).”
This is in our small, neighborhood store. The larger ones have even more!
“Kerfulle.”
“gas station sushi” ROtL, 547.
This is a really well put together presentation, with good content. It manages to introduce one, simple idea (and notice how she returns to/reminds you of it at the end), and yet not make it TED-talk level surface-level-simple. It gives you practical things to do if you want to work on “developer productivity.” And, it’s a perfect example of giving a vendor-pitch that doesn’t seem like a vendor pitch (one of the chief skills of a thought leader): customer cases with ROI, mention of the product being sold, and even screenshot-demos! It’s also re-usable and mine-able for EBCs, sales meeting, etc.
Here’s a presentation recording going over our recent Kubernetes survey. I’ve mentioned it several times here, but there are some new charts (like the above) and commentary in this talk I did with Rita and Danielle. Check it out for free on LinkedIn.
1 cup toilet chemical solution, three cups water. Add every three days or so.
This year it’s gonna be beef rib buffet at SDT club.
”Seems like something that Netflix would try to turn into a series and then cancel it three seasons in.” Here.
It’s really hard to do “everything” of anything.
“There are probably plenty of web-based and app based dice rollers already, but as they say, there are also plenty of love songs and people still continue to write them!” Here.
The beauty of concrete - "ornament survives in the mass-market housebuilder market because the people buying new-build homes at this price point are less likely to be influenced by elite fashions than are the committees that commission government buildings or corporate headquarters. The explanation, in other words, is a matter of what people demand, not of what the industry is capable of supplying: ornament survives in the housing of the less affluent because they still want it.”
A note on the EU AI Act - “The requirements look entirely reasonable considering these products are being positioned to become central to modern software – they’re effectively positioned to become the entirety of modern computing.” // And, this pairs well with this claim: “Sales workflows will fundamentally change. With AI, sales teams will no longer need to spend endless hours researching new leads or prepping for calls – AI will be able to do it in seconds. Reps won’t have to suss out the readiness of potential customers because AI will have automatically compiled a ranked list of primed buyers, and will keep it constantly updated. Need personalized marketing collateral for a deal? Your AI wingman will produce whatever assets you need and feed you live tips while you’re on a call to help you close.”
Debugging Tech Journalism - Covering a handful of “tech companies,” focusing on the problems.
The future of healthcare: Why enterprises must embrace AI innovation - Some use cases and applications for AI stuff in healthcare.
From Burnout to Balance: AI-Enhanced Work Models for the Future - ”Nearly half (47%) of employees using AI say they have no idea how to achieve the productivity gains their employers expect”
Words to Avoid and Replace - Some good Business Bullshit word translations and optimistic-ization.
Why Return-to-Office Mandates Aren’t Worth the Risks - From Gartner! But, not much of a slam dunk either way as this promotional post doesn’t put out many numbers pro or con. “The benefits prove to be modest at best, and amid a rising well-being crisis, waning trust between employees and their employers, and a competitive talent market, it’s high time to ask whether the benefits of RTO mandates are worth the risks.” And: “High-performing employees report a 16% lower intent to stay in the face of on-site work requirements.” // There’s a lead-gen’ed PDF I’ll have to look at.
The CrowdStrike Outage and Market-Driven Brittleness - ”Last week’s update wouldn’t have been a major failure if CrowdStrike had rolled out this change incrementally: first 1 percent of their users, then 10 percent, then everyone. But that’s much more expensive, because it requires a commitment of engineer time for monitoring, debugging, and iterating. And it can take months to do correctly for complex and mission-critical software. An executive today will look at the market incentives and correctly conclude that it’s better for them to take the chance than to ‘waste’ the time and money.”
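The staged rollout the quote describes is straightforward to sketch. Here’s a minimal, hypothetical version of the idea (not CrowdStrike’s actual mechanism): deterministically hash each host into a bucket, then widen the eligible percentage ring by ring (1% → 10% → 100%) only after the previous stage looks healthy. The `salt` and host names are made up for illustration.

```python
import hashlib

def in_rollout(user_id: str, percent: float, salt: str = "cs-update-2024") -> bool:
    """Deterministically bucket a user into the first `percent` of the population.

    Hashing (rather than random sampling) keeps each host's bucket stable
    across checks, so the same 1% sees the update every time, and everyone
    in the 1% ring is automatically inside the 10% ring.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    return bucket < (percent / 100.0)

# Widen the ring only after the previous stage looks healthy.
for stage in (1, 10, 100):
    cohort = [u for u in (f"host-{i}" for i in range(10_000)) if in_rollout(u, stage)]
    print(f"{stage}% stage: {len(cohort)} hosts")
```

The expensive part the article points to isn’t this gating logic, it’s the human loop between stages: watching telemetry, deciding the stage is healthy, and being willing to halt.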
Don’t miss these VMware Tanzu sessions at Explore 2024 - What’s important to notice here are the number of companies relying on Cloud Foundry (the Tanzu Platform for Cloud Foundry, or tPCF as I think of it) to run their business. More so, that most have been doing so for years. Our PaaS just works. You don’t need to figure out how it works and build a platform out of it, it’s already a platform, a proven one. I’m looking forward to all the real world proof points, cases, and advice in these talks.
Is Cloud Native a Vibe? Power Users Weigh In - “Cloud native is more than just a way of building apps; it’s a lifestyle and a state of mind. A company that has accepted this fact is working along agile principles with single teams having responsibility for the full lifecycle of their products, which means they own the product. Often, companies like this have flat hierarchies and value the individual knowledge of every employee to drive the products forward. Surely this is only possible with technology that supports these modern operating models, and it’s much more than just the technology.” -Jürgen Sußner, enterprise architect at DATEV
Cloud Foundry in Action: Real Customers Stories from Cloud Foundry Day - ”We run a tremendous number of applications on top of Cloud Foundry. The ones that really impress me might surprise some people. It’s not just Black Friday sales but also the Amazon Prime events. When these events kick off, there’s an incredible surge in load across everything we care about, including credit card points processing, all handled by Cloud Foundry.” - Tom Brisco, JPMorgan & Chase Co.
The employment effects of a guaranteed income - “1.3–1.4 hour per week reduction in labor hours.” // Yes, and: if we buy into the premise that a lot of work is Bullshit Work, this isn’t enough; we need more like six to eight hours of wasted time converted to living, not sitting in inefficient meetings.
Also, check in on the past two weeks of Software Defined Talk for tech news and commentary: episode 477 and episode 478. I’ll be back on it this week.
Talks I’m giving, places I’ll be, and other plans.
This year, SpringOne is free to attend and watch online. Check out Josh’s pitch for the event. There’s an on-site conference as well at Explore if you’re interested. But, for those who can’t, now you can watch all the fun!
SpringOne/VMware Explore US, August 26–29, 2024. DevOpsDays Antwerp, 15th anniversary, speaking, September 4th-5th. SREday London 2024, speaking, September 19th to 20th. VMware Explore Barcelona, speaking(?), Nov 4th to 7th.
Discounts. SREDay London (Sep 19th to 20th): get 20% off with the code SRE20DAY. And, if you register for SpringOne/VMware Explore before June 11th, you’ll get $400 off.
This summer I looked around at new jobs a bit - it’s good to do that once a year to see what’s going on, make sure you’re managing your career fully and actively, catch up with people, and figure out if you like your current job. I ended up staying at my job.
During that, I talked with some analyst firms. This gave me some ideas for a long piece I wrote on how big industry analyst firms could change their business model. Well, more like add in more Internet- and “the kids”-friendly methods of publishing and business generation. There’s a lot of potential for the big firms to put out more of their content for free in new channels (like YouTube, newsletters, and podcasts), introduce some mid-tier pricing, and start addressing looming problems of their customer-base changing. There’s what looks like a textbook case of the innovator’s dilemma at play.
But, I need to go and make sure they don’t already do all this before I say they should do it. Hah-hah-hah-hah! The Free Advice Racket!
//
I’m back from a two week vacation in Finland with family and friends. We rented an RV and drove from Helsinki all the way north to Alta, Norway. Finland is huge! We only drove 4 or 5 hours a day, so it took a while to get up there. After living in the Netherlands for over six years, seeing all of the pure nature there is almost overwhelming: hills, mountains, fjords, trees! Anyhow, it’s great. I’ll post some photos on Instagram, here’s the first batch.
As far as I can remember, it’s the first vacation I didn’t want to end. I mean, vacation is great, but when you’re with three kids, it’s more like a “trip,” and getting back to work can be relaxing.
This is just my, personal take on what Tanzu is, not an official statement. But I think it’s a pretty good one! :)
For a more in-depth look, check out Dekel’s recent videos.
Also, if you like the full on corporate sheen take, take a look at the Tanzu Platform Solution brief.
“the ontological trick of discursive reductionism.” Here.
Working title: “Working Hard at Nothing - Why Productivity Is Ruining Our Lives.” // I’ve done a lot of chatting this week in a category I’m thinking of as “death by productivity.”
Right now, it’s summer in Finland which is like winter in LA.
“Reluctant Refusal: When a creature offers the leprechaun the chance to partake in merriment or revelry such as a song, a dance, or a good meal, the leprechaun must succeed on a DC 15 Wisdom saving throw or have the charmed condition for 24 hours. While charmed in this way, the leprechaun partakes of the offering, treats the creature as a trusted friend, and seeks to defend it from harm. The charmed condition ends if the creature or any of its allies damage the leprechaun, force the leprechaun to make a saving throw, or steal from the leprechaun.”
Related: “Looking for suggestions for how to create a layer of grain in a silo. Just doing it bagged for now and I’m not satisfied.” // I showed this to my wife and she was shocked that I thought it was funny: “It would look better with a layer of grain.”
“a much-remarked-upon toe ring” - it was a simpler time.
“The employees murmur what but everyone seems to accept it.” (Translated to English from the Dutch.)
“vibecession” // sounds like “uninformed decisions that things are worse than they are”?
That five blind men and the elephant parable is, like, 1,500+ years old!
Consumer tech companies drink their own champagne. Enterprise tech companies make dog food for owners to buy to feed their dogs. They do not eat the dog food, and rarely own dogs.
“The Lock-in Boogeyman.”
”Join the Czech and Slovak mechanical keyboard community in celebrating our hobby and come check out just how deep the rabbit hole goes.” From the Mechanical keyboard meetup.
“no-collar American food.” He didn’t like it.
J: “I didn’t have to explain. I could have just spilled something on their tie and ran off.” M: “That’s what they call ‘the podcaster’s exit’!” - RotL #544
L’esprit de l’escalier: “the predicament of thinking of the perfect reply too late”
On this week’s Software Defined Talk, Brandon and I talk a lot about wasted time at work, so-called “bullshit work,” and how management might fix it: “This week, we discuss Google possibly buying Wiz, why "meta work" leads to too many meetings, and why it took forty years to get spell check in Notepad. Plus, we share some thoughts on enjoying your vacation.” This was a long episode (an hour and 40 minutes). We discussed writing and using AI to write in the after show. You can also watch the unedited video of the recording.
White-Collar Work Is Just Meetings Now - “Gloria Mark of UC Irvine has found that workers require an average of 25 minutes to return to their original task after an interruption. By this measure, a 30-minute meeting is, for the typical worker, best thought of as a one-hour detour.” // For 25+ years, a significant part of developer productivity has been the simple idea of “stop interrupting me.”
The Silicon Valley Would-Be Vice President - Those capital gains taxes always stick in folks’ craw.
Next Gen Application Delivery: Getting Started with Intelligent Apps Powered by AI - “Our numbers show that 65% of all enterprise developers use Java, making it essential for business-critical applications. Development frameworks, like Spring AI, enable Java developers to interact with AI models and vector databases through the framework, rather than having to learn new skills.”
15 hours a week - ‘And yet, for most of that time, I’ve continued to find that exertion harder to exercise than most, while my flair and ability have gained me attention and love, I’ve seen less of whatever today’s equivalent is of “a satisfactory grade in the public examinations”’
VMware’s ‘Private Cloud’ Solution Emerges Under Broadcom - “Previously, five different business entities were responsible for delivering our product. We have now consolidated these sectors into one organization with a unified product team, global support team and a single management direction. This alignment ensures a critical focus on product development and seamless delivery.”
Talks I’m giving, places I’ll be, and other plans.
This year, SpringOne is free to attend and watch online. There’s an on-site conference as well at Explore if you’re interested. But, for those who can’t, now you can watch all the fun!
Our analysis of the State of Cloud Native Platforms 2024 survey, online, speaking, July 24th, 2024. SpringOne/VMware Explore US, August 26–29, 2024. DevOpsDays Antwerp, 15th anniversary, speaking, September 4th-5th. SREday London 2024, speaking, September 19th to 20th. VMware Explore Barcelona, speaking(?), Nov 4th to 7th.
Discounts. SREDay London (Sep 19th to 20th): get 20% off with the code SRE20DAY. And, if you register for SpringOne/VMware Explore before June 11th, you’ll get $400 off.
Don’t forget to check out the talk I’ll be part of next week, July 24th on our recent Kubernetes/cloud native platform survey. There’s a few charts in there that don’t show up in the actual report or in my blog post on it, so unique charts for you!
Register to watch it for free here, or on LinkedIn. Also on YouTube, if you prefer that.
//
I’m off to vacation for a few weeks. Finland!
When I get back - which seems like forever from now - I feel like it will be a new year.
Kubernetes getting out of appdev improvement slump
This is the chart I look forward to in our annual State of Kubernetes report1:
It’s been a rocky few years as Kubernetes has gone mainstream. I pay attention to the “shortened software development cycles,” which you can see started going down. It’s been going up for the past two years, so that’s good. As more “normals” start using Kubernetes, the tolerance the early adopters have erodes. That’s my theory at least.
You can check out my analysis of the survey in this week’s blog post of mine. And, we have a webinar coming up next week looking at the survey as a whole.
Register to watch it for free here, or on LinkedIn. Also on YouTube, if you prefer that.
Mostly an enterprise AI edition.
CIOs resist vendor-led AI hype, seeking out transparency - There’s a lot of AI de-hyping now. First, you have the “they’re stealing our IP” stuff. Second, you have the “no one has come up with (enterprise) apps for it yet” sentiment. Thankfully, there’s no “AI will kill us” vibing.
GenAI or Die - Big time pro AI for ERP here, with notes of “what’s ERP done for me lately.”
The “Little Tech Agenda” is Just Self-Serving Nonsense - FTC: “Your therapy bots aren’t licensed psychologists, your AI girlfriends are neither girls nor friends, your griefbots have no soul, and your AI copilots are not gods. We’ve warned companies about making false or unsubstantiated claims about AI or algorithms. And we’ve followed up with actions”
Measuring the impact of Developer Relations on Revenue - Figure out how to gather leads, and figure out how to get attribution in the sales pipeline. The second is difficult, especially if your company is already bad at it. But, it’s important to figure out.
Are platforms pointless? - ‘So much of “platform engineering” treats the application process itself as the main event. Sure, great, you make it easy for me to run hundreds of Nginx’s with whatever-the-fuck behind them, and restart and blue-green deploy and autoscale. Great. That’s not my performance bottleneck.’ // He says: (1) It’s just a way to avoid Terraform, and, (2) database management is a more important problem.
This paper by Torsten Volk has a good diagram of the point of platforms, that is, “the outcomes,” the benefits.
Also, the hassles of building your own platform:
Talks I’m giving, places I’ll be, and other plans.
This year, SpringOne is free to attend and watch online. There’s an on-site conference as well at Explore if you’re interested. But, for those who can’t, now you can watch all the fun!
Our analysis of the State of Cloud Native Platforms 2024 survey, online, speaking, July 24th, 2024. SpringOne/VMware Explore US, August 26–29, 2024. DevOpsDays Antwerp, 15th anniversary, speaking, September 4th-5th. SREday London 2024, speaking, September 19th to 20th. VMware Explore Barcelona, speaking(?), Nov 4th to 7th.
Discounts. SREDay London (Sep 19th to 20th): get 20% off with the code SRE20DAY. And, if you register for SpringOne/VMware Explore before June 11th, you’ll get $400 off.
I am continuing to enjoy “Hi-fi relaxation.”
//
Twitter has been slow here in Amsterdam. And I have fiber! Is this some kind of petty revenge, or just standard fail whale problems?
We re-titled the survey “State of Cloud Native Platforms,” but much of it is still just Kubernetes.
I’ve been looking around for estimates on how many custom written apps run on private vs. public cloud. There’s a lot of coverage and estimates of people using multiple clouds, but finding breakouts is tough. IT IS VERY HARD TO FIND!
Here’s what I’ve found recently:
"According to Forrester’s Infrastructure Cloud Survey in 2023, 79% of roughly 1,300 enterprise cloud decision-makers surveyed said their firms are implementing internal private clouds.” Here. // This doesn’t answer my question, but is useful.
Spend is a bad proxy for workload placements, but: "IDC forecasts that global spending on private, dedicated cloud services — which includes hosted private cloud and dedicated cloud infrastructure as a service — will hit $20.4 billion in 2024, and more than double by 2027. Global spending on enterprise private cloud infrastructure, including hardware, software, and support services, will be $51.8 billion in 2024 and grow to $66.4 billion in 2027, according to IDC." Ibid.
IDC Cloud Pulse from last year: it’s something like 40% to 50% public cloud, but this also includes SaaS, which is not exactly what I’m interested in. (See the chart.)
The IDC numbers are pretty good. I’d want to redo them and throw out COTS apps and SaaS, but good enough.
So, what’s the split between public and private cloud? I don’t know: 50/50? But, again, this doesn’t track organizations’ custom-written apps. I could see it going more in either direction.
Furthermore, if you went off what the Goldman Sachs CIO surveys imply (mentioned last episode), it’d be more like 70% private cloud, 30% public cloud.
I think I’ll start going with a mildly uncertain 50/50, with a percent or so shifting to public cloud each year.
Still, if I were to say “half of the enterprise IT world is largely ignored by the chattering class,” you’d hopefully think “well, that’s weird.”
This week, we discuss Mary Meeker's AI & Universities report, the CD Foundation's State of CI/CD Report [see below], and share a few thoughts on DevRel. Plus, Coté gets fiber and is forced to watch soccer.
Listen to it now! (You can also watch the unedited video recording.)
Speaking of estimates and surveys, a tale of being careful with surveys:
Slashdata’s survey reports that 30% have used “source code management” in the last 12 months. This means that 70% of people haven’t checked in their code for a year or more, or at all? There is more nuance to it than that, but that’s what’s implied.
The 2023 JetBrains survey reports that 76% of people “regularly” use a “source code collaboration tool.” This means that 24% of people don’t “regularly” check in code?
The Stackoverflow 2022 survey says that 95.69% of people use version control (it doesn’t say the frequency of interaction). This means that 4.31% of people do not use version control. They didn’t track this in the 2023 survey.
¯\_(ツ)_/¯
Register to watch it for free here, or on LinkedIn. Also on YouTube, if you prefer that.
“Clicks to Bricks.”
“IDC Links & IDC Blinks.”
“Beloved Austin local Leslie Cochran.” Here.
I used to listen to The Lounge Show every Saturday morning. It’s still there! Also, archives here and here.
“An enquiry, based on the author’s intimate diary, into the conditions for obtaining happiness and a personal set of values.” Here, for this.
If not better, at least the same. The enterprise software buyer’s lament.
“the riffiest of the raff.” Here.
I’m usually not a “chill out and watch video of people just doing random shit” guy, but I’ve really been liking MrT’s breakfast service marathons. He makes an English breakfast burrito, which I do not agree with, but I’m not here to yuck your yums.
Understanding the Rise of Platform Engineering and Its Relationship with DevOps - Printer-friendly - US50199923 - Platform engineering definition from IDC: ”the discipline of designing, building, and maintaining a platform of curated tools, services, and knowledge, called an IDP, that enables development teams' self-service access to the resources needed to build, test, and operate digital solutions. Platform engineering aims to optimize software delivery by removing friction from the developer experience by offering blueprinted, supported approaches to building and deploying software. The platform team, made up of platform engineers, is responsible for building and maintaining the IDP.” // A key point is self-service, you know, fewer tickets. // This seems like a lot for one team to take on.
Does Social Media Cause Anything? - It’s difficult to collect data about social media’s effects (good or bad). // “the ever-present spiderweb of the social graph, the network of accounts, RTs and likes that lets me understand not only what someone thinks but what everyone else thinks about them thinking that.”
The Product Model in Traditional IT - ”Outcomes vs Predictability” is good framing for switching from traditional IT to “digital transformation.”
Talks I’m giving, places I’ll be, and other plans.
Our analysis of the State of Cloud Native Platforms 2024 survey, online, speaking, July 24th, 2024. SpringOne/VMware Explore US, August 26–29, 2024. DevOpsDays Antwerp, 15th anniversary, speaking, September 4th-5th. SREday London 2024, speaking, September 19th to 20th. VMware Explore Barcelona, speaking(?), Nov 4th to 7th.
Discounts. SREDay London (Sep 19th to 20th): get 20% off with the code SRE20DAY. And, if you register for SpringOne/VMware Explore before June 11th, you’ll get $400 off.
I’ve been thinking about an addition to my Bullshit Business Dictionary entry for “executize.” Maybe something like “the pre-read.” In theory, you put together a memo, document, maybe slides, that you send to an executive ahead of a meeting or for planning. You’ll put a lot of work into this, often with an executized summary at the front (bullet points), and then many pages of longer notes, research, etc. Or, you know, a “slide-bank” after the closing slide.
In my ~30 years of experience, the pre-read is actually read only 30% to 50% of the time. There are many executives who will never read it. They want to sort of “have the meeting in the meeting.” It’s “sort of” because, if you’re doing that, you will have read the pre-read so that you can discuss your reaction to it, ask questions, and focus on making a decision.1
You may think this means you don’t need to do a pre-read: who knows what the executive wants, what they’ll ask for, what will be in their head at the moment. Why waste time on things that never get used? However, I think doing an extensive pre-read is important so that you know what to say and suggest during the meeting, at the very least so that you have context and can form opinions.
Also, there’s a chance that your pre-read will be converted to a “post-read” if the executive ends up being interested in the topic.
All that said, if you’re operating in an unread pre-read environment, what’s more important is to be spontaneous and use improv tricks to kick around ideas - the old “yes, and” thing. There’s a view that working for an executive means you’re helping them solve the problems they have, not steering them towards the problems they should be focusing on and the solutions you think are right. I think that’s mostly right; it’s a hard thing for nerds to reconcile.
In the “my job is to augment the executive, not help the corporation achieve outcomes/etc.” mode of operating, you might want to save your energy and time for the post-meeting work, and just do a small amount of pre-read work. Indeed, if you keep things unclear/high-level, you can likely achieve that “executize” level of bullet points right away.
There are other executives who will read the pre-read and/or expect a very direct, structured in-meeting “read out.” These executives usually follow the American style of just wanting to know the conclusions, the exact actions to take next. “Application-first reasoning,” they call it. They may or may not care why, and will instead use intuition (or trust in the process) to know that something good will happen as a result of taking actions. (The opposite of this is “principles first,” where you build up a case right-side-up pyramid style.) Anyhow: figure out your executive’s style, there are many types.
I don’t know. I’ve been trying to sort out what platform engineering is for a while.1 It matters a lot for my job! While I haven’t verified it, it seems like it started as a marketing campaign from Humanitec and then took on a life of its own. Now the likes of Gartner have practice areas for it and are hiring analysts to cover it.
This means that people in enterprises are trying to sort out what to do about platform engineering. Surely they need it! The category is now loaded with everything, pulling in all the stuff, including internal developer portals, CI/CD (I don’t think people [read: vendors] explicitly discuss foundational practices like build automation anymore, but I suspect actually putting CI/CD in place is the primary driver of the benefits people get with platform projects), getting Kubernetes to work, and all the usual cloud native platform stuff.
That is, what platform engineering is has become too expansive and is, ironically, driving too much cognitive load. Here’s my simplification:
And if you don’t have 2 minutes and 36 seconds to spare, here’s a shorter version.
“the AI-infused journey” Gartner.
“Audience Acquisition Rep II”
The AI summer - Several charts of technology adoption, including the Goldman chart that shows CIOs’ intentions to move workloads to public cloud are always high, and not well executed. // Un-clickable citations, though.
Gartner Survey Finds 64% of Customers Would Prefer That Companies Didn’t Use AI For Customer Service - “Many customers fear that GenAI will simply become another obstacle between them and an agent. The onus is on service and support leaders to show customers that AI can streamline the service experience.” // I mean, that’s the point right: otherwise “productivity” wouldn’t improve. The hope is that the AI things are better at solving problems. The problem is that you usually need a human to actually change things, make things happen, and deal with exceptions. Otherwise, you get stuck on an accountability sink.
DevRel’s Death as Zero Interest Rate Phenomenon - A list of how to show marketing value. // This whole time all the devrel people just needed to integrate into the finely tuned, perfectly functioning, incredibly accurate, much beloved, and simply existent customer journey management CRMs out there. Also, they should have been listening to all the feedback the sales people gave them about how their activities helped close deals and clamp down on churn. And if they had just engaged with the product people who were eager to work with them! Instead, just think of all the money that was wasted on teal-haired people’s sticker collections. // But, yeah. Yes, and: You can definitely always focus on selling the product more.
Jevons paradox - When you automate something very valuable (or just “costly”), people demand more, and more complex product. This pulls in more need for labor that can do the more complex work. Hopefully.
This is a bit of a weird recording since there’s no slides (“technical difficulties”), but if you want to see the second version of my “Why We Fear Change” talk, here it is, from SCaLE 21x. The first time I gave it (at cfgmgmtcamp) I ran out of time because I’d packed some of my Business Bullshit words in as interludes for fun.
//
I’ve been hunting down “private cloud” usage numbers - just anything, at this point. Specifically, I want to know which in-house apps run where. I don’t care about enterprise applications like ERP systems, nor SaaS apps: just the applications that organizations code and run themselves.
There aren’t that many out there in the easy-to-find, free surveys. Essentially, what you see is that something like 70% to 80% of people use multiple clouds - various public ones, on-premises, “private cloud,” etc.
There’s this from Slides Benedict:
If you read this chart, you’d say “something like 25% to 30% of ‘enterprise workloads’ are running in public cloud. Thus, 70% to 75% of apps are running on-premises.” Does that seem right? If it is, the lack of conversation around on-premises is bizarre. That’s the majority of IT, and movement away from it is very slow.
But, it’s hard to have confidence in such a contrary statement because I can’t find the surveys cited to make sure (a) I focus just on apps, not SaaS, etc., and (b) I’m reading it right: the geographies and industries/demographics (is it 100 F500 CIOs, or just randos who answered a survey online?), etc.
In large organizations, this isn’t too much of an insight: they’re so large, so long-lived, have so many geographic groups that have their own IT stack, not to mention both centralized/planned IT and YOLO line-of-business IT, and have acquired so many companies…that of course they have everything. What I’m more interested in is how many apps are in the public2 cloud versus not.
I just got a pile of recommendations from people, so perhaps I’ll have more to report back.
Here’s an unpublished video I did a few weeks ago thinking through it. It was too jumpy, and more of a draft.
No one really says public cloud anymore, just cloud. This is another sign of how little attention is put on private cloud, on-premises.