Travel Spending On Track To Return To Pre-Pandemic Levels By End Of 2024 - “Global travel spending is roaring back and will fully recover to pre-pandemic levels by the end of 2024, surpassing $2 trillion.”
Just fun finds and links today.
“FWD: RE: radioactive fungus email from grandma” Here.
“I don’t know about you, but I think a campaign setting ruled by evil angels and their witch-wives, populated by giants (perhaps not 3,500 metres tall) who eat one another and human beings, and who have sex with animals to produce many weird varieties of beastman, is one that somebody could do a lot with.” Here.
“AI enables action without thinking.” Here.
“Agent Double-O Soul, baby.” Edwin Starr.
How We Migrated onto K8s in Less Than 12 months - Always hide the yaml: “Having users define services directly in YAML can be confusing. Instead, we worked to define a golden path for users and allow customization for special cases. By being explicit about what users can and should customize—and otherwise enforcing consistency by default—you’ll save users time and energy while also simplifying maintenance and future changes.”
5 Lessons For Building a Platform as a Product - They’re doing a good job trying to evolve the Pivotal Cloud Foundry philosophy of platforms. // “I talked to a CTO at one of the world’s top banks, who explained that he loved what Cloud Foundry could do but wondered what would work for the other 99% of workloads he had responsibility for.”
Who uses LLM prompt injection attacks? Job seekers, trolls - ‘“At present,” Kaspersky concludes, “this threat is largely theoretical due to the limited capabilities of existing LLM systems.”’
AI or bust? Only one part of US tech economy growing - “Assuming the bubble does not burst, S&P forecasts global AI spending to grow by more than 20 percent through 2028, when it is estimated to account for 14 percent of total global IT spending, up from 6 percent in 2023.”
How to go-to-market: Measuring Marketing Value - “The key areas that your marketing team can drive impact for the business are in Awareness, Engagement, and Pipeline.”
Some local under-the-bridge graffiti:
Creating ROI models for platform engineering is difficult. Here are three examples of approaches I’ve come across recently.
You’re trying to convince your organization to put an app platform in place (probably either buying one or building one on top of Kubernetes), to shift your ops team to platform engineering (just after HR finally changed titles from “Systems Analyst II” to “DevOps Engineer”!), or, if you’re like me, to sell people a platform.
“Yeah, but what’s the ROI for it?” the Director of No responds. What they mean by that is “convince me that this change is going to have benefits that we can represent as money, either saved or gained.” A variation is “show me that what you’re proposing is cheaper than the alternatives, including the alternative of doing nothing.” That’s probably more of a “Total Cost of Ownership” (TCO) analysis. Indeed, ROI and TCO models are often used the same way, if not the same spreadsheets. This kind of analysis is also often called a “business case.”1
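To make the distinction concrete, here’s a minimal sketch in Python. All the dollar figures are invented for illustration: TCO just totals what each option costs over a period, while ROI expresses the net gain of one option over another relative to its cost.

```python
# Hypothetical numbers for a three-year platform "business case".
def roi(total_benefit: float, total_cost: float) -> float:
    """Classic ROI: net gain relative to cost, as a percentage."""
    return (total_benefit - total_cost) / total_cost * 100

def tco(acquisition: float, annual_operating: float, years: int) -> float:
    """Simple TCO: up-front cost plus operating costs over the period."""
    return acquisition + annual_operating * years

platform_tco = tco(acquisition=500_000, annual_operating=200_000, years=3)
do_nothing_tco = tco(acquisition=0, annual_operating=450_000, years=3)

print(f"Platform TCO over 3 years: ${platform_tco:,.0f}")        # $1,100,000
print(f"'Do nothing' TCO over 3 years: ${do_nothing_tco:,.0f}")  # $1,350,000
# Treat the avoided "do nothing" spend as the benefit of the platform option:
print(f"ROI of switching: {roi(do_nothing_tco, platform_tco):.0f}%")  # 23%
```

The Director of No’s “including the alternative of doing nothing” is exactly the second `tco()` call: the do-nothing option has no acquisition cost but, presumably, higher running costs.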
This is especially true in the post-ZIRP world. When money was “free” and G2000 companies were deathly afraid of Tech Companies, they’d change how they operated based on the capacities they gained, not just on an Excel spreadsheet filled with cash-numbers. Those were good times!
Showing the ROI of a platform is difficult. I haven’t really come across any models that I like, and I’ve seen many of them.
The problem is that platforms don’t generate money directly, so you have to come up with a convincing model that shows how platforms contribute to making money.
Let’s start with the benefits of platforms, and see if we can stick some money to them.
The benefits of platforms are explained in terms of either:
Developer productivity - which leads to improving how an organization can use software to run.
Operations productivity - removing the “toil” of day-to-day management and diagnosis of production, but also reducing the amount of time (and, thus, people) needed to manage the platform.
“Enterprise grade” capabilities - ensuring security, compliance, scalability - all the other “ilities.”
There’s a fourth category when a platform is a tool in an overall program: usually migrating from on-premises to public cloud or modernizing applications. We’ll call this the “enabler.”
These are valuable things, but I’m frustrated with them because they don’t link directly to business outcomes, things like: making more money (customer facing), performing a government service in a reasonable way (low cost and good citizen experience), or running a company better (internal operations).
That’s because platforms are just “enablers” of applications. And it’s the applications that directly create those benefits, that “make the money.”
Here are three approaches I’ve come across recently that are representative of doing ROI, really, for any “enabling” technology.
In the paper “Measuring the Value Of Your Internal Developer Platform Investments,” Sridhar Kotagiri and Ajay Chankramath (both from ThoughtWorks)2 propose three metrics and an overall way of thinking through platform ROI. This is the most thought-provoking, nuanced/complex/comprehensive, intellectually juicy, and thus all around useful ROI model of the three I’ll go over.
First, they have this excellent chart of linking platform capabilities to business outcomes:
A chart like this is great because it does its primary goal (showing how platform capabilities link up to business benefits) and also defines what a platform does. Here, the three things that directly give you ROI are CX (“customer experience,” I assume, which I’d call “good apps”), innovation (introducing new features, ways of working, ways of solving jobs to be done, and, thus, selling products and services), and cost efficiencies (spending less money).
Cost efficiency is something you could deliver directly with a platform. It could cost you less in licensing and cloud fees, it could consume less underlying SaaS, it could require fewer people. The first two are fine and provable. The third is where ROI models get weird.
If you’re doing an ROI model based on people working more efficiently (“productivity”) the assumption you’re making is that you’re going to get rid of those people, reducing the amount of money you spend on staff. But are you? Maybe long-term you’ll consolidate apps and platforms and then, a year or so out, lay off a bunch of people, realizing that benefit. If this is your goal, you’ll also need to contend with those future-fired employees reading the writing on the wall and saying “why would I tie my own noose?” and deploying enterprise-asymmetric psyops counter-measures.
Historically, the idea that automation is going to reduce staff costs has been dicey. You encounter the Jevons Paradox: the cheaper it is to do something, the more of it people will do, often in excess.3
Thus, the more clever thing to do with productivity is to talk about how you can now do “more with the same.” You can give developers more time to work on more features, driving “innovation” and “CX.” Your operations people can now support more apps. Your cost of adding new stuff is cheaper. When you add ten more apps, you don’t need to add another operator or more developers because your existing staff now have more time available.
But, then you’re back to the problem of platform ROI: you’re talking about capabilities you get. And, until those capabilities are “realized,” you won’t know if your platform was useful. Also, there are so many things that could go wrong - or right! - that might be the more direct cause of success.
Nonetheless, I think the framing of “we never have enough time to do everything the business wants, right? If we had a platform, we would!” is pretty good. Instead of ROI, you’re directly addressing a problem, and a problem that’s stressful and probably keeps people up at night.
The paper encourages the use of three formulas to track your platform’s value. You could use them to predict the platform’s ROI, but that would rely on you believing the input numbers you, uh, made up ahead of time.
Value to Cost Ratio (VCR): VCR = (Projected Value / Projected Costs) * 100.
Innovation Adoption Rate (IAR): IAR = ((Adoption in the current year - Adoption last year) / Adoption last year) * 100.
Developer Toil Ratio (DTR): DTR = (Total Time on Toil / Total Time on Feature Development) * 100.
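If you can collect the inputs, the arithmetic itself is trivial. Here’s a direct transcription of the paper’s three formulas in Python, with all the input numbers made up for illustration:

```python
# The three metrics from "Measuring the Value Of Your Internal Developer
# Platform Investments". Collecting real input numbers is the hard part.
def vcr(projected_value: float, projected_costs: float) -> float:
    """Value to Cost Ratio."""
    return projected_value / projected_costs * 100

def iar(adoption_now: float, adoption_last_year: float) -> float:
    """Innovation Adoption Rate: year-over-year growth in platform adoption."""
    return (adoption_now - adoption_last_year) / adoption_last_year * 100

def dtr(time_on_toil: float, time_on_features: float) -> float:
    """Developer Toil Ratio: toil time relative to feature-development time."""
    return time_on_toil / time_on_features * 100

print(vcr(1_200_000, 800_000))  # 150.0 -- $1.50 of value per $1 of cost
print(iar(60, 40))              # 50.0 -- adoption grew 50% year over year
print(dtr(10, 30))              # ~33.3 -- a third as much toil as feature work
```

Note that each formula is only as good as its inputs, which is exactly the problem discussed next.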
Here, you encounter one of the basic problems with any platform metrics: how do you collect those numbers?4
VCR - this is what most people are after with “ROI.” However, how do you figure out those numbers? Proving the “Projected Value” of a platform is the whole problem!
IAR - counting the apps on your platform versus all of the apps in your organization is achievable, more or less. People struggle with accurately counting IT assets: most people don’t trust what’s in their CMDB, let alone have one, or, worse, even know what a CMDB is. But, I think most people can do some helpful app counting. This metric is tracking how much your platform is used. It assumes that usage is beneficial, though, which, for me, de-links it from ROI.
DTR - this is the productivity metric and a good one. Collecting those two numbers, though, is tough. It’s probably best to stick with the “just ask the developers” method that DX encourages. That is, don’t waste your time trying to automate the collection of quantitative metrics, and instead survey developers to get their sentiment of “toil versus coding.” What I’d add to this is that you should also consider the OTR: Operator Toil Ratio. How much time are your operations people spending on toil versus more valuable things? In the context of platform engineering, this would be product managing the platform: talking with developers and adding in new features and services that help them and other stakeholders.
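The “just ask” approach can still feed the same formula. A sketch, with invented survey data: each response is a developer’s self-reported weekly split of hours between toil and feature work, and you aggregate across respondents. (The OTR variant would be the same calculation run against an operations-team survey.)

```python
# Made-up survey data: (toil_hours, feature_hours) per developer per week.
responses = [
    (10, 30), (15, 25), (5, 35), (20, 20),
]

def toil_ratio(pairs: list[tuple[float, float]]) -> float:
    """Aggregate toil ratio across all respondents, as a percentage."""
    total_toil = sum(t for t, _ in pairs)
    total_feature = sum(f for _, f in pairs)
    return total_toil / total_feature * 100

print(f"DTR: {toil_ratio(responses):.0f}%")  # 50 toil vs. 110 feature hours -> 45%
```

The point of surveying rather than instrumenting is that “toil” is a judgment call each developer makes, not something a timesheet system can classify for you.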
I like this paper, and I think it creates a good model for even thinking about making the case for a platform and doing some portfolio management of platform engineering. Linking up platform functions all the way up to business outcomes (the big chart above) is great, and in many cases just using that big chart to explain the role platforms play in the business is probably very helpful when you’re talking with the Director of No. If that chart grabs their attention, the next conversation is talking about each of the boxes, what they do, and why doing it in a platform engineering way is better, more reliable, and “cheaper” in the “do more with the same” sense.
The second model uses a large spreadsheet to track common developer activities, the cost of operations problems, and staff costs to show platform ROI. If you’re lucky, these large spreadsheets only have upwards of 50 numbers you need to input: salary, cost of hourly downtime, number of applications running on the platform, the benefits of improving apps, and so on.
Once you “plug in” all these numbers, a chart with two intersecting lines usually shows up: one line is cost, and the other is benefit. At first, you’re operating in the red with the cost line way up there. Within a year or two, the lines cross, and you’re profitable.
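A toy version of that two-lines chart, with all figures invented: accumulate cost and benefit per quarter and find where the benefit line overtakes the cost line.

```python
# Invented inputs for illustration; a real model has dozens of these.
upfront_cost = 600_000       # build/buy plus migration
quarterly_run_cost = 50_000  # ongoing platform team and infrastructure
quarterly_benefit = 150_000  # claimed productivity and efficiency gains

def breakeven_quarter(quarters: int = 20):
    """Return the first quarter where cumulative benefit >= cumulative cost."""
    cost, benefit = float(upfront_cost), 0.0
    for q in range(1, quarters + 1):
        cost += quarterly_run_cost
        benefit += quarterly_benefit
        if benefit >= cost:  # the lines cross: now "profitable"
            return q
    return None              # never breaks even in the window

print(breakeven_quarter())   # quarter 6, about a year and a half in
```

Notice how sensitive the crossing point is to `quarterly_benefit`, the one number in the model that’s pure projection.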
Gartner has a pretty good one for platforms which, of course, I can’t share. Here’s another example from Bartek Antoniak:
One line item I don’t see often is one-time-ish costs like the cost of migrating apps to the new platform and training people. Even cooler - but hard to quantify - would be the future cost of tech debt in the existing platform and app model.
Getting all of the input numbers is the problem, once again. How do you measure "increased speed of software delivery" and "mitigate security and compliance risk," or something like "optimize the use of infrastructure due to the self-service and developer portals"? And how do you trust those measurements, or even the more straightforward ones?
There's a good trick here though: if it's difficult to gather those numbers, chances are you have no idea what the ROI of your current platform is (the "do nothing" option when it comes to introducing platform engineering). I suspect this is how most organizations are. The Director of No is saying platform engineering is a bad idea...but has no idea how to quantify how well, or poorly, the current "platform" is doing.5
Filling out the giant ROI spreadsheet will probably drive how you think of and decide on platform ROI.6 Tactically, this means that you want to be the first one to introduce a complex model like this if you're in a competition to get a platform in place. This could be if you're battling internal competition (some other group has an opposing platform and/or the option is to do nothing), or you're a vendor selling a platform.
Whoever introduces the ROI model first gets to define the ROI model.
Like canonical ROI calculations, these models are also showing you return over time, usually in three-to-five-year terms. This can introduce an executive ownership problem. While the average tenure of CIOs is actually longer than most people joke about - four or five years, depending on the industry and geography - people move around on projects and within groups in IT.
A positive ROI model assumes you’ll see it through to the end without changing it. So, if the “owner” of the model has shifted and given ownership to someone else, you may not stick to the original plan. There’s also the chance that people will just forget what the point of the ROI model is and, more importantly, the plans that go with it. Pretty soon, you’re making new ROI models. A good test here is to see how quickly you can find the current ROI model (or “business case”) that you’re operating with.
Instead of making a template for your ROI spreadsheet, you can aggregate the outcomes from several organizations. You still have The Big Spreadsheet in the previous example, but the point of the aggregate ROI is to show that the platform has worked in other organizations. The aggregate ROI is trying to convince you that the platform benefits are real and achievable.
Vendors like using these, of course, aggregating their customers. We put one of these out recently, done by ESG.
As ever, the problem with using this type of ROI is getting your input numbers. However, I think aggregate ROIs are good for both figuring out a model and figuring out a baseline for what to expect. Because it’s based on what other organizations have done, you have some “real world” numbers to start with. When vendors do it, these types of studies often contain quotes and testimonials from those customers as well.
You can hire Forrester Consulting to do their “Total Economic Impact” studies. Here’s a very detailed one from 2019 on Pivotal Cloud Foundry (now called Tanzu Platform for Cloud Foundry, or tPCF for short). Because they do these for multiple vendors, it’d be cool if they somehow aggregated all the aggregates. And I wonder if they use the same models for the same technologies?
You notice how I typed Forrester Consulting? That’s because it’s not “Forrester the industry analysts you’re thinking of.” Because you’re commissioning people to work on these TEIs (and other aggregate ROIs), it’s easy to carelessly dismiss them as paid for.
Sure, there’s certainly selection bias in these studies - you don’t hire them to analyze an aggregate of failures. But, these aggregate ROIs are still useful for proving that the platform works. That old TEI report interviewed four companies and based their model and report on them, same for the newer one. As with all the ROI examples, here, the aggregate ROI is also showing you an ROI model for platforms.
Us vendors have an obvious use for these PDFs: to show that our stuff is great! If you’re not one of us vendors, and you’re using these kinds of ROIs to get past the Director of No, I’d suggest looking at PDFs from rival companies and doing a sort of “aggregate of aggregates.” You’re looking to:
Prove the concept of platform engineering and the worth of platforms.
Show that it’s achievable at similar organizations - it’s not just something that Google or Spotify can do while the “normals” can’t.
Establish a baseline for results - we need to achieve results like these four other companies for it to make sense.
Create/steal a model - as with the last two ROI models, just having a model to start with is useful.
All of this started because someone asked me to help them put together a developer survey to show the value of platforms. A couple years ago I helped get the developer toil survey out. That survey doesn’t really address the value of platforms. You could use it to track ongoing improvement in your development organization, but attributing that to platforms, AI, or just better snacks in the corporate kitchen isn’t possible. I’d still like to know good survey questions that platform engineers would send out to application developers to gauge ongoing value.
Logoff
That’s enough for today! I’m already late for a call (tangentially on this topic!) so I didn’t even proofread the above. NEWSLYLETTERSSSSS!
In my experience, “ROI” in these conversations is not as simple as the strict definition of Return on Investment. It’s not like the ROI of an investment, or even ROI on, say, moving physical infrastructure to virtual infrastructure, or moving on-premises apps to SaaS. Instead, as in this scenario, it’s something more like “convince me that we should change, using the language of money in an enterprise.” That’s why terms like “outcomes” and “value” are thrown around in ROI conversations. They add to the business bullshit poetry.
Before reading it, I had no idea this paper was sponsored by my work, VMware Tanzu. Fun!
There’s an interesting take on “efficiency” in this long pondering on why there’s now less ornamentation in architecture than in the past. In theory, since it’s cheaper to produce building ornamentation due to, you know, factories and robots, it should be cheaper to put them on buildings. And yet, we don’t! The author more or less says it’s due to fashion and fancy, driven by “a young Swiss trained as a clockmaker and a small group of radical German artists.” This is pretty amazing when used as an analogy to tech trends. Major shifts in tech usage can often seem irrational and poorly proved - you’re usually going from more functionality and reliability, to less functionality and reliability…because the developers think it’s cool, or just doing “resume driven development.”
DORA metrics also have this problem, especially when you scale up to hundreds, worse, thousands of applications. You’d think you could automate a lot of the basic collection, but there’s a certain - I don’t know - do metrics tell you what’s happening, or does measuring the metric make what’s happening happen? I’m not a quantum physicist or 20th century management guru, so I don’t know what I’m talking about, I guess.
There’s a related thing you can do when the Director of No doesn’t know the ROI for doing nothing. You can do an end-to-end mapping of how software goes from idea to production, mapping out a pipeline, value stream, flow: whatever. Often, very few people know every step that happens, let alone how long each step takes or the wait-time between each step. Coupled with a general feel that their app and ops team are not doing enough or “working smart” enough, this analysis often motivates them to do something different.
There’s that observer effect problem again!
Just links and wastebook today.
Spring Boot 3.3 Boosts Performance, Security, and Observability - All these years later, Spring is still in wide use and still evolving.
‘You are a helpful mail assistant,’ and other Apple Intelligence instructions - Not that many, but interesting.
Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025 - “At least 30% of generative AI (GenAI) projects will be abandoned after proof of concept by the end of 2025, due to poor data quality, inadequate risk controls, escalating costs or unclear business value, according to Gartner, Inc.” // Also a chart with rough estimates for initial and ongoing costs: input for an ROI model. // The other take here is that you need to start slow and small with enterprise AI: no one knows what will work, what good business cases are, what customers (and channels/suppliers/employees) will rebel against, etc.
Nike: An Epic Saga of Value Destruction - The risks of shifting to a direct, Internet-based go-to-market. And, also, of focusing only on growing revenue from existing customers instead of also getting new customers: 'Obviously, the former CMO had decided to ignore “How Brands Grow” by Byron Sharp, Professor of Marketing Science, Director of the Ehrenberg-Bass Institute, University of South Australia. Otherwise, he would have known that: 1) if you focus on existing consumers, you won’t grow. Eventually, your business will shrink (as it is “surprisingly” happening right now). 2) Loyalty is not a growth driver. 3) Loyalty is a function of penetration. If you grow market penetration and market share, you grow loyalty (and usually revenues). 4) If you try to grow only loyalty (and LTV) of existing consumers (spending an enormous amount of money and time to get something that is very difficult and expensive to achieve), you don’t grow penetration and market share (and therefore revenues). As simple as that… ' // A little more commentary here.
Where Facebook’s AI Slop Comes From - So cyberpunk! // “the YouTuber who was scrolling through images of rotting old people with bugs on them.” // Related: ‘we don’t need the term “slop”. Consumers have decided that “AI” in its entirety is bullshit.’
Teaching to the Test. Why IT Security Audits Aren’t Making Stuff Safer - Bullshit Work in enterprise security. // Plus, why not start with basics before going advanced: ‘The world would be better off if organizations stopped wasting so much time and money on these vendor solutions and instead stuck to much more basic solutions. Perhaps if we could just start with “have we patched all the critical CVEs in our organization” and “did we remove the shared username and password from the cloud database with millions of call records”, then perhaps AFTER all the actual work is done we can have some fun and inject dangerous software into the most critical parts of our employees devices.’
The Six Five: Advancing DevOps: Infrastructure as Code, Platform Engineering and Gen AI - “we see in our latest research that 24% of organizations are looking to release code on an hourly basis, yet only 8% are able to do so.”
Why Is Demand Marketing An Obstacle To Its Own Success? - ‘Too many marketing subfunctions (demand/ABM, field marketing, customer marketing, digital, and events) create strategies unique to their function, independent from the others. Marketers often say that they have a “unified” plan, but it’s more like a PowerPoint deck with “chapters” for each team’s individual plan. This approach prevents marketers from orchestrating programs to reduce overlap and waste, and what’s worse, it has a direct, negative impact on buyers and customers.’
The Prompt Warrior - Posts on prompts for all sorts of things.
fabric/patterns - Whole bunch of prompts on a range of topics, even D&D!
Why CSV is still king - We have a running joke on the podcast that every (enterprise) app needs CSV export. It’s only 10% a joke.
Epic corporate jargon alternatives - Poetic alternatives to business bullshit jargon.
2025 Demand and ABM Budget Planning Guide: Do Better With Less - Enterprise software marketing budgets mostly flat, if not less: “On the surface, it may appear as if most budgets are increasing, as 82% of global B2B marketing decision-makers report their budgets being increased by 1% or more. But once you adjust for inflation, it’s the same old story, as only 35% of organizations will see a real increase in budgets (with 31% of the 35% saying that the increases would be in the 5–10% range and 4% saying that their budgets would increase by 10% or more).”
This is in our small, neighborhood store. The larger ones have even more!
“Kerfulle.”
“gas station sushi” ROtL, 547.
This is a really well put together presentation, with good content. It manages to introduce one, simple idea (and notice how she returns to/reminds you of it at the end), and yet not make it TED-talk level surface-level-simple. It gives you practical things to do if you want to work on “developer productivity.” And, it’s a perfect example of giving a vendor-pitch that doesn’t seem like a vendor pitch (one of the chief skills of a thought leader): customer cases with ROI, mention of the product being sold, and even screenshot-demos! It’s also re-usable and mine-able for EBCs, sales meetings, etc.