Coté

Avoiding all the usual, boring app development problems with AI

Most of the generative AI applications we'll see in the coming years will be just new features added to existing applications. Even more pragmatically, simply improving how existing applications work will drive a lot of the AI benefits. When it comes to applications, this means we should manage AI like we would any other service, both in process and in how we run it. That's my prediction at least. I'm as enamored with AI as anyone else, trying out plenty of experiments and ravenously hungry for any real-world case studies that are more complex than chatbots, sophisticated search, and (re)writing.

The reality of AI roll-out in the enterprise is creeping in. That reality is: it'll be slower than we were promised. For example, Battery Ventures' recent enterprise tech spending survey says:

The AI wave is still building, but the future has been slower than anticipated. Today only 5.5% of identified AI use cases are in production, a sobering reality check on respondents’ Q1’24 projection that 52% of identified use cases would be in production over the next 24 months. 

As Benedict Evans put it, "the future can take a long time."

My point is not to dampen the enthusiasm, but to make the timeline more realistic and, thus, the chances of success much higher.

In enterprise software, sanity is valuable because it introduces stability and reliability. I'm not sure I want my bank working at break-neck speed, applying new ideas this way and that before figuring out what works and, even, what's valuable for me. As with any highly regulated business, I want to trust my bank, and that trust comes from knowing they're operating in a consistent, proven way. I want my bank to be a lot more sane than whatever services I use to share pictures of my sandwiches or listen to Yacht Rock.

There's years of AI-driven benefits in our future, especially in large organizations and businesses. AI stuff could, indeed, be a silver bullet: something that didn't previously exist that lets us solve problems faster and cheaper, and even better than what we previously had.

But, how you achieve those goals is likely immune to silver bullet dreams. This is especially true when it comes to applications. The practices of creating, evolving, and maintaining good applications will remain the same as they were in the pre-AI era. If we mess around with and ignore those practices, just as we found out when Kubernetes up-ended all the progress we made with PaaS, we'll have to start all over again, losing time, progress, expertise, and trust.

In the enterprise, just like the rest of the world, great ideas are a dime a dozen, in a good way: there's lots of options for improvement. In a large organization, three hurdles stand in the way of transforming good ideas into code and ultimately into production: (1) neglecting product/problem fit, (2) "politics" due to silo-defending cultures, and (3) fear of releasing due to lack of trust in resilience, reliability, and sometimes cost.

When enterprises try to spin up AI projects, these three barriers will come into effect just like they would for any new IT service. Let's look at each.

Garbage Chairs of Amsterdam, Karlsruhe, Germany edition.

Most IT projects fail because of neglected product/problem fit 

I'm sure you've used many applications and thought "do the people who build this software actually use it?" That's the first layer of the problem: the app is poorly designed and doesn't actually help the people using it do their job better. The second is that the application is just solving the wrong problem. "Wrong" sounds judge-y. What I mean there is that there were better problems to solve first. Or, more vexingly, better ways to solve the problem.

AI projects will have to solve for this product/problem fit too. Worse, that product management labyrinth is even more dangerous when it comes to hot new technologies because we get so torqued up that we lose sight of basics. We assume that these new silver bullets will remove the need for all that tedious process stuff we're currently putting up with.

I like how Jürgen Geuter put it recently:

[W]ith every new technology we spend a lot of money to get a lot of bloody noses for way too little outcome. Because we keep not looking at actual, real problems in front of us - that the people affected by them probably can tell you at least a significant part of the solution to. No, we want a magic tool to make the problem disappear. Which is a significantly different thing than solving it. Because actually solving a problem includes first fully understanding the reasons for the existence of the problem, reasons that often come back to an organizations’ structure, their – sometimes inconsistent – goals and their social structure in addition to actual technical issues.

The antidote for this is consistently applying product management. Well, for most enterprises, applying it at all.

Product management brings a cool-headed approach to all that "magic." The good news is that product management is one of the more understood, more developed and real-world tested software practices out there. And, if you follow the frequent weekly or fortnightly release cycles that people who follow the Tanzu Labs model do, you introduce a data-driven method of planning and improving your apps. I've seen this done at numerous large enterprises. You can read more about it in my book Monolithic Transformation and the Tanzu Labs Product Manager Playbook.

5 Türen, Gerhard Richter, 1967.

Culture

To me, "politics" is whatever noise, sandbagging, and (selfish) behavior people in an organization do that prevents you from doing "the right thing." No one likes "politics": it's always bad. Otherwise we'd just call it "work." It's grit in the system, often put there by other people in your organization who want to protect what they have, hoard budget and the rewards that come with success.

For the most part, it's not so much the grit-throwers' fault as management's fault for setting up and sustaining the system that works that way. People are rational: they figure out "the game" and play it.

When it comes to initiatives like the magic of AI, politics shows up in maximum force. People either want to hoard the benefits and attention, or they want to defend themselves. Is it taking a long time to get access to the infrastructure you need to run AI models? That's probably politics. Are you trying to find time for yet another meeting to get access to the manufacturing quality data you need? I bet it's "politics."

The fear that AI will eliminate jobs will compound this problem even more: people will be reluctant to give you access to their data to automate analysis because "then, what will I do?" After all, employees rarely get the benefits of "productivity": speeding up work by 20% doesn't mean workers now get Fridays off.

There's absolutely nothing about the magic of AI that will solve these culture problems. To adapt what one of my team-mates once said: "AI will not fix your broken culture."

Solving culture problems is difficult, but, again, we have decades of experience learning what works and doesn't work. There's a lot of "it depends'ing" to it, yes. But, as with product management, that's why you start with a system and rigorously apply it.

Lack of trust in process and platforms

I mentioned that I want my bank to be sane. This is because, you know, they have all my money. If that money just disappeared overnight, my life would get very difficult. It'd be a major bummer!

The financial sector hasn't exactly behaved rationally during my years as an adult, and there's a whole generation of people who were born into bad financial times. That is, banks have earned that need for trust. 

Similarly, IT has earned a need for trust. Just as with banking, in aggregate, things are fine. I still have all my money after all these years! And, IT keeps running businesses. But, when big problems happen - delays, growing budgets, ineffective apps, and security breaches - all of IT gets blamed. People forget how valuable almost all IT is. In short, IT needs to constantly win and maintain the business' trust.

When it comes to software, this lack of trust feeds delays by adding in endless governance and process. But it also injects fear into the people doing the work. If things go poorly, if we're not 100% correct and successful, we will be punished. For applications, this means you have a bias towards taking fewer risks. For organizations, this means you're afraid of releasing your apps.

It all comes together in a cultural malaise - a fog of timidity and lethargy that stifles innovation, slows down decision-making, and encourages risk-averse behavior. Here again, the people are acting rationally within the system that management has built and continues to maintain, if only through neglect.

In applications, the first step to solving this malaise is to get a reliable, proven platform in place, changing how operations works to be the product managers of that platform. The second step is to change how developers write applications to take advantage of that platform.

This is where platforms and platform engineering come in. I'm, you know, reluctant to hand over the torch from DevOps to platform engineering - the PE crew has done a good enough job just swiping it - but platform engineering has a lot going for it when it comes to building trust. Of course, you need a platform too - culture without technology is just delightful conversations.

What I'm talking about here is a baseline of trust in application development and platforms, nothing to do with AI. But, AI will need that same amount of trust. And that trust will come from being "just another service" in a reliable platform. You're not going to want AI to exist on its own as a weird service held together by newly typed-up Python; you'll want it supported by platforms relying on the same old, boring but "enterprise grade" components that run the world day in and day out.

This means putting in place the SRE-smells in platform engineering, the product management in platform engineering to make sure the AI services are useful and used by developers, and then the actual platform stack itself, hopefully one that you didn't just paste-pot together out of parts.

People probably trust AI now because it seems like magic, because they haven't used it day-to-day to know how finicky it is, and because not many AI apps have been put into production yet. But, as with any new software, the more it's used, the more you'll notice the failures and start to build up mistrust.

We know how to run, program with, govern, and secure any old type of "service," and I don't think there'll be much different with AI services if we treat them as such. You need an AI platform strategy.

How to eat silver bullets

Does this seem too slow? Unrealistic in the face of boards and CEOs hungry for those AI silver bullets? It can be done, and we see it done over and over. You can achieve those stupendous results, but your biggest problem is going to be figuring out what exactly to do and how to organize your teams to do it.

If you have those parts right, you've got a good chance of getting that AI magic. You'll certainly have a better chance than people who follow the fire-aim-ready approach to strategy. It may seem contradictory, but the most important thing is to start right now: assess your software capabilities. Do you have product managers? Are you able to release software every week or two to actual users? Can you gather feedback about what works and doesn't work so you can refine your app? Maybe you can, but most organizations don't have the basics of that loop in place.

Adding product-thinking and frequent releases to your organization and culture takes a long time, but it will take even longer if you don't start. Put that in place, and then you'll be on your way. Set realistic expectations and then you'll benefit from another theory of applying IT magic to your business: "Most people overestimate what they can do in one year and underestimate what they can do in ten years."

I’ll be in two talks on these topics next month at Explore in Barcelona, November 4th to 7th.

I’m really interested in a panel I’m moderating with a platform engineer from Mercedes-Benz, Benni Miano, and what I’m going to call our chief AI architect (in Tanzu-land), Adib Saikali. I’ve been really bored with enterprise AI for a while, and talking to those two has gotten me interested in it again. Adib had some interesting thoughts on roles and responsibilities in the enterprise AI stack.

There’s another one on platform contracts that I’m helping out with. It’s based on some work that we’ve done with NatWest to get developers to shift to cloud native applications - that whole point above of changing how your developers work to take advantage of your platforms.

And, of course, there’ll be all sorts of talk about how we think about adding AI to your platforms. Plus, Barcelona in November - what’s not to like! Check out the two talks, and tell me if you’re coming.

Logoff

I’ll save up the links and weird finds for next time. If you’re into my experiments using generative AI to play solo Dungeons & Dragons, here’s a new technique I’m working on and recommend.

@cote@hachyderm.io, @cote@cote.io, @cote, https://proven.lol/a60da7, @cote@social.lol