Large-scale application modernization with Rohit Kelapure

Whatever you want to call it, “legacy” software is a problem. In one of our recent surveys, 76% of executives said they are too invested in legacy applications to change how they do software. It can seem hard, but fixing that blocker is possible. As with all things in software, there is no quick fix; it just takes discipline, work, and time. In this episode, Coté talks with VMware Tanzu’s Rohit Kelapure, who’s been working in application modernization for years. He goes over the initial portfolio analysis and thinking that the Pivotal Labs application modernization teams walk customers through.

Transformation case study

Jana Werner and Barry O’Reilly have a great case study and commentary on transformation at a couple of organizations, primarily a bank. I was lucky enough to get Jana on for an interview on Software Defined Talk, talking about the case, information theory, and Nietzsche. It should be in the podcast feed next week.

Here are some of my highlights:

  • “In one Financial Services organization we’ve been able to implement an increase to contactless card payment limits in days where the previous increase took months. We stood up a work from home solution for call centre staff, automated processes and relieved pressure on teams having to manually capture customer data over the phone in 3.5 weeks—a record for service delivery and a credit to the teams and individuals who made this possible.”
  • Focus on the outcome, not following the process, or working a lot: “The subtle yet powerful shift to outcome-based measures of success — reducing and resolving customer issues — over traditional output-based deliverables of being on time, budget and scope had a pronounced impact on productivity and employee satisfaction.”
  • When the meeting (the bureaucracy) becomes the product: “Slide decks, paper proposals and steering group sessions all take a significant investment to prepare, avoiding “difficult” conversations by socializing and re-socializing in advance of exec meetings, deferring decisions, requesting a raft of meeting minutes to document, correcting, amending and signing them off—the majority of which few people read.”
  • This affects how the entire organization runs and, indeed, the pace of delivery: “The speed of these cycles determines the heartbeat of the organization.”
  • Management starts asking different questions: “Responses can now shift post-release from ‘How could you have got this wrong?’ to ‘What is our next best action?’ Seeing how shipping smaller slices allows us to iterate, also means leaders can set direction and monitor metrics over setting targets and failing anything but perfect results.”
  • Often, management has failed to build a system (vision, strategy, norms, “culture,” etc.) that makes staff’s job clear. That’s a problem! “Test your strategy cascade methods to maintain the clear purpose, problems and outcomes teams are working towards. Ask teams if there’s a clear line of sight between the objectives, and how their work contributes to achieve your shared success. If they can’t see it, fix it. Maximising both organizational alignment and autonomy can be the biggest accelerant for sustainable pace, employee engagement and amazing customers experiences.”
  • And: “The biggest hurdle to change is people not believing it’s possible.”

There’s a lot of good in there. Check out the case study.

How Kubernetes adds agility in challenging times

New article of mine:

The IT outcomes of Kubernetes are clear: 95% of businesses report clear benefits from adopting Kubernetes, with more efficient resource utilization and shorter software development cycles amongst the top benefits cited. The benefits don’t stop with the IT team, though. In an era where IT mostly determines competition and growth, the more agile the technology at the heart of a business, the greater the agility of the business overall. So, the business case for Kubernetes is clear. To those of us in IT, at least. It makes sense that empowering development teams to do more in less time has clear benefits for businesses on paper. But given the inherent complexity of Kubernetes, the way in which these benefits actually manifest may not be so clear for those outside the IT department, particularly in the early stages of implementing the technology. Here, we look at why the tangible outcomes Kubernetes can provide across a business are worth overcoming the initial challenges it may present, and what it means in the context of global events many organisations have been faced with in 2020 so far.

Read the rest!

Straddling the firewall: cloud from 2010 to 2020 (& what to do next)

I gave the ten-year anniversary talk at the CloudAustin meetup. Ten years ago, I gave the first talk. Here’s a bit of the essay I wrote out as I was working on the presentation.

I want to go over the last ten years of cloud, since I gave the first talk at this meetup, back in August 2010. At the time, I was wrapping up my stint at RedMonk, though I didn’t know that until a year later. I went to work at Dell in corporate strategy, helping build the software and cloud strategies and businesses. I went back to being an analyst at 451 Research, where I ran the software infrastructure team, and then, thanks to my friend Andrew Shafer, ended up where I am now: at Pivotal, now VMware. I also have three kids. And a dog. And live in Amsterdam.

Staying grounded.

Beware, I still have to pay my bills

Pic: Kadumago, Nov 2019.

You should know that I have biases. I have experiences, ideas, even “facts” that come from that bias. I work at VMware – “VMware Tanzu” to be specific, which is VMware’s focus on developers and the enterprise software they write. I’ve always worked on that kind of thing, and often in the interests of “on-premises” IT.

Indeed, at the end, I’m going to tell you that developers are what’s important – in fact, that Tanzu stuff is well positioned for the future I think should exist.

So, you know…whatever. As I used to say at RedMonk: “disclaimer.”

Computers from an outlet

Back in 2013, when I was at Dell, I went on an analyst tour in New England. Matt Baker and I visited IDC, Gartner, Forrester, and some smaller shops in hotel lobbies and fancy restaurants. These kinds of analyst meetings are a bit dog and pony, but they give you a survey of what’s going on and a chance to tell analysts what you think reality looks like.

Anyhow, we walked into the Forrester office, a building that was all brand new and optimistic. Some high-level person there was way into guitars and, I don’t know, ’70s rock. Along with Forrester-branded ballpoint pens, they gave us The Allman Brothers’ At Fillmore East CDs as thank-you gifts. For real. (Seriously.) Conference room names were all rock-and-roll. They had an electric guitar in the lobby. Forrester: not the golden-buttoned boating jackets of Gartner, or the short-sleeved button-up white shirts of Yankee-frugal IDC.

Being at Dell, we were interested in knowing one thing: when, if ever, and at what rate the on-premises market would succumb to public cloud. That’s all any existing vendor wanted to know in the past decade. That’s what that decade was all about: exploring the theory that public cloud would take over on-premises IT.

The Forrester people obliged. The room was full of about 8 or 10 analysts, and the Forrester sales rep. When a big account like Dell comes to call, you bring a big posse. Almost before Matt could finish saying the word “cloud,” a great debate emerged between two sides of analysts. There were raised voices, gesticulating, and listing back and forth in $800 springy office chairs. All while Matt and I sort of just sat there listening, like a call-center operator who uses the long waits for people to pick up the phone as a moment of silence and calm.

All IT will be like this outlet here, just a utility, one analyst kept insisting. No, it’ll be more balanced, both on-premises and in cloud over time, another said. I’d never encountered an analyst so adamant about public cloud, esp. in 2013. It seemed like they were about to spring across the table and throttle their analyst opponent. I’m sure time ran out before anyone had a chance to whiteboard a good argument. Matt and I, entertained, exchanged cards, shook hands, and left with our new Allman Brothers album.

Public cloud has been slower to gobble up on-premises IT than fanatics thought it would be, even in the late 2000s. The on-premises vendors deployed an endless amount of FUD-chaff against moving to public cloud. That fear of the new and unknown slowed things down, to be sure.

But I think more practical matters are what keep on-premises IT alive, even useful. We used to talk about “data gravity” as the thing holding back migrations to public cloud. All that existing data can’t be easily moved to the cloud, and new apps will just be throwing off tons of new data, so that’ll just pull you down more. (I always thought, you know, that you could just go to Costco and buy a few hard drives, run an xcopy command overnight, and then FedEx the drives to The Cloud. I’m sure there’s some CAP theorem problem there – maybe an Oracle EULA violation?) But as I’ll get to, I think there’s just tech debt gravity – probably 5 to 10 million applications[1] that run the world and just can’t crawl out of on-premises datacenters. These apps are just hard to migrate, and sometimes they work well enough at good enough cost that there’s no business case to move them.

Public cloud grows and grows

But, back to public cloud. If we look at it by revenue, it’s clear that cloud has been successful and is used a lot.

[Being respectful of analyst’s work, the reader is asked to open up Gartner’s Nov 13th press release, entitled “Gartner Forecasts Worldwide Public Cloud Revenue to Grow 17% in 2020” and look at the table there. You can find charts as well. Be sure to add up the SaaS-y categories into just one.]

As with all such charts, this is more a fanciful illustration of our instincts. Charts are a way to turn hunches into numbers, transform the qualitative into the quantitative. You can use an even older forecast (also, see the chart…and be sure to take out “advertising,” obviously) to go all the way back to 2010. As such, don’t get hung up on the exact numbers here.

The direction and general sizing in these lines is what matters. SaaS is estimated at $176 billion in 2020, PaaS at $39.7 billion, and IaaS at $50 billion.

So. Lots of revenue there. There are a few things to draw from this chart – again, hunches that we can gussy up into charts:

  1. The categorization of SaaS, PaaS, and IaaS stuck. This was finalized in some NIST work, and if you can believe it, categorizing things like this drove a lot of discussion. I participated in at least three of those exercises, at RedMonk, Dell, and then at 451 Research. Probably more! There was a lot of talk about “bursting” and “hybrid cloud.” Traditional IT, versus private cloud. Is it “on-prem” or “on-premises” – and what does it say of your moral character if you incorrectly use the first? Whatever. We returned to the simplicity of apps, devs, and ops.
  2. SaaS is sort of “forgotten” now as a cloud thing. It looms so large that we can’t even see it when we narrow our focus on the other two parts of cloud. Us in the dev and ops crowds just think of SaaS as normal now, not part of this wild, new category of “cloud.” Every day, maybe even every hour, we all use SaaS – the same is true for enterprises. Salesforce’s revenue went from $1.3bn in 2010 to $17.1bn in 2020. We now debate Office 365 vs. GMail, never Exchange. SaaS is the huge winner. Though it’s sort of not in my interests, when people talk about cloud and digital transformation, I tell them that the most useful thing they should probably focus on is just going all SaaS. You see this with all the remote working stuff now – all those firewalls and SharePoints on intranets are annoying, and supporting your entire company working from home requires the performance and scaling of a big-time SaaS company. SaaS is what’s normal, so we don’t even think about it anymore.
  3. Below that is PaaS. The strange, much-maligned layer over the decade. We’re always optimistic about PaaS, as you can see in the newer forecast. It doesn’t seem to deliver on that optimism, at least in the mainstream. I certainly hope it will, and think it should. We’ll see this time. I’ve lived through enough phases of PaaS optimism and re-invention (well, all of them, I suppose) that I’m cautious. To be cynical to be optimistic, as I like to quip: it’s only stupid until it works. I’ll return to the glorious future of PaaS in a bit. But first…
  4. Below that, we have IaaS – what us tech people think of mostly as cloud. This is just a big pool of hardware and networking. Maybe some systems management and security services, if you consider that “infrastructure.” I don’t know. Pretty boring. However, if you’re not buying PaaS, this is the core of what you’re buying: just a new datacenter, managed by someone else. “Just” is insulting. That “managed by someone else” is everything for public cloud. You take this raw infrastructure, and you put your stuff on it. Forklift your applications, they say, deploy new applications. This is your “datacenter” in the cloud and the way most people think about “cloud” nowadays: long rows of blinking lights in dark warehouses, sometimes with rainbow-colored pipes.

IT can’t matter fast enough

Let’s get back to the central question of the past decade: when will public cloud eclipse on-premises IT?

First, let’s set aside the nuance of “private cloud” versus “traditional IT” and just think of it all as “on-premises.” This matters a lot because the nature of the vendors and the nature of the work that IT does change if you build and manage IT on your own, inside the firewall. The technology matters, but the responsibility and costs of running and maintaining it year after year, decade after decade turn into the biggest, er, headache. It’s that debate from the Forrester people: when will IT become that wall outlet that Nicholas Carr predicted long ago? When will IT become a fungible resource that people can shed when all those blinking lights start holding back their business ambitions?

What we were hunting for in the past ten years was a sudden switchover like this, the complete domination of mobile over PCs:

The Dediu Cliff.  Source: “The rise and fall of personal computing,” Jan 2012, Horace Dediu.

This is one of the most brilliant and useful strategy charts you’ll ever see. It shows how you need to look at technology changes and market share shifts. Markets are changed by a new entrant that’s solving the same problems for customers, the jobs to be done, but in a different way that’s ignored by the incumbents.[2] This is sort of big “D” Disruption theory, but more inclusive. Apple isn’t an ankle-biter slowly scaling up the legs of Microsoft; they’re a seasoned, monied incumbent, just “re-defining” the market.[3]

Anyhow, what we want is a chart like this for cloud so that we can find when on-premises crests and we need to start focusing on public cloud. Rather, a few years before that, so we have plenty of time to invest and shift. When I did strategy at Dell, Seth Feder kept up this chart for us. He did great work – he was always talking about how good the “R squared” was. I still don’t know what that means, but he seemed happy with it. I wish I could share his charts, but they’re lost to Dell NDAs and shredders. Thankfully, IDC has a good enough proxy: hardware spend on both sides of the firewall:

[The reader is asked to open IDC’s April 2nd, 2020 press release titled “Cloud IT Infrastructure Spending Grew 12.4% in the Fourth Quarter, Bringing Total 2019 Growth into Positive Territory, According to IDC” and contemplate the third chart therein.]

You can’t use this as a perfect guide – it’s just hardware, and, really, can you tell when hardware is used for “private cloud” versus “traditional IT”? And, beyond IaaS, we’d like to see this for the other two aaS’s: applications and developers. If only we had Seth still toiling away on his charts for the past decade.

But, once again, a chart illustrates our hunch: cloud is Hemingway’s bankruptcy thing (gradually, then suddenly), slowly racing towards a Dediu Cliff. We still don’t know when on-premises compute will suddenly drop, but we should expect it…any year now…

…or decade…

…I guess.  

¯\_(ツ)_/¯

Pre-cliff jumpers

Competition in cloud was fierce. Again, I’m leaving out SaaS – not my area, don’t have time or data. But let’s look at infrastructure, IaaS.

[The reader is asked to open up the 2010 IaaS MQ and the 2020 IaaS MQ.]

It’s worth putting these charts side-by-side. They’re Gartner Magic Quadrants, of course. As with all charts, you can hopefully predict me saying, they illustrate our intuitions. What’s magical (yes!) about the MQs is that they show a mixture of sentiment, understanding, and actual feature set. You can see that in play here as we figured out what cloud was.

Infamously, the first IaaS MQ in 2010 has Amazon in the lower right, and a bunch of “enterprise grade” IaaS people up and to the right. Most of us snickered at and were confused by this. But that 2010 list and ranking reflected how people, esp. big corporate buyers, were thinking about what they wanted cloud to be. They wanted it to be like what they knew and were certain they needed, but run by someone else, with that capex-to-opex pixie dust people used to obsess so much about.

Over the next ten years, everyone figured out what public cloud actually was: something different, more or less. Cloud was untethered from those “enterprise grade” expectations. In fact, most companies don’t want “enterprise grade” anymore; it’s not good enough. They want “cloud grade.” Everyone wants to “run like Google,” do DevOps and SRE. Enterprise buyers are no longer focused on continuing with what they have: they want something different.

All those missing dots are the vendors who lost out. There were many, many reasons. The most common initial one was, well, “server hugging”: just a biased belief in on-premises IT because that’s where all the vendors’ money had always come from. People don’t change much after their initial few decades, enterprises even less.[4]

The most interesting sidebars here are Microsoft and Rackspace. Microsoft shed its Windows focus and embraced the Linux and open source stack of cloud. Rackspace tried a pre-Kubernetes Kubernetes you kids may not remember: OpenStack. I’m hoping one day there’s a real, raucous oral account of OpenStack. There’s an amazing history in there that we in the industry could learn from.

But, the real reason for this winnowing is money, pure and simple.

Disruptors need not apply

You have to be large to be a public cloud. You have to spend billions of dollars, every year, for a long time. Charles Fitzgerald has illustrated this over the years:

It costs a lot to save you so much. Source: “Follow the CAPEX: Cloud Table Stakes 2018 Edition,” Charles Fitzgerald, February 2019.

Most of the orange dots from 2010 just didn’t want to do this: spend billions of dollars. I talked with many of them over the past ten years. They just couldn’t wrap their heads around it. We didn’t even know it cost that much, really. Instead, those orange dots fell back on what they knew, trying to differentiate with those “enterprise grade” features. Even if they wanted to and did try, they didn’t have the billions in cash that Amazon, Microsoft, and Google had. Disruption is nice, but an endless cash-gun is better.

I mean, for example, what was Dotcloud going to do in the face of this? IBM tried several times, had all sorts of stuff in its portfolio, even acquiring its way in with SoftLayer – but I think they got distracted by “Watson” and “Smart Cities.” Was Rackspace ever going to have access to that much money? (No. And in fact, they went private for four years to re-work themselves back into a managed service provider, but all “multi-cloud” now.)

There are three public clouds. The rest of us just sell software into that. That’s, more or less, exactly what all the incumbents feared as they kept stacking servers in enterprise datacenters: being at the whim of a handful of cloud providers, just selling adornments.

$80bn in adornments

Source: “Investing City” on Seeking Alpha, originally from Pivotal IPO investor presentation.

While the window for grabbing public cloud profits might have closed, there’s still what you do with all that IaaS, how you migrate your decades of IT to and fro, and what you do with all the “left-overs.” There’s plenty of mainframe-like, Micro Focus-y and Computer Associates types of revenue to eke out of on-premises, forever.

Let’s look at “developers,” though. That word – developers – means a lot of things to people. What I mean here is the people writing all those applications that organizations (mostly large ones) write and run on their own. When I talk about “developers,” I mean whatever people are in charge of writing and running an enterprise’s (to use that word very purposefully) custom-written software.

Back when Pivotal filed to IPO in March of 2018, we estimated that the market for all of that would be $80.4bn, across PaaS and on-premises.

This brings us back to PaaS. No one says “PaaS” anymore, and the phrase is a bit too narrow. I want to suggest, sort of, that we stop obsessing over that narrow definition, and instead focus on enterprise developers and in-house software. That’s the stuff that will be installed on, running on, and taking advantage of cloud over the next ten years. With that wider scope, an $80bn market doesn’t seem too far-fetched.

And it’s real: organizations desperately want to get good at software. They’ve said this for many years, first fearing robot dogs – Google, Amazon, Airbnb, Tesla…whatever. After years of robot dog FUD, they’ve gotten wiser. Sure, they need to be competitive, but really, modernizing their software is just table stakes now to stay alive and grow.

What’s exciting is that organizations actually believe this now and understand software enough to know that they need to be good at it.

Source: “Improving Customer Experience And Revenue Starts With The App Portfolio,” Forrester Consulting, commissioned by VMware, March, 2020.

We in IT might finally get what we want sometime soon: people actually asking us to help them. Maybe even valuing what we can deliver when we do software well.

Beyond the blinking cursor

Obsessing over that Dediu Cliff for cloud is important, but no matter when it happens, we’ll still have to actually do something with all those servers, in the public cloud or our own datacenters. We’ve gotten really good at building blinking cursor boxes over the past ten years.

IT people build things: developers write code, operations people put together systems. They also have shaky budgets – most organizations are not eager to spend money on IT. This often means IT people are willingly forced to tinker with building their own software and systems rather than purchasing and reusing others’.[5] They love building blinking cursor boxes instead of focusing on moving pixels on the screen.

A blinking cursor box is yet another iteration of the basic infrastructure needed to run applications. Applications are, of course, the software that actually moves pixels around the screen: the apps people use to order groceries, approve loan applications, and other thrilling adventures in computing. Applications are the things that actually, like, are useful…the things we should focus on. But, instead, we’re hypnotized by that pulsing line on a blank screen.

Us vendors don’t help the situation. We compete and sell blinking cursor boxes! So we each try to make one, constantly. Often a team of people makes one blinking cursor box, gets upset at the company they work for, grabs a bunch of VC money, and then goes and makes another blinking cursor box. So many blinking cursors. The public clouds are, of course, big blinking cursor boxes. There was a strange time when Eucalyptus tried to fight this trend and do a sort of Turing Test on Amazon’s blinking cursor. That didn’t go well for the smaller company. Despite that, slowly, slowly, us vendors have gotten closer to a blinking cursor layer of abstraction. We’re kind of controlling our predilections here. It seems to be going well.

Month 13: now, the real work can begin.

Over the past ten years, we’ve seen many blinking cursor boxes come and go: OpenStack, Docker, “The Datacenter of the Future,” and now (hopefully not going) Kubernetes. (There was also still virtualization, Windows, Linux, and mainframes. Probably some AS/400s if we dig around deep enough.) Each of these new blinking cursor boxes had fine intentions to improve on the previous ones. These boxes are also open source, meaning that, theoretically, IT people in organizations could download the code, build and configure their own blinking cursor boxes, all without having to pay vendors. This sometimes works, but more often than not what I hear of are 12-month or longer projects to stand up the blinking cursor box du jour…that don’t even manage to get the cursor blinking. The screen is just blank.

In larger organizations, there are usually multiple blinking cursor box programs in place, doubling, even tripling that time and money burn. These efforts often fail, costing both time and millions in staff compensation. An almost worse effect is when one or more of the efforts succeeds, kicking off another year of in-fighting between competing sub-organizations about which blinking cursor box should be the new corporate standard. People seem to accept such large-scale, absurdly wasteful corporate hijinks – they probably have more important things to focus on like global plagues, supply chain issues, or, like, the color palette of their new logo.

As an industry, we have to get over this desire to build new blinking cursor boxes every five or so years, both at vendors and enterprises. At the very least we should collaborate more: that seems to be the case with Kubernetes, finally.

Even in a world where vendors finally standardize on a blinking cursor box, the much more harmful problem is enterprises building and running their own blinking cursor boxes. Think, again, of how many large organizations there are in the world – F500, G2,000, whatever index you want to use. And think of all the time and effort put in for a year to get a blinking cursor box (now, probably Kubernetes) installed from scratch. Then think of the need in three months to update to a new version – six months at the longest, or you’ll get trapped like people did in the OpenStack years, and next thing you know, you’re running a blinking cursor box from the Victorian era. Then think of the effort to add in new databases and developer frameworks (then new versions of those!), security, and integrations to other services. And so on. It’s a lot of work, duplicated at least 2,000 times – more when you include those organizations that allow themselves to build competing blinking cursor boxes.

Obviously, working for a vendor that sells a blinking cursor box, I’m biased. At the very least, consider the costs over five to ten years of running your own cloud, essentially. Put in the opportunity cost as well: is that time and money you could instead be spending to do something more useful, like moving pixels around on your customer’s screen?

Once you free up resources from building another blinking cursor box, you can (finally) start focusing on modernizing how you do software. Next: one good place to start.

Best practices, do them

As I’ve looked into how organizations manage to improve how they do software over the years, I’ve noticed something. It’ll sound uselessly simple when you read it: organizations that are doing well follow best practices and use good tools. Organizations that are struggling don’t. This is probably why we call them best practices.

Survey after survey of agile development usage will show this, every year. Simple practices and tools like unit testing are widely followed, but adherence to other practices quickly falls off. How many times have you heard people mention a “sit-down stand-up meeting,” or say something like “well, we don’t follow all the agile practices we learned in that five-day course – we adapted them to fit us”?

I like to use CI/CD usage as a proxy for how closely people are following best practices.

One of the most important tools people have been struggling to use is continuous integration and continuous delivery (CI/CD[6]). The idea that you can automate the drudgery of building and testing software – continuous integration – is obviously good. The ability to deploy software to production every week, if not daily, is vital to staying competitive with new features and getting fast feedback from users on your software’s usefulness.
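
To make the mechanics concrete, here’s a toy sketch of the stages a CI/CD pipeline runs on every commit. The make targets are stand-ins for whatever builds and deploys your app – real pipelines live in tools like Jenkins, GitHub Actions, or Concourse, but the shape is the same:

```python
#!/usr/bin/env python3
"""Toy sketch of the stages a CI/CD pipeline automates on every commit."""
import subprocess
import sys


def run_stage(name: str, command: list[str]) -> None:
    """Run one stage and fail fast, so a broken change never moves forward."""
    print(f"--- {name} ---")
    if subprocess.run(command).returncode != 0:
        sys.exit(f"{name} failed: fix it before this change goes any further.")


# The make targets here are assumptions; substitute whatever builds your app.
run_stage("build", ["make", "build"])               # continuous integration:
run_stage("unit tests", ["make", "test"])           # build and test every commit
run_stage("deploy to staging", ["make", "deploy-staging"])
run_stage("deploy to production", ["make", "deploy-prod"])  # continuous deployment
```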

Very early on, if you’re not doing CI/CD, your strategy to improve how you’re doing software – to progress with your cloud strategy, even – is probably going to halt. It’s important! Despite this, for the past ten-plus years, usage has been poor:

Source: State of Agile Surveys, 3rd through 14th, VersionOne/CollabNet/digital.ai. CI/CD not tracked in 5th/2009. Over the years, definitions change, “delivery” and “deployment” are added; but, these numbers are close enough to other surveys to be useful. See more CI/CD surveys: Forrester survey (2019), DZone CD reports (2014, 2015, 2016, 2017, 2019).

Automating builds and tests with continuous integration is clearly easier (or seen as more valuable?) than continuous delivery. And there’s been an encouraging rise in CD use over the past ten years.

Still, these numbers are not good. Again, think of those thousands of large organizations across the world, and then that half of them are not doing CI, and then that 60% of them are not doing CD. This seems ludicrous. Or, you know, great opportunity for improvement…and all that.

Take a look at what you’re doing – or not doing! – in your organization. Then spend time to make sure you’re following best practices if you want to perform well.

Stagnant apps in new clouds

Over the past ten years, many discussions about cloud centered around technologies and private or public cloud. Was the cloud enterprise grade enough? Would it cost less to run on premises? What about all my data? Something-something-compliance. In recent years, I’ve heard less and less of those conversations. What people worry about now is what they already have: thousands and thousands of existing applications that they want to move to cloud stacks.

To start running their business with software, and start innovating how they do their businesses, they need to get much better at software: be like the “tech companies.” However, most organizations (“most all,” even!) are held back by their existing portfolio of software: they need to modernize those thousands and thousands of existing applications.

Modernizing software isn’t particularly attractive or adventurous. You’re not building something new, driving new business, or always working with new technologies. And when you’re done, it can seem like you’ve ended up with exactly the same thing from the outside. However, management is quickly realizing that maintaining the agility of their existing application portfolio is key if they want future business agility.

In one survey, 76% of senior IT leaders said they were too invested in legacy applications to change. This is an embarrassing situation to be in: no one sets out to be trapped by the successes of yesterday. And, indeed, many of these applications and programs were probably delivered on the promise of being agile and adaptable. And now, well, they’re not. They’re holding back improvement and killing off strategic optionality. Indeed, in another survey, on Kubernetes usage, 49% of executives said integrating new and existing technology is the biggest impediment to developer productivity.[7]

After ten years…I think we’ve sort of decided on IaaS, on cloud. And once you’ve got your cloud setup, your biggest challenge to fame and glory is modernizing your applications. Otherwise, you’ll just be sucking up all the same, old, stagnating water into your shiny new cloud.


[1] This is not even an estimate, just a napkin figure. AirFrance-KLM told me they’re modernizing over 2,000 applications. Let’s take the so called Fortune 500, and multiply it out: 500 x 2,000 = 1,000,000. Now, AirFrance-KLM is a big company, but not the biggest by far. A company like JPMC has many more applications. Also, there are governments, militaries, and other large organizations out there beyond the F500. Or you could have started with the Global 2,000. So, let’s assume that there are “millions” of apps out there. (Footnote to footnote: what is an “app”? If you’re asking this, let’s call it a “workload” and see if that satisfies you enough to go back to the main text.)

[2] Yes, yes – this isn’t strictly true. Taking out costs can change markets because you’ve freed up so much cash-flow, lowered prices, etc. There’s “globalization.” Regulations can dramatically change things (AT&T, unbundling, abdicating moral responsibility in publishing, etc.). Unexpected, black swans can destroy fragile parts of the market, table-flipping everything. And more what have you’s &co.’s. Thank you for your feedback; we appreciate your business; this call will now disconnect.

[3] For more on how to use this kind of chart in the strategy world, see Rita McGrath’s books, most recently, Seeing Around Corners.

[4] Some years ago, it was popular to cite such studies as “at the current churn rate, about half of S&P 500 companies will be replaced over the next ten years.” I, dear reader, have been guilty of using such weird mental gymnastics, a sort of Sugar Rush grade, neon-track of logic. (Also, there’s private equity, M&A, the 2008 financial collapse, globalism, cranky CEOs, and all manner of other things that change that list regardless of the, well, business acumen of the victims – pouring gasoline onto the scales of the fires of creative destruction.)

[5] Public cloud has been an interesting inroad against this: you have to pay for public cloud, there’s no way around it. Still, people seem to like it more than paying larger, upfront software licensing fees. In public cloud, the initial process is often much more pleasant than a traditional acquisition process: signing up without talking to a person, testing out the software, using it for real…all without having to set up servers and networking, or install and upgrade software.

[6] There’s a lot of hair-splitting between “continuous delivery” versus “continuous deployment.” I don’t know. At a certain level of organizational management, the distinction becomes less useful than spending your mental effort on more pressing strategic and management riddles. I think it’s notable that the jargon we use is “CI/CD,” not “CI/CD/CD” (or maybe, more delightfully, CI/CD²?)

[7] I’m fond of citing this survey for another reason that shows how misaligned executive and developer beliefs can be: 29% of developers said that “Access to infrastructure is the biggest impediment to developer productivity”…but only 6% of executives agreed.

Developers often know what’s slowing them down better than executives. So, like, if you’re a manager, don’t let that happen. Go check on what’s actually going on in the organization instead of just what’s going on in your status meetings. Get more details and free books at cote.pizza.

And, for more of these tiny videos, see the full playlist. They’re, you know, fun!

Using agile software to enable an agile business

The most recent version of my “big picture” talk:

This presentation explains why getting better at software is important and can help improve your business. It presents the product model of software development, in contrast to the typical project model. It then describes three common barriers to change and how some organizations overcome them. There are three case studies of real-world, large organizations used throughout as well to illustrate the major ideas.

Also available with Korean subtitles.

Governance hacks – business cases

Cut from my write-up of AirFrance-KLM’s modernization strategy for its 2,000+ apps.

For each major decision (like modernizing an application, moving an application team to a new toolchain, putting a new platform in place, and other major changes to how you do software), always have a business case. You have to avoid local optimization too: make sure you focus on the big picture, looking at dev, ops, and the overall business outcome. What does it mean to speed up the release cycle? Does introducing new services and capabilities make your daily business run more efficiently, or attract new customers? Does it help prevent security problems or add in more reliability? As they say, “what is this in service of?”

This is especially important for avoiding gratuitous transformation, gold-plating, and other fix-it-if-it-ain’t-broke anti-patterns. Also, it will help you show people why it’s worth changing if everything seems to be going well. “[T]eams have applications who are working and sometimes are working quite, quite well,” Jean-Pierre Brajal says. “So people come to us and say, ‘well, why do I have to cancel my application? It’s already running.’” You can use a business case to show the benefits.

Also, he notes, it’s important to make sure you’re improving the process end-to-end, not just one component. Let’s say you make automated testing better, but don’t address deploying the software. Now, when you’ve moved the team off their old system – one that was working – you’ve introduced a new problem, a new bottleneck that they didn’t have to deal with before.

There’s a lot more in the talk.


Scaling change with different tools than the pilot

Being successful at initial digital transformation projects is relatively easy. Scaling it to 20,000 people is not. I’ve noticed an anti-pattern pointed out in this piece by Accenture people, as they put it:

A divide between the digital capabilities supporting the pilot and the capabilities available to support scaling it.

That is, the pilot projects use different, newer techniques and technologies than existing IT. But when they scale, the pilot’s technologies are swapped out for the existing ones, and the methods are not actually changed. This sounds overly simplified, but the point is: they don’t actually change.

Notes on the 2019 DevOps Report

Some quick notes and callouts from this year’s 2019 DevOps Report:

  • Four key metrics: lead time, deployment frequency, mean time to restore (MTTR), and change fail percentage (see the quick sketch after these notes).
    • Med, High, and Elite all have a change fail rate of 0-15%. So, expect 15% change fail as benchmark worst case to shoot for…?
  • Demographics: 30% are devs, 26% “DevOps or SRE” – [so, lots of ICs self-evaluating]. 16% “managers,” and then it goes down from there…
  • Top industries are Technology at 38% and FinServe at 12%. Retail is 9%.
  • Mostly North America (50%) and Europe (29%)
  • Org. size: 100-499 (21%), 500-1,999 (15%), and 10,000+ (26%)
  • “A key goal in digital transformation is optimizing software delivery performance: leveraging technology to deliver value to customers and stakeholders.”
  • [I’m not sure if age of company, and, thus, an indication of governance and tech debt, is tracked. With 38% being tech companies, it’d be good to know how young they are. But most FinServ companies are large and old (unless it was mostly FinServ startups!).]
  • Very prescriptive this year, a maturity model to put a strategy in place, etc.
  • A lot on paying down tech debt:
    • Bounded contexts, APIs, SOA and microservices. Using and testing out-of-team services without having to work with that team (sort of like mocking for runtime).
    • Also: “Teams that manage code maintainability well have systems and tools that make it easy for developers to change code maintained by other teams, find examples in the codebase, reuse other people’s code, as well as add, upgrade, and migrate to new versions of dependencies without breaking their code”
  • Very little prod chaos monkey stuff: less than 10% across the board.
  • CABs still bad: those that have them are 2.6x more likely to be low performers.
    • Instead, do peer reviews and automate governance: “peer review-based approval during the development process. In addition to peer review, automation can be leveraged to detect, prevent, and correct bad changes much earlier in the delivery lifecycle. Techniques such as continuous testing, continuous integration, and comprehensive monitoring and observability provide early and automated detection, visibility, and fast feedback. In this way, errors can be corrected sooner than would be possible if waiting for a formal review.”
    • CABs should instead focus on process and practices change: “the CAB should focus instead on helping teams with process-improvement work to increase the performance of software delivery. This can take the form of helping teams implement the capabilities that drive performance by providing guidance and resources. CABs can also weigh in on important business decisions that require a trade-off and sign-off at higher levels of the business, such as the decision between time-to-market and business risk.”
    • [I’m pretty sure that was the original point, esp. when you look at RUP and ITIL stuff: setting the process to be used. Tooling to automate governance wasn’t really available. Policing those prescriptive processes took over, as it always does. And I’m not sure there are industry-standard frameworks to use there yet, either. There must be lots of hand-crafting.]
    • “Survey respondents with a clear change process were 1.8 times more likely to be in elite performers.” – [as ever, garbage in, garbage out.]
    • The people who work on governance are not the ones who can actually do the coding to automate it: “only our technical practitioners have the power to build and automate the change management solutions we design, making them fast, reliable, repeatable, and auditable…. Leaders at every level should move away from a formal approval process where external boards act as gatekeepers approving changes, and instead move to a governance and capability development role. After all, only managers have the power to influence and change certain levels of organizational policy. We have seen exponential improvements in performance—throughput, stability, and availability—in just months as a result of technical practitioners and organizational leaders working together.”
  • This is a different measure of “productivity”: “Productivity is the ability to get complex, time-consuming tasks completed with minimal distractions and interruptions.”
    • It doesn’t track amount of work done, but the environment people are working in…?
  • Tool use is all across the board: DIY stuff, COTS, open source, etc. [This sort of excludes the IaaS and other runtime layers, focusing on just CI/CD and test automation.]
  • “Multi-tasking” across roles and projects might be OK: “we cannot conclude that how well teams develop and deliver software affects the number of roles and projects that respondents juggle.”
  • Being able to find things and ask questions [and, presumably, getting answers!], having search, is important.
  • From my read (slide 74), the methods of transforming orgs are all across the board with Big Bang and Training Center as the only low ranked ones. Communities of practice are high, part of the Spotify model.
  • Pg. 75 tries to derive some advice nonetheless: mostly that separate education and training groups don’t work well/widely, that grassroots is used a lot, and that communities of practice are good, as well as PoCs that get cloned.
  • [This is an instance where the high level of individual contributors in the answers might have an effect. They see the positive change in their own team, but don’t have the big picture view to see if the practices scale up to 1,000’s of people. On the other hand, they might follow the “my congressperson is perfect, all the other ones are corrupt and terrible” pattern. Also, those 5,000+ people orgs struggle.]
  • [We still don’t know how to change an engine in flight.]
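
As a side note, here’s a minimal sketch of computing two of those four key metrics from a deploy log, just to make the definitions concrete – the log format and numbers are invented for illustration, not taken from the report:

```python
from datetime import date

# Hypothetical deploy log: (date, succeeded?) -- data invented for illustration.
deploys = [
    (date(2019, 6, 3), True),
    (date(2019, 6, 5), False),  # a change that failed in production
    (date(2019, 6, 5), True),   # the fix
    (date(2019, 6, 12), True),
]

days_observed = (deploys[-1][0] - deploys[0][0]).days or 1
deploys_per_day = len(deploys) / days_observed
change_fail_rate = sum(1 for _, ok in deploys if not ok) / len(deploys)

print(f"deployment frequency: {deploys_per_day:.2f} per day")
print(f"change fail percentage: {change_fail_rate:.0%}")  # elite benchmark: 0-15%
```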

The Business Bottleneck, new book

I’m working on a new book (check out the work in progress). Here’s the premise:

After at least five years of struggling with transformation, IT knows how to deliver better software: how to do the process and use the new tools needed for “digital transformation.” They may not actually do all that, but they know what should be done. However, “The Business” is not involved enough, nor does it know what to do. This prevents achieving the full benefits of digital transformation. The Business just knows that Amazon is coming to eat their lunch and that their boards are demanding a strategic response, like, yesterday. There are a handful of educational exceptions: companies like The Home Depot that are figuring it out and thriving. But there are a lot more organizations that are stumbling than succeeding. IT isn’t the bottleneck anymore; it’s finance, strategy, and management.

And, here’s a talk on the topic:

It’s a sequel to my previous book, Monolithic Transformation. That book looks more inward at the IT department and how it should change, while this one is trying to look at the rest of the organization: how does (should?) “The Business” need to change?

Here are some draft excerpts and related things I’ve been working on:

Banking “disruption,” or whatever – part 01

There’s near-universal sentiment that traditional banks need to shift to improve and protect their businesses against financial startups, so-called “FinTechs.” These startups create banks that are often 100% online, even purely as a mobile app. The release of the Apple Card highlights how these banks are different: they’re faster, more customer experience focused, and innovate new features.

The core reason FinTechs can do all of this is that they’re good at creating well-designed software that feels natural to people and allows these FinTechs to optimize the banking experience and even start innovating new features. People like banking with them!

These FinTechs are growing quickly. For example, N26 grew from 100,000 accounts in 2015 to 3.5m this year. Still, existing banks don’t seem to be feeling too much pain. In that same period, JPMC went from 39.2m digital accounts to 49m, adding 9.8m accounts. Even if it’s small or hard to chart, market share is being lost, and existing banks are eager to respond. And, of course, the FinTechs are eager to take advantage of slower-moving banks with the $128bn of VC funding that’s fueled FinTech growth.

I wanted to get a better handle on all this, so I’ve put together this “hot take” on digital banking, FinTechs, whatever. My conclusion is that these new banks take advantage of having a clean slate – a lack of legacy baggage in business models and technology stacks – to focus most of their attention on customer experience, doing software really well. This is at the heart of most “tech companies’” operational differentiation, and it’s no different in banking.

Large, existing banks may be “slow moving,” but they have deep competitive advantages if they can address the legacy of past success: those big, creaking backend systems and a culture of product development that, well, isn’t product development. Thankfully, there are several instances and case studies of banks transforming how they do business.

That Apple Card sure looks cool

As with you, I’m sure, I’m curious about the excitement around the Apple Card. It looks cool, with features like quick activation and tight (perhaps too tight!) integration with the iPhone. The card benefits aren’t too great compared to what’s widely available: the Apple Card gives you 1% to 3% cash back on purchases, with the 3% only for Apple purchases.

Two other features got me thinking though.

The cash back amounts show up in your account by the end of the day. In contrast, while many credit cards offer cash back, it can take weeks or even months for it to show up in your account – that cash back period is, perhaps not surprisingly, hard to find for most cards.

The Apple Card has a really quick activation process. Traditionally, getting your account set up and activating a card can take days to weeks – usually, you need a card snail-mailed to you. But once you set up your Apple Card account, you can start using tap-to-pay with your phone. When I moved to Amsterdam, I set up an ABN AMRO account, and last week I set up an N26 account. In both instances, I had to wait several days to get a physical debit card. I could start transferring money instantly, however.

There’s no guarantee that the Apple Card will be a competitive monster. Per usual, the huge customer base and trust Apple has boost their chances. As Patrick McGee at The Financial Times notes: “[A] JD Power survey published last week, before the card was even available, found that 52 per cent of those aged between 18 and 29 were aware of it; of those, more than half were likely to apply.” Apple usually has a great attach rate between the iPhone and new products. Signs point to the Apple Card working out well for Apple and their partners.

Shifting the market with innovation…right?

That snazzy UI and those zippy features make me wonder, though: why is this new? Why aren’t these boring, commodified features in banking yet? Let’s broaden this question to banking in general – mostly retail or consumer banking for our discussion here.

Perhaps we have an innovation gap in banking, something that’s likely been ignored by existing banks for many years. These FinTechs, and other innovation-focused companies like Apple, have been using innovation as a crowbar to take market share, coming up with better ways of servicing customers and new features.

Is that innovation getting FinTechs new business and sucking away customers from existing banks? To get a handle on that kind of market share shift, I like to use a chart I call The Dediu Cliff to think about startups vs. incumbents. It’s a simple, quick way of showing how market share shifts between those two: how startups gain share and incumbents lose it. You chart out as many years as you can in a 100% area graph showing the shift in market share between the various players. Getting that data for banking has so far proved difficult, but let’s take a swag at it anyhow.

Being lazy, I found a pre-made data set to show this, in Sweden, thanks to McKinsey:

[Chart: market share in Swedish consumer finance, universal banks vs. specialists.]
Sources: “Disruption in European consumer finance: Lessons from Sweden,” Albion Murati, Oskar Skau, and Zubin Taraporevala, McKinsey, April 2018; “New rules for an old game: Banks in the changing world of financial intermediation,” Miklos Dietz, Paul Jenkins, Rushabh Kapashi, Matthieu Lemerle, Asheet Mehta, Luisa Quetti, McKinsey, Nov 2018. 

As the report notes, Sweden is very advanced in digital banking. In comparison, they estimate that in the UK the “specialist” firms have less than 20% share. In this dataset, “specialist” isn’t exactly all new and fun FinTech startups, but this chart shows the shift from “universal,” traditional banks to new types of banks and services. There’s a market shift.

If I had more time, I’d want to make a similar Dediu Cliff for more than just Sweden. As a bad but quick example, here’s JPMC’s retail banking customer growth compared to N26’s:

[100% area chart: JPMC vs. N26 customer growth.]
Sources: “How JPMorgan Is Preparing For The Next Generation Of Consumer Banking,” CBInsights, August, 2018; JPMC 2018 annual report; “N26 is now one of the highest valued FinTechs globally,” N26 Blog, July, 2019.

 

This chart is not too useful, though, because it pits just one bank against one FinTech. And JPMC is much lauded for its innovation abilities. In the end, as of summer 2019, JPMC has 62m household customers, 49m of them “digital,” and N26 has 3.5m – all “digital,” we should assume. Here’s the breakdown:

 

[Bar chart: JPMC vs. N26 customer counts.]
Sources: “How JPMorgan Is Preparing For The Next Generation Of Consumer Banking,” CBInsights, August, 2018; JPMC 2018 annual report; “N26 is now one of the highest valued FinTechs globally,” N26 Blog, July, 2019.

Growth, as you’d expect, is something else: JPMC had a CAGR of 8%, while N26’s was 227%. If N26 survives, that of course means their growth will flatten, eventually.
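
If you want to check those rates, CAGR is just the growth ratio annualized. A quick sketch, where the three-year window is my assumption – it’s the period that reproduces the quoted figures:

```python
def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

# Account figures from the charts above; the three-year window is an
# assumption -- it's what reproduces the quoted rates.
print(f"JPMC digital accounts: {cagr(39_200_000, 49_000_000, 3):.0%}")  # ~8%
print(f"N26 accounts: {cagr(100_000, 3_500_000, 3):.0%}")               # ~227%
```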

Even if it’s hard to chart well, we should take it that the new breed of FinTechs is taking market share. Financial services executives seem to think so, as one PWC survey found: 73% of those executives “perceive consumer banking as the one most [banking products] likely to be disrupted by FinTech.”

To compound the fogginess, as in the original Dediu Cliff charting the dramatic shift from PCs to smartphones, the threat often comes from completely unexpected competitors. The market is redefined – from just PCs, for example, to PCs and smartphones. This leaves existing businesses (PC manufacturers) blindsided because their markets are redefined. Customers’ desires and buying habits change: they want to spend their computer share of wallet and time on iPhones, not Wintels.

Taking this approach in banking, there are numerous FinTechs going after underserved, “underbanked” markets that are usually deprioritized by existing banks. This is a classic, “Big D” Disruption strategy. One of the more fascinating examples is ride-sharing companies that become de facto banks because they handle the money that otherwise bankless drivers earn.

There’s also a hefty threat from behemoth tech companies outside of banking that are stumbling into finance. Companies like Alibaba and WeChat have huge presences in payments and Facebook is always up to something. These entrants could prove to be the most threatening long term if they redefine what the market is and how it operates.

Differentiating by focusing on people

So, there is a shift going on. What are these FinTechs doing? Let’s simplify to three things:

  1. Mobile – an emphasis on mobile as the core branch and workflow, often 100% mobile.
  2. Speed – from signing up, to transferring money, to, as with the Apple Card, faster cash back. While it’ll take a while to get my physical card, actually signing up with N26 was quick, including taking pictures of my Netherlands residency card for ID verification. I signed up at 11:29am and was ready to go at 4:05pm, on a Sunday no less.
  3. Innovation – sort of. It’s not really about new features, but innovations in how people interact with the banks. N26 lets you create “spaces,” which are just sub-accounts used to organize budgets and reports; bunq lets you create 25 new accounts; many FinTechs (like the Apple Card) bundle in transaction-type reporting and budgeting tools. All of those are interesting, but not groundbreaking…yet.

From a competitive analysis standpoint, what’s frustrating is that, feature-by-feature, traditional banks and FinTechs seem to be on par. Throw in services like mint.com, and all the supposedly new features that FinTechs have don’t look so unique anymore. Paying with your phone is amazing, to be sure, but that’s long been done by existing banks.

For all the charts and surveys you can pile on, the difference amounts to a subjective leap of faith. FinTech companies are more customer-centric, focusing on the customer experience. When you look at the broader “tech companies” that enterprises aspire to imitate, customer experience is one of the primary differentiators. Their software is really good. More precisely, how their software helps people accomplish tasks is well designed and ever-improving.

There’s a sound vision to be plucked from that for banks: “Live more, bank less,” as DBS Bank in Singapore puts it.

Unshackled

Responding to all of this seems easy on the face of it: if these FinTechs can do it, why can’t the thousands of developers with bank-sized budgets do it?

As ever, banks suffer from the shackles of success: all the existing processes, IT, and thought technologies that were wildly successful and drive their billions in revenue…but haven’t been modernized in years, or even decades.

In part 2, we’ll look at what banks can do to unshackle themselves, and maybe slip on some new shackles for the next ten years.

(There are some footnotes that didn’t get over here.  For those, and if you want to see me wrastlin’ through part two, or leave a comment, check out the raw Google Doc of this.)

The Strategy Bottleneck

This is a draft excerpt from a book I’m working on, tentatively titled The Business Bottleneck. If you’re interested in the footnotes, leaving a comment, and the further evolution of the book, check out the Google Doc for it. Also, the previous excerpt, “The Finance Bottleneck.”

Digital transformation is a fancy term for customer innovation and operational excellence that drive financial results. John Rymer & Jeffrey Hammond, Forrester, Feb 2019.

The traditional approach to corporate strategy is a poor fit for this new type of digital-driven business and software development. Having worked in corporate strategy, I find that fitting its function to an innovation-led business is difficult. If strategy is done on annual cycles, predicting and prescribing what the business should be doing over the next 12 months, it seems a poor match for the weekly learning you get from a small batch process. Traditionally, strategy defines where a company focuses: which market, which part of the market, what types of products, how products are sold, and, therefore, how money should be allocated. The strategy group also suggests mergers and acquisitions, M&A, that can make their plans and the existing business better. If you think of a company as a portfolio of businesses, the strategy group is constantly assessing each business in that portfolio to figure out whether to buy, sell, or hold.

The dominant strategy we care about here goes under the name “digital transformation.” Sort of. The idea that you should use software as a way of doing business isn’t new. A strategy group might define new markets and channels where software can be used: all those retail omnichannel combinations, new partnerships in open banking APIs, and new products. They also might suggest businesses to shut down or, more likely, divest to other companies and private equity firms, but that’s one of the less spoken about parts of strategy: no one likes the hand that pulls the guillotine cord.

A moment of pedantry

First, pardon a bit of strategy-splaining. Having a model of what strategy is gives us a helpful baseline to discuss how strategy needs to change to realize all these “digital transformation” dreams. Also, I find that few people have a good grasp of what strategy is, or what I think it should be.

I like to think of all “markets” as flows of cash, big tubes full of money going from point A to point B. For the most part, this is money flowing from a buyer’s wallet to a merchant. A good strategy figures out how to grab as much of that cash as possible, either by being the end-point (the merchant), reducing costs (the buyer), or doing a person-in-the-middle attack to grab some of that cash. That cash grabbing is often called “participating in the market.”

When it comes to defining new directions companies can take, “payments” is a good example. We all participate in that market. Payments is one of the more precise names for a market: tools people use to, well, pay for things.

First, you need to wrap your head around the payments industry. This largely means looking at cashless transactions, because using cash requires no payment tool. “Most transactions around the world are still conducted in cash,” The Economist explains, “However, its share is falling rapidly, from 89% in 2013 to 77% [in 2019].” There’s still a lot of cash used, oddly, in the US, but that’s changing quickly, especially in Asia. In China, for example, The Economist goes on, “digital payments rose from 4% of all payments in 2012 to 34% in 2017.” That’s a lot of cash shifting and now shooting through the payments tube. So, let’s agree that “payments” is a growing, important market that we’d like to “participate” in.

There are two basic participants here:

  1. New companies enter the market by creating new ways of paying for things that compete with existing ways to pay for things. For example, new entrants are services like Alipay, Bunq, Apple Pay, and GrabPay. While this is the domain of startups in most people’s minds, large companies play this role often.
  2. Existing companies both defend their existing businesses and create new ways of paying for things. For example, Dutch banks launched iDEAL several years ago. Existing companies often partner with new entrants: Goldman Sachs provides the backend for Apple Pay, and Maybank partnered with GrabPay. Incumbents can also create new ways to pay by simply acquiring the new companies: in general banking, Goldman Sachs acquired Honest Dollar to help it get into consumer banking.

“Strategy,” then, is (1.) deciding to participate in these markets, and (2.) deciding exactly how to participate, that is, how to grab money from those tubes of cash. Defining and nailing strategy, then, is key to success and survival. For example, an estimated 3.3 trillion dollars flowed through the credit card tube of money in 2016. As new ways of processing payments gain share, they grab more and more from that huge tube of cash. Clearly, this threatens the existing credit card companies, all of whom are coming up with new ways to defend their existing businesses and new payment methods.

As an example of a general strategy for incumbents, a recent McKinsey report on payments concludes:

The pace of digital disruption is accelerating across all components of the GTB value chain, placing traditional business models at risk. If they fail to pursue these disruptive technologies, banks could become laggards servicing less lucrative portions of the value chain as digital attackers address the friction points. To avoid this fate, banks must embrace digitized transaction banking with a goal of eliminating discrepancies, simplifying payments reconciliation, and streamlining infrastructure to operate profitably at lower price points. They must take proactive strategic steps to leverage their current favorable market position, or watch new market entrants pass them by.

That is:

  1. New methods of payment, from those pesky “tech companies,” will destroy your business.
  2. So you should create some new (not credit card) payment methods.
  3. At the same time, make your back-end systems more efficient so they can drive down costs for your existing credit card business, increasing your profit margins despite overall revenue declining as “tech companies” grab more and more money out of your cash-tubes.
  4. Also, take advantage of your existing capabilities in security, fraud handling, and governance compliance to differentiate both your new, non-credit card payment offerings and defend your existing credit card business.

That’s pretty good strategic direction, and it comes, as you can see in the PDF, from a very deep analysis of market conditions and trends – there’s even a Mekko chart!

Source: “Global payments 2018: A dynamic industry continues to break new ground,” McKinsey, Oct 2018.

Now, how you actually put all that into practice is what strategy is. Each company and industry has its own peccadilloes. The reason McKinsey puts out all those fine charts is to do the pre-sales work of getting you to invite them in and ask, “yes, but how?”

Getting over digital transformation fatigue

“Software is eating the world.” Pronouncements like this chestnut are by now obvious, thanks to the many Cassandras who have grown hoarse repeating them over the years. As one executive put it:

We came to the realization that, ultimately, we are a technology company operating in the financial-services business. So, we asked ourselves where we could learn about being a best-in-class technology company. The answer was not other banks, but real tech firms. 

This type of thinking has gone on for years, but change in large organizations has been glacial. If you search for the phrase “digital transformation” you’ll daily find sponsored posts on tech news sites preaching this, as they so often say, “imperative.” They’re long on blood-curdling pronouncements and short on explaining what to actually do.

We’re all tired of this facile, digital genuflection. But maybe it’s still needed. 

If surveys and sentiment are any indication, digital strategies are not being rolled out broadly across organizations. The survey below shows that the part of the business that creates the actual thing being sold, product design and development, is being neglected:

Source: answers to the question “Which business processes are the focus of your firm’s most recent digital transformation?” Data from “Kick-Start Your Digital Business Strategy,” Forrester, June 2019.

As with all averages, this means that half of the firms are doing better…and half of them worse. Curiously, IT is getting most of the attention here: as I say, the IT bottleneck is fixed. My anecdotes-as-data studies match up with the attention customer service is getting: as many of my examples here show, like the Orange one, early digital transformation applications focus on moving people from call centers to apps. And, indeed, “improving customer experience” is one of the top goals of most app work I see.

But it drops off from there. There’s plenty of room for improvement and much work to be done by strategy groups to direct and decide digital strategy. Let’s look at a two-part toolkit for how they might could do it:

  1. Sensing your market – how to observe your market to time and plan changes.
  2. Validating strategy – a new method to safely and accurately define what your organization does.

Sensing your market

Changing enterprise strategy is costly and risky. Done too early, and you deliver perfectly on a vision but are unable to scale to more customers: the mainstream is not yet “ready.” Done too late, and you’re in a battle to win back customers, often with price cutting death spirals and comically disingenuous brand changes: you don’t have time for actual business innovation, so you put lipstick on discount pigs.

An innovation strategy relies on knowing the right time to enter the market. You need a strategy tool to continually sense and time the market. Like all useful strategy tools, it not only tells you when to change, but also when to stay the same, and how to prioritize funding and action. Based on our experience in the technology industry, we suggest starting with a simple model drawn from numerous tech market disruptions and shifts. This model is Horace Dediu’s analysis of the post-2007 PC market; 2007, of course, is the year the iPhone was introduced. I’m not sure what to call it, but the lack of a label doesn’t detract from its utility. Let’s call it The Dediu Cliff:

The Dediu Cliff. Source: “The rise and fall of personal computing,” Jan 2012, Horace Dediu.

To detect when a market is shifting, Dediu’s model emphasizes looking beyond your current definition of your market. In the PC market, this meant looking at mobile devices in addition to desktops and laptops. Microsoft Windows and x86 manufacturers had long locked down the definition and structure of the PC market. Analyst firms like IDC tracked the market based on that definition and attempted disruptors like Linux desktop aspirants competed on those terms.

When the iPhone and Android were introduced in 2007, the definition of the PC market changed without much of anyone noticing. In a short 10 years, these “phones” came to dominate the “PC” market by all measures that mattered: time spent staring at the screen, profits, share increases, corporate stability and high growth, and customer joy. Meanwhile, traditional PCs were seen mostly as workhorses, commodities like pens and copy machines bought on refresh cycles with little regard to differentiation.

Making your own charts will often require some art. For example, another way to look at the PC market changing is to look at screen time per device, that is, how much time people spend on each device:

Screen time, or “engagement,” as measured by web traffic. Notice that this analysis of the US market has iOS leading Android. Source: Statcounter, queried 29 July 2019.

You have to find the type of data that fits your industry and the types of trends you’re looking to base strategy on. Those trends could be core assumptions that drive how your daily business functions. For example, many insurance businesses are still based on talking with an agent. So, in the insurance industry, you might chart online vs. offline browsing and buying:

Source: Gartner L2, July 2019.

While more gradual than Dediu’s PC market chart, this slope will still allow you to track trends. Clearly, some companies aren’t paying attention to that cliff: as the Gartner L2 research goes on to say, once people look to go from quote to purchase, only 38% of insurance companies allow for that purchase online.

Gaining this understanding of shifts in the very definition of your market is key. Ideally, you want to create the shift. If not, you want to enter the market once the shift is validated, as early as possible, even if the new entrant has single digit market share. Deploying your corporate resources (time, attention, and money) often takes multiple years despite the “overnight success” myths of startups. 

Timing is everything. Nailing that, per industry, is fraught, especially in highly regulated industries like banking, insurance, pharmaceuticals, and other markets that can use regulations to, uh, artificially bolster barriers to entry. Don’t think that high barriers to entry will save you, though: Netflix managed to wreak havoc in the cable industry, pushing top telcos further into being dumb pipes and moving them to massive content acquisitions to compete.

I suggest the following general tactics to keep from falling off The Dediu Cliff:

  1. Know your customer – study their Jobs to be Done, maintain a good, “speaking” relationship with them.
  2. Consider Cassandras that use footnotes – track trend spotting, especially year over year (over year, over year).
  3. Try new things – experiment and incubate new ideas to continually test and participate in the market.

We’ll take a look at each of these, and then expand on how the third is generalized into your core innovation function.

Know your customer

Measuring what your customer thinks about you is difficult. Metrics like NPS and churn give you trailing indicators of satisfaction, but they won’t tell you when your customer’s expectations are changing, and, thus, the market.

You need to understand how your customer spends their time and money, and what “problems” they’re “solving” each day. For most strategy groups, getting this hands-on is too expensive and not in their skill set. Frameworks like Jobs to Be Done and customer journey mapping can systemize this research. And, as we’ll see below, using a small batch process to implement your application allows you to direct strategy by observing how your customers actually interact with your business day-to-day.

Case Study: “The front door of the store is in your pocket,” Home Depot

In the ever-challenging retail world, The Home Depot has managed to prosper by knowing their customer in detail. The company’s omnichannel strategy provides an example. Customers expect “omnichannel” options in retail: the ability to order products online, buy them in-store, order online but pick up in-store, return items from online in-store…you get the idea. Accomplishing all of those tasks seems simple from the outside, but integrating all of those inventory, supply-chain, and payment systems is extremely difficult. Nonetheless, as Forrester has documented, The Home Depot’s concerted, hard-fought work to get better at software is delivering on their omnichannel strategy: “[a]s of fiscal year 2018, The Home Depot customers pick up approximately 50% of all online orders in the store,” alongside 28% growth in online sales.

Advances in this business have been fueled by intimate knowledge of The Home Depot’s customers and in-store staff, gained by actually observing and talking with them. “Every week, my product and design teams are in people’s homes or [at] customer job sites, where we are bringing in a lot of real-time insights from the customers,” Prat Vemana, The Home Depot’s Chief Digital Officer, said at the time.

The company focuses on customer journeys, the full, end-to-end process of customers thinking of, researching, browsing, acquiring, installing, and then using a product. For example, to home in on improving the experience of buying appliances, the product team working on this application spent hours in stores studying how customers bought appliances. They also spent time with customers at home to see how they browsed appliance options. The team even traveled with delivery drivers to see how the appliances are installed.

Here, we see a company getting to know their customer and their problems intimately. This leads to new insights and opportunities to improve the buying experience. In the appliances example, the team learned that customers often wanted to see the actual appliance and would waste time trying to figure out how they could see it in person. So, the team added a feature to show which stores had the appliances they were interested in, thus keeping the customer engaged and moving them along the sales process. 

Spanning all these parts of the customer journey gives the team research-driven insights into how to deliver on The Home Depot’s omnichannel strategy. As customers increasingly start research on their phone, in social media, go instore to browse, order online, pick up instore, have items delivered, and so forth, many industries are figuring out their own types of omnichannel strategies. 

All of those different combinations and changing options will be a fog to strategy groups unless they start to get to know their customers better. As Allianz’s Firuzan Iscan puts it: “When we think from the customer perspective, most of our customers are hybrid customers. They are starting in online, and they prefer an offline purchasing experience. So that’s why when we consider the journey end to end, we need to always take care of online and offline moments of this journey. We cannot just focus on online or offline.”

Corporate strategy didn’t sign up for this

The level of study done at The Home Depot may seem absurd for the strategy team to do. Getting out of the office may seem like a lot of effort, but the days spent doing it will give you a deep, ongoing understanding of what your customers are doing, how you’re fulfilling their needs, and how you can better their overall journey with you to keep their loyalty and sell more to them. Also, it’s a good excuse to get out of beige cubicle farms and dreary conference rooms. Maybe you can even expense some lunches!

As we’ll see, when the product teams building these applications are put in place, strategy teams will have a rich source of this customer information. In the meantime, if you’re working on strategy, you’d be wise to fill that gap however you can. We’ll discuss one method next, listening to those people yelling and screaming doom and disruption.

Consider Cassandras

An early, ignored attempt to warn about that “book seller” in Seattle.

In Western mythos, Cassandra was cursed to always have 100% accurate prophecies but never be believed. For those of us in the tech industry, cloud computing birthed many Cassandras. Now, in 2019, the success of public cloud is indisputable. The on-premises market for hardware and software is forever changed. Few believed that a “book seller” would do much here or that Microsoft could reinvent itself as an infrastructure provider, turning around a company that was easily dismissed in the post-iPhone era.

Despite this, as far back as 2007, early Cassandras were pointing out that software developers were using AWS in increasing numbers. Early on, RedMonk made the case that developers were the kingmakers of enterprise IT spend. And, if you tracked developer choice, you’d see that developers were choosing cloud. More Cassandras emerged over the years as cloud market share grew. Traditional companies heard these Cassandras, some eventually acting on the prophecies.

“Follow the CAPEX: Cloud Table Stakes 2018 Edition,” Charles Fitzgerald, February 2019.

Finally, traditional companies took the threat seriously, but as Charles Fitzgerald wickedly chronicled, it was too late. As his chart above shows, entering the public cloud market at this stage would cost hundreds of billions of dollars, each year, to catch up. The traditional companies in the infrastructure market failed to sense and act on The Cliff early enough – and these were tech companies, the outfits that are supposed to outmaneuver and outsmart the market!

Now, don’t take this to mean that these barriers to entry are insurmountable. Historically, almost every tech leader has been disrupted. That’s what happened in this market. There’s no reason to think that cloud providers are immune. We just don’t know when and how they’ll succumb to new competitors or, like Microsoft, have to reinvent themselves. What’s important, rather, is for these companies to properly sense and respond to that threat.

There are similar, though rearview-mirror-oriented, stories in many industries. TK( listing or summarizing one in a non-tech company would sure be cool here ).

To consider Cassandras, you need a disciplined process that looks at year over year trends, primarily how your customers spend their time and money. Mary Meeker’s annual slide buffet is a good example: where are your customers spending their time? RedMonk’s analysis of developers is another example. A single point in time Cassandra is not helpful, but a Cassandra that reports at regular intervals gives you a good read on momentum and when your market shifts.

Finally, putting together your own Dediu Cliff can self-Cassandraize you. Doing this can be tricky as you need to imagine what your market will look like – or several scenarios. You’ll need to combine multiple market share numbers from industry analysts into a Cliff chart, updating it quarterly. Having managed such a chart, I can say it’s exhilarating (especially if someone else does the tedious work!) but can be disheartening when quarter by quarter you’re filed into an email inbox labeled “Cassandras.”
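
If you want to tend such a chart yourself, the mechanics are simple enough. Here’s a minimal sketch in Python, assuming you’ve pulled quarterly unit numbers from analyst reports; all figures below are invented for illustration:

```python
# A DIY "Dediu Cliff" sketch: plot each category's share of the
# *redefined* market over time. All numbers are made up; in practice
# you'd paste in quarterly figures from analyst reports and re-run
# this each quarter.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4", "Q5", "Q6"]
shipments = {
    "incumbent (e.g., PCs)":      [100, 101, 99, 98, 95, 90],
    "new entrant (e.g., phones)": [5, 9, 15, 24, 38, 55],
}

# Compute share against the redefined market (incumbent + entrant),
# not the market as the incumbent defines it today.
totals = [sum(units) for units in zip(*shipments.values())]
for name, units in shipments.items():
    share = [100.0 * u / t for u, t in zip(units, totals)]
    plt.plot(quarters, share, marker="o", label=name)

plt.ylabel("share of redefined market (%)")
plt.legend()
plt.show()
```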

Thus far, our methods for sensing the market have been research methods, even “assume no friction” methods. Let’s look at the final method, one that relies on actually doing work, and then how it expands into the core of the new type of strategy and breaking The Business Bottleneck.

Try new things

The best way to understand and call market shifts is to actually be in the market, both as a customer and a producer. Being a customer might be difficult if you’re, for example, manufacturing tractors, but for many businesses being a customer is possible. It means more than trying your competitor’s products. To the point of tracking market redefinition, you want to focus on the Jobs to Be Done, the problems customers are solving, and try new ways of solving those problems. If this sounds like it’s getting close to the end goal of innovation, it’s because it is: but doing it in a smaller, lower-cost, and lower-risk way.

For example, if you’re in the utility business, become a customer of in-home IoT devices and see how that technology could be used to steal your customer relationship, further pushing your business into a commodity position. In the PC market, some executives at PC companies made it a point of pride to never have tried, or “understood” the appeal of, small screens – that kind of willful, proud ignorance isn’t helpful when you’re trying to be innovative.

You need to know the benefits of new technologies, but also the suffering your products cause. There’s a story that management at US car manufacturers were typically given a company car and free mechanical service during the day while their car was parked in the company parking lot. As a consequence, they never experienced first-hand how low quality affected their cars. As Nassim Taleb would put it, they didn’t have any skin in the game…and they lost the game. Regularly put your skin in the game: rent a car, file an insurance claim, fill out your own expenses, travel in coach, and eat at your in-store delis.

Key to trying new things is being curious, not only in finding these things, but in thinking up new products to improve on them and solve the problems you are, now, experiencing first hand.

The goal of trying new things is to experiment with new products, using them to direct your strategy and way of doing business. If you have the capability to test new products, you can systematically sense changes in market definition. Tech companies regularly float new ideas as test products to sense customer appetite and, thus, market redefinitions. If you’ve ever used an alpha, beta, or invite-only app, you’ve played a part in this process. These are experiments, ways the company tries new things. We laud companies like Google for their innovation successes, but we easily forget the long list of failed experiments. The website killedbygoogle.com catalogs 171 products that Google killed. Not all of these were “experiments”; some were long-running products that were killed off. Nonetheless, once Google sensed that an experiment wasn’t viable or a product was no longer relevant, it killed it and moved on.

When it comes to trying things, we must be very careful about the semantics of “failure.” Usually, “failure” is bad, but when it comes to trying new things, “failure” is better thought of as “learning.” When you fail at something, you’ve learned something that doesn’t work. Feeling your way through foggy, frenetic market shifts requires tireless learning. So, in fact, “failing” is often the fastest way to success. You just need a safe, disciplined system to continually learn.

Validating strategy

Innovation requires failure. There are few guarantees that all that failure will lead to success, but without trying new things, you’ll never succeed at creating new businesses and preventing disruption. Historically, the problem with strategy has been the long feedback cycles required to tell you if your strategy “worked.”

First, budgets are allocated annually, meaning your strategy cycle is annual as well. Worse, to front-load the budget cycle, you need to figure out your strategy even earlier. Most of the time, this means the genesis of your current strategy was two, even three years ago. The innovation and business rollout cycles at most organizations are huge. TK( some long roll out figure). It can be even worse: five years, if not ten, in many military projects. Clearly, in “fast moving markets,” to use the cliché, that kind of idea-to-market timespan is damaging. Competing against companies that have shorter loops is key for organizations now. As one pharmacy executive put it, taking six months to release competitive features isn’t much use if Amazon can release them in two months.

Your first instinct might be to start trying many new things, creating an incubation program as a type of beta-factory of your own. The intention is good, but the risks and costs are too high for most large organizations. Learning-as-failure is expensive and can look downright stupid and irresponsible to shareholders. Instead, you need a less costly, lower-risk way to fail than throwing a bunch of things at the wall and seeing what sticks.

The small batch cycle

The small batch cycle.

Many organizations use what we’ll call the small batch cycle. This is a feedback loop that relies on four simple steps (a code sketch follows the list):

  1. Identify a problem to solve.
  2. Create a theory of how to solve the problem.
  3. Validate this theory by trying it out in real life.
  4. Analyze the results to see if the theory is valid or not.
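
Since we’re talking about software, the loop is simple enough to doodle in code. This is just an illustrative sketch of the four steps, with a random number standing in for a real experiment; all the names here are mine, not a formal method:

```python
# A minimal sketch of the small batch cycle as a loop. A real
# "experiment" is an interview, a prototype, or an instrumented
# release, not a coin flip.
import random

def run_experiment(theory):
    # Stand-in for shipping a small slice and measuring what happens.
    return random.random() > 0.7

def small_batch_cycle(problem, max_cycles=12):
    for cycle in range(1, max_cycles + 1):
        theory = f"theory #{cycle} for: {problem}"   # steps 1 and 2
        validated = run_experiment(theory)           # step 3
        if validated:                                # step 4
            print(f"validated -> ship it: {theory}")
            return theory
        print(f"invalidated -> learned something: {theory}")
    return None  # budget spent, but you still learned twelve things

small_batch_cycle("customers abandon the signup flow")
```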

This is, essentially, the scientific method. The lean startup method and, later, lean design adapted this model to software development. This same loop can be applied “above the code” to strategy. This is how you can use failure-as-learning to create validated strategy and, then, start innovating like a tech company.

As described above, due to long cycles, most corporate strategy is theoretical, at worst PowerPoint arts and crafts with cut-and-pasting from a few web searches. The implementation details can become dicey, and then there’s seeing if customers will actually buy and use the product. In short, until the first customer buys and uses the “strategy,” you’re carrying the risk of wasting all your budget and time on it, often a year or more.

That risk might pay off, or it might not. Not knowing either way is why it’s a risk. A type of corporate “double up to catch up” mentality adds to the risk as well. Because the timeline is so long, the budget so high, and the risk of failure so large, managers will often seek the biggest bang possible to make the business case’s ROI “work.” Taking on a year’s time and a $10m budget must have a significant payoff. But with such high expectations, the risk increases because more must be done, and done well. And yet, the potential downside is even higher as well.

This risky mentality has been unavoidable in business for the most part – building factories, laying phone lines, manufacturing, etc. require all sorts of up-front spending and planning. Now, however, when your business relies on software, you can avoid these constraints and better control the risks. Done well, software costs relatively little and is incredibly malleable. It’s, as they say, “agile.” You just need to connect the agile nature of software to strategy. Let’s look at an example.

Case Study: Most viable strategy: Duke Energy validates RFID strategy

As an energy company, Duke Energy has plenty of strategizing to do around issues like disintermediation from IoT devices, deregulation, power needs for electric vehicles, and improving customer experience and energy conservation. Duke has a couple years of experience being cloud-native, getting far enough along to open up an 83,000-square-foot labs building housing 400 employees working in product teams.

They’re applying the mechanics of small batches and agile software to their strategy creation. “Journey teams” are used to test out strategies before going through the full-blown, annual planning process. “They’re small product-type teams led by design thinkers that help them really map out that new [strategic] journey and then identify [what] are the big assumptions,” Duke’s John Mitchell explained. Once identified, the journey teams test those assumptions, quickly proving or disproving the strategy’s viability.

Mitchell gives a recent example: labor is a huge part of the operating costs for a nuclear power plant, so optimizing how employees spend their time can increase profits and reduce the time it takes to address issues. For safety and compliance reasons, employees work in teams of five on each job in the plant, typically scheduled in hour-long blocks. Often, the teams finish in much less than an hour, creating spare capacity that could be used on another job.

If Duke could more quickly, in near real-time, move those teams to new jobs they could optimize each person’s time. “So the idea was, ‘How can we use technology?’” Mitchell explains. “What if we had an RFID chip on all of our workers? Not to ‘Big Brother’ check in on them,” he quickly clarifies, but to better allocate the spare capacity of thousands of people. Sounds promising, for sure.

Not so fast though, Mitchell says: “You need to validate, will that [approach] work? Will RFID actually work in the plant?” In a traditional strategy cycle, he goes on, “[You’d] order a thousand of these things, assuming the idea was good.” Instead, Duke took a validated strategy approach. As Mitchell says, they instead thought, “let’s order one, let’s take it out there and see if it actually works in plant environment.” And, more importantly, can you actually put in place the networking and software needed: “Can we get the data back in real time? What do we do with data?” The journey team tested out the core strategic theories before the company invested time and money into a longer-term project and set of risks.

Key to all this, of course, is putting these journey teams in place and making sure they have the tools needed to safely and realistically test out these prototypes. “[T]he journey team would have enough, you know, a very small amount of support from a software engineer and designer to do a prototype,” Mitchell explains. “[H]opefully, a lot of the assumptions can be validated by going out and talking to people,” he goes on, “and, in some cases there’s a prototype to be taken out and validated. And, again, it’s not a paper prototype—unless you can get away with it—[it’s] working software.”

Once the strategic assumptions are validated (or invalidated), the entire company has a lot more confidence in the corporate strategy. “Once they … validate [the strategy],” Mitchell explains, “you’ve convinced me—the leader, board, whatever—that you know [what] you’re talking about.”

It’s software

With software, as I laid out in Monolithic Transformation, the key ways to execute the loop are short release cycles, smaller amounts of code in each release, and the infrastructure capabilities to reliably reverse changes and maintain stability if things go wrong. 

These IT changes lead directly to positive business outcomes. Using a small batch cycle increases design quality and cost savings in application development, directly improving your business. First, the shorter, more empirical, customer-centered cycles mean you better match what your customers actually want to do with your software. Second, because your software’s features are driven by what customers actually do, you avoid overspending on your software by putting in more features than are actually needed.

For example, The Home Depot kept close to customers and “found that by testing with users early in the first two months, it could save six months of development time on features and functionality that customers wouldn’t use.” That’s four months of time and money saved, and software whose functionality better matches what customers want.

As you mature, these capabilities lead to even wider abilities to experiment with new features, like A/B testing, further honing the best way to match what your software does to how your customers want to use it and, thus, engage with your business. TK( quick example would be nice here ).
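
To make the A/B testing idea concrete, here’s a minimal sketch, assuming your app logs conversions per variant. The statistics are a plain two-proportion z-test, one simple way to read such results, and the traffic numbers are invented:

```python
# Compare conversion rates of two feature variants with a
# two-proportion z-test. A small p-value suggests the difference
# between A and B is more than chance.
import math

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, p_value

p_a, p_b, p = ab_test(120, 2400, 156, 2400)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, p-value: {p:.3f}")
```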

Software is the reason we call tech companies tech companies. They rely on software to run, even define, their business. Thus, it’s TK( maybe? ) software strategy where we need to look next.

The Finance Bottleneck

This is a draft excerpt from a book I’m working on, tentatively titled The Business Bottleneck. If you’re interested in the footnotes, leaving a comment, and the further evolution of the book, check out the Google Doc for it.

The Business Bottleneck

All businesses have one core strategy: to stay alive. They do this by constantly offering new reasons for people to buy from them and, crucially, stay with them. Over the last decade, traditional businesses have been freaked out by competitors that are figuring out better offerings and stealing those customers. The super-clever among these competitors innovate entirely new business models: hourly car rentals, next-day delivery, short-term insurance for jackets, paying for that jacket with your phone, banks with only your iPhone as a branch, incorporating real-time weather information into your reinsurance risk analysis.


In the majority (maybe all) of these cases, surviving and innovating is done with small business and software development cycles. The two work hand-in-hand and are ineffective without each other. I’d urge you to think of them as the same thing. Instead of using PowerPoint and Machiavellian meeting tactics as their tools, business development and strategy now use software.

You innovate by systematically failing weekly, over and over, until you find the thing people will buy and the best way to deliver it. We’ve known this for a long time and enshrined it in processes like The Lean Startup, Jobs to Be Done, agile development and DevOps, and disruption theory. While these processes are known and proven, they’ve hit several bottlenecks in the rest of the organization. In the past, we had IT bottlenecks. Now we have what I’ve been thinking of as The Business Bottleneck. There are several of them. Let’s start by looking at the first and, thus, most pressingly damaging one, the bottleneck that cuts off business health and innovation before it even starts: finance.

Most software development finance is done wrong and damages business. Finance seeks to be accurate and predictable, and works on annual cycles. This is not at all what business and software development are like.

Business & software development is chaos

Software development is a chaotic, unpredictable activity. We’ve known this for decades, but we willfully ignore it, like the advice to floss each day. Mark Schwartz has a clever take on the Standish software project failure reports. Since the numbers in these reports stay basically the same each year, the chart below shows that software is difficult and that we’re not getting much better at it:

Source: built from excerpts from the 2009 study and 2015 study.

What this implies, though, is something even more wickedly true: it’s not that these projects failed, it’s that we had false hopes. In fact, the red and yellow in the original chart actually show that software performs consistent with its true nature. Let me rework the chart to show this:

Source: built from excerpts from the 2009 study and 2015 study.

What this second version illustrates is that the time and budget it takes to get software right can’t be predicted with any useful accuracy. The only useful prediction is that you’ll be wrong. We call it software engineering, and even more accurately “development,” because it’s not scientific. Science seeks to describe reality, to be precise and correct – to discover truths that can be repeated. Software isn’t like that at all. There’s little science to what software organizations do; there’s just the engineering mentality of what works with the time and budget we have.


What’s more, business development is chaotic as well. Who knows what new business idea, what exact feature will work and be valuable to customers? Worse, there is no science behind business innovation – it’s all trial and error, constantly trying to both sense and shape what people and businesses will buy and at what price. Add in competitors doing the same, suppliers gasping for air in their own chaos quicksand, governments regulating, and culture changing people’s tastes, and it’s all a swirling cipher.

In each case, the only hope is rigorously using a system of exploration and refining. In business, you can study all the charts and McKinsey PDFs you want, but until you actually experiment by putting a product out there, seeing what demand and pricing are, and how your competitors will respond, you know nothing. The same is true for software.

Each domain has tools for this exploration. I’m less familiar with business development, and only know the Jobs to Be Done tool. This tool studies customer behaviors to discover what products they actually will spend money on, to find the “job” they hire your company to solve, and then change the business to profit from that knowledge.

The discovery cycle in software follows a simple recipe: you reduce your release cycle down to a week and use a theory-driven design process to constantly explore and react to customer preferences. You’re looking to find the best way to implement a specific feature in the UI to maximize revenue and customer satisfaction. That is, to achieve whatever “business value” you’re after. It has many names and diagrams, but I call this process the “small batch cycle.”

The Home Depot illustrates its small batch cycle. Source: Prat Vemana and Brooke Creef, 2018.

For example, Orange used this cycle when perfecting its customer billing app. Orange wanted to reduce traffic to call centers, lowering costs but also driving up customer satisfaction (who wants to call a call center?). By following a small batch cycle, the company found that its customers only wanted to see the last two months’ worth of bills and their current data usage. That drove 50% of the customer base to use the app, helping remove their reliance on actual call centers, driving down costs and addressing customer satisfaction.

These business and software tools start with the actual customers, the people doing the buying, and use them as the raw materials and lab to run experiments. The results of these experiments are used to validate, or more often invalidate, theories of what the business should be and do. That’s a whole other story, and the subject of my previous book, Monolithic Transformation.

We were going to talk about finance, though, weren’t we?

The Finance Bottleneck

Finance likes certainty – forecasts, plans, commits, and smooth lines. But if you’re working in the chaos of business and software development, you can’t commit to much. The only certainty is that you’ll know something valuable once you get out there and experiment. At first, all you’ll learn is that your idea was wrong. In this process, failure is as valuable as success. Knowing what doesn’t work, a failure, is the path to finding what does work, a success. You keep trying new things until you find success. To finish the absurd truth: failure creates success.

Software organizations can reliably deliver this type of learning each week. The same is true for business development. We’ve known this for decades, and many organizations have used it as their core differentiation engine.

But finance doesn’t work in these clever terms. “What the hell do you mean, ‘failure creates success’? How do I put that in a spreadsheet?” we can hear the SVP of Finance saying. “Get the hell out of this conference room. You’re insane.”

Instead, when it comes to software development, finance focuses only on costs. These are easy to know: the costs of staff, the costs of their tools, and the costs of the data centers to run their software. Business development has similar easy-to-know costs: salary, tools, travel, etc.

When you’re developing new businesses and software, it’s impossible to know the most important number: revenue. Without that number, knowing if costs are good or bad is difficult. You can estimate revenue and, more likely, you can wish-timate it. You can declare that you’re going to have 10% of your total addressable market (TAM). You can just declare – ahem, assume – that you’re chasing a $9bn market opportunity. Over time, once you’ve discovered and developed your business, you can start to use models like consumer spending vs. GDP growth, or the effect of weather and political instability on the global reinsurance market. And, sure, that works as a static model so long as nothing ever changes in your industry.

For software development, things are even worse when it comes to revenue. No one really tells IT what the revenue targets are. When IT is asked to make budgets, they’re rarely involved in setting, or even given, revenue targets. Of course, as laid out here, these targets in new businesses can’t be known with much precision. This pushes IT to just focus on costs. The problem here, as Mark Schwartz points out in all of his books, is that cost is meaningless if you don’t know the “value” you’re trying to achieve. You might try to do something “cheaply,” but without the context of revenue, you have no idea what “cheap” is. If the business ends up making $15m, is $1m cheap? If it ends up making $180m, is $5m cheap? Would it have been better to spend $10m if it meant $50m more in revenue?


IT is rarely involved in the strategic conversations that narrow down to a revenue number, nor in meetings about the more useful, but abstract, notion of “business value.” So, IT is left with just one number to work with: cost. This means they focus on getting “a good buy” regardless of what’s being bought. Eventually, this just means cutting costs, building up a “debt” of work that should have been done but was “too expensive” at the time. This creates slow-moving, or completely stalled-out, IT.

A rental car company can’t introduce hourly rentals because the back office systems are a mess and take 12 months to modify – but, boy, you sure got a good buy! A reinsurance company can’t integrate daily weather reports into its analytics to reassess its risk profile and adjust its portfolio because the connection between simple weather APIs and rock-solid mainframe processing is slow – but, sister, we sure did get a good buy on those MIPS! A bank can’t be the first in its market to add Apple Pay support because the payments processing system takes a year to integrate with, not to mention the governance changes needed to work with a new clearinghouse, and then there’s fraud detection – but, hoss, we reduced IT costs by $5m last year – another great buy!

Worse than shooting yourself in the foot is having someone else shoot you in the foot. As one pharmacy executive put it, taking six months to release competitive features isn’t much use if Amazon can release them in two months. But, hey! Our software development processes cost a third less than the industry averages!

Business development is the same, just with different tools and people who wear wing-tips instead of toe-shoes. Hopefully you’re realizing that the distinction between business and software development is unhelpful – they’re the same thing.

The business case is wrong from the start

So, when finance tries to assign a revenue number, it will be wrong. When you’re innovating, you can’t know that number, and IT certainly isn’t going to know it. No one knows the business value that you’re going to create: you have to first discover it, and then figure out how to deliver it profitably.

As is well known, the problem here is the long cycle that finance follows: at least a year. At that scope, the prediction, discovery, and certainty cycle is sloppy. You learn only once a year, maybe with indicators each quarter of how it’s going. But, you don’t really adjust the finance numbers: they don’t get smarter, more accurate, as you learn more each week. It’s not like you can go get board approval each week for the new numbers. It takes two weeks just to get the colors and alignment of all those slides right. And all that pre-wiring – don’t even get me started!

In business and software development, each week when you release your software you get smarter. While we could tag shipping containers with RFID tags to track them more accurately, we learn that we can’t actually collect and use that data – instead, it’s more practical to have people just enter the tracking information at each port, which means the software needs to be really good. People don’t actually want to use those expensive to create and maintain infotainment screens in cars, they want to use their phones – cars are just really large iPhone accessories. When buying a dishwasher, customers actually want to come to your store to touch and feel them, but first they want to do all their research ahead of time, and then buy the dishwasher on an app in the store instead of talking with a clerk. 

These kinds of results seem obvious in hindsight, but business development people failed their way to those successes. And, as you can imagine, strategy and finance assumptions made 12 to 18 months ago that drove business cases often seem comical in hindsight.

A smaller cycle means you can fail faster, getting smarter each time. For finance, this means frequently adjusting the numbers instead of sticking to the annual estimates. Your numbers get better, more accurate over time. The goal is to make the numbers adjust to reality as you discover it, as you fail your way to success, getting a better idea of what customers want, what they’ll pay, and how you can defend against competition.
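
To make that concrete, here’s a minimal sketch of what “adjusting the numbers” each cycle might look like; the smoothing method and all the figures are my own illustrative assumptions, not a method from finance, just to show the shape of it:

```python
# Re-forecast annual revenue from each small batch's actuals instead
# of defending the original annual estimate. Alpha controls how much
# weight the latest actuals get versus the running forecast.

def reforecast(annual_estimate, weekly_actuals, alpha=0.3):
    forecast = annual_estimate / 52          # the original weekly guess
    for week, actual in enumerate(weekly_actuals, start=1):
        # Blend what you believed with what just happened.
        forecast = alpha * actual + (1 - alpha) * forecast
        print(f"week {week}: re-forecast annual revenue = "
              f"${forecast * 52:,.0f}")
    return forecast * 52

reforecast(annual_estimate=5_000_000,
           weekly_actuals=[60_000, 55_000, 80_000, 90_000])
```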

Small batch finance

Some companies are lucky enough to just ignore finance and business models. They burn venture capital funding as fuel to rocket towards stability and profitability. Uber is a big test of this model – will it become a viable, profitable business, or will it turn out that all that VC money was just subsidizing a bad business model? Amazon is a positive example here: over the past 20 years, cash-as-rocket-fuel launched it to boatloads of profit.

Most organizations prefer less expensive, less risky methods. In these organizations, what I see are programs that institutionalize these failure-driven cycles. They create new governance and financing models that enforce smaller business cycles, allowing business and software development to take work in small batches. Allianz, for example, used 100-day cycles to discover and validate new businesses. Instead of one chance every 365 days to get it right, they have three, almost four. As each week goes by, they get smarter, there’s less waste and risk, and finance gets more accurate. If their business theory is validated, the new business graduates from the lab and is integrated back into the relevant line of business. The Home Depot, Thales, Allstate, and many others institutionalize similar practices.

Source: “The Shift to a New Digital Allianz Germany,” Dr. Daniel Poelchau, Allianz, CF Summit EU, Oct 2016.

Each of these cycles gives the business the chance to validate and invalidate assumptions. It gives finance more certainty, more precision, and, thus, fewer errors and less risk when it comes to the numbers. Finance might even be able to come up with a revenue number that’s real. That understanding makes funding business and software development less risky: you have ongoing health checks on the viability of the financial investment. You know to stop throwing good money after bad when you’ve invalidated your business idea. Or, you can change your assumptions and try again: maybe no one really wants to rent cars by the hour, maybe they want scooters, or maybe they just want a bus pass.

Business cases focused on growth, not costs

With a steady flow of business development learning, you can start making growth decisions. If you validate that you can track a team of nuclear power plant workers better with RFID badges, thus directing them to new jobs more quickly and reducing costly downtime, you can be more confident that spending millions of dollars to do it for all plant workers will pay off. You see similar small experiments leading to massive investments in omnichannel programs at places like Dick’s Sporting Goods and The Home Depot.

Finance has to get involved in this fail-to-success cycle. Otherwise, business and software development will constantly be driven to be the cheapest provider. We saw how this generally works out with the outsourcing craze of my youth. Seeking to be the cheapest option, or the synonymous “most cost effective” one, ends up saving money but paralyzing present and future innovation.

“Survey Analysis: IT Is Moving Quickly From Projects to Products,” Bill Swanton, Matthew Hotle, Deacon D.K Wan, Gartner, Oct 

The problem isn’t that IT is too expensive or can’t prove out a business case. As the Gartner study above shows, the problem is that most financing models we use to gate and rate business and software development are a poor fit. That needs to be fixed; finance needs to innovate. I’ve seen some techniques here and there, but nothing that’s widely accepted and used. And, certainly, when I hear about finance pushing back on IT business cases, it’s symptomatic of a disconnect between IT investment and corporate finance.

Businesses can certainly survive and even thrive. The small, failure-to-success learning cycles used by business and software developers work, are well known, and can be done by any organization that wills it. Those bottlenecks are broken. Finance is the next bottleneck to solve for.

I don’t really know how to fix it. Maybe you do! 

Crawl into the bottleneck

After finance comes, for another time, my old friend: corporate strategy. And if you peer past that blizzard of pre-wired slides and pivot tables, you can see just past the edges of the next bottleneck, that mysterious cabal called “The C-Suite.” Let’s start with strategy first.