Straddling the firewall: cloud from 2010 to 2020 (& what to do next)

I gave the ten year anniversary talk at the CloudAustin meetup. Ten years ago, I gave the first talk. Here’s a bit of the essay I wrote out as I was working on the presentation.

I want to go over the last ten years of cloud, since I gave the first talk at this meetup, back in August 2010. At the time, I was wrapping up my stint at RedMonk, though I didn’t know that until a year later. I went to work at Dell in corporate strategy, helping build the software and cloud strategies and businesses. I went back to being an analyst at 451 Research, where I ran the software infrastructure team, and then, thanks to my friend Andrew Shafer, ended up where I am now: at Pivotal, now VMware. I also have three kids. And a dog. And live in Amsterdam.

Staying grounded.

Beware, I still have to pay my bills

Pic: Kadumago, Nov 2019.

You should know that I have biases. I have experiences, ideas, “facts” even that come from that bias. I work at VMware, “VMware Tanzu” to be specific which is VMware’s focus on developers and the enterprise software they write. I’ve always worked on that kind of thing, and often in the interests of “on-premises” IT.

Indeed, at the end, I’m going to tell you that developers are what’s important – in fact, that Tanzu stuff is well positioned for the future I think should exist.

So, you know…whatever. As I used to say at RedMonk: “disclaimer.”

Computers from an outlet

Back in 2013, when I was at Dell, I went on an analyst tour in New England. Matt Baker and I visited IDC, Gartner, Forrester, and some smaller shops in hotel lobbies and fancy restaurants. These kinds of analyst meetings are a bit dog and pony, but they give you a survey of what’s going on and a chance to tell analysts what you think reality looks like.

Anyhow, we walked into the Forrester office, a building that was all brand new and optimistic. Some high-level person there was way into guitars and, I don’t know, 70’s rock. Along with Forrester branded ballpoint pens, they gave us The Allman Brothers’ At Fillmore East CDs as thank you gifts. For real. (Seriously.) Conference room names were all rock-and-roll. They had an electric guitar in the lobby. Forrester: not the golden buttoned boating jackets of Gartner, or the short-sleeved button-up white shirts of Yankee-frugal IDC.

Being at Dell, we were interested in knowing one thing: when, and even if, and at what rate, will the on-premises market succumb to public cloud. That’s all any existing vendor wanted to know in the past decade. That’s what that decade was all about: exploring the theory that public cloud would take over on-premises IT.

The Forrester people obliged. The room was full of about 8 or 10 analysts, and the Forrester sales rep. When a big account like Dell comes to call, you bring a big posse. Almost before Matt could finish saying the word “cloud,” a great debate emerged between two sides of analysts. There were raised voices, gesticulating, and listing back and forth in $800 springy office chairs. All while Matt and I sort of just sat there listening, like a call-center operator who uses long waits for people to pick up the phone as a moment of silence and calm.

All IT will be like this outlet here, just a utility, one analyst kept insisting. No, it’ll be more balanced, both on-premises and in cloud over time, another said. I’d never encountered an analyst so adamant about public cloud, esp. in 2013. It seemed like they were about to spring across the table and throttle their analyst opponent. I’m sure time ran out before anyone had a chance to whiteboard a good argument. Matt and I, entertained, exchanged cards, shook hands, and left with our new Allman Brothers album.

Public cloud has been slower to gobble up on-premises IT than fanatics thought it would be, even in the late 2000’s. The on-premises vendors deployed an endless amount of FUD-chaff against moving to public cloud. That fear of the new and unknown slowed things down, to be sure.

But I think more practical matters are what keep on-premises IT alive, even useful. We used to talk about “data gravity” as the thing holding migrations to public cloud back. All that existing data can’t be easily moved to the cloud, and new apps will just be throwing off tons of new data, so that’ll just pull you down more. (I always thought, you know, that you could just go to CostCo and buy a few hard-drives, run an xcopy command overnight, and then FedEx the drives to The Cloud. I’m sure there’s some CAP theorem problem there – maybe an Oracle EULA violation?) But as I’ll get to, I think there’s just tech debt gravity – probably 5 to 10 million applications[1] that run the world and just can’t crawl out of on-premises datacenters. These apps are just hard to migrate, and sometimes they work well enough, at good enough cost, that there’s no business case to move them.
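
If you like napkin math, that FedEx quip is easy to check. Here’s a minimal sketch in Python – the data size and link speed are made-up inputs, not measurements:

```python
# Napkin math for the CostCo-and-FedEx quip: when does shipping drives
# beat the wire? Both inputs are assumptions; swap in your own.
data_tb = 100     # data to migrate, in terabytes
link_gbps = 1     # sustained uplink, in gigabits per second

seconds = data_tb * 1e12 * 8 / (link_gbps * 1e9)
print(f"over the wire: {seconds / 86400:.1f} days")   # ~9.3 days
print("via overnight FedEx: about a day, plus the xcopy")
```

At those numbers the drives win by about a week, which is why the joke mostly works; the real blockers are the apps, not the bytes.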

Public cloud grows and grows

But, back to public cloud. If we look at it by revenue, it’s clear that cloud has been successful and is used a lot.

[Being respectful of analyst’s work, the reader is asked to open up Gartner’s Nov 13th press release, entitled “Gartner Forecasts Worldwide Public Cloud Revenue to Grow 17% in 2020” and look at the table there. You can find charts as well. Be sure to add up the SaaS-y categories into just one.]

As with all such charts, this is more a fanciful illustration of our instincts. Charts are a way to turn hunches into numbers, transform the qualitative into the quantitative. You can use an even older forecast (also, see the chart…and be sure to take out “advertising,” obviously) to go all the way back to 2010. As such, don’t get hung up on the exact numbers here.

The direction and general sizing in these lines is what matters. SaaS is estimated at $176 billion in 2020, PaaS at $39.7 billion, and IaaS at $50 billion.

So. Lots of revenue there. There are a few things to draw from this chart – again, hunches that we can gussy up into charts:

  1. The categorization of SaaS, PaaS, and IaaS stuck. This was finalized in some NIST work, and if you can believe it, categorizing things like this drove a lot of discussion. I participated in at least three of those exercises, at RedMonk, Dell, and then at 451 Research. Probably more! There was a lot of talk about “bursting” and “hybrid cloud.” Traditional IT, versus private cloud. Is it “on-prem” or “on-premises” – and what does it say of your moral character if you incorrectly use the first? Whatever. We returned to the simplicity of apps, devs, and ops.
  2. SaaS is sort of “forgotten” now as a cloud thing. It looms so large that we can’t even see it when we narrow our focus on the other two parts of cloud. Us in the dev and ops crowds just think of SaaS as normal now, not part of this wild, new category of “cloud.” Every day, maybe even every hour, we all use SaaS – the same is true for enterprises. Salesforce’s revenue went from $1.3bn in 2010 to $17.1bn in 2020. We now debate Office 365 vs. GMail, never Exchange. SaaS is the huge winner. Though it’s sort of not in my interests, when people talk about cloud and digital transformation, I tell them that the most useful thing they should probably focus on is just going all SaaS. You see this with all the remote working stuff now – all those firewalls and SharePoints on intranets are annoying, and supporting your entire company working from home requires the performance and scaling of a big time SaaS company. SaaS is what’s normal, so we don’t even think about it anymore.
  3. Below that is PaaS, the strange, much-maligned layer of the decade. We’re always optimistic about PaaS, as you can see in the newer forecast. It doesn’t seem to deliver on that optimism, at least in the mainstream. I certainly hope it will, and think it should. We’ll see this time. I’ve lived through enough phases of PaaS optimism and re-invention (well, all of them, I suppose) that I’m cautious. Being cynical to be optimistic, as I like to quip: it’s only stupid until it works. I’ll return to the glorious future of PaaS in a bit. But first…
  4. Below that, we have IaaS – what us tech people think of mostly as cloud. This is just a big pool of hardware and networking. Maybe some systems management and security services, if you consider that “infrastructure.” I don’t know. Pretty boring. However, if you’re not buying PaaS, this is the core of what you’re buying: just a new datacenter, managed by someone else. “Just” is insulting. That “managed by someone else” is everything for public cloud. You take this raw infrastructure, and you put your stuff on it. Forklift your applications, they say, deploy new applications. This is your “datacenter” in the cloud, and the way most people think about “cloud” nowadays: long rows of blinking lights in dark warehouses, sometimes with rainbow colored pipes.

IT can’t matter fast enough

Let’s get back to the central question of the past decade: when will public cloud eclipse on-premises IT?

First, let’s set aside the nuance of “private cloud” versus “traditional IT” and just think of it all as “on-premises.” This matters a lot because the nature of the vendors and the nature of the work that IT does change if you build and manage IT on your own, inside the firewall. The technology matters, but the responsibility and costs of running and maintaining it year after year, decade after decade, turn into the biggest, er, headache. It’s that debate from the Forrester people: when will IT become that wall outlet that Nicholas Carr predicted long ago? When will IT become a fungible resource that people can shed when all those blinking lights start holding back their business ambitions?

What we were hunting for in the past ten years was a sudden switch over like this, the complete domination of mobile over PCs:

The Dediu Cliff.  Source: “The rise and fall of personal computing,” Jan 2012, Horace Dediu.

This is one of the most brilliant and useful strategy charts you’ll ever see. It shows how you need to look at technology changes, market share changes. Markets are changed by a new entrant that’s solving the same problems for customers, the jobs to be done, but in a different way that’s ignored by the incumbents.[2] This is sort of big “D,” Disruption theory, but more inclusive. Apple isn’t an ankle biter slowly scaling up the legs of Microsoft, they’re a seasoned, monied incumbent, just “re-defining” the market.[3]

Anyhow, what we want is a chart like this for cloud so that we can find when on-premises crests and we need to start focusing on public cloud. Rather, a few years before that so we have plenty of time to invest and shift.  When I did strategy at Dell, Seth Feder kept up this chart for us. He did great work – he was always talking about how good the “R squared” was – I still don’t know what that means, but he seemed happy with it. I wish I could share his charts, but they’re lost to Dell NDAs and shredders. Thankfully, IDC has a good enough proxy, hardware spend on both sides of the firewall:

[The reader is asked to open IDC’s April 2nd, 2020 press release titled “Cloud IT Infrastructure Spending Grew 12.4% in the Fourth Quarter, Bringing Total 2019 Growth into Positive Territory, According to IDC” and contemplate the third chart therein.]

You can’t use this as a perfect guide – it’s just hardware, and, really, can you tell when hardware is used for “private cloud” versus “traditional IT”? And, beyond IaaS, we’d like to see this for the other two aaS’s: applications and developers. If only we had Seth still toiling away on his charts for the past decade.

But, once again, a chart illustrates our hunch: cloud is Hemingway’s bankruptcy thing – gradually, then suddenly – racing towards a Dediu Cliff. We still don’t know when on-premises compute will suddenly drop, but we should expect it…any year now…

…or decade…

…I guess.  

¯\_(ツ)_/¯

Pre-cliff jumpers

Competition in cloud was fierce. Again, I’m leaving out SaaS – not my area, don’t have time or data. But let’s look at infrastructure, IaaS.

[The reader is asked to open up the 2010 IaaS MQ and the 2020 IaaS MQ.]

It’s worth putting these charts side-by-side. They’re Gartner Magic Quadrants, of course. As with all charts, you can hopefully predict me saying, they illustrate our intuitions. What’s magical (yes!) about the MQ’s is that they show a mixture of sentiment, understanding, and actual feature set. You can see that in play here as we figured out what cloud was.

Infamously, the first IaaS MQ in 2010 has Amazon in the lower right, and a bunch of “enterprise grade” IaaS people up and to the right. Most of us snickered at and were confused by this. But, that 2010 list and ranking reflected how people, esp. big corporate buyers, were thinking about what they wanted cloud to be. They wanted it to be like what they knew and were certain they needed, but run by someone else, with that capex-to-opex pixie dust people used to obsess so much about.

Over the next ten years, everyone figured out what public cloud actually was: something different, more or less. Cloud was untethered from those “enterprise grade” expectations. In fact, most companies don’t want “enterprise grade” anymore, it’s not good enough. They want “cloud grade.” Everyone wants to “run like Google,” do DevOps and SRE. Enterprise buyers are no longer focused on continuing with what they have: they want something different.

All those missing dots are the vendors who lost out. There were many, many reasons. The most common initial one was, well, “server hugging”: just a biased belief in on-premises IT because that’s where all the vendor’s money had always come from. People don’t change much after their initial few decades, enterprises even less.[4]

The most interesting sidebars here are Microsoft and Rackspace. Microsoft shed its Windows focus and embraced the Linux and open source stack of cloud. Rackspace tried a pre-Kubernetes Kubernetes you kids may not remember, OpenStack. I’m hoping one day there’s a real raucous oral account of OpenStack. There’s an amazing history in there that we in the industry could learn from.

But, the real reason for this winnowing is money, pure and simple.

Disruptors need not apply

You have to be large to be a public cloud. You have to spend billions of dollars, every year, for a long time. Charles Fitzgerald has illustrated this over the years:

It costs a lot to save you so much. Source: “Follow the CAPEX: Cloud Table Stakes 2018 Edition,” Charles Fitzgerald, February 2019.

Most of the orange dots from 2010 just didn’t want to do this: spend billions of dollars. I talked with many of them over the past ten years. They just couldn’t wrap their heads around it. We didn’t even know it cost that much, really. Instead, those orange dots fell back on what they knew, trying to differentiate with those “enterprise grade” features. Even if they wanted to and did try, they didn’t have the billions in cash that Amazon, Microsoft, and Google had. Disruption is nice, but an endless cash-gun is better.

I mean, for example, what was Dotcloud going to do in the face of this? IBM tried several times, had all sorts of stuff in its portfolio, even acquiring its way in with SoftLayer – but I think they got distracted by “Watson” and “Smart Cities.” Was Rackspace ever going to have access to that much money? (No. And in fact, they went private for four years to re-work themselves back into a managed service provider, but all “multi-cloud” now.)

There are three public clouds. The rest of us just sell software into that. That’s, more or less, exactly what all the incumbents feared as they kept stacking servers in enterprise datacenters: being at the whim of a handful of cloud providers, just selling adornments.

$80bn in adornments

Source: “Investing City” on Seeking Alpha, originally from Pivotal IPO investor presentation.

While the window for grabbing public cloud profits might have closed, there’s still what you do with all that IaaS, how you migrate your decades of IT to and fro, and what you do with all the “left overs.” There’s plenty of mainframe-like, Microfocus-y and Computer Associates type of revenue to eke out of on-premises, forever.

Let’s look at “developers,” though. That word – developers – means a lot of things to people. What I mean, here, is people writing all those applications that organizations (mostly large ones) write and run on their own. When I talk about “developers,” I mean whatever people are in charge of writing and running an enterprise’s (to use that word very purposefully) custom-written software.

Back when Pivotal filed to IPO in March of 2018, we estimated that the market for all of that would be $80.4bn, across PaaS and on-premises.

This brings us back to PaaS. No one says “PaaS” anymore, and the phrase is a bit too narrow. I want to suggest, sort of, that we stop obsessing over that narrow definition, and instead focus on enterprise developers and in-house software. That’s the stuff that will be installed on, running on, and taking advantage of cloud over the next ten years. With that wider scope, an $80bn market doesn’t seem too far fetched.

And it’s real: organizations desperately want to get good at software. They’ve said this for many years, first fearing robot dogs – Google, Amazon, AirBnB, Tesla…whatever. After years of robot dog FUD, they’ve gotten wiser. Sure, they need to be competitive, but really, modernizing their software is just table stakes now to stay alive and grow.

What’s exciting is that organizations actually believe this now and understand software enough to know that they need to be good at it.

Source: “Improving Customer Experience And Revenue Starts With The App Portfolio,” Forrester Consulting, commissioned by VMware, March, 2020.

We in IT might finally get what we want sometime soon: people actually asking us to help them. Maybe even valuing what we can deliver when we do software well.

Beyond the blinking cursor

Obsessing over that Dediu Cliff for cloud is important, but no matter when it happens, we’ll still have to actually do something with all those servers, in the public cloud or our own datacenters. We’ve gotten really good at building blinking cursor boxes over the past ten years.

IT people build things: developers write code, operations people put together systems. They also have shaky budgets – most organizations are not eager to spend money on IT. This often means IT people are willingly forced to tinker with building their own software and systems rather than purchasing and reusing others’.[5] They love building blinking cursor boxes instead of focusing on moving pixels on the screen.

A blinking cursor box is yet another iteration of the basic infrastructure needed to run applications. Applications are, of course, the software that actually moves pixels around the screen: the apps people use to order groceries, approve loan applications, and other thrilling adventures in computing. Applications are the things that actually, like, are useful…the things we should focus on. But, instead, we’re hypnotized by that pulsing line on a blank screen.

Us vendors don’t help the situation. We compete and sell blinking cursor boxes! So we each try to make one, constantly. Often a team of people makes one blinking cursor box, gets upset at the company they work for, grabs a bunch of VC money, and then goes and makes another blinking cursor box. So many blinking cursors. The public clouds are, of course, big blinking cursor boxes. There was a strange time when Eucalyptus tried to fight this trend by doing a sort of Turing Test on Amazon’s blinking cursor. That didn’t go well for the smaller company. Despite that, slowly, slowly, us vendors have gotten closer to a blinking cursor layer of abstraction. We’re kind of controlling our predilections here. It seems to be going well.

Month 13: now, the real work can begin.

Over the past ten years, we’ve seen many blinking cursor boxes come and go: OpenStack, Docker, “The Datacenter of the Future,” and now (hopefully not going) Kubernetes. (There was also still virtualization, Windows, Linux, and mainframes. Probably some AS/400s if we dig around deep enough.) Each of these new blinking cursor boxes had fine intentions to improve on the previous blinking cursor boxes. These boxes are also open source, meaning that theoretically IT people in organizations could download the code, build and configure their own blinking cursor boxes, all without having to pay vendors. This sometimes works, but more often than not what I hear of are 12 month or more projects to stand up the blinking cursor box du jour…that don’t even manage to get the cursor blinking. The screen is just blank.

In larger organizations, there are usually multiple blinking cursor box programs in place, doubling, even tripling that time and money burn. These efforts often fail, costing both time and millions in staff compensation. An almost worse effect is when one or more of the efforts succeeds, kicking off another year of in-fighting between competing sub-organizations about which blinking cursor box should be the new corporate standard. People seem to accept such large-scale, absurdly wasteful corporate hijinks – they probably have more important things to focus on like global plagues, supply chain issues, or, like, the color palette of their new logo.

As an industry, we have to get over this desire to build new blinking cursor boxes every five or so years, both at vendors and enterprises. At the very least we should collaborate more: that seems to be the case with Kubernetes, finally.

Even in a world where vendors finally standardize on a blinking cursor box, the much more harmful problem is enterprises building and running their own blinking cursor boxes. Think, again, of how many large organizations there are in the world – F500, G2,000, whatever index you want to use. And think of all the time and effort put in for a year to get a blinking cursor box (now, probably Kubernetes) installed from scratch. Then think of the need in three months to update to a new version; six months at the longest, or you’ll get trapped like people did in the OpenStack years, and next thing you know, you’re running a blinking cursor box from the Victorian era. Then think of the effort to add in new databases and developer frameworks (then new versions of those!), security, and integrations to other services. And so on. It’s a lot of work, duplicated at least 2,000 times, more when you include those organizations that allow themselves to build competing blinking cursor boxes.

Obviously, working for a vendor that sells a blinking cursor box, I’m biased. At the very least, consider the costs over five to ten years of running your own cloud, essentially. Put in the opportunity cost as well: is that time and money you could instead be spending to do something more useful, like moving pixels around on your customer’s screen?
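
Here’s the shape of that napkin math, with inputs that are pure assumptions – plug in your own team sizes and salaries:

```python
# Napkin math: what the build-your-own blinking cursor box habit costs
# at industry scale. Every input here is an assumption.
engineers = 8             # platform team size per organization, assumed
loaded_cost = 200_000     # USD per engineer per year, fully loaded, assumed
years = 5
orgs = 2_000              # roughly the Global 2,000

per_org = engineers * loaded_cost * years
print(f"per org over {years} years: ${per_org / 1e6:.0f}M")              # $8M
print(f"duplicated across {orgs:,} orgs: ${per_org * orgs / 1e9:.0f}B")  # $16B
```

And that’s before the opportunity cost of what those engineers could have been building instead.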

Once you free up resources from building another blinking cursor box, you can (finally) start focusing on modernizing how you do software. Next: one good place to start.

Best practices, do them

As I’ve looked into how organizations manage to improve how they do software over the years I’ve noticed something. It’ll sound uselessly simple when you read it. Organizations that are doing well follow best practices and use good tools. Organizations that are struggling don’t. This is probably why we call them best practices.

Survey after survey of agile development usage will show this, every year. Simple practices and tools like unit testing are widely followed, but adherence to other practices quickly falls off. How many times have you heard people mention a “sit down stand-up meeting,” or say something like “well, we don’t follow all the agile practices we learned in that five day course – we adapted them to fit us”?

I like to use CI/CD usage as a proxy for how closely people are following best practices.

One of the most important tools people have been struggling to use is continuous integration and continuous delivery (CI/CD[6]). The idea that you can automate the drudgery of building and testing software, continuous integration, is obviously good. The ability to deploy software to production every week, if not daily, is vital to staying competitive with new features and getting fast feedback from users on your software’s usefulness.
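
The mechanics fit in a few lines. A toy sketch in Python – not any real CI tool’s API, and the make targets are hypothetical stand-ins for your actual build, test, and deploy steps:

```python
# A toy CI/CD gate: build and test every change (CI); if green, ship
# it (CD). "make test" and "make deploy" are hypothetical stand-ins.
import subprocess
import sys

def run(cmd: str) -> bool:
    """Run a shell command; True if it exited cleanly."""
    return subprocess.run(cmd, shell=True).returncode == 0

if not run("make test"):      # continuous integration
    sys.exit("build is red: stop the line and fix it")
if not run("make deploy"):    # continuous delivery/deployment
    sys.exit("deploy failed: roll back")
print("shipped - now go collect fast feedback from users")
```

Real pipelines hang artifact repositories, environments, and approvals off this skeleton, but the gate logic is the same.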

Very early on, if you’re not doing CI/CD, your strategy to improve how you’re doing software – to progress with your cloud strategy, even – is probably going to halt. It’s important! Despite this, for the past ten plus years, usage has been poor:

Source: State of Agile Surveys, 3rd through 14th, VersionOne/CollabNet/digital.ai. CI/CD not tracked in 5th/2009. Over the years, definitions change, “delivery” and “deployment” are added; but, these numbers are close enough to other surveys to be useful. See more CI/CD surveys: Forrester survey (2019), DZone CD reports (2014, 2015, 2016, 2017, 2019).

Automating builds and tests with continuous integration is clearly easier (or seen as more valuable?) than continuous delivery. And there’s been an encouraging rise in CD use over the past ten years.

Still, these numbers are not good. Again, think of those thousands of large organizations across the world, and then that half of them are not doing CI, and then that 60% of them are not doing CD. This seems ludicrous. Or, you know, great opportunity for improvement…and all that.

Take a look at what you’re doing – or not doing! – in your organization. Then spend time to make sure you’re following best practices if you want to perform well.

Stagnant apps in new clouds

Over the past ten years, many discussions about cloud centered around technologies and private or public cloud. Was the cloud enterprise grade enough? Would it cost less to run on premises? What about all my data? Something-something-compliance. In recent years, I’ve heard less and less of those conversations. What people worry about now is what they already have: thousands and thousands of existing applications that they want to move to cloud stacks.

To start running their business with software, and start innovating how they do their businesses, they need to get much better at software: be like the “tech companies.” However, most organizations (“most all,” even!) are held back by their existing portfolio of software: they need to modernize those thousands and thousands of existing applications.

Modernizing software isn’t particularly attractive or adventurous. You’re not building something new, driving new business, or always working with new technologies. And when you’re done, it can seem like you’ve ended up with exactly the same thing from the outside. However, management is quickly realizing that maintaining the agility of their existing application portfolio is key if they want future business agility.

In one survey, 76% of senior IT leaders said they were too invested in legacy applications to change. This is an embarrassing situation to be in: no one sets off to be trapped by the successes of yesterday. And, indeed, many of these applications and programs were probably delivered on the promise of being agile and adaptable. And now, well, they’re not. They’re holding back improvement and killing off strategic optionality. Indeed, in another survey on Kubernetes usage, 49% of executives said integrating new and existing technology is the biggest impediment to developer productivity.[7]

After ten years…I think we’ve sort of decided on IaaS, on cloud. And once you’ve got your cloud setup, your biggest challenge to fame and glory is modernizing your applications. Otherwise, you’ll just be sucking up all the same, old, stagnating water into your shiny new cloud.


[1] This is not even an estimate, just a napkin figure. AirFrance-KLM told me they’re modernizing over 2,000 applications. Let’s take the so called Fortune 500, and multiply it out: 500 x 2,000 = 1,000,000. Now, AirFrance-KLM is a big company, but not the biggest by far. A company like JPMC has many more applications. Also, there are governments, militaries, and other large organizations out there beyond the F500. Or you could have started with the Global 2,000. So, let’s assume that there are “millions” of apps out there. (Footnote to footnote: what is an “app”? If you’re asking this, let’s call it a “workload” and see if that satisfies you enough to go back to the main text.)

[2] Yes, yes – this isn’t strictly true. Taking out costs can change markets because you’ve freed up so much cash-flow, lowered prices, etc. There’s “globalization.” Regulations can dramatically change things (AT&T, unbundling, abdicating moral responsibility in publishing, etc.). Unexpected, black swans can destroy fragile parts of the market, table-flipping everything. And more what have you’s &co.’s. Thank you for your feedback; we appreciate your business; this call will now disconnect.

[3] For more on how to use this kind of chart in the strategy world, see Rita McGrath’s books, most recently, Seeing Around Corners.

[4] Some years ago, it was popular to cite such studies as “at the current churn rate, about half of S&P 500 companies will be replaced over the next ten years.” I, dear reader, have been guilty of using such weird mental gymnastics, a sort of Sugar Rush grade, neon-track of logic. (Also, there’s private equity, M&A, the 2008 financial collapse, globalism, cranky CEOs, and all manner of other things that change that list regardless of the, well, business acumen of the victims – pouring gasoline onto the scales of the fires of creative destruction.)

[5] Public cloud has been an interesting inroad against this: you have to pay for public cloud, there’s no way around it. Still, people seem to like it more than paying larger, upfront software licensing fees. In public cloud, the initial process is often much more pleasant than a traditional acquisition process: signing up without talking to a person, testing out the software, using it for real…all without having to set up servers and networking, or install and upgrade software.

[6] There’s a lot of hair-splitting between “continuous delivery” versus “continuous deployment.” I don’t know. At a certain level of organizational management, the distinction becomes less useful than spending your mental effort on more pressing strategic and management riddles. I think it’s notable that the jargon we use is “CI/CD,” not “CI/CD/CD” (or maybe, more delightfully, CI/CD²?)

[7] I’m fond of citing this survey for another reason that shows how misaligned executive and developer beliefs can be: 29% of developers said that “Access to infrastructure is the biggest impediment to developer productivity”…but only 6% of executives agreed.

The Strategy Bottleneck

This is a draft excerpt from a book I’m working on, tentatively titled The Business Bottleneck. If you’re interested in the footnotes, leaving a comment, and the further evolution of the book, check out the Google Doc for it. Also, the previous excerpt, “The Finance Bottleneck.”

Digital transformation is a fancy term for customer innovation and operational excellence that drive financial results. John Rymer & Jeffrey Hammond, Forrester, Feb 2019.

The traditional approach to corporate strategy is a poor fit for this new type of digital-driven business and software development. Having worked in corporate strategy, I find that fitting its function to an innovation-led business is difficult. If strategy is done on annual cycles, predicting and prescribing what the business should be doing over the next 12 months, it seems a poor match for the weekly learning you get from a small batch process. Traditionally, strategy defines where a company focuses: which market, which part of the market, what types of products, how products are sold, and therefore, how money should be allocated. The strategy group also suggests mergers and acquisitions, M&A, that can make their plans and the existing business better. If you think of a company as a portfolio of businesses, the strategy group is constantly assessing each business in that portfolio to figure out if it should buy, sell, or hold.

The dominant strategy we care about here goes under the name “digital transformation.” Sort of. The idea that you should use software as a way of doing business isn’t new. A strategy group might define new markets and channels where software can be used: all those retail omnichannel combinations, new partnerships in open banking APIs, and new products. They also might suggest businesses to shut down, or, more likely divest to other companies and private equity firms, but that’s one of the less spoken about parts of strategy: no one likes the hand that pulls the guillotine cord.

A moment of pedantry

First, pardon a bit of strategy-splaining. Having a model of what strategy is, however, is a helpful baseline to discuss how strategy needs to change to realize all these “digital transformation” dreams. Also, I find that few people have a good grasp of what strategy is, nor, what I think it should be.

I like to think of all “markets” as flows of cash, big tubes full of money going from point A to point B. For the most part, this is money from a buyer’s wallet flowing to a merchant. A good strategy figures out how to grab as much of that cash as possible, either by being the end-point (the merchant), reducing costs (the buyer), or doing a person-in-the-middle attack to grab some of that cash. That cash grabbing is often called “participating in the market.”

When it comes to defining new directions companies can take, “payments” is a good example. We all participate in that market. Payments is one of the more precise names for a market: tools people use to, well, pay for things.

First, you need to wrap your head around the payments industry. This largely means looking at cashless transactions, because using cash requires no payment tool. “Most transactions around the world are still conducted in cash,” The Economist explains, “However, its share is falling rapidly, from 89% in 2013 to 77% [in 2019].” There’s still a lot of cash used, oddly, in the US, but that’s changing quickly, especially in Asia; for example in China, The Economist goes on, “digital payments rose from 4% of all payments in 2012 to 34% in 2017.” That’s a lot of cash shifting and now shooting through the payments tube. So, let’s agree that “payments” is a growing, important market that we’d like to “participate” in.

There are two basic participants here:

  1. New companies enter the market by creating new ways of paying for things that compete with existing ways to pay for things. For example, new entrants are services like Alipay, Bunq, Apple Pay, and GrabPay. While this is the domain of startups in most people’s minds, large companies play this role often.
  2. Existing companies both defend their existing businesses and create new ways of paying for things. For example, Dutch banks launched iDEAL several years ago. Existing companies often partner with new entrants, for example: Goldman Sachs provides the backend for Apple Pay and Maybank partnered with GrabPay. Incumbents can also accomplish the second goal by just acquiring the new companies: in general banking, Goldman Sachs acquired Honest Dollar to help it get into consumer banking.

“Strategy,” then, is (1.) deciding to participate in these markets, and, (2.) the exact way these companies should participate, how they grab money from those tubes of cash. Defining and nailing strategy, then, is key to success and survival. For example, an estimated 3.3 trillion dollars flowed through the credit card tube of money in 2016. As new ways of processing payments gain share, they grab more and more from that huge tube of cash. Clearly, this threatens the existing credit card companies, all of whom are coming up with new ways to defend their existing businesses and new payment methods.

As an example of a general strategy for incumbents, a recent McKinsey report on payments concludes:

The pace of digital disruption is accelerating across all components of the GTB value chain, placing traditional business models at risk. If they fail to pursue these disruptive technologies, banks could become laggards servicing less lucrative portions of the value chain as digital attackers address the friction points. To avoid this fate, banks must embrace digitized transaction banking with a goal of eliminating discrepancies, simplifying payments reconciliation, and streamlining infrastructure to operate profitably at lower price points. They must take proactive strategic steps to leverage their current favorable market position, or watch new market entrants pass them by.

That is:

  1. New methods of payments will destroy your business, those pesky “tech companies.”
  2. So you should create some new (not credit card) payment methods.
  3. At the same time make your back-end systems more efficient so they can drive down costs for your existing credit card based business, increasing your profit margins despite overall revenue declining as “tech companies” grab more and more money out of your cash-tubes.
  4. Also, take advantage of your existing capabilities in security, fraud handling, and governance compliance to differentiate both your new, not credit card payment offerings and defend your existing credit card business.

That’s pretty good strategic direction, and it comes, as you can see in the PDF, from a very deep analysis of market conditions and trends – there’s even a Mekko chart!

Source: “Global payments 2018: A dynamic industry continues to break new ground,” McKinsey, Oct 2018.

Now, how you actually put all that into practice is what strategy is. Each company and industry has its own peccadilloes. The reason McKinsey puts out all those fine charts is to do the pre-sales work of getting you to invite them in and ask “yes, but how?”

Getting over digital transformation fatigue

“Software is eating the world.” Pronouncements like this chestnut are by now obvious, thanks to the many Cassandras who have grown hoarse repeating them over the years. As one executive put it:

We came to the realization that, ultimately, we are a technology company operating in the financial-services business. So, we asked ourselves where we could learn about being a best-in-class technology company. The answer was not other banks, but real tech firms. 

This type of thinking has gone on for years, but change in large organizations has been glacial. If you search for the phrase “digital transformation” you’ll daily find sponsored posts on tech news sites preaching this, as they so often say, “imperative.” They’re long on blood curdling pronouncements and short on explaining what to actually do.

We’re all tired of this facile, digital genuflection. But maybe it’s still needed. 

If surveys and sentiment are any indication, digital strategies are not being rolled out broadly across organizations, as one survey, below, suggests. It shows that the part of the business that creates the actual thing being sold, product design and development, is being neglected:

Source: answers to the question “Which business processes are the focus of your firm’s most recent digital transformation?” Data from “Kick-Start Your Digital Business Strategy,” Forrester, June 2019.

As with all averages, this means that half of the firms are doing better…and half of them worse. Curiously, IT is getting most of the attention here: as I say, the IT bottleneck is fixed. My anecdotes-as-data studies match up with the attention customer service is getting: as many of my examples here show, like the Orange one, early digital transformation applications focus on moving people from call centers to apps. And, indeed, “improving customer experience” is one of the top goals of most app work I see.

But, it drops off from there. There’s plenty of room for improvement and much work to be done by strategy groups to direct and decide digital strategy. Let’s look at a two part toolkit for how they might could do it:

  1. Sensing your market – how to observe your market to time and plan changes.
  2. Validating strategy – a new method to safely and accurately define what your organization does.

Sensing your market

Changing enterprise strategy is costly and risky. Done too early, and you deliver perfectly on a vision but are unable to scale to more customers: the mainstream is not yet “ready.” Done too late, and you’re in a battle to win back customers, often with price cutting death spirals and comically disingenuous brand changes: you don’t have time for actual business innovation, so you put lipstick on discount pigs.

An innovation strategy relies on knowing the right time to enter the market. You need a strategy tool to continually sense and time the market. Like all useful strategy tools, it not only tells you when to change, but also when to stay the same, and how to prioritize funding and action. Based on our experience in the technology industry, we suggest starting with a simple model drawn from numerous tech market disruptions and shifts. This model is Horace Dediu’s analysis of the post-2007 PC market. 2007, of course, is the year the iPhone was introduced. I’m not sure what to call it, but the lack of a label doesn’t detract from its utility. Let’s call it The Dediu Cliff:

The Dediu Cliff. Source: “The rise and fall of personal computing,” Jan 2012, Horace Dediu.

To detect when a market is shifting, Dediu’s model emphasizes looking beyond your current definition of your market. In the PC market, this meant looking at mobile devices in addition to desktops and laptops. Microsoft Windows and x86 manufacturers had long locked down the definition and structure of the PC market. Analyst firms like IDC tracked the market based on that definition and attempted disruptors like Linux desktop aspirants competed on those terms.

When the iPhone and Android were introduced in 2007, the definition of the PC market changed without much of anyone noticing. In a short 10 years, these “phones” came to dominate the “PC” market by all measures that mattered: time spent staring at the screen, profits, share increases, corporate stability and high growth, and customer joy. Meanwhile, traditional PCs were seen mostly as work horses, as commodities like pens and copy machines bought on refresh cycles with little regard to differentiation.

Making your own charts will often require some art. For example, another way to look at the PC market changing is to look at screen time per device, that is, how much time people spend on each device:

Screen time, or “engagement,” as measured by web traffic. Notice that the analysis of the US market share has iOS leading above Android. Source: Statcounter, queried 29 July 2019.

You have to find the type of data that fits your industry and the types of trends you’re looking to base strategy on. Those trends could be core assumptions that drive how your daily business functions. For example, many insurance businesses are still based on talking with an agent. So, in the insurance industry, you might chart online vs. offline browsing and buying:

Source: Gartner L2, July 2019.

While more gradual than Dediu’s PC market chart, this slope will still allow you to track trends. Clearly, some companies aren’t paying attention to that cliff: as the Gartner L2 research goes on to say, once people look to go from quote to purchasing, only 38% of insurance companies allow for that purchase online.

Gaining this understanding of shifts in the very definition of your market is key. Ideally, you want to create the shift. If not, you want to enter the market once the shift is validated, as early as possible, even if the new entrant has single digit market share. Deploying your corporate resources (time, attention, and money) often takes multiple years despite the “overnight success” myths of startups. 

Timing is everything. Nailing that, per industry, is fraught, especially in highly regulated industries like banking, insurance, pharmaceuticals, and other markets that can use regulations to, uh, artificially bolster barriers to entry. Don’t think that high barriers to entry will save you, though: Netflix managed to wreak havoc in the cable industry, pushing top telcos even more into being dumb pipes, moving them to massive content acquisitions to compete.

I suggest the following general tactics to keep from falling off The Dediu Cliff:

  1. Know your customer – study their Jobs to be Done, maintain a good, “speaking” relationship with them.
  2. Consider Cassandras that use footnotes – track trend spotting, especially year over year (over year, over year).
  3. Try new things – experiment and incubate new ideas to continually test and participate in the market.

We’ll take a look at each of these, and then expand on how the third is generalized into your core innovation function.

Know your customer

Measuring what your customer thinks about you is difficult. Metrics like NPS and churn give you trailing indicators of satisfaction, but they won’t tell you when your customer’s expectations are changing, and, thus, the market.

You need to understand how your customer spends their time and money, and what “problems” they’re “solving” each day. For most strategy groups, getting this hands-on is too expensive and not in their skill set. Frameworks like Jobs to Be Done and customer journey mapping can systemize this research. And, as we’ll see below, using a small batch process to implement your applications allows you to direct strategy by observing how your customers actually interact with your business day-to-day.

Case Study: “The front door of the store is in your pocket,” Home Depot

In the ever challenging retail world, The Home Depot has managed to prosper by knowing their customer in detail. The company’s omnichannel strategy provides an example. Customers expect “omnichannel” options in retail, the ability to order products online, buy them in-store, order online but pick-up in-store, return items from online in-store…you get the idea. Accomplishing all of those tasks seems simple from the outside, but integrating all of those inventory, supply-chain, and payment systems is extremely difficult. Nonetheless, as Forrester has documented, The Home Depot’s concerted, hard fought work to get better at software is delivering on their omnichannel strategy: “[a]s of fiscal year 2018, The Home Depot customers pick up approximately 50% of all online orders in the store” and a 28% growth in online sales.

Advances in this business have been fueled by intimate knowledge of The Home Depot’s customers and in-store staff, gained by actually observing and talking with them. “Every week, my product and design teams are in people’s homes or [at] customer job sites, where we are bringing in a lot of real-time insights from the customers,” Prat Vemana, The Home Depot’s Chief Digital Officer, said at the time.

The company focuses on customer journeys: the full, end-to-end process of customers thinking of, researching, browsing, acquiring, installing, and then using a product. For example, to home in on improving the experience of buying appliances, the product team working on this application spent hours in stores studying how customers bought appliances. They also spent time with customers at home to see how they browsed appliance options. The team also traveled with delivery drivers to see how the appliances are installed.

Here, we see a company getting to know their customer and their problems intimately. This leads to new insights and opportunities to improve the buying experience. In the appliances example, the team learned that customers often wanted to see the actual appliance and would waste time trying to figure out how they could see it in person. So, the team added a feature to show which stores had the appliances they were interested in, thus keeping the customer engaged and moving them along the sales process. 

Spanning all these parts of the customer journey gives the team research-driven insights into how to deliver on The Home Depot’s omnichannel strategy. As customers increasingly start research on their phone or in social media, go in-store to browse, order online, pick up in-store, have items delivered, and so forth, many industries are figuring out their own types of omnichannel strategies.

All of those different combinations and changing options will be a fog to strategy groups unless they start to get to know their customers better. As Allianz’s Firuzan Iscan puts it: “When we think from the customer perspective, most of our customers are hybrid customers. They are starting in online, and they prefer an offline purchasing experience. So that’s why when we consider the journey end to end, we need to always take care of online and offline moments of this journey. We cannot just focus on online or offline.”

Corporate strategy didn’t sign up for this

The level of study done at The Home Depot may seem absurd for the strategy team to do. Getting out of the office may seem like a lot of effort, but the days spent doing it will give you a deep, ongoing understanding of what your customers are doing, how you’re fulfilling their needs, and how you can better their overall journey with you to keep their loyalty and sell more to them. Also, it’s a good excuse to get out of beige cubicle farms and dreary conference rooms. Maybe you can even expense some lunches!

As we’ll see, when the product teams building these applications are put in place, strategy teams will have a rich source of this customer information. In the meantime, if you’re working on strategy, you’d be wise to fill that gap however you can. We’ll discuss one method next, listening to those people yelling and screaming doom and disruption.

Consider Cassandras

An early, ignored attempt to warn about that “book seller” in Seattle.

In Western mythos, Cassandra was cursed to always have 100% accurate prophecies but never be believed. For those of us in the tech industry, cloud computing birthed many Cassandras. Now, in 2019, the success of public cloud is indisputable. The on-premises market for hardware and software is forever changed. Few believed that a “book seller” would do much here, or that Microsoft could reinvent itself as an infrastructure provider, turning around a company that was easily dismissed in the post-iPhone era.

Despite this, as far back as 2007, early Cassandras were pointing out that software developers were using AWS in increasing numbers. Early on, RedMonk made the case that developers were the kingmakers of enterprise IT spend. And, if you tracked developer choice, you’d see that developers were choosing cloud. More Cassandras emerged over the years as cloud market share grew. Traditional companies heard these Cassandras, some eventually acting on the promises.

Source: “Follow the CAPEX: Cloud Table Stakes 2018 Edition,” Charles Fitzgerald, February 2019.

Finally, traditional companies took the threat seriously, but as Charles Fitzgerald wickedly chronicled, it was too late. As his chart above shows, entering the public cloud market at this stage would cost hundreds of billions of dollars, each year, to catch up. The traditional companies in the infrastructure market failed to sense and act on The Cliff early enough – and these were tech companies, those outfits that are supposed to outmaneuver and outsmart the market!

Now, don’t take this to mean that these barriers to entry are insurmountable. Historically, almost every tech leader has been disrupted. That’s what happened in this market. There’s no reason to think that cloud providers are immune. We just don’t know when and how they’ll succumb to new competitors or, like Microsoft, have to reinvent themselves. What’s important, rather is for these companies to properly sense and respond to that threat.

There are similar, though rearview-mirror-oriented, stories in many industries. TK( listing or summarizing one in a non-tech company would sure be cool here ).

To consider Cassandras, you need a disciplined process that looks at year over year trends, primarily how your customers spend their time and money. Mary Meeker’s annual slide buffet is a good example: where are your customers spending their time? RedMonk’s analysis of developers is another example. A single point in time Cassandra is not helpful, but a Cassandra that reports at regular intervals gives you a good read on momentum and when your market shifts.

Finally, putting together your own Dediu Cliff can self-Cassandraize you. Doing this can be tricky as you need to imagine what your market will look like – or several scenarios. You’ll need to combine multiple market share numbers from industry analysts into a Cliff chart, updating it quarterly. Having managed such a chart, I can say it’s exhilarating (especially if someone else does the tedious work!) but can be disheartening when quarter by quarter you’re filed into an email inbox labeled “Cassandras.”
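
If you want to build one, the chart itself is the easy part. Here’s a minimal matplotlib sketch, with invented share numbers standing in for whatever you pull from your analysts each quarter:

```python
# Sketch: a Dediu-Cliff-style chart from analyst market share numbers.
# The figures below are invented, purely for illustration.
import matplotlib.pyplot as plt

years   = [2013, 2015, 2017, 2019]
on_prem = [70, 62, 55, 48]   # % share of infrastructure spend, hypothetical
cloud   = [30, 38, 45, 52]

plt.plot(years, on_prem, marker="o", label="on-premises")
plt.plot(years, cloud, marker="o", label="public cloud")
plt.ylabel("share of infrastructure spend (%)")
plt.legend()
plt.title("Watch for the crossover, then the cliff")
plt.show()
```

The hard part is everything around it: agreeing on the market definition, getting comparable numbers quarter after quarter, and getting anyone to believe the trend line.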

Thus far, our methods for sensing the market have been research methods – “assume no friction” methods, even. Let’s look at the final method, one that relies on actually doing work, and then how it expands into the core of the new type of strategy and breaking The Business Bottleneck.

Try new things

The best way to understand and call market shifts is to actually be in the market, both as a customer and a producer. Being a customer might be difficult if you’re, for example, manufacturing tractors, but for many businesses being a customer is possible. It means more than trying your competitor’s products. To the point of tracking market redefinition, you want to focus on the Jobs to Be Done, problems customers are solving, and try new ways of solving those problems. If this sounds like it’s getting close to the end goal of innovation, it’s because it is: but doing it in a smaller, lower cost and lower risk way.

For example, if you’re in the utility business, become a customer of in-home IoT devices, and see how that technology could be used to steal your customer relationship, further pushing your business into a commodity position. In the PC market, some executives at PC companies made it a point of pride to never have tried, or “understood” the appeal of, small screens – that kind of willful, proud ignorance isn’t helpful when you’re trying to be innovative.

You need to know the benefits of new technologies, but also the suffering your products cause. There’s a story that managers at US car manufacturers were typically given a company car and free mechanical service during the day while their car was parked in the company lot. As a consequence, they never experienced first hand how low quality affected the cars. As Nassim Taleb would put it, they didn’t have any skin in the game…and they lost the game. Regularly put your skin in the game: rent a car, file an insurance claim, fill out your own expenses, travel in coach, and eat at your in-store delis.

Key to trying new things is being curious, not only in finding these things, but in thinking up new products to improve and solve the problems you are, now, experiencing first hand.

The goal of trying new things is to experiment with new products, using them to direct your strategy and way of doing business. If you have the capability to test new products, you can systematically sense changes in market definition. Tech companies regularly float new ideas as test products to sense customer appetite and, thus, market redefinitions. If you’ve ever used an alpha or beta app, or an invite-only app, you’ve played a part in this process. These are experiments, ways the company tries new things. We laud companies like Google for their innovation successes, but we easily forget the long list of failed experiments. The website killedbygoogle.com catalogs 171 products that Google killed. Not all of these were “experiments”; some were long-running products that were killed off. Nonetheless, once Google sensed that an experiment wasn’t viable or a product no longer valuable, they killed it, moving on.

When it comes to trying things, we must be very careful about the semantics of “failure.” Usually, “failure” is bad, but when it comes to trying new things, “failure” is better thought of as “learning.” When you fail at something, you’ve learned something that doesn’t work. And feeling your way through foggy, frenetic market shifts requires tireless learning. So, in fact, “failing” is often the fastest way to success. You just need a safe, disciplined system to continually learn.

Validating strategy

Innovation requires failure. There are few guarantees that all that failure will lead to success, but without trying new things, you'll never succeed at creating new businesses and preventing disruption. Historically, the problem with strategy has been the long feedback cycles required to tell you if your strategy "worked."

First, budgets are allocated annually, meaning your strategy cycle is annual as well. Worse, to front-load the budget cycle, you need to figure out your strategy even earlier. Most of the time, this means the genesis of your current strategy was two, even three years ago. The innovation and business rollout cycles at most organizations are huge. TK( some long roll out figure). It can be even worse: five years, if not ten, in many military projects. Clearly, in "fast moving markets," to use the cliché, that kind of idea-to-market timespan is damaging. Competing against companies that have shorter loops is key for organizations now. As one pharmacy executive put it, taking six months to release competitive features isn't much use if Amazon can release them in two months.

Your first instinct might be to start trying many new things, creating an incubation program as a type of beta-factory of your own. The intention is good, but the risks and costs are too high for most large organizations. Learning-as-failure is expensive and can look downright stupid and irresponsible to shareholders. Instead, you need a less costly, lower-risk way to fail than throwing a bunch of things at the wall and seeing what sticks.

The small batch cycle

The small batch cycle.

Many organizations use what we'll call the small batch cycle. This is a feedback loop that relies on four simple steps:

  1. Identify a problem to solve.
  2. Create a theory of how to solve the problem.
  3. Validate this theory by trying it out in real life.
  4. Analyze the results to see if the theory is valid or not.

This is, essentially, the scientific method. The lean startup method and, later, lean design have adapted this model to software development. This same loop can be applied "above the code" to strategy. This is how you can use failure-as-learning to create validated strategy and, then, start innovating like a tech company.
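
To make the loop concrete, here's a minimal sketch of the cycle as Python-flavored pseudocode. All the names are mine and purely illustrative; the point is the shape of the process, not any real framework:

    # The small batch cycle as a loop. A failed experiment doesn't end the
    # process; it feeds the next theory. That's failure-as-learning.

    def validate_in_real_life(theory: str) -> bool:
        """Stub for step 3: prototype, talk to customers, measure."""
        ...

    def small_batch_cycle(problem: str, theories: list[str]) -> str | None:
        # `problem` is step 1: the problem you've identified to solve.
        for theory in theories:                # step 2: one theory per batch
            if validate_in_real_life(theory):  # steps 3 and 4 together
                return theory                  # a validated strategy
            # an invalid theory is cheap learning: move to the next one
        return None                            # no winner yet; keep learning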

As described above, due to long cycles, most corporate strategy is theoretical, at worst PowerPoint arts and crafts with cut-and-pasting from a few web searches. The implementation details can become dicey, and then there's seeing if customers will actually buy and use the product. In short, until the first customer buys and uses the "strategy," you're carrying the risk of wasting all your budget and time, often a year or more, on that strategy.

That risk might pay off, or it might not. Not knowing either way is why it's a risk. A type of corporate "double up to catch up" mentality adds to the risk as well. Because the timeline is so long, the budget so high, and the risk of failure so large, managers will often seek the biggest bang possible to make the business case's ROI "work." Taking up a year's time and a $10m budget must have a significant payoff. But with such high expectations, the risk increases because more must be done, and done well. And the potential downside grows as well.
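
To see why, here's a toy comparison with made-up numbers: one $10m bet versus up to ten sequential $1m experiments, each with the same 20% chance of producing a winner. Small batches let you stop spending as soon as something works:

    # Toy numbers, purely illustrative, not from any study.
    p = 0.2  # assumed chance any single experiment finds a winner

    # Expected number of $1m tries before the first success, capped at ten
    # tries: E[min(tries, 10)] is the sum of the survival probabilities.
    expected_tries = sum((1 - p) ** k for k in range(10))

    print("big bet: $10m committed before any learning")
    print(f"small batches: ~${expected_tries:.1f}m expected spend")  # ~$4.5m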

This risky mentality has been unavoidable in business for the most part – building factories, laying phone lines, manufacturing, etc. require all sorts of up-front spending and planning. Now, however, when your business relies on software, you can avoid these constraints and better control the risks. Done well, software costs relatively little and is incredibly malleable. It’s, as they say, “agile.” You just need to connect the agile nature of software to strategy. Let’s look at an example.

Case Study: Minimum viable strategy: Duke Energy validates its RFID strategy

As an energy company, Duke Energy has plenty of strategizing to do around issues like disintermediation from IoT devices, deregulation, power needs for electric vehicles, and improving customer experience and energy conservation. Duke has a couple of years of experience being cloud-native, getting far enough along to open up an 83,000-square-foot labs building housing 400 employees working in product teams.

They’re applying the mechanics of small batches and agile software to their strategy creation. “Journey teams” are used to test out strategies before going through the full-blown, annual planning process. “They’re small product-type teams led by design thinkers that help them really map out that new [strategic] journey and then identify [what] are the big assumptions,” Duke’s John Mitchell explained. Once identified, the journey teams test those assumptions, quickly proving or disproving the strategy’s viability.

Mitchell gives a recent example: labor is a huge part of the operating costs for a nuclear power plant, so optimizing how employees spend their time can increase profits and reduce the time it takes to address issues. For safety and compliance reasons, employees work in teams of five on each job in the plant, typically scheduled in hour-long blocks. Often, the teams finish in much less than an hour, creating spare capacity that could be used on another job.

If Duke could more quickly, in near real-time, move those teams to new jobs, they could optimize each person's time. "So the idea was, 'How can we use technology?'" Mitchell explains. "What if we had an RFID chip on all of our workers? Not to 'Big Brother' check in on them," he quickly clarifies, but to better allocate the spare capacity of thousands of people. Sounds promising, for sure.

Not so fast, though, Mitchell says: "You need to validate, will that [approach] work? Will RFID actually work in the plant?" In a traditional strategy cycle, he goes on, "[You'd] order a thousand of these things, assuming the idea was good." Instead, Duke took a validated strategy approach. As Mitchell says, they instead thought, "let's order one, let's take it out there and see if it actually works in [the] plant environment." And, more importantly, can you actually put in place the networking and software needed: "Can we get the data back in real time? What do we do with [the] data?" The journey team tested out the core strategic theories before the company invested time and money into a longer-term project and set of risks.

Key to all this, of course, is putting these journey teams in place and making sure they have the tools needed to safely and realistically test out these prototypes. “[T]he journey team would have enough, you know, a very small amount of support from a software engineer and designer to do a prototype,” Mitchell explains. “[H]opefully, a lot of the assumptions can be validated by going out and talking to people,” he goes on, “and, in some cases there’s a prototype to be taken out and validated. And, again, it’s not a paper prototype—unless you can get away with it—[it’s] working software.”

Once the strategic assumptions are validated (or invalidated), the entire company has a lot more confidence in the corporate strategy. "Once they … validate [the strategy]," Mitchell explains, "you've convinced me—the leader, board, whatever—that you know [what] you're talking about."

It’s software

With software, as I laid out in Monolithic Transformation, the key ways to execute the loop are short release cycles, smaller amounts of code in each release, and the infrastructure capabilities to reliably reverse changes and maintain stability if things go wrong. 
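
As a tiny illustration of that last capability, "reliably reverse changes" often looks less like rolling back a deploy and more like flipping a flag. A sketch, with made-up names (the flag store here is a stand-in for whatever config or experimentation system you actually run):

    # Gate the new, small-batch change behind a flag so a bad release can
    # be "reversed" by flipping config, not by an emergency redeploy.
    FLAGS = {"new_checkout_flow": False}  # turned on for a small slice first

    def old_checkout(cart: list[float]) -> float:
        return sum(cart)                   # the stable path, always available

    def new_checkout(cart: list[float]) -> float:
        return round(sum(cart) * 0.95, 2)  # the change under test

    def checkout(cart: list[float]) -> float:
        fn = new_checkout if FLAGS["new_checkout_flow"] else old_checkout
        return fn(cart)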

These IT changes lead directly to positive business outcomes. Using a small batch cycle improves the design quality of your applications and cuts their costs, directly improving your business. First, the shorter, more empirical, customer-centered cycles mean you better match what your customers actually want to do with your software. Second, because your software's features are driven by what customers actually do, you avoid overspending by building more features than are actually needed.

For example, The Home Depot kept close to customers and "found that by testing with users early in the first two months, it could save six months of development time on features and functionality that customers wouldn't use." Two months of testing to avoid six months of wasted development: that's a net four months of time and money saved, and software whose functionality better matches what customers want.

As you mature, these capabilities lead to even wider abilities to experiment with new features, like A/B testing, further honing the match between what your software does, how your customers want to use it, and, thus, how they engage with your business. TK( quick example would be nice here ).
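
Mechanically, the heart of an A/B test is small: assign each user a stable variant, then compare how the groups behave. A minimal sketch, my own illustration rather than any particular experimentation product:

    # Hash the user id so each user always sees the same variant across
    # visits; conversion rates per variant get compared downstream.
    import hashlib

    def variant(user_id: str, experiment: str = "checkout-copy") -> str:
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
        return "A" if digest[0] % 2 == 0 else "B"

    # variant("user-42") -> "A" or "B", stable for that user and experiment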

Software is the reason we call tech companies tech companies. They rely on software to run, even define, their business. Thus, it's TK( maybe? ) software strategy where we need to look next.

An unused executive dinner speech

I hosted an executive dinner a few weeks ago. I'd put together this opening talk, introducing the customer who was kind enough to come and go through their story. I didn't really get a chance to give it, which was probably for the best. Maybe next time.

Thanks for coming – we’re glad y’all took the time. I know it’s hard.

My favorite thing about Pivotal is that I get to meet new people, computer people. My wife is always befuddled that I’m a wallflower in most company, but then, turn into an extrovert around computer people. So, it’s nice to meet more people like myself.

I’ve been at Pivotal almost five years and I’ve seen people like yourselves go through all sorts of transformations. They’re getting better at doing software. That’s setting them up to change how they run their business, to change what their business is, even. You can call it innovation, or whatever. Anyhow, I collect these stories – especially the parts where things go wrong – and in the Pivotal spirit of being kind and doing the right thing, try to help people who’re getting started by telling them the lessons learned.

Tonight, I'm hoping we can just get to know each other, especially amongst yourselves – us Pivotal people know each other well already!

Most organizations feel like they’re the only ones suffering. They think their problems are unique. I get to talk with a lot of organizations, so I see the opposite. In general, everyone has the same problems and usually the same solutions.

Given that, I’d encourage you to talk with each other about what you’re planning, or going through. Chances are someone right next to you is in the same spot, or, if you’re lucky, has gotten past it.

As an example of that, we're lucky that [customer who's here] wanted to share what's been going on at [enterprise]. There are lots of great stories in there…

So, let’s hear them…then let’s eat!

Speed, Accuracy, and Flexibility, IBM circa 1920

The purpose of a sales force is to bring a company’s value proposition—its “deal”—to customers. That value proposition results in the development of a company’s “go-to-market” strategy, how it will implement that plan. Central to that activity can be a direct sales force, people who meet face-to-face with customers, a typical approach with complex and expensive equipment. For simple products, a catalog or store can suffice, and today even a simple website will do. In 1914, ITR’s and Hollerith’s products were complicated, and so one had to make a clear case about why customers should buy them.

There was considerable consistency across the decades about IBM’s value proposition. Watson explained to a new batch of executives that, “We are furnishing merchants, manufacturers and other businessmen with highly efficient machines which save them money.” For the larger IBM community, he followed with, “That is why we are going to make more money for this business.” He spoke about how IBM created value. By 1920, Watson was preaching that the way to accomplish C-T-R’s goals was “to serve better industry’s vital requirements—the need to conserve time, motions and money.” He introduced a signature for IBM sales literature, too, that delivered a sound-bite value proposition used for decades: “Speed, Accuracy, and Flexibility.”

From IBM: The Rise and Fall and Reinvention of a Global Icon.

So what exactly should IBM do, and have done?

Now that IBM has ended its revenue-losing streak, we're ready to stick a halo on it:

There is no doubt, though, that there are signs of progress at IBM, which would not comment on its financial picture before the release of the earning report. So much attention is focused on the company’s top line because revenue is the broadest measure of the headway IBM is making in a difficult transformation toward cloud computing, data handling and A.I. offerings for corporate customers.

The new businesses — “strategic imperatives,” IBM calls them — now account for 45 percent of the company’s revenue. And though it still has a ways to go, IBM has steadily built up those operations — and gained converts.

Over all those quarters, there hasn't been much good analysis of "what went wrong" at IBM, inasmuch as I haven't really read much about what IBM should have been doing. What did we expect from them? What should they be doing now and in the future? I don't know the answers, but I'm damn curious.

“State your deal.”

Since the mid-2000's, all tech companies have been shit on for not getting to and dominating public cloud faster (there are exceptions, like Adobe, that get lost in the splurty noise of said shitting on). Huge changes have happened at companies like HP/HPE and Dell/EMC/VMware (where I work happily at Pivotal, thank you very much), and you can see Oracle quarterly dance-adapting to the new realities of enterprise IT spending.

For the past 8 or 10 years I've had a rocky handle on what it is that IBM sells exactly, and in recent years their marketing around it has been fuzzy. Try to answer the question "so what is it, exactly, that IBM sells?" A good companion is, "why do customers choose IBM over other options?"

You can’t say “solutions” or “digital transformation.” (I’m aware of some black kettle over here, but I and any Pivotal person could tell you exactly the SKUs, tools, and consulting we sell, probably on an index card). I’m pretty sure some people in IBM know, but the press certainly doesn’t know how the fuck to answer that question (with some exception at The Register and from TPM, grand sage of all IBM coverage).

I’ve been a life-long follower of IBM: my dad worked at the Austin campus, it was a major focus at RedMonk, and, you know, just being in the enterprise tech industry gets your face placed facing Armonk frequently. I feel like I know the company pretty well and have enough of an unnatural fascination to put up with spelunking through them when I get the chance; IBMers seem pleasantly bewildered when the first thing I ask them to do is explain the current IBM hierarchy and brand structure.

But I couldn't really explain what their deal is now. I mean, I get it: enterprise outsourcing, BPaaS (or did they sell all that off?), some enterprise private cloud and the leftover public cloud stuff, mainframe, a bunch of branded middleware (MQ, WebSphere, DB2, etc.) that they seem forbidden to mention by name, and "Watson."

There are clear products & services (right?)


When I've been involved in competitive situations with IBM over the years, what they're selling is very, very straightforward: outsourcing, software, and a sense of dependability. But the way they're talked about in the press is all buzzwordy weirdness. I'm sure blockchain and AI could be a big deal, but their on-and-off success at doing something everyday and practical with them is weird.

Or, it could just be the difficulty of covering it, explaining it, productizing, and then marketing it. “Enterprise solutions” often amounts to individually customized strategy, programs, and implementations for companies (as it should, most of the time), so you can’t really wrap a clear-cut SKU around that. It’s probably equally hard to explain it to financial analysts.

So, what’s their deal?

Cumulative capex spend by Google, Amazon, and Microsoft since 2001.
How much is that public cloud in the window?

Anyhow, I don’t come here to whatnot IBM (genuinely, I’ve always liked the company and still hope they figure it out), but more out of actual curiosity to hear what they should have been doing and what they should do now. Here’s some options:

  1. The first option is always "stay on target, stay on target," which is to say we just need to be patient and they'll actually become some sort of "the business of AI/ML, blockchain, and the same old, useful stuff of improving how companies run IT." I mean, sure. In that case, going private is probably a good idea. The coda to this is always "things are actually fine, so shut the fuck up with your negativity. Don't kill my vibe!" And if this is true, IBM just needs some new comms/PR strategies and programs.
  2. You could say they should have done public cloud better and (like all the other incumbent tech companies except Microsoft) just ate it. What people leave out of this argument is that they would have had to spend billions (and billions) of dollars to build that up over the past 10 years. Talk about a string of revenue-losing quarters.
  3. As I’m fiddling around with, they could just explain themselves better.
  4. They should have gotten into actual enterprise applications, SaaS. Done something like bought Salesforce, merged with SAP, who knows. IBM people hated it when you suggested this.
  5. The always ambiguous "management sucks." Another dumb answer that has to be backed up not just with missed opportunities and failures (like public cloud), but also with proof that IBM could have been successful there in the first place (e.g., with public cloud, would Wall Street have put up with them losing billions for years to build up a cloud?).

I’m sure there’s other options. Thinking through all this would be illustrative of how the technology industry works (and not the so called tech industry, the real tech industry).

(Obviously, I’m in a weird position working at Pivotal who sells against IBM frequently. So, feel free to dismiss all this if you’re thinking that, now that you’ve read this swill, you need to go put on a new tin-foil hat because your current one is getting a tad ripe.)

More on “grim” automation – Notebook

A few weeks back, my book review of two "the robots are taking over" books came out over on The New Stack. Here are some responses, and also some highlights from a McKinsey piece on automation.

Don’t call it “automation”

From John Allspaw:

There is much more to this topic. Nick Carr’s book, The Glass Cage, has a different perspective. The ramifications of new technology (don’t call it automation) are notoriously difficult to predict, and what we think are forgone conclusions (unemployment of truck drivers even though the tech for self-driving cars needs to see much more diversity of conditions before it can get to the 99%+ accuracy) are not.

Lisanne Bainbridge, in her seminal 1983 paper "Ironies of Automation," outlines what is still true today.

From that paper:

This paper suggests that the increased interest in human factors among engineers reflects the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator.

When things go wrong, humans are needed:

To take over and stabilize the process requires manual control skills, to diagnose the fault as a basis for shut down or recovery requires cognitive skills.

But their skills may have deteriorated:

Unfortunately, physical skills deteriorate when they are not used, particularly the refinements of gain and timing. This means that a formerly experienced operator who has been monitoring an automated process may now be an inexperienced one. If he takes over he may set the process into oscillation. He may have to wait for feedback, rather than controlling by open-loop, and it will be difficult for him to interpret whether the feedback shows that there is something wrong with the system or more simply that he has misjudged his control action.

There’s a good case made for not only the need for humans, but to keep humans fully trained and involved in the process to handle errors states.

Hiring not abating

Vinnie, the author of one of the books I reviewed, left a comment on the review, noting:

For the book, I interviewed practitioners in 50 different work settings – accounting, advertising, manufacturing, garbage collection, wineries etc. Each one of them told me where automation is maturing, where it is not, how expensive it is etc. The litmus test to me is are they stopping the hiring of human talent – and I heard NO over and over again even for jobs for which automation tech has been available for decades – UPC scanners in groceries, ATMs in banking, kiosks and bunch of other tech in postal service. So, instead of panicking about catastrophic job losses we should be taking a more gradualist approach and moving people who do repeated tasks all day long and move them into more creative, dexterous work or moving them to other jobs.

I think Avent's worry is that the approach won't be gradual and that, as a society, we won't be able to change norms, laws, and "work" fast enough.

McKinsey

As more context, check out this overview of their own study and analysis from a 2015 McKinsey Quarterly article:

The jobs don’t disappear, they change:

Our results to date suggest, first and foremost, that a focus on occupations is misleading. Very few occupations will be automated in their entirety in the near or medium term. Rather, certain activities are more likely to be automated, requiring entire business processes to be transformed, and jobs performed by people to be redefined, much like the bank teller’s job was redefined with the advent of ATMs.

Further:

our research suggests that as many as 45 percent of the activities individuals are paid to perform can be automated by adapting currently demonstrated technologies… fewer than 5 percent of occupations can be entirely automated using current technology. However, about 60 percent of occupations could have 30 percent or more of their constituent activities automated.

Most work is boring:

Capabilities such as creativity and sensing emotions are core to the human experience and also difficult to automate. The amount of time that workers spend on activities requiring these capabilities, though, appears to be surprisingly low. Just 4 percent of the work activities across the US economy require creativity at a median human level of performance. Similarly, only 29 percent of work activities require a median human level of performance in sensing emotion.

So, as Vinnie also suggests, you can automate all that stuff and have people focus on the “creative” things, e.g.:

Financial advisors, for example, might spend less time analyzing clients’ financial situations, and more time understanding their needs and explaining creative options. Interior designers could spend less time taking measurements, developing illustrations, and ordering materials, and more time developing innovative design concepts based on clients’ desires.

Companies want more from offshore IT, likely leading to more on-shore IT growth

The most recent offshoring survey from Horses for Sources suggests that companies will have less use for traditional IT outsourcing.

When it comes to IT services and BPO, it’s no longer about “location, location, location”, it’s now all about “skills, skills, skills”.

Instead of “commodity” capabilities (things like password resets, routine programming changes, etc.), companies want more highly-skilled, innovative capabilities. Either offshorers need to provide this, or companies will in-source those skills.

Because offshorers typically don't focus on such "open-ended" roles, analysis of the survey suggests offshorers will have less business, at least new business:

aspirations for offshore use between the 2014 and 2017 State of the Industry studies, we see a significant drop, right across the board, with plans to offshore services.

And:

an increasing majority of customers of traditional shared services and outsourcing feel they have wrung most of the juice offshore has to offer from their existing operations, and aren’t looking to increase offshore investments.

Given the large volume of IT outsourcing that companies do, and how this outsourcing tends to control and limit IT capabilities, paying attention to these trends can help you predict the ongoing "nature of IT" in large organizations.

This fits the offshoring and outsourcing complaints I hear from almost all software teams in large organizations.

To me this reads as "yes, we need to refocus IT to help us create and refine new business models." You know, "digital transformation," "cloud native," and all that.

Source: "Offshore has become Walmart as Outsourcing becomes more like Amazon"

Linux killed Sun?

For the Sun: WTF? files:

Gerstner questioned whether three or four years from now any proprietary version of Unix, such as Sun’s Solaris, will have a leading market position.

One of the more popular theories for the decline of Sun is that they accepted Linux way, way too late. As a counter-example, there’s IBM saying that somewhere around 2006 you’d see the steep decline of the Unix market, including Solaris, of course.

If I ever get around to writing that book on Sun, a chart showing server OS market-share from 2000 to 2016 would pair well with that quote.

If you’ve read Stephen’s fine book, The New Kingmakers, you may recall this relevant passage:

In 2001, IBM publicly committed to spending $1 billion on Linux. To put this in context, that figure represented 1.2% of the company’s revenue that year and a fifth of its entire 2001 R&D spend. Between porting its own applications to Linux and porting Linux to its hardware platforms, IBM, one of the largest commercial technology vendors on the planet, was pouring a billion dollars into the ecosystem around an operating system originally written by a Finnish graduate student that no single entity — not even IBM — could ever own. By the time IBM invested in the technology, Linux was already the product of years of contributions from individual developers and businesses all over the world.

How did this investment pan out? A year later, Bill Zeitler, head of IBM’s server group, claimed that they’d made almost all of that money back. “We’ve recouped most of it in the first year in sales of software and systems. We think it was money well spent. Almost all of it, we got back.”

Source: IBM to spend $1 billion on Linux in 2001 – CNET

Talend IPOs

The open source-based data integration company (basically, evolved ETL) Talend IPO'ed this week. It's a ten-year-old company, based on open source, with a huge French tie-in. Interesting all around. Here are some details on them:

  • “1,300 customers include Air France, Citi, and General Electric.” That’s way up from 400 back in 2009, seven years ago.
  • In 2015 “Talend generated a total revenue of $76 million. Its subscription revenue grew 39% year over year, representing $62.7 million of the total. The company isn’t profitable: it reported a net loss of $22 million for 2015.”
  • “…much of that [loss] thanks to the $49 million it spent on sales and marketing,” according yo Julie Bort.
  • “Subscription revenue rose 27% to $63m while service fees stayed flat at $13m,” according to Matt Aslett.
  • It looks like the IPO performed well, up ~50% from the opening price.

TAM Time

By this point, I'm sure Talend messes around in other TAMs, but way back when I used to follow the business intelligence and big data market more closely, I recall that much of the growth – though small in TAM – was in ETL. People always like to gussy it up as "data integration": sure thing, hoss.

That still seems to be the case, as spelled out in a recent Magic Quadrant for the space (courtesy of the big dog in the space, Informatica):

Gartner estimates that the data integration tool market was worth approximately $2.4 billion in constant currency at the end of 2014, an increase of 6.9% from 2013. The growth rate is above the average for the enterprise software market as a whole, as data integration capability continues to be considered of critical importance for addressing the diversity of problems and emerging requirements. A projected five-year compound annual growth rate of approximately 7.7% will bring the total to more than $3.4 billion by 2019

In comparison, here’s the same from the 2011 MQ:

Gartner estimates that the data integration tools market amounted to $1.63 billion at the end of 2010, an increase of 20.5% from 2009. The market continues to demonstrate healthy growth, and we expect a year-on-year increase of approximately 15% in 2011. A projected five-year compound annual growth rate of approximately 11.4% will bring the total to $2.79 billion by 2015.
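
As a quick sanity check (mine, not Gartner's), the quoted bases, CAGRs, and totals do line up:

    # Compound Gartner's own base figures at their stated CAGRs to verify
    # the quoted totals; the inputs are from the two MQ excerpts above.
    def project(base_bn: float, cagr: float, years: int) -> float:
        return base_bn * (1 + cagr) ** years

    print(round(project(2.40, 0.077, 5), 2))  # ~3.48: "more than $3.4bn by 2019"
    print(round(project(1.63, 0.114, 5), 2))  # ~2.80: "$2.79 billion by 2015"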

Meanwhile check out Carl Lehmann’s recent overview of Informatica and the general data integration market and Matt Aslett’s coverage of IPO plans back in June for a good overview of Talend.

Eventually, to do a developer strategy your execs have to take a leap of faith

A kingmaker in the making.

I’ve talked with an old colleague about pitching a developer-based strategy recently. They’re trying to convince their management chain to pay attention to developers to move their infrastructure sales. There’s a huge amount of “proof” an arguments you can make to do this, but my experience in these kinds of projects has taught me that, eventually, the executive in charge just has to take a leap of faith. There’s no perfect slide that proves developers matter. As with all great strategies, there’s a stack of work, but the final call has to be pure judgement, a leap of faith.

“Why are they using Amazon instead of our multi-billion dollar suite?”

You know the story. Many of the folks in the IT vendor world have had a great, multi-decade run in selling infrastructure (hardware and software). All of a sudden (well, starting about ten years ago), this cloud stuff comes along, and then things look weird. Why aren't they just using our products? To cap it off, you have Apple in mobile just screwing the crap out of the analogous incumbents there.

But, in cloud, if you’re not the leaders, you’re obsessed with appealing to developers and operators. You know you can have a “go up the elevator” sale (sell to executives who mandate the use of technology), but you also see “down the elevator” people helping or hindering here. People complain about that SOAP interface, for some reason they like Docker before it’s even GA’ed, and they keep using these free tools instead of buying yours.

It’s not always the case that appealing to the “coal-facers” (developers and operators) is helpful, but chances are high that if you’re in the infrastructure part of the IT vendor world, you should think about it.

So, you have The Big Meeting. You lay out some charts, probably reference RedMonk here and there. And then the executive(s) still isn’t convinced. “Meh,” as one systems management vendor exec said to me most recently, “everyone knows developers don’t pay for anything.” And then, that’s the end.

There is no smoking gun

If you can’t use Microsoft, IBM, Apple, and open source itself (developers like it not just because it’s free, but because they actually like the tools!) as historic proof, you’re sort of lost. Perhaps someone has worked out a good, management consultant strategy-toned “lessons learned” from those companies, but I’ve never seen it. And believe me, I’ve spent months looking when I was at Dell working on strategy. Stephen O’Grady’s The New Kingmakers is great and has all the material, but it’s not in that much needed management consulting tone/style. (I’m ashamed to admit I haven’t read his most recent book yet, maybe there’s some in there.)

Of course, if Microsoft and Apple don’t work out as examples of “leaders,” don’t even think of deploying all the whacky consumer-space folks out like Twitter and Facebook, or something as detailed as Hudson/Jenkins or Oracle DB/MySQL/MariaDB.

I think SolarWinds might be an interesting example, and if Dell can figure out applying that model to their Software Group, it’d make a good case study. Both of these are not “developer” stories, but “operator” ones; same structural strategy.

Eventually, they just have to “get it”

All of this has led me to believe that, eventually, the executives have to just take a leap of faith and "get it." There's only so much work you can do — slides and meetings — before you're wasting your time if that epiphany doesn't happen.

The transformation is complete.

If this is your bag, come check out a panel on the developer relations at the OpenStack Summit on April 28th, in Austin — I’ll be moderating it!

So you want to become a software company? 7 tips to not screw it up.

Hey, I’ve not only seen this movie before, I did some script treatments:

Chief Executive Officer John Chambers is aggressively pursuing software takeovers as he seeks to turn a company once known for Internet plumbing products such as routers into the world’s No. 1 information-technology company.

Cisco is primarily targeting developers of security, data-analysis and collaboration tools, as well as cloud-related technology, Chambers said in an interview last month.

Good for them. Cisco has consistently done a good job of filling out its portfolio and is far from the one-trick pony people think it is (last I checked, they do well with converged infrastructure, or integrated systems, or whatever we're supposed to call it now). They actually have a little-known software portfolio already (clearly little known, given the lack of mention in this piece).

In case anyone’s interested, here’s some tips:

1.) Don't buy already successful companies; they'll soon be old, tired companies

Software follows a strange loop. Unlike hardware where (more or less) we keep making the same products better, in software we like to re-write the same old things every five years or so, throwing out any “winners” from the previous regime. Examples here are APM, middleware, analytics, CRM, web browsers…well…every category except maybe Microsoft Office (even that is going bonkers in the email and calendaring space, and you can see Microsoft “re-writing” there as well [at last, thankfully]). You want to buy, likely, mid-stage startups that have proven that their product works and is needed in the market. They’ve found the new job to be done (or the old one and are re-writing the code for it!) and have a solid code-base, go-to-market, and essentially just need access to your massive resources (money, people, access to customers, and time) to grow revenue. Buy new things (which implies you can spot old vs. new things).

2.) Get ready to pay a huge multiple

When you identify a "new thing," you're going to pay a huge multiple of 5x, 10x, 20x, even more. You're going to think that's absurd and that you can find a better deal (TIBCO, Magic, Actuate, etc.). Trust me, in software there are no "good deals" (except once-in-a-lifetime buys like the firesale for Remedy). You don't walk into Tiffany's and think you're going to get a good deal, you think you're going to make your spouse happy.

3.) “Drag” and “Synergies” are Christmas ponies

That is, they're not gonna happen on any scale that helps make the business case, so move on. The effort it takes to "integrate" products and, more importantly, strategy and go-to-market together to enable these dreams of a "portfolio" is massive and often doesn't pan out. Are the products written in exactly the same programming language, using exactly the same frameworks and runtimes? Unless you're Microsoft buying a .Net-based company, the answer is usually "hell no!" Any business "synergies" are equally troublesome: unless they already exist (IBM is good at buying small and mid-sized companies who have proven out synergies by being long-time partners), it's a long shot that you're going to create any synergies. Evaluate software assets on their own, stand-alone, not as fitting into a portfolio. You've been warned.

4.) Educate your sales force. No, really. REALLY!

You’re thinking your sales force is going to help you sell these new products. They “go up the elevator” instead of down so will easily move these new SKUs. Yeah, good luck, buddy. Sales people aren’t that quick to learn (not because they’re dumb, at all, but because that’s not what you pay and train them for). You’ll need to spend a lot of time educating them and also your field engineers. Your sales force will be one of your biggest assets (something the acquired company didn’t have) so baby them and treat them well. Train them.

5.) Start working, now, on creating a software culture, not acquiring one

The business and processes ("culture") of software is very different and particular. Do you have free coffee? Better get it. (And if that seems absurd to you, my point is proven.) Do you get excited about ideas like "fail fast"? Study and understand how software businesses run and what they do to attract and retain talent. We still don't really understand how it all works after all these years, and that's the point: it's weird. There are great people (like my friend Israel Gat) who can help you, and there's good philosophy too: go read all of Joel's early writing as a start, don't let yourself get too distracted by Paul Graham (his is more about software culture for startups, which you are not — Graham-think is about creating large valuations, not extracting large profits), and just keep learning. I still don't know how it works or I'd be pointing you to the right URL. Just like with the software itself, we completely forget and re-write the culture-of-software canon about every five years. Good on us. Andrew has a good check-point from a few years ago that's worth watching a few times.

6.) Read and understand Escape Velocity

This is the only book I’ve ever read that describes what it’s like to be an “old” technology company and actually has practical advice on how to survive. Understand how the cash-cow cycle works and, more importantly for software, how to get senior leadership to support a cycle/culture of business renewal, not just customer renewal.

7.) There’s more, of course, but that’s a good start

Finally, I spotted a reference to Stall Points in one of Chambers' talks the other day, which is encouraging. Here's one of the better charts you can print out and put on your wall to look at while you're taking a pee-break between meetings:

That chart covers all types of companies. It's hard to renew yourself; it's not going to be easy. Good luck!

The Problem with PaaS Market-sizing

Figuring out the market for PaaS has always been difficult. At the moment, I tend to estimate it at $20–25bn sometime in the future (5–10 years from now?) based on the model of converting the existing middleware and application development market. Sizing this market has been something of an annual bug-bear for me across my time at Dell doing cloud strategy, at 451 Research covering cloud, and now at Pivotal.

A bias against private PaaS

This number is in contrast to the numbers you usually see from analysts, in the single-digit billions. Most analysts think of PaaS only as public PaaS, tracking just Force.com, Heroku, parts of AWS, Azure, and Google, and a bunch of "Other." This is mostly due, I think, to historical reasons: several years ago "private cloud" was seen as goofy and made-up, and I've found that many analysts still view it as such. Thus, their models started off being just public PaaS and have largely remained so.

I was once a "public cloud bigot" myself, but having worked more closely with large organizations over the past five years, I now see that much of the spending on PaaS is on private PaaS. Indeed, if you look at the history of Pivotal Cloud Foundry, we didn't start making major money until we gave customers what they wanted to buy: a private PaaS platform. The current product/market fit, then, for PaaS for large organizations seems to be private PaaS.

(Of course, I’d suggest a wording change: when you end-up running your own PaaS you actually end-up running your own cloud and, thus, end up with a cloud platform. Also, things are getting even more ambiguous at the infrastructure layer all the time — perhaps “private PaaS” means more “owning” the PaaS layer, regardless of who “owns” the IaaS layer.)

How much do you have budgeted?

With this premise — that people want private PaaS — I then look at existing middleware and application development market-sizes. Recently, I’ve collected some figures for that:

  • IDC’s Application Development forecast puts the application development market (which includes ALM tools and platforms) at $24bn in 2015, growing to $30bn in 2019. The commentary notes that the influence of PaaS will drive much growth here.
  • Recently from Ovum: “Ovum forecasts the global spend on middleware software is expected to grow at a compound annual growth rate (CAGR) of 8.8 percent between 2014 and 2019, amounting to $US22.8 billion by end of 2019.”
  • And there’s my old pull from a Goldman Sachs report that pulled from Gartner, where middleware is $24bn in 2015 (that’s from a Dec 2014 forecast).

When dealing with large numbers like this and so much speculation, I prefer ranges. Thus, the PaaS TAM I tend to use nowadays is something like "it's going after a $20–25bn market, you know, over the next 5 to 10 years." That is, the pot of current money PaaS is looking to convert is somewhere in that range. That's the amount of money organizations are currently willing to spend on this type of thing (middleware and application development), so it's a good estimate of how much they'll spend on a new type of this thing (PaaS) to help solve the same problems.
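
Spelled out, the range logic is simple. Here's a sketch of it, using the headline figures quoted above; the labels and the choice to bracket rather than sum are mine:

    # Each figure is an independent estimate of the pot PaaS could convert.
    # The categories overlap, so bracket them rather than summing them.
    pots_bn = {
        "appdev (IDC, 2015)": 24.0,        # growing to ~$30bn by 2019
        "middleware (Ovum, 2019)": 22.8,
        "middleware (Gartner, 2015)": 24.0,
    }
    low, high = min(pots_bn.values()), max(pots_bn.values())
    print(f"pot to convert: ~${low:.0f}-{high:.0f}bn")  # rounds out to "$20-25bn"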

Things get slightly dicey depending on including databases, ALM tools, and the underlying virtualization and infrastructure software: some PaaSes include some, none, or all of these in their products. Databases are a huge market (~$40bn), as is virtualization (~$4.5bn). The other ancillary buckets are pretty small, relatively. I don’t think “PaaS” eats too much database, but probably some “virtualization.”

So, if you accept that PaaS is both public and private PaaS and that it’s going after the middleware and appdev market, it’s a lot more than a few billion dollars.

(Ironic-clipart from my favorite source, geralt.)

OpenStack Summit 2016 Talks

The OpenStack Summit is in Austin this year, finally! So, I of course submitted several talks. Go over and vote for them – I think that does something helpful, who the hell knows?

Here’s the talks:

I’ll be at the Summit regardless, but it’d sure be dandy to do some of the above too.

New DevOps Column at The Register

I started a new column at The Register on the topic of DevOps. I used the first column to lay out the case that DevOps is a thing, and to baseline how much adoption there currently is (enough, but not a lot – a "glass almost half full" type of situation). I was surprised by how many comments it kicked up!

Next up, I’ll try to pick key concepts and explain them, along with best and worst practices for adoption of those concepts. Or whatever else pops up to fill 800 words. Tell me if you have any ideas!

(You may recall I had a brief column at The Register back when I was at 451 Research.)

The media doesn’t know what they’re talking about w/r/t Yahoo, a study in i-banker rhetoric

The notion that some in the media – who usually have no specific knowledge about Yahoo – have recklessly put forward that Yahoo is “unfixable” and that it should be simply “chopped up” and handed over for nothing to private equity or strategies is insulting to all long-term public shareholders.

This presentation is an example of many things we discuss on Software Defined Talk around large, struggling companies and the way they’re covered. Among other rhetorical highlights:

  • Check out how they make their case
  • Use visuals and charts
  • The informal nature of their language, e.g., they use the word “stuff” frequently
  • Their citations, e.g., citing themselves (I always love a good “Source: Me!”) and citing “Google Images”

These things, in my view, are neither good nor bad: I'm more interested in the study of the rhetoric, which I find fascinating in investment banker documents/presentations like this.

Not only that, it’s a classic “Word doc accidentally printed in landscape.” The investment community can’t help themselves.

As another note, there's no need to be such a parenthetical dick, below, to prove the point of a poor M&A history: just let the outcomes speak for themselves, not the people who did them.


They actually do a better job in the very next slide, but that kind of pettiness doesn't really help their argument. (Their argument is: she's acquiring her friends.)

This is a type of reverse halo effect: we assume that tree-standing goofiness has something to do with the business: an ad hominem attack. But I think most billionaires probably have pictures of themselves in trees, wearing those silly glove shoes, roasting their own coffee, only eating meat they kill themselves, or any number of other affectations that have nothing to do with profit-making, good or bad.

Solving the conundrums of our father’s strategies

So here we are, as of this writing a good twenty-nine years after the "hatchet job," and Kodak has declared bankruptcy. The once-humming factories are literally being blown up, and the company's brand, which Interbrand had valued at $14.8 billion in 2001, fell off its list of the top one hundred brands in 2008, with a value of only $3.3 billion. It really bothered me that the future was so visible in 1980 at Kodak, and yet the will to do anything about it did not seem to be there. I asked Gunther recently why, when he saw the shifts coming so clearly, he did not battle harder to convince the company to take more forceful action. He looked at me with some surprise. "He asked me my opinion," he said, "and I gave it to him. What he did beyond that point was up to him." Which is entirely characteristic of scientists like Gunther. They may see the future clearly, but are often not interested in or empowered to lead the charge for change. Why do I know this story so well? He happens to be my father. —The End of Competitive Advantage, Rita McGrath.

You don't get a sudden, personal turn like that in business books much. It evoked one of the latent ideas in my head: much of my interest in "business" and "strategy" comes from my dad's all-too-typical career at IBM in the 80s and 90s.

Sometime in the early 80s – or even late 70s? – my dad started working at IBM in Austin on the factory floor, printed circuit boards I believe. He'd tell me that he'd work the late shift, third shift, and at 6am, stop by 7-11 with his buddies to get a six pack and wait in the parking lot of the Poodle Dog bar for it to open at 8.

He moved up to management, and eventually into planning and forecasting, all for hardware. I remember he got really excited in the late 80s when he got a plotter at home so he could work on foils, that is, transparencies. We call these "slides" now: you can still get that battlefield twinkle out of old IBMers' eyes if you say "foils."

Eventually, he lived the dictum of "I've Been Moved" and went up to the Research Triangle for a few years, right before IBM divested his part of the company, selling it to Multek (at least he got to return to Austin).

As you can guess, his job changed from a long-term one, where his company had baseball fields and family fun days (where we have an outdoor mall, The Domain, now), to the usual transient, productivity-harvesting job. He moved over to Polycom eventually, where he spent the rest of his career helping manage planning and shipping, on late-night phone calls to Thailand manufacturers.

In addition to always having computers around – IBM PCs of course! – there was always this thing of how a large tech company evolves and operates. At the time, I don’t think I paid much attention to it, but it’s a handy reference now that I spend most of my time focused on the business of tech.