Straddling the firewall: cloud from 2010 to 2020 (& what to do next)

I gave the ten year anniversary talk at the CloudAustin meetup. Ten years ago, I gave the first talk. Here’s a bit of the essay I wrote out as I was working on the presentation.

I want to go over the last ten years of cloud, since I gave the first talk at this meetup, back in August 2010. At the time, I was wrapping up my stint at RedMonk, though I didn’t know that until a year later. I went to work at Dell in corporate strategy, helping build the software and cloud strategies and businesses. I went back to being an analyst at 451 Research, where I ran the software infrastructure team, and then, thanks to my friend Andrew Shafer, ended up where I am now, at Pivotal, now VMware. I also have three kids. And a dog. And live in Amsterdam.

Staying grounded.

Beware, I still have to pay my bills

Pic: Kadumago, Nov 2019.

You should know that I have biases. I have experiences, ideas, “facts” even that come from that bias. I work at VMware, “VMware Tanzu” to be specific which is VMware’s focus on developers and the enterprise software they write. I’ve always worked on that kind of thing, and often in the interests of “on-premises” IT.

Indeed, at the end, I’m going to tell you that developers are what’s important – in fact, that Tanzu stuff is well positioned for the future that I think should exist.

So, you know…whatever. As I used to say at RedMonk: “disclaimer.”

Computers from an outlet

Back in 2013, when I was at Dell, I went on an analyst tour in New England. Matt Baker and I visited IDC, Gartner, Forrester, and some smaller shops in hotel lobbies and fancy restaurants. These kinds of analyst meetings are a bit dog and pony, but they give you a survey of what’s going on and a chance to tell analysts what you think reality looks like.

Anyhow, we walked into the Forrester office, a building that was all brand new and optimistic. Some high-level person there was way into guitars and, I don’t know, 70’s rock. Along with Forrester-branded ballpoint pens, they gave us The Allman Brothers’ At Fillmore East CDs as thank-you gifts. For real. (Seriously.) Conference room names were all rock-and-roll. They had an electric guitar in the lobby. Forrester: not the golden-buttoned boating jackets of Gartner, or the short-sleeved button-up white shirts of Yankee-frugal IDC.

Being at Dell, we were interested in knowing one thing: when, and even if, and at what rate, will the on-premises market succumb to public cloud. That’s all any existing vendor wanted to know in the past decade. That’s what that decade was all about: exploring the theory that public cloud would take over on-premises IT.

The Forrester people obliged. The room was full of about 8 or 10 analysts, and the Forrester sales rep. When a big account like Dell comes to call, you bring a big posse. Almost before Matt could finish saying the word “cloud,” a great debate emerged between two sides of analysts. There were raised voices, gesticulating, and listing back and forth in $800 springy office chairs. All while Matt and I sort of just sat there listening, like a call-center operator who uses the long waits for people to pick up the phone as a moment of silence and calm.

All IT will be like this outlet here, just a utility, one analyst kept insisting. No, it’ll be more balanced, both on-premises and in cloud over time, another said. I’d never encountered an analyst so adamant about public cloud, esp. in 2013. It seemed like they were about to spring across the table and throttle their analyst opponent. I’m sure time ran out before anyone had a chance to whiteboard a good argument. Matt and I, entertained, exchanged cards, shook hands, and left with our new Allman Brothers album.

Public cloud has been slower to gobble up on-premises IT than fanatics thought it would be, even in the late 2000’s. The on-premises vendors deployed an endless amount of FUD-chaff against moving to public cloud. That fear of the new and unknown slowed things down, to be sure.

But I think more practical matters are what keep on-premises IT alive, even useful. We used to talk about “data gravity” as the thing holding migrations to public cloud back. All that existing data can’t be easily moved to the cloud, and new apps will just be throwing off tons of new data, so that’ll just pull you down more. (I always thought, you know, that you could just go to Costco and buy a few hard-drives, run an xcopy command overnight, and then FedEx the drives to The Cloud. I’m sure there’s some CAP theorem problem there – maybe an Oracle EULA violation?) But as I’ll get to, I think there’s just tech debt gravity – probably 5 to 10 million applications[1] that run the world that just can’t crawl out of on-premises datacenters. These apps are just hard to migrate, and sometimes they work well enough at good enough cost that there’s no business case to move them.

Public cloud grows and grows

But, back to public cloud. If we look at it by revenue, it’s clear that cloud has been successful and is used a lot.

[Being respectful of analysts’ work, the reader is asked to open up Gartner’s Nov 13th press release, entitled “Gartner Forecasts Worldwide Public Cloud Revenue to Grow 17% in 2020” and look at the table there. You can find charts as well. Be sure to add up the SaaS-y categories into just one.]

As with all such charts, this is more a fanciful illustration of our instincts. Charts are a way to turn hunches into numbers, transform the qualitative into the quantitative. You can use an even older forecast (also, see the chart…and be sure to take out “advertising,” obviously) to go all the way back to 2010. As such, don’t get hung up on the exact numbers here.

The direction and general sizing in these lines is what matters. SaaS is estimated at $176 billion in 2020, PaaS at $39.7 billion, and IaaS at $50 billion.

So. Lots of revenue there. There are a few things to draw from this chart – again, hunches that we can gussy up into charts:

  1. The categorization of SaaS, PaaS, and IaaS stuck. This was finalized in some NIST work, and if you can believe it, categorizing things like this drove a lot of discussion. I participated in at least three of those exercises, at RedMonk, Dell, and then at 451 Research. Probably more! There was a lot of talk about “bursting” and “hybrid cloud.” Traditional IT, versus private cloud. Is it “on-prem” or “on-premises” – and what does it say of your moral character if you incorrectly use the first? Whatever. We returned to the simplicity of apps, devs, and ops.
  2. SaaS is sort of “forgotten” now as a cloud thing. It looms so large that we can’t even see it when we narrow our focus on the other two parts of cloud. Us in the dev and ops crowds just think of SaaS as normal now, not part of this wild, new category of “cloud.” Every day, maybe even every hour, we all use SaaS – the same is true for enterprises. Salesforce’s revenue went from $1.3bn in 2010 to $17.1bn in 2020. We now debate Office 365 vs. GMail, never Exchange. SaaS is the huge winner. Though it’s sort of not in my interests, when people talk about cloud and digital transformation, I tell them that the most useful thing they should probably focus on is just going all SaaS. You see this with all the remote working stuff now – all those firewalls and SharePoints on intranets are annoying, and supporting your entire company working from home requires the performance and scaling of a big time SaaS company. SaaS is what’s normal, so we don’t even think about it anymore.
  3. Below that is PaaS. The strange, much maligned layer over the decade. We’re always optimistic about PaaS, as you can see in the newer forecast. It doesn’t seem to deliver on that optimism, at least in the mainstream. I certainly hope it will, and think it should. We’ll see this time. I’ve lived through enough phases of PaaS optimism and re-invention (well, all of them, I suppose) that I’m cautious. To be cynical to be optimistic, as I like to quip: it’s only stupid until it works. I’ll return to the glorious future of PaaS in a bit. But first…
  4. Below that, we have IaaS – what us tech people think of mostly as cloud. This is just a big pool of hardware and networking. Maybe some systems management and security services if you consider that “infrastructure.” I don’t know. Pretty boring. However, if you’re not buying PaaS, this is the core of what you’re buying: just a new datacenter, managed by someone else. “Just” is insulting. That “managed by someone else” is everything for public cloud. You take this raw infrastructure, and you put your stuff on it. Forklift your applications, they say, deploy new applications. This is your “datacenter” in the cloud and the way most people think about “cloud” now-a-days: long rows of blinking lights in dark warehouses, sometimes with rainbow colored pipes.

IT can’t matter fast enough

Let’s get back to the central question of the past decade: when will public cloud eclipse on-premises IT?

First, let’s set aside the nuance of “private cloud” versus “traditional IT” and just think of it all as “on-premises.” This matters a lot because the nature of the vendors and the nature of the work that IT does changes if you build and manage IT on your own, inside the firewall. The technology matters, but the responsibility and costs for running and maintaining it year after year, decade after decade turn into the biggest, er, headache. It’s that debate from the Forrester people: when will IT become that wall outlet that Nicholas Carr predicted long ago? When will IT become a fungible resource that people can shed when all those blinking lights start holding back their business ambitions?

What we were hunting for in the past ten years was a sudden switch over like this, the complete domination of mobile over PCs:

The Dediu Cliff.  Source: “The rise and fall of personal computing,” Jan 2012, Horace Dediu.

This is one of the most brilliant and useful strategy charts you’ll ever see. It shows how you need to look at technology changes, market share changes. Markets are changed by a new entrant that’s solving the same problems for customers, the jobs to be done, but in a different way that’s ignored by the incumbents.[2] This is sort of big “D,” Disruption theory, but more inclusive. Apple isn’t an ankle biter slowly scaling up the legs of Microsoft, they’re a seasoned, monied incumbent, just “re-defining” the market.[3]

Anyhow, what we want is a chart like this for cloud so that we can find when on-premises crests and we need to start focusing on public cloud. Rather, a few years before that so we have plenty of time to invest and shift.  When I did strategy at Dell, Seth Feder kept up this chart for us. He did great work – he was always talking about how good the “R squared” was – I still don’t know what that means, but he seemed happy with it. I wish I could share his charts, but they’re lost to Dell NDAs and shredders. Thankfully, IDC has a good enough proxy, hardware spend on both sides of the firewall:

[The reader is asked to open IDC’s April 2nd, 2020 press release titled “Cloud IT Infrastructure Spending Grew 12.4% in the Fourth Quarter, Bringing Total 2019 Growth into Positive Territory, According to IDC” and contemplate the third chart therein.]

You can’t use this as a perfect guide – it’s just hardware, and, really, can you tell when hardware is used for “private cloud” versus “traditional IT”? And, beyond IaaS, we’d like to see this for the other two aaS’s: applications and developers. If only we had Seth still toiling away on his charts for the past decade.

But, once again, a chart illustrates our hunch: cloud is Hemingway’s bankruptcy thing – gradually, then suddenly – racing towards a Dediu Cliff. We still don’t know when on-premises compute will suddenly drop, but we should expect it…any year now…

…or decade…

…I guess.  

¯\_(ツ)_/¯

Pre-cliff jumpers

Competition in cloud was fierce. Again, I’m leaving out SaaS – not my area, don’t have time or data. But let’s look at infrastructure, IaaS.

[The reader is asked to open up the 2010 IaaS MQ and the 2020 IaaS MQ.]

It’s worth putting these charts side-by-side. They’re Gartner Magic Quadrants, of course. As with all charts, you can hopefully predict me saying, they illustrate our intuitions. What’s magical (yes!) about the MQ’s is that they show a mixture of sentiment, understanding, and actual feature set. You can see that in play here as we figured out what cloud was.

Infamously, the first IaaS MQ in 2010 has Amazon in the lower right, and a bunch of “enterprise grade” IaaS people up and to the right. Most of us snickered at and were confused by this. But, that 2010 list and ranking reflected how people, esp. big corporate buyers, were thinking about what they wanted cloud to be. They wanted it to be like what they knew and were certain they needed, but run by someone else with that capex-to-opex pixie dust people used to obsess so much about.

Over the next ten years, everyone figured out what public cloud actually was: something different, more or less. Cloud was untethered from those “enterprise grade” expectations. In fact, most companies don’t want “enterprise grade” anymore, it’s not good enough. They want “cloud grade.” Everyone wants to “run like Google,” do DevOps and SRE. Enterprise buyers are no longer focused on continuing with what they have: they want something different.

All those missing dots are the vendors who lost out. There were many, many reasons. The most common initial reason was, well, “server hugging.” Just a biased belief in on-premises IT because that’s where all the vendors’ money had always come from. People don’t change much after their initial few decades, enterprises even less.[4]

The most interesting sidebars here are Microsoft and Rackspace. Microsoft shed its Windows focus and embraced the Linux and open source stack of cloud. Rackspace tried a pre-Kubernetes Kubernetes you kids may not remember: OpenStack. I’m hoping one day there’s a real raucous oral account of OpenStack. There’s an amazing history in there that we in the industry could learn from.

But, the real reason for this winnowing is money, pure and simple.

Disruptors need not apply

You have to be large to be a public cloud. You have to spend billions of dollars, every year, for a long time. Charles Fitzgerald has illustrated this over the years:

It costs a lot to save you so much. Source: “Follow the CAPEX: Cloud Table Stakes 2018 Edition,” Charles Fitzgerald, February 2019.

Most of the orange dots from 2010 just didn’t want to do this: spend billions of dollars. I talked with many of them over the past ten years. They just couldn’t wrap their heads around it. We didn’t even know it cost that much, really. Instead, those orange dots fell back on what they knew, trying to differentiate with those “enterprise grade” features. Even if they wanted to and did try, they didn’t have the billions in cash that Amazon, Microsoft, and Google had. Disruption is nice, but an endless cash-gun is better.

I mean, for example, what was Dotcloud going to do in the face of this? IBM tried several times, had all sorts of stuff in its portfolio, even acquiring its way in with SoftLayer – but I think they got distracted by “Watson” and “Smart Cities.” Was Rackspace ever going to have access to that much money? (No. And in fact, they went private for four years to re-work themselves back into a managed service provider, but all “multi-cloud” now.)

There are three public clouds. The rest of us just sell software into that. That’s, more or less, exactly what all the incumbents feared as they kept stacking servers in enterprise datacenters: being at the whim of a handful of cloud providers, just selling adornments.

$80bn in adornments

Source: “Investing City” on Seeking Alpha, originally from Pivotal IPO investor presentation.

While the window for grabbing public cloud profits might have closed, there’s still what you do with all that IaaS, how you migrate your decades of IT to and fro, and what you do with all the “left overs.” There’s plenty of mainframe-like, Microfocus-y and Computer Associates type of revenue to eke out of on-premises, forever.

Let’s look at “developers,” though. That word – developers – means a lot of things to people. What I mean here is the people writing all those applications that organizations (mostly large ones) write and run on their own. When I talk about “developers,” I mean whatever people are in charge of writing and running an enterprise’s (to use that word very purposefully) custom-written software.

Back when Pivotal filed to IPO in March of 2018, we estimated that the market for all of that would be $80.4bn, across PaaS and on-premises.

This brings us back to PaaS. No one says “PaaS” anymore, and the phrase is a bit too narrow. I want to suggest, sort of, that we stop obsessing over that narrow definition, and instead focus on enterprise developers and in-house software. That’s the stuff that will be installed on, running on, and taking advantage of cloud over the next ten years. With that wider scope, an $80bn market doesn’t seem too far-fetched.

And it’s real: organizations desperately want to get good at software. They’ve said this for many years, first fearing robot dogs – Google, Amazon, AirBnB, Tesla…whatever. After years of robot dog FUD, they’ve gotten wiser. Sure, they need to be competitive, but really, modernizing their software is just table stakes now to stay alive and grow.

What’s exciting is that organizations actually believe this now and understand software well enough to know that they need to be good at it.

Source: “Improving Customer Experience And Revenue Starts With The App Portfolio,” Forrester Consulting, commissioned by VMware, March, 2020.

We in IT might finally get what we want sometime soon: people actually asking us to help them. Maybe even valuing what we can deliver when we do software well.

Beyond the blinking cursor

Obsessing over that Dediu Cliff for cloud is important, but no matter when it happens, we’ll still have to actually do something with all those servers, in the public cloud or our own datacenters. We’ve gotten really good at building blinking cursor boxes over the past ten years.

Blinking cursor boxes

IT people do IT people things: developers write code, operations people put together systems. They also have shaky budgets – most organizations are not eager to spend money on IT. This often means IT people are willingly forced to tinker with building their own software and systems rather than purchasing and reusing others.[5] They love building blinking cursor boxes instead of focusing on moving pixels on the screen.

A blinking cursor box is yet another iteration of the basic infrastructure needed to run applications. Applications are, of course, the software that actually moves pixels around the screen: the apps people use to order groceries, approve loan applications, and other thrilling adventures in computing. Applications are the things that actually, like, are useful…the things we should focus on. But, instead, we’re hypnotized by that pulsing line on a blank screen.

Us vendors don’t help the situation. We compete and sell blinking cursor boxes! So we each try to make one, constantly. Often a team of people makes one blinking cursor box, gets upset at the company they work for, grabs a bunch of VC money, and then goes and makes another blinking cursor box. So many blinking cursors. The public clouds are, of course, big blinking cursor boxes. There was a strange time when Eucalyptus tried to fight this trend by doing a sort of Turing Test on Amazon’s blinking cursor. That didn’t go well for the smaller company. Despite that, slowly, slowly us vendors have gotten closer to a blinking cursor layer of abstraction. We’re kind of controlling our predilections here. It seems to be going well.

Month 13: now, the real work can begin.

Over the past ten years, we’ve seen many blinking cursor boxes come and go: OpenStack, Docker, “The Datacenter of the Future,” and now (hopefully not going) Kubernetes. (There was also still virtualization, Windows, Linux, and mainframes. Probably some AS/400s if we dig around deep enough.) Each of these new blinking cursor boxes had fine intentions to improve on the previous blinking cursor boxes. These boxes are also open source, meaning that theoretically IT people in organizations could download the code, build and configure their own blinking cursor boxes, all without having to pay vendors. This sometimes works, but more often than not what I hear of are 12-month-or-more projects to stand up the blinking cursor box du jour…that don’t even manage to get the cursor blinking. The screen is just blank.

In larger organizations, there are usually multiple blinking cursor box programs in place, doubling, even tripling that time and money burn. These efforts often fail, costing both time and millions in staff compensation. An almost worse effect is when one or more of the efforts succeeds, kicking off another year of in-fighting between competing sub-organizations about which blinking cursor box should be the new corporate standard. People seem to accept such large-scale, absurdly wasteful corporate hijinks – they probably have more important things to focus on like global plagues, supply chain issues, or, like, the color palette of their new logo.

As an industry, we have to get over this desire to build new blinking cursor boxes every five or so years, both at vendors and enterprises. At the very least we should collaborate more: that seems to be the case with Kubernetes, finally.

Even in a world where vendors finally standardize on a blinking cursor box, the much more harmful problem is enterprises building and running their own blinking cursor boxes. Think, again, of how many large organizations there are in the world, F500, G2,000 – whatever index you want to use. And think of all the time and effort put in for a year to get a blinking cursor box (now, probably Kubernetes) installed from scratch. Then think of the need in three months to update to a new version; six months at the longest, or you’ll get trapped like people did in the OpenStack years, and next thing you know, you’re running a blinking cursor box from the Victorian era. Then think of the effort to add in new databases and developer frameworks (then new versions of those!), security, and integrations to other services. And so on. It’s a lot of work, duplicated at least 2,000 times, more when you include those organizations that allow themselves to build competing blinking cursor boxes.

Obviously, working for a vendor that sells a blinking cursor box, I’m biased. At the very least, consider the costs over five to ten years of running your own cloud, essentially. Put in the opportunity cost as well: is that time and money you could instead be spending to do something more useful, like moving pixels around on your customer’s screen?

Once you free up resources from building another blinking cursor box, you can (finally) start focusing on modernizing how you do software. Next: one good place to start.

Best practices, do them

As I’ve looked into how organizations manage to improve how they do software over the years I’ve noticed something. It’ll sound uselessly simple when you read it. Organizations that are doing well follow best practices and use good tools. Organizations that are struggling don’t. This is probably why we call them best practices.

Survey after survey of agile development usage will show this, every year. Simple practices and tools like unit testing are widely followed, but adherence to other practices quickly falls off. How many times have you heard people mention a “sit down stand-up meeting,” or say something like “well, we don’t follow all the agile practices we learned in that five day course – we adapted them to fit us”?

I like to use CI/CD usage as a proxy for how closely people are following best practices.

One of the most important tools people have been struggling to use is continuous integration and continuous delivery (CI/CD[6]). The idea that you can automate the drudgery of building and testing software, continuous integration, is obviously good. The ability to deploy software to production every week, if not daily, is vital to staying competitive with new features and getting fast feedback from users on your software’s usefulness.
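
To make that concrete, here’s a minimal sketch of the kind of script a CI/CD pipeline runs on every commit – the commands and the deploy target are just examples, not any particular vendor’s product:

```python
import subprocess
import sys

def run(cmd):
    """Run one pipeline step; any failure stops the whole pipeline."""
    print("-->", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Continuous integration: every commit gets built and tested, automatically.
run(["python", "-m", "pytest", "tests/"])  # the drudgery of testing...
run(["python", "-m", "build"])             # ...and of building, automated

# Continuous delivery: the artifact that passed is always ready to ship;
# continuous deployment goes further and ships it with no human gate.
if "--deploy" in sys.argv:
    run(["cf", "push", "my-app", "-p", "dist/"])  # hypothetical target platform
```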

Very early on, if you’re not doing CI/CD, your strategy to improve how you’re doing software – to progress with your cloud strategy, even – is probably going to halt. It’s important! Despite this, for the past ten plus years, usage has been poor:

Source: State of Agile Surveys, 3rd through 14th, VersionOne/CollabNet/digital.ai. CI/CD not tracked in 5th/2009. Over the years, definitions change, “delivery” and “deployment” are added; but, these numbers are close enough to other surveys to be useful. See more CI/CD surveys: Forrester survey (2019), DZone CD reports (2014, 2015, 2016, 2017, 2019).

Automating builds and tests with continuous integration is clearly easier (or seen as more valuable?) than continuous delivery. And there’s been an encouraging rise in CD use over the past ten years.

Still, these numbers are not good. Again, think of those thousands of large organizations across the world, and then that half of them are not doing CI, and then that 60% of them are not doing CD. This seems ludicrous. Or, you know, great opportunity for improvement…and all that.

Take a look at what you’re doing – or not doing! – in your organization. Then spend time to make sure you’re following best practices if you want to perform well.

Stagnant apps in new clouds

Over the past ten years, many discussions about cloud centered around technologies and private or public cloud. Was the cloud enterprise grade enough? Would it cost less to run on premises? What about all my data? Something-something-compliance. In recent years, I’ve heard less and less of those conversations. What people worry about now is what they already have: thousands and thousands of existing applications that they want to move to cloud stacks.

To start running their business with software, and start innovating how they do their businesses, they need to get much better at software: be like the “tech companies.” However, most organizations (“most all,” even!) are held back by their existing portfolio of software: they need to modernize those thousands and thousands of existing applications.

Modernizing software isn’t particularly attractive or adventurous. You’re not building something new, driving new business, or always working with new technologies. And when you’re done, it can seem like you’ve ended up with exactly the same thing from the outside. However, management is quickly realizing that maintaining the agility of their existing application portfolio is key if they want future business agility.

In one survey, 76% of senior IT leaders said they were too invested in legacy applications to change. This is an embarrassing situation to be in: no one sets out to be trapped by the successes of yesterday. And, indeed, many of these applications and programs were probably delivered on the promise of being agile and adaptable. And now, well, they’re not. They’re holding back improvement and killing off strategic optionality. Indeed, in another survey on Kubernetes usage, 49% of executives said integrating new and existing technology is the biggest impediment to developer productivity.[7]

After ten years…I think we’ve sort of decided on IaaS, on cloud. And once you’ve got your cloud setup, your biggest challenge to fame and glory is modernizing your applications. Otherwise, you’ll just be sucking up all the same, old, stagnating water into your shiny new cloud.


[1] This is not even an estimate, just a napkin figure. AirFrance-KLM told me they’re modernizing over 2,000 applications. Let’s take the so-called Fortune 500, and multiply it out: 500 x 2,000 = 1,000,000. Now, AirFrance-KLM is a big company, but not the biggest by far. A company like JPMC has many more applications. Also, there are governments, militaries, and other large organizations out there beyond the F500. Or you could have started with the Global 2,000. So, let’s assume that there are “millions” of apps out there. (Footnote to footnote: what is an “app”? If you’re asking this, let’s call it a “workload” and see if that satisfies you enough to go back to the main text.)
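
If you want the napkin itself, it’s just this – every input is an assumption, as noted:

```python
# Napkin math, not an estimate: all inputs are assumptions from the footnote.
f500_companies = 500      # could start from the Global 2,000 instead
apps_per_company = 2_000  # AirFrance-KLM's modernization count as a stand-in
print(f500_companies * apps_per_company)  # 1,000,000 -- "millions" of apps
```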

[2] Yes, yes – this isn’t strictly true. Taking out costs can change markets because you’ve freed up so much cash-flow, lowered prices, etc. There’s “globalization.” Regulations can dramatically change things (AT&T, unbundling, abdicating moral responsibility in publishing, etc.). Unexpected, black swans can destroy fragile parts of the market, table-flipping everything. And more what have you’s &co.’s. Thank you for your feedback; we appreciate your business; this call will now disconnect.

[3] For more on how to use this kind of chart in the strategy world, see Rita McGrath’s books, most recently, Seeing Around Corners.

[4] Some years ago, it was popular to cite such studies as “at the current churn rate, about half of S&P 500 companies will be replaced over the next ten years.” I, dear reader, have been guilty of using such weird mental gymnastics, a sort of Sugar Rush grade, neon-track of logic. (Also, there’s private equity, M&A, the 2008 financial collapse, globalism, cranky CEOs, and all manner of other things that change that list regardless of the, well, business acumen of the victims – pouring gasoline onto the scales of the fires of creative destruction.)

[5] Public cloud has been an interesting inroad against this: you have to pay for public cloud, there’s no way around it. Still, people seem to like it more than paying larger, upfront software licensing fees. In public cloud, the initial process is often much more pleasant than a traditional acquisition process. The sales process is signing up without talking to a person, testing out the software, using it for real…all without having to set up servers, networking, or install and upgrade software.

[6] There’s a lot of hair-splitting between “continuous delivery” versus “continuous deployment.” I don’t know. At a certain level of organizational management, the distinction becomes less useful than spending your mental effort on more pressing strategic and management riddles. I think it’s notable that the jargon we use is “CI/CD,” not “CI/CD/CD” (or maybe, more delightfully, CI/CD2?)

[7] I’m fond of citing this survey for another reason that shows how misaligned executive and developer beliefs can be: 29% of developers said that “Access to infrastructure is the biggest impediment to developer productivity”…but only 6% of executives agreed.

🗂 Link: IT departments spend millions tackling performance issues in complex IT

The vast majority of CIOs expect to deploy new technology stacks in the next 12 months. Most CIOs said they are already using or are planning to deploy microservices (88%), containers (86%), serverless computing (85%), PaaS (89%), SaaS (94%), IaaS (91%) and private cloud (95%) in the next 12 months.

CIO responses captured in the 2019 research indicate that lost revenue (49%) and reputational damage (52%) are among the biggest concerns as businesses transform into software businesses and move to the cloud.

Source: IT departments spend millions tackling performance issues in complex IT

Link: Shipping giant Maersk on taking a cloud-first approach to disrupting the competition

“When we moved to the cloud the first time, we cut down the lead time for an environment from 100 days to 85 days. This is self-inflicted lead time… processes that are keeping you from moving faster,” he said.

“We also had 30 people involved in a classic delivery, and we figured the cost was around €40,000 to provision an environment, in work time, handovers and meetings and what not.”

Source: Shipping giant Maersk on taking a cloud-first approach to disrupting the competition

Platform as a Product talk

Here’s a recording of one of my talks. It’s on what the operations team does when running in a platform, DevOps-y, whatever style:

Developers don’t need “services” from ops, they need products: continuously innovated platforms that evolve weekly. Once ops toil is removed, ops can focus on their customers’ – development – needs. Using stories & tactics from the real world, this talk helps launch a platform-as-a-product strategy.

And:

Most ops groups can’t give developers what they need. Ops is limited by a traditional service delivery mindset and tools. Stability & reliability are now table-stakes when you’re releasing software daily. What developers need now from ops is innovation. Operations rarely takes this innovation-driven, product approach to providing services, & instead focuses on delivering to specification & limiting SLAs. As with development, ops creates value with continuous operations, product managing their platforms and releasing frequently.

This talk covers how ops groups are transforming from a service delivery mindset to a platform-as-a-product approach. With examples from Discover Financial Services, Rabobank, the US Air Force, & others, the talk covers the concept, technologies & tools commonly used, & the ops tactics needed to kick-off a platform-as-a-product strategy.

Enjoy!

Link: Cloud Foundry Cult

Owen covers CF Summit Basel:

“The users we spoke with didn’t just see it as a PaaS – it was the underlying philosophy of application delivery and management upon which future developments would be based. The Foundation claims Cloud Foundry saves, on average, 10 weeks of development time and $100,000 per app development cycle. In fact, in its own survey, 92% of users cite cross-platform flexibility as important. If these panelists are gaining such benefits, it’s easy to understand why they are so enamored with it.”
Original source: Cloud Foundry Cult

Link: WSO2: Our 2017 Results and 2018 Plan

2% profit margin is much better than no- or negative-percent.

“In 2017, we will exit our Annualized Recurring Revenue (ARR) between $24.5 — $25.5M, a growth of 52%, up from 46% growth the previous year. Our gross margin for the recurring business is 88%, and will increase in coming years. In 2017, we will turn our first profit with $603K EBITDA and generate $2.7M cash from operations.”
Original source: WSO2: Our 2017 Results and 2018 Plan

Link: Worldwide Public Cloud Services Spending Forecast to Reach $160 Billion This Year, According to IDC

Includes an interesting chart that lists the types of services/features (like data management and appdev platforms) that compose vendor revenue. Plus geographic and vertical rankings. But, just a press release.
Original source: Worldwide Public Cloud Services Spending Forecast to Reach $160 Billion This Year, According to IDC

2017 Cloud Foundry Application Runtime Survey – Highlights

There’s a new survey out from the Cloud Foundry Foundation, looking at the users of Cloud Foundry. Here’s some highlights and notes:

  • Another ClearPath joint, n=735.
  • It’s important to keep in mind that this covers all distros of Cloud Foundry, including open source (no vendor involved).
  • “The percentage of user respondents who require over three months per app drops from 51 percent to 18 percent after deploying Cloud Foundry Application Runtime”
  • “…while the percentage of user respondents who require less than a week climbs from 16 percent to 46 percent.”
  • “Nearly half (49 percent) of Cloud Foundry Application Runtime users are large enterprises ($1+ billion annual revenue).”
  • This chart is hard to read, but it shows a reduction in time to deploy across various time periods.
  • Uptake is early, but there are definitely mature users: “A plurality of Cloud Foundry Application Runtime users (61 percent) describe their deployments as somewhere in the early stages—trial, PoC, evaluation, or a partial integration into specific business units. Meanwhile, 39 percent have deployed Cloud Foundry Application Runtime more broadly across their company, from total integration in specific business groups to company-wide deployment.”
  • “Comcast, for example has more than 1500 developers using Cloud Foundry Application Runtime daily. Home Depot reports more than 2500 developers.”
  • “Comcast has seen between 50 percent and 75 percent improvement in productivity.”
  • “Half of Cloud Foundry Application Runtime users are currently using containers, such as Docker or rkt, with another 35 percent evaluating or deploying containers.”
  • Container management – there’s a wide variety of tools that people use for container orchestration, including DIY (14%). There’s a lot of interest in having CF do it: “Nearly three-quarters (71 percent) of Cloud Foundry Application Runtime users currently using or evaluating containers are interested in adding container orchestration and management to their Cloud Foundry Application Runtime environment.” Hence, validating the Cloud Foundry Container Runtime.
  • Of course, the surveyed are already CF users, so they’re biased/driven by what they know.
  • Almost half of respondents say that getting started with CF is difficult. But people end up liking it: “An overwhelming majority of users (83 percent) would recommend Cloud Foundry Application Runtime to a colleague, including 60 percent who would do so strongly.”
  • “As more companies roll out Cloud Foundry Application Runtime more broadly, the footprint continues to grow. Currently, 46 percent of users have more than 10 apps deployed on Cloud Foundry Application Runtime, including 18 percent with over 100 (and eight percent with over 500).” 4% have over 1,000 apps.
  • CF’s uses: “The primary use is for microservices (54 percent), followed by websites (38 percent), internal business applications (31 percent), Software-as-a-Service (SaaS) (27 percent) and legacy software (eight percent).”
  • Validating multi-cloud: “60 percent say this is very important, and another 30 percent describe it as somewhat important.” Meanwhile, 53% are using more than one type of IaaS.

The news from Docker-land, plus, the money being fought over – Notebook

With DockerCon this week, there’s no end of Docker quotables and items. Here’s my collection:

General momentum

Once landed in an account, Docker usage grows, their CEO says:

There has also been expansion within customers, with organizations that start with Docker expanding their usage on average by five times within six months

Way back in 2015, the (now annual?) DataDog study of Docker usage among their customers said that 2/3 of companies that try Docker adopt it. Which is all to say: once it gets in, it spreads.

Moby

A toolkit for putting together Docker stacks:

In essence, Moby is the build system that creates Docker Community Edition, which is akin to Fedora, and Docker Enterprise is derived from Moby and is akin to Red Hat Enterprise Linux. Link

People got all freaked out. I’d even say “freaked the fuck out.” Competitors, of course, gloated, if only in silence. Criticism of handling the announcement aside (ideally, you wouldn’t like to kick up a stink), I feel like it was more like a tempest in a teapot.

Docker momentum/penetration and types of applications/workloads

Global 2000 customers have somewhere on the order of thousands to tens of thousands of applications, and across these major firms, less than 5 percent of the applications have been containerized so far. While somewhere between 5 percent and 10 percent of the applications that are being containerized are net-new, microservices-style applications that everyone is talking about all the time, the other 90 percent to 95 percent are just lifting and shifting legacy applications from bare metal or virtual machines to containers. Link

VMware threat…or just legacy gobbling?

Docker bounces back and forth between “replacement for VMware” and “a different thing, so don’t worry about VMware.” In this round of Docker news, there’s been some strong pull towards the “replacement for VMware” camp. To be fair, it’s more like doing both:

In general, says Johnston, customers who move from bare metal or VMs to Docker containers can provision, scale, and deploy applications up to 75 percent faster, and those moving from bare metal to containers can save 50 percent on compute and those who are moving from VMs will save around 25 percent. Link

This might also come from the obvious move to start gobbling up legacy (more accurately “existing”) applications. Here, Docker had two customer references:

Northern Trust, a leading international financial services company, experienced  deployment times that were 4X faster and noted a 2X improvement in infrastructure utilization

And, Microsoft IT:

Microsoft is not only a partner in this program; their IT organization is also a beta customer.  Microsoft IT increased app density 4X with zero impact to performance and were able to reduce their infrastructure costs by a third.

There was also a story of Visa using Docker:

Kocherlakota said Visa is aiming to move as many workloads as it can to the container model to help improve overall efficiency.

See more on this legacy migration stuff and the program with Avanade, Cisco, HP, and Microsoft from Docker’s Scott Johnson.

Major vendors

Other tech companies are often cautious about working with Docker. They’re not really certain about how it helps or threatens their position in the IT stack and, therefore, their ability to sell higher profit margin products and services. No one wants to become the x86 manufacturer of the cloud (read: low margin, commodity).

I’ve noticed this cautiousness slightly melting as more and more vendors are at least putting their stuff in Docker images and, on the public cloud front, supporting the use of Docker. My company, Pivotal, ingests Docker images.

A brief whack at why Microsoft cares, from Christopher Tozzi:

Although there remains work to do to get Docker on Windows ready for prime time, the platform will be important in helping Windows Server stay as nimble as Linux environments in hosting the workloads of the future…. Microsoft’s interest in Docker may seem strange. Microsoft already offers traditional virtual machine products, most notably Hyper-V. In some respects, Docker containers compete with virtual machine platforms…. But that’s not necessarily the case. Depending on how they’re used, containers can complement virtual machines, rather than replace them. If you use virtual machines to host the environment in which Docker runs, your Docker environment becomes more scalable and portable than it would be if it ran on bare metal. That’s likely the type of use case Microsoft envisions for containers on Windows.

More from Nick Martin on Microsoft and Docker.

Oracle bundling middleware in Docker containers:

Oracle becomes the latest enterprise IT vendor to jump on the Docker container bandwagon as it seeks to expand its reach in the public cloud market. Among the container-based application, middleware and development tools made available on the container platform are Oracle’s MySQL database and its WebLogic server. Those tools are in addition to the more than 100 images of Oracle products already available on Docker Hub, its cloud-based image registry.

So, what’s going on here? Staking a claim on The New Stack

I’m often asked to explain all the various cloud stacks, to help Pivotal buyers sort out what CaaS, PaaS, cloud-native, and “cloud strategy” mean. They’re trying to figure out their planning for building out new IT, for “doing DevOps.” It’s a mess out there w/r/t figuring all this out if you’re not a vendor or analyst who’s steeped in this shoggoth every day.

In all the Docker, container, and cloud-native wars, the revenue battle for vendors is mostly about two things:

  1. The pool of money in simply migrating the VMware workload to a new, more efficient layer (hence the ongoing attention to “the VMware threat” that Docker poses). I’m not sure how big this market is because, as a disruptive shift (cf. Linux vs. UNIX vs. Windows vs. z), part of it is reducing the overall spend through lower prices and more efficient usage. But, the existing virtualization market is best described as “fucking huge.”
  2. Fighting over who “owns” (and therefore collects the most profit from) the stack that companies are using to build and run their software. By my estimate, this is something like a $20-25bn market in the future. You can see a Spanish Civil War-like precursor going on in the Java application server market; it’s spreading to a “World War” with respect to all custom software stacks.

On that second point, here’s my latest attempt to describe how things are shaking out category/definition wise:

Of all the SPI cloud categories, PaaS is the most problematic place as all us vendors hate the PaaS term and are trying to re-define what it means. I would break PaaS into two categories currently: (1.) container orchestration, and, (2.) cloud platform.

Container orchestration takes an IaaS and manages the installation and configuration of container images on your new cloud. By “images” here, I mean that you’ve chosen to put your software (probably custom written software, not packaged software) into containers (or the delegated way we do it with buildpacks in CF), specified how all the different nodes are wired together with all the ACLs and configuration, and then given it over to the orchestration software to deploy those containers, set the configuration, and do the ongoing health-checks/remediation.

Ideally, the orchestration platform should also have “day 2” tools to help you monitor and manage (“fix”) problems that happen in production. I assume things like Kubernetes, the Docker/Moby constellation of things, Mesosphere, etc. fit here.

People are obsessed with container orchestration now and it’s pretty much all anyone talks about. I think all this is what’s becoming known as “CaaS” – Containers as a Service.
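
As a concrete (if simplified) example of that hand-off, here’s a sketch using the Kubernetes Python client. It assumes you already have a cluster, a kubeconfig, and a container image built and pushed somewhere; all the names are made up:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes an existing cluster and ~/.kube/config
apps = client.AppsV1Api()

# Describe the desired state: three copies of a (hypothetical) app image.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="example/web:1.2.3")
            ]),
        ),
    ),
)

# Hand it over: from here on, the orchestrator schedules the containers and
# does the ongoing health-checking and remediation to keep three copies up.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```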

(On this next section, I’m extremely monetarily biased, of course:) A cloud platform either has or depends on an orchestration layer, but adds in integrated middleware, ALM tools (from basics like “cf push”), and an overall programming and deployment model with all the tools and enforcements. Heroku is the classic example here in public cloud, and now Cloud Foundry (CF) has taken over this model in public and private cloud, the second of which (it seems) is where most of the usage and money is, at least in the enterprise space. I’d argue that CF is the enterprise market-leader (by revenue at least, but increasingly by penetration in the F500 – while Pivotal has impressive numbers, throw in the other CF distros and it’s even larger, no doubt); at the very least, it has “the highest growth and in enterprise production usage.” That all depends how you slice it, and of course my slicing favors me.

A cloud platform “pulls together” everything into a fully working “cloud” that deploys and provisions the servers, builds/maintains/deploys the containers, takes care of your networking configuration and concerns (inc. firewalls, etc.), and configs/manages all the middleware needed (e.g. “I want a database” means you just ask for it, instead of having to configure it and make container images of it and specify how it all works together).

The end goal of a cloud platform is the original end-goal of a PaaS: developers don’t have to “setup” any of the infrastructure or, really, middleware (databases, queues, etc.) that they use: they just write the “business logic” of their applications.
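
To make that concrete, here’s a minimal sketch of what this looks like from the developer’s side on a Cloud Foundry-style platform, assuming a database has already been created and bound (the service label and credential keys below are examples; they vary by broker):

```python
import json
import os

# The developer asked the platform for a database (e.g. `cf create-service`
# then `cf bind-service`); the platform injects the credentials into the
# VCAP_SERVICES environment variable. No servers were set up by hand.
vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))

# "p.mysql" and the "uri" key are examples; names vary by service broker.
instances = vcap.get("p.mysql", [])
if instances:
    db_uri = instances[0]["credentials"]["uri"]  # hand this to your DB driver
```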

All this standardization is technically “restrictive” (developers can’t just install anything they download off the Internet, it has to be integrated into the platform). This is why we often call this model “opinionated,” but it follows the same contract/promises model that Google SREs follow: we promise we can support your applications in production if you use only the things we support, otherwise it’s all on you.

However, the benefit of such opinions is a huge jump in productivity, as we see at all our customers: one Pivotal customer manages 1,000+ applications (all angled toward very frequent, DevOps-style releases for fast feedback loops and all that small batch stuff) with just 4 PCF operations staff, etc.

Our DIY white paper makes the case that snow-flaking this all out is a bad idea. At the very least, if you build your own platform, you should try to just have one used organization-wide.

In comparing CaaS and cloud platform, the key distinction to me is that a cloud platform bundles and integrates together all your middleware and “services” frameworks. For example, if you want to do microservices with all the bulk-heads and such, that functionality should be built into the cloud platform – you shouldn’t have to go read up on how to set most of that up. PCF, of course, has Spring Cloud and more for that. All of the systems management tools (things used in production to detect and fix problems) should also be built in, or the cloud platform should be instrumented so deeply that third party tools can do the managing as well.

Now, these two categories are likely to converge, and then the discussion will just be which cloud platforms are more featureful and better. It’ll be like battling Java application servers.

I haven’t made one of my own “burger” stacks of all this in a long time, but I think (again, highly biased) the ones we use for PCF are pretty good.

More

In case you don’t know, working at Pivotal, I obviously have a stake in how all this turns out, so I’m biased on multiple angles of the above whether I want to be or not. 

Microsoft buys Deis, deeper into Kubernetes & $1.1bn container market – Notebook

A round-up of the news and some context around Microsoft burrowing down further into Kubernetes-land by acquiring Deis:

The deal & market

  • Microsoft: “Deis gives developers the means to vastly improve application agility, efficiency and reliability through their Kubernetes container management technologies…. We expect Deis’ technology to make it even easier for customers to work with our existing container portfolio including Linux and Windows Server Containers, Hyper-V Containers and Azure Container Service, no matter what tools they choose to use.”
  • Deis: “We look forward to making Azure the best place to run containerized workloads.”
  • Deis is/was part of EngineYard, right? – Notable that EngineYard (on April 10th, 2017, day of announcement) doesn’t mention it on their blog, or press release list. And that Deis and Microsoft don’t really either. See 451’s Jay Lyman’s coverage of that deal in 2015.
  • No deal-size was disclosed, of course, but Deis was small and I’m guessing it didn’t fit into EngineYard’s overall strategy, or what (little?) cash they got was a nice-to-have versus the synergies of keeping Deis.
  • Containers are rising in usage, as 451’s Donnie said: “Our latest data says production use of containers has doubled from 10.2% to 22.5% of orgs between Q1 and Q3 2015. Amazing.”
  • 451’s January 2016 container market TAMs and forecast:


The technology: not so much PaaS anymore, but Kubernetes management

Deis stack

Microsoft likes Kubernetes

  • Seems like Microsoft has gone all k8s-crazy. So this is adding k8s support and some cloud-native services/middleware (package mgmt, routing, etc.) to Azure?
  • Back in July of 2016, Microsoft hired a k8s big-wheel (and other, “small wheels,” I’d assume), so they’re obviously into the thing…or at least the thinking behind the thing. This leaves, once again, Amazon as the last major cloud hold-out on k8s.
  • That said, I think Microsoft’s new thing is to like everything that layers on-top, below, or around them. As long as you’re in every deal, you make a lot of money even if you’re not all of every deal. It’s pretty hard, now, of course, to compete with the big clouds.
  • Or, put another way: “Satya is like the Pope Francis of software,” says Alex Polvi, founder and CEO of CoreOS, a company that plays in the same area as Deis. “He took this old institution and made it cool again.”

Misc.

How JPMC is making IT more innovative with PaaS, public and private


A good, pretty long overview of JPMorgan Chase’s plans for doing cloud with a PaaS focus. Some highlights.

More than just private-IaaS and DIY-platforms:

Like most large U.S. banks, JPMorgan Chase has had some version of a private cloud for years, with virtualized servers, storage and networks that can be shared in a flexible way throughout the organization.

The bank is upgrading its private cloud to “platform as a service” — in other words, the cloud service will manage the infrastructure (servers, storage, and networks), so that developers don’t have to worry about that stuff.

On the multi-/hybrid-cloud thing:

By the second half of 2017, the bank plans to run proprietary applications on the public cloud. At the same time, it’s building a new, modern internal cloud, code-named Gaia.

While “hybrid-cloud” has been tedious vendor-marketing-drivel over the past ten years, pretty much all of the large organizations I work with at Pivotal have exactly this approach. Public, private, whatever: we want to do it all.

Shifting their emphasis to innovation:

“We aren’t looking to decrease the amount of money the firm is spending on technology. We’re looking to change the mix between run-the-bank costs versus innovation investment,” he said. “We’ve got to continue to be really aggressive in reducing the run-the bank costs and do it in a very thoughtful way to maintain the existing technology base in the most efficient way possible.” …Dollars saved by using lower-cost cloud infrastructure and platforms will be reinvested in technology, he said.

On appreciating the scale of “large organizations” that drive their very real challenges with adopting new ways of running IT:

The bank has 43,000 employees in IT; almost 19,000 are developers.

Good luck having the “we have no process by design” process with that setup.

On security, there’s a nice, almost syllogistic re-framing of “cloud security” here:

For years, banks have worried about using the public cloud out of security concerns and fears of what their regulators will say. Ever since the 2013 Target data breach, in which hackers stole card information from 40 million customers by breaking into the computers of an air conditioning company Target used, regulators have strongly urged banks to carefully vet and monitor all third parties, with a specific focus on security.

“We’re spending a significant amount of time to ensure that any applications we choose to run on a public cloud will have the same level of security and controls as those run internally,” Deasy said.

Most notable corporate security breaches over the years have involved on-premises IT (like the HVAC example above). The point is not to make sure that “cloud is as secure as [all that on-prem IT that’s been the source of most security problems in the past],” but to make sure that all IT has a rigorous approach to security. “Cloud” isn’t the security problem; doing a shitty job at security is the security problem.

Update: or it could be 30,000.

Source: Unexpected Champion of Public Clouds: JPMorgan CIO Dana Deasy, Penny Crosman, American Banker

SUSE to Acquire HPE’s OpenStack, Cloud Foundry Portfolio, Boost Kubernetes Investment, TheNewStack

“We see PaaS as a strategic component of our software-defined infrastructure and application platform strategy,” stated SUSE President of Strategy, Alliances and Marketing Michael Miller, in a note to The New Stack, “and Cloud Foundry as the open source project and technology that brings together the best innovation and industry collaboration. We want to leverage that innovation for the benefit of our customers, and we have a vision for the convergence of CaaS technologies [in SUSE’s case, Containers as a service] like Docker and Kubernetes and PaaS technologies like Cloud Foundry that we think will address the real-world needs of our customers and partners. We will now work with the Cloud Foundry community to develop that vision.”

http://thenewstack.io/suse-add-hpes-openstack-cloud-foundry-portfolio-boost-kubernetes-investment/

026: SpringOne Platform Preview, Pokémon Go, will Azure win against AWS? (Pivotal Conversations)


025: .NET and Beyond 12 Factors with Kevin Hoffman (Pivotal Conversations)

We’ve seen a goodly spate of news in the container space recently which we cover in the episode. In the second half, we talk with Kevin Hoffman about the .NET world, Steel Toe, and his book, Beyond the Twelve-Factor App. A recent survey from the Cloud Foundry Foundation is widening the framing around container management, adding in the use of Platform-as-a-Service into the usual container orchestration mix. The survey also shows some interesting results around adoption, e.g., managing containers in production ends up being more difficult than people predict during evaluations. Also since our last episode, DockerCon brought a bevy of announcements in the container ecosystem which we cover briefly. And highly relevant to our guest, Kevin Hoffman, .NET Core 1.0 was officially released, as open source. In the second half we talk about the recent history of .NET and how it’s being used to create microservices. We also talk about the three extra “factors” Kevin’s book adds to the 12 factor app and typical experiences when migrating to 12 factor apps.

Full show notes: http://pivotal.io/podcast. Feeds, archives, etc.: https://soundcloud.com/pivotalconversations


Download the episode, check it out in iTunes, subscribe to RSS, or check it out in SoundCloud.

Link: IDC: Federal government seeing cloud spending push

“In addition, the government plans to increase PaaS spending from $227.1 million in FY15 to $231.3 million [in FY16].”

We’re still in a phase where categorization causes weird slices of spend like this, but there you have it. More figures on “cloud” spending in the piece.

Source: IDC: Federal government seeing cloud spending push

The Problem with PaaS Market-sizing

Figuring out the market for PaaS has always been difficult. At the moment, I tend to estimate it at $20-25bn sometime in the future (5-10 years from now?) based on the model of converting the existing middleware and application development market. Sizing this market has been something of an annual bug-bear for me across my time at Dell doing cloud strategy, at 451 Research covering cloud, and now at Pivotal.

A bias against private PaaS

This number is in contrast to the numbers you usually see in the single digit billions from analysts. Most analysts think of PaaS only as public PaaS, tracking just Force.com, Heroku, and parts of AWS, Azure, and Google. This is mostly due, I think, to historical reasons: several years ago “private cloud” was seen as goofy and made-up, and I’ve found that many analysts still view it as such. Thus, their models started off being just public PaaS and have largely remained so.

I was once a “public cloud bigot” myself, but having worked more closely with large organizations over the past five years, I now see that much of the spending on PaaS is on private PaaS. Indeed, if you look at the history of Pivotal Cloud Foundry, we didn’t start making major money until we gave customers what they wanted to buy: a private PaaS platform. The current product/market fit for PaaS in large organizations, then, seems to be private PaaS.

(Of course, I’d suggest a wording change: when you end-up running your own PaaS you actually end-up running your own cloud and, thus, end up with a cloud platform.)

How much do you have budgeted?

With this premise – that people want private PaaS – I then look at existing middleware and application development market-sizes. Recently, I’ve collected some figures for that:

  • IDC’s Application Development forecast puts the application development market (which includes ALM tools and platforms) at $24bn in 2015, growing to $30bn in 2019. The commentary notes that the influence of PaaS will drive much growth here.
  • Recently from Ovum: “Ovum forecasts the global spend on middleware software is expected to grow at a compound annual growth rate (CAGR) of 8.8 percent between 2014 and 2019, amounting to $US22.8 billion by end of 2019.”
  • And there’s my old pull from a Goldman Sachs report that pulled from Gartner, where middleware is $24bn in 2015 (that’s from a Dec 2014 forecast).

When dealing with large numbers like this and so much speculation, I prefer ranges. Thus, the PaaS TAM I tend to use now-a-days is something like “it’s going after a $20-25bn market, you know, over the next 5 to 10 years.” That is, the pot of current money PaaS is looking to convert is somewhere in that range. That’s the amount of money organizations are currently willing to spend on this type of thing (middleware and application development), so it’s a good estimate of how much they’ll spend on a new type of this thing (PaaS) to help solve the same problems.
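
Spelled out, the model is nothing fancier than this – the conversion rates here are my hand-wavy assumptions, not analyst figures:

```python
# Rough TAM model from the figures above. The conversion-rate range is an
# assumption (how much of existing spend PaaS could convert), not a forecast.
appdev = 30.0      # $bn, IDC application development forecast for 2019
middleware = 22.8  # $bn, Ovum middleware forecast for 2019

pool = appdev + middleware  # ~$53bn of existing, addressable spend
low, high = 0.40, 0.50      # assumed conversion over 5-10 years

print(f"PaaS TAM: ${pool * low:.0f}-{pool * high:.0f}bn")  # -> $21-26bn
```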

Things get slightly dicey depending on including databases, ALM tools, and the underlying virtualization and infrastructure software: some PaaSes include some, none, or all of these in their products. Databases are a huge market (~$40bn), as is virtualization (~$4.5bn). The other ancillary buckets are pretty small, relatively. I don’t think “PaaS” eats too much database, but probably some “virtualization.”

So, if you accept that PaaS is both public and private PaaS and that it’s going after the middleware and appdev market, it’s a lot more than a few billion dollars.

(Ironic-clipart from my favorite source, geralt.)