451’s container orchestration usage survey – Notebook


As part of CoreOS’s conference this week, 451 put out a sponsored study on container orchestration. It’s been much cited and is free, so it’s worth taking a look. Here’s my highlights and notes:

  • Leadgen yourself to CoreOS to get a copy of the report.
  • This report is really more of a “container orchestration usage” report than much about “hybrid cloud.”
  • Demographics:
    • “We surveyed 201 enterprise IT decision-makers in April and May 2017. This was not a survey of developers; rather, we received responses from those in C-level and director-level positions, including CISO, CTO, CIO, director of IT, IT Ops and DevOps, and VPs and managers of IT.”
    • All from the US
    • “All of our survey respondents came from organizations using application containers, and all were familiar with their organization’s use of containers.” – This survey, then, tells you what people who’re already using containers are doing, not what the entire market is thinking and planning on.
    • “A significant slice of the survey respondents represented large enterprises.”
  • Organizations are hoping to use containers for “[a] ‘leapfrog’ effect, whereby containers are viewed as a way to skip adoption of other technologies, was tested, and a majority of respondents think Kubernetes and other container management and orchestration software is sufficient to replace both private clouds and PaaS.”
  • Obviously I’m biased, being at Pivotal, but the question here is “to do what?” As we like to say around here, you’re going to end-up with a platform. People need a “platform” on-top of that raw IaaS, and as things like Istio show (not to mention Pivotal’s ongoing momentum), the lower levels aren’t cutting the mustard.
  • There’s an ongoing semantic argument about what “PaaS” means to be mindful of, as well: in contexts like these, the term is often taken to mean “that old stuff, before, like 2009.” At the very least, as with Gartner’s PaaS Magic Quadrant, the phrase often means “only in the public cloud.” Again, the point is: if you’re developing and running software you need an application development, middleware, and services platform. Call it whatever you like, but make sure you have it. It’s highly likely that these “whatever you want to call ‘PaaS’ PaaSes” will run on-top of and with container orchestration layers, for example, as Cloud Foundry does and is doing.
  • That said, it’s not uncommon for me to encounter people in organizations who really do have a “just the containers, and maybe some Kubernetes” mind-set in the planning phase of their cloud-native stuff. Of course, they frequently end-up needing more.
  • Back to the survey: keeping in mind that all respondents were already using containers (or at least committed to doing so, I think), ~27% had “initial” production container use, ~25% of respondents had “broad” containers in production. So, if you were being happy-path, you’d say “over half of respondents have containers in production.”
  • In a broader survey (where, presumably, not every enterprise was already using containers), of 300+ enterprises, production container use was: 19% in initial production, 8% were in broad production implementation.
  • Nonetheless, 451 has been tracking steady, high growth in container usage for the past few years, putting the container market at $1.1bn in 2017, growing to $2.7bn by 2020.
  • As the report says, it’s more interesting to see what benefits users actually find once they’re using the technology. Their original desires often turn out to be just puppy-love notions after actual usage:

  • Interesting note on lock-in: “Given that avoiding vendor lock-in is generally a priority for organizations, it might seem surprising that it was not ranked higher as an advantage since much of the container software used today is open source… However, our respondents for this study were users of containers, and may have assumed that the technology would be open source and, thus, lock-in less of a concern.” (There’s a whole separate report from Gartner on lock-in that I’ll take a look at, and, of course, some 140 character level analysis.)
  • On marketshare, rated by usage, not revenue:

  • On that note, it’s easy to misread the widely quoted finding of “[n]early three-quarters (71 percent) of respondents indicated they are using Kubernetes” as meaning only Kubernetes. Actually, people are using many of them at once. The report clarifies this: “The fact that almost 75% of organizations reported using Kubernetes while the same group also reported significant use of other container management and orchestration software is evidence of a mixed market.”

As one last piece of context, one of the more recent Gartner surveys for container usage puts usage at around 18%, with 4% of that being “significant production use”:


Of course, looking at more specialized slices of the market finds higher usage.

This early in the container market, it’s good to read surveys carefully: sample sizes will be small and selective, and most respondents will have used containers for only a short while. But there’s good stuff in this survey; it’s definitely worth looking at and using.

IT’s usefulness is improving, but there’s plenty of room to fix the meatware, Surveys – Highlights

It’s another survey about business/IT alignment. Who knows how accurate these leadgen PDFs are, but why not? This one is of “646 CIOs and other IT leaders and 200 line of business leaders.” Some summaries from Minda Zetlin:

When LOB leaders were asked about the role their companies’ CIOs play, 41 percent said the CIO is a strategic advisor who identifies business needs and opportunities and proposes technology to address them. Another 22 percent said the CIO is a consultant who provides advice about technology and service providers when asked.

But 10 percent said their CIO was a “roadblock” who raises so many obstacles and objections to new technology that projects are difficult to complete. And another 9 percent said the CIO was a “rogue player,” with IT making technology decisions on its own, and creating visibility and transparency challenges.

Meanwhile, 36 percent of LOB leaders and 31 percent of IT leaders believe other departments “see IT as an obstacle.” And 58 percent of IT leaders but only 13 percent of LOB leaders agreed with the statement, “IT gets scapegoated by other departments when they miss their own goals.”

This seems better than the usual (kind of out of date) scare chart I used to use, from a multi-year Cutter survey:

There’s still, as ever, plenty of room to improve business/IT alignment.

Speaking of that, also in that IDG/CIO Magazine survey, there’s a weird mismatch between the perception of The Business and IT about what IT does:

What does The Business want anyway?

Meanwhile, Vinnie quotes a Gartner survey of 388 CEOs:

  • Almost twice as many CEOs are intent on building up in-house technology and digital capabilities as those plan on outsourcing it (57 percent and 29 percent, respectively).
  • Forty-seven percent of CEOs are directed by their board of directors to make rapid progress in digital business transformation, and 56 percent said that their digital improvements have already delivered profits.
  • 33 percent of CEOs measure digital revenue.

Point being: The Business wants IT to matter and be core to how their organizations evolve. They want programmable businesses. Here’s some examples from another summary of that Gartner survey:

Although a significant number of CEOs still mention eCommerce, more of them align new IT infrastructure investments to advanced commercial activities – such as digital product and service innovation, exploring the Internet of Things (IoT), or adopting digital platforms and associated supplier ecosystems.

According to the Gartner assessment, some CEOs have already advanced their digital business agenda – 20 percent of CEOs are now taking a digital-first approach to business development. “This might mean, for example, creating the first version of a new business process or in the form of a mobile app,” said Mr. Raskino.

Furthermore, 22 percent are applying digital business technologies to their traditional processes. That’s where the product, service and business models are being changed, and the new digital capabilities that support those are becoming core competencies.

There’s demand there, the final result of “the consumerization of enterprise IT,” as we used to crow about. IT needs to catch-up on its abilities to do more than “just keep the lights on” or there’ll be a donkey apocalypse out there.

You see people like Comcast doing this catching-up very rapidly. The good news is that the software and hardware is easy. It’s the meatware that’s the problem.

Link

Making mainframe applications more agile, Gartner – Highlights

In a report giving advice to mainframe folks looking to be more Agile, Gartner’s Dale Vecchio and Bill Swanton give some pretty good advice for anyone looking to change how they do software.

Here’s some highlights from the report, entitled “Agile Development and Mainframe Legacy Systems – Something’s Got to Give.”

Chunking up changes:

  1. Application changes must be smaller.
  2. Automation across the life cycle is critical to being successful.
  3. A regular and positive relationship must exist between the owner of the application and the developers of the changes.

Also:

This kind of effort may seem insurmountable for a large legacy portfolio. However, an organization doesn’t have to attack the entire portfolio. Determine where the primary value can be achieved and focus there. Which areas of the portfolio are most impacted by business requests? Target the areas with the most value.

An example of possible change:

About 10 years ago, a large European bank rebuilt its core banking system on the mainframe using COBOL. It now does agile development for both mainframe COBOL and “channel” Java layers of the system. The bank does not consider that it has achieved DevOps for the mainframe, as it is only able to maintain a cadence of monthly releases. Even that release rate required a significant investment in testing and other automation. Fortunately, most new work happens exclusively in the Java layers, without needing to make changes to the COBOL core system. Therefore, the bank maintains a faster cadence for most releases, and only major changes that require core updates need to fall in line with the slower monthly cadence for the mainframe. The key to making agile work for the mainframe at the bank is embracing the agile practices that have the greatest impact on effective delivery within the monthly cadence, including test-driven development and smaller modules with fewer dependencies.

It seems impossible, but you should try:

Improving the state of a decades-old system is often seen as a fool’s errand. It provides no real business value and introduces great risk. Many mainframe organizations Gartner speaks to are not comfortable doing this much invasive change and believing that it can ensure functional equivalence when complete! Restructuring the existing portfolio, eliminating dead code and consolidating redundant code are further incremental steps that can be done over time. Each application team needs to improve the portfolio that it is responsible for in order to ensure speed and success in the future. Moving to a services-based or API structure may also enable changes to be done effectively and quickly over time. Some level of investment to evolve the portfolio to a more streamlined structure will greatly increase the ability to make changes quickly and reliably. Trying to get faster with good quality on a monolithic hairball of an application is a recipe for failure. These changes can occur in an evolutionary way. This approach, referred to in the past as proactive maintenance, is a price that must be paid early to make life easier in the future.

You gotta have testing:

Test cases are necessary to support automation of this critical step. While the tooling is very different, and even the approaches may be unique to the mainframe architecture, they are an important component of speed and reliability. This can be a tremendous hurdle to overcome on the road to agile development on the mainframe. This level of commitment can become a real roadblock to success.

Another example of an organization gradually changing:

When a large European bank faced wholesale change mandated by loss of support for an old platform, it chose to rewrite its core system in mainframe COBOL (although today it would be more likely to acquire an off-the-shelf core banking system). The bank followed a component-based approach that helped position it for success with agile today by exposing its core capabilities as services via standard APIs. This architecture did not deliver the level of isolation the bank could achieve with microservices today, as it built the system with a shared DBMS back-end, as was common practice at the time. That coupling with the database and related data model dependencies is the main technical obstacle to moving to continuous delivery, although the IT operations group also presents cultural obstacles, as it is satisfied with the current model for managing change.

A reminder: all we want is a rapid feedback cycle:

The goal is to reduce the cycle time between an idea and usable software. In order to do so, the changes need to be smaller, the process needs to be automated, and the steps for deployment to production must be repeatable and reliable.

The ALM technology doesn’t support mainframes, and mainframe ALM stuff doesn’t support agile. A rare case where fixing the tech can likely fix the problem:

The dilemma mainframe organizations may face is that traditional mainframe application development life cycle tools were not designed for small, fast and automated deployment. Agile development tools that do support this approach aren’t designed to support the artifacts of mainframe applications. Modern tools for the building, deploying, testing and releasing of applications for the mainframe won’t often fit. Adapting existing mainframe software version control and configuration management tools to a new agile approach to development will take some effort — if they will work at all.

Use APIs to decouple the way, norms, and road-map of mainframes from the rest of your systems:

wrapping existing mainframe functions and exposing them as services does provide an intermediate step between agile on the mainframe and migration to environments where agile is more readily understood.
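
To make that wrapping idea concrete, here’s a minimal, hypothetical sketch of the pattern. It assumes the mainframe transaction is already reachable over HTTP (say, via z/OS Connect or a CICS web service); every name, URL, and field below is made up for illustration, not taken from the report:

```python
# Hypothetical sketch: a thin REST facade over an existing mainframe function.
from flask import Flask, jsonify
import requests

app = Flask(__name__)

# Hypothetical endpoint the mainframe team already exposes on its own release cadence.
MAINFRAME_INQUIRY_URL = "http://mainframe.example.internal:9080/accounts/inquiry"

@app.route("/api/accounts/<account_id>/balance")
def get_balance(account_id):
    # Call the existing mainframe transaction and translate its record-style reply
    # into a plain JSON contract that downstream (agile) teams can code against.
    resp = requests.post(MAINFRAME_INQUIRY_URL, json={"ACCT-ID": account_id}, timeout=5)
    resp.raise_for_status()
    record = resp.json()
    return jsonify({
        "accountId": account_id,
        "balance": record.get("ACCT-BALANCE"),
        "currency": record.get("ACCT-CURRENCY", "USD"),
    })

if __name__ == "__main__":
    app.run(port=8080)
```

The point of a facade like this is that the teams moving fast code against the JSON contract, while the COBOL core keeps its slower, monthly-style cadence behind it.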

Contrary to what you might be thinking, the report doesn’t actually advocate moving off the mainframe willy-nilly. From my perspective, it’s just trying to suggest using better processes and, as needed, updating your ALM and release management tools.

Read the rest of the report over behind Gartner’s paywall.

Choose your TAM wisely and remember to charge a high price, RethinkDB

[O]ur users clearly thought of us as an open-source developer tools company, because that’s what we really were. Which turned out to be very unfortunate, because the open-source developer tools market is one of the worst markets one could possibly end up in. Thousands of people used RethinkDB, often in business contexts, but most were willing to pay less for the lifetime of usage than the price of a single Starbucks coffee (which is to say, they weren’t willing to pay anything at all). Link

How big is the pie?

Any company selling developer tools needs to figure out the overall market size for what they’re selling. Developers, eager to build tools for themselves (typically, in their mid-to-late 20s, developers work on at least one “framework” project), often fall prey to picking a market that has little to no money and, then, are dismayed when “there’s no money in it.”

What we’re looking for here is a market category and a way of finding how much money is being spent in it. As a business, you want to grab as much of the money as possible. The first thing you want to do is make sure there’s enough money for you to care. If you’re operating in a market that has only $25m of total, global spend, it’s probably not worth your while, for example.

Defining your market category, too, is important to find out who your users and buyers are. But, let’s look at TAM-think: finding what the big pie of cash looks like, your Total Addressable Market.

The TAMs on the buffet

If you’re working on developer oriented tech, there are a few key TAMs:

Another interesting TAM for startups in the developer space is a combo one Gartner recently put together that shows public and private PaaS, along with “traditional” application platforms: $7.8bn in 2015. 451 has a similar TAM that combines public and private cloud at around $10bn in 2020.

I tried to come up with a public and private PaaS TAM – a very, very loose one – last year and sauntered up to something like $20 to $25bn over the next 5-10 years.

There are other TAMs, to be sure, but those are good ones to start with.

Bending a TAM to your will, and future price changes

In each case, you have to be very, very careful because of open source and public cloud. Open source means there’s less to sell upfront and that, likely, you’ll have a hard time suddenly going from charging $0 to $1,000’s per unit (a unit is whatever a “seat” or “server” is: you need something to count by!). If you’re delivering your stuff over the public cloud, similar pricing problems arise: people expect it to be really cheap and are, in fact, shocked when it adds up to a high monthly bill.

But briefly: people expect infrastructure software to be free nowadays. (Not so much applications, which have held onto the notion that they should be paid for: but the low prices in the app store depress their unit prices too.)

In both cases (open source and public cloud delivery), you’re likely talking a drastically lower unit price. If you don’t increase the overall volume of sales, you’ll whack down your TAM right quick.
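
To put toy numbers on that (all of them made up): suppose the old model sold 2,000 units a year at a $10,000 up-front license, and the new open source or cloud model prices at $1,200 per unit per year.

```python
# Toy illustration, made-up numbers: what a unit-price drop does to addressable spend.
units = 2_000                 # units (seats/servers) sold per year under the old model
old_price = 10_000            # $ per unit, up-front license
new_price = 1_200             # $ per unit per year, subscription/cloud pricing

old_spend = units * old_price             # $20,000,000 of addressable spend
new_spend = units * new_price             # $2,400,000 if volume stays flat

# Volume needed at the new price just to keep the same addressable spend:
units_needed = old_spend / new_price
print(old_spend, new_spend, round(units_needed))  # 20000000 2400000 16667
```

That is, at the new price you need roughly eight times the volume just to stand still, which is why forward-looking TAMs for open source and cloud delivery can look so much smaller than the legacy TAMs they replace.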

So, you have to be really, really careful when using backward looking TAMs to judge what your TAM is. Part of the innovation you’re expected to be doing is in pricing, likely making it cheaper.

The effect is that your marketshare, based on “yesterday’s TAMs,” will look shocking. For example, Gartner pegged the collective revenue of NoSQL vendors (Basho, Couchbase, Datastax, MarkLogic, and MongoDB) at $364M in 2015: 1% of the overall TAM of $35.9bn! Meanwhile, the top three Hadoop vendors clocked in at $323.2M and AWS’s DB estimate was $833.6M.

Pair legacy TAMs with your own bottoms-up TAM

In my experience, the most helpful way of figuring out a TAM (really, recomputing TAMs in “real time”) is to look at the revenue that vendors in that space are having and then to understand what software they’re replacing. That is, in addition to taking analyst TAMs into perspective, you should come up with your own, bottoms-up model and explain how it works.

If you’re doing IT-led innovation, using existing (if not “legacy”!) TAMs is a bad idea. You’ll likely end up over-estimating your growth and, worse, misjudging which category of software you’re in and who the buyers are. Study your users and your buyers and start modeling from there, not pivot tables from the north east.
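
Here’s a sketch of what that bottoms-up modeling might look like (every number is hypothetical): add up what vendors in the space actually book today, plus the slice of the legacy spend you believe the new technology realistically displaces.

```python
# Rough bottoms-up TAM sketch; all figures are hypothetical placeholders.
vendor_revenue_m = {          # $M/year that vendors in the category book today
    "vendor_a": 120,
    "vendor_b": 90,
    "vendor_c": 60,
}
legacy_spend_m = 1_500        # $M/year of legacy licenses/ops the new stuff could replace
capture_rate = 0.15           # share of that legacy spend realistically in play near-term

bottoms_up_tam_m = sum(vendor_revenue_m.values()) + legacy_spend_m * capture_rate
print(f"~${bottoms_up_tam_m:,.0f}M per year")   # ~$495M per year
```

Compare that figure to the analyst TAMs above; if they’re an order of magnitude apart, it’s worth understanding why before building a plan on either number.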

The other angle here is that if you’re “revolutionizing” a market category, it means you’re redefining it. This means there will be no TAM for many years. For example, there was no “IaaS” TAM for a long time; at some point, there was no “Java app server” TAM either. In such cases, creating your own TAMs is much more useful.

Finally, once you’ve figured out how big (or small!) your pie of money is, adjust your prices accordingly. More than likely you’ll find that you’ll need to charge a higher price than you think is polite…if you want to build a sustainable, revenue-driven business rather than just a good aggregation startup to be acquired by a larger company…who’ll be left to sort out how to make money.

“the obsolescence of Java EE” – Notebook

Bottom line: Java EE is not an appropriate framework for building cloud-native applications.

In preparation for this week’s Pivotal Conversations, I re-read the Gartner write-up on the decline of traditional JEE and the flurry of responses to it. Here’s a “notebook” entry for all that.

From Gartner’s “Market Guide for Application Platforms”

This is the original report from Anne Thomas and Aashish Gupta, Nov 2016. Pivotal has it for free in exchange for lead-gen’ing yourself.

What is an “application platform” vs. aPaaS, etc.?

Application platforms provide runtime environments for application logic. They manage the life cycle of an application or application component, and ensure the availability, reliability, scalability, security and monitoring of application logic. They typically support distributed application deployments across multiple nodes. Some also support cloud-style operations (elasticity, multitenancy and self-service).

An “aPaaS” is a public cloud-hosted PaaS, of which they say: “By 2021, new aPaaS deployments will exceed new on-premises deployments. By 2023, aPaaS revenue will exceed that of application platform software.”

On the revenue situation:


Commercial Java Platform, Enterprise Edition (Java EE) platforms’ revenue declined in 2015, indicating a clear shift in the application platform market…. Application platform as a service (aPaaS) revenue is currently less than half of application platform software revenue, but aPaaS is growing at an annual rate of 18.5%, and aPaaS sales will supersede platform software sales by 2023.

And:

Currently, the lion’s share of application platform software revenue comes from license sales of Java EE application servers. From a revenue perspective, the application platform software market is dominated by just two vendors: Oracle and IBM. Their combined revenues account for more than three-quarters of the market.

Revenue for the current market leaders, IBM and Oracle, declined over the last three years (4.5% and 9.5%, respectively), while Red Hat, AWS, and Pivotal saw upticks (33.3%, 50.6%, and 22.7%, respectively).

The decline/shift is driven by:

given the high cost of operation, the diminishing skill pool and the very slow pace of adoption of new technologies, a growing number of organizations — especially at the low end of the market — are migrating these workloads to application servers or cloud platforms, or replacing them with packaged or SaaS applications.

And:

Java EE has not kept pace with modern architectural trends. Oracle is leading an effort to produce a new version of Java EE (version 8), which is slated to add a host of long-overdue features; however, Oracle announced at Oracle OpenWorld 2016 that Java EE 8 has been delayed until the end of 2017. By the time Java EE catches up with basic features required for today’s applications, it will be at least two or three years behind the times again.

Target for cloud native:

Design all new applications to be cloud-native, irrespective of whether or not you plan to deploy them in the cloud…. If business drivers warrant the investment, rearchitect existing applications to be cloud-native and move them to aPaaS.

Vendor selection:

Give preference to vendors that articulate a platform strategy that supports modern application requirements, such as public, private and hybrid cloud deployment, in-memory computing, multichannel clients, microservices, event processing, continuous delivery, Internet of Things (IoT) support and API management.

Responses

Oracle and Java: confusing

Oracle’s stewardship of Java has been weird of late:

It’s all about WebLogic and WebSphere

I think this best sums it all up, the comments from Ryan Cuprak: “What this report is trying to do is attack Oracle/IBM via Java EE.”

I wouldn’t say “attack,” but rather show that their app servers are in decline, as well as TP processing things. The report is trying to call the shift to both a new way of development (cloud native) and the resulting shifts in product marketshare, including new entrants like Pivotal.

I can’t speak to how JEE is changing itself, but given past performance, I’d assume it’ll be a sauntering follower in adopting new technologies; the variable this time is Oracle’s proven ambivalence about Java and JEE and, thus, funding problems that make it hard to fuel the change fast enough to keep pace with everything else.

The undying death of JEE – Gartner, app servers, and cloud native – Pivotal Conversation

One of your favorite technologies is on the death wagon, again. Gartner recently recommended avoiding JEE for new, cloud native application development. This predictably kicked up all sorts of push-back from the JEE stalwarts. In this episode we discuss the report, the responses, and all the context to figure out what to make of all this. Spoiler: JEE isn’t dead, as ever, it’s just a part of the ongoing gumbo that is a Java application.

 

Subscribe, follow, feedback

News Links

Gartner on JEE for Cloud Native

TheNewStack: Gartner Purchase of Corporate Executive Board Would Address Changes in IT Advisory Market

Lawrence Hecht has some brief commentary on Gartner buying CEB for $2.6bn – Lawrence takes out the $700m in debt from the actual deal value of $3.3bn. I don’t really know CEB too well.

He also covered some recent analysis of the analyst industry, including the post I did on the topic and a podcast at KEA on the idea with other analyst-types, both back in 2015.

Also, see the official press release.

Here’s some share price performance, snipping out the time around the acquisition announcement (it goes up, of course):

Link

Five years of declining PC sales

For the year, Gartner estimated shipments at 269.717 million, down 6.2 per cent year-on-year, with each of the major manufacturers except Dell reporting falling sales.

Gartner says high-end PCs are doing well, but of course, are a smaller market:

There have been innovative form factors, like 2-in-1s and thin and light notebooks, as well as technology improvements, such as longer battery life. This high end of the market has grown fast, led by engaged PC users who put high priority on PCs. However, the market driven by PC enthusiasts is not big enough to drive overall market growth.

There may be less volume, but it’d be nice to know how that affects profits in the notoriously slim-margin PC business.

Meanwhile, on overall, global IT spend:

Companies are due to splash $3.5tr (£2.87tr) on IT this year, globally, although that is down from its previous projection of three per cent.

See some more commentary on that forecast.

Link

On-prem still a big thing, Gartner survey

only 10% of organizations surveyed by Gartner are expected to close their on-premises data centers by 2018

Much of Pivotal’s business is on-premises, very much of it. However, most large organizations I talk with really want to get to much more public cloud as soon as possible. They look to Pivotal Cloud Foundry’s multi-cloud compatibility to help them down the line with that. For example, Home Depot is starting to move applications to Google Cloud.

Anyhow, most people outside of enterprise IT are surprised and a bit incredulous at how much “private cloud” there still is: ¯\_(ツ)_/¯

Link