Docker and Kubernetes

Dave Bartoletti, an analyst with IT consultancy Forrester, said it’s clear that Kubernetes has won at the orchestration layer. “There’s too much mindshare around it,” he said in a phone interview with The Register. “There are too many developers who just want this.”

Pretty much everyone shares the sentiment that Kubernetes has won.

More details from Joseph Tsidulko at CRN:

While some components of Enterprise Edition previously could be made to work with Kubernetes, the crucial control plane for managing the lifecycle of containerized applications was incompatible. Docker, however, had participated in the Kubernetes project, and always believed the technologies were complementary, Chanana said.
Docker is now focused on building out the components needed to make Kubernetes an enterprise-grade solution, just as it did with Swarm, he said, including security, high availability, and ease of use through its existing tools and control plane. Those are capabilities Docker uniquely can deliver to ease a lot of the struggles customers face in taking advantage of Kubernetes’ advanced container-scheduling capabilities.

Source: Kubernetes has won. Docker Enterprise Edition to support rival container-wrangling tech

Docker CEO Steve Singh Interview: All About That Migration To Cloud

The single biggest one is the move to public cloud, and this is where Docker is focused today. This is the number one area that we are putting all our investment in. We have this great container platform that allows you to do a lot of things, but just like any company, we need to pick an area of focus and for us, helping customers take legacy apps, moving them to the Docker platform, and allowing them to run it on any infrastructure because it’s hybrid cloud world, does a couple of things — it drives massive savings for customers, typically 50 percent cost reduction in a cost structure, but it also opens up real opportunities for the customer and our partners to innovate within that environment

Also, this is an insanely good example of a fluffy leather chair conference interview, plus, The Channel filter.

More:

Where does the 50 percent savings come from? A few different areas. The biggest is, honestly, in the mass reduction in number of VMs [virtual machines] and that’s not good or bad, it’s just the reality. The other is that there is a massively increased density factor on compute, and so we can put a lot more workloads on a fewer number of servers. If you are a [company like] Nestle, and you are going to take a bunch of information and business systems and move it to the public cloud, doing a one-to-one move is not necessarily all that advantageous.

Partners:

When I joined Docker I had a good conversation with someone over at Microsoft that said ‘I’d love to partner with you.’ His view was, the more people move to Docker, the more business they get on Azure. In fact, for every dollar we generate, he generates $7.

Momentum and the EBIT(A) chase:

we’re growing at 150 percent-plus year over year and expect that to continue for at least another few years. I’m hoping to get to profitability in mid-2019, and that’s important

Source: Docker CEO Steve Singh On The VMware Relationship, Security, And The Opportunities Around Containers For Partners

Software Defined Talk: Cloud Rules Everything Around Me – Red Hat, Moby, Docker CEO, and Halo Effect’ing The First Cloud Wars

There’s much news in the container world with DockerCon and Red Hat having had conferences, plus Docker gets a new CEO. We also do a hindsight analysis of what went wrong with the losers of the Cloud Wars. And, as always, recommendations from the three of us.

Be all civilized and modern by subscribing to the feed, or just download the MP3 directly if you prefer utter, complete control over your ear-holes.

The coming licensing hassles with Dockerized enterprise software

“Licensing. Why’d it have to be licensing?”

Jon Hall, who always has good things to say about traditional IT Service Management butting up against Melinum IT, points out an all too common hassle with new ways of packaging and running IT: accounting for traditional licensing. Here, he points out a likely licensing counting problem with Docker-ized applications, e.g., with Oracle licensing when it comes to the recent, official Docker images with Oracle software in ’em:

But there’s a serious gotcha here: as any Software Asset Manager could point out, these actions could have just cost the company a pretty staggering amount of money. How? By falling foul of Oracle’s notoriously complex licensing system.
Oracle licensing is bloody complex, and it’s entirely possible that a goalpost or two might have moved by the time you read this.

The news from Docker-land, plus, the money being fought over – Notebook

With DockerCon this week, there’s no end of Docker quotables and items. Here’s my collection.

General momentum

Once landed in an account, Docker usage grows, their CEO says:

There has also been expansion within customers, with organizations that start with Docker expanding their usage on average by five times within six months

Way back in 2015, the (now annual?) DataDog study of Docker usage among their customers said that 2/3 of companies that try Docker adopt it. Which is all to say: once it gets in, it spreads.

Moby

A toolkit for putting together Docker stacks:

In essence, Moby is the build system that creates Docker Community Edition, which is akin to Fedora, and Docker Enterprise is derived from Moby and is akin to Red Hat Enterprise Linux. Link

People got all freaked out. I’d even say “freaked the fuck out.” Competitors, of course, gloated, if only in silence. Criticism of how the announcement was handled aside (ideally, you wouldn’t want to kick up a stink), I feel like it was mostly a tempest in a teapot.

Docker momentum/penetration and types of applications/workloads

Global 2000 customers have somewhere on the order of thousands to tens of thousands of applications, and across these major firms, less than 5 percent of the applications have been containerized so far. While somewhere between 5 percent and 10 percent of the applications that are being containerized are net-new, microservices-style applications that everyone is talking about all the time, the other 90 percent to 95 percent are just lifting and shifting legacy applications from bare metal or virtual machines to containers. Link

VMware threat…or just legacy gobbling?

Docker bounces back and forth between “replacement for VMware” and “a different thing, so don’t worry about VMware.” In this round of Docker news, there’s been some strong pull towards the “replacement for VMware” camp. To be fair, it’s more like doing both:

In general, says Johnston, customers who move from bare metal or VMs to Docker containers can provision, scale, and deploy applications up to 75 percent faster, and those moving from bare metal to containers can save 50 percent on compute and those who are moving from VMs will save around 25 percent. Link

This might also come from the obvious move to start gobbling up legacy (more accurately “existing”) applications. Here, Docker had two customer references:

Northern Trust, a leading international financial services company, experienced  deployment times that were 4X faster and noted a 2X improvement in infrastructure utilization

And, Microsoft IT:

Microsoft is not only a partner in this program; their IT organization is also a beta customer.  Microsoft IT increased app density 4X with zero impact to performance and were able to reduce their infrastructure costs by a third.

There was also a story of Visa using Docker:

Kocherlakota said Visa is aiming to move as many workloads as it can to the container model to help improve overall efficiency.

See more on this legacy migration stuff and the program with Avanade, Cisco, HP, and Microsoft from Docker’s Scott Johnson.

Major vendors

Other tech companies are often cautious about working with Docker. They’re not really certain about how it helps or threatens their position in the IT stack and, therefore, their ability to sell higher profit margin products and services. No one wants to become the x86 manufacturer of the cloud (read: low margin, commodity).

I’ve noticed this cautiousness slightly melting as more and more vendors are at least putting their stuff in Docker images and, on the public cloud front, supporting the use of Docker. My company, Pivotal, ingests Docker images.

A brief whack at why Microsoft cares, from Christopher Tozzi:

Although there remains work to do to get Docker on Windows ready for prime time, the platform will be important in helping Windows Server stay as nimble as Linux environments in hosting the workloads of the future…. Microsoft’s interest in Docker may seem strange. Microsoft already offers traditional virtual machine products, most notably Hyper-V. In some respects, Docker containers compete with virtual machine platforms…. But that’s not necessarily the case. Depending on how they’re used, containers can complement virtual machines, rather than replace them. If you use virtual machines to host the environment in which Docker runs, your Docker environment becomes more scalable and portable than it would be if it ran on bare metal. That’s likely the type of use case Microsoft envisions for containers on Windows.

More from Nick Martin on Microsoft and Docker.

Oracle bundling middleware in Docker containers:

Oracle becomes the latest enterprise IT vendor to jump on the Docker container bandwagon as it seeks to expand its reach in the public cloud market. Among the container-based application, middleware and development tools made available on the container platform are Oracle’s MySQL database and its WebLogic server. Those tools are in addition to the more than 100 images of Oracle products already available on Docker Hub, its cloud-based image registry.

So, what’s going on here? Staking a claim on The New Stack

I’m often asked to explain all the various cloud stacks, to help Pivotal buyers sort out what CaaS, PaaS, cloud-native, and “cloud strategy” mean. They’re trying to figure out their plans for building out new IT, for “doing DevOps.” It’s a mess out there w/r/t figuring all this out if you’re not a vendor or analyst who’s steeped in this shoggoth every day.

In all the Docker, container, and cloud-native wars, the revenue battle for vendors is mostly about two things:

  1. The pool of money in simply migrating the VMware workload to a new, more efficient layer (hence the ongoing attention to “the VMware threat” that Docker poses). I’m not sure how big this market is because, as a disruptive shift (cf. Linux vs. UNIX vs. Windows vs. z), part of it is reducing the overall spend through lower prices and more efficient usage. But, the existing virtualization market is best described as “fucking huge.”
  2. Fighting over who “owns” (and therefore collects the most profit from) the stack that companies are using to build and run their software. By my estimate, this is something like a $20-25bn market in the future. You can see a Spanish Civil War-like precursor going on in the Java application server market; it’s spreading to a “World War” with respect to all custom software stacks.

On that second point, here’s my latest attempt to describe how things are shaking out category/definition wise:

Of all the SPI cloud categories, PaaS is the most problematic place as all us vendors hate the PaaS term and are trying to re-define what it means. I would break PaaS into two categories currently: (1.) container orchestration, and, (2.) cloud platform.

Container orchestration takes an IaaS and manages the installation and configuration of container images on your new cloud. By “images” here, I mean that you’ve chosen to put your software (probably custom-written software, not packaged software) into containers (or the delegated way we do it with buildpacks in CF), specified how all the different nodes are wired together with all the ACLs and configuration, and then handed it over to the orchestration software to deploy those containers, set the configuration, and do the ongoing health-checks/remediation.

Ideally, the orchestration platform should also have “day 2” tools to help you monitor and manage (“fix”) problems that happen in production. I assume things like Kubernetes, the Docker/Moby constellation of things, Mesosphere, etc. fit here.
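To make that a little more concrete, here’s a minimal sketch of what “handing it over to the orchestrator” looks like, using the official Kubernetes Python client (the app name, image registry, and health-check path are all made up for illustration): you declare the desired state – three replicas of this image, with this liveness check – and the orchestrator does the deploying, health-checking, and remediation.

```python
# Minimal sketch of declaring desired state to an orchestrator, here with the
# official Kubernetes Python client ("kubernetes" on PyPI). The app name,
# image, and health-check path are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use your local kubeconfig credentials
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state: the orchestrator keeps three copies running
        selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-app"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="my-app",
                    image="registry.example.com/my-app:1.0",
                    # the "day 2" part: the platform health-checks and restarts for you
                    liveness_probe=client.V1Probe(
                        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
                    ),
                ),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

The equivalent in Swarm or Mesos looks different in the details, but the shape is the same: desired state in, ongoing reconciliation out.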

People are obsessed with container orchestration now and it’s pretty much all anyone talks about. I think all this is what’s becoming known as “CaaS” – Containers as a Service.

(On this next section, I’m extremely monetarily biased, of course:) A cloud platform either has or depends on an orchestration layer, but adds in integrated middleware, ALM tools (starting with basics like “cf push”), and an overall programming and deployment model with all the tools and enforcements. Heroku is the classic example here in public cloud, and now Cloud Foundry (CF) has taken over this model in public and private cloud, the latter (it seems) being where most of the usage and money is, at least in the enterprise space. I’d argue that CF is the enterprise market leader (by revenue at least, and increasingly by penetration in the F500 – while Pivotal has impressive numbers, throw in the other CF distros and it’s even larger, no doubt); at the very least, it has “the highest growth and the most enterprise production usage.” That all depends on how you slice it, and of course my slicing favors me.

A cloud platform “pulls together” everything into a fully working “cloud” that deploys and provisions the servers, builds/maintains/deploys the containers, takes care of your networking configuration and concerns (inc. firewalls, etc.), and configures/manages all the middleware needed (e.g. “I want a database” means you just ask for it, instead of having to configure it and make container images of it and specify how it all works together).

The end goal of a cloud platform is the original end-goal of a PaaS: developers don’t have to set up any of the infrastructure or, really, middleware (databases, queues, etc.) that they use: they just write the “business logic” of their applications.
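For a sense of what that looks like from the developer’s chair, here’s a sketch of the “I want a database” case on a Cloud Foundry-style platform – the platform provisions the database and hands the app its credentials through the VCAP_SERVICES environment variable (the service label and instance name below are hypothetical):

```python
# Sketch of the developer's view of a platform-provided database on a
# Cloud Foundry-style platform: credentials arrive in the VCAP_SERVICES
# environment variable. The service label "p-mysql" and instance name
# "orders-db" are made up for illustration.
import json
import os

vcap_services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))

connection_uri = None
for binding in vcap_services.get("p-mysql", []):
    if binding["name"] == "orders-db":
        # the platform already created the database and the credentials;
        # the app never configured or containerized any of it
        connection_uri = binding["credentials"]["uri"]
```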

All this standardization is technically “restrictive” (developers can’t just install anything they download off the Internet, it has to be integrated into the platform). This is why we often call this model “opinionated,” but it follows the same contract/promises model that Google SREs follow: we promise we can support your applications in production if you use only the things we support, otherwise it’s all on you.

However, the benefit of such opinions is a huge jump in productivity, as we see at all our customers: one Pivotal customer manages 1,000+ applications (all angled toward very frequent, DevOps-style releases for fast feedback loops and all that small batch stuff) with just 4 PCF operations staff, etc.

Our DIY white paper makes the case that snow-flaking this all out is a bad idea. At the very least, if you build your own platform, you should try to have just one, used organization-wide.

In comparing CaaS and cloud platform, the key distinction to me is that a cloud platform bundles and integrates together all your middleware and “services” frameworks. For example, if you want to do microservices with all the bulkheads and such, that functionality should be built into the cloud platform – you shouldn’t have to go read up on how to set most of that up. PCF, of course, has Spring Cloud and more for that. All of the systems management tools (things used in production to detect and fix problems) should also be built in, or the cloud platform should be instrumented so deeply that third-party tools can do the managing as well.
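For a flavor of what’s being bundled, here’s a toy sketch of one of those microservices patterns – a circuit breaker, roughly the idea behind what Spring Cloud gives you out of the box (this is not that implementation, just the pattern). The point of a cloud platform is that you get this behavior, wired up and managed, without hand-rolling it per app.

```python
# Toy circuit breaker, just to illustrate the kind of resilience pattern a
# cloud platform bundles; after too many consecutive failures, stop calling
# the downstream service for a cool-off period and fail fast instead.
import time


class CircuitBreaker:
    def __init__(self, max_failures=5, reset_after_seconds=30.0):
        self.max_failures = max_failures
        self.reset_after_seconds = reset_after_seconds
        self.consecutive_failures = 0
        self.opened_at = None  # set when the breaker trips

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after_seconds:
                raise RuntimeError("circuit open: failing fast instead of calling downstream")
            self.opened_at = None  # cool-off elapsed, allow a trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.max_failures:
                self.opened_at = time.time()  # trip: stop hammering a sick service
            raise
        self.consecutive_failures = 0
        return result
```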

Now, these two categories are likely to converge, and then the discussion will just be which cloud platforms are more featureful and better. It’ll be like battling Java application servers.

I haven’t made one of my own “burger” stacks of all this in a long time, but I think (again, highly biased) the ones we use for PCF are pretty good.

More

In case you don’t know, working at Pivotal, I obviously have a stake in how all this turns out, so I’m biased on multiple angles of the above whether I want to be or not. 

Software Defined Talk: Docker is just cheap VMware, right?

Our new episode is up, from this past Friday:

There’s tell that some people just look at containers as a cheaper way to virtualize, eschewing the fancy-lad “cloud-native stuff.” We discuss that idea, plus “the enterprise cloud wars,” and also our feel that Slack is actually a really good tool and company.

Listen directly, subscribe to the podcast feed, and go check out the full show notes, which has a web player as well.

The crowded cloud native space

The wider Cloud Native ecosystem is, however, a very disparate and confused place. We anticipate a significant level of consolidation over the next twelve to eighteen months with some clear winners emerging. The emergence of several opinionated distributions of Kubernetes is hardly a surprise and this space will expand a little further before settling down.

Link

The Container Landscape, choosing what to do now

A round-up of all sorts of container stacks, and some advice on what to do:

Therefore, the key lessons learned from this event (from developer’s perspective): Do not focus on developing code for the container under the hood. Care instead about the business logic. Implement your microservices in a vendor agnostic way.

Do not make the same fault as we all did with J2EE / Java EE where all vendors used the same standard specifications, but still offered many vendor-dependent features and “added value” in their specific “standard implementation”. Migration, i.e. deployment to another Java EE application server was a lot of efforts (re-development, testing, …); sometimes a complete re-write was easier and faster.

There’s a lot of fragmentation in container land now. This is what Linux must have felt like back in the late 90s.

Our advice at Pivotal, of course, is to focus on using Spring and other services towards the top of the stack for that layer of lock-in protection.

Link

071: Unbreakable Docker, or, elephants, er, like other elephants – Software Defined Talk

Eventually, you have to decide how your open source software is going to make money, and your partners probably won’t like it. That’s what the dust-up around Docker is this week, it seems to us. We also talk briefly about VMware’s big conference this week, and rumors of HPE selling off its Software group to private equity.

Check out the full show notes for links to the recommendations, conferences, and tech news items we didn’t get to cover: https://cote.io/sdt71

Listen above, subscribe to the feed (or iTunes), or download the MP3 directly.

With Brandon Whichard, Matt Ray, and Coté.


Show notes

  • Nippers – “Nippers learn about safety at the beach. They learn about dangers such as rocks, and animals (e.g. the blue-ringed octopus), and also about surf conditions, such as rip currents, sandbars, and waves. Older Nippers also learn some basic first aid and may also learn CPR when they reach the age of 13.”

Can someone explain this “Docker forking” hoopla?

  • Coté’s write-up.
  • Docker Inc. doesn’t want to be a commoditized building block
    From a Red Hat person: “The conflict started to escalate earlier this summer, when Docker Inc used its controlling position to push Swarm, its own clone of Kubernetes-style container orchestration, into the core Docker project, putting the basic container runtime in a conflict with a notable part of its ecosystem. Docker Inc. then went on to essentially accuse Red Hat of forking Docker – at the Red Hat Summit no less. After that, Docker Inc’s Solomon Hykes came out strongly against the efforts to standardize the container runtime in OCI – an initiative his company co-founded.”
  • Re: that episode where we discuss Docker ecosystem challenges: “Yet on a regular basis, Red Hat patches that enable valid requirements from Red Hat customer use cases get shut down as it seems for the simple reason that they don’t fit into Docker Inc’s business strategy.”
  • A fight over where to draw the line between free/open/commodified and costs/proprietary/competitive: “And while I personally consider the orchestration layer the key to the container paradigm, the right approach here is to keep the orchestration separate from the core container runtime standardization. This avoids conflicts between different layers of the container runtime: we can agree on the common container package format, transport, and execution model without limiting choice between e.g. Kubernetes, Mesos, Swarm.”
  • Don’t bring a pistol to a bazooka fight. Enterprises love RHEL – have you ever tried to sell Ubuntu into organizations? It’s what selling NT must have been like.

VMware hybrid cloud solutionaring

This Week in Tech Private Equity…

BONUS LINKS! Not covered in podcast.

Spaces vs. Tabs

Recommendations

Deciding where the Docker ecosystem will make money

The Docker forking hoopla is providing an interesting example, in realtime, of how open communities figure out monetization.

#RealTalk: Open communities are not immune to C.R.E.A.M.

One of the most important decisions an open source community makes is where and how it will make money. I always liked Eclipse’s take because they’re mega clear on this topic; the ASF plays this goofy game where they try really hard to pretend they don’t need to answer the question, which itself is an answer, resulting in only the occasional quagmire; Linux has a weird situation where Red Hat figured out This One Cool Trick to circumvent the anti-commercial leanings of the GPL; MySQL has a weird dual licensing model that I still don’t fully grasp the strategic implications of; RIP Sun.

Standards play another defining role when it comes to monetization. Think of Java/J(2)EE, vs. .Net, vs. PHP (a standard-less standard?), vs. HTML and WS-*, vs. the IETF/ISOC RFC-scape that defines how the internet works. While not always the case, standards are often used tactically to lessen the commercial value (or zero it out completely) of any given component “lower” in the stack, pushing the money “up” the stack to the software that implements, uses, or manages the standard. Think of how HTML itself is “of no value” (and was strategically pushed that way early on), but that the entire SaaS market is something like a $37.7bn market, part of the overall $90.3bn that, arguably, uses HTML as one of the core technologies in the stack, at the UI layer (along with native mobile apps now).

The dynamics of how open source, standards, and the closed source around it are defined and who “controls” them are one of the key strategic processes in infrastructure software.

The Docker ecosystem is sorting out monetization

Right now, you can see this process in action in the Docker ecosystem. Product management decisions at Docker, Inc. are forcing the community to wrestle with how ecosystem members will make money, including Docker Inc. itself.

By “ecosystem,” I mean “all the people and companies that are involved in coding up Docker and/or selling Docker-based products and services.” Actual end-users play a role, of course, but historically don’t have as much power as we’d like at this stage of an open community’s formation.

End-users have to vote with their feet and, if they have them, wallets – whether wearing expensive loafers (enterprise) or cracked sandals (paying with nothing but the pride of ubiquity) – which, by definition, is hard to do until a monetization strategy is figured out, or completely lumped all together.

Looking just at the “vendors,” then, each ecosystem member is trying to define which layers of the “stack” will be open, and thus free, and which layers will be closed, and thus charged for. Intermixed with this line drawing is determining who has control over features and standards (at which level) and, as a result, the creation of viable business models around Docker.

Naturally, Docker, Inc. wants as big a slice of that pie as possible. The creator of any open technology has to spend a lot of nail-biting time essentially deciding how much money and market-share it wants to give up to others, even competitors. “What’s in it for me?” other vendors in the ecosystem are asking…and Docker Inc.’s answer is usually either some strategic shoe-gazing or, pretty straightforwardly, the reply “less than you’d like.”

As a side note, while I don’t follow Docker, Inc. as an analyst any more (so I’m not mega up-to-date), it seems like the company consistently puts the end-users first. They’re looking to play the Tron role in this ecosystem most valiantly. This role doesn’t, really, conflict at all with elbowing for the biggest slice of the pie.

From The New Stack’s Docker & Container Ecosystem research

Similar to Docker Inc’s incentives to maintain as much control as possible, the “not-Docker, Inc.” members of the ecosystem want to commoditize/open the lower levels of the stack (the “core”), and keep the upper layers as the point of differentiation and monetization. This is the easiest, probably most consistently successful business model for infrastructure software: sell proprietary software that manages the “lower,” usually low-cost-to-free, layers in the stack. From this perspective, not-Docker, Inc. members want to fence in the core Docker engine and app packaging scheme as the “atomic unit” of the Docker ecosystem. Then, the not-Docker, Inc.’s want to keep the management layer above that atomic unit for themselves to commercialize (here “orchestration,” configuration management, and the usual systems management stuff). But, of course, Docker Inc. is all like “nope! That’s my bag o’ cash.”

As explained by one of those ecosystem vendors, who works at Red Hat:

And while I personally consider the orchestration layer the key to the container paradigm, the right approach here is to keep the orchestration separate from the core container runtime standardization. This avoids conflicts between different layers of the container runtime: we can agree on the common container package format, transport, and execution model without limiting choice between e.g. Kubernetes, Mesos, Swarm.

We saw similar dynamics – though by no means open source – in the virtualization market. VMware started with the atomic unit of the hypervisor (remember when we were obsessed with that component in the stack and people used that word a lot?), allowing the ecosystem to build out management on top of that “lower” unit.

Then, as VMware looked to grow its TAM, revenue, and, thus, share price and market-cap, it expanded upward into management. At this point, VMware is, more or less, the complete suite (or “solution” as we used to call it) of software you need for virtualization. E.g., they use phrases like “Software Defined Datacenter” rather than “virtualization,” indicative of the intended full scope of their product strategy. (I’m no storage expert, but I think storage, and maybe networking, are the last things VMware hasn’t “won” hands down.)

“What, you don’t like money?”

From one of Donnie’s recent presentations.

All of this is important because over the next 10-15 years, we’re talking about a lot of money. The market window for “virtualization” is open and wildcatters are sniffing at the wafting smell of the money flitting through. Well, unless AWS and Azure just snatch it all up, or the likes of Google decides to zero out the market.

We used to debate the VMware to Docker Inc. comparison and competitive angle a lot. There was some odious reaction to the idea that Docker Inc. was all about slipping in and taking over VMware’s C.R.E.A.M. At one point, that was plausible from a criss-cross applesauce state of the market, but now it’s pretty clear that, at least from an i-banker spreadsheet’s perspective, VMware’s TAM is the number you’re doinking around with.

Figuring out that TAM and market size gives you a model for any given ecosystem member’s potential take over the next 10 years. That’s a tricky exercise, though, because the technology stack and market are being re-defined. You’ve got the core virtualization and container technology, then the management layer, and, depending on whether you’re one of the mega-tech vendors that does software and hardware, you’ve got actual server, storage, and networking revenue that’s dragged along by new spend on “containers,” and then you’ve got the bogie of whatever the “PaaS-that-we-shall-not-call-PaaS” market becomes (disclaimer: that’s the one I work in, care a great deal about, am heavily incentivized to see win, and am rooting for – roll in the bias droids!).

I skipped figuring out the market size last year when I tried to round up the Docker market. Needless to say, I’d describe it as “fucking-big-so-stop-asking-questions-and-ride-the-God-damn-rocket.”

Looking at it from a “that giant sucking sound” perspective, most all of the members in the Docker ecosystem will be in a zero-sum position if Docker Inc moves into, and wins, the upper management layers. Hence, you see them fighting tooth-and-nail to make sure Docker Inc is, from their perspective, kept in its place.