451’s container orchestration usage survey – Notebook


As part of CoreOS’s conference this week, 451 put out a sponsored study on container orchestration. It’s been much cited and is free, so it’s worth taking a look. Here are my highlights and notes:

  • Leadgen yourself to CoreOS to get a copy of the report.
  • This report is really more of a “container orchestration usage” report than much about “hybrid cloud.”
  • Demographics:
    • “We surveyed 201 enterprise IT decision-makers in April and May 2017. This was not a survey of developers; rather, we received responses from those in C-level and director-level positions, including CISO, CTO, CIO, director of IT, IT Ops and DevOps, and VPs and managers of IT.”
    • All from the US
    • “All of our survey respondents came from organizations using application containers, and all were familiar with their organization’s use of containers.” – This survey, then, tells you what people who’re already using containers are doing, not what the entire market is thinking and planning on.
    • “A significant slice of the survey respondents represented large enterprises.”
  • Organizations are hoping to use containers to leapfrog other technologies: “[a] ‘leapfrog’ effect, whereby containers are viewed as a way to skip adoption of other technologies, was tested, and a majority of respondents think Kubernetes and other container management and orchestration software is sufficient to replace both private clouds and PaaS.”
  • Obviously I’m biased, being at Pivotal, but the question here is “to do what?” As we like to say around here, you’re going to end up with a platform. People need a “platform” on top of that raw IaaS, and as things like Istio show (not to mention Pivotal’s ongoing momentum), the lower levels aren’t cutting the mustard.
  • There’s an ongoing semantic argument about what “PaaS” means to be mindful of, as well: in contexts like these, the term is often taken to mean “that old stuff, before, like 2009.” At the very least, as with Gartner’s PaaS Magic Quadrant, the phrase often means “only in the public cloud.” Again, the point is: if you’re developing and running software you need an application development, middleware, and services platform. Call it whatever you like, but make sure you have it. It’s highly likely that these “whatever you want to call ‘PaaS’” PaaSes will run on top of and with container orchestration layers, for example, as Cloud Foundry does and is doing.
  • That said, it’s not uncommon for me to encounter people in organizations who really do have a “just the containers, and maybe some Kubernetes” mind-set in the planning phase of their cloud-native stuff. Of course, they frequently end up needing more.
  • Back to the survey: keeping in mind that all respondents were already using containers (or at least committed to doing so, I think), ~27% had “initial” production container use and ~25% had “broad” production use. So, if you were being happy-path, you’d say “over half of respondents have containers in production.”
  • In a broader survey (where, presumably, not every enterprise was already using containers) of 300+ enterprises, production container use was lower: 19% were in initial production and 8% were in broad production implementation.
  • Nonetheless, 451 has been tracking steady, high growth in container usage for the past few years, putting the container market at $1.1bn in 2017, growing to $2.7bn by 2020.
  • As the report says, it’s more interesting to see what benefits users actually find once they’re using the technology. The original desires often turn out to have been just puppy-love notions once they get to actual usage:

  • Interesting note on lock-in: “Given that avoiding vendor lock-in is generally a priority for organizations, it might seem surprising that it was not ranked higher as an advantage since much of the container software used today is open source… However, our respondents for this study were users of containers, and may have assumed that the technology would be open source and, thus, lock-in less of a concern.” (There’s a whole separate report from Gartner on lock-in that I’ll take a look at, and, of course, some 140 character level analysis.)
  • On market share, rated by usage, not revenue:

  • On that note, it’s easy to misread the widely quoted finding of “[n]early three-quarters (71 percent) of respondents indicated they are using Kubernetes” as meaning only Kubernetes. Actually, people are using many of them at once. The report clarifies this: “The fact that almost 75% of organizations reported using Kubernetes while the same group also reported significant use of other container management and orchestration software is evidence of a mixed market.”

As one last piece of context, one of the more recent Gartner surveys for container usage puts usage at around 18%, with 4% of that being “significant production use”:


Of course, looking at more specialized slices of the market turns up higher usage.

This early in the container market, it’s good to read surveys closely because the sample sizes will be small and selective, and most people will only have used containers for a short while. But there’s good stuff in this survey; it’s definitely worth looking at and using.

~9m/yr. VR unit shipments in context


Simon Sharwood pulls together some shipment numbers to put VR headset shipments in context.

The tl;dr on annual shipments: 9.2m VR headsets, vs. 135.6m wearables, vs. ~1.5bn smartphones.

Details

VR headsets have a runrate of, like, 9.2m units:

Virtual reality headsets are moving at a rate of 2.3 million a quarter

But, fast growing:

IDC says shipments are up 77.4 per cent year over year.

Meanwhile, wearables are at something like “33.9 million shipments a quarter,” i.e., a runrate of 135.6m units a year.
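To spell out the run-rate arithmetic – it’s just me annualizing the quarterly figures:

    2.3m per quarter  × 4 quarters ≈   9.2m VR headsets a year
    33.9m per quarter × 4 quarters ≈ 135.6m wearables a year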

Meanwhile, taken from this year’s Internet Trends report (sourced from Morgan Stanley), smartphone shipments are under 1.5bn, though slowing in growth:


And then smartphone shipments from IDC (probably where Morgan got those numbers):

For the full year [of 2016], the worldwide smartphone market saw a total of 1.47 billion units shipped, marking the highest year of shipments on record, yet up only 2.3% from the 1.44 billion units shipped in 2015.

Source: Virtual reality headsets even less popular than wearable devices

How’s HPE doing? Shrinking on purpose & otherwise

Many quotes of HPE’s CEO, Meg Whitman, explaining the state of HPE, 18 months after all the hijinks. Also, notes on some further cost reductions in the works: “We believe we can take out another $200 million to $300 million in cost in just the second half of this year.”

Stuart Lauchlan’s conclusion:

No-one can doubt the ambition in play here, a corporate reinvention on a massive scale that was never going to be entirely without bumps in the road.

See also his summary of the other half, HP.

Link

Trumponomics: focusing on weird things with a small staff

From The Economist a few weeks back:

The real difference is that Trumponomics (unlike, say, Reaganomics) is not an economic doctrine at all. It is best seen as a set of proposals put together by businessmen courtiers for their king. Mr Trump has listened to scores of executives, but there are barely any economists in the White House. His approach to the economy is born of a mindset where deals have winners and losers and where canny negotiators confound abstract principles. Call it boardroom capitalism.

And, on trade, where history points towards a more open approach being successful:

Contrary to the Trump team’s assertions, there is little evidence that either the global trading system or individual trade deals have been systematically biased against America. Instead, America’s trade deficit—Mr Trump’s main gauge of the unfairness of trade deals—is better understood as the gap between how much Americans save and how much they invest. The fine print of trade deals is all but irrelevant. Textbooks predict that Mr Trump’s plans to boost domestic investment will probably lead to larger trade deficits, as it did in the Reagan boom of the 1980s. If so, Mr Trump will either need to abandon his measure of fair trade or, more damagingly, try to curb deficits by using protectionist tariffs that will hurt growth and sow mistrust around the world.

Meanwhile, by the numbers, the focus is obviously on the wrong sectors for juicing:

A deeper problem is that Trumponomics draws on a blinkered view of America’s economy. Mr Trump and his advisers are obsessed with the effect of trade on manufacturing jobs, even though manufacturing employs only 8.5% of America’s workers and accounts for only 12% of GDP. Service industries barely seem to register. This blinds Trumponomics to today’s biggest economic worry: the turbulence being created by new technologies. Yet technology, not trade, is ravaging American retailing, an industry that employs more people than manufacturing. And economic nationalism will speed automation: firms unable to outsource jobs to Mexico will stay competitive by investing in machines at home. Productivity and profits may rise, but this may not help the less-skilled factory workers who Mr Trump claims are his priority.

Check out the rest: “Courting trouble”.

Internet mattress momentum: Casper had ~$200m in 2016 sales

Casper had been out raising a large round of funding when the talks started, sources said. The startup generated around $200 million in sales in 2016 — its second full year in business — and was valued at around $550 million after its last private investment in 2015.

And, as the headline says: “Target looked at buying the mattress startup Casper for $1 billion but will invest instead.”

There’s a fair amount of commentary on this type of e-commerce stuff in this year’s Internet Trends report as well.

Link

Core DevOps (tech) metrics, from Nicole Forsgren

Everyone always wants to know metrics. While the answer is always a solid “it depends – I mean, what are your business goals and then we can come up with some KPIs,” there’s a recurring set of technical metrics. Nicole lists some off:

These IT performance metrics capture speed and stability of software delivery: lead time for changes (from code commit to code deploy), deployment frequency, mean time to restore (MTTR), and change fail rate. It’s important to capture all of these because they are in tension with each other (speaking to both speed and stability, reflecting priorities of both the dev and ops sides of the team), and they reflect overall goals of the team. These metrics as a whole have also been shown to drive organizational performance.

And, then, further summarized by Daniel Bryant:

Key metrics for IT performance capture speed and stability of software delivery, and include: lead time for changes (from code commit to code deploy), deployment frequency, mean time to restore (MTTR), and change fail rate.
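To make those four measures concrete, here’s a minimal sketch of how you might compute them from a log of deployments. This is my own illustration, not anything from the interview; the records and field names are made up:

    # Hypothetical deployment records: when the change was committed and deployed,
    # whether the deploy caused a failure, and how long it took to restore service.
    from datetime import datetime
    from statistics import mean

    deploys = [
        {"commit": datetime(2017, 6, 1, 9),  "deploy": datetime(2017, 6, 1, 15),
         "failed": False, "restore_minutes": 0},
        {"commit": datetime(2017, 6, 2, 10), "deploy": datetime(2017, 6, 3, 11),
         "failed": True,  "restore_minutes": 45},
        {"commit": datetime(2017, 6, 5, 8),  "deploy": datetime(2017, 6, 5, 9),
         "failed": False, "restore_minutes": 0},
    ]

    # Lead time for changes: code commit to code deploy, averaged.
    lead_time_hours = mean(
        (d["deploy"] - d["commit"]).total_seconds() / 3600 for d in deploys)

    # Deployment frequency: deploys per day over the observed window.
    window_days = (max(d["deploy"] for d in deploys)
                   - min(d["deploy"] for d in deploys)).days or 1
    deploys_per_day = len(deploys) / window_days

    # Change fail rate: share of deploys that caused a failure.
    change_fail_rate = sum(d["failed"] for d in deploys) / len(deploys)

    # Mean time to restore (MTTR): average restore time across failed deploys.
    failures = [d for d in deploys if d["failed"]]
    mttr_minutes = mean(d["restore_minutes"] for d in failures) if failures else 0

    print(lead_time_hours, deploys_per_day, change_fail_rate, mttr_minutes)

Tracked over time, the tension Nicole mentions shows up directly: the goal is to push deployment frequency up without lead time, change fail rate, or MTTR getting worse.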

Also in the interview, a concise DevOps definition:

I define DevOps as a technology transformation that drives value to organizations through an ability to deliver code with both speed and stability.

See the rest.

Introducing microservices

There are some good “how do I actually get my organization to do all this unicorn stuff” comments in this interview with DreamWorks Animation’s Doug Sherman.

Here’s one sample bit on winning people over to microservices. Instead of going into the lab for six months to work on a tool that they think will be useful, they do a lot more user-driven work upfront and then do (it sounds like) weekly small batches to keep the users apprised of the tools and, you’d guess, give continuous feedback:

You have to understand what people want to do in their domain. In the past, I’ve gotten it wrong. I’ll come up with an idea I think is sound – I think it’s the coolest thing ever – and I’ll work six months in isolation with my team, and then we’ll do this big reveal. And every time we’ve done that, it’s gone horribly wrong, because 1) people feel like we’re lecturing to them, like we know better than them. And then 2) we would typically have over-engineered it! It would be like the 747 cockpit, you know? There would be this overwhelming amount of knobs and bits and pieces that I think are great to have, but from their viewpoint, they only need to do a few things, and that’s an overwhelming amount of stuff to have to sign up to be able to do. So now, I’ve gotten into a habit: before I even write a single line of code, I interview everybody that potentially will use the solution that I’m going to write, and I keep them in lockstep with me and my team just about every week. We keep them engaged, helping to influence the direction. I’m basically trying to echo out in code all of what they want. It’s gone so much better, because they feel invested. They don’t feel like in six months I’m revealing this big, mysterious thing. They feel like this is just something they’ve seen through iterations. And what’s empowering about that, too, is if you can get the spiritual leaders of the different departments that you’re trying to encourage to use your solution, they’ll help sell it for you.

And then a bit on their progress:

We’re about 50% of the way in having some amount of production coverage powered by microservices which are deployable in cloud containers powered by technologies such as Spring and Spring Cloud.

There are more good cultural-change stories in the interview.

Analysis of Mary Meeker’s Internet Trends – Notebook


Each year, Mary Meeker and team put together the Internet Trends report that draws together an ever-growing collection of charts and analysis about the state of our Internet-driven world, from the latest companies to industry and economic impact. Over the years, the report has gone on to include analysis of markets like China and India. Being a production of the Kleiner Perkins Caufield & Byers venture capital firm, the focus is typically on new technologies and the corresponding business opportunities: you know, the stuff like “millennials like using their smartphones” and the proliferation of smartphones and the Internet globally.

These reports are good for more than just numbers-gawking, and can also give some quantitative analysis of new technology innovations in various industries. The consumer and advertising space consumes much of this business analysis, but, for example, in this year’s report there’s an interesting analysis of health-care and transportation (bike sharing in China!). For enterprises out there, it may seem to over-index on startups and small companies, but that doesn’t detract from the value of the ideas when it comes to any organization looking to do some good, old-fashioned “digital transformation.”

Normally, I’d post my notebook things here, but the Pivotal blog overlords wanted to put this in on the Pivotal blog, so check it out there.

Figuring out fixing federal government IT – Notebook

In the US, we love armchair-strategizing about government IT, in particular federal IT. Getting your arms around “the problem” is near impossible.

What do we think is wrong, exactly?

As citizens, our perceptions seem to be that government IT has a poor user experience or none at all (there’s no app to do things, you have to go to an office to fill something out, etc.), and that it costs too much. More wonky takes are that there’s not enough data provided, nor enough insights generated from that data to drive better decision-making.

When I’ve spoken with government IT people, their internal needs revolve around increasing (secure) communication, using more modern “white-collar” tools (from simply upgrading their copies of Office, to moving to G Suite/Office 365 suites, or just file sharing), and addressing the citizen perceptions above (bringing down costs and making sure the software, whether custom-made or “off the shelf,” has a better customer experience).

Is it so hard, really?

It’s also easy to think that government is a special snowflake, but, really, it has mostly the same problems as any large organization. As highlighted below, the contracting, procurement, and governance processes are more onerous in government IT, and the profile of “legacy” systems is perhaps higher and, worse, more of a pull down into the muck.

From my conversations, one of the main barriers to change is systemic inertia, seemingly driven by avoidance of risk and an overall lack of motivation to do anything. This lack of motivation is likely driven by the lack of competition: unlike in the private sector, there’s no other government to go to, so there’s no fear of losing “business,” and thus little incentive to change or make things better.

Anyhow, here’s a notebook of federal government IT.

“Legacy”

  • “92 percent of Federal IT managers say it’s urgent for their agency to modernize legacy applications, citing the largest driving factors as security issues (42 percent), time required to manage and/or maintain systems (36 percent), and inflexibility and integration issues (31 percent)” from an Accenture sponsored 2015 survey of “150 Federal IT managers familiar with their agency’s applications portfolio”
  • There’s a large pool of legacy IT, though not as large as you might think: ~60% of portfolios are from before 2010 (https://www.gartner.com/document/3604417).
  • That said, the same report says that ~25% of portfolios are pre-1999, with 5% from the 1980s.
  • On spending: “The government has been reporting that 75 to 80 percent of the federal IT budget is spent on running legacy (or existing) systems.”
  • But, actually, that’s pretty normal: “That may sound alarming to those who aren’t familiar with the inner-workings of a large IT organization. However, the percentage is in-line with the industry average. Gartner says the average distribution of IT spending between run, grow and transform activities — across all industries — is 70 percent, 19 percent and 11 percent respectively. Those numbers have been consistent over the past decade.”
  • However, the spending items above are from Compuware’s CEO, who’s clearly interested in continuing legacy spending, mostly on mainframes.

Priorities

Source: “2017 CIO Agenda: A Government Perspective,” Rick Howard, Gartner, Feb. 2017.

Other notes:

  • In the same survey, data & analytics skills are the leading talent gap, with security coming in second. Everything else is in the single digits.
  • Why care about data? On simply providing it (and, you know, the harder job of producing it), the UN e-Government survey says “Making data available online for free also allows the public – and various civil society organizations – to reuse and remix them for any purpose. This can potentially lead to innovation and new or improved services, new understanding and ideas. It can also raise awareness of governments’ actions to realize all the SDGs, thus allowing people to keep track and contribute to those efforts.”
  • And, on analytics: “Combining transparency of information with Big Data analytics has a growing potential. It can help track service delivery and lead to gains in efficiency. It can also provide governments with the necessary tools to focus on prevention rather than reaction, notably in the area of disaster risk management.”
  • Reducing compliance overhead and overall “bureaucracy” is a perennial problem. My benchmark case is an 18F project that reduced the paperwork time for an ATO (authority to operate) from 9-14 months down to 3 days.

The workloads – what’s the IT do?

  • And, while it’s for the Australian government, check out a good profile of the kinds of basic services, and, therefore, applications that agencies need, e.g.: booking citizenship process appointments, getting permits to open businesses, and facilitating the procurement process.
  • If you think about many of the business services governments do, it’s workflow processing: someone submits a request, multiple people have to check and correlate the data submitted, and then someone has to approve the request (see the sketch just below). This is a core, ubiquitous thing handled by enterprise software and, in theory, shouldn’t be that big of a deal. But, you know, it usually is. SaaS offerings are a great fit for this, you’d hope.
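To make “workflow” concrete, here’s a minimal sketch of that submit/review/approve shape. It’s purely illustrative on my part, not drawn from any actual government system; the names and rules are made up:

    from enum import Enum

    class State(Enum):
        SUBMITTED = "submitted"
        UNDER_REVIEW = "under review"
        APPROVED = "approved"
        REJECTED = "rejected"

    class Request:
        """A citizen's request (say, a business permit) moving through review."""

        def __init__(self, applicant, data):
            self.applicant = applicant
            self.data = data            # the submitted form
            self.reviews = []           # multiple people check the submission
            self.state = State.SUBMITTED

        def review(self, reviewer, ok, notes=""):
            # Each reviewer cross-checks the submitted data and records a verdict.
            self.reviews.append((reviewer, ok, notes))
            self.state = State.UNDER_REVIEW

        def decide(self):
            # Final approval only once every reviewer has signed off.
            if self.reviews and all(ok for _, ok, _ in self.reviews):
                self.state = State.APPROVED
            else:
                self.state = State.REJECTED
            return self.state

    permit = Request("acme-bakery", {"type": "business permit"})
    permit.review("zoning", ok=True)
    permit.review("fire-safety", ok=True)
    print(permit.decide())  # State.APPROVED

Most of the hard part in the real versions is everything around that loop: identity, records retention, audit trails, and the compliance processes covered below.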

The problems: the usual old process, expensive COTS, contractors, compliance

  • If you accept that much of government IT is simple workflow management, much of the improvement in the quality and cost of government IT would likely come from shifting off custom, older IT to highly commoditized, cheap (and usually faster-evolving and more secure) SaaS-based services.
  • Jennifer Pahlka: “When you consider that much of what ails government today is the use of custom development at high cost when a commodity product is readily and cheaply available, we must acknowledge that agile is one useful doctrine, not the doctrine.”
  • So, if you do the old “IT – SaaS = what?” exercise, you suck out a lot of resources (money, attention, etc.) by moving off janky, expensive COTS systems (and all the infrastructure and operations support needed to run them). You can both cut these costs (fire people, shut down systems) and then reallocate resources (people, time, and money) to doing custom software better. That, in turn, gets you back to “agile,” which I always read as “software development.”
  • In my experience, government IT has the same opportunities as most companies, taking on a more “agile” approach to IT. This means doing smaller, faster to release batches, with smaller, more focused, “all in teams.” Again, the same thing as most large organizations.
  • An older survey (sponsored by Red Hat): “Just 13% of respondents in a recent MeriTalk/Accenture survey of 152 US Federal IT managers believed they could ‘develop and deploy new systems as fast as the mission requires.’”
  • Mikey Dickerson, 2014: “We’ll break that up by discouraging government contracts that are multibillion-dollar and take years to deliver. HealthCare.gov would have been difficult to roll out piecemeal, but if you, a contractor, have to deliver some smaller thing in four to six weeks while the system is being constructed, you’ll act differently.”
  • Government contractors and procurement are a larger problem in government IT, though. The structure of how business is done with third parties, and the related procurement and compliance red tape, causes problems and, as put by Andrew McMahon, creates “a procurement process that has become more important than the outcome.”
  • While there’s “too much” red tape, in general we want a huge amount of transparency and oversight into government work. In the US, we don’t really trust the government to work efficiently. This becomes frustratingly ironic and circular, then, if your position is that all of that oversight and compliance is a huge part of the inefficiency.
  • As put by one government CIO: “Government agencies, therefore, place a business value on ‘optics’—how something appears to the observant public. In an oversight environment that is quick to assign blame, government is highly risk averse (i.e., it places high business value on things that mitigate risk)…. the compliance requirements are not an obstacle, but rather an expression of a deeper business need that the team must still address.”

Success story

  • Tom Cochran: “While running technology for Obama’s WhiteHouse.gov, open-source solutions enabled our team to deliver projects on budget and up to 75% faster than alternative proprietary-software options. More than anything, open-source technology allows governments to utilize a large ecosystem of developers, which enhances innovation and collaboration while driving down the cost to taxpayers.”
  • As with “agile,” it’s important to not put all your eggs-of-hope in one basket on the topic of open source. My theory is that for many large organizations, simply doing something new and different, upgrading – open or not – will improve your IT situation:
  • While open source has different cost dynamic, I’d suggest that simply switching to new software to get the latest features and mindset that the software imbues gives you a boost. Open source, when picked well, will come with that community and an ongoing focus on updates: older software that has long been abandoned by the community and vendors will stall out and become stale, open or not.
  • One example of success, from Pivotal-land, is the IRS’s modernization of reporting on delinquent taxes. It moved from a costly, telephone-based system with low customer-service quality to an online approach. As I overuse in most of my talks, they applied a leaner, more “agile” approach to designing the software and now “taxpayers have initiated over 400,000 sessions and made over $100M in payments after viewing their balance.”

If you’re really into this kind of thing, you should come to our free Pivotal workshop day in D.C., on June 7th. Mark Heckler and I will be going over how to apply “cloud-native” thinking, practices, and technologies to the custom written software portion of all this. Also, I’ll be speaking at a MeetUp later that day on the overall hopes and dreams of cloud-native, DevOps, and all that “agile” stuff.

Appian and tech IPOs for horses

Appian raised just $48m as a private company, compared with $163m for Alteryx, $220m for Okta, $259m for MuleSoft and more than $1bn for Cloudera. In fact, all four of the unicorn IPOs raised more in a single round of private-market funding than Appian did in total VC funding.

Not having done an IPO-sized funding in the private market meant that Appian could come public with a more modest raise. (It took in just $75m, compared with this year’s previous IPOs that raised, on average, $190m for the four unicorns.) And, probably most importantly, the Appian offering showed that these types of IPOs can work, both for issuers and investors. (Appian created about $900m of market value, and saw its shares finish the first day of trading up about 25%.) So when it comes to IPOs for the second half of this year, the ‘Appian way’ could help a lot more startups make it to Wall Street.

– Brenon Daly, “Will the ‘Appian way’ lead more startups to Wall Street?”

Put another way: maybe you don’t have to be unicorn class to IPO now? Who knows really, it’s always a bit of a mystery.

Pretty RAD, customer count and profile

That said, what exactly does Appian do? Seems like one of those SaaS workflow/RAD companies. Nuthin’ wrong with that:

Appian provides app development software for its business and government customers. “With our platform, organizations can rapidly and easily design, build and implement powerful, enterprise-grade custom applications through our intuitive, visual interface with little or no coding required,” the company explained in their S-1 filing…. Appian acknowledges that its biggest competitors are Salesforce and ServiceNow. IBM and Oracle are also in related spaces.

More from Duncan Riley:

Founded in 1999, Appian offers a software as a service platform that helps business people create enterprise applications, especially for managing business processes, without needing programming expertise. The company is known for its “low-code” approach that allows non-programmers to create applications using building blocks and data, but managed and deployed by developers in a company’s information technology department, all on the same technology platform.

And:

  • “…booked $135 million in sales for the 12 months ended March 31, albeit at a loss of $12.5 million.”
  • “The company’s client base includes the U.S. Department of Agriculture, Sprint, Ryder, Dallas-Fort Worth Airport, BUPA North America, CenturyLink and the Department of Homeland Security.”

Meanwhile, Gartner’s Magic Quadrant on this space (“Enterprise High-Productivity Application Platform as a Service”) says of Appian:

Appian is an hpaPaaS vendor with strong business process management (BPM) and case management capabilities. Appian has been delivering its Appian Cloud platform since 2007. It has taken a unified-platform approach that enables a single application definition to be accessed on a range of devices without additional development. Appian applications can be developed and executed both on-premises and on its aPaaS offering. Appian has positioned its Appian Cloud platform for general-purpose application development, which includes robust process orchestration, application life cycle management and integration capabilities that compete with both hpaPaaS and high-performance RAD vendors, with a common per-user or per-application-and-user pricing model.

And a few more interesting items from Gartner:

  • “There are more than 400 customers using Appian Cloud”
  • “90% of the reference customers delivered applications in less than three months, which is a much higher proportion than average.”
  • “With a predominantly direct selling effort and higher price point, Appian’s focus has been at the higher end of hpaPaaS market and not on small or midsize businesses (SMBs).”
  • It uses OSGi!