More on “grim” automation – Notebook

A few weeks back, my review of two "the robots are taking over" books came out over on The New Stack. Here are some responses, and also some highlights from a McKinsey piece on automation.

Don’t call it “automation”

From John Allspaw:

There is much more to this topic. Nick Carr's book, The Glass Cage, has a different perspective. The ramifications of new technology (don't call it automation) are notoriously difficult to predict, and what we think are foregone conclusions (unemployment of truck drivers even though the tech for self-driving cars needs to see much more diversity of conditions before it can get to the 99%+ accuracy) are not.

Lisanne Bainbridge in her seminal 1983 paper outlines what is still true today.

From that paper:

This paper suggests that the increased interest in human factors among engineers reflects the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator.

When things go wrong, humans are needed:

To take over and stabilize the process requires manual control skills, to diagnose the fault as a basis for shut down or recovery requires cognitive skills.

But their skills may have deteriorated:

Unfortunately, physical skills deteriorate when they are not used, particularly the refinements of gain and timing. This means that a formerly experienced operator who has been monitoring an automated process may now be an inexperienced one. If he takes over he may set the process into oscillation. He may have to wait for feedback, rather than controlling by open-loop, and it will be difficult for him to interpret whether the feedback shows that there is something wrong with the system or more simply that he has misjudged his control action.

There's a good case made not only for the need for humans, but for keeping humans fully trained and involved in the process to handle error states.

Hiring not abating

Vinnie, the author of one of the books I reviewed, left a comment on the review, noting:

For the book, I interviewed practitioners in 50 different work settings – accounting, advertising, manufacturing, garbage collection, wineries etc. Each one of them told me where automation is maturing, where it is not, how expensive it is etc. The litmus test to me is are they stopping the hiring of human talent – and I heard NO over and over again even for jobs for which automation tech has been available for decades – UPC scanners in groceries, ATMs in banking, kiosks and bunch of other tech in postal service. So, instead of panicking about catastrophic job losses we should be taking a more gradualist approach and moving people who do repeated tasks all day long and move them into more creative, dexterous work or moving them to other jobs.

I think Avent's worry is that the approach won't be gradual and that, as a society, we won't be able to change norms, laws, and "work" fast enough.


For more context, check out this overview of McKinsey's own study and analysis from a 2015 McKinsey Quarterly article:

The jobs don’t disappear, they change:

Our results to date suggest, first and foremost, that a focus on occupations is misleading. Very few occupations will be automated in their entirety in the near or medium term. Rather, certain activities are more likely to be automated, requiring entire business processes to be transformed, and jobs performed by people to be redefined, much like the bank teller’s job was redefined with the advent of ATMs.


our research suggests that as many as 45 percent of the activities individuals are paid to perform can be automated by adapting currently demonstrated technologies… fewer than 5 percent of occupations can be entirely automated using current technology. However, about 60 percent of occupations could have 30 percent or more of their constituent activities automated.

Most work is boring:

Capabilities such as creativity and sensing emotions are core to the human experience and also difficult to automate. The amount of time that workers spend on activities requiring these capabilities, though, appears to be surprisingly low. Just 4 percent of the work activities across the US economy require creativity at a median human level of performance. Similarly, only 29 percent of work activities require a median human level of performance in sensing emotion.

So, as Vinnie also suggests, you can automate all that stuff and have people focus on the “creative” things, e.g.:

Financial advisors, for example, might spend less time analyzing clients’ financial situations, and more time understanding their needs and explaining creative options. Interior designers could spend less time taking measurements, developing illustrations, and ordering materials, and more time developing innovative design concepts based on clients’ desires.

Oracle acquiring Apiary, API design for the $660m (in 2020) API market

As for Oracle, the enterprise software vendor wants to use Apiary's technology set to make its existing API Integration Cloud more robust. Oracle's API product focuses primarily on services that help companies monetize and analyze APIs. Apiary provides more of a front-end platform for designing, creating and governing APIs. From Natalie Gagliordi at ZDNet.

From CrunchBase:

  • $8.55M in funding, over three rounds
  • Founded April 2011.

Apigee was acquired by Google last year for $625m. Of course, Apigee was a public company with (let's hazard a guess) many, many more customers and more revenue: $92.03m in FY2016, to be exact.

Back in September 2015, Carl Lehmann at 451 Research said Apiary had 33 employees (up from 22 in December 2014) and estimated its revenue at $2-3m. Carl now says it's "likely below $5m in annual revenue."

What Apiary does

Apiary’s promise is to be quick and easy when it comes to managing the full life-cycle of API design. As their CEO, Jakub Nesetril, put it when I interviewed him in 2015:

It all starts with that first meeting when you’re thinking about building an API and you’re either kind of, you know, you’re inside meeting room ideating on a white board and then taking a photo of it and sending it to a co-worker, or summarizing it down into an email and sending it down to somebody else, saying hey, I just thought would could build something like this. That white board should be. And, if you do that it becomes, you know, we do a lot to try to make it super simple. We have a language that is like really, really simple for developers to write and we can write down a quick API in five minutes. It’s marked down, it’s like very organic, it’s very simple for developers.

What it creates for you, is creates this kind of common space, common language kind of when you talk about it that’s machine readable, human writable so it’s super simple but it’s also machine writable, and machine readable. The important aspect of it is that we take your white board, we take your … we build a language that we have API blue prints. It’s a… We take that API blueprint and we immediately create a API prototype, the moment you hit your first button. So, from day one when you’ve proposed your first API idea, your first resource you know, your first data structure. You have an API that’s sitting out there on the internet, somebody can query it and guess what, if they decide that the API is broken, that they would like to have a different resource, they would like to change the of a certain data structure, they would like add to it, whatever. They can go in, edit that out, click the save button and boom the API prototype is updated immediately.

Load in some enterprise governance and access controls, and you have something nice and useful. See him explaining more in this 2013 InfoQ interview.
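To make the "write down a quick API in five minutes" bit concrete, here's a minimal, hypothetical API Blueprint sketch; the resource and fields are invented for illustration, not taken from Apiary's docs:

```apib
FORMAT: 1A

# Notes API
A toy API for illustrating the API Blueprint format.

## Note [/notes/{id}]

+ Parameters
    + id (string) - ID of the note

### Retrieve a Note [GET]

+ Response 200 (application/json)

        { "id": "1", "body": "Hello, world" }
```

Paste something like this into Apiary and, per the interview above, you get a hosted mock/prototype of the API that colleagues can query and edit immediately.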

Carl at 451 summarized the meat of what they do back in that 2015 report:

Apiary structures its API lifecycle management platform into five phases. The design phase includes the means to ensure API design consistency using a style guide, a collaborative editor and an approval process. The prototype phase includes productivity capabilities such as auto-generated code and a feedback loop for quality assurance. The implementation phase enables agile-inspired and test-driven development practices, helps deploy server code, and provides for framework integration. The delivery phase includes tools for automated documentation, offers code samples, guides the release of final client code, and offers SDKs. The feedback phase includes debugging, support and usage metrics.

The Money – grabbing part of the $3bn pie

Forrester threw out some API management market-sizing back in June of 2015 (there’s likely something more up-to-date behind their paywall):

We predict US companies alone will spend nearly $3 billion on API management over the next five years. Annual spend will quadruple by the end of the decade, from $140 million in 2014 to $660 million in 2020. International sales will take the global market over the billion dollar mark.
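Forrester's figures imply a steep compounding rate. A quick back-of-the-envelope sketch, assuming straight annual compounding between the $140m (2014) and $660m (2020) numbers quoted above:

```python
# Implied compound annual growth rate (CAGR) of the US API management
# market, per the Forrester 2014 and 2020 figures quoted above.
start, end = 140e6, 660e6   # annual spend in USD, 2014 and 2020
years = 2020 - 2014

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 29.5% per year
```

That's consistent with Forrester's "quadruple by the end of the decade" framing.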

With Oracle's footprint in all of enterprise applications and IT (they own Java and share much of the JEE market with IBM), there are likely some genuine synergies to be had. That is, Oracle could be in a position to boost Apiary sales way above what the tiny company could do on its own.

To be clear, as pointed out above, Apiary doesn't do all that Apigee does. Apiary covers just the development/design-time part of APIs, along with documentation.

That's helpful for sure, but I'd guess most of Forrester's $3bn estimate is in actually running and managing APIs. In fact, it's probably more realistic to put Apiary in the development tools/ALM TAM, which is probably in the low single-digit billions. That said, I'm guessing Forrester would put Apiary in their API management bucket; after all, it has "API" in it!

As more background, we talked about the API management market back when the Apigee acquisition was announced, both on Software Defined Talk and Pivotal Conversations.


“the obsolescence of Java EE” – Notebook

Bottom line: Java EE is not an appropriate framework for building cloud-native applications.

In preparation for this week’s Pivotal Conversations, I re-read the Gartner write-up on the decline of traditional JEE and the flurry of responses to it. Here’s a “notebook” entry for all that.

From Gartner’s “Market Guide for Application Platforms”

This is the original report from Anne Thomas and Aashish Gupta, Nov 2016. Pivotal has it for free in exchange for lead-gen'ing yourself.

What is an "application platform" vs. aPaaS, etc.?

Application platforms provide runtime environments for application logic. They manage the life cycle of an application or application component, and ensure the availability, reliability, scalability, security and monitoring of application logic. They typically support distributed application deployments across multiple nodes. Some also support cloud-style operations (elasticity, multitenancy and self-service).

An "aPaaS" is a public-cloud-hosted PaaS, of which they say: "By 2021, new aPaaS deployments will exceed new on-premises deployments. By 2023, aPaaS revenue will exceed that of application platform software."

On the revenue situation:


Commercial Java Platform, Enterprise Edition (Java EE) platforms’ revenue declined in 2015, indicating a clear shift in the application platform market…. Application platform as a service (aPaaS) revenue is currently less than half of application platform software revenue, but aPaaS is growing at an annual rate of 18.5%, and aPaaS sales will supersede platform software sales by 2023.


Currently, the lion’s share of application platform software revenue comes from license sales of Java EE application servers. From a revenue perspective, the application platform software market is dominated by just two vendors: Oracle and IBM. Their combined revenues account for more than three-quarters of the market.

Revenue for current market leaders IBM and Oracle declined over the last three years (4.5% and 9.5%, respectively); meanwhile, Red Hat, AWS, and Pivotal saw upticks (33.3%, 50.6%, and 22.7%, respectively).

The decline/shift is driven by:

given the high cost of operation, the diminishing skill pool and the very slow pace of adoption of new technologies, a growing number of organizations — especially at the low end of the market — are migrating these workloads to application servers or cloud platforms, or replacing them with packaged or SaaS applications.


Java EE has not kept pace with modern architectural trends. Oracle is leading an effort to produce a new version of Java EE (version 8), which is slated to add a host of long-overdue features; however, Oracle announced at Oracle OpenWorld 2016 that Java EE 8 has been delayed until the end of 2017. By the time Java EE catches up with basic features required for today's applications, it will be at least two or three years behind the times again.

Target for cloud native:

Design all new applications to be cloud-native, irrespective of whether or not you plan to deploy them in the cloud…. If business drivers warrant the investment, rearchitect existing applications to be cloud-native and move them to aPaaS.

Vendor selection:

Give preference to vendors that articulate a platform strategy that supports modern application requirements, such as public, private and hybrid cloud deployment, in-memory computing, multichannel clients, microservices, event processing, continuous delivery, Internet of Things (IoT) support and API management.


Oracle and Java: confusing

Oracle's stewardship of Java has been weird of late.

It’s all about WebLogic and WebSphere

I think this best sums it all up, the comments from Ryan Cuprak: “What this report is trying to do is attack Oracle/IBM via Java EE.”

I wouldn't say "attack," but rather show that their app servers, as well as their transaction-processing products, are in decline. The report is trying to call the shift to a new way of development (cloud native) and the resulting shifts in product market share, including new entrants like Pivotal.

I can't speak to how JEE is changing itself, but given past performance, I'd assume it'll be a sauntering follower in adopting new technologies; the variable this time is Oracle's proven ambivalence about Java and JEE and, thus, funding problems to fuel the change fast enough to keep pace with everything else.

HPE Software sold for $8.8bn, to Micro Focus


While HPE is getting $2.5bn in cash, the whole deal value is more like $8.8bn, the non-cash being stock. More details:

The Numbers

  • “Under the deal, HP Enterprise shareholders are expected to end up with Micro Focus shares currently valued at about $6.3 billion. Micro Focus will pay HP Enterprise $2.5 billion in cash.” (WSJ)
  • There’s about 12,000 people in HPE Software. (WSJ)
  • HPE Software revenue: “HPE’s software unit generated $3.6 billion in net revenue in 2015, down from $3.9 billion in 2014.”
  • Put another way, from TBR: “2Q16 software revenue [had a] decline of 18% year-to-year, driven down by a license revenue decline of 28% year-to-year.”
  • HPE has been divesting a lot, getting a hoard of cash: “In earlier transactions, HP Enterprise in May completed a $2.3 billion deal in China to sell a 51% stake in a venture there called H3C that sells networking, server and storage hardware and related services. Later the same month, HP Enterprise announced a deal to spin off a computer services business that employs about 100,000 people—two-thirds of the company’s total head count—and merge it with operations of Computer Sciences Corp.”
  • Also: “The company sold at least 84 percent of its 60.5 percent stake in Indian IT services provider Mphasis Ltd to Blackstone Group for $1.1 billion in April.”

What now for HPE?


Deciding where the Docker ecosystem will make money

I drink your milkshake

The Docker forking hoopla is providing an interesting example, in real time, of how open communities figure out monetization.

#RealTalk: Open communities are not immune to C.R.E.A.M.

One of the most important decisions an open source community makes is where and how it will make money. I always liked Eclipse's take because they're mega clear on this topic; the ASF plays this goofy game where they try really hard to pretend they don't need to answer the question, which itself is an answer, resulting in only the occasional quagmire; Linux has a weird situation where Red Hat figured out This One Cool Trick to circumvent the anti-commercial leanings of the GPL; MySQL has a weird dual licensing model that I still don't fully grasp the strategic implications of; RIP Sun.

Standards play another defining role when it comes to monetization. Think of Java/J(2)EE, vs. .Net, vs. PHP (a standard-less standard?), vs. HTML and WS-*, vs. the IETF/ISOC RFC-scape that defines how the internet works. While not always, standards are often used tactically to lessen the commercial value (or zero it out completely) of any given component "lower" in the stack, pushing the money "up" the stack to the software that implements, uses, or manages the standard. Think of how HTML itself is "of no value" (and was strategically pushed that way early on), but the entire SaaS market is something like a $37.7bn market, part of the overall $90.3bn that, arguably, uses HTML as one of the core technologies in the stack, at the UI layer (along with native mobile apps now).

The dynamics of how open source, standards, and the closed source around it are defined and who “controls” them are one of the key strategic processes in infrastructure software.

The Docker ecosystem is sorting out monetization

Right now, you can see this process in action in the Docker ecosystem. Product management decisions at Docker, Inc. are forcing the community to wrestle with how ecosystem members will make money, including Docker Inc. itself.

By "ecosystem," I mean "all the people and companies that are involved in coding up Docker and/or selling Docker-based products and services." Actual end-users play a role, of course, but historically don't have as much power as we'd like at this stage of an open community's formation.

End-users have to vote with their feet and, if they have one, wallets – whether wearing expensive loafers (enterprise) or cracked sandals (paying with nothing but the pride of ubiquity) – which, by definition, is hard to do until a monetization strategy is figured out, or given up on altogether.

Looking just at the "vendors," then, each ecosystem member is trying to define which layers of the "stack" will be open, and thus free, and which layers will be closed, and thus charged for. Intermixed with this line drawing is determining who has control over features and standards (at which level) and, as a result, the creation of viable business models around Docker.

Naturally, Docker, Inc. wants as big a slice of that pie as possible. The creator of any open technology has to spend a lot of nail-biting time essentially deciding how much money and market share it wants to give up to others, even competitors. "What's in it for me?" other vendors in the ecosystem are asking…and Docker Inc.'s answer is usually either some strategic shoe-gazing or a pretty straightforward "less than you'd like."

As a side note, while I don't follow Docker, Inc. as an analyst anymore (so I'm not mega up-to-date), it seems like the company consistently puts end-users first. They're looking to play the Tron role in this ecosystem most valiantly. This role doesn't really conflict at all with elbowing for the biggest slice of the pie.

From The New Stack’s Docker & Container Ecosystem research

Similar to Docker Inc.'s incentive to maintain as much control as possible, the "not-Docker, Inc." members of the ecosystem want to commoditize/open the lower levels of the stack (the "core"), and keep the upper layers as the point of monetization. This is the easiest, probably most consistently successful business model for infrastructure software: sell proprietary software that manages the "lower," usually low-cost-to-free, layers in the stack. From this perspective, not-Docker, Inc. members want to fence in the core Docker engine and app packaging scheme as the "atomic unit" of the Docker ecosystem. Then, the not-Docker, Inc.'s want to keep the management layer above that atomic unit for themselves to commercialize (here "orchestration," configuration management, and the usual systems management stuff). But, of course, Docker Inc. is all like "nope! That's my bag o' cash."

As explained by one of those ecosystem vendors, who works at Red Hat:

And while I personally consider the orchestration layer the key to the container paradigm, the right approach here is to keep the orchestration separate from the core container runtime standardization. This avoids conflicts between different layers of the container runtime: we can agree on the common container package format, transport, and execution model without limiting choice between e.g. Kubernetes, Mesos, Swarm.

We saw similar dynamics – though by no means open source – in the virtualization market. VMware started with the atomic unit of the hypervisor (remember when we were obsessed with that component in the stack and people used that word a lot?), allowing the ecosystem to build out management on-top of that “lower” unit.

Then, as VMware looked to grow its TAM, revenue, and, thus, share price and market cap, it expanded upward into management. At this point, VMware is, more or less, the complete suite (or "solution," as we used to call it) of software you need for virtualization. E.g., they use phrases like "Software Defined Datacenter" rather than "virtualization," indicative of the intended full scope of their product strategy. (I'm no storage expert, but I think storage and maybe networking is the last thing VMware hasn't "won" hands down.)

“What, you don’t like money?”

From one of Donnie’s recent presentations.

All of this is important because over the next 10-15 years, we're talking about a lot of money. The market window for "virtualization" is open and wildcatters are sniffing at the wafting smell of the money flitting through. Well, unless AWS and Azure just snatch it all up, or the likes of Google decide to zero out the market.

We used to debate the VMware to Docker Inc. comparison and competitive angle a lot. There was some odious reaction to the idea that Docker Inc. was all about slipping in and taking over VMware's C.R.E.A.M. At one point, that was plausible given the criss-cross-applesauce state of the market, but now it's pretty clear that, at least from an i-banker spreadsheet's perspective, VMware's TAM is the number you're doinking around with.

Figuring out that TAM and market size gives you a model for any given ecosystem member’s potential take over the next 10 years. That’s a tricky exercise, though, because the technology stack and market are being re-defined. You’ve got the core virtualization and container technology, then the management layer, and depending on if you’re one of the mega-tech vendors that does software and hardware, you’ve got actual server, storage, and networking revenue that’s dragged by new spend on “containers,” and then you’ve got the bogie of whatever the “PaaS-that-we-shall-not-call-PaaS” market becomes (disclaimer: that’s the one I work in, care a great deal about, am heavily incentivized to see win, and am rooting for – roll in the bias droids!).

I skipped figuring out the market size last year when I tried to round-up the Docker market. Needless to say, I’d describe it as “fucking-big-so-stop-asking-questions-and-ride-the-God-damn-rocket.”

Looking at it from a "that giant sucking sound" perspective, most of the members in the Docker ecosystem will be in a zero-sum position if Docker Inc. moves into, and wins, the upper management layers. Hence, you see them fighting tooth-and-nail to make sure Docker Inc. is, from their perspective, kept in its place.

Rackspace goes private for $4.3bn

Rackspace launched its cloud business in 2008 and is based in Austin, TX.
  • Apollo Global Management paying $4.3bn to acquire Rackspace, $32 a share in cash, a 38 percent premium (Bloomberg)
  • Competing against AWS is hard, plus the other mega public cloud plays: “Google’s parent, Alphabet Inc., Amazon and Microsoft have combined cash holdings of more than $200 billion compared to Rackspace’s less than $1 billion.”
  • Brenon at 451 points out that Rackspace throws off a good amount of cash, "$674m of EBITDA over the past year."
  • Brenon concludes: "While we could imagine that focus on customer service as competitive differentiator might set up some tension under PE ownership (people are expensive and tend not to scale very well), Rackspace has the advantage of having built that into a profitable business. In short, Rackspace is just the sort of business that should fit comfortably in a PE portfolio."
  • Meanwhile, as we discuss on Software Defined Talk (#70, “No one wants to eat a finger-pie”), AWS is at a run-rate of ~$10-11bn and growing.
  • In the recent Gartner IaaS Magic Quadrant, Rackspace is in the dreaded lower-left corner. To be fair, a whole other MQ, "Cloud-Enabled Managed Hosting," which maps closer to what Rackspace says is its core strategy in cloud, has Rackspace leading. But, back to that "normal IaaS" MQ:
  • The MQ says “Rackspace has successfully pivoted from its ‘Open Cloud Company,’ OpenStack-oriented strategy, and returned to its roots as “a company of experts emphasizing its managed service expertise and superior support experience.”
  • Also: “Rackspace will continue to divert investment from its Public Cloud to other areas of its business, rather than try to compete directly for self-managed public cloud IaaS against market-leading providers that can rapidly deliver innovative capabilities at very low cost, or against established IT vendors that have much greater resources and global sales reach.”
  • See also Rachel’s analysis over at RedMonk.


Finally, check out a tad of commentary on the deal in #32 of Pivotal Conversations.

“De-graniting and de-brassing” – Austin’s downtown tech scene

Austin entrepreneur Campbell McNeill said WeWork’s “high energy environment, cool furniture” and location at Sixth and Congress in the heart of downtown allows his startup, Cocolevio, “to attract the young talent we need for our cloud business.”

“It would be considerably more expensive to set up a similar situation on our own as a new tech startup,” said McNeill, Cocolevio’s co-founder and chief technology officer. “We appreciate we may be paying a lot per square foot, but it is completely worth it when you consider the intangible WeWork benefits like networking with other great startups, making great friends, periodic presentations by industry leaders and WeWork Labs.”

Some more highlights from the piece:

  • “three out of four tenants looking for downtown space are likely to be tech-related, Kennedy said. ‘Ten years ago, it would have been less than half that.'”
  • “Rents for the highest quality office space in downtown Austin average $49.07 a square foot per year, according to Cushman & Wakefield. That’s 40 percent higher than top-tier space in the suburbs, where rates average $35.10 a square foot.”
  • “tenants can expect to pay anywhere from $150 to $200 per month per space for unreserved parking. Reserved spots are as high as $300 per month.”
  • "The number of downtown tech workers — between 14,000 and 15,000, according to estimates from the Greater Austin Chamber of Commerce — is still tiny compared with the region's overall technology workforce, which the chamber estimates at about 130,000."

Source: Austin’s tech scene heats up downtown

The APM market is lively, growing 12% last year

“In 2015, the worldwide application performance management software market grew an estimated 12.1% over that in 2014, in large part because of increased demand for a new generation of solutions designed to support DevOps and multicloud infrastructure initiatives,” explains Mary Johnston Turner, research vice president, Enterprise System Management Software. “This new generation of APM solutions is easier to implement, supports more sophisticated analytics, and is less expensive than earlier offerings. As a result, APM is providing value to a much wider range of developers and IT operations teams that need constant, current visibility into end-to-end application performance and end-user experience.”

The previous y/y was 12.7%, so things are going well in that market, I'd say. As I recall, this includes mainframe and other "not normal" revenue. If you look at just the subset market of x86 and web apps, growth is even higher, around 17%. That "distributed" APM TAM was estimated at $2.2bn in 2014.

I don’t have access to the full APM report, but the size is around several billion. One Gartner estimate put it around $2.6bn in 2014.
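As a rough cross-check, here's a back-of-the-envelope sketch; note it mixes IDC's growth figure with Gartner's base estimate, so treat it as a ballpark only:

```python
# Rough APM market sizing: apply IDC's reported 12.1% 2015 growth
# to Gartner's ~$2.6bn 2014 estimate (different sources, so this is
# only a sanity check, not a real forecast).
m2014 = 2.6e9
growth_2015 = 0.121

m2015 = m2014 * (1 + growth_2015)
print(f"Estimated 2015 APM market: ${m2015 / 1e9:.2f}bn")  # about $2.91bn
```

Which lands in the same "around several billion" neighborhood either way.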

See also this vendor share commentary based on Gartner’s analysis of the APM market.

Source: Worldwide Application Performance Management Software Forecast, 2016–2020

IaaS “won” by AWS & Azure – Highlights from the IaaS Magic Quadrant

This year's IaaS Magic Quadrant is out. You can get a free reprint thanks to, I believe, Amazon. Here are some highlights from my "notebook":

  • Ducy created an animated gif of the past 6 quadrants.
  • AWS and MSFT have won: “This phase of the market has been ‘won.’ The market consolidated dramatically over the course of 2015. Just two providers — AWS and Microsoft Azure — account for almost all of the IaaS-related infrastructure consumption in cloud IaaS, and their dominance is even more thorough if their PaaS-related infrastructure consumption is included as well.”
  • “We expect the overall competitive environment will not change significantly until 2018 at earliest, and new entrants to the market will have minimal impact before that time.”
  • Buyers, choose wisely. Two clouds dominate, there's lots of fragmentation, and clouds come and go. This pushes people more towards the market leaders because they seem more stable, despite there being many competing options. E.g., HP shutting down its cloud.
  • “Public cloud IaaS provides adequate security for most workloads.”
  • If your IT shop isn't already lean, IaaS will save money: "The less efficient your organization, the more likely you are to save money by using a cloud provider, especially if you take advantage of this opportunity to streamline and automate your operations."
  • Criteria of note: must be top 10 by global market-share, data centers at least 250 miles apart, pretty real IaaS capabilities (self-service, technical profiles, etc.)
  • PaaS and IaaS getting closer: “Most customers who adopt the infrastructure resources within a cloud IaaS offering will also adopt associated management services, such as monitoring, and are highly likely to adopt PaaS-level capabilities, such as database as a service, over time.” More: “This market is wholly separate and distinct from cloud SaaS, but is increasingly entangled with the PaaS market.” Also: “The next phase of the market has not yet emerged. It is likely that the next phase of this market will even more tightly integrate IaaS and PaaS capabilities, including an expanded use of container technologies and automated operations management.”
  • There is no cloud portability: “Cloud IaaS is not a commoditized service, and even providers with very similar offerings and underlying technologies often have sufficiently different implementations that there is a material difference in availability, performance, security and service features.” (There are ways to deal with this up at the PaaS layer.)
  • Bonus: FedRAMP ain’t cheap: “costs ~$3.5m, takes ~18 months”

For more, we discussed all of this on this week's Software Defined Talk.

And, thanks to Matt Ray for scrounging the original link up for our show notes.

Would you buy auto insurance from Google? The Kids and auto insurance

The young people account for 20% of the $180bn US auto insurance market. Here are some trends in their buying behavior, à la a BCG infographic:

Infographic on car insurance buying habits.

Some items:

  • That nearly 40% are willing to buy from Amazon, Google, and others should put traditional insurance vendors in full-on freak-out mode.
  • Once The Kids start the long (up to two weeks!) research process, they're 70% more likely to switch than The Olds. So, it's probably a good idea for incumbents to get heavily involved in that research, sponsoring native content at "third parties" and providing their own research.
  • As one of our Pivotal customers, Allstate, put it: “Everybody is going to disrupt the insurance industry. It hasn’t been disrupted in eighty-plus years.”

Source: bcg.perspectives – How Digital Switchers Are Disrupting US Auto Insurers