🗂 Link: What Cloud Vendors Really Want From Their Customers

How much a customer spends on an annual basis is absolutely an indicator of strength, both internally and to the market. It is also a clear indicator of a vendor’s ability to execute on the often-publicized overarching objective of expanding customers’ product portfolios over the course of the relationship. Cloud vendors, and the analysts that cover them, also know that as annual spend rises, the baseline spend grows, which can be hit with an increase at renewal.

Source: What Cloud Vendors Really Want From Their Customers

🗂 Link: VMware Adds Containers to Its Cloud Provider Platform

The platform also added an integration with VMware’s container orchestrator, Enterprise PKS, which means cloud providers can offer containers-as-a-service. And at VMworld the vendor will showcase a technology preview of vCloud Director integration with Bitnami Community.

VMware bought Bitnami in May. It provides application packaging targeted at container and Kubernetes environments. The Bitnami Community houses one of the largest catalogs of click-to-deploy applications and development stacks. Combining this with Enterprise PKS will allow VMware Cloud Providers to “provide a cloud that’s developer ready, and offer both VM-based workloads and container-based workloads from the same platform,” Bhardwaj said.

Source: VMware Adds Containers to Its Cloud Provider Platform

🗂 Link: The Cost of Banking Is About to Go Up: What the Capital One Breach at Amazon Could Mean for the Industry

“The adoption of cloud platforms is a movement that will not be stopped,” says Jerry Silva, research director, IDC’s Financial Insights Group. “But there will be a slowdown as regulators step in to ensure that the security and resiliency structures that have always applied to banks directly are applied to the cloud providers with which they do business.”

Source: The Cost of Banking Is About to Go Up: What the Capital One Breach at Amazon Could Mean for the Industry

Link: How the subscription paradigm flips the cloud financials market

In the subscription world, you must understand the lifetime relationship with the customer – all the upsells and renewals and how they all build on one another. You also must understand the revenue, billings, and cash derived from those – again, over the entire lifetime of the customer relationship.

In an ASC-606 world, you must track all performance obligations, which is a fancy term for your promises – both the ones explicitly written in your customer agreement and all those pesky side terms that your sales rep slipped into the free-text field on the quote. You must also know all the implicit promises that people make in the deal or that have become ingrained in your business processes.

Source: How the subscription paradigm flips the cloud financials market

Link: Google debuts migration tool for its Anthos hybrid cloud platform

Anthos applications are deployed in software containers, which are used to host the individual components of each app and make them easier to work with. The main benefit is that developers get to use a single set of tools to build and deploy their apps, and push through updates as necessary, no matter what infrastructure those apps are hosted on. Kubernetes makes it easier to manage large clusters of containerized apps.

Source: Google debuts migration tool for its Anthos hybrid cloud platform

Link: Microsoft milestone: Tech giant’s cloud revenue now matches traditional products, analyst says

“We estimate that FY 4Q 19 was the first time MSFT generated as much revenue from running software in its own data centers, including cloud offerings like Azure and Office 365, as well as LinkedIn, Bing, GitHub and Xbox-Live, as it did from software licenses and upgrades, hardware and professional services,” according to the note from CFRA’s John Freeman.

Source: Microsoft milestone: Tech giant’s cloud revenue now matches traditional products, analyst says

Link: Microsoft, Red Hat Partner on OpenShift

The OpenShift-Azure deal extends collaboration between Microsoft and Red Hat that includes the addition of Microsoft SQL server on Red Hat Enterprise Linux. The deal signaled Microsoft’s embrace of OpenShift application container management.

The expanded partnership also gives OpenShift users access to public cloud services such as Azure Cosmos and SQL databases along with cloud-based machine learning models aimed at development of cloud-native enterprise applications.

OpenShift on Azure would “simplify container management on Kubernetes and help customers innovate on their cloud journeys,” added Scott Guthrie, executive vice president of Microsoft’s Cloud and AI Group.

Azure Red Hat OpenShift is available now on Microsoft Azure.

Source: Microsoft, Red Hat Partner on OpenShift

Google Cloud stuff

A brief overview:

The expansion centers around Google’s new open-source hybrid cloud package called Anthos, which was introduced at the company’s Google Next event this week. Anthos is based on – and supplants – the company’s existing Google Cloud Service beta. Anthos will let customers run applications, unmodified, on existing on-premises hardware or in the public cloud and will be available on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE), and in data centers with GKE On-Prem, the company says. Anthos will also let customers for the first time manage workloads running on third-party clouds such as AWS and Azure from the Google platform without requiring administrators and developers to learn different environments and APIs, Google said. 

And from an interview with Kurian:

So for us to grow, the primary thing is to scale our go-to-market organization. And we’re very committed to doing that. We just need to hire and train and enable a world class sales team at scale.

Today we have a great sales team, but we are far fewer in number than the other players. We just need to expand that. And as I talked to customers, they asked us to, one: expand our sales organization and our go-to-market teams. Second: specialize (that sales team) with deep expertise in technology and in industry. And third: make it easy to contract and do business with us. We are extremely committed to doing all three of them.

Also, from the product bucket:

Google also announced Anthos Migrate, a beta service that automatically moves virtual machines running on on-premises or other cloud providers into containers on GKE. Assuming it works, that’s a much easier path to the cloud for companies worried about breaking mission-critical applications during the move.

And, a good round up of analyst Tweets.

Every cloud provider, every tech vendor, wants to go up the stack, closer to The Business where there’s more money to be had:

During his keynote, Kurian referred to Google Cloud as a “digital transformation provider” – he didn’t say an ‘IaaS alternative to AWS and Azure’. In fact, Google Cloud is open to the fact that enterprises may use multiple IaaS providers (more on that later). Kurian is clearly making a play for Google Cloud to become an enterprise technology vendor that has deep skin in the game with customers, focused on meaningful outcomes, rather than just a pay per usage alternative to other IaaS vendors.

They’re trying a more open source company friendly approach, adding in some popular databases as a service:

Initial technologies include those from open source database system providers Confluent, MongoDB, Elastic, Neo4j, Redis Labs, InfluxData and DataStax.

Also, see the very well written Anthos documentation.

Link: Standard Bank contracts with AWS for mass migration to the cloud

The bank has selected AWS as its preferred cloud provider with the intention of porting its production workloads, including its customer facing platforms and strategic core banking applications to the cloud.

From what I can tell talking with banks, they’re over that 2010 thing of “public cloud isn’t secure enough.” Now it’s a scramble to move their shit up there.

Source: Standard Bank contracts with AWS for mass migration to the cloud

Link: BMC Touches Clouds with Job Scheduler

The new support for cloud platform as a service (PaaS) functions — including Lambda, step functions, and batch on AWS and logic apps and functions on Azure — gives organizations the capability to orchestrate workflows on the cloud. But, importantly, it also allows customers to integrate these cloud functions with applications running in private clouds and hybrid architectures, the company says.

Source: BMC Touches Clouds with Job Scheduler

Link: AWS’s Snowball Edge

A private cloud box from Amazon:

The Snowball Edge Compute Optimized with GPU includes an on-board GPU that you can use to do real-time full-motion video analysis & processing, machine learning inferencing, and other highly parallel compute-intensive work. You can launch an sbe-g instance to gain access to the GPU.

It has Lambda and EC2 capability, targeted at data manipulation and getting it into (and out of?) AWS. There’s a lot of IoT stuff in AWS now, opening their platform up to things like smart cities, power grid management, and thermostats and lights and shit.
Original source: AWS’s Snowball Edge

Link: The computational legacy is Oracle’s cloud opportunity today

The company said it was saving most of its cloud-native announcements for KubeCon in December, but highlighted its new managed Kubernetes service (OKE, launched in May), platinum-level membership in the Cloud-Native Computing Foundation and growing support of open source projects (e.g., Fn, a functions project; Terraform for Oracle Cloud orchestration) as evidence that it has turned over a new, developer-friendly leaf. Oracle acknowledges a credibility gap with developers, but notes that it is at the start of making a transition similar to the one Microsoft has largely accomplished. As part of this effort, it may pursue acquisitions that give it access to customers that will help change Oracle’s image and shift the culture within the company (perhaps similar to what IBM is hoping to accomplish by buying Red Hat).
Original source: The computational legacy is Oracle’s cloud opportunity today

Link: Big Blue Puts on a Red Hat: IBM Acquires Red Hat

While many organizations have extensive on and off premise infrastructure investments, comparatively few of them are sophisticated in the way that those environments are tied to each other. If expectations are scaled back to the more realistic “multi-cloud” – the idea that an organization may have investments in more than one environment – the relevance and importance of OpenShift becomes more clear.

It’s clever to point out that enterprises have enough trouble integrating their existing, on-premises stuff, let alone dealing with the complexity and newness of tying together public and private clouds.
Original source: Big Blue Puts on a Red Hat: IBM Acquires Red Hat

Link: Google Cloud Revenue

When asked about Google’s on-premises strategy, Pichai said the company is “thoughtfully looking at it,” and cited its partnerships with SAP, Pivotal, and VMware. Google also has a hybrid-cloud product with Cisco and its own Kubernetes-based GKE On Prem available to early access customers.

On-premises data centers remain “a big, big requirement for customers,” and these partnerships help Google address those companies’ needs, Pichai said. When it comes to hybrid cloud, “we are thinking about how to do that better,” Pichai said. “Our overall approach to cloud hybrid modernization I think is the right long-term direction and so we are doing that.”
Original source: Google Cloud Revenue


Link: Amazon move off Oracle caused Prime Day outage in warehouse

The outage, which lasted for hours on Prime Day, resulted in over 15,000 delayed packages and roughly $90,000 in wasted labor costs, according to the report. Those costs don’t include all the lost hours spent by engineers troubleshooting and fixing the errors or any potential lost sales.

I assume Amazon has saved, and will save, much more than that by moving off Oracle.
Original source: Amazon move off Oracle caused Prime Day outage in warehouse

Link: Redis Pulls Back on Open Source Licensing, Citing Stingy Cloud Services

“The modules in question are used to help create managed services on top of Redis, namely RediSearch, Redis Graph, ReJSON, Redis-ML, and Rebloom. Licensed under Apache 2.0 modified with Commons Clause, these can still be freely used in any application, though they can’t be used in a commercial Redis-based offering. For that, you will have to call Redis Labs and work out a paid licensing arrangement.”
Original source: Redis Pulls Back on Open Source Licensing, Citing Stingy Cloud Services

Link: Forrester SVP: VMware Is One Of The ‘Exciting’ Stars Of IT Automation Era

O’Donnell called VMware and Pivotal the “crown jewels” of Dell’s $70 billion blockbuster acquisition of EMC in 2015. “It’s the future,” said O’Donnell. “It’s the software side of it. A lot of good stuff came with EMC but what VMware and Pivotal are doing is the future. It’s all about software.”
Original source: Forrester SVP: VMware Is One Of The ‘Exciting’ Stars Of IT Automation Era

Tracking your improvement  – “metrics”

This post is an early draft of a chapter in my book,  Monolithic Transformation.

Tracking the health of your overall innovation machine can be both overly simplified and overly complex. What you want to measure is how well you’re doing at software development and delivery as it relates to improving your organization’s goals. You’ll use these metrics to track how your organization is doing at any given time and, when things go wrong, get a sense of what needs to be fixed. As ever with management, you can look at this as a part of putting a small batch process in place: coming up with theories for how to solve your problems and verifying if the theory worked in practice or not.

All that monitoring

In IT most of the metrics you encounter are not actually business oriented and instead tell you about the health of your various IT systems and processes: how many nodes are left in a cluster, how much network traffic customers are bringing in, how many open bugs development has, or how many open tickets the help desk is dealing with on average.

Example of Pivotal Cloud Foundry’s Healthwatch metrics dashboard.

All of these metrics can be valuable, just as all of them can be worthless in any given context. Most of these technical metrics, coupled with ample logs, are needed to diagnose problems as they come and go. In recent years, there’ve been many advances in end-to-end tracing thanks to tools like Zipkin and Spring Sleuth. Log management is well into its newest wave of improvements, and monitoring and IT management analytics are just ascending another cycle of innovation — they call it “observability” now, that way you know it’s different this time!

Instead of looking at all of these technical metrics, I want to look at a few common metrics that come up over and over in organizations that are improving their software capabilities.

Six common cloud native metrics

Some metrics consistently come up when measuring cloud native organizations:

Lead Time


Lead time is how long it takes to go from an idea to running code in production; it measures how long your small batch loop takes. It includes everything in-between: specifying the idea, writing the code and testing it, passing any governance and compliance needs, planning for deployment and management, and then getting it up and running in production.

If your lead time is consistent enough, you have a grip on IT’s capability to help the business by creating and deploying new software and features. Being this machine for innovation through software is, as you’ll hopefully recall, the whole point of all this cloud native, agile, DevOps, and digital transformation stuff.

As such, you want to monitor your lead time closely. Ideally, it should be a week. Some organizations go longer, up to two weeks, and some are even shorter, like daily. Target and then track an interval that makes sense for you. If you see your lead time growing, then you should stop everything, find the bottlenecks, and fix them. If the bottlenecks can’t be fixed, then you probably need to do less each release.
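
To make the bookkeeping concrete, here’s a minimal sketch of tracking lead time; the work items, field names, and dates are hypothetical, and real tracking would pull these timestamps from your backlog and deployment tooling:

```python
from datetime import datetime
from statistics import median

def lead_time_days(accepted, deployed):
    """Days from an idea entering the backlog to running in production."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(deployed, fmt) - datetime.strptime(accepted, fmt)).days

# Hypothetical work items: (date accepted into backlog, date deployed to production).
items = [
    ("2019-03-01", "2019-03-06"),
    ("2019-03-04", "2019-03-12"),
    ("2019-03-05", "2019-03-11"),
]

times = [lead_time_days(accepted, deployed) for accepted, deployed in items]
print(median(times))  # median lead time in days; a growing number means bottlenecks
```

The median is usually a better headline number than the mean here, since one stuck work item can skew an average badly.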

Velocity

Velocity shows how many features are typically deployed each week. Whether you call features “stories,” “story points,” “requirements,” or whatever else, you want to measure how many of them the team can complete each week; I’ll use the term “story.” Velocity tells you three things:

  1. Your progress to improving and ongoing performance — at first, you want to find out what your team’s velocity is. They will need to “calibrate” on what they’re capable of doing each week. Once you establish this baseline, if it goes down, something is going wrong and you can investigate.
  2. How much the team can deliver each week — once you know how many features your team can deliver each week, you can more reliably plan your roadmaps. If a team can only deliver, for example, 3 stories each week, asking them to deliver 20 stories in a month is absurd. They’re simply not capable of doing that. Ideally, this means your estimates are no longer, well, always wrong.
  3. If the scope of features is getting too big or too small — if a previously reliably performing team’s velocity starts to drop, it means they’re scoping their stories incorrectly: they’re taking on too much work, or someone is forcing them to. On the other hand, if the team is suddenly able to deliver more stories each week or finds themselves with lots of extra time each week, it means they should take on more stories each week.

There are numerous ways to first calibrate on the number of stories a team can deliver each week and managing that process at first is very important. As they calibrate, your teams will, no doubt, get it wrong for many releases, which is to be expected (and one of the motivations in picking small projects at first instead of big, important ones). Other reports like burn down charts can help illustrate how the team’s velocity is getting closer to delivering across major releases (or in each release) and help you monitor any deviation from what’s normal.
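
A sketch of what that tracking amounts to, with hypothetical story-completion dates; real velocity numbers would come out of whatever tool your team tracks stories in:

```python
from collections import Counter
from datetime import date

# Hypothetical completion dates, one entry per finished story.
completed = [
    date(2019, 4, 1), date(2019, 4, 2), date(2019, 4, 4),    # ISO week 14
    date(2019, 4, 8), date(2019, 4, 10), date(2019, 4, 12),  # ISO week 15
    date(2019, 4, 16),                                       # ISO week 16
]

def weekly_velocity(dates):
    """Stories completed per ISO week."""
    return Counter(d.isocalendar()[1] for d in dates)

def flag_drops(velocity, baseline):
    """Weeks where velocity fell below the calibrated baseline."""
    return [week for week, n in sorted(velocity.items()) if n < baseline]

velocity = weekly_velocity(completed)
print(flag_drops(velocity, 3))  # weeks worth investigating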

Latency

In general, you want your software to be as responsive as possible. That is, you want it to be fast. We often think of speed in this case: how fast is the software running, and how fast can it respond to requests? Latency is a slightly different way of thinking about speed: namely, how long does a request take, end-to-end, to process and return back to the user?

Latency is different than the raw “speed” of the network. For example, a fast network will send a static file very quickly, but if the request requires connecting to a database to create and then retrieve a custom view of last week’s Austrian sales, it will take a while and, thus, the latency will be much longer than downloading an already-made file.

From a user’s perspective, latency is important because an application that takes 3 minutes to respond versus 3 milliseconds might as well be “unavailable.” As such, latency is often the best way to measure if your software is working.

Measuring latency can be tricky… or really simple. Because it spans the entire transaction, you often need to rely on patching together a full view — or “trace” — of any given user transaction. This can be done by looking at logs, doing real or synthetic user-centric tracing, and using any number of application performance monitoring (APM) tools. Ideally, the platform you’re using will automatically monitor all user requests and also catalog all of the sub-processes and sub-sub-processes that make up the entire request. That way, you can start to figure out why things are so slow.
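
One reason latency reporting gets tricky: averages hide the slow requests users actually notice, which is why percentiles are the usual summary. A small sketch, using made-up latency samples and a simple nearest-rank percentile:

```python
def percentile(samples, p):
    """Nearest-rank percentile: the value below which roughly p% of samples fall."""
    ordered = sorted(samples)
    k = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[k]

# Hypothetical end-to-end request latencies, in milliseconds.
latencies_ms = [12, 15, 11, 14, 13, 12, 950, 16, 14, 13]

print(sum(latencies_ms) / len(latencies_ms))  # mean: badly skewed by one slow request
print(percentile(latencies_ms, 50))           # p50: the typical user experience
print(percentile(latencies_ms, 99))           # p99: the worst experiences
```

Here the mean is 107ms, but the median is 13ms: nine users had a fast experience and one effectively saw an “unavailable” app, which is exactly what the p99 surfaces and the mean buries.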

Error Rates

Often, your systems and software will tell you when there’s an error: an exception is thrown in the application layer because the email service is missing, an authentication service is unreachable so the user can’t log in, a disk is failing to write data. Tracking and monitoring these errors is, obviously, a good idea. Some of them will range from “is smoke coming out of the box?” to more obtuse ones, like a service being unreachable because DNS is misconfigured. Oftentimes, errors are roll-ups of other problems: when a web server fails, returning a 500 response code, it means something went wrong, but the error doesn’t usually tell you what happened.

Error rates also occur before production, while the software is being developed and tested. You can look at failed tests as error rates, as well as broken builds and failed compliance audits.

Fixing errors in development can be easier and more straightforward, whereas triaging and sorting through errors in production is an art. What’s important to track with errors is not just that one happened, but the rate at which they happen, perhaps errors per second. You’ll have to figure out an acceptable level of errors because there will be many of them. What you do about all these errors will be driven by your service targets. These targets may be foisted on you in the form of heritage Service Level Agreements, or you might have been lucky enough to negotiate some sane targets.

Chances are, a certain rate of errors will be acceptable (have you ever noticed that sometimes you just need to reload a web page?). Each part of your stack will throw off and generate different errors: some are meaningless (perhaps they should be warnings or even just informative notices, e.g., “you’re using an older framework that might be deprecated sometime in the next 30 years”) and others could be too costly, or even impossible, to fix (“1% of users’ audio uploads fail because their upload latency and bandwidth is too slow”). And some errors may be important above all else: if an email server is losing emails every 5 minutes… something is terribly wrong.

Generally, errors are collected from logs, but you could also poll the service in question and it might send alerts to your monitoring systems, be that an IT management system or just your phone.
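
A minimal sketch of turning collected errors into a rate and checking it against an acceptable level; the log entries, the threshold, and the window sizes are all hypothetical stand-ins for whatever your log management tooling produces:

```python
# Hypothetical parsed log entries: (seconds since start of window, severity).
log = [
    (0, "INFO"), (10, "ERROR"), (12, "INFO"), (45, "ERROR"),
    (70, "ERROR"), (95, "INFO"), (110, "ERROR"), (118, "ERROR"),
]

def error_rate_per_minute(entries, window_start, window_end):
    """Errors per minute over the half-open window [window_start, window_end) seconds."""
    errors = sum(1 for t, severity in entries
                 if severity == "ERROR" and window_start <= t < window_end)
    return errors / ((window_end - window_start) / 60)

ACCEPTABLE = 2.5  # errors/minute your service targets allow; a made-up number

rate = error_rate_per_minute(log, 60, 120)
print(rate)               # error rate in the second minute
print(rate > ACCEPTABLE)  # above target: time to alert someone
```

The point of the threshold comparison is the “acceptable level” idea from above: the alert fires on the rate crossing your target, not on any single error.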

Mean-time-to-repair (MTTR)

If you can accept the reality that things will go wrong with software, how quickly you can fix those problems becomes a key metric. It’s bad when an error happens, but it’s really bad if it takes you a long time to fix it.

Tracking mean-time-to-repair is an ongoing measurement of how quickly you can recover from errors. As with most metrics, this gives you a target to improve towards and then allows you to make sure you’re not getting worse.

If you’re following cloud native practices and using a good platform, you can usually shrink your MTTR with the ability to roll back changes. If a release turns out to be bad (an error), you can back it out quickly, removing the problem. This doesn’t mean you should blithely roll out bad releases, of course.

Measuring MTTR might require tracking support tickets and otherwise manually tracking the time between incident detection and fix. As you automate remediations, you might be able to easily capture those rates. As with most of these metrics, what becomes important in the long term is tracking changes to your acceptable MTTR and figuring out why the negative changes are happening.
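
If you are manually tracking it from tickets, the computation itself is simple; here’s a sketch with hypothetical incident records of the kind you’d export from a ticketing system:

```python
from datetime import datetime

# Hypothetical incidents: (when detected, when fixed), from your ticket system.
incidents = [
    ("2019-05-01 09:00", "2019-05-01 09:45"),
    ("2019-05-03 14:10", "2019-05-03 16:10"),
    ("2019-05-07 22:00", "2019-05-07 22:30"),
]

def mttr_minutes(records):
    """Mean time to repair, in minutes, across all incidents."""
    fmt = "%Y-%m-%d %H:%M"
    durations = [
        (datetime.strptime(fixed, fmt) - datetime.strptime(detected, fmt)).total_seconds() / 60
        for detected, fixed in records
    ]
    return sum(durations) / len(durations)

print(mttr_minutes(incidents))  # mean minutes from detection to fix
```

As with lead time, the long-term value is in the trend: a rising MTTR is the signal to dig into why recovery is getting slower.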

Costs

Everyone wants to measure cost, and there are many costs to measure. In addition to the time spent developing software and the money spent on infrastructure, there are ratios you’ll want to track like number of applications to platform operators. Typically, these kinds of ratios give you a quick sense of how efficiently IT runs. If each application takes one operator, something is probably missing from your platform and process. T-Mobile, for example, manages 11,000 containers in production with just 8 platform operators.

There are also less direct costs, like opportunity and value lost due to waiting on slow release cycles. For example, the US Air Force calculated that it saved $391M by modernizing its software methodology. The point is that, obviously, you need to track the cost of what you’re doing, but you also need to track the cost of doing nothing, which might be much higher.

Business Value

“Comcast Cloud Foundry Journey — Part 2,” Greg Otto, Comcast, June 2017.

Of course, none of the metrics so far measures the most valuable, but most difficult, metric: value delivered. How do you measure your software’s contribution to your organization’s goals? Measuring how the processes and tools you use contribute to those goals is usually harder still. This is the dicey plain of correlation versus causation.

Somehow, you need to come up with a scheme that shows and tracks how all this cloud native stuff you’re spending time and money on is helping the business grow. You want to measure value delivered over time to:

  1. Prove that you’re valuable and should keep living and get more funding, and
  2. Figure out when you’re failing to deliver so that you can fix it.

There are a few prototypes of linking cloud native activities to business value delivered. Let’s look at a few examples:

  1. As described in the case study above, when the IRS replaced call centers that had poor availability with software, IT delivered clear business value. Latency and error rates decreased dramatically (with phone banks, only 37% of calls made it through) and the design improvements they discovered led to increased usage of the software, pulling people away from the phones. And, then, the results are clear: by the Fall of 2017, this application had collected $440m in back taxes.
  2. Sometimes, delivering “value” means satisfying operational metrics rather than contributing dollars. This isn’t the best of all situations to be in, but if you’re told, for example, that in the next two years 60% of applications need to be “on the cloud,” then you know the business value you’re supposed to deliver on. In such cases, simply tracking the replatforming of applications to a cloud platform will probably suffice.
  3. Running existing businesses more efficiently is a popular goal, especially for large organizations. In this case, the value you deliver with cloud native will usually be speeding up business processes, removing wasted time and effort, and increasing quality. Duke Energy’s lineworker case is a good example here. Duke gave lineworkers a better, highly tuned application to queue and coordinate their work in the field. The software increased lineworkers’ productivity and reduced waste, directly creating business value in efficiencies.
  4. The US Air Force’s tanker scheduling case study is another good example here: by adopting a cloud native software model, they were able to ship the first version in 120 days and started saving $100,000s in fuel costs each week. Additionally, the USAF computed the cost of delay — using the old methods that took longer — at $391M, a handy financial metric to consider.
  5. And, then, of course, there comes raw competition. This most easily manifests itself as time-to-market, either to match competitors or to get new features out before them. Liberty Mutual’s ability to enter the Australian motorcycle market, from scratch, in six months is a good example. Others, like Comcast, demonstrate competing with major disruptors like Netflix.

It’s easy to get very nuanced and detailed when you’re mapping IT to business value. You need to keep things as simple as possible or, put another way, only as complex as needed. As with the examples above, clearly link your cloud native efforts to straightforward business goals. Simply “delivering on our commitment to innovation” isn’t going to cut it. If you’re suffering under vague strategic goals, make them more concrete before you start using them to measure yourself. On the other end, just lowering costs might be a bad goal to shoot for. I talk with many organizations who used outsourcing to deliver on the strategic goal of lowering costs and now find themselves incapable of creating software at the pace their business needs to compete.

Fleshing out metrics

I’ve provided a simplistic start at metrics above. Each layer of your organization will want to add more detail to get better telemetry on itself. Creating a comprehensive, umbrella metrics system is impossible, but there are many good templates to start with.

Pivotal has been developing a cloud native centric template of metrics, divided into 5 categories:

BuiltToAdapt Benchmark.

These metrics cover platform operations, product, and business metrics. Not all organizations will want to use all of them, and there are usually some missing. But this 5 S’s template is a good place to start.

If you prefer to go down rabbit holes rather than shelter under umbrellas, there are more specialized metric frameworks to start with. Platform operators should probably start by learning how the Google SRE team measures and manages Google, while developers could start by looking at TK( need some good resource ).

Whatever the case, make sure the metrics you choose are

  1. targeting the end goal of putting a small batch process in place to create better software,
  2. reporting on your ongoing improvement towards that goal, and,
  3. alerting you that you’re slipping and need to fix something…or find a new job.

This post is an early draft of a chapter in my book,  Monolithic Transformation.

Link: ​Ubuntu’s Mark Shuttleworth pulls no punches on Red Hat and VMware in OpenStack cloud

“If you want OpenStack and Kubernetes support with vendor independence at a low price, Canonical is your company. If you prefer a partner, which offers a soup-to-nuts stack, but at a higher price, look to Red Hat. And, of course, if you’re already wedded to VMware, you’ve made your choice. There’s room for all these approaches to the 21st century cloud and containers.”
Original source: ​Ubuntu’s Mark Shuttleworth pulls no punches on Red Hat and VMware in OpenStack cloud
