Why Pivotal Serves Free Breakfast to All Employees

Free food, during a limited, half-hour window, both saves people some hassle and gets them to show up at the same time to kick off the workday.

To understand why this is so important, picture Pivotal without free breakfast. Let’s start with the obvious. Most developers would sleep late if it were up to them. They’d roll into the office around 10 or 11 AM. Which means they’d grab a coffee, maybe respond to a few emails, and then sync up with the team.

Before you know it, the morning is over and it’s time for lunch. But hey, that’s okay, we live in a digital world, and you can show up whenever, so long as you get your work done, right? Wrong. Pair programming only works when you have people to pair with. And that means you need to sync their schedules.

We ring a cowbell at 9:05 AM. (The Toronto office smacks a golden gong with a mallet.) It signals that breakfast is over and the office-wide meeting is about to start. After the five-minute standup, the teams have their own standup meetings, and then pairs break off to get rolling at their workstations.

While posed as a pair programming enabler, take pairing out of the above and the point still stands: it gets people to show up on time, not dick around, and do actual work.

If you’ve seen me talk, you know the joke about “how a developer spends their day,” which usually includes only 1-2 hours of actual coding because of all the meetings, you know, those 30-minute sit-down standup meetings, architectural reviews, deciding where to go to lunch, the post-lunch-buffet coma, “researching” on the Internet, etc. It’s all just unsynchronized schedules and little attention spent on actually managing your staff’s time.

Source: Why My Company Serves Free Breakfast to All Employees

The coming licensing hassles with Dockerized enterprise software

“Licensing. Why’d it have to be licensing?”

Jon Hall, who always has good things to say about traditional IT Service Management butting up against millennial IT, points out an all-too-common hassle with new ways of packaging and running IT: accounting for traditional licensing. Here, he points out a likely license-counting problem with Docker-ized applications, e.g., Oracle licensing when it comes to the recent, official Docker images with Oracle software in ’em:

But there’s a serious gotcha here: as any Software Asset Manager could point out, these actions could have just cost the company a pretty staggering amount of money. How? By falling foul of Oracle’s notoriously complex licensing system.
Oracle licensing is bloody complex, and it’s entirely possible that a goalpost or two might have moved by the time you read this.

Red Hat OpenShift Momentum – Highlights

Brian Gracely of Red Hat (and formerly an analyst who did some of the best “cloud-native”/cloud platform work early on) has a momentum post on OpenShift. Here are my highlights:

Sizing up revenue and deal-size:
[Q3, FY 2017] Also of note, we closed our second OpenShift deal over $10 million and another OpenShift deal over $5 million. And significantly, we actually had over 50 OpenShift deals alone that were six or seven figures, so really strong traction. [Q4, FY 2017] Of our largest deals in Q4, approximately one-third had an OpenShift container platform component.
Red Hat hasn’t yet been too clear on OpenShift revenue, so you have to tea-leave out these revenue spreads, which I haven’t really done. Earlier in April, Jeffrey Burt at The Next Platform had this to say:
During the final three months of last year, subscription revenue for Red Hat’s application development-related [JBoss, etc] and other emerging technologies – which includes OpenShift – hit $125 million, a 40 percent increase from the same period in 2015, and revenue for the group accounted for about 20 percent of Red Hat’s overall revenues for the fourth quarter.
Today, we also announced that Barclays Bank, the Government of British Columbia’s Office of the CIO, and Macquarie Bank are also using Red Hat OpenShift Container Platform to modernize application development…. airplane manufacturer Airbus about their DevOps journey, and digital travel platform Amadeus about their transformation of handling 2,000x the number of online transactions…. how Amsterdam’s Schiphol Airport (AMS) is using OpenShift to redefine the in-terminal travel experience, how Miles & More GmbH is better managing rewards programs for travelers, and how ATPCO is rethinking how they publish fare-related data to the airline and travel industry.
Much of the write-up focuses on community momentum, true to Red Hat’s open source form:

The OpenShift Commons community has 260+ member organizations….

Red Hat engineers lead or co-lead in 10 of the 24 Kubernetes SIG activities.
Finally, some commentary on their strategic shift to Kubernetes:
The huge architectural shift that we made a few years ago in adopting open standards for containers and the Kubernetes container scheduler has allowed us to deliver a unified platform to containerize existing applications and deliver agility and scalability for cloud-native applications and microservices. We call this combination Enterprise Kubernetes+, or Enterprise-Ready Kubernetes.
Red Hat’s OpenShift is, of course, a competitor to us over at Pivotal.

Cloud-native at Comcast, working with Pivotal – Highlights

I’m doing a podcast with Comcast in a few weeks, so I’ve been going over all their public talks on their cloud-native efforts. They’ve been working with Pivotal since around 2014 and are one of the more impressive customer cases, with over 1,000 applications now on Pivotal Cloud Foundry.
Here are some highlights from the talks I’ve been watching. As always, things I put in square brackets are my own comments, the rest are quotes or summaries of what people said:

August, 2016 – Empowering Devops with Cloud Foundry – Sergey Matochkin, Neville George; Comcast

  • Sergey Matochkin.
  • Slides.
  • (17:00) Every deployment to production took at least 6 weeks, but most commonly around 2 months end-to-end. Which also means you need to plan capacity much in advance.
  • We started to use virtualization and containerization “well, well before Docker existed… it was some success, we had some improvements, but those improvements were marginal.”
  • Traditionally, it’d take at least 4-6 months to set up your dev/test infrastructure. But, luckily, virtualization came along.
  • (9:20) Business drivers… Comcast phone service, set-top boxes get DVRs, VoD, etc. All of these require apps on the backend, so the portfolio of apps starts to grow, and with the way they were working before, it meant they had to build a new datacenter every six months. Virtualization helped here, of course.
  • Also, virtualization allowed us to put a service layer [think “platform”] on-top of the infrastructure.
  • It’d take 4-6 weeks for a testing environment, but now it takes 10-15 minutes in a self-service portal.
  • Demo of using Pivotal Cloud Foundry for much of the automation needed to deploy and scale an application.
  • (~32:00) We used to have things like “order servers” and “make load-balancer changes” and somewhere in the bottom of the backlog was “write some code and do some testing.” [That is, they were focusing on items with low business value, below “the value line,” rather than customer features.]
  • “What Cloud Foundry essentially helped us with was to get all those unnecessary user stories out of our backlog so we can focus on the writing code, on testing, and deploying rather than managing infrastructure.”
  • (33:45) momentum/proof-points:
  • 9 PCF instances; 900+ developers; 2,000+ active apps, “most of which are in the critical path of our customer experience”; 4,100 application instances; 2,000 requests per second.
  • Lots of Slack/ChatOps usage for monitoring and such.

August 3rd, 2016 – Transforming the monolith at 20M tph – Nick Beenham, Comcast

  • Slides.
  • Existing state:
    • 250M transactions per day.
    • It would take 3 months to get a server useful, from the moment of purchase to use.
    • “Over 100 services run by development teams.”
    • Organized in functional, siloed roles.
  • (3:45) “We knew we had that large, rigid infrastructure. [Pivotal] Cloud Foundry and its adoption really enables us to change that to gain the agility, to gain the elasticity at scale.”
  • Taking away siloed roles to reduce finger-pointing and all the negative stuff, and unifying the team, of course.
  • (7:35) Anecdote of Nick going from “ops guy” to writing code and liking coding.
  • (12:18) The ESP router: a small router written in Go that translates SOAP requests as part of a strangler pattern. They had a decades-old SOA layer they wanted to modernize, but they couldn’t strip it out; that would take too long. So they duck-type as SOA on the outside while doing REST and microservices underneath. That’s what the ESP router does: it marshals and unmarshals between the microservices and the SOAP stuff, while new things get done in the new style (see the sketch after this list).
  • Also, “de-mingling data,” moving off Oracle RAC/GoldenGate for multi-site. Some simpler CRUD services to front the data.
  • (~15:00) Used to take a week+ to deploy the entire stack, but with Pivotal Cloud Foundry it takes minutes. It gives us a great deal of velocity that we’ve never had before. “Sometimes we’ll deploy multiple times an hour.”
  • (17:00) From 1,000s of lines of bash to deploy out to various WebLogic clusters, most of which has now moved to Cloud Foundry.
  • Improving production updates: bringing a new node up and shutting the old node down slowly; canary updates with a CI test suite, then switching over to the production install.
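
To make that strangler pattern concrete, here’s a minimal sketch of what an ESP-style translation router could look like in Go. To be clear, this is my illustration, not Comcast’s actual code: the endpoint, upstream URL, and payload shapes are all invented.

```go
// A hypothetical strangler-pattern router: speaks SOAP to legacy callers,
// REST/JSON to the new microservices behind it.
package main

import (
	"encoding/json"
	"encoding/xml"
	"fmt"
	"io"
	"log"
	"net/http"
)

// soapEnvelope models just enough of a SOAP request to pull out the call.
type soapEnvelope struct {
	XMLName xml.Name `xml:"Envelope"`
	Body    struct {
		GetAccount struct {
			AccountID string `xml:"accountId"`
		} `xml:"GetAccount"`
	} `xml:"Body"`
}

func handler(w http.ResponseWriter, r *http.Request) {
	raw, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	var env soapEnvelope
	if err := xml.Unmarshal(raw, &env); err != nil {
		http.Error(w, "bad envelope", http.StatusBadRequest)
		return
	}

	// Route the legacy SOAP call to the new REST microservice (made-up URL).
	resp, err := http.Get("http://accounts.internal/v1/accounts/" + env.Body.GetAccount.AccountID)
	if err != nil {
		http.Error(w, "upstream unavailable", http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	var account struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&account); err != nil {
		http.Error(w, "bad upstream response", http.StatusBadGateway)
		return
	}

	// Re-wrap the JSON result in a SOAP-shaped response so legacy callers
	// never notice that the implementation moved.
	w.Header().Set("Content-Type", "text/xml")
	fmt.Fprintf(w,
		"<Envelope><Body><GetAccountResponse><id>%s</id><status>%s</status></GetAccountResponse></Body></Envelope>",
		account.ID, account.Status)
}

func main() {
	http.HandleFunc("/soap", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The shape is the whole point: legacy callers keep speaking SOAP to the same place, while behind the router the implementation quietly becomes REST microservices.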

August 1st, 2016 – James Taylor – The Power of Partnership & Building a Cloud Native Tier-1 Platform

  • @jctbmwi8
  • “Sparrow, Service Activation Platform.”
  • “Helping someone put a smile on their face is one of the greatest gifts we can give each other.”
  • Their VP provides the feedback loop of things to focus on. Right now: reducing technical debt, reducing incidents, increasing velocity, experimentation.
  • (~6:30) “You can’t move forward – innovate – if you don’t have time to try new things.”
  • (~18:35) “If you’re spending time configuring a Docker container, that’s time you’re not spending coding or solving a problem.”
  • (13:51): “At the end of the day, [business] value is what puts money in everyone’s pocket. If our company, Comcast, can’t create something of value, no one’s gonna pay for us…if we can’t create value. So it’s important for us to understand ‘how can you create value?’”
  • (~22:02, starting epic rant!) “Who is our customer and what value do we bring to our customers…”
  • If you’re spending money on support, that’s cutting into your margins. A call coming in costs $8 right off the bat, then more as it takes longer. So you want to figure out how to prevent customer support problems… which points to understanding your customers more.
  • [A good overview of thinking about “value” in the context of a specific application, their customer activation center, Sparrow.] “If you have a [support] call rate of 30%, you’re probably cutting out all the value… So we try to figure out, how do we prevent calls?” [Very similar to IRS cloud-native story.]
  • “We’ve been holding technical workshops”: internal training sessions every month with Pivotal people, leveraging Pivotal knowledge. With our development teams every month: a webinar or an on-site visit.
  • Sparrow: 5 junior Java developers… we built it from scratch in parallel while existing teams maintained the platform… we then had to integrate the processes together… figure out decomposing the monolith platforms, etc….then we had to just cut off stuff when it was too much of a hassle.

August 17th, 2016 – Greg Otto SpringOne Platform keynote

  • Slides.
  • X1 boxes – a new release about once a month.
  • Processing tens of millions of transactions daily on the new platform, Pivotal Cloud Foundry.
  • “About a 75% lift in velocity as well as time to market, and the business is really feeling it.”
  • Developer reactions: a “what customers are saying” slide.
  • Momentum Stats:
    • 40 apps to 900 apps, 2015 to 2016
    • 300 AIs to 4,100 AIs, 2015 to 2016
  • All with “zero outbound marketing from my team, this is all word of mouth from all those happy developers.”

June 9th, 2016 – Greg Otto CF Summit keynote

  • “Late last year in 2015” – live in production [on Pivotal Cloud Foundry] with business critical systems from our back-office systems on our Cloud Foundry environment.
  • We put Pivotal Cloud Foundry directly in the customer critical path.
  • Applications doing 30,000 events a second on Cloud Foundry.
  • Started in 2014, met with Pivotal.
  • Had sort of thrown all the people into the Pivotal Cloud Foundry pool, they had to do a lot of research and such.
  • But, people were really interested in the ease of working with the platform [the productivity improvements].
  • Successful prototype app 30 days after getting the platform.
  • Idea to feature, before/after: “several weeks, at least” vs. “2-3 days.”
  • A time-line and summary slide.

June, 2016 – Open source at Comcast story

  • Write-up.
  • “If Comcast has a problem to solve, there are three possible approaches: solve it themselves by making an investment in teams and resources; solve it through a commercial vendor that could build a product for them; or work with the open source community.”
  • OpenStack: “In addition to Linux, Comcast is a heavy user of OpenStack. They use a KVM hypervisor, and then a lot of data center orchestration is done through OpenStack for the coordination of storage and networking resources with compute and memory resources. Muehl said that Comcast has roughly a petabyte of memory and around a million virtual CPU cores that they are running under the OpenStack umbrella. As an operator, Comcast does a lot of things around operations, and they use Ansible to deploy and manage OpenStack at scale.”
  • Cloud Foundry: “They also use Cloud Foundry, but according to Muehl that work is in the very early stages at Comcast.”

May 2015 – Running Cloud Foundry at Comcast talk

  • Neville George, Sam Guerrero, Tim Leong, Sergey Matochkin
  • They wanted to make custom URLs.
  • Used Puppet for stuff.
  • (~8:30) Their requirements for a platform [slide].
  • A lot of emphasis on self-service and the microservices benefit of operating independently, product management-wise.
  • They use OpenStack, Docker, and [Pivotal] Cloud Foundry.
  • Pre-provisioning resources for a pool of containers that are ready to go, etc.
  • (~27:00) A couple applications in production today… we’ll be ramping up quickly.
  • (Either this video or the 2016 one, a few minutes from the end) Q: What’s your training model? A, Sergey: “I can’t say we have a really good training model…. We do brown-bags to have people aware. We focus on 12 factor application model… on overall microservices model, not just to shape application, but also data. Developers need to understand how they [do] applications for PaaS instead of traditional.”

Scaling DevOps in large organizations – My April Register column


My column at The Register this month is on scaling DevOps/cloud-native teams to the entire organization. It’s easy to build one team that does software in a new and exciting way, but how can you move to two teams, five teams, and then 100s? It goes over an amalgamation of a few case studies, with plenty of over-the-top gonzo analogies, per usual.

Check it out, and check out past ones if you’re curious for more.

The news from Docker-land, plus, the money being fought over – Notebook

With DockerCon this week, there’s no end of Docker quotables and items. Here’s my collection:

General momentum

Once landed in an account, Docker usage grows, says their CEO:

There has also been expansion within customers, with organizations that start with Docker expanding their usage on average by five times within six months

Way back in 2015, the (now annual?) DataDog study of Docker usage among their customers said that 2/3 of companies that try Docker adopt it. Which is all to say: once it gets in, it spreads.

Moby

A toolkit for putting together Docker stacks:

In essence, Moby is the build system that creates Docker Community Edition, which is akin to Fedora, and Docker Enterprise is derived from Moby and is akin to Red Hat Enterprise Linux. Link

People got all freaked out. I’d even say “freaked the fuck out.” Competitors, of course, gloated, if only in silence. Criticism of how the announcement was handled aside (ideally, you wouldn’t want to kick up a stink), I feel like it was a tempest in a teapot.

Docker momentum/penetration and types of applications/workloads

Global 2000 customers have somewhere on the order of thousands to tens of thousands of applications, and across these major firms, less than 5 percent of the applications have been containerized so far. While somewhere between 5 percent and 10 percent of the applications that are being containerized are net-new, microservices-style applications that everyone is talking about all the time, the other 90 percent to 95 percent are just lifting and shifting legacy applications from bare metal or virtual machines to containers. Link

VMware threat…or just legacy gobbling?

Docker bounces back and forth between “replacement for VMware” and “a different thing, so don’t worry about VMware.” In this round of Docker news, there’s been some strong pull towards the “replacement for VMware” camp. To be fair, it’s more like doing both:

In general, says Johnston, customers who move from bare metal or VMs to Docker containers can provision, scale, and deploy applications up to 75 percent faster, and those moving from bare metal to containers can save 50 percent on compute and those who are moving from VMs will save around 25 percent. Link

This might also come from the obvious move to start gobbling up legacy (more accurately, “existing”) applications. Here, Docker had two customer references:

Northern Trust, a leading international financial services company, experienced deployment times that were 4X faster and noted a 2X improvement in infrastructure utilization

And, Microsoft IT:

Microsoft is not only a partner in this program; their IT organization is also a beta customer. Microsoft IT increased app density 4X with zero impact to performance and were able to reduce their infrastructure costs by a third.

There was also a story of Visa using Docker:

Kocherlakota said Visa is aiming to move as many workloads as it can to the container model to help improve overall efficiency.

See more on this legacy migration stuff and the program with Avanade, Cisco, HP, and Microsoft from Docker’s Scott Johnston.

Major vendors

Other tech companies are often cautious about working with Docker. They’re not really certain about how it helps or threatens their position in the IT stack and, therefore, their ability to sell higher profit margin products and services. No one wants to become the x86 manufacturer of the cloud (read: low margin, commodity).

I’ve noticed this cautiousness slightly melting as more and more vendors are at least putting their stuff in Docker images and, on the public cloud front, supporting the use of Docker. My company, Pivotal, ingests Docker images.

A brief whack at why Microsoft cares, from Christopher Tozzi:

Although there remains work to do to get Docker on Windows ready for prime time, the platform will be important in helping Windows Server stay as nimble as Linux environments in hosting the workloads of the future…. Microsoft’s interest in Docker may seem strange. Microsoft already offers traditional virtual machine products, most notably Hyper-V. In some respects, Docker containers compete with virtual machine platforms…. But that’s not necessarily the case. Depending on how they’re used, containers can complement virtual machines, rather than replace them. If you use virtual machines to host the environment in which Docker runs, your Docker environment becomes more scalable and portable than it would be if it ran on bare metal. That’s likely the type of use case Microsoft envisions for containers on Windows.

More from Nick Martin on Microsoft and Docker.

Oracle bundling middleware in Docker containers:

Oracle becomes the latest enterprise IT vendor to jump on the Docker container bandwagon as it seeks to expand its reach in the public cloud market. Among the container-based application, middleware and development tools made available on the container platform are Oracle’s MySQL database and its WebLogic server. Those tools are in addition to the more than 100 images of Oracle products already available on Docker Hub, its cloud-based image registry.

So, what’s going on here? Staking a claim on The New Stack

I’m often asked to explain all the various cloud stacks, to help Pivotal buyers sort out what CaaS, PaaS, cloud-native, and “cloud strategy” mean. They’re trying to figure out their planning for building out new IT, for “doing DevOps.” It’s a mess out there w/r/t figuring all this out if you’re not a vendor or analyst who’s steeped in this shoggoth every day.

In all the Docker, container, and cloud-native wars, the revenue battle for vendors is mostly about two things:

  1. The pool of money in simply migrating VMware workloads to a new, more efficient layer (hence the ongoing attention to “the VMware threat” that Docker poses). I’m not sure how big this market is because, as a disruptive shift (cf. Linux vs. UNIX vs. Windows vs. z), part of it is reducing the overall spend through lower prices and more efficient usage. But the existing virtualization market is best described as “fucking huge.”
  2. Fighting over who “owns” (and therefore collects the most profit from) the stack that companies are using to build and run their software. By my estimate, this is something like a $20-25bn market in the future. You can see a Spanish Civil War-like precursor going on in the Java application server market; it’s spreading to a “World War” with respect to all custom software stacks.

On that second point, here’s my latest attempt to describe how things are shaking out category/definition wise:

Of all the SPI cloud categories, PaaS is the most problematic, as all of us vendors hate the PaaS term and are trying to re-define what it means. I would currently break PaaS into two categories: (1) container orchestration and (2) cloud platform.

Container orchestration takes an IaaS and manages the installation and configuration of container images on your new cloud. By “images” here, I mean that you’ve chosen to put your software (probably custom-written software, not packaged software) into containers (or the delegated way we do it with buildpacks in CF), specified how all the different nodes are wired together with all the ACLs and configuration, and then handed it over to the orchestration software to deploy those containers, set the configuration, and do the ongoing health-checks/remediation.

Ideally, the orchestration platform should also have “day 2” tools to help you monitor and manage (“fix”) problems that happen in production. I assume things like Kubernetes, the Docker/Moby constellation of things, Mesosphere, etc. fit here.
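
For a feel of what that deploy/health-check/remediate job boils down to, here’s a bare-bones reconcile loop sketched in Go. It’s a cartoon of what orchestrators like Kubernetes do continuously, not any real orchestrator’s code; all the names and the fixed desired-state map are hypothetical.

```go
package main

import (
	"fmt"
	"time"
)

// Desired state: how many instances of each container image should run.
var desired = map[string]int{"billing:1.4": 3, "catalog:2.0": 2}

// runningInstances reports actual state; a real orchestrator would query
// its node agents. Faked here so the sketch is self-contained.
func runningInstances(image string) int { return 1 }

func startInstance(image string) { fmt.Println("starting", image) }
func stopInstance(image string)  { fmt.Println("stopping", image) }

// reconcile converges actual state toward desired state: the core
// health-check/remediation move every orchestrator makes.
func reconcile() {
	for image, want := range desired {
		have := runningInstances(image)
		for ; have < want; have++ {
			startInstance(image) // replace crashed or missing instances
		}
		for ; have > want; have-- {
			stopInstance(image) // scale down over-provisioned ones
		}
	}
}

func main() {
	// Loop forever: health-check, remediate, repeat.
	for range time.Tick(10 * time.Second) {
		reconcile()
	}
}
```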

People are obsessed with container orchestration now and it’s pretty much all anyone talks about. I think all this is what’s becoming known as “CaaS” – Containers as a Service.

(On this next section, I’m extremely monetarily biased, of course.) A cloud platform either has or depends on an orchestration layer, but adds in integrated middleware, ALM tools (from basics like “cf push” on up), and an overall programming and deployment model with all the tools and enforcements. Heroku is the classic example here in public cloud, and now Cloud Foundry (CF) has taken over this model in public and private cloud, the latter (it seems) being where most of the usage and money is, at least in the enterprise space. I’d argue that CF is the enterprise market leader (by revenue at least, but increasingly by penetration in the F500: while Pivotal has impressive numbers, throw in the other CF distros and it’s even larger, no doubt); at the very least, it has “the highest growth and in enterprise production usage.” That all depends on how you slice it, and of course my slicing favors me.

A cloud platform “pulls together” everything into a fully working “cloud” that deploys and provisions the servers, builds/maintains/deploys the containers, takes care of your networking configuration and concerns (inc. firewalls, etc.), and configures/manages all the middleware needed (e.g., “I want a database” means you just ask for it, instead of having to configure it, make container images of it, and specify how it all works together).
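
As a concrete example of “you just ask for it”: on Cloud Foundry, after an operator runs cf create-service and cf bind-service, the platform injects the database credentials into the app’s VCAP_SERVICES environment variable. Here’s a small Go sketch of the app’s side of that contract; exact credential keys vary by service broker, so treat the “uri” field as an assumption.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// boundService mirrors the common shape of a VCAP_SERVICES entry; the
// "uri" credential key is typical but broker-specific, hence an assumption.
type boundService struct {
	Name        string `json:"name"`
	Credentials struct {
		URI string `json:"uri"`
	} `json:"credentials"`
}

func main() {
	// On Cloud Foundry, bound services show up as JSON in this env var:
	// a map of service labels to arrays of bound instances.
	var services map[string][]boundService
	if err := json.Unmarshal([]byte(os.Getenv("VCAP_SERVICES")), &services); err != nil {
		log.Fatal("no VCAP_SERVICES found; are we running on a CF-style platform?")
	}
	for label, instances := range services {
		for _, s := range instances {
			// e.g. "p-mysql / orders-db -> mysql://user:pass@host:3306/db"
			fmt.Printf("%s / %s -> %s\n", label, s.Name, s.Credentials.URI)
		}
	}
}
```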

The end goal of a cloud platform is the original end-goal of a PaaS: developers don’t have to “setup” any of the infrastructure or, really, middleware (databases, queues, etc.) that they use: they just write the “business logic” of their applications.

All this standardization is technically “restrictive” (developers can’t just install anything they download off the Internet, it has to be integrated into the platform). This is why we often call this model “opinionated,” but it follows the same contract/promises model that Google SREs follow: we promise we can support your applications in production if you use only the things we support, otherwise it’s all on you.

However, the benefit of such opinions is a huge jump in productivity, as we see at all our customers: one Pivotal customer manages 1,000+ applications (all angled toward very frequent, DevOps-style releases for fast feedback loops and all that small-batch stuff) with just 4 PCF operations staff, etc.

Our DIY white paper makes the case that snow-flaking this all out is a bad idea. At the very least, if you build your own platform, you should try to have just one, used organization-wide.

In comparing CaaS and cloud platform, the key distinction to me is that a cloud platform bundles and integrates together all your middleware and “services” frameworks. For example, if you want to do microservices with all the bulkheads and such, that functionality should be built into the cloud platform: you shouldn’t have to go read up on how to set most of that up. PCF, of course, has Spring Cloud and more for that. All of the systems management tools (things used in production to detect and fix problems) should also be built in, or the cloud platform should be instrumented so deeply that third-party tools can do the managing as well.
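
To show the kind of thing the platform saves you from hand-rolling, here’s a toy circuit breaker in Go, the sort of bulkhead Spring Cloud gives you off the shelf on PCF. This is an illustrative sketch of the pattern, not the Spring Cloud implementation.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// ErrOpen is returned while the breaker is tripped: fail fast instead of
// piling more load onto a struggling dependency.
var ErrOpen = errors.New("circuit open: failing fast")

type Breaker struct {
	mu        sync.Mutex
	failures  int
	threshold int           // consecutive failures before tripping
	cooldown  time.Duration // how long to stay open once tripped
	openUntil time.Time
}

func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if time.Now().Before(b.openUntil) {
		b.mu.Unlock()
		return ErrOpen
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.threshold {
			b.openUntil = time.Now().Add(b.cooldown) // trip the breaker
			b.failures = 0
		}
		return err
	}
	b.failures = 0 // a success closes the breaker again
	return nil
}

func main() {
	b := &Breaker{threshold: 3, cooldown: 30 * time.Second}
	err := b.Call(func() error { return errors.New("recommendations service down") })
	fmt.Println(err) // callers can degrade gracefully instead of hanging
}
```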

Now, these two categories are likely to converge, and then the discussion will just be which cloud platforms are more featureful and better. It’ll be like battling Java application servers.

I haven’t made one of my own “burger” stacks of all this in a long time, but I think (again, highly biased) the ones we use for PCF are pretty good.

More

In case you don’t know, working at Pivotal, I obviously have a stake in how all this turns out, so I’m biased on multiple angles of the above whether I want to be or not. 

With no competition, government websites often have no incentive to be good

In contrast to agile, private-sector companies, the public sector does not face any pressure from competition. When it comes time to renew your license, there is only one place for you to do that: and, unfortunately for Americans, that’s the DMV. With no competitive forces, government agencies do not have to innovate or take bold risks when it comes to digital.

And, as ever, being smart about using updated tools and new methods yields huge productivity results:

While running technology for Obama’s WhiteHouse.gov, open-source solutions enabled our team to deliver projects on budget and up to 75% faster than alternative proprietary-software options. More than anything, open-source technology allows governments to utilize a large ecosystem of developers, which enhances innovation and collaboration while driving down the cost to taxpayers.

While open source has a different cost dynamic, I’d suggest that simply switching to new software, to get the latest features and the mindset that the software imbues, gives you a boost. Open source, when picked well, will come with that community and an ongoing focus on updates; older software that has long been abandoned by the community and vendors will stall out and become stale, open or not.

With most large organizations, and especially government, simply doing something will give you a huge boost in all your KPIs in the short term. Picking a thriving, vibrant stack is critical for long term success. Otherwise, five or ten years from now, whether using open or closed source, you’ll end up in the same spot, dead in the water and sucking.

Link

DIUx working on streamlining IT projects at the DoD

Since May 2016, DIUx has completed 21 contracts using other transaction (OT) authority and the average time is 78 days, Shah said at the New America Foundation Future of War summit in Washington.

The mission of DIUx, he said, “is to do agile culture change.…We are never going to be the acquisition arm of the Department of Defense, we’re not the R&D arm of the department.”
DIUx has so far comprised $42 million in program funding, which Shah characterized as a “rounding error of a rounding error” of the DOD budget.

Hey, they’re trying over there in the government. It ain’t easy. I’ve met with some of the folks there and they sure seem genuine about fixing things up and curious to work closer with the civilian IT world.

When I meet with military people they use the word “agile” over and over: meaning, they’re incredibly interested in modernizing. It’s just the tiny matter of figuring out how to get from here to there.

Link

Vanguard’s thinking on microservices

Breaking up the monolith with good, old-fashioned OO-think:

Instead, Vanguard has begun a journey to break apart our monolithic legacy systems piece-by-piece by replacing them with microservices over time. With a microservices architecture, we remove the business logic and data logic from our applications and replace it with a set of re-usable modules of code that are built and deployed as independent entities. We then complement this architecture by chunking out our user interfaces into modular purpose-built components.

De-coupling for stability and resiliency, among other things:

This service-based approach to application architecture provides a variety of advantages over the jumble of code that defines a non-modular monolithic application. First, services reduce redundancy by making sure there is only one copy of application logic for a given capability – regardless of how many applications leverage that logic. In the long run, this leads to lower development costs and increases speed to market. Second, since these services are deployed independently and built in a resilient manner, outages in one area of an application are less likely to bring down an entire system. In some instances, several of our services can be down without our clients being aware of a loss in functionality thanks to the ability of our applications to automatically react to a service that isn’t available. Finally, services enable our applications to scale easier. The marriage of cloud and services means we can quickly spin up infrastructure to handle surges in the number of transactions we need to handle without needing to scale up an entire application.
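
That “clients unaware of a loss in functionality” idea deserves a tiny illustration. Below is a hedged Go sketch, with invented service names and nothing Vanguard-specific: the non-critical service gets a short timeout and a fallback, so when it’s down the rest of the page still renders.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

// A short timeout keeps a sick dependency from stalling the whole page.
var client = &http.Client{Timeout: 500 * time.Millisecond}

// fetchRecommendations calls an independently deployed microservice; on
// any failure it returns nil, so the outage stays contained to one feature.
func fetchRecommendations(accountID string) []string {
	resp, err := client.Get("http://recommendations.internal/v1/accounts/" + accountID)
	if err != nil {
		return nil // degrade gracefully: just no recommendations this time
	}
	defer resp.Body.Close()
	var recs []string
	if err := json.NewDecoder(resp.Body).Decode(&recs); err != nil {
		return nil
	}
	return recs
}

func main() {
	http.HandleFunc("/portfolio", func(w http.ResponseWriter, r *http.Request) {
		recs := fetchRecommendations("12345")
		// Core content renders whether or not the recommendations service is up.
		fmt.Fprintf(w, "holdings: ...\nrecommendations: %v\n", recs)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```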

Vanguard CIO: Why we’re on a journey to evolve to a microservices architecture

Pivotal Conversations: “Running like Google,” the CRE Program & Pivotal, with Andrew Shafer

The summary:

What does it really mean to “run like Google”? Is that even a good idea? Andrew Shafer comes back to the podcast to talk with Coté about how the Google SRE book and the newly announced Google CRE program start addressing those questions. We discuss some of the general principles, and “small” ones too, that are in those bodies of work and how they represent an interesting evolution of how IT management is done. Many of the concepts that the DevOps and cloud-native community talks about pop up in Google’s approach to operations and software delivery, providing a good, hyper-scale case study of how to do IT management and software development for distributed applications. We also discuss Pivotal’s involvement in the Google CRE program.

Check out the SoundCloud listing, or download the MP3 directly.