Questions around audit and compliance always come up in discussions about improving software, and certainly when introducing things like continuous delivery, DevOps, and especially something as big and different as Pivotal Cloud Foundry. To that end, I wrote up a way to approach those issues, along with a few tips for dealing with compliance and audit, for my FierceDevOps column last month.
The onerous steps auditors want you to follow were usually put in place for good reason, but, as I put it:
Unfortunately, the way that three-ring binder wielding ninjas and IT staff battle it out over these and other compliance check-lists often loses sight of the original, good intentions. Instead, it infects everyone with a bad case of table-flipping madness. Thanks to cloud technologies and the empathy over table-flipping approaches in DevOps, we’ve been finding ways to get over compliance hurdles and even, in some cases, make compliance projects easier and better.
There’s a summary on the Pivotal blog, and/or you can check out the full piece.
(Binders picture from tookapic)
From "Survey Analysis: DevOps Adoption Survey Results," Sep 2015. You can also see it presented in this recorded presentation, with more commentary, along with Nathan Wilson and George Spafford's overall take and advice on DevOps.
I was asked to be on a panel for the first Docker Austin meetup of the year, tonight, Jan 7th at 6pm. Here’s some slides I put together in my capacity as “person who used to put together slides like this and is trying really hard not to do his job pitching Pivotal to avoid being rude” (well, except for a shameless plug or two):
See ya there!
Update: I’d wanted to put a TAM in – the money Docker and containers are going after. This is from Gartner:
While virtualization is an imperfect proxy (you’d probably also need some systems management and maybe even appdev numbers in there), I think looking at the current x86 virtualization TAM is as good as you’re going to get with a conservative approach.
My reasoning is that if “the market” is willing to pay this much for virtualization now, that’s the kind of footprint and allocation we should start looking at for “containers” (over more of a 10 year time span, probably).
For this kind of hand-wavey, far-future TAM’ing, what’s plus or minus a billion or so anyhow?
I don’t do press passes as much as I did when I was an analyst, but here’s one from a recent email interview for a ProjectsAtWork story:
Q: What’s your favorite tip to improve collaboration when an organization moves to agile and DevOps?
A: I think the core DevOps thing with collaboration is getting people to trust each other. Most corporate cultures are not built on people trusting each other and feeling comfortable: they’re based on competitive, zero-sum structures or command-and-control management at best.
Organizations that are looking to DevOps for help are likely trying to innovate new software and services and so they have to shift to a mode of operating that encourages collaboration and creativity. Realizing that is a critical step: we want to create and run new software, so we need to understand and become a software producing organization.
In contrast, you operate differently if you’re just driving down costs each quarter and not creating much with IT. We’d counter-argue that if you’re a large organization and you’re not worrying about software, then you’ll be creamed by competitors who are becoming software organizations.
If forced to pick one tip to increase collaboration I would say: do it by starting to work. How you do this is to pick a series of small projects and slowly expand the size of the projects. These projects should be low profile, but have direct customer/revenue impact so that they’re real. It’s important for these projects to be actual applications that people use, not just infrastructure and back-end stuff. It will help the team understand the new way of operating and at the same time help build up momentum and success for company wide transformation later down the road.
As a basic tactic, Andrew Shafer has a fun, effective one: have people on the team write fantasy press releases about each other to start building trust.
(See the full piece by Will Kelly over on the site.)
With consumer SaaSes and mobile apps coming and going, I’ve been thinking of the idea of “disposable software”: apps that last a year or so, but aren’t guaranteed to last longer. In the consumer space, there’s rarely been a guarantee that free software will last – that’s part of the “price” you pay for free.
This mentality is getting into business software more and more, however, and I don’t think “enterprises” are prepared for it. Part of the premium you pay for enterprise software should include the guarantee that it will have a longer life-cycle, but it’s worth asking if it does.
Also, it’s good for enterprises to be aware that vendors, particularly open source driven ones, are putting out code that might be “disposable.” The prevailing product management thinking nowadays encourages experimenting and trying things out: abandoning “failed” experiments and continuing successful ones. Clearly, if you’re a “normal” enterprise, you want to avoid those failed experiments and, at best, properly control and govern your use of them.
Of course, there are trade-offs:
- With consumer, experiment-driven software, you’re always getting the newest thinking, which might turn out to be a good idea and provide your business with differentiating, “secret sauce”; or it might be a failed experiment that gets canceled
- With “enterprise,” stable software, you can generally count on it existing and being supported next year; but you’ll often be behind the curve on innovation, meaning you’ll have to layer on the “secret sauce” yourself.
It’s good to engage with both types of strategies; you just have to manage the approach to hedge the risks of each.
I gave a talk at Gartner AADI, US going over the need for organizations to become good at software (you know, our usual thing at Pivotal) and some thinking we have about the three pillars of becoming a software defined business (software defined delivery, DevOps, and microservices), as well as the “contracts and promises” way of looking at what Pivotal Cloud Foundry does. I managed to jam it all into 30 minutes. Here’s the abstract:
If software is eating the world, software capability is the disruptor’s advantage and the disrupted’s vulnerability. Continuous Delivery, Microservices and DevOps are three labels that describe aspects of the same phenomena; the principles and practices of high performing organizations that deliver highly available software, rapidly, at scale. This presentation catalogs the capabilities that allow organizations to move quickly, reliably and economically in an end-to-end infrastructure-to-application platform; these Cloud Native advantages outlined as promises and contracts.
In addition to the slides, check out the video recording from Gartner – they’ve got a fancy interface with the video and slides synced up.
I’m at Gartner AADI this year, the first time I’ve been to a Gartner conference. One of the sessions was a read-out of a recent survey about agile. While a small sample set – “167 IT leaders in 33 countries” – it was screened for people who were familiar with or doing agile of some kind. As with all surveys of this type, it’s interesting to look at the results both for “what people are doing” and for how they’re thinking about what they’re doing. Here’s some slides I took pictures of during the talk:
My first take-aways are:
- Well, Scrum is popular.
- Most of the “stumbling blocks” are, of course, meatware problems: people and “culture.”
- Pair programming, as always, gets no respect.
- Organizations want to use agile for speed, not for cutting costs.
As I mentioned earlier, Roy and I had a webinar today about Dell’s cloud portfolio and strategies. Here’s a recording of it, enjoy.
And, there’s also one on DevOps, from Barton and Willis.