Greenfield cloud projects, webinar recording

The recording of my webinar on greenfield cloud projects is up. It’s based on the first part of my series on getting a good cloud strategy in place and executing it. There are two more webinars coming up: one on working with legacy applications and one on IT department transformation.

Here are the slides if you want them.

Avoiding screwing up your cloud strategy: the greenfield journey

I’m doing a series of webinars based on my cloud native journey blog series; see the slides above (once the recording posts, I’ll embed it here as well!).

The gist of this series is my collection of advice on getting your cloud strategy right, mostly for large organizations. It starts with defining why you’d care (custom written software can now be used as a core competitive advantage, like never before), what the goals are (getting good at custom software development and delivery), and then gives advice across three different phases (greenfield, legacy, and organization transformation), or parts of the “maturity cycle” (a phrase I didn’t really use in the series).

Check out the first webinar on Nov. 5th at noon central, with two more coming in December, on the 1st and then the 15th.

What does IT need to start doing to become a software defined business?

I was asked to do an internal, “brown-bag” style talk at a company this week. I chose to do a slightly more technically-oriented version of the talk I tend to give: commentary and pointers on moving your organization over to relying on more and more custom written software to run your business. Here, I give a brief business context and then throw out three areas to start focusing on if you’re interested in cloud, DevOps, and all this nonsense. The recommendations are to look into continuous delivery and DevOps, figure out cloud-native applications (and microservices), and then plan out your cloud platform strategy.

As ever, I’m trying to make actionable a few things that are often fuzzy. I don’t accomplish that too well – a separate hour-long talk on each topic would be better – but I at least hope to explain each thing, say why it’s useful, and give some pointers for further study.

I put the slides up previously, but because people often ask me for the talk track, I thought I’d record a rehearsal run and post it here. So, check out the video if you’re into this kind of thing.

The Coming Donkey Apocalypse, DevOpsDays Austin recording

My talk from DevOpsDays Austin is up. Check it out if you’ve been curious about the talk track to the mute slides.

As a reminder, there’s a prose version of the talk available as well in one of my FierceDevOps columns, and here are the slides.

Check out my love affair with “uh” in the beginning; I think it clears up a bit toward the end.

What are normal people doing with continuous delivery?

[Embedded video: https://player.vimeo.com/video/120746550]

My latest Pivotal blog post is up; it recaps a presentation I did recently covering what “the market” is doing with continuous delivery.

There’s a lot of opportunity; the glass is half full. See the slides over in my previous post on this talk.

Also, check out the recording of the full talk (it has some bonus material on containers’ recent role in CD) from HeavyBit, embedded above.

Coté Memo #055: It’s cold in Toronto

Follow-up

Tech & Work World

Quick Hits

“Script” for Docker orchestration

Oftentimes, I write a “script” for presentations. I never read it for the presentation (maybe I should!), but it helps me organize my thoughts and the presentation itself. Here’s the one I used for the Docker orchestration webinar I did with CloudSoft last week:

Macro-context: the demand for building your own PaaS – DevOps, cloud, etc.

[chart on DevOps delivery pipeline, old dto solutions DevOps pipeline]

I like to reduce things down to brutal simplicity. I have a lot going on in my work and personal life, so I have to leave nuance for entertainment. To me, cloud is mostly about supporting custom written software, whether that’s for something like a SaaS (from social to ERP), consumer-facing business applications (like online banking), or applications used by companies to help run their business. And what that means is creating a delivery pipeline that encompasses all the phases of an application’s life and automates each step as much as possible to reduce bottlenecks and increase throughput. You have to look at this pipeline as a mission-critical process in your business: from development to production, it’s your factory, the thing that helps you make money…not just a cost center. You’re pushing out incrementally improving software that runs your business.

Custom written software is the most valuable workload for cloud, I’d theorize, where value is measured by how much the technology can help a company gain competitive differentiation.

So, something like Docker is especially interesting because it promises to speed up that pipeline. I don’t think anyone knows exactly what it will shape up to be, but it looks like the “private PaaS” answer we’ve all been looking for.

[chart with SaaS, PaaS, ISaaS, IaaS market-sizing – put OpenStack market-sizing in there next to it]

Now, PaaS is an odd category of public cloud. It seems like the perfect realization of the efficiencies of cloud, and yet it keeps limping along as a market, as this 451 market-sizing shows.

Much of that revenue comes from Salesforce, which has its own Force.com platform (a “PaaS for SaaS,” as we call it) and Heroku.

[chart on demand for private cloud]

For some reason, developers and companies have a lust for building their own cloud: they either get their own gear or rent raw IaaS and build up their own stacks to support as automated a DevOps delivery pipeline as possible. Maybe it’s cost (I haven’t ever heard a developer say public PaaSes like Heroku are cheap), maybe it’s the need to customize functionality exactly, maybe it’s good old paranoia and FUD (which could be justified, who knows).

Whatever the case, people want control over their stacks, and that’s where things get interesting. The more control you have, the more you have to worry about, and the more hassle there is.

We seem to have a long way to go to replicate the magical, effortless push-to-deploy demo we remember from the early public PaaS days. Much of what’s needed currently goes under the title of “orchestration,” which, roughly, means “making sure my complex distributed system is installed and configured properly…and then allowing me to modify its runtime characteristics and upgrade it.” You know: getting the application up and running, tuning its performance as needed, and upgrading it.

[insert chart showing rise of new automation tools/brands]

This area has long been the domain of custom shell scripts, manual configuration, and, thankfully, in recent years, configuration management companies like Chef and Puppet (who are taking over the automation reins from OpsWare and Bladelogic).

Docker has burst onto the scene of late as an interesting salve for cloud infrastructure woes. To me, it starts with the right goal: make using cloud easier for developers. That may seem subtle, but it’s different from most infrastructure software goals, which are to make life easier for sysadmins and auditors.

To keep pushing on the dream of being able to build your own PaaS, the ad hoc community around all of this has lately been obsessing over orchestrating Docker-based clouds, let’s call them. So let’s look at that.

Emerging market for orchestration

Mindmap from Krishnan Subramanian

When it comes to Docker orchestration, there are almost too many projects to count, and even a few products. I love this mindmap from Krishnan that shows just how full this market is – and how tedious it is for analysts to keep up with. This is a good sign, however: there’s so much interest and passion in figuring out Docker and how to orchestrate it that surely something will work.

Emerging requirements for orchestration

When I look across what all of these projects are trying to do – and slap in some old IT Service Management think – I come up with a list of requirements for orchestration. Some of them may seem obvious, but it’s always good to be explicit. If you spot ones that are wrong, or missing, you should pass them along and perhaps we can winnow down the list. We don’t need a manifesto or any nonsense like that, but in studying this space, you do find a distinct lack of architectural-level specifications and requirements – which is fine, people are busy coding.

Cluster/fleet management

  • Operate in terms of multiple nodes, not single nodes – configuring a single Docker node is mind-blowingly easy; doing it over 50 or 100 nodes gets tedious, esp. if you want to continually be turning over builds. Pets vs. cattle and all that (a sketch of that tedium follows below).
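
To make the tedium concrete, here’s a minimal sketch of the hand-rolled loop an orchestrator is supposed to replace. It assumes the Docker SDK for Python (docker-py) and 50 hypothetical nodes exposing the Docker remote API; the node addresses and the `run_everywhere` helper are invented for illustration, not taken from any particular tool.

```python
# A sketch of the hand-rolled "fleet" loop an orchestrator replaces,
# assuming the Docker SDK for Python (docker-py) and nodes exposing the
# Docker remote API. NODE_ADDRESSES and run_everywhere are invented names.
import docker

NODE_ADDRESSES = [f"tcp://10.0.0.{i}:2375" for i in range(1, 51)]  # 50 nodes

def run_everywhere(image: str) -> None:
    """Start one container per node: trivial for 1 node, tedious for 50."""
    for address in NODE_ADDRESSES:
        client = docker.DockerClient(base_url=address)
        client.containers.run(image, detach=True)
        print(f"started {image} on {address}")

if __name__ == "__main__":
    run_everywhere("nginx:latest")
```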

Configuration management & automation

  • Application modeling that describes the layout and configuration of various components – this is an old ITSM notion, “service modeling,” which got bogged down in drag-and-drop fantasy (just like UML). You need to model what all the different components are and how they fit together.
  • Basic CRUD – creating nodes, updating nodes, restarting them as needed. You want more than just modeling what a node looks like; you want your orchestrator to actually do something.

  • Separating configuration from basic state – easily modify configuration without having to change too much about each node/image, like changing port numbers without rebuilding the entire node (see the sketch after this list).

  • Ensure proper configuration passing across nodes – passing server names and ports to servers, handing out credentials, wiring in service directories, etc.
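
To show what separating configuration from state can look like in practice, here’s a hedged sketch using the Docker SDK for Python: the same unmodified image gets its ports and database settings injected at run time, so changing a port doesn’t mean rebuilding the node. The image name, environment variables, and values are all invented for the example.

```python
# Sketch: keep configuration out of the image so a port change doesn't
# force a rebuild. Image name, env var names, and values are hypothetical.
import docker

client = docker.from_env()

config = {
    "DB_HOST": "db.internal.example.com",  # wired in from outside the image
    "DB_USER": "app",
    "LISTEN_PORT": "8080",
}

container = client.containers.run(
    "myapp:1.4",                # the immutable part: the image itself
    environment=config,         # the mutable part: configuration
    ports={"8080/tcp": 8080},   # remap the host port later with no rebuild
    detach=True,
)
print(container.id)
```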

Heterogeneous platform support

  • Support for different infrastructure, from bare metal to plain old virtualization to multiple clouds – some might call this “hybrid cloud” or “multi-cloud” – useful just for moving along the pipeline (a rough sketch follows).
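
For what it’s worth, here’s a rough sketch of what heterogeneous platform support tends to look like inside these tools: a thin provider interface the orchestrator programs against, with one adapter per target. This is hypothetical structure for illustration, not any real project’s API.

```python
# Hypothetical provider abstraction: the orchestrator programs against one
# interface, with an adapter per target (bare metal, virtualization, cloud).
from abc import ABC, abstractmethod

class NodeProvider(ABC):
    @abstractmethod
    def provision(self, name: str) -> str:
        """Create or claim a node; return an address Docker is reachable at."""

class BareMetalProvider(NodeProvider):
    def __init__(self, inventory: list[str]):
        self.inventory = inventory  # pre-racked machines

    def provision(self, name: str) -> str:
        return self.inventory.pop()  # hand out an existing box

class CloudProvider(NodeProvider):
    def provision(self, name: str) -> str:
        # a real adapter would call the cloud's API here; stubbed for the sketch
        return f"tcp://{name}.cloud.example.com:2375"
```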

Baby and bathwatering ITSM

  • Asset database to track all your cows – another ITSM trick. This starts getting into “enterprise” needs, but it’s handy even if you aren’t tweedy. You need to know what you have out in the wild and quickly locate it when things go wrong.

  • Capacity management and adjustment of resources – not only monitoring whether you’re over (or under!) capacity, but actually going back to your CRUD operations to make adjustments on your nodes. This is also where keeping configuration separate from node state is handy: you could increase memory, keeping the same configuration, for example, without having to rebuild or swap out nodes. (A toy sketch follows this list.)
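
As a toy illustration of those last two bullets, here’s a tiny in-memory “asset database” plus a capacity check that feeds back into the CRUD operations, again using the Docker SDK for Python. The 90% threshold and the decision to bump memory are invented for the sketch.

```python
# Toy "asset database" plus a capacity check that feeds back into CRUD
# operations, using the Docker SDK for Python. The 90% threshold and the
# memory bump are invented for illustration.
import docker

client = docker.from_env()

# Asset database: what's running, and under what image.
assets = {
    c.id: {"image": c.image.tags, "status": c.status}
    for c in client.containers.list()
}

for container in client.containers.list():
    stats = container.stats(stream=False)  # one-shot stats snapshot
    usage = stats["memory_stats"].get("usage", 0)
    limit = stats["memory_stats"].get("limit", 1)
    if usage / limit > 0.9:
        # adjust resources via the "update" CRUD operation, no rebuild needed
        container.update(mem_limit="2g")
        print(f"bumped memory on {container.short_id}")
```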

ABC

  • Ease of use, and esp. low cost – otherwise, why not just use a full-on PaaS? This is always easy to forget, but it’s sort of the point of all of this. Ask yourself: is this easy and quick to use? If it’s not, something is wrong.

This last point is key. You need to remember that once you’ve done all the above, that’s when the difficult work begins. You still need to come up with an idea for an actual application and its features that will help your business. You need to stop orchestrating and start coding, not to mention working on the product management that will tell you what to code in the first place. Don’t get all caught up in all this Heathkit stuff: save your cycles for the most valuable thing, ABC.

[Always be coding chart]

And with that, I want to pass it over to CloudSoft to tell you how they’re helping you get closer to coding.

Fun & IRL

No fun today, just work.

Sponsors

  • FRONTSIDE.IO – HIRE THEM! Do you need some developer talent? When you have a web project that needs the “A Team,” call The Frontside. They’ve spent years honing their tools and techniques that give their clients cutting-edge web applications without losing a night’s sleep. Learn more at http://frontside.io/cote


I’d prefer my toilet to not fail fast – #InnoIT Think Tank

Dell World Social Think Tank - Enabling Innovation in IT.

Earlier this week at Dell World, I sat in on an afternoon Think Tank moderated by TechCrunch’s Alex Williams. Essentially, we discussed the challenging role of IT nowadays. Per usual, there was much discussion of getting IT to be more innovative and the “threat” that new IT delivery methods like cloud and consumer technologies bring to the status quo. Because technology can do so much, so much faster nowadays, the IT department has a huge challenge and a contradictory mission: IT has to keep the lights on, be stable, and at the same time innovate their brains out.


Being a professional observer of the IT industry and its history, I’ve often found that those two things require different processes, different people, and different technologies. The mindset of keeping things stable and reliable (the five-nines crowd) doesn’t fit with coming up with new stuff. Practices like Agile and the rapid delivery cycles in DevOps can help, but at some point, the two paths of ensuring stability and profiting from disruption are divergent enough that you can’t perfectly co-mingle them…and yet, that’s what we expect from the IT department.

I’ve been reading Taleb’s latest book, Antifragile, and I’m really liking its premise: you want to build systems that benefit from failure and disruption. There might be something of a middle ground in that nuance, and it’s certainly a way of thinking that cloud has benefited from. We’ll see how quickly we can get IT – and corporate! – culture to start embracing failure as educational and helpful instead of something to be avoided, even to the point of doing nothing instead of trying.

Don’t fear “closed” systems

After my Ignite talk at DevOpsDays last week, Barton George did a “so, what did you just talk about?” video, above. It’s a pretty good summary of the point I was trying to make in the talk: we’re well into an “integrated” phase of the IT industry.

And check out all the other interviews Barton did at Velocity and DevOpsDays last week.