Coté Memo #28: Yet another DevOps landscape, webinar tips for analysts

(I’ve had a little email newsletter for some time. It’s fun! People like it and write to me! Rather than rely on the archiving at TinyLetter, I thought I’d post the archives here. However, feel free to subscribe to the newsletter in its proper format, email…or just read it here, whatever you like.)


Hello again, welcome to #28. Today we have 33 subscribers, so we’re +/-0. I’d love to hear what you like, dislike, your feedback, etc. (If you’re reading this on the web, you should subscribe to get the daily email.)

See past newsletters in the archives, and, as always, see things as they come at and @cote.


Tech & Work World

Quick Hits

The DevOps Landscape

I need to put together a stronger DevOps research agenda at work. We actually have a great paper from 2010 that Jay Lyman wrote, but there’s a certain systematic set of material that’s good to have on most topics.

In 451 speak, here’s the body of work I’d want to see over the course of a year on the topic:

(1.) Define DevOps with a taxonomy and “landscape”

(1.a.) Write down and categorize all the relevant vendors and projects

(2.) Write a Spotlight defining the space, going over concerns/best practices for buyers (“enterprises” or “end-users”), vendors, finance (these could be separate spotlights)

(3.) Write a SectorIQ [this covers potential acquisitions in a space] going over startups in there

(4.) Write a TBI [30-40 page PDF, “long form report”] or Spotlight that’s a “buying guide” targeted at enterprises that goes over how to short-list options. For DevOps, this would include numerous open source projects as well.

(5.) Do all the usual weekly company coverage of people in the space as defined by 1.a.

You know, just a short list of stuff.

To that end, I started a mindmap to think about how to slice up “DevOps” and eventually list vendors, projects, and practices that would drive our research and what we focus on. The mindmap is likely to be thrown-up all over, thrown away, and evolved; I think I’ve spent about 15 minutes on it so far. But I’d be curious for pointers and thoughts on how to put this all together.

A cursory lmgtfy brings up numerous other slices at this over the years:

I need to do the above for two primary reasons:

  1. We’re getting lots of inquiry from vendors, enterprises, and finance to understand the space. They just want definitive coverage of it.
  2. I need to narrow down our focus on DevOps and add in the discipline of having a “list” of topics we regularly cover.

It goes without saying: I’d love your input!

P.S.: is MindMeister the best option if I don’t want to shell out for MindManager?

How to do a webinar for analysts

I was doing a webinar today, and when it was my time to be quiet, I tapped in some tips on what analysts are supposed to do in webinars (I guess in addition to paying attention when they’re not talking ;>).

The framing is that analysts are often brought in to do webinars with vendors. The commercial goals are to (a.) help draw an audience, and, (b.) get some credibility and interesting content from the analyst. In other words: it’s a marketing activity, from the vendor’s viewpoint. Typically, the analyst speaks to “macro trends” for half of the webinar, the vendor pitches how their product helps you, the customer, profit from those macro concerns, and then there’s question and answer. You do webinars for thought leadership and lead-gen.

In no particular order, here’s some tips for analysts doing vendor webinars:

  • Timing is critical: webinars often involve more than one person, so you’re stealing time from others if you run long.
  • Don’t mention rival vendors or “solutions” in any but the vaguest way.
  • No need for a lot of context setting and explanation; just focus on simple, direct points without educating too much – you need to move quickly.
  • Try not to sound bored.
  • Don’t be dismissive of the core concepts under discussion, e.g., “cloud, you know, some people like it.”
  • You’re setting up the audience’s minds to listen to the vendor pitch, so leave them thinking happy thoughts.
  • Cut out parenthetical asides; they take up time.
  • Practice and write-up your talk, even if you don’t read the script. This is a performance, not a “talk,” discussion, or a podcast.
  • Live vs. recorded: it doesn’t matter that much, but the economics of webinars mean little to no editing should be done. There’s really no benefit to being live.
  • Q&A: questions from the audience are good input for you; otherwise, use canned questions.
  • Slides should match the talk. In this instance, you kind of are reading the slides; it’s not keynote stuff. But be brief. If you have 5 points on a slide – 5 data points on a chart – just talk about 2 or 3 to cut down time.
  • Give the vendor lots of time to talk, unless they don’t want to.
  • For the vendor: Publish a recording for maximal value.
  • For the vendor: demos are a nice thing to do.

Fun & IRL

What’s more fun than tips on doing webinars?!



We discuss thinking beyond human error as Bill starts to summarize the book Behind Human Error. It’s always helpful to look at how the system and process caused the wrong move. Also, thinking about hardware, and some nice feedback from designers.

Subscribe to the feed:

Your friends @cote and @BillHiggins

Hardware, what is it?

Follow-up on “design”

Human Error

  • Bill goes over Behind Human Error, causing us to discuss how various pipelines (systems) in product management work in waterfall and non-waterfall mode.
  • How do product managers fit in to a design-heavy pipeline?

The decline of Novell

I’ve been reading up on Novell’s history. So far it’s got some fascinating twists and turns. Wikipedia sums up the turning point well:

The inclusion of networking as a core system component in all mainstream PC operating systems after 1995 led to a steep decline in Novell’s market share.

That is, once networking became “commoditized,” the unique position Novell had with IPX changed. And then there were some channel hijinks that happened.

I’m also obsessed with figuring out what went wrong at Sun in the 2000s – Novell seems like some good mental training wheels for that.


How to be a hardware analyst…?

Eucalyptus cloud on Dell hardware

After reading TPM’s, as ever, great, deep coverage of some newfangled piece of hardware, I got to thinking: I don’t really know how hardware analysts approach their craft. What framing and context do they use to understand, evaluate, and judge any given chunk of hardware?

I’ve never been much of a hardware person (which was an odd strength while I was at Dell, being that I was there to work on software strategy). However, I keep coming across converged infrastructure companies and products that start to get my “that seems interesting” senses tingling. For example, in the piece I linked to above, it seems interesting that VMware is going to have an OCP-compliant product in the market, through Quanta. I also spoke with several VCE people at VMworld, and their premium price business model is intriguing (it reminds me of the IBM model: we’ll send a plane full of people the instant you have a runny nose – which leads to lots of talk of time to value, ROI, etc.).

Being a software person and, worse, someone who was not formally trained in computer science, the only way I think about hardware is, essentially, is it faster and cheaper? Which I know is very naive.

In software, you can ask that question, but the question is also more about the capabilities software gives you and how businesses either can benefit or be harmed by it (like using cloud to more quickly deliver new features to production), and “culture shifts” (like BYOD or going from Office to Google Apps, using SDN or virtualization to change how IT is architected and deployed, etc.).

Again, to me, a “software person,” any “advance” in hardware is just a question of being faster and cheaper…but I’m assuming there’s more to it.

(We discussed this in the opening of today’s Under Development podcast. Bill had some good answers.)

In an API-driven cloud, Intigua wants to wrap APIs around your management midsection

Intigua's stack vision

A report I wrote on Intigua is up now. Here’s the 451 Take for y’all now:

Intigua has always been a company with a difficult marketing proposition, having started off as a packaging and deployment balm for systems management agents. While there is certainly utility to ‘managing the managers,’ a broader positioning and purpose was clearly needed. Intigua’s new positioning as an enabler of cloud management APIs looks encouraging, and if the company can extend into ‘orchestration’ as a consequence, it can start addressing one of the major gaps of large enterprises that are ‘going cloud.’ It’s nice that all of those cloud-native companies can manage tens of applications with their devops and cloud approaches – but how will the large companies of the world manage the tens of thousands of applications they’re beset with?

In talking with some folks who’ve been dealing with so-called “APIs” at the infrastructure stack…there’s a lot of work to do to make the management layer APIs behave like one would expect. Because WS-*.

Intigua bought re-print rights to the last piece I wrote on them, so you can read it for free on their site.

Clients can read the full report; sign up for a trial if you’re not signed up with us yet.


Hardware is the price variable

With EVO, VMware is pitting the hardware vendors against each other for deals that will likely involve hundreds to thousands of nodes in large enterprises, and the competition will drive down hardware prices and therefore the overall price of the EVO solution. If hardware costs less than it might otherwise without such pressure, that extra margin can come from the software and support in the EVO stack.

It’s rough being a hardware vendor. At the VCE level, pro-services is another margin lever to play with (mostly to increase price, not discount it), but that’s a bit up market.


Zenoss is on the hunt for large enterprises with a little help from Hadoop and Docker (451 Report)

Zenoss Sponsored Netbook

Back in my RedMonk days, I spoke with Zenoss a lot, so it was nice to finally catch up with them again. They’re moving up-market and spending much time beefing up their back-end to handle the resulting, larger-scale demands for a systems management platform in the enterprise space.

The full report is available for 451 clients, but here’s the 451 Take:

Zenoss has been undergoing much change in recent years. While other startups were snatched up and folded into larger vendors’ emerging cloud portfolios, Zenoss remained independent. The company has been transforming from its open source roots and now is solidly a commercial company, focusing upmarket on $45,000+ deals instead of smaller accounts. This is a wise move that lifts Zenoss out of competing at the low end (where the expansive nature of the platform makes the proposition too expensive) and allows it to focus on large enterprises that tend to like overstuffed systems management portfolios vs. the point tools from the likes of SolarWinds and others, which gobble up cash in the midmarket and below. As companies are switching their IT over to more cloud-like infrastructures, management vendors like Zenoss that can keep up with the new demands should find opportunities for growth.

Is it working? Further in the report we cover the financial metrics that are known:

The company says it has seen 30% Y/Y revenue growth and is now ‘north’ of $20m in annual revenue (Inc. reported its 2013 revenue at $22.4m). Zenoss says this is a record high and that it has a 93% renewal rate.

If you’re not a client, sign up for a trial to take a peek behind our paywall.


What VMware means when they say “hybrid cloud”

Gartner’s @cloudpundit has a great way of summing up VMware’s future-proofing problems when it comes to their strategy.

tl;dr: they need to straddle two worlds, pre-cloud and post-cloud infrastructure. When VMware says “hybrid cloud,” that straddling of “legacy” IT and “real cloud” seems to be what they mean:

That brings us to VMware (and many of the other traditional IT vendors who are trying to figure out what to do in an increasingly cloud-y world). Today’s keynote messages at VMworld have been heavily focused on cost reduction and offering more agility while maintaining safety (security, availability, reliability) and control. This is clearly a message that is targeted at traditional IT, and it’s really a story of incremental agility, using the software-defined data center to do IT better. There’s a heavy overtone of reassurance that the VMware faithful can continue to do business as usual, partaking of some cool new technologies in conjunction with the VMware infrastructure that they know and love — and control.

But a huge majority of the new agile-mode IT is cloud-native. It’s got different champions with different skills (especially development skills), and a different approach to development and operations that results in different processes and tooling. “Agility” doesn’t just mean “faster provisioning” (although to judge from the VMware keynote and customer speakers, IT Operations continue to believe this is the case). VMware needs to find ways to be relevant to the agile-IT mode, rather than just helping traditional-IT VMware admins try to improve operations efficiency in a desperate grasp to retain control. (Unfortunately for VMware, the developer-relevant portions of the company were spun off into Pivotal.)

This last parenthetical point is what always confus[es|ed] me about the Pivotal divestiture. I get all sorts of answers depending on who I ask, the official one (as far as I understand it) is always the least interesting, of course.
