Moving beyond the endless debate on bi-modal IT

I get all ants-in-pants about this whole bi-modal discussion because I feel like it’s a lot of energy spent talking about the wrong things.

This came up recently when I was asked about “MVP”, in a way that basically was saying “our stuff is dangerous [oil drilling], so ‘minimal’ sounds like it’d be less safe.” I tried to focus them on the “V” and figure out what “viable” was for their situation. The goal was to reinforce that the point of all this mode 2/small batch/DevOps/PDCA/cloud native/OODA nonsense is to keep iterating to get to the right code.

Part of the continual consternation around bi-modal IT – sad/awesome mode – is misalignment around that “viability” and scoping on “enterprise” projects. This is just one seam-line along which the discussion splits unhelpfully.


The awesome mode people are like:

You should divide the work into small chunks that you release to production as soon as possible – DevOps, Agile, MVP, CI/CD – POW! You have no idea what or how you should implement these features, so you need to do it iteratively.

And the sad mode folks are like:

Yes, but we have to implement all this stuff all at once and can’t do it in small slices. Plus, mainframes and ITIL.

Despite often coming off as a sad mode apologist, I don’t even know what the sad mode people are thinking. There’s this process-hugger syndrome that, well, on both sides really, creates strawpeople. The goal of both methods is putting out software that makes users more productive, including having it actually work, and not overpaying for the whole thing.

The Enemy is finding any activity that doesn’t support those goals and eliminating it as much as possible. In this, there was some good scrabbling from the happy mode people laughing at ITSM think early on, but at this point, the sad people have gotten the message, have been reminded of their original goal, and are now trying to adapt. In fact, I think there’s a lot the “sad mode” people could bring to the table.

To play some lexical hopscotch, I don’t think there is a “mode 1.” I think there’s just people doing a less than awesome job and hiding behind a process-curtain. Sure, it may not be their choice and, thus, not their fault. “Shitty jobs are being done,” if you prefer the blameless veil of passive voice.

Fix your shit

When I hear objections to fixing this situation, I try to be nice and helpful. After all, I’m usually there as part of an elaborate process to get money from these folks in exchange for helping them. When they go all Eeyore on me, I have to reframe the room’s thinking a little bit without getting too tough love-y.

“When I put these lithium batteries in this gas car, it doesn’t seem to work. So electric cars are stupid, right?”

You want to walk people toward asking “how do we plan out the transition from The Old Way That Worked At Some Point to The New Way That Sucks Less?” They might object with a sort of “we don’t need to change” or the even more snaggly “change is too hard” counter-point.

I’m not sure there are systems that can just be frozen in place and resist the need to change. One day, in the future, any system (even the IRS’!) will likely need to change and if you don’t already have it setup to change easily (awesome mode), you’re going to be in a world of hurt.

The fact that we discuss how hard it is to apply awesome mode to legacy IT is evidence that that moment will come sooner than you think.

(Insert, you know, “where’s my mobile app, Nowakowski?” anecdote of flat-footedness here.)

ITIL end in tears(tm)

The royal books of process, ITIL, are another frequent strawperson that frothy mouthed agents of change like to light up. Few things are more frustrating than a library of books that cost £100 each. There’s a whole lot in there, and the argument that the vendors screw it all up is certainly appetizing. Something like ITIL, though, even poorly implemented falls under the “at least it’s an ethos” category.

I’m no IT Skeptic or Charles T. Betz, but I did work at BMC once. As with “bi-modal,” I really don’t want to go re-read my ITIL books (I only have the v2 version; can someone spare a few £100’s to read v3/4?), but I’m pretty sure you could “do DevOps” in an ITIL context. You’d just have to take out the time-consuming implementation of it (service desks, silo’d orgs, etc.).

Most of ITIL could probably be done with metaphoric (or literal!) post-it notes, retrospectives, and the automated audit-log stuff that you’d see in DevOps. For certain, it might be a bunch of process gold-plating, but I’m pretty sure there are no unmovable peas under all those layers of books that would upset any slumbering DevOps princes and princesses too badly.

Indeed, my recollection of ITIL is that it merely specifies that people should talk with each other and avoid doing dumb shit, while trying to improve and make sure they know the purpose/goals of any “service” that’s deployed. They just made a lot of flow charts and check lists to go with it. (And, yeah: vendors! #AmIrightohwaitglasshouse.)

Instead of increasing the volume, help spray away the shit

That gets us back to the people. The meatware is what’s rotting. Most people know they’re sad, and in their objections to happiness, you can find the handholds to start helping:

Yes, awesome mode people, that sounds wonderful, just wonderful. But, I have 5,000 applications here at REALLYSADMODECOGLOBAL, Inc. – I have resources to fix 50 of them this year. YOUR MOVE, CREEP!

Which is to say, awesome mode is awesome: now how do we get started in applying it at large organizations that are several fathoms under the seas of sad?

The answer can’t be “all the applications,” because then we’ll just end up with 5,000 different awesome modes (OK, maybe more like 503?) – like, do we all use Jenkins, or CircleCI, or Travis? PCF, Docker, BlueMix, OpenShift, AWS, Heroku, that thing Bob in IT wrote in his spare time, etc.

Thus far, I haven’t seen a lot of commentary on planning out and staging the application of mode 2. Gartner, of course, has advice here. But it’d be great to see more from the awesome mode folks. There’s got to be something more helpful than just “AWESOME ALL THE THINGS!”

Thanks to Bridget for helping draw all this blood out while I was talking with her about the bi-modal piece she contributed to.

These aren’t the ROIs you’re looking for, or, “ROI: ¯\_(ツ)_/¯”

I have a larger piece on common objections to “cloud native” that I’ve encountered over the last year. Put more positively, “how to get your digital transformation started with agile, DevOps, and cloud native” or some such platitudinal title like that. Here’s a draft of the dread-ROI section.

The most annoying buzzkill for changing how IT operates (doing agile, DevOps, taking “the cloud native journey,” or whatever you think is the opposite of “waterfall”) is the ROI counter-measure. ROI is a tricky hurdle to jump because it’s:

  1. Highly situational and near impossible to properly prove at the right level — do you want to prove ROI just within the scope of the IT department, or at the entire business-level? What’s the ROI of missing out on transitioning Blockbuster to Netflix? What’s the ROI of a mobile app for a taxi company when Uber comes along? What’s the ROI for investing in a new product that may or may not work within three years, but will save the company’s bacon in five years?
  2. Compounded by the fact that the “value” of good software practices is impossible to predict. Drawing the causal line between “pair programming” and “we increased market-share by 3% in Canada” is hard. You can back-think a bunch of things like “we reduced defects and sped up code review time by pairing,” but does that mean you made more money, or did you make more money because the price of oil got halved?

In my experience, when people are trying to curb you on ROI, what they’re asking is “how will I know the time and money I’m going to spend on this will pay off and, thus, I won’t lose time and money? (I don’t want to look like a fool, you see, at annual review time)”

What they’re asking is “how do I know this will work better than what I’m currently doing or alternatives.” It also usually means “hey vendor, prove to me that I should pay you.”

As I rambled through last year, I am no ROI expert. However, I’ve found two approaches that seem to be more something than nothing: (1.) creating a business case and trusting that agile methods will let you succeed, and, (2.) pure cost savings from the efficiencies of agile and “cloud native.”

A Business Case

A business case can tell you if your approach is too expensive, but not if it will pay for itself, because that depends on the business being successful.

Here, you come up with a new business idea, a product, service, or tweak to an existing one of those. “We should set up little kiosks around town where people can rent DVDs for $1 a day. People like renting DVDs. We should have a mobile app where you can reserve them because, you know, people like using mobile. We should use agile to do this mobile app and we’re going to need to run it somewhere, like ‘the cloud.’ So, hey, IT-nerds, what’s the ROI on doing agile and paying for a cloud platform on this?”

In this case, you (said “IT-nerds”) have some externally imposed revenue and profit objectives that you need to fit into. You also have some time constraints (that you’ll use to push back on bloated requirements and scope creep when they occur, hopefully). Once you have these numbers, you can start seeing if “agile” fits into it and if the cost of technology will fit your budget.

One common mis-step here is to think of “cost” as only the licensing or service fees for “going agile.” The time it takes to get the technology up and running and the chance that it will work in time are other “costs” to account for (and this is where ROI for tech stuff gets nasty: how do you put those concerns into Excel?).

To cut to the chase, you have to trust that “agile” works and that it will result in the DVD rental mobile app you need under the time constraints. There’s no spreadsheet-friendly thing here that isn’t artfully dressed-up qualitative thinking in quantitative costumes. At best you can point to things like the DevOps reports to show that it’s worked for other people. And for the vendor expenses, in addition to trusting that they work, you have to make sure the expenses fit within your budgets. If you’re building a $10m business, and the software and licensing fees amount to $11m, well, that dog won’t hunt. There are some simple, yet helpful numbers to run here, like the TCO for on-premises vs. public cloud fees.
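To sketch that last point, here’s a back-of-the-envelope TCO comparison in Python. Every number below is invented for illustration (plug in your own server costs, refresh cycles, and instance fees):

```python
def tco_on_prem(years, servers, server_cost=8_000, annual_ops_per_server=2_500,
                refresh_years=4):
    """Rough on-premises TCO: hardware (with periodic refreshes) plus ops costs.
    All defaults are hypothetical placeholder figures."""
    refreshes = -(-years // refresh_years)  # ceiling division: buying cycles needed
    hardware = servers * server_cost * refreshes
    ops = servers * annual_ops_per_server * years
    return hardware + ops

def tco_public_cloud(years, servers, monthly_instance_fee=350):
    """Rough public-cloud TCO: pay-as-you-go instance fees only (again, hypothetical)."""
    return servers * monthly_instance_fee * 12 * years

years, servers = 5, 50
print(f"on-prem TCO over {years} years: ${tco_on_prem(years, servers):,}")
print(f"cloud TCO over {years} years:   ${tco_public_cloud(years, servers):,}")
```

The point isn’t the totals; it’s that a comparison like this is easy to put in front of finance, while the squishier “costs” (time to get running, risk of it not working) still live outside the spreadsheet.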

Of course, a major problem with ROI thinking is that it’s equally impossible to get a handle on competing ways to solve the problem, esp. the “change nothing” alternative. What’s the ROI of how IT currently operates? It’d be good to know that so you can compare it to the proposed new way.

If you’re lucky enough to know a realistic, helpful budget like this, your ROI task will be pretty easy. Then it’s just down to horse trading with your various enterprise sales reps. Y’all have fun with that.


Focus on removing costs, not making money.

If you’re not up for the quagmire of business case driven ROI, you can also discuss ROI in terms of “savings” the new approach affords. For things like virtualizing, this style of ROI is simple: we can run 10 servers on one server now, cutting our costs down by 70–80% after the VMware licensing fees.
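The virtualization math is simple enough to sketch. All figures below are hypothetical (your consolidation ratio, server costs, and hypervisor licensing fees will differ):

```python
def consolidation_savings(physical_servers, annual_cost_per_server=10_000,
                          consolidation_ratio=10, host_annual_cost=20_000,
                          license_cost_per_host=3_000):
    """Annual savings from consolidating physical servers onto virtualized hosts,
    net of (beefier) host costs and hypervisor licensing. Figures are illustrative."""
    hosts = -(-physical_servers // consolidation_ratio)  # ceiling division
    before = physical_servers * annual_cost_per_server
    after = hosts * (host_annual_cost + license_cost_per_host)
    return before, after, before - after

before, after, saved = consolidation_savings(100)
print(f"before: ${before:,}  after: ${after:,}  savings: {saved / before:.0%}")
```

With these made-up inputs the savings land in that 70–80% range, which is exactly why component-swap ROI stories like virtualization are so easy to sell compared to process change.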

Doing “agile,” however, isn’t like dropping in a new, faster and cheaper component into your engine. Many people I encounter in conference rooms think about software development like those scenes from 80s submarine movies. Inevitably, in a submarine movie, something breaks and the officer team has to swipe all the tea cups off the officer’s mess table and unfurl a giant schematic. Looking over the dark blue curls of a thick Eastern European cigarette, the head engineer gestures with his hand, then slams a grimy finger onto the schematics and says “vee must replace the manifold reducer in the reactor.”

Solving your digital transformation problems is not like swapping “agile” into the reactor. It’s not a component based improvement like virtualization was. Instead, you’re looking at process change (or “culture,” as the DevOps people like to say), a “thought technology.” I think at best what you can do is try to calculate the before and after savings that the new process will bring. Usually this is trackable in things like time spent, tickets opened, number of staff needed, etc. You’re focusing on removing costs, not making money. As my friend Ed put it when we discussed how to talk about DevOps with the finance department:

In other words, if I’m going to build a continuous integration platform, I would imagine you could build out a good scaffolding for that and call it three or four months. In the process of doing that, I should be requiring less help desk tickets get created so my overtime for my support staff should be going down. If I’m virtualizing the servers, I’ll be using less server space and hard drive space, and therefore that should compress down. I should be able to point to cost being stripped out on the back end and say this is maybe not 100% directly related to this process, but it’s at least correlated with it.

In this instance, it’s difficult to prove that you’ll achieve good ROI ahead of time, but you can at least try to predict changes informed by the savings other people have had. And, once again, you’re left to making a leap of faith that qualitative anecdotes from other people will apply to you.

For example, part of Pivotal’s marketing focuses on showing people the worth of buying a cloud platform to support an agile approach to software delivery (we call that “cloud native”). In that conversation, I cite figures like this:

  • Developers [at Allstate] used to spend only 20% of their time coding and now it’s closer to 90%
  • A federal government agency wanted to save money on call-centers by converting the workflow to a web app. They’d scheduled to complete the project in 9 months, but after converting to agile delivered it in 6 weeks.
  • When doing agile, because testing is pushed down to the team level and automated, you can expect to reduce your traditional QA spend. In fact, many shops on the cloud native journey have largely eliminated their QA department as a standalone entity.
  • ING’s savings from transforming to a more cloud-y IT setup: “Investment of €200m to further simplify, standardize and automate IT; Decommissioning 40% of application landscape; Moving 80% of applications to zero-touch private cloud.” Resulting in savings of €270m starting in 2018.
  • From Orange: “Who isn’t happy to continue working when projects are delivered on average six times faster than with a waterfall approach?”
  • “[R]espondents from a recent government study who have already used PaaS say they save 47% of their time, or 1 year and 8 months off a 3.5 year development cycle. For those who have not deployed PaaS, respondents believe it can shave 31% off development time frames and save 25% of their annual IT budget, a federal savings of $20.5 billion.”
  • 14 months down to 6 months, 16 staff down to 8 staff: “[w]hen planning the first product developed on Pivotal Cloud Foundry, CoreLogic allocated a team of 12 engineers with four quality assurance software engineers and a management team. The goal was to deliver the product in 14 months. Instead, the project ultimately required only a product manager, one user experience designer and six engineers who delivered the desired product in just six months.”
  • “We did an analysis of hundreds of projects over a multiyear period. The ones that delivered in less than a quarter succeeded about 80% of the time, while the ones that lasted more than a year failed at about the same rate. We’re simply not very good at large efforts.”
  • From a 1999 study: “software projects that use iterative development deliver working software 38% sooner, complete their projects twice as fast, and satisfy over twice as many software requirements.”
  • After switching over to “the new way,” one large retailer has already seen an 80% reduction in cycle time and scope, cutting cycle time from 123 days to 23 days.
  • One large insurance company can now manage 1,500 apps with just two operators. There were many, many more before that. Another large bank could manage 145 apps with just 2 operators, and so on.

In most of these cases, once you switch over to the new way, you end up with extra capacity because you can now “do IT” more efficiently. Savings, then, come from what you decide to do with that excess capacity: (a.) doing more with the new capacity, like adding more functionality to your existing businesses, creating new businesses, or entering new markets, or, (b.) if you don’t want to “grow,” you get rid of the expense of that excess capacity (i.e., lay off the excess staff or otherwise get them out of the Excel sheet for your business case).

But, to be clear, you’re back into the realm of imagining and predicting what the pay-off will be (the “business case” driven ROI from above) or simply stripping out costs. It’s a top-line vs. bottom-line discussion. And, in each case, you have to take on faith the claims about efficiencies, plus trust that you can achieve those same savings at your organization.

With these kinds of numbers and ratios, the hope is, you can whip out a spreadsheet and make some sort of chart that justifies doing things the new way. Bonus points if you use Monte Carlo-inspired ranges to illustrate the breadth of possibilities instead of stone-cold line-graph certainty.
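A minimal sketch of that Monte Carlo idea: replace each point estimate with a range (triangular distributions are an easy starting shape) and report percentiles instead of one number. Every range below is invented purely for illustration:

```python
import random

def simulate_savings(runs=10_000):
    """Simulate annual net savings with uncertain inputs instead of point
    estimates. All (low, high, mode) ranges are hypothetical placeholders."""
    results = []
    for _ in range(runs):
        hours_saved = random.triangular(500, 4_000, 1_500)      # staff hours/year
        hourly_cost = random.triangular(40, 120, 75)            # loaded $/hour
        platform_fees = random.triangular(100_000, 400_000, 200_000)  # $/year
        results.append(hours_saved * hourly_cost - platform_fees)
    results.sort()
    # Report a 10th/50th/90th percentile range, not a single line-graph number.
    return results[runs // 10], results[runs // 2], results[-runs // 10]

p10, p50, p90 = simulate_savings()
print(f"net savings: p10=${p10:,.0f}  p50=${p50:,.0f}  p90=${p90:,.0f}")
```

Showing the p10 (things went badly) next to the p90 (things went great) is a more honest chart for the finance folks than a single confident line.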

Everything is up when there’s no bottom

As an added note of snark: all of these situations assume you know the current finances for the status quo way of operating. Surely, with all that ITIL/ITSM driven, mode 1 thinking you have a strong handle on your existing ROI, right? (Pause for laughs.)

More seriously, the question of ROI for thought technologies is extremely tricky. In that conversation on this topic that I had with Ed last year, the most important piece of advice was simple: talk with the finance people more and explain to them what’s going on.

That’s the most effective (and least satisfying!) advice you get about any of this “doing things the new way” change management prattle: whether it’s auditors, DBAs, finance, PMO people, or whoever is throwing chaff in your direction: just go and talk with them. Understand what it is they need, why they’re doing their job, and bring them onto the team instead of relegating them to the role of The Annoying Others.

Check out another take on this over in my September 2016 column at The Register.

Questioning DRY


Recently, I’ve been in conversations where people throw some doubt on DRY. In the cloud native, microservices mode of operating, where independent teams are chugging along mostly decoupled from other teams, duplicating code and functionality tends to come more naturally, even necessarily. And the benefits of DRY (reuse, and reducing the bugs/inconsistency that come from multiple implementations of the same thing), theoretically, are no longer worth more than the effort put into DRYing off.

That’s the theory a handful of people are floating, at least. I have no idea if it’s true. DRY is such an unquestionable tenet of all programming think that it’s worth tracking its validity as new modes of application development and deployment are hammered out. Catching when old taboos flip to new truths is always handy.

Please teach my kid Spanish, or, What have the Romans ever done for us?

I got a survey from my son’s school district about foreign language preferences. I was predictably shocked that Spanish wasn’t listed in the rankings we were asked to do:

To be post-PC, I suppose Spanish isn’t a foreign language in the US, esp. in Texas. However, I wanted to drive home my point, so I left this comment.

I really, really would like my son (and daughter when she’s old enough) to be taught Spanish in school. I could go look up the stats, but given our geography (our hemisphere, really), Spanish and English seem like the most useful, functional languages.

I only know terrible gringo-Spanish and I wish I knew it much better. When I went to school, we were deathly afraid that the Japanese were going to take over, so I took Japanese in junior high, then French in high school. I finally wised up and took Spanish in college and now speak my crappy Spanish and barely understand it when others speak it. Spanish is such a valuable language not only for everyday life, but also for understanding, empathizing, and therefore beneficially living with all our Spanish-speaking fellow citizens. I’m sure Chinese will be handy, which is why I ranked it as first in the options given, but if I could rank Spanish as my preferred #1–10, then I’d list Chinese as #11, followed by the ordering I had.

Latin seems like a waste, and I’m incredibly jingoistic about Western culture. If a dead language had to be taught, I’d rather Ancient Greek was taught so that kids could read Greek texts in their original state: I don’t think the Romans did that much that’s beyond remixes of Greek things (Marcus Aurelius and Lucretius aside) — I mean, can you imagine reading Plato, Aristotle, Heraclitus, and Homer in the original Greek? It’d be amazing!

Apologies for the “open letter,” please read it in the appropriate voice.

7 BigCo Anti-patterns — white collars doing it wrong

Awesome corporate clipart from geralt.

To wrap up this little run of Thriving in BigCo’s posts, here’s a quick listing of seven, often sad and unhelpful bad practices I’ve noticed people and large organizations doing. Try to avoid them.

  1. Sad bag of slides — a person uses the same basic slides in their presentations over and over again, trying to argue for the same point each time they get an audience. They may put together new pitches, slightly masked, but you end up seeing the same old stuff. At some point, you can time how long it will take this person to suggest their idea, pull out their sad slides, and go for it.
  2. No free coffee — the company doesn’t even understand the basics of the culture they want, e.g., software developers need free coffee and will be more prone to leave. Cf. “penny-wise, pound foolish.”
  3. Strategy by gratuitous differentiation — thinking that the way to compete is to do something your competitors don’t do…without understanding why they don’t do it. For example, back in the “what do we do about cloud?!” days (~2010–2014 for the first wave) many large IT vendors would want to compete with Amazon’s cloud services by being “more enterprise.” Instead of just jumping in the first blue-cloud-ocean you find, you need to carefully understand why Amazon may not do these “enterprise” things, e.g., they cost a lot more and kill margin.
  4. “What, you don’t like money?” — despite a business throwing off a lot of cash, you don’t want to acquire it. Like worrying about buying an otherwise successful software company because they have a large mainframe business…a business that generates good revenue at large margins with a captive market, so you should keep it and run it if you like money.
  5. 40/4 = 10, or, “4-up” — to reduce the number of slides in a presentation, you put the content of four slides on one. This “reduces” your 40 slide presentation to just 10 slides!
  6. Pay people to ignore them — BigCos love hiring new employees, paying them well, and then rarely listening to them. Instead, you hire outsiders and consultants who say similar things, but are listened to. In fact, the first task of any good management consultant team is to go interview all those bright, but ignored, employees you have and ask them what they’d do. The lesson is to track how many ideas come internally vs. externally and, rather than just blame your people for low internal idea generation, ask yourself if you’re just not listening.
  7. Death by Sync — the price of communication is high and you have to be judicious about how far in debt you go. Instead of doing this, most companies spend lots of time “syncing” with other groups and people in the company rather than just doing things. Part of what upper management needs to do is establish a culture and processes that prevent death-by-sync. Also known as “hanging yourself with pre-wire.”

And, if you missed them, here are the longer ones:

“Nope, that’s not a problem.”

One of the things that separates a more senior person from a more junior person is the statement, “nope, that’s not a problem.” At the very least, the statement may be more like, “that is a problem, but we’re not going to solve it now. Ship it!” I’ve been through this several times, on both sides of the seniority curve, and now respect people who can wisely decide when not to care about a problem and go on with the work.

For example, let’s say you’ve done a fair amount of work on the deck for The Big Meeting, and at the 11th hour you found that something is slightly wrong: you’ve mis-categorized one of the 30 sub-categories in your market sizing. The worker bees are all upset that there’s an error and are having a call to see how to redo “everything.” A good manager can come into a situation like this and say, “yeah, it’d be too much chaos to change it now. We’ll do it later. Just put a little foot-note down there if you think it’s that bad. Ship it!” The worker bees are somewhat flummoxed, or maybe relieved: but that’s the power of management, to decide to go ahead with it instead of having to make it perfect.

There is some amount of risk to allowing such “bugs” to go through the process, but sometimes — oftentimes! — the risk is minimal and worth taking.

Asking questions often leads to more work, for you

From geralt.

Most of what we do as white collar workers is help our organization come to a decision. What new geographies to sell our enterprise software and toothpaste in, what pricing to make our electric razors and cranes, which people to fire and which to promote, or how much budget is needed according to the new corporate strategy. Even in the most cynical corporate environment, asking questions — and getting answers! — is the best, primary way to set the stage for making a decision.

You have to be careful, however, of how many questions you ask, and on what topic. If you ask too many questions, you may find that you’ll just create more work for yourself. Before asking just any old question, ask yourself if you’re willing to do the work needed to answer it…because as the asker, you’ll often be tasked with doing that work. We see this in life all the time: you ask someone “want to go to lunch?” and next thing you know, you’re researching all the restaurants within five miles that have gluten free, vegan, and steak options.

I’ve seen countless “staffers” fall prey to this in meetings with managers who’re putting together plans and strategies. The meeting has spent about 30 minutes coming up with a pretty good plan, and then an eager staffer pipes up and suggests 1–3 other options. The manager is intrigued! Quickly, that staffer is asked to research these other options, but, you know, the Big Meeting is on Monday, so can you send me a memo by Sunday morning?

In some situations, this is fine and expected. But in others, conniving management will just use up as much of your energy as possible: it’s always better to have done more research, so why not let that eager staffer blow up their Saturday to have more back-up slides? Co-workers can also let you self-assign homework into burnout if they find you annoying: you’ll notice that when you, the eager staffer, pipe up, they go suddenly quiet and add no input.

As always, you have to figure out your corporate culture. But, just make sure that before you offer up alternatives and otherwise start asking The Big Questions, you’re ready to back up those questions by doing the extra homework to answer them yourself.

Avoid fence painting by assigning homework

Early corporate culture pioneers.

One of the more eye-rolling tactics of white collar workers is what I call “fence painting”: an employee somehow gets someone else to do work for them. This can be as simple as coasting off budget, but the more insidious practice is to get others outside your chain of command to do work for you.

Most of us have experienced this: days after The Big Meeting you suddenly think “why am I up at 11am working on this report for Scopentholler? I don’t even work in that division!”

What you, the whitewash-encrusted white collar worker, want to do here is somehow still seem “up for anything” and fully capable, and yet not end up painting Scopentholler’s fences. I suggest assigning “homework” as the first rung of filters. If someone wants your input, or wants you to somehow get involved in a project, come up with some mini-project they need to do first. Have them write a brief for you, set up a meeting to bring you up to speed, do a report about how the regional sales have been going, or otherwise force them to do some “homework.”

Homework filters and prioritizes

Assigning homework does two things:

  1. It gauges their commitment to getting you involved, filtering out lazy-delegators. If all this person is looking to do is get you to do their work, it’s highly unlikely that they’ll do additional work. Mysteriously, the importance of you being involved in this project will disappear.
  2. If they do the homework, you get more information (one of the major currencies of corporate culture) and you can better gauge if it’s actually worth your time to get involved. If the quality of the “homework” is good, you’re interested in the work, and it aligns with your responsibilities, then you should probably consider the original fence painting task.

I’ve seen assigning homework cut out a huge amount of fence painting work, for me and others I’ve observed doing this. Also, once it’s known that you assign homework, people will stop preying on you: “oh, don’t ask Crantouzok to get involved, you’ll just have to make another deck before she lifts a finger!”

Of course, if you’re pure of heart and mind, you can always just be direct and say “that’s low in my priority queue and not really my responsibility.” But, as with all thriving in BigCo tips here, first make sure the BigCo you’re working in is equally pure of heart and mind.

Dead Horse Points

In corporate meetings, oftentimes one person figures out a problem and comes up with a solution. Equally often, multiple people in the meeting like to reiterate the point in their own words, adding 5–10 minutes more to the meeting.

Once the epiphany happens and the decision is made, everyone should just close the issue and move on. No need for people to comment on it more.

For example, in one company I worked for we were discussing a software product name. The project had been called “APM.” But it turns out, it wasn’t an APM product, it was just exposing instrumented metrics in the software. This is, in itself, incredibly valuable, but not full blown APM. Someone initially pointed out, “we shouldn’t call that APM,” and everyone agreed.

Then 3 other people chimed in with their retelling of this point, basically embellishing and rephrasing the point. In most corporate meetings, and I’d argue in the never-ending meeting of Slack channels and email, there’s no need for all that extra talk after a realization and decision is made. Someone needs to pipe up and say “what’s the next issue?” or close out the meeting.

A presentation is just a document that’s been printed in landscape mode

Slides must stand on their own

Much presentation wisdom of late has revolved around the actual event of a speaker talking, giving the presentation. In a corporate setting, the actual delivery of the presentation is not the primary purpose of a presentation. Instead, a presentation is used to facilitate coming to a decision; usually you’re laying out a case for a decision you want the company to support. Once that decision is made, the presentation is often used as the document of record, perhaps being updated to reflect the decision in question better.

As a side-note, if your presentation doesn’t argue for a specific, “actionable” decision, you’re probably doing it wrong. For example, don’t just “put it all on the table” without suggesting what to do about it.

Think of presentations as documents which have been accidentally printed in landscape and create them as such. You will likely not be given the chance to go through your presentation from front to end like you would at a conference. You'll be interrupted, go back and forth, and, most importantly, end up emailing the presentation around to people who will look at it without you presenting.

You should therefore make all slides consumable without you being there. This leads to the use of McKinsey titles (titles that are one-liners explaining the point you’re making) and slides that are much denser than conference slides. The presentation should have a story-line, an opening summary of the points you want to make, and a concluding summary of what the decision should be (next steps, launching a new project, the amount needed for your budget, new markets to enter, “and therefore we should buy company X,” etc.).

This also gives rise to “back-up” slides which are not part of the core story-line but provide additional, appendix-like information for reference both during the presentation meeting and when others look at the presentation on their own. You should also put extensive citations in footnotes with links so that people consuming the presentation can fact check you; bald claims and figures will be defeated easily, nullifying your whole argument to come to your desired decision.

Also remember that people will take your slides and use them in other presentations; this is fine. And, of course, if successful, your presentation will likely be used as the document of record for what was decided and what the new "plan" is. It will be emailed to people who ask what the "plan" is, and it must be able to communicate that accordingly.

Remember: in most corporate settings, a presentation is just a document that has been printed in landscape mode.

See more “surviving in BigCo’s” tips in this round-up post.

Getting Digital Transformation Wrong, Software Development Edition

Just figuring out how to piece it all together.

"Digital transformation," we're all supposed to be doing that, right? Most conversations I get involved in now-a-days amount to "how do we do that?" and "how's it going for other organizations?" When it comes to the custom written software part of whatever "digital transformation" is, here are a few common "anti-patterns" I've been noticing:

1. Products, not projects.

Management has to change from a project-driven mindset to a product-driven one. Instead of annual budgets and plans, they’ll need to move to the weekly/monthly loop of shipping code, observing it, improving it, etc. I suspect this idea of “uncertainty” in long-term plans will seem baffling to most. Of course, the “uncertainty” is always there in long-term planning: you have the uncertainty of ever being successful — just because you have a plan doesn’t mean…well…anything when it comes to success in software.

2. Accept that you have no idea what you’re doing, and build a system that helps you figure it out.

A "waterfall" approach (plan, code, deliver, done) is very fragile when it comes to unexpected events and changes (e.g., executive and market whim). This is why, classically, development teams do not like scope and requirements documents being changed. Changing and adding features "in the middle" of projects is very damaging as it screws up the order of work, budgets, and innumerable other things. It's like deciding, halfway through assembling an Ikea bookshelf, that you want a desk instead.

Companies get into a whiplash vortex where they're constantly late and over budget because they keep changing the plan (bookshelf-cum-desk). They never get anything done because they're using the wrong process. This is where the idea of "small batches," mixed with "code must always be shippable," comes into play to reduce risk. If you do small batches, you can change your mind every week…in a less painful way than in a waterfall mentality. (Pro-tip: it's actually not a good idea to change your idea much; instead, experiment and use this feedback loop to explore new ideas and features. "Pivoting" is fine early on when you're trying to discover your product/market fit, but once you do, you don't want to change your mind too much.)

3. “Good cooking takes time. If you are made to wait, it is to serve you better, and to please you.”

Most big companies — rather, the people managing those companies — like to rush things and spend zero resources on them, where "resources" means time, money, and corporate attention. Software is hard work that must be studied and understood. That's why we pay software people a lot. Management often sets "the game" incorrectly: the first thing they do is rush the work and create really short project timelines for too much stuff. "Creating a sense of urgency" can be more damaging than helpful.

Instead of lighting fires under everyone's collective asses, management needs to start small, painfully small.

Companies also often don't want to spend any incremental budget, and their IT Service Management practices treat computational resources as sparse, static, and expensive (mainframe, Unix, Windows) rather than infinite, fungible, and cheap (cloud, Linux, etc.): you have to wait months to get a server. Once you layer in audit, compliance, and security (other resources that are treated as sparse and that have endless downside/risk if something goes wrong [you get hacked!] and little upside to speeding them up), it delays things even further.

In short, management has framed the whole process of “doing software” in a way that’s slow and expensive. Thus, by (un)intention, doing software is slow and expensive.

(The next item is a consequence of too much resource control: the unwillingness to train and spend time on "the factory.")

4. “You’re doing it wrong.”

Many of the productive practices in software — like using version control, doing CI and CD, writing tests before code, A/B testing, strict automation, etc. — seem like a lot of unnecessary work until you've experienced the benefits, mostly in the medium and long term. It's always easier to just SCP code onto a server, or demo your application on someone's laptop, instead of building a proper CI/CD pipeline and doing automation modeling on a real production environment. I'm still shocked at this, but most large organizations follow few, if any, good software development practices:

From "Town Hall: Agile in the Enterprise," Mike West, Nathan Wilson, Thomas Murphy, Dec 2015, Gartner AADI US conference.

Fixing this is tedious, “hand-to-hand” battle: OK, so, start writing tests first… fully automate your builds and get a proper environment that you can deploy at will to, etc. That’s likely a good 3–6 months of calendar time right there (for training, screwing up, then re-training, and also building all your own platforms if you don’t just use off the shelf workflows) before you can start really getting into the flow of shipping code.
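To make the "start writing tests first" step concrete, here's a minimal, hypothetical sketch of the rhythm; the function name and behavior are invented purely for illustration, not taken from any real codebase:

```python
# Test-first in miniature: the test exists before the code it exercises.

def test_normalize_username():
    # Step 1: written first; it fails until normalize_username exists.
    assert normalize_username("  Alice ") == "alice"

def normalize_username(raw):
    # Step 2: the smallest implementation that makes the test pass.
    return raw.strip().lower()

test_normalize_username()  # green; now refactor, then repeat the loop
```

The hard part isn't the mechanics shown here, it's building the habit across a whole team, which is what eats those 3–6 months.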

5. Innovation takes a lot more time than you think.

Corporate managers and product owners don't understand how much work is involved in innovation. In my experience, "management" thinks that their dictates and "requirements" are clear and straightforward. They think they're dealing with the "known knowns" quadrant of Rumsfeldian knowledge, but at best they're dealing with "known unknowns," and are likely dealing with "unknown unknowns"…about their own wants and desires!

For example, as a requester of software, you often have no idea what you actually want until you see it several times. This means you need to set up a process to continually check in on progress (e.g., demos every 1–2 weeks with the option to change direction) and, more importantly, be involved: management must "spend" their attention on the software development process. And at the same time, you have to resist the urge to manage by table flipping and instead be happy "learning by failing."

6. Technical debt and legacy are rarely addressed, let alone quantified.

Most any large organization will have a pile of legacy applications and processes. Most of it will be riddled with technical debt, if only in the form of parts of the system that (a.) no one really understands and (b.) have no automated tests. This means — building on Michael Feathers's legacy code dilemma — that it's too high risk to touch/change your legacy code, let alone do it quickly. Beyond just modifying those systems, this can include integrating with them as well, usually highly customized ERP systems that any business needs to work with to actually do work. While it's possible to "refactor" code, I'm becoming more and more of the opinion that at best you quarantine it (or do some "strangler pattern" mumbo jumbo if you're really good), or just put it all on a piece of ice, light it on fire, and shove it off into the ocean.

7. Fix the Meatware.

As I'm ever going on about, many of these are meatware problems: the technology is often just fine; it's the people that need fixing. The confounding thing with this issue — how do you transform large organizations to [A|a]gile? — is that all actors in this system (except, sometimes, "top management") know exactly what to do and what's wrong. There's some sort of complex game theory going on that ruins the easiness of fixing the problem, and everyone ends up becoming expert at telling you why nothing can improve instead of figuring out how to do better. I think that's something of what Andrew's always on about with his prisoner's dilemma and Nash equilibrium walking into a stonecutter bar.

For a more recent, detailed study on this topic, check out my book Monolithic Transformation.

Eventually, to do a developer strategy your execs have to take a leap of faith

A kingmaker in the making.

I've been talking with an old colleague about pitching a developer-based strategy recently. They're trying to convince their management chain to pay attention to developers to move their infrastructure sales. There's a huge amount of "proof" and arguments you can make to do this, but my experience in these kinds of projects has taught me that, eventually, the executive in charge just has to take a leap of faith. There's no perfect slide that proves developers matter. As with all great strategies, there's a stack of work behind it, but the final call has to be pure judgement, a leap of faith.

“Why are they using Amazon instead of our multi-billion dollar suite?”

You know the story. Many of the folks in the IT vendor world have had a great, multi-decade run selling infrastructure (hardware and software). All of a sudden (well, starting about ten years ago), this cloud stuff comes along, and then things look weird. Why aren't they just using our products? To cap it off, you have Apple in mobile just screwing the crap out of the analogous incumbents there.

But, in cloud, if you're not one of the leaders, you're obsessed with appealing to developers and operators. You know you can have a "go up the elevator" sale (sell to executives who mandate the use of technology), but you also see "down the elevator" people helping or hindering here. People complain about that SOAP interface, for some reason they like Docker before it's even GA'ed, and they keep using these free tools instead of buying yours.

It’s not always the case that appealing to the “coal-facers” (developers and operators) is helpful, but chances are high that if you’re in the infrastructure part of the IT vendor world, you should think about it.

So, you have The Big Meeting. You lay out some charts, probably reference RedMonk here and there. And then the executive(s) still isn’t convinced. “Meh,” as one systems management vendor exec said to me most recently, “everyone knows developers don’t pay for anything.” And then, that’s the end.

There is no smoking gun

If you can’t use Microsoft, IBM, Apple, and open source itself (developers like it not just because it’s free, but because they actually like the tools!) as historic proof, you’re sort of lost. Perhaps someone has worked out a good, management consultant strategy-toned “lessons learned” from those companies, but I’ve never seen it. And believe me, I’ve spent months looking when I was at Dell working on strategy. Stephen O’Grady’s The New Kingmakers is great and has all the material, but it’s not in that much needed management consulting tone/style. (I’m ashamed to admit I haven’t read his most recent book yet, maybe there’s some in there.)

Of course, if Microsoft and Apple don’t work out as examples of “leaders,” don’t even think of deploying all the whacky consumer-space folks out like Twitter and Facebook, or something as detailed as Hudson/Jenkins or Oracle DB/MySQL/MariaDB.

I think SolarWinds might be an interesting example, and if Dell can figure out applying that model to their Software Group, it’d make a good case study. Both of these are not “developer” stories, but “operator” ones; same structural strategy.

Eventually, they just have to “get it”

All of this has led me to believe that, eventually, the executives just have to take a leap of faith and "get it." There's only so much work you can do — slides and meetings — before you're wasting your time if that epiphany doesn't happen.

The transformation is complete.

If this is your bag, come check out a panel on the developer relations at the OpenStack Summit on April 28th, in Austin — I’ll be moderating it!

So you want to become a software company? 7 tips to not screw it up.

Hey, I’ve not only seen this movie before, I did some script treatments:

Chief Executive Officer John Chambers is aggressively pursuing software takeovers as he seeks to turn a company once known for Internet plumbing products such as routers into the world’s No. 1 information-technology company.

Cisco is primarily targeting developers of security, data-analysis and collaboration tools, as well as cloud-related technology, Chambers said in an interview last month.

Good for them. Cisco has consistently done a good job of filling out its portfolio and is far from the one-trick pony people think it is (last I checked, they do well with converged infrastructure, or integrated systems, or whatever we're supposed to call it now). They actually already have a little-known software portfolio (clearly little known, judging from its lack of mention in this piece).

In case anyone’s interested, here’s some tips:

1.) Don’t buy already successful companies, they’ll soon be old tired companies

Software follows a strange loop. Unlike hardware where (more or less) we keep making the same products better, in software we like to re-write the same old things every five years or so, throwing out any “winners” from the previous regime. Examples here are APM, middleware, analytics, CRM, web browsers…well…every category except maybe Microsoft Office (even that is going bonkers in the email and calendaring space, and you can see Microsoft “re-writing” there as well [at last, thankfully]). You want to buy, likely, mid-stage startups that have proven that their product works and is needed in the market. They’ve found the new job to be done (or the old one and are re-writing the code for it!) and have a solid code-base, go-to-market, and essentially just need access to your massive resources (money, people, access to customers, and time) to grow revenue. Buy new things (which implies you can spot old vs. new things).

2.) Get ready to pay a huge multiple

When you identify a "new thing" you're going to pay a huge multiple of 5x, 10x, 20x, even more. You're going to think that's absurd and that you can find a better deal (TIBCO, Magic, Actuate, etc.). Trust me, in software there are no "good deals" (except once-in-a-lifetime buys like the firesale for Remedy). You don't walk into Tiffany's thinking you're going to get a good deal; you think you're going to make your spouse happy.

3.) “Drag” and “Synergies” are Christmas ponies

That is, they're not gonna happen at any scale that helps make the business case, so move on. The effort it takes to "integrate" products (and, more importantly, strategy and go-to-market) to enable these dreams of a "portfolio" is massive and often doesn't pan out. Are the products written in exactly the same programming language, using exactly the same frameworks and runtimes? Unless you're Microsoft buying a .Net-based company, the answer is usually "hell no!" Any business "synergies" are equally troublesome: unless they already exist (IBM is good at buying small and mid-sized companies that have proven out synergies by being long-time partners), it's a long shot that you're going to create any. Evaluate software assets on their own, stand-alone, not as pieces fitting into a portfolio. You've been warned.

4.) Educate your sales force. No, really. REALLY!

You’re thinking your sales force is going to help you sell these new products. They “go up the elevator” instead of down so will easily move these new SKUs. Yeah, good luck, buddy. Sales people aren’t that quick to learn (not because they’re dumb, at all, but because that’s not what you pay and train them for). You’ll need to spend a lot of time educating them and also your field engineers. Your sales force will be one of your biggest assets (something the acquired company didn’t have) so baby them and treat them well. Train them.

5.) Start working, now, on creating a software culture, not acquiring one

The business and processes ("culture") of software are very different and particular. Do you have free coffee? Better get it. (And if that seems absurd to you, my point is proven.) Do you get excited about ideas like "fail fast"? Study and understand how software businesses run and what they do to attract and retain talent. We still don't really understand how it all works after all these years, and that's the point: it's weird. There are great people (like my friend Israel Gat) who can help you, and there's good philosophy too: go read all of Joel's early writing as a start, don't let yourself get too distracted by Paul Graham (his is more about software culture for startups, which you are not — Graham-think is about creating large valuations, not extracting large profits), and just keep learning. I still don't know how it works, or I'd be pointing you to the right URL. Just like with the software itself, we completely forget and re-write the canon of software culture about every five years. Good on us. Andrew has a good check-point from a few years ago that's worth watching a few times.

6.) Read and understand Escape Velocity

This is the only book I’ve ever read that describes what it’s like to be an “old” technology company and actually has practical advice on how to survive. Understand how the cash-cow cycle works and, more importantly for software, how to get senior leadership to support a cycle/culture of business renewal, not just customer renewal.

7.) There’s more, of course, but that’s a good start

Finally, I spotted a reference to Stall Points in one of Chambers’ talks the other day which is encouraging. Here’s one of the better charts you can print out and put on your wall to look at while you’re taking a pee-break between meetings:

That chart covers all types of companies. It's hard to renew yourself; it's not going to be easy. Good luck!

The Problem with PaaS Market-sizing

Figuring out the market for PaaS has always been difficult. At the moment, I tend to estimate it at $20–25bn sometime in the future (5–10 years from now?) based on the model of converting the existing middleware and application development market. Sizing this market has been something of an annual bug-bear for me across my time at Dell doing cloud strategy, at 451 Research covering cloud, and now at Pivotal.

A bias against private PaaS

This number is in contrast to the numbers you usually see from analysts, which are in the single-digit billions. Most analysts think of PaaS only as public PaaS, tracking just Heroku, parts of AWS, Azure, and Google, and a bunch of "Other." This is mostly due, I think, to historical reasons: several years ago "private cloud" was seen as goofy and made-up, and I've found that many analysts still view it as such. Thus, their models started off covering just public PaaS and have largely remained so.

I was once a "public cloud bigot" myself, but having worked more closely with large organizations over the past five years, I now see that much of the spending on PaaS is on private PaaS. Indeed, if you look at the history of Pivotal Cloud Foundry, we didn't start making major money until we gave customers what they wanted to buy: a private PaaS platform. The current product/market fit, then, for PaaS in large organizations seems to be private PaaS.

(Of course, I'd suggest a wording change: when you end up running your own PaaS, you actually end up running your own cloud and, thus, end up with a cloud platform. Also, things are getting ever more ambiguous at the infrastructure layer — perhaps "private PaaS" means "owning" the PaaS layer, regardless of who "owns" the IaaS layer.)

How much do you have budgeted?

With this premise — that people want private PaaS — I then look at existing middleware and application development market-sizes. Recently, I’ve collected some figures for that:

  • IDC’s Application Development forecast puts the application development market (which includes ALM tools and platforms) at $24bn in 2015, growing to $30bn in 2019. The commentary notes that the influence of PaaS will drive much growth here.
  • Recently from Ovum: “Ovum forecasts the global spend on middleware software is expected to grow at a compound annual growth rate (CAGR) of 8.8 percent between 2014 and 2019, amounting to $US22.8 billion by end of 2019.”
  • And there’s my old pull from a Goldman Sachs report that pulled from Gartner, where middleware is $24bn in 2015 (that’s from a Dec 2014 forecast).

When dealing with large numbers like this and so much speculation, I prefer ranges. Thus, the PaaS TAM I tend to use now-a-days is something like "it's going after a $20–25bn market, you know, over the next 5 to 10 years." That is, the pot of current money PaaS is looking to convert is somewhere in that range. That's the amount of money organizations are currently willing to spend on this type of thing (middleware and application development), so it's a good estimate of how much they'll spend on a new type of this thing (PaaS) to help solve the same problems.
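As a quick sanity check that those forecasts hang together, you can back out the implied figures in a couple of lines. The variable names are mine; the numbers are from the analyst reports cited above:

```python
# Cross-checking the cited analyst figures behind the $20-25bn PaaS TAM.

# Ovum: middleware reaches $22.8bn by end of 2019 at an 8.8% CAGR from 2014.
# Backing out the implied 2014 base:
middleware_2014 = 22.8 / (1.088 ** 5)
print(f"Implied 2014 middleware market: ${middleware_2014:.1f}bn")  # ~$15bn

# IDC: application development grows from $24bn (2015) to $30bn (2019).
# The implied compound annual growth rate over those four years:
appdev_cagr = (30 / 24) ** (1 / 4) - 1
print(f"Implied appdev CAGR, 2015-2019: {appdev_cagr:.1%}")  # ~5.7%

# Either bucket alone lands in the low-to-mid $20s of billions by 2019,
# hence treating "$20-25bn over 5-10 years" as the pot to convert.
```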

Things get slightly dicey depending on including databases, ALM tools, and the underlying virtualization and infrastructure software: some PaaSes include some, none, or all of these in their products. Databases are a huge market (~$40bn), as is virtualization (~$4.5bn). The other ancillary buckets are pretty small, relatively. I don’t think “PaaS” eats too much database, but probably some “virtualization.”

So, if you accept that PaaS is both public and private PaaS and that it’s going after the middleware and appdev market, it’s a lot more than a few billion dollars.

(Ironic-clipart from my favorite source, geralt.)

Roles and Responsibilities for DevOps and Agile Teams

This post is pretty old and possibly out of date. There’s an updated version of it in my book, Monolithic Transformation.


There are a handful of “standard” roles used in typical Agile and DevOps teams. Any application of Agile and DevOps is highly contextual and crafted to fit your organization and goals. This document goes over the typical roles and their responsibilities and discusses how Pivotal often sees companies staffing teams. It draws on “the Pivotal Way,” our approach to creating software for clients, recommendations for staffing the operations roles for Pivotal Cloud Foundry (our cloud platform), and recent recommendations and studies from agile and DevOps literature and research.

I don’t discuss definitions for “Agile” or “DevOps” except to say: organizations that want to improve the quality (both in lack of bugs and “design” quality as in “the software is useful”) and uptime/resilience of their software look to the product development practices of agile software development and DevOps to achieve such results. I see DevOps as inclusive of “Agile” in this discussion, and so will use the terms interchangeably. For more background, see one of my recent presentations on this topic.

Finally, there is no fully correct and stable answer to the question of what the roles and responsibilities in teams like this are. They fluctuate constantly as new practices are discovered and new technologies remove past operational constraints, or create new ones! This is currently my best understanding of all this, based both on organizations that are using Cloud Foundry and on those using other cloud platforms. As you discover new, better ways, I'd encourage you to reach out to me so we can update this document.

Core Roles for DevOps Teams

There are generally two types of teams we see: “business capabilities teams” who work on the actual software or services (“the application”) in question and “agile operations teams” as seen below:

While not depicted in the diagram above, the amount of staff in each layer drops dramatically as you go "down" the stack. The most people are in the business capability layer working on the actual applications and services; far fewer individuals work on creating and customizing capabilities in Pivotal Cloud Foundry (e.g., a service to interact with a reservation or ERP system that is unique to the organization); and far, far fewer "operators" work at keeping the cloud platform up and running, updated, and handling the hardware and networking issues.

Making the Applications — Business Capability Roles

These teams work on the application or services being delivered. The composition of these teams changes over time as each team “gels,” learning the necessary operations skills to be “DevOps” oriented and master the domain knowledge needed to make good design choices.

These are the core roles in this layer:

  • Developer/Engineer — those who not only “code,” but gain the operations knowledge needed to support the application in production, with the help of…
  • Operations — those who work with developers on production needs and help support the application in production. This role may lessen over time as developers become more operations aware, or it may remain a dedicated role.
  • Product Owner/Product Manager — the “owner” of the application in development that specifies what should be done each iteration, prioritizing the list of “requirements.”
  • Designer — studies how users interact with the software and systematically designs ways to improve that interaction. For user-facing applications, this also includes visual design.

These are roles that are not always needed, and that can sometimes be partially filled by staff who are shared across teams but designated to this one:

  • Tester — staff who help ensure the software does what was intended and functions properly.
  • Architect — in large organizations, the role of architect is often used to ensure that individual teams are aligning with the larger organization's goals and strategy, while also acting as a consultative enabler to help teams be successful and share knowledge.
  • Data scientist — if data analysis is core to the application being developed, getting the right analysis and domain skills will be key.

Developer/Engineer


These are the programmers, or "software developers." Through the practice of pairing, knowledge is quickly spread amongst developers, ensuring that no "empires" are built and addressing the risk of a low "bus factor." Developers are also encouraged to "rotate" through various roles, from front-end to back-end, to get good exposure to all parts of the project. By using a cloud platform, like Pivotal Cloud Foundry, developers are also able to package and deploy code on their own through continuous integration and continuous delivery tools.

Developers are not expected to be experts in operations concerns, but, by relying on the self-service and automation capabilities of cloud platforms, they do not need to "wait" for operations staff to perform configuration management tasks to deploy applications. Over time, with this reliance on a cloud platform that cleanly specifies how best to build applications so that they're easily supported in production, developers gain enough operations knowledge to work without dedicated operations support.

The number of developers on each team is variable, but so far, following the two-pizza-team rule of thumb, we see anywhere from 1 to 3 pairs, that is, 2 to 6 developers, and sometimes more.

Operations


In a cloud native mode, until business capability teams have learned the necessary skills to operate applications on their own, they will need operations support. This support will come in the form of understanding (and co-learning!) how the cloud platform works, and of assistance troubleshooting applications in production. Early on, you should plan on heavy operations involvement to help collaboration with developers and share knowledge, mostly around getting the best from the cloud platform in place. You may need to "assign" operations staff to the team at the beginning, making them so-called designated operations staff instead of dedicated, as explained in Effective DevOps.

Many teams find that the operations role never leaves the team, which is perfectly normal. Indeed, the desired end-state is that the application teams have all the development and operations skills and knowledge needed to be successful.

As two cases to guide your understanding of this role:

  • Etsy has approximately 15 operations engineers to a few hundred other engineers, according to Effective DevOps.
  • As re-told by Diego Lapiduz at 18F, early on, when teams were learning how to use and operate the platform, he and a handful of other operations staff spent much of their time with development teams, getting intimately familiar with each application. Now, because the practice of designated operations staff is less needed, he and his operations peers are less involved and have little knowledge of the applications in use…which is good, and as intended!

As a side note, it's common for operations people to freak out at this point, thinking they're being eliminated. While it's true that margin-berserked management could choose to look at operations staff as "waste," it's more likely that, following Jevons Paradox, operations staff will be needed even more as the number of applications and services multiplies.

Product Owner/Product Manager

This is the role that defines and guides the requirements of the application. It is also one of the roles that varies most in responsibilities across products. At its core, this role is the "owner" of the software under development. In that respect, they help prioritize, plan, and deliver software that meets your requirements. Someone has to be "the final word" on what happens in high-functioning teams like this. The amount of control versus consensus-driven management is the main point of variability in this role, along with the topic areas the product owner must be knowledgeable about.

It’s best to approach the product owner as a “breadth first role”: they have to understand the business, the customer, and the technical capabilities. This broad knowledge helps them make sure they’re making the right prioritization decisions.

In Pivotal Labs engagements, this role is often performed by a Pivotal employee, pairing up with one of your staff to train and transfer knowledge. Whether during a project or through workshops, they help you understand the Pivotal, iterative development approach, as well as mentor and train your internal staff to help them learn agile methods and skills, which will enable them to move on with confidence when the engagement is complete.

Designer


Designer

One of the major lessons of contemporary software is that design matters far more than previously believed. The “small batch” mentality of learning and improving software afforded by cloud platforms like Pivotal Cloud Foundry gives you the ability to design more rapidly and with more data-driven precision than ever before. Hence, the role of the designer is core to cloud native teams.

The designer focuses on identifying the feature set for the application and translating that to a user experience for the development team. Activities may include completing the information architecture, user flows, wireframes, visual design, and high-fidelity mock-ups and style guides. Most importantly, designers have to “get out of the building” and not only see what actual users are doing with the software, but get to know those users and their needs intimately.

Testers (partial/optional)

While the product manager and the overall team are charged with testing their software, some organizations either want or need additional testing. Often this is “exploratory testing,” where a third party (the tester or testers) tries to systematically find the edge cases and other “bugs” the development team didn’t think of.

If you find yourself in that situation, it’s worth questioning whether you actually need separate testers. Much routine “QA” is now automated (and can, thus, be done by the team and automated CI/CD pipelines), but you may want exploratory, manual testing in addition to what the team is already doing, plus verification that the software does as promised and functions under acceptable duress. Even that can be automated in some situations, as Chaos Monkey and Chaos Lemur show.
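
Much of that routine edge-case hunting can indeed be automated. As a minimal sketch, a randomized “fuzz” loop can probe a unit of code for the kinds of crashes an exploratory tester would hunt for; the `parse_amount` function and its contract here are entirely hypothetical, just something small to fuzz:

```python
import random
import string

# Hypothetical unit under test: parse a user-supplied currency amount.
# Contract: return a non-negative float, or raise ValueError on bad input.
def parse_amount(text):
    cleaned = text.strip().replace(",", "").lstrip("$")
    value = float(cleaned)  # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("amount must be non-negative")
    return round(value, 2)

def fuzz_parse_amount(iterations=1000, seed=42):
    """Throw randomized inputs at parse_amount and collect any input that
    breaks the contract (wrong exception type or negative result)."""
    rng = random.Random(seed)
    alphabet = string.digits + string.punctuation + string.ascii_letters + " "
    failures = []
    for _ in range(iterations):
        candidate = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 12)))
        try:
            result = parse_amount(candidate)
            if result < 0:
                failures.append(candidate)  # contract violation
        except ValueError:
            pass  # rejecting bad input is acceptable behavior
        except Exception:
            failures.append(candidate)  # any other crash is a bug to triage
    return failures
```

Wiring a loop like this into the CI/CD pipeline is one way the team absorbs testing work that would otherwise require a separate QA function.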

Architect (partial/optional)

Traditionally, this role is responsible for conducting enterprise analysis, design, planning, and implementation, using a “big picture” approach to ensure the successful development and execution of strategy. Those goals can still exist in many large organizations, but the role of an architect is evolving to be an enabler for more self-sufficient, decoupled teams. Too often this role has become a “Dr. No” in large organizations, so care must be taken to ensure that the architect supports the team, not the other way around.

Architects are typically more senior technical staff who are “domain experts.” Working in a consultative way, they help ensure the long-term quality and flexibility of the software the team creates, share best practices across teams, and otherwise enable the teams to be successful. As such, this may be a fully dedicated role; hopefully, the architect still spends much of their time coding so as not to “go soft” and lose not only the trust of developers but also an intimate enough knowledge of technology to know what’s possible and not possible in contemporary software.

Data Science (partial/optional)

If your application includes a large amount of data analysis, you should consider including a data scientist role on the team. This role can follow the dedicated/designated pattern as discussed with the operations role above.

Data science today is where design might have been a few years ago. It is not yet considered a primary role within a product team, but more and more products are introducing a level of insight not seen before. Google Now surfaces contextual information; SwiftKey offers word predictions based on swipe patterns; Netflix offers recommendations based on what other people are watching; and Uber offers predictive arrival times for its drivers. These features help turn transactional products into smart products.

Other Roles

There are many other roles that can and do exist in IT organizations: database administrators (DBAs), security operations, network operations, or storage operations. In general, as with any “tool,” you should use what you need when you need it. However, as with the architect role above, any such role must reorient itself to enabling the core teams rather than “governing” them. As the DevOps community has discussed at length for nearly ten years, the more you divide up your staffing by function, the further you move from a small, integrated team, and the harder your goal of consistently and regularly building quality software becomes.

Agile Operations Roles

Roles here focus on operating, supporting, and extending the cloud platform in use. These roles typically sit “under” the Agile and DevOps teams, so the discussion here is briefer. Each role is described in terms of the responsibilities typically encountered in Pivotal Cloud Foundry installs. These can vary by organization and deployment (public vs. private cloud, the need for multi-cloud support, types of IaaS used, etc.).

For a brief definition of a cloud platform, see the “The use of a cloud platform” section below.

Application Operator

These are typically the “operations” people described above and serve as a supporting and oversight function to the business capabilities teams, whether designated or dedicated. Typical responsibilities are:

  • Manages the lifecycle and release management processes for apps running in Pivotal Cloud Foundry.
  • Is responsible for the continuous delivery process used to build, deploy, and promote Pivotal Cloud Foundry applications.
  • Ensures apps have automated functional tests that the continuous delivery process uses to determine successful deployment and operation of applications.
  • Ensures monitoring of applications is configured and has rules/alerts for routine and exceptional application conditions.
  • Acts as second-level support for applications, triaging issues and disseminating them to the platform operator, platform developer, or application developer as required.

A highly related, sometimes overlapping, role is that of centralized development tool provider. This role creates, sources, and manages the tools used by developers, from commonly used libraries, to version control and project management tools, to maintaining custom-written frameworks. Companies like Netflix maintain “tools teams” like this, often open sourcing the projects and practices they develop.

Platform Operator

This is the typical “sys admin” role for the cloud platform itself:

  • Manages the IaaS infrastructure that Pivotal Cloud Foundry is deployed to, or coordinates with the team that does.
  • Installs and configures Pivotal Cloud Foundry.
  • Performs capacity, availability, issue, and change management processes for Pivotal Cloud Foundry.
  • Scales Pivotal Cloud Foundry, forecasting, adding, and removing IaaS and physical capacity as required.
  • Upgrades Pivotal Cloud Foundry.
  • Ensures management and monitoring tools are integrated with Pivotal Cloud Foundry and have rules/alerts for routine and exceptional operations conditions.

Platform Engineering

This team and its roles are responsible for extending the capabilities of the cloud platform in use. What this role does varies by organization, but for organizations using Pivotal Cloud Foundry it typically does the following:

  • Makes enhancements to existing buildpacks and builds new buildpacks for the platform.
  • Builds service brokers to manage the lifecycle of external resources and make them available to Pivotal Cloud Foundry apps.
  • Builds Pivotal Cloud Foundry tiles with associated BOSH releases and service brokers to enable managed services in Pivotal Cloud Foundry.
  • Manages the release and promotion process for buildpacks, service brokers, and tiles across the Pivotal Cloud Foundry deployment topology.
  • Integrates Pivotal Cloud Foundry APIs with external tools when required.
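
To make the service broker work concrete: Cloud Foundry service brokers implement the Open Service Broker API, whose first endpoint is a catalog of the services and plans the broker offers. Below is a minimal sketch of that catalog document; the “audit-log” service, its plan, and the IDs are invented for illustration, not a real offering:

```python
import json
import uuid

# Hypothetical broker catalog for an in-house "audit-log" service. A real
# broker serves this document over HTTP at GET /v2/catalog, per the Open
# Service Broker API; IDs must simply be stable and unique, so derived
# UUIDs are used here.
CATALOG = {
    "services": [{
        "id": str(uuid.uuid5(uuid.NAMESPACE_DNS, "audit-log.example.com")),
        "name": "audit-log",
        "description": "Append-only audit log service (illustrative).",
        "bindable": True,  # apps can bind to get credentials
        "plans": [{
            "id": str(uuid.uuid5(uuid.NAMESPACE_DNS, "audit-log-basic.example.com")),
            "name": "basic",
            "description": "Shared instance, 30-day retention.",
        }],
    }]
}

def catalog_response():
    """What a GET /v2/catalog handler would return: (status, headers, body)."""
    return 200, {"Content-Type": "application/json"}, json.dumps(CATALOG)
```

Provisioning, binding, and deprovisioning endpoints would follow the same pattern; the platform engineering team owns this code and its release process.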

Physical Infrastructure Operations

While not commonly covered in this type of discussion, someone has to maintain the hardware and data centers. In a cloud native organization this function is typically so highly abstracted and automated, if not outsourced to a service provider or public cloud altogether, that it does not often play a major role in cloud native operations. However, especially at first, as your organization transitions to this new way of operating, you will need to work with physical infrastructure operations staff, whether in-house or at your outsourcer.

Supporting Best Practices

The use of a cloud platform

Much of the above is predicated on the use of a cloud platform. A cloud platform is a system, such as Pivotal Cloud Foundry, that provides the runtime and production needs to support applications on top of cloud infrastructure (public and private IaaS), often, as in the case of Pivotal Cloud Foundry, with fully integrated, natively supported middleware services (such as databases, queues, and application development frameworks).

Two of the key ways of illustrating the efficiency of using Pivotal Cloud Foundry are (a) the ratio of operators to applications running in the platform and (b) the reduction in lead time. Here are some relevant metrics for Pivotal Cloud Foundry:

  • One large financial institution is currently supporting 145 applications with two operations staff.
  • 18F was able to reduce Authorization to Operate (ATO) from 9+ months to 3 days once auditors understood the automation in their open source Cloud Foundry instance.
  • When planning the first product developed on Pivotal Cloud Foundry, CoreLogic allocated a team of 12 developers with four QA staff and a management team. The goal was to deliver the product in 14 months. Instead, the project ended up requiring only a product manager, one user experience designer and six engineers who delivered the desired product in just six months. As the second, third, and next projects roll on, you can expect delivery time to reduce even more.
  • A large retail user of Pivotal Cloud Foundry was running over 250 applications in non-production and nearly 100 applications in production after only 4 months using the platform.
  • Humana was able to launch an Apple Watch application in just five weeks.
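
As a quick sanity check, the arithmetic behind two of those figures can be worked out directly. Note that the “planned” CoreLogic figure below ignores the unspecified management team, so the real gap is even larger:

```python
# Back-of-the-envelope arithmetic on the figures above.
apps = 145
operators = 2
apps_per_operator = apps / operators  # 72.5 applications per operator

# CoreLogic: planned 12 developers + 4 QA for 14 months (management team
# excluded, since its size wasn't given) vs. the actual delivery team of
# a product manager, a designer, and six engineers for 6 months.
planned_person_months = (12 + 4) * 14   # 224 person-months planned
actual_person_months = (1 + 1 + 6) * 6  # 48 person-months actually spent
effort_ratio = planned_person_months / actual_person_months

print(apps_per_operator)                            # 72.5
print(planned_person_months, actual_person_months)  # 224 48
print(round(effort_ratio, 1))                       # 4.7
```

Roughly a 4.7x reduction in effort, on top of the calendar time dropping from 14 months to 6.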

Without a cloud platform like Pivotal Cloud Foundry in place, organizations will find it difficult, if not impossible, to achieve the infrastructure efficiencies in both development and at runtime needed to operate at full speed and effectiveness.

For reference, the diagram below, based on Pivotal Cloud Foundry, is a “white board” of the functions a cloud platform provides:

For a gentle introduction to this stack, see the introductory discussion video at The New Stack. And, for even more discussion of the nature of cloud platforms, see Brian Gracely’s discussion of “structured cloud platforms” and Casey West’s write-up on the same topic.

Sizing: two pizza teams

While some sizing guidelines have been listed throughout, there are no hard and fast rules. The notion of a “two-pizza team” is a well-accepted heuristic for software team size. As Amazon CEO Jeff Bezos is said to have decreed: if a team can’t be fed with two pizzas, it’s too big. Without making too many assumptions about the size of a “large” pizza or how many slices each person needs to eat, you could estimate this at around six to fifteen people. This may vary, of course, but the point is to keep things as small as possible to minimize communication overhead, context switching, and responsibility evasion due to “that’s not my job,” among other “wastes.”
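
For what it’s worth, that six-to-fifteen range is easy to reproduce under explicit assumptions; the slice counts and appetites below are guesses for illustration, not canon:

```python
# Reproducing the rough "six to fifteen people" range from assumed
# (not canonical) pizza parameters.
PIZZAS = 2

def team_size_range(slices_per_pizza=(8, 12), slices_per_person=(1.5, 3)):
    """Return (min, max) team size fed by two pizzas, given assumed
    slice counts per pizza and per-person appetites."""
    total_min = PIZZAS * slices_per_pizza[0]   # small pizzas
    total_max = PIZZAS * slices_per_pizza[1]   # large pizzas
    hungry, light = max(slices_per_person), min(slices_per_person)
    low = int(total_min // hungry)   # small pizzas, hungry people
    high = int(total_max // light)   # large pizzas, light eaters
    return low, high

print(team_size_range())  # (5, 16) with these assumptions -- roughly 6-15
```

The exact numbers don’t matter; the point is that any reasonable set of assumptions lands you at a single-digit-to-low-teens team.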

The assumption behind teams this small is that many of them will organize to create big, overall results. Architectural approaches like microservices that emphasize loose coupling and defensive use of services help enable that coordination at a technical level, while making sure that teams are aware of and aligned to overall organizational goals helps coordinate the “meatware.” This coordination is difficult, to be sure, but start looking at your organization as a set of autonomous units working together, rather than one giant team. As an example, Pivotal Cloud Foundry engineering is composed of around 300 developers spread over 40 loosely coupled teams.

Dedicated, integrated teams lead to the best outcomes

Agile-think and DevOps teams seek to put every role, and thus every person, needed to deliver the software product from inception to running in production on the same team, with their time fully dedicated to that task. These are often called “integrated teams” or “balanced teams.”

IT has long organized itself in the opposite way, separating roles (and, thus, people) into distinct teams like operators, QA, developers, and DBAs in the hopes of optimizing the usage of these roles. This is the so-called “silo” approach. When you’re operating in the more exploratory, rapid, agile cloud native fashion, this attempt at “local optimization” creates “global optimization” problems due to hand-offs between teams (leading to wasted time and communication errors). A silo approach can also lead to poor interactions across teams, which damages software quality and availability. “That’s not my responsibility” often leads to it being no one’s responsibility.

Most organizations have a “project” mindset when it comes to software as well. The functional roles emerge from their silos as needed to work on the project, and then disband once done. Good software development benefits from a “product” mindset where the team, instead, stays dedicated to the product.

Operating in this way, of course, is not always possible, if only at first as organizations switch their mindset from “siloed” teams to dedicated teams. So care must be taken when slicing a person up between different teams: intuitively we understand that people cannot truly “multitask,” and numerous studies have shown the high cost of context switching and of spreading a person’s attention across many different projects. In discussing roles, keep in mind that the further you are from truly integrated, dedicated teams, the less efficiency and productivity you’ll get from the people on those teams and, thus, from the project as a whole.

More Reading

As described in the introduction, this is a quick overview of the roles and responsibilities for Agile and DevOps teams. The book Effective DevOps contains much discussion of these topics and background on studies and proof-points. For a slightly more “enterprise-y” take, see the concepts in disciplined agile delivery, which should be read as slightly “pre-cloud native” but nonetheless helpful.

Additionally, Pivotal Labs, and the Pivotal Customer Success and Transformation teams can discuss these topics further and help transform your organization accordingly. With over twenty years of experience and as the maintainers of Pivotal Cloud Foundry, one of the leading cloud platforms, we have been learning and perfecting these methods for some time.

Thanks to the many reviewer comments from Pivotal folks, in particular Robbie Clutton, Cornelia Davis, and David McClure. And, as always, excellent post-post-ironic corporate clipart from geralt.

For a more recent, detailed study on this topic, check out my book Monolithic Transformation.

Self-motivated teams lead to better software


In contrast to the way traditional organizations operate, cloud native enterprises are typically composed of self-motivated, self-directed teams. This reduces the amount of time it takes to make decisions, deploy code, and see if the results helped move the needle. More than just focusing on speed of decision making and execution, building up these intrinsically motivated teams helps spark creativity, fueling innovative thinking. Instead of being motivated by metrics like the number of tickets closed or the number of stories/points implemented, these teams are motivated by ideas like increasing customer satisfaction with faster forms processing or helping people track and improve their health.

In my experience, one of the first steps in shifting how your people think, away from the Pavlovian KPI bell, is to clearly explain your strategy and corporate principles. Having worked in strategy roles in the past, I’ve learned how poorly companies articulate their goals, constraints, and strategy. While there are many beautiful (and not so beautiful!) presentations that extol a company’s top motivations and finely worded strategies, most employees are left wondering how they can help day to day.

Whether you’re a leader or an individual contributor, you need to know the actionable details of the overall strategy and goals. Knowing your company’s strategy, and how it will implement that strategy, is not only necessary for breeding self-motivated and self-managed people, but it also comes in handy when you’re looking to apply agile and lean principles to your overall continuous delivery pipeline. Tactically, this means taking the time to create detailed maps of how your company executes its strategy. For example, you might do value-stream mapping, alignment maps, or something like value chain mapping. This is a case where, again, I find that companies have much less than they think they do: organizations often have piles of diagrams and documents, but very rarely something that illustrates everything from having an idea for a feature to getting it into customers’ hands. A cloud native enterprise will always seek to be learning from and improving that end-to-end process. So, it’s required to map your entire business process out and have everyone understand it. Then, we all know how we deliver value.

The process of value-stream mapping, for example, can be valuable for simply finding waste (so much so that the authors of Learning to See cite focusing only on waste removal as a lean anti-pattern). As one anecdote goes: after working for several weeks to finally get everything up on the big white board, one of the executives looked at all the steps and time wasted in their process to get software out the door and said, “there’s a whole lot of stupid up there.” The goal of these exercises is not only to remove the stupid, but also to focus on and improve the process itself.



The Problem with PaaS Market-sizing

Figuring out the market for PaaS has always been difficult. At the moment, I tend to estimate it at $20-25bn sometime in the future (5-10 years from now?) based on the model of converting the existing middleware and application development market. Sizing this market has been something of an annual bugbear for me across my time at Dell doing cloud strategy, at 451 Research covering cloud, and now at Pivotal.

A bias against private PaaS

This number is in contrast to the numbers you usually see from analysts, in the single-digit billions. Most analysts think of PaaS only as public PaaS, tracking just Heroku and parts of AWS, Azure, and Google. This is mostly due, I think, to historical reasons: several years ago “private cloud” was seen as goofy and made-up, and I’ve found that many analysts still view it as such. Thus, their models started off as just public PaaS and have largely remained so.

I was once a “public cloud bigot” myself, but having worked more closely with large organizations over the past five years, I now see that much of the spending on PaaS is on private PaaS. Indeed, if you look at the history of Pivotal Cloud Foundry, we didn’t start making major money until we gave customers what they wanted to buy: a private PaaS platform. The current product/market fit for PaaS in large organizations, then, seems to be private PaaS.

(Of course, I’d suggest a wording change: when you end up running your own PaaS you actually end up running your own cloud and, thus, end up with a cloud platform.)

How much do you have budgeted?

With this premise – that people want private PaaS – I then look at existing middleware and application development market-sizes. Recently, I’ve collected some figures for that:

  • IDC’s Application Development forecast puts the application development market (which includes ALM tools and platforms) at $24bn in 2015, growing to $30bn in 2019. The commentary notes that the influence of PaaS will drive much growth here.
  • Recently from Ovum: “Ovum forecasts the global spend on middleware software is expected to grow at a compound annual growth rate (CAGR) of 8.8 percent between 2014 and 2019, amounting to $US22.8 billion by end of 2019.”
  • And there’s my old pull from a Goldman Sachs report that pulled from Gartner, where middleware is $24bn in 2015 (that’s from a Dec 2014 forecast).

When dealing with large numbers like this and so much speculation, I prefer ranges. Thus, the PaaS TAM I tend to use nowadays is something like “it’s going after a $20-25bn market, you know, over the next 5 to 10 years.” That is, the pot of current money PaaS is looking to convert is somewhere in that range. That’s the amount of money organizations are currently willing to spend on this type of thing (middleware and application development), so it’s a good estimate of how much they’ll spend on a new type of this thing (PaaS) to help solve the same problems.
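
That range can be sanity-checked against the three forecasts cited above, treating them as alternate sizings of the same convertible pot of spend rather than additive markets:

```python
# Sanity-checking the $20-25bn TAM range against the forecasts cited
# above. These are alternate sizings of the same convertible pot of
# middleware / application development spend, so we take a range and a
# midpoint rather than summing them.
forecasts_bn = {
    "IDC appdev 2015": 24.0,
    "Ovum middleware 2019": 22.8,
    "Gartner (via Goldman Sachs) middleware 2015": 24.0,
}

low, high = min(forecasts_bn.values()), max(forecasts_bn.values())
midpoint = sum(forecasts_bn.values()) / len(forecasts_bn)

print(f"${low}bn-${high}bn, midpoint ~${midpoint:.1f}bn")
# -> $22.8bn-$24.0bn, midpoint ~$23.6bn
```

All three land comfortably inside the $20-25bn band, which is why a range is a more honest way to state the number than any single point estimate.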

Things get slightly dicey depending on whether you include databases, ALM tools, and the underlying virtualization and infrastructure software: some PaaSes include some, none, or all of these in their products. Databases are a huge market (~$40bn), as is virtualization (~$4.5bn). The other ancillary buckets are relatively small. I don’t think “PaaS” eats too much of the database market, but it probably takes some “virtualization.”

So, if you accept that PaaS is both public and private PaaS and that it’s going after the middleware and appdev market, it’s a lot more than a few billion dollars.

(Ironic-clipart from my favorite source, geralt.)