The premise of this book, for most anyone, is painfully boring: planning out and project managing the installation of COTS software. This is mostly lumbering, on-premises ERP applications: those huge, multi-year installs of software that run the back office and systems of record for organizations. While this market is huge, touches almost every company, and has software that is directly or indirectly touched by almost everyone each day (anytime you buy something or interact with a company)…it’s no iPhone.
If you’re in the business of selling enterprise software and services, however, Beaubouef’s book is a rare look inside the buyer’s mind and their resulting work-streams when they’re dealing with big ol' enterprise IT. As a software marketer, I read it for exactly that. I was hoping to find some ROI models (a scourge of my research). It doesn’t really cover that at all, which is fine.
There’s a core cycle of ideas and advice flitting in and out of the book that I like:
While the book focuses on on-premises software, the overall thinking could easily apply to any implementation of a large, IT-driven, vendor-provided system: SaaS would work, and to an extent the kind of infrastructure software we sell at Pivotal. As the points above go over, the core thrust of the book is about managing how you make sure your IT is actually helping the business, not bogging down in itself.
If you’re pretty vague on what you should do in these large IT initiatives, you could do a lot worse than read this book.
Check out the book: Maximize Your Investment: 10 Key Strategies for Effective Packaged Software Implementations
I get all ants-in-pants about this whole bi-modal discussion because I feel like it’s a lot of energy spent talking about the wrong things.
This came up recently when I was asked about “MVP”, in a way that basically was saying “our stuff is dangerous [oil drilling], so ‘minimal’ sounds like it’d be less safe.” I tried to focus them on the “V” and figure out what “viable” was for their situation. The goal was to reinforce that the point of all this mode 2/small batch/DevOps/PDCA/cloud native/OODA nonsense is to keep iterating to get to the right code.
Part of the continual consternation around bi-modal IT - sad/awesome mode - is misalignment around that “viability” and scoping on “enterprise” projects. This is just one of the seam-lines along which the discussion splits unhelpfully.
The awesome mode people are like:
You should divide the work into small chunks that you release to production as soon as possible - DevOps, Agile, MVP, CI/CD - POW! You have no idea what or how you should implement these features so you need to iteratively do it cf. projectcartoon.com
And the sad mode folks are like:
Yes, but we have to implement all this stuff all at once and can’t do it in small slices. Plus, mainframes and ITIL.
Despite often coming off as a sad mode apologist, I don’t even know what the sad mode people are thinking. There’s this process-hugger syndrome that, well, on both sides really, creates strawpeople. The goal of both methods is putting out software that makes users more productive, including having it actually work, and not overpaying for the whole thing.
The Enemy is any activity that doesn’t support those goals, and the job is eliminating it as much as possible. In this, there was some good scrabbling from the happy mode people laughing at ITSM think early on, but at this point, the sad people have gotten the message, have been reminded of their original goal, and are now trying to adapt. In fact, I think there’s a lot the “sad mode” people could bring to the table.
To play some lexical hopscotch, I don’t think there is a “mode 1.” I think there’s just people doing a less than awesome job and hiding behind a process-curtain. Sure, it may not be their choice and, thus, not their fault. “Shitty jobs are being done,” if you prefer the veil of passive voice.
When I hear objections to fixing this situation, I try to be nice and helpful. After all, I’m usually there as part of an elaborate process to get money from these folks in exchange for helping them. When they go all Eeyore on me, I have to reframe the room’s thinking a little bit without getting too tough love-y.
“When I put these lithium batteries in this gas car, it doesn’t seem to work. So electric cars are stupid, right?”
You want to walk people to asking “how do we plan out the transition from The Old Way That Worked At Some Point to The New Way That Sucks Less?” They might object with a sort of “we don’t need to change” or the even more snaggly “change is too hard” counter-point.
I’m not sure there are systems that can just be frozen in place and resist the need to change. One day, in the future, any system (even the IRS’!) will likely need to change and if you don’t already have it set up to change easily (awesome mode), you’re going to be in a world of hurt.
The fact that we discuss how hard it is to apply awesome mode to legacy IT is evidence that that moment will come sooner than you think.
(Insert, you know, “where’s my mobile app, Nowakowski?” anecdote of flat-footedness here.)
The royal books of process, ITIL, are another frequent strawperson that frothy mouthed agents of change like to light up. Few things are more frustrating than a library of books that cost £100 each. There’s a whole lot in there, and the argument that the vendors screw it all up is certainly appetizing. Something like ITIL, though, even poorly implemented falls under the “at least it’s an ethos” category.
I’m no IT Skeptic or Charles T. Betz, but I did work at BMC once. As with “bi-modal,” I really don’t want to go re-read my ITIL books (I only have the v2 version; can someone spare a few £100’s to read v3/4?), but I’m pretty sure you could “do DevOps” in an ITIL context. You’d just have to take out the time-consuming implementation of it (service desks, silo’d orgs, etc.).
Most of ITIL could probably be done with the metaphoric (or literal!) post-it notes, retrospectives, and automated audit-log stuff that you’d see in DevOps. For certain, it might be a bunch of process gold-plating, but I’m pretty sure there’s no unmovable peas under all those layers of books that would upset any slumbering DevOps princes and princesses too bad.
Indeed, my recollection of ITIL is that it merely specifies that people should talk with each other and avoid doing dumb shit, while trying to improve and make sure they know the purpose/goals of any “service” that’s deployed. They just made a lot of flow charts and check lists to go with it. (And, yeah: vendors! #AmIrightohwaitglasshouse.)
That gets us back to the people. The meatware is what’s rotting. Most people know they’re sad, and in their objections to happiness, you can find the handholds to start helping:
Yes, awesome mode people, that sounds wonderful, just wonderful. But, I have 5,000 applications here at REALLYSADMODECOGLOBAL, Inc. - I have resources to fix 50 of them this year. YOUR MOVE, CREEP!
Which is to say, awesome mode is awesome: now how do we get started in applying it at large organizations that are several fathoms under the seas of sad?
The answer can’t be “all the applications,” because then we’ll just end up with 5,000 different awesome modes (OK, maybe more like 503?) - like, do we all use Jenkins, or CircleCI, or Travis? PCF, Docker, BlueMix, OpenShift, AWS, Heroku, that thing Bob in IT wrote in his spare time, etc.
Thus far, I haven’t seen a lot of commentary on planning out and staging the application of mode 2. Gartner, of course, has advice here. But it’d be great to see more from the awesome mode folks. There’s got to be something more helpful than just “AWESOME ALL THE THINGS!”
Thanks to Bridget for helping draw all this blood out while I was talking with her about the bi-modal piece she contributed to.
I have a larger piece on common objections to “cloud native” that I’ve encountered over the last year. Put more positively, “how to get your digital transformation started with agile, DevOps, and cloud native” or some such platitudinal title like that. Here’s a draft of the dread-ROI section.
The most annoying buzzkill for changing how IT operates (doing agile, DevOps, taking “the cloud native journey,” or whatever you think is the opposite of “waterfall”) is the ROI counter-measure. ROI is a tricky hurdle to jump because it’s:
In my experience, when people are asking you about ROI, what they’re asking is “how will I know the time and money I’m going to spend on this will pay off and, thus, I won’t lose time and money? (I don’t want to look like a fool, you see, at annual review time)”
What they’re asking is “how do I know this will work better than what I’m currently doing or alternatives.” It also usually means, “hey vendor, prove to me that I should pay you.”
As I rambled through last year, I am no ROI expert. However, I’ve found two approaches that seem to be more something than nothing: (1.) creating a business case and trusting that agile methods will let you succeed, and, (2.) pure cost savings from the efficiencies of agile and “cloud native.”
A business case can tell you if your approach is too expensive, but not if it will pay for itself because that depends on the business being successful.
Here, you come up with a new business idea, a product, service, or tweak to an existing one of those. “We should set up little kiosks around town where people can rent DVDs for a $1 a day. People like renting DVDs. We should have a mobile app where you can reserve them because, you know, people like using mobile. We should use agile to do this mobile app and we’re going to need to run it somewhere, like ‘the cloud.’ So, hey, IT-nerds, what’s the ROI on doing agile and paying for a cloud platform on this?”
In this case, you (said “IT-nerds”) have some externally imposed revenue and profit objectives that you need to fit into. You also have some time constraints (that you’ll use to push back on bloated requirements and scope creep when they occur, hopefully). Once you have these numbers, you can start seeing if “agile” fits into it and if the cost of technology will fit your budget.
One common mis-step here is to think of “cost” as only the licensing or service fees for “going agile.” The time it takes to get the technology up and running and the chance that it will work in time are other “costs” to account for (and this is where ROI for tech stuff gets nasty: how do you put those concerns into Excel?).
To cut to the chase, you have to trust that “agile” works and that it will result in the DVD rental mobile app you need under the time constraints. There’s no spreadsheet-friendly thing here that isn’t artfully dressed up qualitative thinking in quantitative costumes. At best you can point to things like the DevOps reports to show that it’s worked for other people. And for the vendor expenses, in addition to trusting that they work, you have to make sure the expenses fit within your budgets. If you’re building a $10m business, and the software and licensing fees amount to $11m, well, that dog won’t hunt. There are some simple, yet helpful numbers to run here, like the TCO for on-premises vs. public cloud fees.
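To make that concrete, here’s a minimal sketch of the kind of back-of-the-envelope TCO comparison I mean. Every figure in it is a placeholder assumption for illustration, not real pricing from any vendor; swap in your own numbers:

```python
# Back-of-the-envelope TCO: on-premises vs. public cloud.
# All figures are made-up assumptions for illustration only.

YEARS = 3

# Hypothetical on-premises costs
onprem_hardware = 500_000            # servers, storage, network (upfront)
onprem_licenses_per_year = 150_000   # virtualization, middleware, etc.
onprem_ops_staff_per_year = 300_000

# Hypothetical public cloud costs
cloud_fees_per_year = 400_000        # compute, storage, network
cloud_ops_staff_per_year = 150_000   # less undifferentiated heavy lifting

onprem_tco = onprem_hardware + YEARS * (onprem_licenses_per_year + onprem_ops_staff_per_year)
cloud_tco = YEARS * (cloud_fees_per_year + cloud_ops_staff_per_year)

print(f"{YEARS}-year on-prem TCO: ${onprem_tco:,}")
print(f"{YEARS}-year cloud TCO:   ${cloud_tco:,}")
```

The point isn’t the specific totals; it’s that this is the rare part of the discussion that actually fits in a spreadsheet.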
Of course, a major problem with ROI thinking is that it’s equally impossible to get a handle on competing ways to solve the problem, esp. the “change nothing” alternative. What’s the ROI of how IT currently operates? It’d be good to know that so you can compare it to the proposed new way.
If you’re lucky enough to know a realistic, helpful budget like this, your ROI task will be pretty easy. Then it’s just down to horse-trading with your various enterprise sales reps. Y’all have fun with that.
Focus on removing costs, not making money.
If you’re not up for the quagmire of business case-driven ROI, you can also discuss ROI in terms of “savings” the new approach affords. For things like virtualizing, this style of ROI is simple: we can run 10 servers on one server now, cutting our costs down by 70–80% after the VMware licensing fees.
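That consolidation math is easy to sketch out. Here’s a toy version, with assumed (not real) costs, just to show the shape of the calculation:

```python
# Virtualization-style ROI: consolidate 10 physical servers onto 1,
# then subtract the hypervisor licensing. Illustrative assumptions only.

servers_before = 10
cost_per_server_per_year = 10_000      # hardware, power, space (assumed)
hypervisor_license_per_year = 12_000   # assumed licensing fee

before = servers_before * cost_per_server_per_year
after = 1 * cost_per_server_per_year + hypervisor_license_per_year

savings = before - after
print(f"Yearly cost before: ${before:,}")
print(f"Yearly cost after:  ${after:,}")
print(f"Savings: ${savings:,} ({savings / before:.0%})")  # ~78% with these assumptions
```

With these made-up inputs you land right in that 70–80% range, which is why this style of ROI pitch is so easy to make for component swaps.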
Doing “agile,” however, isn’t like dropping in a new, faster and cheaper component into your engine. Many people I encounter in conference rooms think about software development like those scenes from 80s submarine movies. Inevitably, in a submarine movie, something breaks and the officer team has to swipe all the tea cups off the officer’s mess table and unfurl a giant schematic. Looking over the dark blue curls of a thick Eastern European cigarette, the head engineer gestures with his hand, then slams a grimy finger onto the schematics and says “vee must replace the manifold reducer in the reactor.”
Solving your digital transformation problems is not like swapping “agile” into the reactor. It’s not a component-based improvement like virtualization was. Instead, you’re looking at process change (or “culture,” as the DevOps people like to say), a “thought technology.” I think at best what you can do is try to calculate the before and after savings that the new process will bring. Usually, this is trackable in things like time spent, tickets opened, number of staff needed, etc. You’re focusing on removing costs, not making money. As my friend Ed put it when we discussed how to talk about DevOps with the finance department:
In other words, if I’m going to build a continuous integration platform, I would imagine you could build out a good scaffolding for that and call it three or four months. In the process of doing that, I should be requiring less help desk tickets get created so my overtime for my support staff should be going down. If I’m virtualizing the servers, I’ll be using less server space and hard drive space, and therefore that should compress down. I should be able to point to cost being stripped out on the back end and say this is maybe not 100% directly related to this process, but it’s at least correlated with it.
In this instance, it’s difficult to prove that you’ll achieve good ROI ahead of time, but you can at least try to predict changes informed by the savings other people have had. And, once again, you’re left to making a leap of faith that qualitative anecdotes from other people will apply to you.
For example, part of Pivotal’s marketing focuses on showing people the worth of buying a cloud platform to support an agile approach to software delivery (we call that “cloud native”). In that conversation, I cite figures like this:
In most of these cases, once you switch over to the new way, you end up with extra capacity because you can now “do IT” more efficiently. Savings, then, come from what you decide to do with that excess capacity: (a.) doing more with the new capacity like adding more functionality to your existing businesses, creating new businesses, or entering new markets, or, (b.) if you don’t want to “grow,” you get rid of the expense of that excess capacity (i.e., lay-off the excess staff or otherwise get them out of the Excel sheet for your business case).
But, to be clear, you’re back into the realm of imagining and predicting what the pay-off will be (the “business case” driven ROI from above) or simply stripping out costs. It’s a top-line vs. bottom-line discussion. And, in each case, you have to take on faith the claims about efficiencies, plus trust that you can achieve those same savings at your organizations.
With these kinds of numbers and ratios, the hope is, you can whip out a spreadsheet and make some sort of chart that justifies doing things the new way. Bonus points if you use Monte Carlo inspired ranges to illustrate the breadth of possibilities instead of stone-cold line-graph certainty.
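If you want to try the Monte Carlo approach, a sketch like this is enough to get started. The distributions and parameters below are pure assumptions you’d replace with your own estimates:

```python
# Monte Carlo sketch for ROI ranges instead of single-point line graphs.
# Rather than claiming "we'll save exactly $X," sample the uncertain
# inputs and report a range. All parameters are assumptions.

import random

def simulate_one_year():
    deploys_per_month = random.uniform(2, 20)       # how often we ship
    hours_saved_per_deploy = random.uniform(4, 40)  # manual work removed
    cost_per_hour = random.uniform(50, 120)         # loaded staff cost
    return deploys_per_month * 12 * hours_saved_per_deploy * cost_per_hour

runs = sorted(simulate_one_year() for _ in range(10_000))
p10, p50, p90 = runs[1_000], runs[5_000], runs[9_000]
print(f"Estimated yearly savings: p10=${p10:,.0f}, median=${p50:,.0f}, p90=${p90:,.0f}")
```

Presenting the p10/p90 spread instead of one line is a more honest way of admitting that the inputs are guesses.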
As an added note of snark: all of these situations assume you know the current finances for the status quo way of operating. Surely, with all that ITIL/ITSM driven, mode 1 thinking you have a strong handle on your existing ROI, right? (Pause for laughs.)
More seriously, the question of ROI for thought technologies is extremely tricky. In that conversation on this topic that I had with Ed last year, the most important piece of advice was simple: talk with the finance people more and explain to them what’s going on.
That’s the most effective (and least satisfying!) advice you get about any of this “doing things the new way” change management prattle: whether it’s auditors, DBAs, finance, PMO people, or whoever is throwing chaff in your direction: just go and talk with them. Understand what it is they need, why they’re doing their job, and bring them onto the team instead of relegating them to the role of The Annoying Others.
Check out another take on this over in my September 2016 column at The Register.
I’ve talked with an old colleague about pitching a developer-based strategy recently. They’re trying to convince their management chain to pay attention to developers to move their infrastructure sales. There’s a huge amount of “proof” and arguments you can make to do this, but my experience in these kinds of projects has taught me that, eventually, the executive in charge just has to take a leap of faith. There’s no perfect slide that proves developers matter. As with all great strategies, there’s a stack of work, but the final call has to be pure judgement, a leap of faith.
You know the story. Many of the folks in the IT vendor world have had a great, multi-decade run in selling infrastructure (hardware and software). All of a sudden (well, starting about ten years ago), this cloud stuff comes along, and then things look weird. Why aren’t they just using our products? To cap it off, you have Apple in mobile just screwing the crap out of the analogous incumbents there.
But, in cloud, if you’re not the leaders, you’re obsessed with appealing to developers and operators. You know you can have a “go up the elevator” sale (sell to executives who mandate the use of technology), but you also see “down the elevator” people helping or hindering here. People complain about that SOAP interface, for some reason they like Docker before it’s even GA’ed, and they keep using these free tools instead of buying yours.
It’s not always the case that appealing to the “coal-facers” (developers and operators) is helpful, but chances are high that if you’re in the infrastructure part of the IT vendor world, you should think about it.
So, you have The Big Meeting. You lay out some charts, probably reference RedMonk here and there. And then the executive(s) still isn’t convinced. “Meh,” as one systems management vendor exec said to me most recently, “everyone knows developers don’t pay for anything.” And then, that’s the end.
If you can’t use Microsoft, IBM, Apple, and open source itself (developers like it not just because it’s free, but because they actually like the tools!) as historic proof, you’re sort of lost. Perhaps someone has worked out a good, management consultant strategy-toned “lessons learned” from those companies, but I’ve never seen it. And believe me, I’ve spent months looking when I was at Dell working on strategy. Stephen O’Grady’s The New Kingmakers is great and has all the material, but it’s not in that much needed management consulting tone/style. (I’m ashamed to admit I haven’t read his most recent book yet, maybe there’s some in there.)
Of course, if Microsoft and Apple don’t work out as examples of “leaders,” don’t even think of deploying all the wacky consumer-space folks like Twitter and Facebook, or something as detailed as Hudson/Jenkins or Oracle DB/MySQL/MariaDB.
I think SolarWinds might be an interesting example, and if Dell can figure out applying that model to their Software Group, it’d make a good case study. Both of these are not “developer” stories, but “operator” ones; same structural strategy.
All of this has led me to believe that, eventually, the executives have to just take a leap of faith and “get it.” There’s only so much work you can do — slides and meetings — before you’re wasting your time if that epiphany doesn’t happen.
The transformation is complete.
If this is your bag, come check out a panel on the developer relations at the OpenStack Summit on April 28th, in Austin — I’ll be moderating it!
Hey, I’ve not only seen this movie before, I did some script treatments:
Chief Executive Officer John Chambers is aggressively pursuing software takeovers as he seeks to turn a company once known for Internet plumbing products such as routers into the world’s No. 1 information-technology company. … Cisco is primarily targeting developers of security, data-analysis and collaboration tools, as well as cloud-related technology, Chambers said in an interview last month.
Good for them. Cisco has consistently done a good job to fill out its portfolio and is far from the one-trick pony people think it is (last I checked, they do well with converged infrastructure, or integrated systems, or whatever we’re supposed to call it now). They actually have a (clearly from lack of mention in this piece) little known-about software portfolio already.
In case anyone’s interested, here’s some tips:
Software follows a strange loop. Unlike hardware where (more or less) we keep making the same products better, in software we like to re-write the same old things every five years or so, throwing out any “winners” from the previous regime. Examples here are APM, middleware, analytics, CRM, web browsers…well…every category except maybe Microsoft Office (even that is going bonkers in the email and calendaring space, and you can see Microsoft “re-writing” there as well [at last, thankfully]). You want to buy, likely, mid-stage startups that have proven that their product works and is needed in the market. They’ve found the new job to be done (or the old one and are re-writing the code for it!) and have a solid code-base, go-to-market, and essentially just need access to your massive resources (money, people, access to customers, and time) to grow revenue. Buy new things (which implies you can spot old vs. new things).
When you identify a “new thing” you’re going to pay a huge multiple of 5x, 10x, 20x, even more. You’re going to think that’s absurd and that you can find a better deal (TIBCO, Magic, Actuate, etc.). Trust me, in software there are no “good deals” (except once-in-a-lifetime buys like the firesale of Remedy). You don’t walk into Tiffany’s and think you’re going to get a good deal, you think you’re going to make your spouse happy.
That is, they’re not gonna happen on any scale that helps make the business case, move on. The effort it takes to “integrate” products and, more importantly, strategy and go-to-market, together to enable these dreams of a “portfolio” is massive and often doesn’t pan out. Are the products written in exactly the same programming language, using exactly the same frameworks and runtimes? Unless you’re Microsoft buying a .Net-based company, the answer is usually “hell no!” Any business “synergies” are equally troublesome: unless they already exist (IBM is good at buying small and mid-companies who have proven out synergies by being long-time partners), it’s a long-shot that you’re going to create any synergies. Evaluate software assets on their own, stand-alone, not as fitting into a portfolio. You’ve been warned.
You’re thinking your sales force is going to help you sell these new products. They “go up the elevator” instead of down so will easily move these new SKUs. Yeah, good luck, buddy. Sales people aren’t that quick to learn (not because they’re dumb, at all, but because that’s not what you pay and train them for). You’ll need to spend a lot of time educating them and also your field engineers. Your sales force will be one of your biggest assets (something the acquired company didn’t have) so baby them and treat them well. Train them.
The business and processes (“culture”) of software is very different and particular. Do you have free coffee? Better get it. (And if that seems absurd to you, my point is proven.) Do you get excited about ideas like “fail fast”? Study and understand how software businesses run and what they do to attract and retain talent. We still don’t really understand how it all works after all these years and that’s the point: it’s weird. There are great people (like my friend Israel Gat) who can help you, there’s good philosophy too: go read all of Joel’s early writing as a start, don’t let yourself get too distracted by Paul Graham (his is more about software culture for startups, who you are not — Graham-think is about creating large valuations, _not_ extracting large profits), and just keep learning. I still don’t know how it works or I’d be pointing you to the right URL. Just like with the software itself, we completely forget and re-write the culture of software canon about every five years. Good on us. Andrew has a good check-point from a few years ago that’s worth watching a few times.
This is the only book I’ve ever read that describes what it’s like to be an “old” technology company and actually has practical advice on how to survive. Understand how the cash-cow cycle works and, more importantly for software, how to get senior leadership to support a cycle/culture of business renewal, not just customer renewal.
Finally, I spotted a reference to Stall Points in one of Chambers’ talks the other day which is encouraging. Here’s one of the better charts you can print out and put on your wall to look at while you’re taking a pee-break between meetings:
That chart covers all types of companies. It’s hard to renew yourself; it’s not going to be easy. Good luck!
Figuring out the market for PaaS has always been difficult. At the moment, I tend to estimate it at $20–25bn sometime in the future (5–10 years from now?) based on the model of converting the existing middleware and application development market. Sizing this market has been something of an annual bug-bear for me across my time at Dell doing cloud strategy, at 451 Research covering cloud, and now at Pivotal.
This number is in contrast to numbers you usually see in the single digit billions from analysts. Most analysts think of PaaS only as public PaaS, tracking just Force.com, Heroku, parts of AWS, Azure, and Google, and a bunch of “Other.” This is mostly due, I think, to historical reasons: several years ago “private cloud” was seen as goofy and made-up, and I’ve found that many analysts still view it as such. Thus, their models started off being just public PaaS and have largely remained as so.
I was once a “public cloud bigot” myself, but having worked more closely with large organizations over the past five years, I now see that much of the spending on PaaS is on private PaaS. Indeed, if you look at the history of Pivotal Cloud Foundry, we didn’t start making major money until we gave customers what they wanted to buy: a private PaaS platform. The current product/market fit, then, for PaaS for large organizations seems to be private PaaS.
(Of course, I’d suggest a wording change: when you end-up running your own PaaS you actually end-up running your own cloud and, thus, end up with a cloud platform. Also, things are getting even more ambiguous at the infrastructure layer all the time — perhaps “private PaaS” means more “owning” the PaaS layer, regardless of who “owns” the IaaS layer.)
With this premise — that people want private PaaS — I then look at existing middleware and application development market-sizes. Recently, I’ve collected some figures for that:
When dealing with large numbers like this and so much speculation, I prefer ranges. Thus, the PaaS TAM I tend to use nowadays is something like “it’s going after a $20–25bn market, you know, over the next 5 to 10 years.” That is, the pot of current money PaaS is looking to convert is somewhere in that range. That’s the amount of money organizations are currently willing to spend on this type of thing (middleware and application development) so it’s a good estimate of how much they’ll spend on a new type of this thing (PaaS) to help solve the same problems.
Things get slightly dicey depending on including databases, ALM tools, and the underlying virtualization and infrastructure software: some PaaSes include some, none, or all of these in their products. Databases are a huge market (~$40bn), as is virtualization (~$4.5bn). The other ancillary buckets are pretty small, relatively. I don’t think “PaaS” eats too much database, but probably some “virtualization.”
So, if you accept that PaaS is both public and private PaaS and that it’s going after the middleware and appdev market, it’s a lot more than a few billion dollars.
Satisfying the mythical auditors is often one of the first barriers to spreading DevOps initiatives more widely inside an organization. While these process-driven barriers can be annoying and onerous, once you follow the DevOps tradition of empathetic inclusion — being all “one team” — they can not only stop slowing you down but actually help the overall quality of the product. Indeed, the very reason these audit checks were introduced in the first place was to ensure overall quality of the software and business. There’s some excellent, exhaustive overviews out there of dealing with audits and the like in DevOps. In this column, I wanted to go through a little mental re-orientation for how to start thinking about and approaching the “compliance problem.”
In this context, I think of “auditors” as falling into the category of governance, risk and compliance (GRC) — any function that acts as a check on code, and on how the code is produced and run, as it goes through its lifecycle. I would put security in here as well, though that tends to be such a broad, important topic that it often warrants its own category (and the security people seem to like maintaining their occultic silo-tude, anyhow).
The GRC function(s) may impose self-created policies (like code and architectural review), third party and government imposed regulations (like industry standard compliance and laws such as HIPAA), and verification that risky behavior is being avoided (if you write the code, you can’t be the same person who then uses that code for cash payouts, perhaps, to yourself, for example). In all cases, “compliance” is there to ensure overall quality of the product and the process that created it. That “quality” may be the prevention of malicious and undesired behavior; that is, in a compliance-driven software development mindset, the ends rarely justify the means.
In many cases, the GRC function is more interested in proof that there is a process in place than actually auditing each execution of that process. This is a curious thing at first. Any developer knows that the proof is in the code, not the documentation. And, indeed, for some types of GRC the amount of automation that a DevOps mindset puts into place could likely improve the quality of GRC, ironically.
Indeed, automation is one of the first areas to look at when reducing DevOps/GRC friction. First, treat complying with policies as you would any other feature. Describe it, prioritize it and track it. Once you have gotten your hands around it, you can start figuring out how to best implement that “feature.” Ideally, you can code and automate your way out of having to do too much manual work.
There’s work being done in the US Federal government along these lines that’s helpful because it’s visible and at scale. First, as covered in a recent talk by Diego Lapiduz, part of what auditors are looking for is to trust the software and infrastructure stack that apps are running on. This is especially true from a security standpoint. The current way that software is spec’d out and developed in most organizations follows a certain “do whatever,” or even YOLO principle. App teams are allowed to specify which operating systems, orchestration layers and middleware components they want. This may be within an approved list of options, but more often than not it results in unique software stacks per application.
As outlined by Diego, this variation in the stack meant that government auditors had to review just about everything, taking up to months to approve even the simplest line of code. To solve this problem, 18F standardized on one stack — Cloud Foundry — to run applications on, not allowing for variance at the infrastructure layer. They then worked with the auditors to build trust in the platform. Then, when there was just the metaphoric or literal “one line of code” to deploy, auditors could focus on much less, certainly not the entire stack. This brought approval time down to just days. A huge speed up.
When it comes to all the paperwork, also look to ways to automate the generation of the needed listings of certifications and compliance artifacts. This shouldn’t be a process that’s done in opaque documents, nor manually, if at all possible. Just as we’d now recoil in horror at manually deploying software into production, we should try to achieve “compliance as code” that’s as autogenerated (but accurate!) as possible. To that end, the work being done in the OpenControl project is showing an interesting and likely helpful approach.
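To illustrate the idea (and only the idea; this is not OpenControl’s actual schema, and the narratives below are invented, though AC-2 and CM-3 are real NIST 800-53 control IDs), here’s a toy sketch of keeping control narratives as data and generating the audit document from them:

```python
# Toy "compliance as code": keep control narratives in machine-readable
# form and auto-generate the document auditors read. The schema and
# narratives here are invented for illustration.

controls = [
    {"id": "AC-2", "name": "Account Management",
     "narrative": "Accounts are provisioned via the platform's role-based "
                  "access control; changes are logged and reviewed weekly."},
    {"id": "CM-3", "name": "Configuration Change Control",
     "narrative": "All changes flow through the CI/CD pipeline; every "
                  "deploy is traceable to a reviewed commit."},
]

def render_compliance_doc(controls):
    """Turn the control data into the prose document auditors consume."""
    lines = ["Compliance Narrative (auto-generated)", ""]
    for c in controls:
        lines.append(f"{c['id']}: {c['name']}")
        lines.append(f"  {c['narrative']}")
        lines.append("")
    return "\n".join(lines)

print(render_compliance_doc(controls))
```

Because the source of truth is data, the document regenerates on every change, the same way a build artifact does, instead of rotting in a .docx somewhere.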
The lesson for DevOps teams here is clear: Standardize your stack as much as possible and work with auditors to build their trust in that platform. Also, look into how you can automate the generation of compliance documents beyond the usual .docx and .pptx suspects. This will help your GRC process move at DevOps speed. And it will also allow your auditors to still act as a third party governing your code. They’ll probably even do a better job if they have these new, smaller batches of changes to review.
To address the compliance issue fully, you’ll need to start working with the actual compliance stakeholders directly to change the process. There’s a subtle point right there: Work with the people responsible for setting compliance, not those responsible for enforcing it, like IT. All too often, people in IT will take the strictest view of compliance rules, which results in saying “no” to virtually anything new — coupled with Larman’s Law, you’ll soon find that, mysteriously, nothing new ever happens and you’re back to the pre-DevOps speed of deployment, software quality levels and timelines. You can’t blame IT staff for being unimaginative here — they’re not experts in compliance and it’d be risky for them to imagine “workarounds.” So, when you’re looking to change your compliance process, make sure you’re including the actual auditors and policy setters in your conversations. If they’re not “in the room,” you’re likely wasting your time.
As an example, one of the common compliance problems is around “developers deploying to production.” In many cases and industries, a separation of duties is required between coding and deploying. When deploying code to production was a more manual, complicated process, this could be extremely onerous. But once deployments are push-button automated with a good continuous delivery pipeline, you might consider having the product manager or someone who hasn’t written code be the deployer. This ensures that you can “deploy at will,” but keeps the actual coders’ fingers off the button.
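As a sketch of what such a gate might look like, assuming hypothetical commit metadata from your pipeline (this is illustrative logic, not any particular CD product’s API):

```python
# Minimal separation-of-duties check: whoever pushes the deploy button
# must not have authored code in the release. Commit metadata shape is
# an assumption for illustration.

def authors_of_changeset(commits):
    """Collect the set of author identities in the release."""
    return {c["author"] for c in commits}

def can_deploy(deployer, commits):
    return deployer not in authors_of_changeset(commits)

release = [
    {"sha": "a1b2c3", "author": "dev-alice"},
    {"sha": "d4e5f6", "author": "dev-bob"},
]

for person in ("dev-alice", "pm-carol"):
    print(person, "may deploy:", can_deploy(person, release))
# dev-alice may deploy: False  (she authored code in the release)
# pm-carol may deploy: True   (fingers never touched the code)
```

The check is trivial precisely because the pipeline already knows who wrote what; the hard part is the policy conversation, not the code.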
Another intriguing compliance strategy, suggested by Home Depot’s Tony McCulley (who also suggested the above approach to the separation of duties), is to give GRC staff access to your continuous delivery process and deployment environment. This means instead of having to answer questions and check for controls for them, you can allow GRC staff to just do it on their own. Effectively, you’re letting GRC staff peer into and even help out with controls in your software. I’d argue that this only works if you have a well-structured platform supporting your CD pipeline with good UIs that non-technical staff can access.
It might be a bit of a stretch, but inviting your GRC people into your DevOps world, especially early on, may be your best bet at preventing compliance slowdowns. And, if there’s any core lesson of DevOps, it’s that the real problems are not in the software or hardware, but the meatware. Figuring out how to work better with the people involved will go a long way towards addressing the compliance problem.
(I originally wrote this December 2015 for FierceDevOps, a site which has made it either impossible or impossibly tedious to find these articles. Hence, it’s now here.)
Working at home, with a family, is a challenge, as this nice overview piece at The Register goes over. You think you’re trading away all those interruptions from co-workers talking about the sportsball or just complaining about the daily grind, but you’re actually trading them for a different set of co-workers: your family. And their requests for your attention are harder to stonewall than chatty cube-mates.
And then there’s the whole “out of sight, out of mind” effect with management at work. I’ve worked at home on and off (mostly at home) over the past decade and it has its challenges. I lead a public enough work-life, along with remote working aware folks, that Management forgetting about me rarely comes up. However, as my kids have grown up and there’s, consequently, more going on at home, figuring out how to shut-out my family is a constant challenge. You see, that’s the taboo part! “Shut-out” - you could say “manage” or all sorts of things, but if you follow the maker/manager mentality that most individual contributor (non-managers) knowledge workers must, you have to shut people (“distractions”) out.
On the other hand, this “flow” is a luxury us privileged folks have been experiencing for a long time:
What I didn’t know at the time was that this is what time is like for most women: fragmented, interrupted by child care and housework. Whatever leisure time they have is often devoted to what others want to do – particularly the kids – and making sure everyone else is happy doing it. Often women are so preoccupied by all the other stuff that needs doing – worrying about the carpool, whether there’s anything in the fridge to cook for dinner – that the time itself is what sociologists call “contaminated.”
I came to learn that women have never had a history or culture of leisure. (Unless you were a nun, one researcher later told me.) That from the dawn of humanity, high status men, removed from the drudge work of life, have enjoyed long, uninterrupted hours of leisure. And in that time, they created art, philosophy, literature, they made scientific discoveries and sank into what psychologists call the peak human experience of flow.
Women aren’t expected to flow.
It’s like there’s a maker/manager/_mother_ time management paradigm. (Speaking of that privilege: here I am, with time to type this very post.)
What I’ve been doing is trying to reprogram my mind to think in slices of time fragments and to gorge on 60-minute time spans when they come up. I recall learning that one of the reasons Nietzsche wrote so many aphorisms was because he didn’t have time to write longer pieces; his chronic sickness conditions (whatever they were) gave him little “flow” time.
When I shifted to work at Dell and was on the road at 451 Research, I was similarly afflicted with fragmented time (at Dell, you’d be in meetings all day because that’s how things ran). I remember one time at 451 Research when I’d been trying to finish a piece on SUSE and was walking down a ponderously long casino hallway: I just stopped, pulled out my laptop, and started typing for about ten minutes. Finding those little slices that add up to a full 90 to 120 minutes is hard…but, at least with non-programming knowledge work, you can get over the tax of context switching enough to make it worth it.
However, this is all within a large context: the computer. All of that partial attention swapping on the Internet over these years has helped warp my brain to work in fragments, but now I need to train my mind to swap between computer and “real life.” So far, it’s slow going.
All of this aside, I really value working from home. I enjoy seeing my kids and wife all day long (so much more so than all those random run-ins with people in the office). I like being in my own environment, being able to eat at home, and on those rare occasions when I’m in a boring, useless, but obligatory meeting, doing something more useful with my time as I listen in. I have one of the better situations I’ve ever had at work right now: everyone on my team, including my boss, is remote. This means we all know the drill, use the tools, and coordinate.
As my wife is fond of telling me, I should just lock my office door more, which is true. The other part that you, as a remote worker, have to program your brain for is: you’re going to be interrupted while you’re in “flow” a lot. Just accept it. In the office there’s plenty of fire-alarms, going to lunch, people stopping by your desk, and so on. We can’t all be on the flat food diet. My other bit of advice is to take advantage of being at home and a flexible work schedule to do more with your family. If you’re like me, you travel a fair amount as well. So just as I have to gobble up every long span of time greedily, when I’m home and have the chance to do things with family, I try to.
There’s just as much pull for DevOps in government as there is in the private sector. While most of our focus around adoption is on how businesses can and are using DevOps and continuous delivery, supported by cloud, to create better software, many government agencies are in the same position and would benefit greatly from figuring out how to apply DevOps in their organizations.
Just 13% of respondents in a recent MeriTalk/Accenture survey of 152 US Federal IT managers believed they could “develop and deploy new systems as fast as the mission requires.” The impact of improving on that could be huge. For example, the US Federal government, by conservative estimates, spends $84 billion a year on IT. And yet, the Standish Group believes that 94% of government IT projects fail. These are huge numbers that, with even small improvements, can have massive impact. And that’s before even considering the benefits of simply improving the quality of software used to provide government services.
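To see why even small improvements matter, run the arithmetic on just the figures cited above; the 1% recovery rate below is my own illustrative assumption, not a claim from either survey:

```python
# The "small improvements, massive impact" arithmetic, using only the
# figures cited: $84B/year spend, 94% project failure rate. The 1%
# recovery assumption is illustrative.
federal_it_spend = 84_000_000_000
failure_rate = 0.94
recovery_assumption = 0.01

recovered = federal_it_spend * failure_rate * recovery_assumption
print(f"${recovered:,.0f} per year")  # roughly $790 million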
As with any organization, the first filter for applicability is whether or not the government organization is using custom written software to accomplish its goals. If all the organization is doing is managing desktops, mobile, and packaged software, it’s likely that just SaaS and BYOD are the important areas to focus on. DevOps doesn’t really apply, unless there’s software being written and deployed in your organization or, as is more common in government agencies, for your organization, as we’ll get to when we discuss “contractors.”
When it comes to adopting and being successful with DevOps, the game isn’t too different than in the business world: much of the change will have to do with changing your organization’s process and “culture,” as well as adopting new tools that automate much of what was previously manual. You’ll still need to actually take advantage of the feedback loop that helps you improve the quality of your software, with respect to defects, performance in production, and design quality. There are a few things that tend to be more common in government organizations that bear some discussion: having to cut through red-tape, dealing with contractors, and a focus on budget.
While “enterprise” IT management tasks can be onerous and full of change review boards and process, government organizations seem to have mastered the art of paperwork, three ring binders, and red tape in IT. As an example, in the US Federal government, any change needs to achieve “Authority To Operate” which includes updating the runbook covering numerous failure conditions, certifying security, and otherwise documenting every aspect of the change in, to the DevOps minded, infinitesimal detail. And why not? When was the last time your government “failed fast” and you said “gosh, I guess they’re learning and innovating! I hope they fail again!” No, indeed. Governments are given little leash for failure and when things go terribly wrong, you don’t just get a tongue lashing from your boss, but you might get to go talk to Congress and not in the fun, field-trip how a bill is made kind of way. Being less cynical, in the military, intelligence, and law enforcement parts of government, if things go wrong more terrible things than denying you the ability to upload a picture of your pot roast to Instagram can happen. It’s understandable — perhaps, “explainable” — that government IT would be wrapped up in red-tape.
However, when trying to get the benefits of continuous delivery, DevOps, and cloud (or “cloud native” as that triptych of buzzwords is coming to be known), government organizations have been demonstrating that the comforting mantle of red-tape can be stripped. For example, in the GSA, the 18F group has reduced the time it takes to get a change through from 9–14 months to just two to three days.
They achieved this because now when they deploy applications on their cloud native platform (a Cloud Foundry instance that they run on Amazon Web Services) they are only changing the application, not the whole stack of software and hardware below the application layer. This means they don’t need to re-certify the middleware, runtimes and development frameworks, let alone the entire cloud platform, operating systems used, networking, hardware, and security configurations. Of course, the new lines of application code need to be checked, but because they’re following the small batch principles of continuous delivery, those net-new lines are few.
The lesson here is that you’ll need to get your change review process — the red-tape spinners — to trust the standard cloud platform you’re deploying your applications on. There could be numerous ways to do this from using a widely used cloud platform like Cloud Foundry, building up trusted automation build processes, or creating your own platform and software release pipelines that are trusted by your red-tape mavens.
If you want to get staff in a government IT department ranting at you all night long, ask them about contractors. They loathe them and despise them and will tell you that they’re “killing” government IT. Their complaint is that contractors cannot structurally deal with an Agile mentality that refuses to lock-down a full list of features that will be delivered on a specific date. As you shift to not even a “DevOps mindset,” but an Agile mindset where the product team is more discovering with each iteration what the product will be and how to best implement it, you need the ability to change scope throughout the project as you learn and adapt. There is no “fail fast” (read: learning) when the deliverables 12 months out are defined in a 300 page document that took 3–6 months to scope and define.
Once again, getting into this state is likely explainable: it’s not so much that any actor is responsible, it’s more that the management in government IT departments is now responsible to fix the problem. The problem is more than a square peg (waterfall mentalities from contractors) in a round-hole (government IT departments that want to be more Agile) issue. After several decades of outsourcing to contractors, there’s also a skills and cultural gap in the IT departments. Just as custom written software is becoming strategically important to more organizations, many large IT departments find themselves with little experience and even less skill when it comes to software and product development. I hear these same complaints frequently from the private sector who’ve outsourced IT for many years, if not decades.
The Agile community has long discussed this problem and there are always interesting, novel efforts to get back to insourcing. A huge part is simply getting the terms of outsourcing agreements to be more compatible. The flip-side of this is simplifying the process to become a government contractor: it’s sure not easy at the moment. Many of the newer, more Agile and DevOps minded contractors are smaller shops that will find the prospect of working with the government daunting and, well, less profitable than working with other organizations. Making it easier for more shops to sign up will introduce more competition, rather than the limited, strangled-by-paperwork market that exists now. The current pool of government contractors seems mostly dominated by larger shops that can navigate the government procurement process and seem to, for whatever reason, be the ones who are the most inflexible and waterfall-y.
Another part is refusing to cede project management and scoping management to external parties, and making sure you have the appropriate skills in-house to do so. Finally, the management layers in both public and private sector need to recognize this as a gap that needs to be filled and start recruiting more in-house talent. Otherwise, the highly integrated state of DevOps — let alone a product focus vs. a project focus — will be very hard to achieve.
Every organization faces budget problems. We call them “unicorns” because they have this mythical quality of seemingly unlimited budget. The spiral horn-festooned are the exception that proves the rule that all organizations are expected to spend money wisely. Government, however, seems to operate in a permanent state of shrinking IT budgets. And even when government organizations experience the rare influx of cash, there’s hyper-scrutiny on how it’s spent. To me, the difference is that private sector companies can justify spending “a lot” of money if “a lot” of profit results, whereas government organizations can’t make that calculation as easily. Effectively, government IT departments have to prove that they’re spending only as much money as necessary and strategically plan to have their budget stripped down in each budgetary cycle.
Here, the Lean-think part of DevOps can actually be very helpful and, indeed, may become a core motivation for government to look to DevOps. My simplification of the goals of DevOps are to:
Those two goals end up working harmoniously together (with smaller batches of code deployed more frequently, you reduce the risk of each causing major downtime, for example). For government organizations focused on “budget,” the focus on removing as much “waste” from the system to speed up the delivery cycle starts to look very attractive for the cost-cutting minded. A well functioning DevOps shop will spend much time analyzing the entire, end-to-end cycle with value-stream mapping, stripping out all the “stupid” from the process. The intention of removing waste in DevOps think is more about speeding up the software release process and helping ensure better resilience in production, but a “side effect” can be removing costs from the system.
Often, in the private sector we say that resources (time, money, and organization attention) saved in this process can be reallocated to helping grow the business. This is certainly the case in government, where “the business” is, of course, understood not as seeking profits but delivering government services and fulfilling “mission” requirements. However, simply reducing costs by finding and removing unneeded “waste” may be a highly attractive outcome of adopting DevOps for governments.
As with any large organization, governments can be horrendous bureaucracies. Pulling out the DevOps empathy card, it’s easy to understand why people in such government bureaucracies can start to stagnate and calcify, themselves becoming grit in the gears of change if not outright monkey-wrenches.
In particular, there are two mind-sets that need to change as government staff adopt DevOps:
Again, these problems frequently happen in the private sector. But, they seem to be larger problems in government that bear closer attention. Thankfully, it seems like leaders in government know this: in a recent global Gartner survey, 40% of government CIOs said they needed to focus more on developing and communicating their vision and do more coaching. In contrast, 60% said they needed to reduce the time spent in command-and-control mode. Leading, rather than just managing, the IT department, as ever, is key to the transformative use of IT.
At any given time, it’s easy to be dismissive of government as wasteful and even incompetent. That’s the case in the U.S. at least, if you can judge by the many politicians who seem to center their political campaigns around the idea of government waste. In contrast, we praise the private sector for their ability to wield IT to…better target ads to get us to buy sugar coated corn flakes. Don’t get me wrong, I’m part of the private sector and I like my role chasing profit. But we in the “enterprise” who are busy roaming the halls of capitalism don’t often get the chance to positively affect, let alone simply help and improve the lives of, everyone on a daily basis. Government has that chance and when you speak with most people who are passionate about using IT better in government, they want to do it because they are morally motivated to help society.
The benefits of adopting DevOps have been clearly demonstrated in recent years, and for businesses we’re seeing truth in the statement that you’re either becoming a software organization or losing to someone who is. As government organizations start to think about improving how they do IT, they have the chance to help all of us; “winning” isn’t zero-sum like it can be in the business world. To that end, as we in the industry find new, better ways to create and deliver software, it behooves us to figure out how government can benefit as well. That’ll get us even closer to making software suck less, something we’ll all benefit from.
(I originally wrote this September 2015 for FierceDevOps, a site which has made it either impossible or impossibly tedious to find these articles. Hence, it’s now here.)
I’m always wanting to do a talk or write a series of items on the white-collar toolchain, or surviving in big companies. Here’s one principle about presentations in corporate settings.
Much presentation wisdom of late has revolved around the actual event of a speaker talking, giving the presentation. In a corporate setting, the actual delivery of the presentation is not the primary purpose of a presentation. Instead, a presentation is used to facilitate coming to a decision; usually you’re laying out a case for a decision you want the company to support. Once that decision is made, the presentation is often used as the document of record, perhaps being updated to reflect the decision in question better.
As a side-note, if your presentation doesn’t argue for a specific, “actionable” decision, you’re probably doing it wrong. For example, don’t just “put it all on the table” without suggesting what to do about it.
Think of presentations as documents which have been accidentally printed in landscape and create them as such. You will likely not be given the chance to go through your presentation from front to end like you would at a conference. You’ll be interrupted, go back and forth, and most importantly, end up emailing the presentation around to people who will look at it without you presenting.
You should therefore make all slides consumable without you being there. This leads to the use of McKinsey titles (titles that are one-liners explaining the point you’re making) and slides that are much denser than conference slides. The presentation should have a story-line, an opening summary of the points you want to make, and a concluding summary of what the decision should be (next steps, launching a new project, the amount needed for your budget, new markets to enter, “and therefore we should buy company X,” etc.).
This also gives rise to “back-up” slides which are not part of the core story-line but provide additional, appendix-like information for reference both during the presentation meeting and when others look at the presentation on their own. You should also put extensive citations in footnotes with links so that people consuming the presentation can fact check you; bald claims and figures will be defeated easily, nullifying your whole argument to come to your desired decision.
Also remember that people will take your slides and use them in other presentations; this is fine. And, of course, if successful, your presentation will likely be used as the document of record for what was decided and what the new “plan” was. It will be emailed to people who ask what the “plan” is and it must be able to communicate that accordingly.
Remember: in most corporate settings, a presentation is just a document that has been printed in landscape mode.