Satisfying the mythical auditors is often one of the first barriers to spreading DevOps initiatives more widely inside an organization. While these process-driven barriers can be annoying and onerous, once you follow the DevOps tradition of empathetic inclusion — being all “one team” — they can not only stop slowing you down but actually improve the overall quality of the product. Indeed, the very reason these audit checks were introduced in the first place was to ensure the overall quality of the software and business. There are some excellent, exhaustive overviews out there of dealing with audits and the like in DevOps. In this column, I wanted to go through a little mental re-orientation for how to start thinking about and approaching the “compliance problem.”
In this context, I think of “auditors” as falling into the category of governance, risk and compliance (GRC) — any function that acts as a check on code, and on how that code is produced and run, as it goes through its lifecycle. I would put security in here as well, though that tends to be such a broad, important topic that it often warrants its own category (and the security people seem to like maintaining their occultic silo-tude, anyhow).
The GRC function(s) may impose self-created policies (like code and architectural review), third-party and government-imposed regulations (like industry standard compliance and laws such as HIPAA), and verification that risky behavior is being avoided (for example, if you write the code, you can’t be the same person who then uses that code for cash payouts, perhaps to yourself). In all cases, “compliance” is there to ensure the overall quality of the product and the process that created it. That “quality” may be the prevention of malicious and undesired behavior; that is, in a compliance-driven software development mindset, the ends rarely justify the means.
In many cases, the GRC function is more interested in proof that there is a process in place than actually auditing each execution of that process. This is a curious thing at first. Any developer knows that the proof is in the code, not the documentation. And, indeed, for some types of GRC the amount of automation that a DevOps mindset puts into place could likely improve the quality of GRC, ironically.
Indeed, automation is one of the first areas to look at when reducing DevOps/GRC friction. First, treat complying with policies as you would any other feature. Describe it, prioritize it and track it. Once you have gotten your hands around it, you can start to figure out how best to implement that “feature.” Ideally, you can code and automate your way out of having to do too much manual work.
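To make that concrete, here’s a minimal sketch of treating a policy as an automated check rather than a manual review step. Everything here is hypothetical — the policy names, the deploy-descriptor shape, and the approved image list are invented just to show the shape of the idea:

```python
# Minimal sketch of "compliance as a feature": encode each policy as a
# small, automated check and run the set like any other test suite.
# The policy names and the deploy-descriptor fields are hypothetical.

def check_encryption_at_rest(deploy):
    # Policy: all persistent storage must be encrypted.
    return deploy.get("storage_encrypted", False)

def check_approved_base_image(deploy):
    # Policy: only vetted base images may be deployed (example names).
    approved = {"hardened-ubuntu-22.04", "hardened-alpine-3.19"}
    return deploy.get("base_image") in approved

POLICY_CHECKS = {
    "encryption-at-rest": check_encryption_at_rest,
    "approved-base-image": check_approved_base_image,
}

def run_compliance_checks(deploy):
    """Return a dict of policy name -> pass/fail, suitable as a CI gate."""
    return {name: check(deploy) for name, check in POLICY_CHECKS.items()}

if __name__ == "__main__":
    deploy = {"storage_encrypted": True, "base_image": "hardened-alpine-3.19"}
    print(run_compliance_checks(deploy))
```

The point is less the specific checks than the posture: once a policy is a function in version control, it can be described, prioritized, and tracked like any other feature.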
There’s work being done in the US Federal government along these lines that’s helpful because it’s visible and at scale. First, as covered in a recent talk by Diego Lapiduz, part of what auditors are looking for is to trust the software and infrastructure stack that apps are running on. This is especially true from a security standpoint. The current way that software is spec’d out and developed in most organizations follows a certain “do whatever,” or even YOLO, principle. App teams are allowed to specify which operating systems, orchestration layers and middleware components they want. This may be within an approved list of options, but more often than not it results in unique software stacks per application.
As outlined by Diego, this variation in the stack meant that government auditors had to review just about everything, taking up to months to approve even the simplest line of code. To solve this problem, 18F standardized on one stack — Cloud Foundry — to run applications on, not allowing for variance at the infrastructure layer. They then worked with the auditors to build trust in the platform. Then, when there was just the metaphoric or literal “one line of code” to deploy, auditors could focus on much less, certainly not the entire stack. This brought approval time down to just days. A huge speed up.
When it comes to all the paperwork, also look to ways to automate the generation of the needed listings of certifications and compliance artifacts. This shouldn’t be a process that’s done in opaque documents, nor manually, if at all possible. Just as we’d now recoil in horror at manually deploying software into production, we should try to achieve “compliance as code” that’s as autogenerated (but accurate!) as possible. To that end, the work being done in the OpenControl project is showing an interesting and likely helpful approach.
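As an illustration of the “compliance as code” idea, here’s a sketch that regenerates a compliance summary from structured control data on every build, instead of hand-maintaining an opaque document. The control entries are invented and only loosely inspired by OpenControl-style data; this is not the project’s real schema:

```python
# Sketch of auto-generating a compliance artifact from structured control
# data. Control IDs, descriptions, and verification notes are illustrative.

controls = [
    {"id": "AC-2", "name": "Account Management",
     "implementation": "Accounts are provisioned via the identity provider.",
     "verified_by": "automated nightly audit job"},
    {"id": "AU-2", "name": "Audit Events",
     "implementation": "All deploys emit signed events to the central log store.",
     "verified_by": "log-completeness check in CI"},
]

def render_compliance_doc(controls):
    """Render a plain-text compliance summary, regenerated on every build."""
    lines = ["Compliance Summary", "=================="]
    for c in controls:
        lines.append(f"{c['id']} {c['name']}")
        lines.append(f"  Implementation: {c['implementation']}")
        lines.append(f"  Verified by: {c['verified_by']}")
    return "\n".join(lines)

print(render_compliance_doc(controls))
```

Because the source data lives next to the code, the “paperwork” stays accurate as the system changes, the same way generated API docs do.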
The lesson for DevOps teams here is clear: Standardize your stack as much as possible and work with auditors to build their trust in that platform. Also, look into how you can automate the generation of compliance documents beyond the usual .docx and .pptx suspects. This will help your GRC process move at DevOps speed. And it will also allow your auditors to still act as a third party governing your code. They’ll probably even do a better job if they have these new, smaller batches of changes to review.
To address the compliance issue fully, you’ll need to start working with the actual compliance stakeholders directly to change the process. There’s a subtle point right there: Work with the people responsible for setting compliance, not those responsible for enforcing it, like IT. All too often, people in IT will take the strictest view of compliance rules, which results in saying “no” to virtually anything new — coupled with Larman’s Law, you’ll soon find that, mysteriously, nothing new ever happens and you’re back to the pre-DevOps speed of deployment, software quality levels and timelines. You can’t blame IT staff for being unimaginative here — they’re not experts in compliance and it’d be risky for them to imagine “workarounds.” So, when you’re looking to change your compliance process, make sure you’re including the actual auditors and policy setters in your conversations. If they’re not “in the room,” you’re likely wasting your time.
As an example, one of the common compliance problems is around “developers deploying to production.” In many cases and industries, a separation of duties is required between coding and deploying. When deploying code to production was a more manual, complicated process, this could be extremely onerous. But once deployments are push-button automated with a good continuous delivery pipeline, you might consider having the product manager or someone who hasn’t written code be the deployer. This ensures that you can “deploy at will,” but keeps the actual coders’ fingers off the button.
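A sketch of how that separation-of-duties rule could be enforced as an automated pipeline gate: the person pressing the deploy button must not appear among the authors of the commits being shipped. The names and the commit structure here are made up for illustration:

```python
# Hypothetical separation-of-duties gate for a CD pipeline: the deployer
# may not be an author of any commit in the release being deployed.

def can_deploy(deployer, commits):
    """Allow the deploy only if the deployer wrote none of the commits."""
    authors = {c["author"] for c in commits}
    return deployer not in authors

commits = [
    {"sha": "a1b2c3", "author": "dev-alice"},
    {"sha": "d4e5f6", "author": "dev-bob"},
]

assert can_deploy("pm-carol", commits) is True    # non-author may deploy
assert can_deploy("dev-alice", commits) is False  # author may not ship own code
```

In a real pipeline this check would read the commit log from version control and block the deploy step, producing an audit trail as a side effect.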
Another intriguing compliance strategy, suggested by Home Depot’s Tony McCulley (who also suggested the above approach to the separation of duties), is to give GRC staff access to your continuous delivery process and deployment environment. This means that instead of having to answer questions and check for controls on their behalf, you can allow GRC staff to just do it on their own. Effectively, you’re letting GRC staff peer into, and even help out with, controls in your software. I’d argue that this only works if you have a well-structured platform supporting your CD pipeline, with good UIs that non-technical staff can access.
It might be a bit of a stretch, but inviting your GRC people into your DevOps world, especially early on, may be your best bet at preventing compliance slowdowns. And, if there’s any core lesson of DevOps, it’s that the real problems are not in the software or hardware, but the meatware. Figuring out how to work better with the people involved will go a long way towards addressing the compliance problem.
(I originally wrote this December 2015 for FierceDevOps, a site which has made it either impossible or impossibly tedious to find these articles. Hence, it’s now here.)
Working at home, with a family, is a challenge, as this nice overview piece at The Register goes over. You think you’re trading all those interruptions from co-workers talking about the sportsball or just complaining about the daily grind, but you’re actually trading in for a different set of co-workers, your family. And their requests for your attention are harder to stonewall than chatty cube-mates.
And then there’s the whole “out of sight, out of mind” effect with management at work. I’ve worked at home on and off (mostly at home) over the past decade and it has its challenges. I lead a public enough work life, along with remote-work-aware colleagues, that Management forgetting about me rarely comes up. However, as my kids have grown up and there’s, consequently, more going on at home, figuring out how to shut out my family is a constant challenge. You see, that’s the taboo part! “Shut out” - you could say “manage” or all sorts of things, but if you follow the maker/manager mentality that most individual contributor (non-manager) knowledge workers do, you have to shut people (“distractions”) out.
On the other hand, this “flow” is a luxury us privileged folks have been experiencing for a long time:
What I didn’t know at the time was that this is what time is like for most women: fragmented, interrupted by child care and housework. Whatever leisure time they have is often devoted to what others want to do – particularly the kids – and making sure everyone else is happy doing it. Often women are so preoccupied by all the other stuff that needs doing – worrying about the carpool, whether there’s anything in the fridge to cook for dinner – that the time itself is what sociologists call “contaminated.”
I came to learn that women have never had a history or culture of leisure. (Unless you were a nun, one researcher later told me.) That from the dawn of humanity, high status men, removed from the drudge work of life, have enjoyed long, uninterrupted hours of leisure. And in that time, they created art, philosophy, literature, they made scientific discoveries and sank into what psychologists call the peak human experience of flow.
Women aren’t expected to flow.
It’s like there’s a maker/manager/_mother_ time management paradigm. (Speaking of that privilege: here I am, with time to type this very post.)
What I’ve been doing is trying to reprogram my mind to think in slices of time fragments and to gorge on 60-minute time spans when they come up. I recall learning that one of the reasons Nietzsche wrote so many aphorisms was because he didn’t have time to write longer pieces; his chronic illnesses (whatever they were) gave him little “flow” time.
When I shifted to work at Dell and was on the road at 451 Research, I was similarly afflicted with fragmented time (at Dell, you’d be in meetings all day because that’s how things ran). I remember one time when I was at 451 Research, I’d been trying to finish a piece on SUSE and was walking down a ponderously long casino hallway: I just stopped, pulled out my laptop, and started typing for about ten minutes. Finding those little slices that add up to a full 90 to 120 minutes is hard…but, at least with non-programming knowledge work, you can get over the tax of context switching enough to make it worth it.
However, this is all within a larger context: the computer. All of that partial attention swapping on the Internet over these years has helped warp my brain to work in fragments, but now I need to train my mind to swap between computer and “real life.” So far, it’s slow going.
All of this said, I really value working from home. I enjoy seeing my kids and wife all day long (so much more so than all those random run-ins with people in the office). I like being in my own environment, being able to eat at home, and, on those rare occasions when I’m in a boring, useless, but obligatory meeting, doing something more useful with my time as I listen in. I have one of the better situations I’ve ever had at work right now: everyone on my team, including my boss, is remote. This means we all know the drill, use the tools, and coordinate.
As my wife is fond of telling me, I should just lock my office door more, which is true. The other part that you, as a remote worker, have to program your brain for is: you’re going to be interrupted while you’re in “flow” a lot. Just accept it. In the office there’s plenty of fire-alarms, going to lunch, people stopping by your desk, and so on. We can’t all be on the flat food diet. My other bit of advice is to take advantage of being at home and a flexible work schedule to do more with your family. If you’re like me, you travel a fair amount as well. So just as I have to gobble up every long span of time greedily, when I’m home and have the chance to do things with family, I try to.
There’s just as much pull for DevOps in government as there is in the private sector. While most of our focus around adoption is on how businesses can and are using DevOps and continuous delivery, supported by cloud, to create better software, many government agencies are in the same position and would benefit greatly from figuring out how to apply DevOps in their organizations.
Just 13% of respondents in a recent MeriTalk/Accenture survey of 152 US Federal IT managers believed they could “develop and deploy new systems as fast as the mission requires.” The impact of improving on that could be huge. For example, the US Federal government, by conservative estimates, spends $84 billion a year on IT. And yet, the Standish Group believes that 94% of government IT projects fail. These are huge numbers that, with even small improvements, can have massive impact. And that’s before even considering the benefits of simply improving the quality of software used to provide government services.
As with any organization, the first filter for applicability is whether or not the government organization is using custom-written software to accomplish its goals. If all the organization is doing is managing desktops, mobile, and packaged software, it’s likely that just SaaS and BYOD are the important areas to focus on. DevOps doesn’t really apply unless there’s software being written and deployed in your organization or, as is more common in government agencies, for your organization, as we’ll get to when we discuss “contractors.”
When it comes to adopting and being successful with DevOps, the game isn’t too different than in the business world: much of the change will have to do with changing your organization’s process and “culture,” as well as adopting new tools that automate much of what was previously manual. You’ll still need to actually take advantage of the feedback loop that helps you improve the quality of your software, in respect to defect, performance in production, and design quality. There are a few things that tend to be more common in government organizations that bear some discussion: having to cut through red-tape, dealing with contractors, and a focus on budget.
While “enterprise” IT management tasks can be onerous and full of change review boards and process, government organizations seem to have mastered the art of paperwork, three ring binders, and red tape in IT. As an example, in the US Federal government, any change needs to achieve “Authority To Operate” which includes updating the runbook covering numerous failure conditions, certifying security, and otherwise documenting every aspect of the change in, to the DevOps minded, infinitesimal detail. And why not? When was the last time your government “failed fast” and you said “gosh, I guess they’re learning and innovating! I hope they fail again!” No, indeed. Governments are given little leash for failure and when things go terribly wrong, you don’t just get a tongue lashing from your boss, but you might get to go talk to Congress and not in the fun, field-trip how a bill is made kind of way. Being less cynical, in the military, intelligence, and law enforcement parts of government, if things go wrong more terrible things than denying you the ability to upload a picture of your pot roast to Instagram can happen. It’s understandable — perhaps, “explainable” — that government IT would be wrapped up in red-tape.
However, when trying to get the benefits of continuous delivery, DevOps, and cloud (or “cloud native” as that triptych of buzzwords is coming to be known), government organizations have been demonstrating that the comforting mantle of red-tape can be stripped. For example, in the GSA, the 18F group has reduced the time it takes to get a change through from 9–14 months to just two to three days.
They achieved this because now when they deploy applications on their cloud native platform (a Cloud Foundry instance that they run on Amazon Web Services), they are only changing the application, not the whole stack of software and hardware below the application layer. This means they don’t need to re-certify the middleware, runtimes and development frameworks, let alone the entire cloud platform, operating systems used, networking, hardware, and security configurations. Of course, the new lines of application code need to be checked, but because they’re following the small batch principles of continuous delivery, those net-new lines are few.
The lesson here is that you’ll need to get your change review process — the red-tape spinners — to trust the standard cloud platform you’re deploying your applications on. There could be numerous ways to do this from using a widely used cloud platform like Cloud Foundry, building up trusted automation build processes, or creating your own platform and software release pipelines that are trusted by your red-tape mavens.
If you want to get staff in a government IT department ranting at you all night long, ask them about contractors. They loathe them and despise them and will tell you that they’re “killing” government IT. Their complaint is that contractors cannot structurally deal with an Agile mentality that refuses to lock down a full list of features that will be delivered on a specific date. As you shift to not even a “DevOps mindset,” but an Agile mindset where the product team is more discovering with each iteration what the product will be and how to best implement it, you need the ability to change scope throughout the project as you learn and adapt. There is no “fail fast” (read: learning) when the deliverables 12 months out are defined in a 300 page document that took 3–6 months to scope and define.
Once again, getting into this state is likely explainable: it’s not so much that any single actor is responsible; it’s more that the management in government IT departments is now responsible for fixing the problem. The problem is more than a square peg (waterfall mentalities from contractors) in a round hole (government IT departments that want to be more Agile) issue. After several decades of outsourcing to contractors, there’s also a skills and cultural gap in the IT departments. Just as custom written software is becoming strategically important to more organizations, many large IT departments find themselves with little experience and even less skill when it comes to software and product development. I hear these same complaints frequently from private sector companies that have outsourced IT for many years, if not decades.
The Agile community has long discussed this problem and there are always interesting, novel efforts to get back to insourcing. A huge part is simply getting the terms of outsourcing agreements to be more compatible. The flip side of this is simplifying the process of becoming a government contractor: it’s sure not easy at the moment. Many of the newer, more Agile and DevOps-minded contractors are smaller shops that will find the prospect of working with the government daunting and, well, less profitable than working with other organizations. Making it easier for more shops to sign up will introduce more competition, rather than the smaller, strangled-by-paperwork market that exists now. The current pool of government contractors seems mostly dominated by larger shops that can navigate the government procurement process and seem to, for whatever reason, be the ones who are the most inflexible and waterfall-y.
Another part is refusing to cede project management and scoping management to external parties, and making sure you have the appropriate skills in-house to do so. Finally, the management layers in both public and private sector need to recognize this as a gap that needs to be filled and start recruiting more in-house talent. Otherwise, the highly integrated state of DevOps — let alone a product focus vs. a project focus — will be very hard to achieve.
Every organization faces budget problems. The exceptions are the “unicorns,” so called in part because they have this mythical quality of seemingly unlimited budget; the spiral-horn-festooned are the exception that proves the rule that all organizations are expected to spend money wisely. Government, however, seems to operate in a permanent state of shrinking IT budgets. And even when government organizations experience the rare influx of cash, there’s hyper-scrutiny on how it’s spent. To me, the difference is that private sector companies can justify spending “a lot” of money if “a lot” of profit results, whereas government organizations can’t make such calculations as easily. Effectively, government IT departments have to prove that they’re spending only as much money as necessary and strategically plan to have their budget stripped down in each budgetary cycle.
Here, the Lean-think part of DevOps can actually be very helpful and, indeed, may become a core motivation for government to look to DevOps. My simplification of the goals of DevOps is to:

1. Deploy software more frequently, in smaller batches.
2. Keep that software resilient and highly available in production.
Those two goals end up working harmoniously together (with smaller batches of code deployed more frequently, you reduce the risk of each causing major downtime, for example). For government organizations focused on “budget,” the focus on removing as much “waste” from the system to speed up the delivery cycle starts to look very attractive for the cost-cutting minded. A well functioning DevOps shop will spend much time analyzing the entire, end-to-end cycle with value-stream mapping, stripping out all the “stupid” from the process. The intention of removing waste in DevOps think is more about speeding up the software release process and helping ensure better resilience in production, but a “side effect” can be removing costs from the system.
Often, in the private sector, we say that resources (time, money, and organizational attention) saved in this process can be reallocated to helping grow the business. This is certainly the case in government, where “the business” is, of course, understood not as seeking profits but delivering government services and fulfilling “mission” requirements. However, simply reducing costs by finding and removing unneeded “waste” may be a highly attractive outcome of adopting DevOps for governments.
As with any large organization, governments can be horrendous bureaucracies. Pulling out the DevOps empathy card, it’s easy to understand why people in such government bureaucracies can start to stagnate and calcify, themselves becoming grit in the gears of change if not outright monkey-wrenches.
In particular, there are two mind-sets that need to change as government staff adopt DevOps:
Again, these problems frequently happen in the private sector. But they seem to be larger problems in government that bear closer attention. Thankfully, it seems like leaders in government know this: in a recent global Gartner survey, 40% of government CIOs said they needed to focus more on developing and communicating their vision and do more coaching. In contrast, 60% said they needed to reduce the time spent in command-and-control mode. Leading, rather than just managing, the IT department is, as ever, key to the transformative use of IT.
At any given time, it’s easy to be dismissive of government as wasteful and even incompetent. That’s the case in the U.S. at least, if you can judge by the many politicians who seem to center their political campaigns around the idea of government waste. In contrast, we praise the private sector for their ability to wield IT to…better target ads that get us to buy sugar-coated corn flakes. Don’t get me wrong, I’m part of the private sector and I like my role chasing profit. But we in the “enterprise” who are busy roaming the halls of capitalism don’t often get the chance to positively affect, let alone simply help and improve the lives of, everyone on a daily basis. Government has that chance, and when you speak with most people who are passionate about using IT better in government, they want to do it because they are morally motivated to help society.
The benefits of adopting DevOps have been clearly demonstrated in recent years, and for businesses we’re seeing truth in the statement that you’re either becoming a software organization or losing to someone who is. As government organizations start to think about improving how they do IT, they have the chance to help all of us; “winning” isn’t zero-sum like it can be in the business world. To that end, as we in the industry find new, better ways to create and deliver software, it behooves us to figure out how government can benefit as well. That’ll get us even closer to making software suck less, something we’ll all benefit from.
(I originally wrote this September 2015 for FierceDevOps, a site which has made it either impossible or impossibly tedious to find these articles. Hence, it’s now here.)
I’m always wanting to do a talk or write a series of items on the white-collar toolchain, or surviving in big companies. Here’s one principle about presentations in corporate settings.
Much presentation wisdom of late has revolved around the actual event of a speaker talking, giving the presentation. In a corporate setting, though, the actual delivery of the presentation is not its primary purpose. Instead, a presentation is used to facilitate coming to a decision; usually you’re laying out a case for a decision you want the company to support. Once that decision is made, the presentation is often used as the document of record, perhaps being updated to better reflect the decision in question.
As a side-note, if your presentation doesn’t argue for a specific, “actionable” decision, you’re probably doing it wrong. For example, don’t just “put it all on the table” without suggesting what to do about it.
Think of presentations as documents which have been accidentally printed in landscape and create them as such. You will likely not be given the chance to go through your presentation from start to finish like you would at a conference. You’ll be interrupted, go back and forth, and, most importantly, end up emailing the presentation around to people who will look at it without you presenting.
You should therefore make all slides consumable without you being there. This leads to the use of McKinsey titles (titles that are one-liners explaining the point you’re making) and slides that are much denser than conference slides. The presentation should have a story-line, an opening summary of the points you want to make, and a concluding summary of what the decision should be (next steps, launching a new project, the amount needed for your budget, new markets to enter, “and therefore we should buy company X,” etc.).
This also gives rise to “back-up” slides which are not part of the core story-line but provide additional, appendix-like information for reference, both during the presentation meeting and when others look at the presentation on their own. You should also put extensive citations in footnotes with links so that people consuming the presentation can fact-check you; bald claims and figures will be defeated easily, nullifying your whole argument for coming to your desired decision.
Also remember that people will take your slides and use them in other presentations; this is fine. And, of course, if successful, your presentation will likely be used as the document of record for what was decided and what the new “plan” was. It will be emailed to people who ask what the “plan” is and it must be able to communicate that accordingly.
Remember: in most corporate settings, a presentation is just a document that has been printed in landscape mode.
What’s the point of it all? Why are we doing this? These questions pop up frequently in IT teams where the reason for doing your daily activities — like churning through tickets, whizzing up builds, or “doing the DevOps” — seems only that someone, somewhere told you to do it.
If you’re in this situation — you have no idea how your activities are helping your organization make money — you should stop and find out quickly what your company’s goals and strategies are to make sure you’re not wasting time. The good news is the confusion is probably not your fault; the bad news is that you’ll have to convince management that the fault is theirs.
The adoption of things like DevOps or the cloud sometimes happens for wrong or unknown reasons — gratuitous plans without a tight connection to business goals. We used to call this “management by magazine,” and it happens now more than ever. A process — even “cultural” — change like DevOps is not like the easy improvement fodder of virtualization. But you can’t blame IT management for trying gratuitous optimization by technology. The magic of VMware was that you just installed it, and things got better because it improved resource utilization. You didn’t need to figure out why or match it to goals. If you inject DevOps into an organization expecting it to just improve things without tightly coupling to strategy, you’ll get weird results. You’ll probably just create more work!
Agile, DevOps, and now “cloud native” (I hope you’re updating your buzzword lexicons!) need strong connections to the business goals — some would say “strategy” — to be successful. In order to operate in a lean fashion, you want to only do things that are valuable and useful to the customer (or obligatory to stay in business, like compliance and auditability). Indeed, being able to sort out what’s valuable and useful to the business is a key tool for doing DevOps successfully. You want to cut out all the stuff that doesn’t matter, or at least minimize it. Otherwise, you just sort of do everything and anything because there’s no way to determine if any given activity is helpful.
So how do you align your work with the overall business strategy?
There are tried and true (though seemingly new to the IT department) techniques like value-stream mapping: take any given business process and map out all the activities that happen from end-to-end, questioning if each is needed. Most people are shocked at how much “stupid” is going on in such maps and it’s a great technique for finding and removing bottlenecks.
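For a feel of what a value-stream map surfaces, here’s a toy calculation of “flow efficiency,” active work time as a share of total elapsed time. The step names and hour figures are invented, but the typical finding holds: most elapsed time is waiting, not working:

```python
# Back-of-the-envelope value-stream math. For each step, record active
# work time vs. wait time, then compute flow efficiency. All numbers
# below are made up for illustration.

steps = [
    {"step": "write code",      "work_hours": 16, "wait_hours": 0},
    {"step": "code review",     "work_hours": 2,  "wait_hours": 24},
    {"step": "change approval", "work_hours": 1,  "wait_hours": 80},
    {"step": "deploy",          "work_hours": 1,  "wait_hours": 4},
]

def flow_efficiency(steps):
    """Value-adding time divided by total elapsed (work + wait) time."""
    work = sum(s["work_hours"] for s in steps)
    total = work + sum(s["wait_hours"] for s in steps)
    return work / total

print(f"Flow efficiency: {flow_efficiency(steps):.0%}")
```

Even this crude arithmetic makes the bottleneck obvious: in the example, the “change approval” queue dwarfs all the actual work, which is exactly the kind of “stupid” a value-stream map is meant to expose.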
If you’re in the consumer business — like so many “unicorns” are — it’s easy to understand the mission and the goals: get more people buying books, downloading your app, streaming more videos, and so forth. But in other, more traditional settings, it’s common to find a willful disentanglement between how IT is used and how it contributes to customer value. More often than not, the stasis-inducing lull of time and success just numbs people’s collective minds and sets them into auto-pilot here.
You see this happen most often around decision-making processes in business: things that need approval, planning processes and market assessments. People in large companies love cogitating and wrapping process around activities that cause change in the company; it feels like they almost like to slow down change and activity. You might even end up codifying a whole process, with change review board meetings and careful inspection of all the changed components by a panel of architectural and security audit wizards.
You can also identify where your processes aren’t matching with business goals and strategies by cultivating squeaky wheels.
When change happens, individuals often pipe up asking, “Why are we doing this? Why is this valuable to the customer?” More than likely, they’re seen as troublemakers or sand in the gears, and are shut down by the group, Five Monkeys style. At best, these individuals cope with learned helplessness; at worst, they leave, kicking off a sort of Idiocracy effect in the remaining organization.
These “complainers” are actually a valuable source of data for testing out how well understood a company’s goals and strategies are. You want to court these types of people to continually test out how effective the organization is at setting goals and strategy. One fun practice, as mentioned by Ticketmaster’s Jody Mulkey, is to interview new employees a month after starting to ask them what seems “screwy around here” before they get used to it.
So what do you do when they or any other process you’ve tried identify real disconnects between what you’re doing and why? The fun begins — because it’s management’s job to fix this bug. The role of mid- and upper-level management in the cloud native era is poorly understood and documented (it’s always been so, of course, in creative-driven endeavors like software). To be successful at these types of initiatives, management has a lot of work to do and the managers who are overseeing DevOps teams can’t assume things will just proceed as normal. This is why, as with software, you need to continually test the assumption that people know the business goals and strategy.
This point has been stuck in my brain after reading Leading the Transformation (an excellent book for managers figuring out how to apply DevOps “in the large”), which states the point more plainly than I can:
Management needs to establish strategic objectives that make sense and that can be used to drive plans and track progress at the enterprise level. These should include key deliverables for the business and process changes for improving the effectiveness of the organization.
What I like about this advice (and the rest in the book) is that it’s geared to defining management’s job in all this DevOps hoopla. In said hoopla, we spend a lot of time on what the team does, but we don’t spend too much time on what management should do. It’s lovely thinking about flattening the organization and having everyone act as responsible peers, but in large organizations, this isn’t done easily or quickly. Just as with all those zippy containers, you need an orchestration layer to make sense of all the DevOps teams working together. That’s the job of management in the cloud native era: setting goals and orchestrating all the teams to make sure they’re all pulling in the right direction.
(I originally wrote this August 2015 for FierceDevOps, a site which has made it either impossible or impossibly tedious to find these articles. Hence, it’s now here.)
Think you can show DevOps ROI? Think again
“What is the ROI for DevOps?” is a question that has been tossed my way frequently of late. There are numerous reasons why this is at the same time an absurd but also important question.
Modeling DevOps ROI is absurd because predicting the gains and costs of a process, let alone one as new as DevOps, is difficult and dependent on all sorts of unique variables per organization.
However, thinking through DevOps ROI is an important step for adoption because the promises of DevOps are so grandiose and the changes needed sound large and almost impossible to achieve for “normal” people.
That is, DevOps is an unmeasurable process with respect to ROI (it has value, to be sure, but is nearly impossible to measure independently and precisely) and, yet, because “doing DevOps” seems to be such a big change, organizations need assurances that transformation will be “worth it.”
So, if you’re asked to help show the ROI for DevOps, what can you do? Let’s cover three ways to approach the problem. I don’t think any of them are a real answer, but they get closer to satisfying some possible motivations for asking for ROI in the first place.
First, what is ROI? I misuse economic and accounting terms all the time, but I think of “ROI” as showing the profit you achieve after a given period of time, for our purposes, by doing something new and different with IT: you might buy some new software (running on a cloud platform like Pivotal Cloud Foundry instead of just IaaS), do your software development and delivery differently (like, “doing DevOps”), and so forth.
With ROI, you’re not only interested in the question “does it work,” you’re interested in the question “did this make me money?” Oftentimes, you’re also interested in comparing the costs of competing approaches, or just inflicting vendors with the thrill of “bake offs” and ROI spreadsheet fights.
To figure that basic ROI, you use a brutally simple formula:
(Gain - Cost) / Cost = ROI
You can convert the end result to a percentage if you’re not into the whole decimal thing.
As a simple example, let’s say you sell an app that allows people to track how many apples they eat each day, so they can keep those ravenous doctors out of the way. After it’s shipped for a month, you’ve made $20,000 in sales for the app. To get to that point, it cost you $5,000 in developer time and $5,000 in infrastructure charges (the back-end that analyzes the data, mashes it up with Facebook and Twitter profiles, and then sells that data to the Apple Sellers Association of Tomorrow takes some horse-power and storage!).
So, the ROI for the apple muncher app is:
($20,000 - $10,000) / $10,000 = 100%
A pretty good return on your investment! It’s certainly better than the rate I get on any of my personal investments.
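For the spreadsheet-inclined, the formula and the apple muncher example above can be sketched as a tiny helper (the numbers are the article’s own; the function name is my invention):

```python
def roi(gain, cost):
    """Basic return on investment: (gain - cost) / cost."""
    return (gain - cost) / cost

# The apple muncher app: $20,000 in sales against $10,000 in total
# costs ($5,000 developer time + $5,000 infrastructure).
result = roi(gain=20_000, cost=10_000)
print(f"{result:.0%}")  # → 100%
```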
So, what would be the ROI of introducing DevOps to that process? More importantly, how could you predict it? There are many ways to answer the ROI question, including the favorite “that’s a bad question, you shouldn’t want that” which can take on all sorts of subtle and helpful forms. Let’s look at three possible approaches.
If you have clear inputs and outputs — your gains and costs — then things can be realistically simple. This is the favorite approach of ROI spreadsheets: they’ll cost out software license costs, hardware/IaaS costs, and people costs (employees and consultants).
Once you’ve figured out costs, you need to estimate what your gains will be: either based on historic run rates, or, more likely, on a mixture of a prediction and hope for how much you’ll make in the future. Tracking the demand for software can be hard and this estimate is one of the most dangerous parts of this simple method. If all you want to do is track the ROI for saving money, perhaps things are a little easier. And while this implies that you’re not looking to DevOps to support a revenue growth strategy, perhaps that’s good input: if you’re not looking to grow your business, maybe it’s not right for you and will have negative ROI.
You then have to pick a period of time to snap-shot and you just run the math.
Of course, few, if any, of the things you’re costing out here are “DevOps.” You might spend money on a commercial continuous integration tool, on a cloud platform or a DevOps consultant. You’ll certainly spend money on people…but you didn’t really spend money on “doing DevOps.”
You might be tempted to simply ascribe gains to DevOps. “For this release, we were doing DevOps, and we made $30,000 with apple muncher! DevOps brought us $10,000 in new revenue.” But that doesn’t feel right.
Still, if you have a good handle on the costs during some period of time where you were doing DevOps and the gain that resulted from that period of time, you could come up with a bottoms-up ROI analysis. I think it’ll be somewhat dicey since it’s so hard to attribute costs and gains directly to DevOps but, hey, it’s better than either telling people they’re asking the wrong question or its mute cousin: nothing.
As you might be teasing out, one of the problems with ROI is that it doesn’t really take time into account. You need to draw clear lines around the time period in which you’re including the factors that create your gains and costs. (If you’re interested in an approach that does take time into account, check out Rex Morrow’s suggestion to use IRR instead of ROI.)
Using this lack of time problem as a generative constraint, you could instead study the ROI of changing to DevOps. What did switching over to DevOps cost us? What did it cost us compared to maintaining our current process state?
Here, you’re taking whatever your regular ROI calculation is and just adding the one-time cost of time and money it took to change to DevOps. Figuring out what your gain is will be problematic. Again, what you’ll be gaining are new capabilities (to deliver software faster and increase your uptime in production); how those contribute to gains is still left as a mysterious exercise to the reader.
Still, if you want to run the numbers on something like “they tell me it will take three months and $50,000 in training and consultants to ‘do the DevOps’” this might satisfy your ROI craving. Again, you’ll need to have a pre-existing ROI at hand to simply plug your DevOps costs into.
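To make the “DevOps is all cost” framing concrete, here’s a minimal sketch that folds a one-time transition cost into the earlier formula. The $50,000 training-and-consulting figure comes from the article; everything else (function name, the reuse of the apple muncher numbers) is an assumption for illustration:

```python
def roi_with_transition(gain, running_cost, transition_cost):
    """ROI with a one-time cost of switching to DevOps folded in."""
    total_cost = running_cost + transition_cost
    return (gain - total_cost) / total_cost

# Assumed: the apple muncher's $20,000 gain and $10,000 running costs,
# plus a hypothetical $50,000 to "do the DevOps."
print(f"{roi_with_transition(20_000, 10_000, 50_000):.0%}")  # → -67%
```

Over a single month the switch looks like a loss, which is exactly why the time period you pick matters so much in these calculations.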
In the “DevOps is all cost” ROI scenario, we avoided ascribing gain to changing to DevOps. Again, while this is overly simplified, the deliverables of DevOps are to provide a continuous delivery process for your product and ensure that your product has excellent uptime (that is, “it works”). How could you account for the gain of those two desirables? You could create a way of assigning value to the knowledge you gain from weekly iterations about how to improve your product. You could also calculate the savings from avoided downtime.
It’s fun to model-out placing value on the first part, “knowing,” but most people asking for ROI will likely look at that as a “soft” metric and, therefore, not really useful to their “hard”-centric minds. Including money saved (or generated?) by avoiding downtime could be interesting in a point in time (if a trading system goes down, money is lost when no one can trade), but how do you account for it ongoing?
The issue with including DevOps in this “easy” type of ROI calculation is figuring out how much gain and cost to attribute to DevOps.
As with uptime, sometimes it can be easy: before we did DevOps, the system was down two hours a day; now it’s only down five to 10 minutes a day, if at all. If there’s a pain you’re seeking to remove, then perhaps this model will work.
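One hedged way to put a dollar figure on that uptime improvement is to price each minute of downtime avoided. All of these numbers except the before/after downtime are assumptions for illustration:

```python
# Downtime figures from the example above; the per-minute cost is assumed.
minutes_down_before = 120        # two hours a day
minutes_down_after = 10          # five to 10 minutes a day
cost_per_minute_down = 100       # assumed: $100 per minute of outage

daily_savings = (minutes_down_before - minutes_down_after) * cost_per_minute_down
print(daily_savings)  # 11000 (dollars per day)
```

The hard part, as noted above, is deciding how long to keep counting those savings.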
Your pain might also be “it takes us too long to deliver software,” which is a common problem for DevOps adopters. If you know how to measure the gain of time to market, for example, then you can do one of these bottoms-up ROI cases: “We were able to deliver a third release of apple muncher in two weeks instead of the six it had been taking. This means we could start charging for the new in-app purchases sooner, gaining us $5,000 more over that two week period.”
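The time-to-market case above can be sketched the same way: attribute a weekly revenue figure to the new features and multiply by the weeks you shipped early. The cycle times come from the article; the per-week revenue split is my assumption:

```python
# Shipping in 2 weeks instead of 6 puts the new in-app purchases
# on sale 4 weeks sooner than they otherwise would have been.
old_cycle_weeks, new_cycle_weeks = 6, 2
weeks_earlier = old_cycle_weeks - new_cycle_weeks

revenue_per_week = 1_250  # assumed: the article's extra revenue, spread weekly

time_to_market_gain = weeks_earlier * revenue_per_week
print(time_to_market_gain)  # 5000 (dollars)
```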
If you like this kind of figuring, check out Zend’s suggestion for how to do continuous delivery ROI for some inspiration. Like all “good” ROI calculations, it requires changing ROI around slightly to fit what’s measurable…and some good estimating.
I’ve deftly avoided actually giving you anything actionable here. Calculating ROI is a very numbers-, spreadsheet-friendly exercise and any answer should really include at least a starter spreadsheet to get you calculating things. However, as the above hopefully shows, it is indeed the case that asking for “DevOps ROI” is the wrong question. The “ROI” is getting the process and tools in place to create a better product. Obviously, as you rack up the costs associated with DevOps (both in money and time spent), you can start to model the overall ROI of the project versus the revenue and profit you generate, but there’s little DevOps specific about that.
Beyond such obvious answers, when I see people asking for “DevOps ROI,” what we can offer them is over-thinking like the above and examples of it working at organizations. Examples like Allstate and Humana are good mainstream cases, and you can listen in to more on the excellent Goat Farm podcast.
Additionally, I would suggest focusing on looking at DevOps as a continual improvement initiative rather than trying to predict ROI. Much of the problem with figuring out ROI is in having to predict costs and gains. Instead I would try to focus on tracking and trying to improve how you’re doing things in short intervals. In my experience, most organizations devalue the idea of continuously learning and trying to improve their process. Focusing on that might be a better use of time than summoning up a solid case for DevOps ROI.
I’d love to see examples of how you did a DevOps ROI case…or avoided it altogether. If we can accumulate enough after-the-fact studies, then at least we could make a “rule of thumb” collection. Leave a comment below!
(I originally wrote this July 2015 for FierceDevOps, a site which has made it either impossible or impossibly tedious to find these articles. Hence, it’s now here.)
A little while back I did an email interview with Ray Wang from iThome Weekly, in Taiwan. It’s a little piece about DevOps getting more and more into the enterprise. Reading the Google robot translation, it looks like I did some things “single-handedly,” where in fact I was one of many hands.
As always, here’s the original email exchange we had:
Q: You mention software-defined business in your article; can you tell us more about what a software-defined business is? Why does a CEO have to think about the software-defined business? Why is DevOps so important for a software-defined business?
“Software defined businesses” are companies that are using custom written software to dramatically change and enhance how they run their business. Uber is a good example. Instead of just being a taxi or car service, they use software they wrote to change how their business runs: calling and paying for a taxi on your cell phone is much different than hailing a cab and paying in cash. Insurance and banking companies that are moving more and more of their daily business and interaction with customers to run over mobile apps and other custom written applications are another good example; we see this happening at Pivotal customers like Allstate, Humana, and banks that use Pivotal Cloud Foundry.
Q: What’s your definition of DevOps? Does DevOps equal continuous delivery? Which definition of DevOps do you dislike most, and why?
In general, I think of DevOps as the process and “culture” you wrap around continuous delivery to get the full effect of CD. I tend to speak about them interchangeably at this point; I suppose you don’t need DevOps to get the full benefits of continuous delivery, but they seem to go together well (you could always have just a jelly or just a peanut butter sandwich, but they seem to show up together a lot). CD is always looking to automate as much as possible, deliver to production frequently, and use the feedback loop this rapid cycle gives you (you can observe what your users do each week or day instead of every six months), which are many of the things DevOps seeks to enable as well.
It’s easy to get caught up in DevOps conversations that spend all of their time talking about “culture” and the need to change. I’m interested in that, but I always want to hear about actual, tactical things companies can do to get the benefits of DevOps. We all know that how businesses use IT needs to change for the better, and that it’s hard to do so. I want to make sure the overall community is giving advice that’s helpful and, dare I say it, “actionable.”
Q: Should companies implement agile development before implementing DevOps? Why?
It certainly helps to know what agile development is as a school of thought and to have done some form of agile to trust that way of thinking. If you’ve never done agile before, it just becomes part of trying to do DevOps. It’s certainly hard to think of being successful at DevOps without also doing agile software development.
Q: If a CIO wants to tell the CEO about the value for non-technology companies, what’s your suggestion?
Time to market is the main, measurable, benefit. What that means, to me, is that software is being given to customers more frequently. New features and fixes come out weekly instead of once every 6 to 12 months. The business (the CEO) has to figure out what to do with time to market. If you can put new features in production each week, how will that help the business? In the consumer space (where much of this mentality comes from) you can add more and more features to out-compete competitors. In the business space, the actual business has to change and evolve at a fast pace to fully take advantage of time to market. All that said, I don’t think any CEO is satisfied with the pace of change in IT. They’d all like it to move not just “faster,” but to get more meaningful features in production more often. Humana provides an interesting example: because they had been honing their software delivery process they were able to launch an Apple Watch app in just five weeks. That timeline is pretty amazing for most enterprise IT projects, let alone being in the App Store on the first day of the Apple Watch’s release.
Q: Gartner say 2016 will be the year of DevOps. Do you agree with that? why or why not?
Sure, but I think you’ll see the next 3 or so years be the year of DevOps. I don’t think there’s any one year in particular that will stand out. I don’t think there was “a year of Agile Software Development” in the 2000s; it just took over slowly. What’s important is for companies to understand what DevOps can help give them - faster time to market for their software-driven products and services - and figure out what they’d do with that new ability. “Doing DevOps” is not easy, so you really need to value the end result or you’ll likely lose interest in the transformation process and let it unhelpfully fizzle out.
And, check out the recording of the DevOpsDays Austin talk referenced in the article if you’re interested.
Can DevOps declare victory yet? Not quite, but soon.
Figuring out when a technology inflection point happens is always hard, if not impossible, in real-time. It’s easy to point backwards and say when ERP, agile software development, the Web, business intelligence, mobile or cloud suddenly became “normal.” I think DevOps is right at the door of that point, and as some recent Gartner predictions have proffered, we could see something like a quarter of all large enterprises using DevOps next year.
But it won’t be easy. The same house of industry sages also threw some cold water on that exuberance by predicting a 90% failure among organizations attempting to do DevOps if they fail to properly address process and culture.
As DevOps spreads to more and more IT shops, what can we in the DevOps community do to help? Clearly, we need to keep up the overall conversation about what DevOps is and the process/cultural changes needed to be successful. Another critical element is to start telling more and more stories of how non-technology companies are succeeding with cloud and DevOps. I think the recent Humana profile provides an interesting template here, as does Standard Bank’s forays into DevOps.
In addition to keeping up the good work, there are four key areas that will be helpful.
In one of my favorite straw-polls, groups who focus on the wrong outcomes and goals with private cloud have similar failure rates as those Gartner describes for organizations attempting to do DevOps.
What exactly those right goals are, for both DevOps and cloud adoption, is a new theory of mine I wanted to road-test with the DevOpsDays Austin crowd. As far as I can tell, the best goal of both a “cloud project” and “doing DevOps” is to do continuous delivery. So, cloud and DevOps let businesses set up the process and technologies needed to deliver custom written software on weekly, if not shorter turns, and actively study, learn and adapt software from the feedback of actual people using their software. This is the path of becoming a software defined business and DevOps is the definitive “how” of how that’s done.
To that end, I suggested to the audience in Austin that we should start, more or less, thinking of continuous delivery and DevOps as synonymous. Once you frame what DevOps is — what DevOps enables — as that, the conversation becomes crisper and, I believe, easier for everyone to understand and do something about. As I discussed last time, becoming a software defined business entails (a.) starting to think in a product-oriented manner (greatly facilitated by continuous delivery), and, (b.) ensuring that you have the overall cloud platform in place that provides the feathered, infrastructure bed for everything.
And, to add to the tracking of DevOps’ ascension to the mainstream, if you think of it as continuous delivery, some recent studies have shown that while overall CD use is low, growth has been ramping up year over year, just like DevOps.
While even the best tools without the proper process (or “culture”) are ineffective, most people think in terms of stacks and tool-chains. So many of the DevOps conversations I’ve been involved in over recent years start by talking about tools and technologies. We’re in IT: it’s what we know.
I’d really like to see us start discussing common tool-chains and patterns of use (“cookbooks” to use an older, common programming documentation metaphor) for doing DevOps. Reference implementations even! Vendors do well telling you what they think the toolchain should be — please, oh please, feel free to ask me! ;). In fact, I’d say there’s almost an unhelpful amount of fragmentation in the infrastructure management layer at the moment: there are so many options that one can be left confused and overwhelmed.
Instead of letting us vendors define those stacks, I’d like to see the overall community get even more involved. Don’t be afraid to talk about tools in the face of all this culture talk! And don’t let us vendors steal the show.
Almost by definition, the IT shop at a non-technology company will be chock full of existing IT and “legacy code.” That’s the very IT that was once the growth-engine darling of the company and laid the foundation for where they are now.
As we all know but try to shy away from admitting too loudly, the new cycle of code and tech rarely works with the previous cycle’s code. I talk with companies almost weekly that are very interested in the question of how to integrate new cloud-native and mobile applications with five, 10, even 15+ year old centralized IT services. They want all the power of cloud and continuous delivery, but need help rationalizing and working with what they already have.
In my view, this is a conversation that doesn’t happen often enough in the DevOps community. It first starts with a — wait for it! — seemingly doddering old term: “IT portfolio management.” That is, taking the time to assess what IT you have and understand the business priorities around it. Without that kind of big picture, systems-based understanding of what you have, any whiz-bang awesomeness you get with DevOps will pale in comparison to the rumbling rebar-festooned concrete ball of legacy IT you have to deal with. (Damon Edwards gave a great talk right before mine introducing one method of getting down and dirty with portfolio management.)
There are many thought-technologies of how to approach this, from Gartner’s bi-modal IT approach, to some interesting work going on over at the Cutter Consortium. The point is to have the discipline and maturity to actually do portfolio management so you can start to improve everything and better prioritize your time and projects.
And, to point out the obvious: we need to start documenting how the application being written and supported by DevOps teams is integrating and co-existing with non-DevOps (or “legacy”) applications and services.
As its name implies, DevOps has been on a land-grab mission that started back in the Agile days. If agile had gone the portmanteau route in naming itself, we might have seen DevQA, or even ProductDevQA. Agile development very consciously crossed silos and unified product management and QA with development, so much so that by the time we came up with DevOps, “Dev” represented all of those traditional roles.
Now, as companies are looking to IT and custom written software to help them become software defined businesses, DevOps-minded folks need to start thinking about how they can get more involved with “the business.” Do you know who these mythical business people are, what they’re worried about, how they think? Can you speak their language and help them learn yours?
To pick one very specific item that’s always a punji pit of IT despair: what KPIs and metrics should you use to communicate “up the chain”? (Ernest Mueller and Karthik Gaekwad have a great presentation on just this topic from last year.)
Think of it this way: what is the “API” for your business, and how can you start programming it…if not designing the API? Once DevOps is tightly integrated with the business side, and most companies are actively thinking about how custom written software can help run, grow, and innovate their business…then we’ll be able to declare DevOps success in the mainstream.
(I originally wrote this May 2015 for FierceDevOps, a site which has made it either impossible or impossibly tedious to find these articles. Hence, it’s now here.)
(I originally wrote this April 2015 for FierceDevOps, a site which has made it either impossible or impossibly tedious to find these articles. Hence, it’s now here.)
Quick tip: if you’re in a room full of managers and executives from non-technology companies and one of them asks, “what kind of company do you think we are?”…no matter what type of company they are, the answer is always “a technology company.” That’s the trope we in the technology industry have successfully deployed into the market in recent years. And, indeed, rather than this tip being backhanded mocking, it’s praise. These companies are taking advantage of the opportunity to use software and connected devices in novel ways to establish competitive advantage in their businesses. They’re angling to win customer cash by having better software and technology than their competitors.
What does it look like “on the ground,” though, when it comes to “being a technology company”? I’d argue that the traditional ways we think about structuring the IT department are different from how technology companies structure themselves. To massively simplify it, traditional IT departments are oriented around working on projects, whereas technology companies are oriented around working on products.
Project oriented thinking takes in requests from an outside entity and works on solving an immediate, well understood problem. There’s often a definitive end to the project: the delivery of the new “service.” Project oriented thinking is good for creating an initial version of an application, installing and upgrading existing packaged software, setting up new offices, on-boarding employees, and other things that have a definitive completion date and well known tasks.
When organizing around this type of work, you set up a functional organization that can be assembled to implement the specific project. Here, by “functional organization,” I mean groups of people who are defined by their expertise in something: networking, server administration, software development, project management, audit and compliance, security, and so on. These people are typically shared across various projects as needed and usually are responsible for just what they know about. (For a very different take on how to use functional organizations, see Horace Dediu’s discussion of how Apple organizes itself.)
On top of this, you take a request-driven approach to change management, which defines when to launch a new project or make “small” changes to an existing one (like adding a user). In the 2000s, we fancied up this concept by calling it “service management.”
Once the project is up and running, there may be something called “maintenance mode” which sees IT making sure, for example, that the ERP application has enough disk space available, that new users are added to the application, and that extra capacity is added when needed.
This mind-set is very handy when you’re dealing with keeping a bunch of products from tech vendors up and running. It’s even good if you have custom written applications that are not changed frequently. What’s also great is that, because each project is well defined at the start and has a definitive end, you can measure success and financial metrics easily: how many requests did we handle (tickets closed) this month? Did we deliver on time? Did we deliver on-budget? Is the project profitable (thus, did we pay too much for that software or get a good deal?)
However, two things have been changing this state of affairs, pushing IT to be more exploratory in nature. As a consequence, the structure of the IT department will need to change as well to maximize IT’s value to the overall business.
I often joke that it’s been impossible to see a keynote in recent years without seeing the horsemen of the digital apocalypse. These are the cliche topics that seem to come up in every keynote. Two of these lay the groundwork for why the structure of the IT department needs to change:
These two alone create a pull for more custom written software in businesses. It’s fast and cheaper to create software, and competition is relying on that to create new business models that challenge incumbents or, rather, those businesses that are not evolving how they run their business with software. Again: think of all those taxi services versus Uber.
There’s a third “horseman” in the broader industry that’s driving the need to change how IT departments are structured: the rise of SaaS. Before the advent of SaaS across application categories, software had to be run and managed in-house (or handed off to outsourcers to run): each company needed its own team of people to manage each instance of the application.
Source: Two studies, first with 1,137 respondents, second with 1,097, involved in their company’s IT buying decisions participated in the Jan 2014 and July 2014 survey, including 470 and 445 whose company currently use public cloud. “Corporate Cloud Computing Trends,” 451 ChangeWave, Feb 2014 & “Corporate Cloud Computing Trends,” 451 ChangeWave, Aug 2014.
As SaaS use grows more and more, that staffing need changes. How many IT staff members are needed to keep Google Apps or Microsoft’s Office 365 up and running? How many IT staff do you need to manage the storage for Salesforce or Successfactors? Indeed, I would argue that as companies use more and more SaaS instead of on-premises packaged software, the staffing needs change dramatically: they lessen. You can look at this in a cost-cutting way, as in “let’s reduce the budget!” Hopefully you can look at it in a growth way instead: we’ve freed up the budget to focus on something more valuable to the business. In most cases, that thing will be writing custom software. That is: developers.
This is where the shift to thinking like a product organization is vital. First of all, if you feel the need to develop more custom software — as you should! — you’ll need to hire and train more software developers, product managers, QA staff, and related folks. You’ll also want to cultivate an environment where new ideas can be explored, user-tested in production, and then quickly refined in a loop that spans mere weeks if not one week. You’ll need to become a continuous delivery and learning organization. Jonathan Murray has called this type of organization a “software factory” and has explained how he implemented the change while at Warner Music. More recently, books like Lean Enterprise have explained how this type of thinking can be applied outside of “startup culture,” whose concerns tend to be more around achieving a high valuation to get the company acquired or IPO rather than building and maintaining sustainable business models.
Setting up an organization like this requires not only developers, but creating the actual “factory” that they operate in. I think of this factory as a “platform” and the folks responsible for standing up and caring for that platform are a new type of operations staff. They’re in charge of, really, providing the “cloud” that developers effortlessly deploy and run their applications in.
This new type of IT staff has to think about how they add in as many self-service and highly elastic services in their “cloud” as possible. They too are creating a “product,” one that’s targeted at the internal developer teams and which must continually have new features added to it.
Meanwhile, your developers will be arranged into product-centric teams, hopefully working more closely with line of business managers and staff who are helping craft and grow new applications. No doubt they’ll need operations skill on the team: staff who know how to properly architect and operationalize cloud-native applications.
This is where the now classic DevOps mentality comes in: in order to properly focus on a product, the team must be responsible for all parts of that product’s life, from development through production and back. With a proper cloud platform in place and the operations team to support it, these goals are more achievable than if the product team has to start from bare metal, or work with IT through a ticket system.
To be pragmatic, you probably can’t dedicate all people fully to a product and will need to share them. This carries large risks, however: namely, making sure you properly prioritize an individual’s time, and realizing that they’ll have a harder time keeping up with more products rather than fewer. Quality and the ability to deliver on time will likely decrease. It may seem like an impossible goal, but often in order to stay competitive — to survive — large, seemingly impossible changes are needed.
I’ve spent a lot of time over the years working with cloud market-sizings, and occasionally on them. They’re always a bit whackadoodle and can be difficult to pull apart. But, so long as they’re consistent year over year, they do give a good indication of momentum and a comparison to other markets. This is what you should be using emerging technology market-sizing for: just indications of which way the wind is blowing and how strong that wind is relative to other breezes.
All too often, strategy and M&A people (and other “MBA” types who’re doing valuations and finances, plus all the hangers-on in the chattering classes) get obsessed with market-sizings as if they’re “real” and start to do things like include them as fundamental parts of their business plan, e.g., “we’re going to capture 1% of the cloud market this year!” The implication there is that if they don’t, they’ve failed…but when you realize how corny the market-sizings are, you realize that basing your yearly plan on an Excel macro is a poor use of time.
With that disclaiming context established, I love finding market sizing numbers. They’re fun once you get enough of them and can start figuring out the relative size of markets. Which is to say: how much money is being spent each year across the world on various types of technology.
Platform-as-a-Service is one of the more tricky markets to size and has spun off in all sorts of directions over the years. It’s always the smallest of the three aaS’s, but the highest growth. Even worse, most PaaS market-sizing you see is for public PaaS only. I’ve heard that Gartner has consternated much about this over the years: if you insert a simple “r” into it, they have an on-premises category called CrEAP that sizes what, I think, is “private PaaS,” and I believe they’re fixing it up further.
However, the problem with sizing the PaaS market is asking what you’re sizing. There’s “PaaS from SaaS” offerings like Force.com and then so-called “first generation” PaaSes like Heroku (also at Salesforce) and Engine Yard. Then there’s hosted development tools (all the CI services out there) that, for some reason, show up in PaaS market-sizing. But now there’s all the private PaaS offerings that happen to also run as public PaaS (it turns out the enterprise market really likes private cloud as well as public). And then there’s trouble-makers like us at Pivotal who bristle at the notion of being called (just) a PaaS. Layer Docker, Mesos, Kubernetes and all those folks in there…and your head should be spinning.
The composition of this market has changed dramatically over the years. My theory is that it’s not well “shaken out” (defined) yet and it’ll change more. So, when I get asked for PaaS market-sizing, I sigh a bit inside.
I think the best process is to think through what PaaS is used for and then try to figure out how much money is spent on solving that problem, not the exact technologies used to solve it. To me, that gets to some indication (whether it’s a ceiling, floor, or mid-point, I don’t know) of how much money there is up for grabs if you’re selling PaaS.
To that end, I always like this chart from a recent Goldman PDF:
As you can infer from my over-contextualizing above, I think most PaaS market-sizing is bunk, but this chart is a good way of thinking through getting to an answer. It compares traditional packaged software and on-premises hardware spend to IaaS and PaaS to show how cloud starts to erode “non-cloud” IT. I’m not sure if the IaaS and PaaS figures are public cloud only (they probably are, which is problematic).
When using this chart, the voice-over is something like: “tracking this market is difficult at this moment, as any analyst will tell you. I like to use something like this chart as a guide for thinking about it. With PaaS, what you’re interested in is how much of traditional IT is being taken over by teams writing cloud-native applications, and this is one swag at it. Notice that the growth rates are drastically different too: there’s little to no growth in traditional.”
The other thing (since most analysts don’t track private PaaS) this suggests is that all the traditional middleware money switches over to all PaaS (public and private) at some point…at least all the “new” money, the growth. Or, at least, that it’s a rough heuristic. It could be less (prices go down with cloud, right? ;>) or it could be more (software eating the world × IT − SaaS = what? == more custom software development at companies, leading to more spend on the application development category).
So, if you add up the traditional markets of appdev and middleware, you get something like a $35-40bn market in 2018 or so, depending how exuberant or dour you want to be, for public and private PaaS. Again, that wet-finger-in-the-wind is “bunk,” but it tells you the type of money you should think about. Assume that over the next 10 years most “appdev and middleware” spend converts to “PaaS” and then you’ve got something that looks less shitty than the PaaS market-sizings analysts do now-a-days…and more real, I think.
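If you like, the wet-finger-in-the-wind above can be jotted down as a quick sketch. To be clear, the individual spend figures and the conversion rate below are my own illustrative assumptions chosen to land in that $35-40bn neighborhood, not sourced market data:

```python
# Wet-finger-in-the-wind PaaS ceiling estimate.
# All figures below are illustrative assumptions, not real market data.
appdev_2018 = 20e9       # assumed traditional appdev spend, ~2018
middleware_2018 = 17e9   # assumed traditional middleware spend, ~2018

paas_ceiling = appdev_2018 + middleware_2018
print(f"Rough PaaS (public + private) ceiling: ${paas_ceiling / 1e9:.0f}bn")
# → Rough PaaS (public + private) ceiling: $37bn

# If most of that spend converts to PaaS over the next ~10 years:
conversion = 0.8  # assume 80% of the spend eventually converts
print(f"Converted over a decade: ${paas_ceiling * conversion / 1e9:.0f}bn")
# → Converted over a decade: $30bn
```

The point isn’t the specific output, it’s that the whole “model” fits in ten lines, which is about as much precision as these market-sizings deserve.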
I had lunch with Israel Gat yesterday. Lobster bisque in a sourdough bread bowl, to answer your first question. We were talking about the concept of a “software defined business” (and I was complaining about how HEB needs more of that, if only to get digital Buddy Bucks).
The question came up: so will companies really do this “software defined business” stuff (that’s the phrase I like for “third platform,” “digital enterprise,” horseman-style jabber-jargon)?
Well, over the next 3 years, I think much of the marketing effort in tech will converge on exactly that. This is what tech companies will try to sell and the “thought lordship” they’ll try to deploy into the market. I think it’ll actually be to the tremendous benefit of customers, not just a hustle. Soon the egg will become a chicken, and the chicken will start making demands on the egg. Which one is egg and which is chicken? Indeed! One can never tell the causation directionality in these things.
Why will tech companies focus on software defined businesses as a growth driver? Well, it’s kind of the only area for growth, at least interesting growth. Keep in mind that if you’re a big, publicly traded company, you have to grow; you need to find new money sources. Last year’s revenue can’t just sit there, staying stable. Otherwise you’re toast, because investors will want to allocate their money in companies that are growing, not shrinking (they’ll dump your stock, and buy another). This is true for any business, but very true for technology companies.
Here’s my rough sense of revenue streams tech companies will have:
You know, you get a new mobile phone every 2-3 years. Instead of subscribing to a cable package, you subscribe to HBO Go. (You sort of end up paying the same amount, but who’s paying that close of attention when there’s so many new Game of Thrones episodes to watch?) You buy subscription services. Games.
Basically, the “not enterprise” market. This is what most tech press covers and talks about; it’s (sadly?) what we think of as “tech” now.
There’s growth in here, but it’s a totally different space than traditional, let alone “enterprise” tech. Microsoft finally seems to have figured this out, but meanwhile Apple, Google, and Facebook are gobbling up all the revenue and growth…not to mention all the new companies that have come along.
Businesses still need a lot of software, but I’d argue that “systems of record” are probably well saturated at this point and low growth. This used to be the fuel of the tech sector, but if everyone has a system of record in place…how much more spend can there be there each year? (I’m sure we could look up some IDC or Gartner numbers about single digit growth in these markets.) Not much. One area of interest is shifting over to SaaS, or:
“Well, I guess I need to replace all this ‘legacy’ stuff with cloud.” You know: in storage, compute, and maybe networking. There’s lots of hardware and infrastructure software churn here. In the software category, I’d put migrating your on-premises ERP/systems of record stuff to SaaS here as well: moving to Salesforce, SuccessFactors, etc. In a squishy analogy, things like Adobe transforming from licensed sales to subscription sales.
This is a long term play with lots of cash; for businesses, though: do they end up with anything net-new? Have you actually tried to use Salesforce? It removes the hassle of having to manage your own CRM instance, but you still have to manage how your company uses the application…otherwise it’s baffling what’s going on in there. My point is: the company ends up kind of back where it was before the great rip-n-replace, just with more optimized IT, and hopefully HTML5 and native mobile apps instead of Flex.
Here, companies are looking to create new custom-written software that helps run their businesses in new ways or creates entirely new business models. It’s the thing we at Pivotal target, what you see coming out of the IBM/Apple partnership (mostly - some of it is just the next step in the great rewrite-the-UI-every-decade journey of green-screen->HTML 1.0->Flex->HTML5->Swift), and it’s what will benefit people most: companies get new businesses and, thus, growth; we individuals get companies using software more, hopefully making working with them suck less. You know: Uber and all that.
There’s lots of “drag” (secondary spending) that gets to #3 above: you know, you’re gonna need a platform for all that stuff, and the hardware and services around it…but instead of just ending up with the same IT-driven capabilities, you’ll have new capabilities in your business.
So, if you’re a tech company, and you’re looking at the 4 sources of cash and growth above, the fourth option looks pretty good. #1 means competing with Apple, Google, and Facebook and then a dog’s breakfast of lower margin goods below the UI layer. #2 and #3 are good, known quantities, but probably with single-digit growth, if not tricky waters to traverse in the lower cloud infrastructure layers.
Then you look at the other option: a wide open field of possibilities where you “go up the elevator” and avoid the Morlocks. Large tech companies have to do all of these, of course, but I suspect you’ll see most of the razzle-dazzle spread on the fourth.
Those cigar makers have nothing to do with this, but cool picture, huh?
Never mind journalism, it’s industry analysts who are being disrupted.
I keep coming across a new crop of IT industry analysts who end up getting compared incorrectly to journalists. It’s little wonder as most people have little idea what an industry analyst does; it’s not like analysts, hidden behind their austere paywalls, help much there.
People like Horace Dediu, Ben Thompson, and others are experimenting with ways to disrupt industry analysts. They’re using new business models and tools that often seem bonkers to the more traditional analysts wrapped up all warm and tight in their blue blazers.
Their models focus on narrow topics with broad appeal (Apple, vendor sports among high profile tech companies [you can call this “strategy”], and “social”) and they tend to make much, if not all, of their content free. What they lack is the breadth of the overall industry analyst world (they have no opinion on what type of identity and access management or CRM system you might want to use), but that could be fixed as more “independent” analysts like themselves pop up. There’s also not a lot of “short-listing” (ranking of vendors and products intended to be used by IT decision makers and buyers) that these folks do; this is an area where incumbents can easily defend themselves.
One way of looking at it is the “consumerization of industry analysis”: focusing on selling to and serving individuals rather than enterprises. Indeed, current industry analyst shops sell mostly to companies and are near impossible for individuals to work with.
While Horace is patient zero here, the best example of this trend in action is Ben Thompson, or “stratechery.” For whatever reason - and his self-proclaimed Midwestern modesty would make him blush at this notion - he talks about his business more and, thus, provides a better view into the business side of this trend.
In the first episode of his podcast, Ben lays out the model he’s trying to execute (and how, you know, the Internet and blogging make all this possible); he later elaborated on it with his rainforest layer cake metaphor; and in an even more recent episode goes over how his business has evolved.
How well do these models work? Well, we have some data points from Ben since he’s discussed his momentum by subscriber numbers a few times. Let’s compare it to what I’ve made as an analyst over the years to get a sense of what’s “normal”:
(Sources: my often shoddy memory [adding up salary and bonus approximations], and Ben Thompson talking about reaching 1,000 subscribers on November 13th, 2014, and then 2,000 on February 2nd, 2015).
This excludes a lot of things: health insurance is the biggest, along with other non-cash compensation.
The point is to show that at the individual level, Ben is doing well. His business is performing well compared to what’s “normal” for analysts. The recent growth rate looks even more promising. I actually ended up at the high end of the analyst wage chart (I think). The average is a lot closer to $100,000 the more junior you get.
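Ben’s two disclosed subscriber counts make for an easy back-of-the-envelope. Note that the $10/month price here is purely my assumption for illustration; he’s talked about subscriber numbers, not (to my knowledge) this exact pricing:

```python
from datetime import date

# Two disclosed subscriber counts, from Ben's own posts:
t0, subs0 = date(2014, 11, 13), 1_000
t1, subs1 = date(2015, 2, 2), 2_000

days = (t1 - t0).days
print(f"Subscribers doubled in {days} days")
# → Subscribers doubled in 81 days

# Assumed price: $10/month (my illustrative guess, not his actual pricing)
monthly_price = 10
annual_run_rate = subs1 * monthly_price * 12
print(f"Implied run rate at 2,000 subscribers: ${annual_run_rate:,}/year")
# → Implied run rate at 2,000 subscribers: $240,000/year
```

Even with a conservative assumed price, that run rate lands well inside the “normal” analyst wage band, and doubling subscribers in under three months is the kind of growth no salaried analyst sees.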
If this model can be replicated by other individuals, we’ll see the biggest disruption to the industry analyst business since the Web. What the established firms have is marketing reach, brand awareness, and lots of money and time. The first two are hard for individuals to achieve, but not impossible. The last two are harder.
I think there’s a lot of room for Gartner (the mega firm) and the RedMonks (boutique firms) of the world, but in the middle things will get harder. Forrester is always raring to be a #2, but revenue-wise, they have a long way to go; IDC will probably keep winching the cost-cranks and double down on being Master of the PivotTable. My former friends at 451 Research have a lot of potential, but like all the other folks in the middle, they need to keep honing their strategies and go-to-market. I keep hearing that HfS is awesome, which could provide an interesting case.
With that bucket of points made for the tl;dr crowd, the rest is an extended treatment.
I ran into Nick Muldoon a few years ago at a DevOpsDays (in 2012, right in the middle of my time at Dell) and he paid me a high compliment, loosely quoted from memory: “I always thought you could be the Gruber of enterprise IT.” Indeed, that’s what all my type dream about when we drive past those lottery billboards on the way to the airport at 4:30am: sitting at home, reading news, blogging, and being so awesome that it pays well.
To some extent, I did that at RedMonk; not at Dell, for sure: there’s no talking in public, really, when you work on strategy and M&A. And I did that on the pay side of the paywall at 451 Research. I loved it: writing up what I think about the IT industry targeted at helping my “audience” (we called them “clients”) make better decisions about IT, be it product management, competing, investing, or using IT. (There’s a minority that look towards analysts for entertainment, which is valid, but likely pays poorly.)
In part, that’s what I’ve been asked to do in my new job at Pivotal, except with a Pivotal bent, of course.
Just a few years into it, RedMonk decided to do away with their paywall and ended up showing one path to disrupting the industry analyst market: the idea of providing free analyst reports through blogs seemed crazy, but it worked. James captured the, uh, esprit de corps in the analyst world well in a 2005 post:
I was at a recent event when a well known industry analyst, who used to run a firm well known for writing white papers in support of vendor positions, sat down. I was discussing how blogs, RSS splicing and aggregation were going to change industry analyst and other information-based businesses. They sniffed and said that bloggers had no credibility. This from someone that sold their credibility down the river long ago.
Yup, analysts are a friendly lot…
As with VCs, one of the problems an analyst has is generating enough flow to get the raw materials you need for your day-to-day work: getting people to talk to you enough, frequently enough, and deeply enough to gather all the information you need to usefully pontificate. You need raw fodder for your content creation. Ben alludes to this in another way: you have to create a pipe (or an overflowing Evernote notebook) of content ideas, things to write and talk about…to analyze.
For RedMonk, having no paywall meant that their marketing was done for “free.” The consequence was (and still is) that RedMonk can’t charge for content, it’s all free. Most firms in the industry analyst business charge a lot for content. My last analyst shop, 451 Research, charges a bundle, and people seem to like it: 451 writes great stuff and their large customer base shows that people value it. But, it does mean that 451 needs to do marketing separately; they don’t get those “zero marketing budget” dynamics RedMonk does. Neither model is better or worse, just different depending on what and how you’re running the business. Both models still get paid for consulting, webinars, and a multitude of other things.
Let’s look at three firms to peek into the bushes of the business a little bit.
RedMonk thus differentiated itself from other analyst firms first by making all of its research free (at the time, very novel): it allowed RedMonk to build pull in the market, that is, it made marketing free. It wasn’t easy, and it took awhile, but it worked.
Their research topics matched this structural approach as well, namely:
They still do that and do it well, along with some of the usual analyst business models (like consulting, webinars, events, etc.)…but all of RedMonk’s activities revolve around knowing about the new shit sooner than the next analyst and being able to explain how to fit it into client thinking. Their events business (launched after I left) looks like an adjacent business to the “knowing what the fuck we’re talking about” strategy: the tried and true “come hang out with the smart folks and drink your face off” business model.
RedMonk is cheap compared to other firms. The entry level is $5,000 for startups, and goes up from there. Companies like IBM, SAP, and Microsoft pay a lot more (and get a lot more!) but still get a really good deal compared to what other firms charge. You can check out their client logo page to estimate their revenue if you do a little estimating for the larger account sizes and Excel swagging: not too shabby, eh?
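That logo-page swag looks something like this. Only the $5,000 entry level is from their actual pricing; the tier counts and the other fees are pure guesses on my part for the sake of the exercise:

```python
# Hypothetical RedMonk revenue swag from the client logo page.
# Only the $5,000 entry-level fee is a real figure; everything else is guessed.
tiers = {
    "startup":  {"clients": 30, "fee": 5_000},    # stated entry level
    "mid-size": {"clients": 15, "fee": 25_000},   # guessed count and fee
    "big (IBM, SAP, Microsoft)": {"clients": 5, "fee": 100_000},  # guessed
}

revenue = sum(t["clients"] * t["fee"] for t in tiers.values())
print(f"Swagged research revenue: ${revenue:,}")
# → Swagged research revenue: $1,025,000
```

Add consulting, webinars, and the events business on top of whatever number your own guesses produce and, as I said: not too shabby for a boutique.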
451, structurally, is similar to other analyst outfits: there’s a paywall for most everything. As with any analyst outfit, 451 does paid consulting, webinars, events, and other usual marketing driven stuff. 451 also does data center planning (they acquired The Uptime Institute some time ago) and has some interesting data-driven businesses that are being marshaled into proper quantitative analyst products. 451’s key differentiation is mixing its scale with the “the new shit” focus (perhaps a bit less bleeding edge than RedMonk, but not much), all stuck in the speed blender of publishing velocity.
451 seeks out new technologies, not old ones, and writes a lot: each analyst has to write somewhere between 40–60 reports a year, basically one ~1,500 word report a week…not including other deliverables. For the most part, if you brief a 451 analyst they’ll write a report on you; vendors love that and it helps with content flow (and gives analysts inbox heart-burn). I was terrible at that cadence coming from the RedMonk school (which emphasizes consulting, which I did a lot of at 451 instead of writing), but the best performers at 451 rarely take a briefing that results in no report being written.
451 is slightly cheaper than larger folks like Gartner and much more expensive than RedMonk. I was delightfully shocked at how much 451 charged coming from RedMonk, which is more a reflection of how cheap RedMonk is (I’m not sure they’ve raised prices, at least at the entry level, since 2006 when I started there - great for clients!). You get a lot more content, of all types, however, from 451 than from RedMonk due to 451’s sheer analyst bulk and core process of weekly report writing.
You’ll recall that my personal revenue was much higher at 451. I think that’s a reflection of the “leverage” a larger group of analysts can have: selling the same thing (reports and knowledge) over and over again.
Gartner is giant. It has breadth and has captured much marketshare. In analyst sales calls you often hear a variation on this:
Well, we’re signing up with Gartner because we have to, and IDC next because we need their PivotTables…the rest of you get to fight over what’s left (want to write a white paper for me?).
Gartner is really good at being the Microsoft of the analyst space…and I mean that as a compliment.
One of the key activities they do is ranking vendors. That may seem trivial, but it’s huge. Gartner tells you what the safe bet in IT acquisition is. It may not be the growth bet, or even the anti-disruption bet for your industry, but it’s the safe bet. And really, with the way most people use IT, that’s all they want. People don’t want to be Uber, they’re forced to compete with Uber, and they’d rather Uber didn’t exist at all.
If this baffles you, think about your own buying habits outside the realm of computers. Do you prefer to buy your building materials at Home Depot, or some experimental shop on the side of the road? Like lumber and Shop-Vacs, most people look to computers for a function, not a complex belief system (I could have typed “paradigm”), let alone putting a business strategy in action.
(We at Pivotal like to think we help companies who are wise enough to take the first mover advantage when it comes to using IT to gain competitive advantage. This is shockingly not everyone in the world, which is fine: so far there’s been plenty of wise customers out there.)
Throw in their relative scale, and Gartner is in the hallowed “don’t fuck it up” position.
Gartner is expensive, from what I’ve heard and encountered when I’ve been on the vendor side. However, depending on what you need their content is good and the ability to influence (that is, educate, not make them parrot your messaging) analysts through working with them is nice. The IIAR seems to like Gartner, that group of analyst relations folks having ranked Gartner as #1 most every year since 2008 (it’s interesting to note that individual analyst winners are much different). Enterprises seem to like Gartner a lot as well from anecdotes I hear.
(See more analyst shop rankings from Kea Company’s 2014 survey if you like that kind of thing.)
The new (or “new new,” if you’re RedMonk and crew) crop of analysts follows the “make all the good stuff free” rule of having no paywalls (though Ben Thompson is following a sort of open core model). And in making all that stuff free, they’re doing much of the type of work that industry analysts do now, mostly of the qualitative sort. Horace gets into forecasts and market-sizing a bit, actually, which makes him even more of a threat; the other folks don’t seem to spend time on that. (Of late Horace has been mixing in market numbers from traditional analyst shops as well, but he also doesn’t do surveys.)
Again, the reason this new crop of “bloggers” are threatening to industry analysts is because they’re serving some of the same purposes, with much of the same tools and outputs of traditional analysts. And for those analyst activities they don’t currently do: it’s not too far fetched to think that someone soon will. The core differences are similar to previous disruptors (RedMonk and 451 - see, that’s why I outlined them above, all you tl;dr ding-a-lings!), but with some tweaks, namely:
Most of these analysts have a very narrow focus that appeals to a mainstream market. One individual can only cover so much, but if there’s a large audience for that topic, it will suffice.
Horace ostensibly is the world’s premier Apple analyst. That’s a bold claim as I don’t read any Apple analysts you have to pay to read, so maybe there’s some better than him locked behind paywalls; at the very least, he’s a big deal for his size. As a side effect of being an Apple analyst, he covers the mobile space in general. There’s two more tricks for him that build off this seemingly narrow focus:
The combination of Apple, “PC of the future,” and “innovation” all amounts to a very large “audience.”
(Recently, Horace went to go work for a think-tank; it’s hard to tell if that invalidates some of the “this independent blogger-cum-analyst thing is a thing” thinking here or not.)
While Ben Thompson started out seemingly as another Apple/mobile analyst, I’d argue he’s become more of a “third platform” analyst, discussing how the “consumerization of IT” is affecting the tech industry.
This narrow focus means that both (and most of these new types of analysts) cover “vendor sports”: they don’t give buyers advice about what products and services to buy, they instead tell you how various tech vendors are doing and explore the strategic possibilities of new types of technology.
There are very few of these new analysts that are prescriptive when it comes to buying. I’m not sure why, but I’d theorize that it has a lot to do with the cost structures of doing such work (in the cycle time of analysts learning, collecting epiphanies, publishing, and then collecting money from clients for sharing the results). This is a large part of why I think Gartner’s scale and established position is and will continue to be hard to beat, head on at least.
There are some challengers here:
Once these new bloggers move beyond vendor sports - if they can - and start recommending what to buy, the dynamics will change a lot. Until then, they’re nibbling at the industry analyst business…but that’s how it all starts.
On that quest to find an enterprise Gruber, there’s been a rash of sites of late that go for that. There’s still a giant gap in the market for good enterprise tech coverage.
I watch these sites closely to see how they pan out and if they fall into the usual journalistic traps that start to preclude good analysis.
What you’d really like to see is some dramatic business model hacking in the mid and small section of the industry analyst market. What would it mean to have a Ben Thompson or a RedMonk approach at a place like 451, or Forrester even? Those shops would have huge cultural issues to deal with (analysts are, ironically, a lot who’re the least interested in doing new things in their own processes: they hate changing), but the established brand/reach and capital (in money and time) those larger firms could bring to the strategies of the micro firms would be interesting.
The problem with the big shops taking in the blogger-cum-analysts is that big shops don’t like to create rock star analysts. The rock stars leave to become independent because they can make more money, or, at least, have more freedom. They, like me, also get snatched up by vendors who can pay much more, including something rarely seen in the analyst world: those mythical stock options, which could be worth anything between the title for a large house, the cost of a college diploma, or a pack of novelty cupcake papers.
Larger firms are better positioned to cement their position by upping their game by deeply evaluating and short-listing technologies. There are two examples right in front of us: OpenStack and Docker. Both of those vacillate between IaaS utopia (sometimes people come down from their buzz and realize that Docker often aspires to be a PaaS too) and shit-shows crackling in tire-fires all the way down.
Someone like a Gartner has the time, money, and (potential) authority to run labs to test technologies like these out and give solid recommendations on what to use and not use…per business use case, even. To quote the meme, one does not simply build an enterprise cloud…so how could you expect anyone who just creates PDFs about cloud to actually be credible?
With all this glee, you may be wondering why I’m now at a vendor. Good question, as they say when they’re buying time to think. The core of it is that my fixed expenses are too high. I have a family, a large house, and even a new dog (I resisted as long as I could - promise!). And, I’m the single earner for all that.
While I would love to bushwhack my way through this emerging analyst jungle, I don’t want to Mosquito Coast my family; and let’s be honest, myself either. The warm, bi-weekly embrace of a vendor is very comforting. So, like the analysts themselves who observe from the sideline, I’ll be eagerly watching how the industry analyst sports-ball brackets play out.
(Also, check out the two part podcast - part one and part two - with myself and some other analysts on this topic.)
Occasionally, my fellow analysts ask me for advice on being an analyst. Here’s an edited up version of one of my recent emails:
You have to learn to trust your intuition about what you focus on, your own style and voice, and, most importantly for monetization, how you market yourself. The last point is important for commercial success: in most cases, the (analyst) company you work for will do a poor job marketing you compared to how well you can market yourself.
The first points are on the core parts of being an analyst: deciding what to focus on. While you may proffer opinions about “everything,” it’s good to have a stable of things you really focus on. You’ll need this when it comes to getting things done: you need a way of deciding what to cover, as there’ll be no end of offers and topics that people want help on once you’re mildly known, and you need focus. Commercially it’s good to have focus as well. It’s easier to market yourself as a specialist and close deals on that than as a generalist.
The other thing I would do - depending on your relationship with your boss and management chain - is stop asking permission for anything. Since “publishing” and opinion-mongering is so freewheeling and basically “zero cost” now-a-days, you have to go out there and try new ways of publishing all the time. It’s like the advice us analysts give businesses: stuff is changing so fast, you have to adopt and use new technologies or die! The same applies to analysts, and yet we’ve got Cobbler’s Shoes Syndrome (we experiment very little).
Just do things that seem like they will, first, promote your personal brand (and therefore “worth”) and, second, bring in a profit to your firm. As a self-serving example, though a small one: pretty early on I just started uploading presentations and “brochure” stuff to my SlideShare and, of course, my blog. Several engagements were driven by this, and it also gave me URLs to send to people. Before that, I’d been sitting on my hands waiting to hear back about getting permission to do it…and then I just started doing it. Think a podcast would help? Just start one! And so forth.
And, on the second point (bringing in profit to your firm): all the motivational crap for net-heads like us focuses too much on building a personal brand, doing what you love, and whatnot; you have to remember to bring in money for your employer. Your firm will notice you bringing in revenue (and especially profit!) and clients above all else; they’re a business, not a charity, and that’s what they care about.
To that end - to give some general work advice - make sure you have as good a relationship as possible with your boss’s boss (your “second line” manager). This is good advice at any company, and it applies especially to analyst work, where it’s easy for management to lose track of analysts (they have a company to run and can’t keep up with everything you publish - I know! Weird, huh?).
Your second line manager is generally the one who, with minimal input from your actual manager, approves and hands out bonuses and promotions, arbitrates disputes, and so on. Analyst shops tend to organize along a taxonomy instead of being flat (I know, weird, huh?), so individual analysts get lost in the upside-down tree. You, then, have to do the work of establishing a relationship with the management chain above you. There are no immediate benefits, but it’s good “credit” to build up. Ask for a 1:1 meeting every two weeks, or at least once a month, just to discuss ideas and what you’re up to, and to ask what you can do to help.
As my snide aside above indicates, most analyst shops are hopelessly behind on using IT themselves. Make a bar chart of knowledge of “Slack” vs. “SharePoint 2008” and see what it looks like. One way to impress the management chain (or something to “be known for”) is always suggesting new tools - and more than just pointing to them, explaining what they are and why they will improve the company. You know, being an analyst to the analysts.
I would also force yourself to casually interview, if not formally, for new jobs at least twice a year to get a sense of what’s out there, who’d be interested, your worth (do people know you? Are they interested in hiring you?), etc. Interview at other analyst shops for sure - you’ll get a peek into how they operate that will be useful - and at non-analyst companies. I find that I can only work at my current job confidently if I know I can get another job easily. Plus, you want to avoid being isolated and understanding the labor market only through your current employer’s lens.
I’ve been an analyst, now, for almost eight of my ~17 years working. I was (very!) lucky to be hired by RedMonk, who taught me nearly everything I know about being an analyst, and then to work in strategy/M&A at Dell, which taught me a lot more. I learn new things all the time at my current job at 451 Research. There’s not really a good manual or set of understood practices for doing analyst work. The issue, once again, is that things are changing fast, so new methods are constantly needed.
The analysts I admire don’t really “reinvent” themselves constantly - that’s madness - but they’re consistent in three primary skills:
On the last point: an analyst company is always going to be less interested in creating “stars” than in making a star out of its own brand. This is understandable, and just fine for them (you would act the same way if you were management, whose goal is to grow the value of the company, not the individuals in it). Thus, you, the individual analyst, have to learn to take care of yourself first and accept that no one is better positioned to do so than…yourself.
As one final piece of advice, so as not to become a psychotic spewer of self-promotional filth always busting up the china shop, figure out where “the line” is when it comes to behavior and activities in your firm. Who are the people in your firm who have bad reputations “in front of clients” or are generally thought of as screwballs? So long as you’re operating in the company, try not to be like them: walk right up to the line of acceptability, but don’t cross it.
(For a view from a different perspective, check out my recent update to my “How to deal with industry analysts” talk.)