Speed

This post is an early draft of a chapter in my book, Monolithic Transformation.

From John Mitchell:

Speed is the currency of business today and speed is the common attribute that differentiates companies and industries going forward. Anywhere there is lack of speed, there is massive business vulnerability:

● Speed to deliver a product or service to customers.

● Speed to perform maintenance on critical path equipment.

● Speed to bring new products and services to market.

● Speed to grow new businesses.

● Speed to evaluate and incubate new ideas.

● Speed to learn from failures.

● Speed to identify and understand customers.

● Speed to recognize and fix defects.

● Speed to recognize and replace business models that are remnants of the past.

● Speed to experiment and bring about new business models.

● Speed to learn, experiment, and leverage new technologies.

● Speed to solve customer problems and prevent reoccurrence.

● Speed to communicate with customers and restore outages.

● Speed of our website and mobile app.

● Speed of our back-office systems.

● Speed of answering a customer’s call.

● Speed to engage and collaborate within and across teams.

● Speed to effectively hire and onboard.

● Speed to deal with human or system performance problems.

● Speed to recognize and remove constructs from the past that are no longer effective.

● Speed to know what to do.

● Speed to get work done.

Continuous innovation only works in an enterprise that embraces speed and the data required to measure it. To create the conditions for continuous innovation, we must bring about speed. While this is hard, it has a special quality that makes the job a little easier: through data, speed is easy to measure.

Innovation, on the other hand, can be extremely difficult to measure. For example, was that great quarterly revenue result from innovation or market factors? Was that product a one-hit wonder or the result of innovation? How many failures do we accept before producing a hit? These questions are not answerable. But we can always capture speed and measure the effects of new actions. For example, we can set compliance expectations on speed and measure those results.

Speed is not only the key measurement, it becomes a driver for disruptive innovation. Business disruption has frequently arisen from startups and new technologies, not seeking optimization, but rather discovering creative ways to rethink problems to address speed. Uber is about speed. Mobile is about speed. IoT is about speed. Google is about speed. Drones are about speed. AirBnB is about speed. Amazon is about speed. Netflix is about speed. Blockchain is about speed. Artificial Intelligence is about speed.

Continuous Innovation, then, is the result of an enterprise, driven by speed, that is constantly collecting data, developing and evaluating ideas, experimenting and learning, and, through creativity and advancing technologies, constructing new things to address ever-evolving customer needs.


Power-line picture from Claudiu Sergiu Danaila.

Team composition: not all ninjas

This post is an early draft of a chapter in my book, Monolithic Transformation.

By way of “A brief history of rockstars destroying guitars.”

Skilled, experienced team members are obviously valuable and can temper the risk of failure by quickly delivering software. Everyone would like the mythical 10x developer, and would even settle for a 3 to 4x "full stack developer." Surely, management often thinks, doing something as earth-shattering as "digital transformation" only works with highly skilled developers. You see this in surveys all the time: people cite lack of skills as a top barrier to improving their organization's software capabilities.

This mindset is one of the first barriers to scaling change. Often, an initial team of "rockstars" has early success, but attempts to clone them predictably fail and scaling up change is stymied. It's that "lack of skills" chimera again. It's impossible to replicate these people, and companies rarely want to spend the time and money to actually train existing staff.

Worse, when you use the "only ninjas need apply" tactic, the rest of the organization loses faith that they could change as well. "When your project is successful," Jon Osborn explains, "and they look at your team, and they see a whole bunch of rockstars on it, then the excuse comes out, 'well, you took all the top developers, of course you were successful.'"

Instead of only recruiting elite developers, also staff your initial teams with a meaningful dose of normals. This will not only help win over the rest of the organization as you scale, but also means you can actually find enough people. A team with mixed skill levels also allows you to train your "junior" people on the job, especially when they pair with your so-called "rockstars."

Rockstars known to destroy hotel rooms

I met a programmer with 10x productivity once. He was a senior person and required 10 programmers to clean up his brilliant changes. –Anonymous on the c2 wiki

Usually what people find, of course, is that this rockstar/normal distinction is situational and the result of a culture that rewards the lone wolf hero instead of staff who help and support each other. Those mythical 10x developers are lauded because of a vicious cycle of their own creation. At some point, they spaghetti-coded a complicated and crucial part of the system "over the weekend," saving the project. Once in production, strange things started happening to that piece of code, and of course our hero was the only one who could debug it, once again, over the weekend. This cycle repeats itself, and we laud this weekend coder, never realizing they're actually damaging our business.

Relying on these heroes, ninjas, rockstars, or what have you is a poor strategy in a large organization. Save the weekend coding for youngsters in ramen-chomping startups that haven't learned better yet. "Having a team dynamic and team structure that the rest of the organization can see themselves in," Osborn goes on to say, "goes a long way towards generating a buy in that you're actually being successful and not cheating by using all your best resources."

Volunteers

When possible, recruiting volunteers is the best option for your initial projects, probably for the first year. Forcing people to change how they work is a recipe for failure, especially at first. You'll need motivated people who are interested in change or, at least, will go along with it instead of resisting it.

Osborn describes this tactic at Great American Insurance Group: “We used the volunteer model because we wanted excited people who wanted to change, who wanted to be there, and who wanted to do it. I was lucky that we could get people from all over the IT organisation, operations included, on the team… it was a fantastic success for us.”

This might be difficult at first, but as a leader of change you need to start finding and cultivating these change-ready volunteers. Again, you don’t necessarily want rockstars, so much as open minded people who enjoy trying new things.

Rotating out to spread the virus of digital transformation

Few organizations have the time, budget, or will to train their staff. Management seems to think that a moist bedding of O'Reilly books in a developer's dark room will suddenly pop up genius skills like mushrooms. Rotating pairing in product teams addresses this problem in a minimally viable way inside a team: team members learn from each other on a daily basis. Even better, staff is actually producing value as they learn instead of sitting in a neon-light buzzing conference room working on dummy applications.

To scale this change, you can selectively rotate staff out of a well-functioning team into newer teams. This seeds their expertise through the organization, and as you repeat this over and over, knowledge spreads faster. One person will work with another, becoming two skilled people, who each work with another person, becoming four skilled people, then eight, and so on. Organizations like Synchrony go so far as to randomly shuffle desks every six months to ensure people are moving around.

More than just skill transfer and on-the-job training, rotating other staff through your organization will help spread trust in the new process. People tend to trust their peers more than leaders handing down change from on high, and much more than external "consultants" and, worse, vendor shills like myself. As ever, building this trust through the organization is key to scaling change.

Orange France is one of many examples of this strategy in practice. After the initial success revitalizing their SMB customer service app, Orange started rotating developers to new teams. Developers who worked on the new mobile application paired with Orange developers from other teams, such as the website team. As ever with pairing, they teach their peers how to apply agile and improve the business with better software at the same time. Talking about his experience with rotating pairing, Orange's Xavier Perret says that "it enabled more creativity in the end. Because then you have different angles, [a] different point of view." As staff work on new parts of the system, they get to know the big picture better and bring "more creative problem solving" to each new challenge, Perret adds.

While you may start with ninjas, you can take a cadre of volunteers and slowly but surely build up a squad of effective staff who can spread transformation throughout your organization. All with fewer throwing stars and trashed hotel rooms than those 10x rockstars leave in their wake.


Beyond digital transformation BS: improving your organization by fixing your software strategy

This post lists early draft chapters of my now published book, Monolithic Transformation.

Credit to Team Tirefi.re.

The phrase "digital transformation" is mostly bullshit, but then again, it's perfect. The phrase means executing a strategy to innovate new business models driven by rapidly delivered, well designed, and agile software. For many businesses, fixing their long dormant, lame software capabilities is an urgent need: companies like Amazon loom as over-powering competitors in most every industry. More threatening, clever existing enterprises have honed their software capabilities over the past five years.

Liberty Mutual, for example, entered a new insurance market on the other side of the world in 6 months, doubling the average close rate. Home Depot has grown its online business by around $1bn in each of the past four years, is the #2 ranked digital retailer by Gartner L2, and is adding more than 1,000 technical hires in 2018. The US Air Force modernized their air tanker scheduling process in 120 days, driving $1m in fuel savings each week and leading to the cancellation of a long-standing $745m contract that hadn't delivered a single line of code in five years.

Whatever businesses you’re in, able, ruthless competition is coming from all sides: new entrants and existing behemoths. Their success is driven by an agile, cloud-driven software strategy that transforms their organizations into agile businesses.

Let’s take a breath.

That's some full-tilt bluster, but we've been in an era of transient advantage for a long time. Businesses need every tool they can lay hands on to grow their business, sustain their existing cash-flows, and fend off competitors. IT has always been a powerful tool for enabling strategies, as they say, but in the past 10 years seemingly helpful but actually terrible practices like outsourcing have ruined most IT departments' ability to create useful software for the businesses they supposedly support.

These organizations need to improve how they do software to transform their organizations into programmable businesses.

Studying how large organizations plan for, initially fail at, and then succeed at this kind of transformation is what I spend my time doing. This book (which I'm still working on) collects together what I've found so far, and is constructed from the actual experiences and stories of people who've suffered through the long journey to success.

Enjoy! And next time someone rolls their eyes at the phrase “digital transformation,” ask them, “well, what better phrase you got, chuckle-head?”

Draft chapters

I’m posting draft chapters of this book as I MVP-polish them up. In sort of the right order, here they are:

  1. Why change?
  2. Spraying the bullshit of “vision” & “strategy”.
  3. Communicate the digital vision and strategy.
  4. Creating a culture of change, continuous learning, & comfort.
  5. Enterprise architecture still matters.
  6. Creating alliances & holding zero-sum trolls at bay.
  7. A series of small projects, building momentum to scale.
  8. Product teams — agile done right.
  9. Team composition: not all ninjas.
  10. Tracking your improvement — “metrics.”
  11. Dealing with compliance — it might even be a good idea.
  12. You own it (conclusion)

There’s also the complete draft in progress if you can bear it. Also, there’s a previous “edition” of sorts, and the ever shifting talk I give on this content.


Communicate the digital vision and strategy

This post is an early draft of a chapter in my book, Monolithic Transformation.

Your employees listening to yet another annual vision and strategy pitch.

If a strategy is presented in the boardroom but employees never see it, is it really a strategy? Obviously not. Leadership too often believes that the strategy is crystal clear, but staff usually disagree. For example, in a survey of 1,700 leaders and staff, 69% of leaders said their vision was "pragmatic and could easily be translated into concrete projects and initiatives." Employees had a glummer picture: only 36% agreed.

Your staff likely doesn't know the vision and strategy. And beyond just understanding it, they rarely know how they can help. As Boeing's Nikki Allen put it:

In order to get people to scale, they have to understand how to connect the dots. They have to see it themselves in what they do — whether it’s developing software, or protecting and securing the network, or provisioning infrastructure — they have to see how the work they do every day connects back to enabling the business to either be productive, or generate revenue.

There's little wizardry to communicating strategy. First, it has to be comprehensible. But you already did that when you established your vision and strategy…right? Next, you push it through all the mediums and channels at your disposal to tell people over and over again. Chances are, you have "town hall" meetings, email lists, and team meetings up and down your organization. Recording videos and podcasts of you explaining the vision and strategy is helpful. Include strategy overviews in your public speaking because staff often scrutinize these recordings. While "Enterprise 2.0" fizzled out several years ago, Facebook has trained all of us to follow activity streams and other social flotsam. Use those habits and the internal channels you have to spread your communication.

You also need to include examples of the strategy in action: what worked and what didn't. As with any type of persuasion, getting people's peers to tell their stories is the best approach. Google and others find that celebrating failure with company-wide post-mortems is instructive, career-endingly crazy as that may sound. Stories of success and failure are valuable because you can draw a direct line from high-level vision to fingers on keyboards. If you're afraid of sharing too much failure, try just opening up status metrics to staff. Leadership usually underestimates the value of organization-wide information radiators, but staff usually want that information to stop prairie dogging through their 9 to 5.

As you're progressing, getting feedback is key: do people understand it? Do people know what to do to help? If not, then it's time to tune your messages and mediums. Again, you can apply a small batch process to test out new methods of communicating. While I find them tedious, staff surveys help: ask people if they understand your strategy. Be sure to also ask if they know how to help execute it.

Manifestos can help decompose a strategy into tangible goals and tactics. The insurance industry is on the cusp of a turbulent competitive landscape. To call it "disruptive" would be too narrow. To pick one sea of chop, autonomous vehicles are "changing everything about our personal auto line and we have to change ourselves," says Liberty Mutual's Chris Bartlow. New technologies are only one of many fronts in Liberty's new competitive landscape. Every existing insurance company, and cut-throat competitors like Amazon, are using new technologies to both optimize existing business models and introduce new ones.

“We have to think about what that’s going to mean to our products and services as we move forward,” Bartlow says. Getting there required re-engineering Liberty’s software capabilities. Like most insurance companies, mainframes and monoliths drove their success over past decades. That approach worked in calmer times, but now Liberty is refocusing their software capability around innovation more than optimization. Liberty is using a stripped down set of three goals to make this urgency and vision tangible.

“The idea was to really change how we’re developing software. To make that real for people we identified these bold, audacious moves — or ‘BAMS,’” says Liberty Mutual’s John Heveran:

These BAMs grounded Liberty's strategy, giving staff very tangible, if audacious, goals. With these in mind, staff could start thinking about how they'd achieve those goals. This kind of manifesto makes strategy actionable.

So far, it's working. "We're just about to cross the chasm on our DevOps and CI/CD journey," says Liberty's Miranda LeBlanc. "I can say that because we're doing about 2,500 daily builds, with over 1,000 production deployments per day," she adds. These numbers are tracers of a small batch process in place that's used to improve the business. They now support around 10,000 internal users at Liberty and are better provisioned for the long ship ride into insurance's future.

Choosing the right language is important for managing IT transformation. For example, most change leaders suggest dumping the term "agile." At this point, nearly 25 years into "agile," everyone feels like they're agile experts. Whether that's true is irrelevant. You'll facepalm your way through transformation if you're pitching a switch to a methodology people believe they've long mastered.

It's better to pick your own branding for this new methodology. If it works, steal a buzzword du jour: "cloud native," "DevOps," or "serverless." Creating your own brand is even better. As we'll discuss later, Allstate created a new name, CompoZed Labs, for its transformation effort. Using your own language and branding can help bring smug staff onboard and get them involved. "Oh, we've always done that, we just didn't call it 'agile,'" sticks-in-the-mud are fond of saying as they go off to update their Gantt charts.

Make sure people understand why they're going through all this "digital transformation." And make even more sure they know how to implement the vision and strategy, or, as you want them to start thinking of it, our strategy.


Creating alliances & holding zero-sum trolls at bay

This post is an early draft of a chapter in my book, Monolithic Transformation.


Lone wolves rarely succeed at transforming business models and behavior at large organizations. True to the halo effect, you’ll hear about successful lone wolves often. What you don’t hear about are all the lone wolves who limped off to die alone. Even CEOs and boards often find that change-by-mandate efforts fail. “Efforts that don’t have a powerful enough guiding coalition can make apparent progress for a while,” as Kotter summarizes, “But, sooner or later, the opposition gathers itself together and stops the change.”

Organizations get big by creating and sustaining a portfolio of revenue sources, likely over decades. While these revenue sources may transmogrify from cows to dogs if frightened or backed into a corner, hale but mettlesome upstarts are usually trampled by the status quo stampede. At the very least, they're constantly protecting their necks from frothy, sharp-toothed jackals. You have to work with those cows and canines, often forming "committees." Oh, and, you know, they might actually be helpful.

How you use this committee is situational. It might be to placate enemies who'd rather see you fail than succeed, looking to salvage corporate resources from the HMS Transformation's wreck. The old maxim to keep your friends close and your enemies closer summarizes this tactic well. Getting your "enemies" committed to and involved in your project is an obvious, facile suggestion, but it'll keep them at bay. You'll need to remove my cynical tone from your committee and actually rely on them for strategic and tactical input, support in budgeting cycles, and, eventually, involvement in your change.

For example, a couple of years back I was working with all the C-level executives at a large retailer. They'd come together to understand IT's strategy to become a software-defined business. Of course, IT could only go so far and needed the actual lines of business to support and adopt that change. In the morning, the IT executives explained how transforming to a cloud native organization would improve the company's software capabilities. In the afternoon, they all started defining a new application focused on driving repeat business, using the very techniques discussed in the morning. This workshopping solidified IT's relationship with key lines of business and started the work of transforming those businesses. It also kicked off real, actual work on the initiative. By seeing the benefits of the new approach in action, IT also won over the CFO, who'd been the most skeptical.

As this anecdote illustrates, building an alliance often requires serving your new friends. IT typically has little power to drive change, especially after decades of positioning itself as a service bureau instead of a core enabler of growth. As seen in the Duke lineworker case above, asking the business what they'd like changed is more effective than presuming to know. As that case also shows, a small batch process discovers what actually needs to happen despite the business' initial theories. But getting there requires more of a "the customer is always right" approach on IT's part.

Now, there are many tactics for managing this committee; as ever, Kotter does an excellent job of cataloging them in Leading Change. In particular, you want to make sure the committee members remain engaged. Good executives can quickly smell a waste of time and will start sending junior staff if the wind of change smells stale (wouldn't you do the same?). You need to manage their excitement, treating them as stakeholders and customers, not just collaborators. Luckily, most organizations I've spoken with find that cloud native technologies and methodologies so vastly improve their software capabilities, in such a short amount of time, that winning over peers is easy. As one executive a year into their digital transformation program told me, "holy-@$!!%!@-cow we are starting to accelerate. It's getting hard to not overdo it. I have business partners lined up out the door."


Tracking your improvement — "metrics"

This post is an early draft of a chapter in my book, Monolithic Transformation.

Tracking the health of your overall innovation machine can be both overly simplified and overly complex. What you want to measure is how well you’re doing at software development and delivery as it relates to improving your organization’s goals. You’ll use these metrics to track how your organization is doing at any given time and, when things go wrong, get a sense of what needs to be fixed. As ever with management, you can look at this as a part of putting a small batch process in place: coming up with theories for how to solve your problems and verifying if the theory worked in practice or not.

All that monitoring

In IT most of the metrics you encounter are not actually business oriented and instead tell you about the health of your various IT systems and processes: how many nodes are left in a cluster, how much network traffic customers are bringing in, how many open bugs development has, or how many open tickets the help desk is dealing with on average.

Example of Pivotal Cloud Foundry’s Healthwatch metrics dashboard.

All of these metrics can be valuable, just as all of them can be worthless in any given context. Most of these technical metrics, coupled with ample logs, are needed to diagnose problems as they come and go. In recent years, there’ve been many advances in end-to-end tracing thanks to tools like Zipkin and Spring Sleuth. Log management is well into its newest wave of improvements, and monitoring and IT management analytics are just ascending another cycle of innovation — they call it “observability” now, that way you know it’s different this time!

Instead of looking at all of these technical metrics, I want to look at a few common metrics that come up over and over in organizations that are improving their software capabilities.

Six common cloud native metrics

Some metrics consistently come up when measuring cloud native organizations:

Lead Time


Lead time is how long it takes to go from an idea to running code in production; it measures how long your small batch loop takes. It includes everything in between: specifying the idea, writing and testing the code, passing any governance and compliance needs, planning for deployment and management, and then getting it up and running in production.

If your lead time is consistent enough, you have a grip on IT’s capability to help the business by creating and deploying new software and features. Being this machine for innovation through software is, as you’ll hopefully recall, the whole point of all this cloud native, agile, DevOps, and digital transformation stuff.

As such, you want to monitor your lead time closely. Ideally, it should be a week. Some organizations go longer, up to two weeks, and some are even shorter, like daily. Target and then track an interval that makes sense for you. If you see your lead time growing, then you should stop everything, find the bottlenecks, and fix them. If the bottlenecks can't be fixed, then you probably need to do less each release.
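As a sketch of what tracking this could look like, here's a minimal Python example that computes average lead time from idea-to-production timestamps and flags when it drifts past a target. The records and the 7-day target are invented for illustration; in practice you'd pull these dates from your tracker and deployment pipeline.

```python
from datetime import datetime
from statistics import mean

# Hypothetical records: when an idea entered the backlog and when the
# resulting code was running in production.
deploys = [
    {"idea_logged": datetime(2019, 3, 1), "in_production": datetime(2019, 3, 6)},
    {"idea_logged": datetime(2019, 3, 4), "in_production": datetime(2019, 3, 11)},
    {"idea_logged": datetime(2019, 3, 8), "in_production": datetime(2019, 3, 18)},
]

# Lead time per deploy, in days, idea to running code.
lead_times = [(d["in_production"] - d["idea_logged"]).days for d in deploys]
average_lead_time = mean(lead_times)

# A simple tripwire: if lead time creeps past your target interval
# (say, a week), stop and go find the bottlenecks.
TARGET_DAYS = 7
if average_lead_time > TARGET_DAYS:
    print(f"Lead time {average_lead_time:.1f} days exceeds target; find the bottleneck")
```

The point isn't the arithmetic, which is trivial, but that lead time only works as a metric if you capture both endpoints consistently for every release.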

Velocity

Velocity shows how many features are typically deployed each week. Whether you call features “stories,” “story points,” “requirements,” or whatever else, you want to measure how many of them the team can complete each week; I’ll use the term “story.” Velocity tells you three things:

  1. Your progress at improving, and your ongoing performance — at first, you want to find out what your team’s velocity is. They will need to “calibrate” on what they’re capable of doing each week. Once you establish this baseline, if it goes down something is going wrong and you can investigate.
  2. How much the team can deliver each week — once you know how many features your team can deliver each week, you can more reliably plan your roadmaps. If a team can only deliver, for example, 3 stories each week, asking them to deliver 20 stories in a month is absurd. They’re simply not capable of doing that. Ideally, this means your estimates are no longer, well, always wrong.
  3. If the scope of features is getting too big or too small — if a previously reliably performing team’s velocity starts to drop, it means that they’re scoping their stories incorrectly: they’re taking on too much work, or someone is forcing them to. On the other hand, if the team is suddenly able to deliver more stories each week or finds themselves with lots of extra time, it means they should take on more stories each week.

There are numerous ways to first calibrate on the number of stories a team can deliver each week, and managing that process at first is very important. As they calibrate, your teams will, no doubt, get it wrong for many releases, which is to be expected (and one of the motivations for picking small projects at first instead of big, important ones). Other reports, like burn-down charts, can help illustrate how the team's velocity is tracking towards delivering major releases (or each release) and help you monitor any deviation from what's normal.
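To make the calibration idea concrete, here's a rough Python sketch that establishes a velocity baseline from the calibration period and flags a drop. The weekly story counts and the 25% drop threshold are illustrative assumptions, not a standard; tune both to your own teams.

```python
from statistics import mean

# Hypothetical stories completed per week; the first six weeks are the
# calibration period, the last two are the most recent performance.
stories_per_week = [3, 4, 3, 4, 4, 3, 2, 2]

baseline = mean(stories_per_week[:6])  # calibrated velocity
recent = mean(stories_per_week[-2:])   # last two weeks

# If velocity drops well below the baseline, something is wrong:
# stories are scoped too big, or the team is blocked.
DROP_THRESHOLD = 0.75
if recent < DROP_THRESHOLD * baseline:
    print("Velocity is dropping; check story scoping and blockers")
```

A real version would live in your tracker's reporting, but the logic is the same: establish the baseline first, then watch for deviation rather than judging any single week.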

Latency

In general, you want your software to be as responsive as possible. That is, you want it to be fast. We often think of speed in this case: how fast is the software running, and how fast can it respond to requests? Latency is a slightly different way of thinking about speed, namely, how long a request takes to process end-to-end, returning back to the user.

Latency is different than the raw "speed" of the network. For example, a fast network will send a static file very quickly, but if the request requires connecting to a database to create and then retrieve a custom view of last week's Austrian sales, it will take a while and, thus, the latency will be much longer than downloading an already made file.

From a user’s perspective, latency is important because an application that takes 3 minutes to respond versus 3 milliseconds might as well be “unavailable.” As such, latency is often the best way to measure if your software is working.

Measuring latency can be tricky…or really simple. Because it spans the entire transaction, you often need to rely on patching together a full view — or "trace" — of any given user transaction. This can be done by looking at logs, doing real or synthetic user-centric tracing, and using any number of application performance monitoring (APM) tools. Ideally, the platform you're using will automatically monitor all user requests and also catalog all of the sub-processes and sub-sub-processes that make up the entire request. That way, you can start to figure out why things are so slow.
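Stripped of all the tracing infrastructure, the core measurement is just timing the whole request end-to-end. A toy Python sketch, where the handler and its artificial 50 ms delay are stand-ins for a real database-backed request:

```python
import time

def handle_request():
    # Stand-in for a real handler: a static file would return almost
    # instantly, while a database-backed report takes noticeably longer.
    time.sleep(0.05)
    return "last week's Austrian sales"

# Time the transaction end-to-end, as the user would experience it.
start = time.perf_counter()
result = handle_request()
latency_ms = (time.perf_counter() - start) * 1000

print(f"request took {latency_ms:.0f} ms")
```

In production you'd never hand-time each handler like this; an APM tool or the platform's tracing does it for every request. But the number they report is this one: wall-clock time from request to response.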

Error Rates

Often, your systems and software will tell you when there's an error: an exception is thrown in the application layer because the email service is missing, an authentication service is unreachable so the user can't log in, a disk is failing to write data. Tracking and monitoring these errors is, obviously, a good idea. They will range from "is smoke coming out of the box?" to more obtuse ones, like a service being unreachable because DNS is misconfigured. Oftentimes, errors are roll-ups of other problems: when a web server fails, returning a 500 response code, it means something went wrong, but the error doesn't usually tell you what happened.

Error rates also occur before production, while the software is being developed and tested. You can look at failed tests as error rates, as well as broken builds and failed compliance audits.

Fixing errors in development can be easier and more straightforward, whereas triaging and sorting through errors in production is an art. What's important to track with errors is not just that one happened, but the rate at which they happen, perhaps errors per second. You'll have to figure out an acceptable level of errors because there will be many of them. What you do about all these errors will be driven by your service targets. These targets may be foisted on you in the form of heritage Service Level Agreements, or you might have been lucky enough to negotiate some sane targets.

Chances are, a certain rate of errors will be acceptable (have you ever noticed that sometimes you just need to reload a web page?). Each part of your stack will throw off different errors: some are meaningless (perhaps they should be warnings or even just informative notices, e.g., “you’re using an older framework that might be deprecated sometime in the next 30 years”), and others could be too costly, or even impossible, to fix (“1% of users’ audio uploads fail because their upload latency and bandwidth are too slow”). And some errors may be important above all else: if an email server is losing emails every 5 minutes… something is terribly wrong.

Generally, errors are collected from logs, but you could also poll the service in question, or it might send alerts to your monitoring systems, be that an IT management system or just your phone.
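To make “errors per second” concrete, here’s a minimal sketch that counts 5xx responses per one-second bucket from web-server-style log lines. The log format, status codes, and acceptable rate are all invented for illustration:

```python
from collections import Counter

# Fake access-log lines: "timestamp status" (real logs carry much more)
log_lines = [
    "2018-06-01T10:00:01 200",
    "2018-06-01T10:00:01 500",
    "2018-06-01T10:00:02 200",
    "2018-06-01T10:00:02 503",
    "2018-06-01T10:00:02 500",
    "2018-06-01T10:00:03 200",
]

# Count 5xx errors per one-second bucket
errors_per_second = Counter()
for line in log_lines:
    timestamp, status = line.split()
    if status.startswith("5"):
        errors_per_second[timestamp] += 1

# Alert when the rate exceeds your (negotiated!) acceptable level
ACCEPTABLE_ERRORS_PER_SECOND = 1
for second, count in sorted(errors_per_second.items()):
    if count > ACCEPTABLE_ERRORS_PER_SECOND:
        print(f"{second}: {count} errors -- above acceptable rate")
```

The interesting decisions are the ones the code hides: what counts as an error, what rate is acceptable, and what you do when the alert fires.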

Mean-time-to-repair (MTTR)

If you can accept the reality that things will go wrong with software, how quickly you can fix those problems becomes a key metric. It’s bad when an error happens, but it’s really bad if it takes you a long time to fix it.

Tracking mean-time-to-repair is an ongoing measurement of how quickly you can recover from errors. As with most metrics, this gives you a target to improve toward and then lets you make sure you’re not getting worse.

If you’re following cloud native practices and using a good platform, you can usually shrink your MTTR with the ability to roll back changes. If a release turns out to be bad (an error), you can back it out quickly, removing the problem. This doesn’t mean you should blithely roll out bad releases, of course.

Measuring MTTR might require tracking support tickets and otherwise manually tracking the time between incident detection and fix. As you automate remediations, you might be able to easily capture those rates. As with most of these metrics, what becomes important in the long term is tracking changes to your acceptable MTTR and figuring out why the negative changes are happening.
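The arithmetic itself is simple once you capture detection and fix times. Here’s a sketch in Python, with invented incident data of the sort you might pull from a ticketing system:

```python
from datetime import datetime

# (detected, fixed) timestamps pulled from your incident tracker
incidents = [
    ("2018-03-01 09:00", "2018-03-01 09:45"),
    ("2018-03-04 14:10", "2018-03-04 16:10"),
    ("2018-03-09 22:30", "2018-03-09 22:45"),
]

fmt = "%Y-%m-%d %H:%M"
# Minutes from detection to fix for each incident
repair_minutes = [
    (datetime.strptime(fixed, fmt) - datetime.strptime(detected, fmt)).total_seconds() / 60
    for detected, fixed in incidents
]

mttr = sum(repair_minutes) / len(repair_minutes)
print(f"MTTR: {mttr:.0f} minutes")  # mean of 45, 120, and 15 minutes
```

The hard part isn’t the math; it’s reliably capturing when an incident was actually detected and when it was actually fixed, which is why automated remediation makes this metric easier.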

Costs

Everyone wants to measure cost, and there are many costs to measure. In addition to the time spent developing software and the money spent on infrastructure, there are ratios you’ll want to track, like the number of applications per platform operator. Typically, these kinds of ratios give you a quick sense of how efficiently IT runs. If each application takes one operator, something is probably missing from your platform and process. T-Mobile, for example, manages 11,000 containers in production with just 8 platform operators.

There are also less direct costs, like the opportunity and value lost waiting on slow release cycles. For example, the US Air Force calculated that it saved $391M by modernizing its software methodology. The point is that you obviously need to track the cost of what you’re doing, but you also need to track the cost of doing nothing, which might be much higher.

Business Value

“Comcast Cloud Foundry Journey — Part 2,” Greg Otto, Comcast, June 2017.

Of course, none of the metrics so far has measured the most valuable, but most difficult, metric: value delivered. How do you measure your software’s contribution to your organization’s goals? Measuring how the processes and tools you use contribute to those goals is usually harder still. This is the dicey plain of correlation versus causation.

Somehow, you need to come up with a scheme that shows and tracks how all this cloud native stuff you’re spending time and money on is helping the business grow. You want to measure value delivered over time to:

  1. Prove that you’re valuable and should keep living and get more funding,
  2. Figure out when you’re failing to deliver so that you can fix it

There are a few prototypes of linking cloud native activities to business value delivered. Let’s look at a few examples:

  1. As described in the case study above, when the IRS replaced poorly available call centers with software, IT delivered clear business value. Latency and error rates decreased dramatically (with phone banks, only 37% of calls made it through), and the design improvements they discovered led to increased usage of the software, pulling people away from the phones. And then the results are clear: by the fall of 2017, this application had collected $440m in back taxes.
  2. Sometimes, delivering “value” means satisfying operational metrics rather than contributing dollars. This isn’t the best of all situations to be in, but if you’re told, for example, that in the next two years 60% of applications need to be “on the cloud,” then you know the business value you’re supposed to deliver on. In such cases, simply tracking the replatforming of applications to a cloud platform will probably suffice.
  3. Running existing businesses more efficiently is a popular goal, especially for large organizations. In this case, the value you deliver with cloud native will usually be speeding up business processes, removing wasted time and effort, and increasing quality. Duke Energy’s lineworker case is a good example here. Duke gave lineworkers a better, highly tuned application that queues and coordinates their work in the field. The software increased lineworkers’ productivity and reduced waste, directly creating business value in efficiencies.
  4. The US Air Force’s tanker scheduling case study is another good example here: by adopting a cloud native software model, they were able to ship the first version in 120 days and started saving hundreds of thousands of dollars in fuel costs each week. Additionally, the USAF computed the cost of delay — using the old methods that took longer — at $391M, a handy financial metric to consider.
  5. And then, of course, there comes raw competition. This most easily manifests itself as time-to-market, either to match competitors or to get new features out before them. Liberty Mutual’s ability to enter the Australian motorcycle market, from scratch, in six months is a good example. Others, like Comcast, demonstrate competing with major disruptors like Netflix.

It’s easy to get very nuanced and detailed when you’re mapping IT to business value. You need to keep things as simple as possible or, put another way, only as complex as needed. As with the examples above, clearly link your cloud native efforts to straightforward business goals. Simply “delivering on our commitment to innovation” isn’t going to cut it. If you’re suffering under vague strategic goals, make them more concrete before you start using them to measure yourself. On the other end, just lowering costs might be a bad goal to shoot for. I talk with many organizations that used outsourcing to deliver on the strategic goal of lowering costs and now find themselves incapable of creating software at the pace their business needs to compete.

Fleshing out metrics

I’ve provided a simplistic start at metrics above. Each layer of your organization will want to add more detail to get better telemetry on itself. Creating a comprehensive, umbrella metrics system is impossible, but there are many good templates to start with.

Pivotal has been developing a cloud native centric template of metrics, divided into 5 categories:

BuiltToAdapt Benchmark.

These metrics cover platform operations, product, and business metrics. Not all organizations will want to use all of the metrics, and there are usually some missing. But this 5 S’s template is a good place to start.

If you prefer to go down rabbit holes rather than shelter under umbrellas, there are more specialized metric frameworks to start with. Platform operators should probably start by learning how the Google SRE team measures and manages Google, while developers could start by looking at TK( need some good resource ).

Whatever the case, make sure the metrics you choose are

  1. targeting the end goal of putting a small batch process in place to create better software,
  2. reporting on your ongoing improvement towards that goal, and,
  3. alerting you that you’re slipping and need to fix something…or find a new job.


Dealing with compliance — it might even be a good idea


“Compliance” will be one of your top bugbears as you improve how your organization does software. As numerous organizations have been finding, however, compliance is a solvable problem. In most cases, you can even improve the quality of compliance and risk management with your new processes and tools, introducing more reliable controls than traditional approaches.

I’ve seen three approaches to dealing with compliance, often used together as a sort of maturity model:

  1. Ignore compliance, compliantly — select projects to work on that don’t need much compliance, if any. Eventually, you’ll want to work on projects that do, but this buys you time to learn by doing and build up a small series of successful projects.
  2. Minimal Viable Compliance — often, the compliance requirements you must follow have built up over years, even decades. It’s very rare that any control is removed, but very frequently they should be. Find the smallest set of controls you actually need to satisfy.
  3. Transform compliance — as you scale up your transformation efforts, like most organizations you’ll find that you have to work with auditors. Most organizations are finding that simply involving auditors in your software lifecycle from start to end not only helps you pass compliance with flying colors, but also improves the actual compliance work.

But first, what exactly is “compliance”?

Paul tells you what compliance is.

If you’re a large organization, chances are you’ll have a set of regulations you need to comply with, both self- and government-imposed. In software, the point of regulations is often to govern the creation of software, how it’s managed and run in production, and how data is handled. The point of most compliance is risk management, e.g., making sure developers deliver what was asked for, making sure they follow protocol for tracking changes and who made them, making sure the code and the infrastructure are secure, and making sure that people’s personal data is not needlessly exposed.

Compliance often takes the form of a checklist of controls and verifications that must be passed. Auditors are the staff who establish those lists, track down their status in your software, and negotiate whether each control must be followed. The auditors are often involved before and after the process: establishing the controls, then verifying that they were followed. It’s rare that auditors are involved during the process, which, it turns out, is a huge source of wasted time. Getting involved after your software has been created requires much compliance archaeology and, sadly, much cutting and pasting between emails and spreadsheets, paired with infinite meeting scheduling.

When you’re looking to transform your software capabilities, however, these traditional approaches to compliance often end up hurting businesses more than helping them. As Liberty Mutual’s David Ehringer describes it:

The nature of the risk affecting the business is actually quite different: the nature of that risk is, kind of, the business disrupted, the business disappearing, the business not being able to react fast enough and change fast enough. So not to say that some of those things aren’t still important, but the nature of that risk is changing.

Ehringer says that many compliance controls are still important, but there are better ways of handling them without worsening the largest risk: going out of business because innovation was too late.

Let’s look at three ways that organizations are avoiding failure by compliance.

Ignore compliance, compliantly

While just a quick fix, engineering a way to avoid compliance is a common first approach. Early on, when you’re learning a new mindset for software and building up a series of small successes, you’ll likely work on applications that require little to no compliance. These kinds of applications often contain no customer data, don’t directly drive or modify core processes, or otherwise touch anything that’d need compliance scrutiny.

These may seem disconnected from anything that matters and, thus, not worth working on. Early on, though, the ability to get moving and prove that change is possible often trumps any business value concerns. You don’t want to eat these “empty calorie” projects too much, but it’s better than being killed off at the start.

Minimal Viable Compliance

Part of what makes compliance seem like toil is that many of the controls seem irrelevant. Over the years, compliance builds up like plaque in your steak-loving arteries. The various controls may have made sense at some point — often added in response to a crisis that occurred because no such control existed. At other times, the controls may simply not be relevant to the way you’re doing software.

Clearing away old compliance

When you really peer into the audit abyss, you’ll often find that many of the tasks and time bottlenecks are caused by too much ceremony and by processes no longer needed to achieve the original goals of auditability. Target’s Heather Mickman recounts her experience with just such an audit-abyss clean-up in The DevOps Handbook:

As we went through the process, I wanted to better understand why the TEAP-LARB [Target’s existing governance] process took so long to get through, and I used the technique of “the five whys”…which eventually led to the question of why TEAP-LARB existed in the first place. The surprising thing was that no one knew, outside of a vague notion that we needed some sort of governance process. Many knew that there had been some sort of disaster that could never happen again years ago, but no one could remember exactly what that disaster was, either.

As Boston Scientific’s CeeCee O’Connor says, finding your path to minimal viable compliance means you’ll actually need to talk with auditors and understand the compliance needs. You’ll likely need to negotiate whether various controls are needed, more or less proving that they’re not. When working with auditors on an application that helped people manage a chronic condition, O’Connor’s group first mapped out what they called “the path to production.”

Boston Scientific’s “Path to Production.”

This was a value-stream-like visual that showed all of the steps and processes needed to get the application into production, including, of course, compliance steps. Representing each of these as sticky notes on a wall allowed the team to quickly work with auditors to go through each step — each sticky note — and ask if it was needed. Answering such a question requires some criteria, so, applying lean, the team asked “does this process add value for the customer?”

You’re already helping compliance

This mapping and systematic approach allowed the team and auditors to negotiate the actual set of controls needed to get to production. At Boston Scientific, the compliance standards had built up over 15 years, growing thick, and this process helped thin them out, speeding up the software delivery cycle.

The opportunity to work with auditors will also let you demonstrate how many of your practices are already improving compliance. For example, pair programming means that all code is continuously being reviewed by a second person and detailed test suite reports show that code is being tested. Once you understand what your auditors need, there are likely other processes that you’re following that contribute to compliance.

Discussing his work at Boston Scientific, Pivotal’s Chuck D’Antonio describes a happy coincidence between lean design and compliance. When it comes to pacemakers and other medical devices, you’re only supposed to build exactly the software needed, removing any extraneous software that might bring bugs. This requirement matches almost exactly one of the core ideas of minimum viable products and lean: only deliver the code needed. Finding these happy coincidences, of course, requires working closely with auditors. It’ll be worth a day or two of meetings and tours to show your auditors how you do software and ask them if anything lines up already.

Case Study: “It was way beyond what we needed to even be doing.”

Operating in five US states and insuring around 15 million people, health insurance provider HCSC is up to its eyeballs in regulations and compliance. As it started to transform, HCSC initially felt like getting over the compliance hurdle would be impossible. Mark Ardito recounts how easy it actually was once auditors were satisfied with how much better a cloud-native approach was:

Turns out it’s really easy to track a story in [Pivotal] Tracker to a commit that got made in git. So I know the SHA that was in git, that was that Tracker story. And then I know the Jenkins job that pushed it out to Cloud Foundry. And guess what? I have this in the tools. There’s logs of all these things happening. So slowly, I was able to start to prove out auditability just from Jenkins logs, git SHAs, things like that. So we started to see that it became easier and easier to prove audits instead of Word documents, Excel documents — you can type anything you want in a Word document! You can’t fake a log from git and you can’t fake a log in Jenkins or Cloud Foundry.

Automation makes auditors happier and removes huge, time-sucking bottlenecks.

Transform compliance

While you may be able to avoid compliance or eliminate some controls, most regulations are likely unavoidable. Speeding up the compliance bottleneck, then, requires changing how compliance is done. Thankfully, using a build pipeline and cloud platforms provides a deep set of tools to speed up compliance. Even better, you’ll find cloud native tools and processes improve the actual quality and accuracy of compliance.

Compliance as code

Many of the controls auditors need can be satisfied by adding minor steps to your development process. For example, as Boston Scientific found, one of their audit controls specified that a requirement had to be tracked through the development process. Instead of having to verify this after the team was code complete, they made sure to embed the story ID into each git commit, automated build, and deploy. Along these lines, the OpenControl project has put several years of effort into automating even the most complicated government compliance regimes. Chef’s InSpec project is also being used to automate compliance.

Proactively putting in these kinds of tracers is a common pattern for organizations that are looking to automate compliance. There’s often a small amount of scripting required to extract these tracers and present them in a human-readable format, but that work is trivial in comparison to the traditional audit process.
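As a hypothetical example of that small amount of scripting: if teams embed a tracker story ID in each commit message, a few lines of Python can assemble the trail an auditor wants to see. The commit messages and the `[#123]` ID convention here are made up:

```python
import re

# Commit subjects as they might come from `git log --format=%s`
commit_messages = [
    "[#4512] add audit log to payment service",
    "fix typo in README",
    "[#4512] verify payment audit log in CI",
    "[#4608] require approval before deploy",
]

# Map each story ID to the commits that implemented it
trail = {}
for message in commit_messages:
    match = re.match(r"\[#(\d+)\]", message)
    if match:
        trail.setdefault(match.group(1), []).append(message)

# A human-readable report for the auditors
for story_id, commits in sorted(trail.items()):
    print(f"story {story_id}: {len(commits)} commit(s)")
```

From there, the same tracer can be joined against build and deploy logs, giving auditors the requirement-to-production chain without any spreadsheet archaeology.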

Put compliance in the platform

Another common tactic is to put as much control enforcement into your cloud platform as possible. In a traditional approach, each application comes with its own set of infrastructure and related configuration: not only the “servers” needed, but also systems and policy for networking, data access, security settings, and so forth.

This makes your entire stack of infrastructure and software a single, unique unit that must be audited each release, creating a huge amount of compliance work even for a single line of code: everything must be checked from the dirt to the screen. As Raytheon’s Keith Rodwell lays out, working with auditors, you can often show them that by using the same, centralized platform for all applications, you can inherit compliance from the platform. This lets you avoid the time taken to re-audit each layer in your stack.

The US federal government’s cloud.gov platform provides a good example of baking controls into the platform. 18F, the group that built and supports cloud.gov, describes how their platform, based on Cloud Foundry, takes care of 269 controls for product teams:

Out of the 325 security controls required for Moderate-impact systems, cloud.gov handles 269 controls, and 41 controls are a shared responsibility (where cloud.gov provides part of the requirement, and your applications provide the rest). You only need to provide full implementations for the remaining 15 controls, such as ensuring you make data backups and using reliable DNS (Domain Name System) name servers for your websites.

Organizations that bake controls into their platforms find that they can reduce the time to pass audits from months (if not years!) to just weeks or even days. The US Air Force has had similar success with this approach, bringing security certification down from 18 months to 30 days, sometimes even just 10.

Compliance as a service

Finally, as you get deeper into dealing with compliance, you might even find yourself working more closely with auditors. It’s highly unlikely that they’ll become part of your product team, though that could happen in some especially compliance-driven government and military work where being compliant is a huge part of the business value. However, organizations often find that auditors become closely involved throughout their software life-cycle. Part of this is giving auditors the tools to proactively check on controls first hand.

Home Depot’s Tony McCulley suggests giving auditors access to your continuous delivery process and deployment environment. This means auditors can verify compliance questions on their own instead of asking product teams to do that work. Effectively, you’re letting auditors peer into, and even help out with, controls in your software. Of course, this only works if you have a well-structured, standardized platform supporting your build pipeline, with good UIs that non-technical staff can access.

Making compliance better

“There have obviously been culture shocks. What is more interesting, though, is that the teams that tend to have the worst culture shock are not those typical teams that you might think of, audit or compliance. In fact, if you’re able to successfully communicate to them what you’re doing, DevOps and all of the associated practices seem like common sense. [Auditors] say, ‘Why weren’t we doing this before?’” — Manuel Edwards, E*TRADE, Jan 2016

The net result of all these efforts to speed up compliance often improves the quality of compliance itself:

  1. Understanding and working with auditors gives the product team the chance to write software that more genuinely matches compliance needs.
  2. The traceability of requirements, authorization, and automated test reports give auditors much more of the raw materials needed to verify compliance.
  3. Automating compliance reporting and baking controls into the platform creates much more accurate audits and can give so-called “controls” actual, programmatic control to enforce regulations.

As with any discussion that includes the word “automation,” some people take all of this to mean that auditors are no longer needed. That is, we can get rid of their jobs. This sentiment then gets stacked up into the eternal “they” antipattern: “well, they won’t change, so we can’t improve anything around here.”

But, also as with any discussion that includes the word “automation,” things are not so clear. What all of these compliance optimizations point to is how much waste and extra work there is in the current approach to compliance.

This often means auditors working overtime, on weekends, and over holidays. If you can improve the tools auditors use, you don’t need to get rid of them. Instead, as with previously overworked developers, you end up getting more value out of each auditor and, at the same time, they can go home on time. As with developers, happy auditors mean a happier business.


Rule 1: Don’t go to meetings. Rule 2: See rule 1.

Coffee is for coders.

Whether you’re doing waterfall, DevOps, PRINCE, SAFe, PMBOK, ITIL, or whatever process and certification scheme you like, chances are you’re not using your time wisely. I’d estimate that most of the immediate, short-term benefit organizations get from switching to cloud native comes simply from actually, truly following a process that both focuses your efforts on creating customer value (useful software that helps customers out, making them keep paying or pay you more) and manages your time wisely. This is like the first 10–20 pounds you lose on any diet: that just happens because you’re actually doing something where before you were doing nothing.

Fewer developer meetings, more pairing up

When it comes to time management, eliminating meetings is the easiest, biggest productivity boost there is. Start with developers. They should be doing actual work (probably “coding”) 5–6 hours a day and going to only a handful of meetings a week. If the daily stand-up isn’t getting them all the information they need for the day, look to improve the information flow or limit it to just what’s needed.

Somewhat counter-intuitively, pairing up developers (and other staff, it turns out) will increase productivity as well. When they pair, developers stay synced up on the knowledge they need, learning how all parts of the system work with a built-in tutor in their pair. Keeping up to speed like this means developers have even fewer meetings to go to, like the ones where they learn about the new pagination framework that Kris made. Pairing helps with more than just knowledge maintenance. While it feels like pairing “halves” your developers, as one of the original pair programming studies put it: “the defect removal savings should more than offset the development cost increase.” Pairs in studies over the past 20+ years have consistently written higher quality code, and written it faster, than solo coders.

Coupled with a product mindset that involves the whole team in the process from start to end, they’ll be up to speed on the use cases and customers. And, by putting small batches in place, the amount of up-front study needed (requiring meetings) will be reduced to bite-sized chunks.

It takes a long time to digest 300 pages

We’re going to need a lot more coffee to get through this requirements meeting.

The requirements process is a notorious source of wasteful meetings. This is especially true when companies are still doing big, up-front analysis to front-end agile development teams.

For example, at a large health insurance company, the product owner at first worked with business analysts, QA managers, and operations managers to get developers synced up and working. The product owner quickly realized that most of the content in the conversations was not actually needed, or was overkill. With some corporate slickness, the product owner removed the developers from this meeting-loop, and essentially /dev/null’ed the input that wasn’t needed.

Assign this story to management

Staff can try to reduce the number of meetings they go to (and start practices like pairing), but, to be effective, managers have the responsibility to make it happen. At Allstate, managers would put “meetings” on developers’ calendars that said “Don’t go to meetings.” When you read results like Allstate going from 20% productivity to 90%, you can see how effective eliminating meetings, along with all their other improvements, can be for an organization.

If you feel like developers must go to a meeting, first ask how you can eliminate that need. Second, track it like any other feature in the release, accounting for its time and cost. Make the cost of the misery visible.

This concept of attending fewer meetings isn’t just for developers. The same productivity outcomes can be achieved for QA, the product owners, operations, and everyone else. Once you’ve done this, you’ll likely find having a balanced team easier and possible. Of course, once you have everyone on a balanced team, following this principle is easier. Reducing the time your staff spends in meetings and, instead, increasing the time they spend coding, designing, and doing actual product management (like talking with end users!) gets you the obvious benefit of increasing productivity by 4x–5x.

If you feel you cannot do this, at least track the time you’re losing/using on meetings. A good rule of thumb is that context switching (going from one task to another) takes about 30 minutes, so an hour-long meeting will actually take out 2 hours of an employee’s time. To get ahold of how you’re choosing to spend your time, in reality, track these as tasks somehow, perhaps even adding in stories for “the big, important meeting.” And then, when you’re project tracking, make sure you actually want to spend your organization’s time this way. If you do: great, you’re getting what you want! More than likely, though, spending time doing anything but creating and shipping customer value isn’t something you want to keep doing.
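That rule of thumb is easy to turn into a quick calculator so you can see what a recurring meeting really costs a team. The numbers below are illustrative, using the 30-minute context-switch estimate from above:

```python
CONTEXT_SWITCH_MINUTES = 30  # rough cost of switching tasks, on each side of a meeting

def real_cost_minutes(meeting_minutes, attendees):
    """Actual time a meeting takes out of the team, per the rule of thumb:
    the meeting itself plus a context switch before and after, per attendee."""
    per_person = meeting_minutes + 2 * CONTEXT_SWITCH_MINUTES
    return per_person * attendees

# A one-hour meeting takes two hours out of each attendee's day...
print(real_cost_minutes(60, 1))       # 120 minutes
# ...so a weekly one-hour meeting with 8 developers costs 16 hours a week
print(real_cost_minutes(60, 8) / 60)  # 16.0 hours
```

Run the numbers on your own calendar before deciding the big, important meeting is worth it.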

It may seem ridiculous to suggest that paying attention to time spent in meetings even needs to be uttered. In my experience, management may feel like meetings are good, helpful, and not too onerous. After all, meetings are a major tool for managers to learn how their businesses are performing, discuss growth and optimization options, and reach decisions. Meetings are the whiteboards and IDEs of managers. Management needs to look beyond the utility meetings give them and realize that, for most everyone else, meetings are a waste of time.

For more on improving software in your organization check out my 49 pages in a fancy PDF on the topic.

De-shittifying Tech T-Shirts

I have a lot of tech t-shirts. Here’s an overview of my personal style and opinions. There’s a lot of politics in t-shirt selection, much of it good, still even more of it driven by aesthetics. I’m not seeking to win any points in those games (well, except maybe that all genders should have shirts that are designed for them), just telling you what I like.

Why? I get asked for input on t-shirts at least twice a year (often more). Here’s a URL for that input. And I end up getting a lot of tech t-shirts. Thankfully, my mom really likes them, so about 2–3 times a year I give her a couple of grocery bags full of t-shirts, the shitty ones.

Less shit on the shirt

First, some general comments:

  1. I don’t like any shit on the back of the shirt, unless it’s a tiny brand name or URL right at the top.
  2. I don’t like those shirts with a big, sticky feeling print thing on them (with an exception for pure awesomeness as you’ll see in a couple of them). I think that means I like “screen-print” shirts.
  3. They, of course, have to be that super-soft material. Those “beefy-t” shirts go right into the plastic bag of shirts I give to my mom (well, actually, I just don’t pick them up unless I get tricked into doing so).
  4. I’m overweight — and I think most people who get tech t-shirts are (ducks)— so I don’t like those “slim fit” shirts. No one wants to see me act out the hit song, “My Humps.”
  5. You gotta have women sizes, of course. (Close followers will instantly notice that we don’t do that over at the podcast — need to add a card in Trello posthaste!)
  6. Pictures and designs instead of just words are good, but words are fine.
  7. In general, your company’s logo is crap on a shirt. And for God’s sake, don’t put it on the sleeve. Don’t put anything on the sleeve.
  8. Speaking of logos, those t-shirts where you list a bunch of sponsor logos on the back are garbage.
  9. Colors: this is tricky. I clearly like grey shirts instead of bright colors. Also, I generally don’t like black, as Dan Baskette put it: “I’m not at a Motley Crue concert, so don’t give me a black t-shirt.” Actual color (blue, green, red, etc.) is probably OK. But. I like grey.

Here’s a selection. These are not all, by far, t-shirts from tech conferences, but most of them could be and illustrate my taste:

The DevOpsDays Austin people always do well. The MSP shirt is good too.
The Pickle Rick shirt is an example of a bunch of shit on the front being OK because it’s awesome.
I like grey.
The Kansas City one is a good example of a bunch of shit on the front without being shitty.
Pretty basic, and both brand names, but both good. I have three of the Pivotal ones; they’re good.

Apparently, I buy a lot of (super-fucking-expensive-oh-my-God-I-should-just-be-a-dandy-fellow-and-shop-at-Nordstrom-oh-I’m-supporting-independent-artists-OK-then-here’s-my-wallet-and-ATM-PIN) Cotton Bureau shirts.

Bonus! Hoodies’n’shit

Occasionally, you get lucky and there’s a hoodie or jacket. First, hoodies and jackets are super-awesome to get at a conference. The OpenStack people are really good at this, and at Pivotal we’ve had several internal conferences that were awesome on this front too.

For me, hoodies and jackets have slightly different rules:

  1. I’m not in a motorcycle gang, so I don’t want any shit on the back.
  2. Same for the front.
  3. That said, there are some exceptions if it’s subtle. The two OpenStack hoodies I have are good examples of this.
  4. It’s OK to just discreetly put your company’s name and logo on the left breast.
  5. A thin hoodie is actually pretty nice — I have an OpenStack hoodie that’s an excellent example of this; it’s a good “layering” thing versus the ultra thick ones.
  6. When it comes to fabric, I think “beefy-t” is fine.
  7. My Pivotal hoodie has a clever feature: the Pivotal name is embroidered on the rim of the hood. Nifty!

A selection (sadly, I don’t think anyone’s ever given me a jacket — you know who you are!):

Notice the subtle left breast brand, and the fun brand on the hood’s rim.
A good example of acceptable shit on the back. Putting city names of past conferences is also an ongoing, fun thing for OpenStack hoodies.
A thin hoodie, plus almost imperceptible shit on the back (it uses city names of previous conferences to write out “OpenStack”).
TaskTop has nice jackets, where a brand name up in the usual spot is fine. These were those somewhat hard-shell North Face jackets, or in that same style. Very nice.

T-shirt what thou wilt shall be the whole of the law

Like I said, it’s not like I have any opinions on the matter of tech conference t-shirts. Nope.

This is not me, but look how cool that dude looks! You can too!

Cloud Native Works in Government — the IRS, US Air Force, and contractors

“We have already slashed the time needed to implement new ideas by 70 percent while avoiding hundreds of millions of dollars in costs.” M. Wes Haga, Chief of Mission Applications and Infrastructure Programs for Air Force Research Lab

Slowly but surely, the US government is improving how it does software. Working at Pivotal, I’m lucky to see some of this change and talk with the people who’ve actually done it. Just as we’re seeing huge improvements in the private sector with Pivotal’s cloud native approach, we’re now seeing successful examples of transformation in government. As with any sweeping transformation trend, there are several early case studies that have proven change is possible in the government. The cloud native practices of agility, DevOps, and relying on cloud platforms are spreading through the US federal government, and it’s encouraging and cool to see the outcomes they’ve enabled.

People often complain about red tape, funding problems, staff’s unwillingness to change, and an overall defeatist attitude. These cases show not only that the cloud native approach works, giving agencies and the military new, modernized capabilities with clear, positive ROI, but also that change is possible. In fact, it’s not as hard as it may seem.

IRS

If you’ve seen my talks, this IRS story is one of my favorite cases of what it means to do “digital transformation.”

The IRS had been using call centers for many, many years to provide basic account information and tax payment services. Call centers are expensive and error-prone: one study found that only 37% of calls were answered. Over 60% of people calling the IRS for help were simply hung up on! With the need to continually control costs and deliver good service, the IRS had to do something.

In the consumer space, this type of account management problem was solved long ago. It’s pretty easy, in fact: just think of all the online banking systems and paying your monthly cellphone bill. But at the IRS, viewing your transactions had yet to be digitized.

When putting software around this, the IRS first thought that they should show you your complete history with the IRS, all your transactions. This confused users and most of them still wanted to pick up the phone. Think about what a perfect failure that is: the software worked exactly as designed and intended, it was just the wrong way to solve the problem. Thankfully, because they were following a small batch process, they caught this very quickly, and iterated through different versions of it until they hit on a simple finding: when people want to know how much money they owe the IRS, they just want to know how much money they owe the IRS. When this version of the software was tested, people didn’t need to use the phone.

Now, if the IRS was on a traditional 12 to 18 month cycle (or longer!), think of how poorly this would have gone: the business case would have failed, and you would probably continue to have a dim view of IT and the IRS. But, by thinking about software correctly — in an agile, small batch way — the IRS did the right thing, not only saving money, but also solving people’s actual problems.

Digitization projects like this, however, can be hard in the government due to all-too-well-meaning process and oversight. The IRS has been working with Pivotal to introduce a very advanced agile approach, e.g., shipping frequently, pairing across roles, and intense user-testing. Along the way, they had to manage various stakeholders’ expectations, winning over their trust, interest, and eventually support for transforming how the IRS does its software.

This project has had great results: after some onerous up-front red-tape transformation, they put an app in place that allows people to look up their account information, check payments due, and pay them. As of October 2017, there have been over 2 million users, and the app has processed over $440m in payments.

Check out this interview with Andrea Schneider (IRS) & Lauren Gilchrist (Pivotal) for the story and details, and an older but helpful overview of the project from Andrea:

Keeping the Air Force Flying

It’s rare to get details on military IT projects, so these stories are particularly delicious, and they’re literal cases of “digital transformation,” going from analog to digital.

The US military has long realized that it needs to rapidly respond to changes in the field, not only on a weekly or daily basis, but on an hourly basis. Software drives a huge amount of how the military operates now. “Everything we do in the military, and everything we do in combat, is now software based,” as Lt. Col. Enrique Oti put it. With so much reliance on software, when most IT projects take five to seven years to ship, there’s a bit of a crisis in how IT is done. “This idea of not taking action is not an option that the United States Army actually has,” said Army CIO Lt. Gen. Bruce Crawford in a recent talk.

Much can be blamed on the procurement process (and the associated needs of oversight), but overall the issue is putting a more agile approach to software in place. The Air Force has several projects under its belt that are showing the way.

One of them is a story of literally going from analog to digital. They’d been planning out refueling schedules in the Middle East on a large whiteboard. While the staff were working earnestly, each planning session took about 8 hours and, clearly, was not the ideal way to plan something as vital as refueling.

After working with Pivotal, they digitized this process and dramatically reduced the time it took to prepare the schedule. They shipped their first version in 120 days (an amazing speed for any organization, private or public sector). Even better, they now regularly ship new features each week, continually improving the system. Moving from shipping every five years to every week, and adding the ability to adapt to new needs and operational challenges, means this piece of software directly supports and improves the overall mission.

Because they could schedule more precisely, they were also able to remove one tanker from regular usage each day (see about 1h47m into this video), saving about a million dollars a day. The ROI on this project, clearly, was off the charts. In fact, they were able to make back their investment in this project in seven days, based on the fuel savings. They were also able to cut the staff needed dramatically, while at the same time improving the service and freeing up staff to work on other important missions and tasks.

Looking forward, this also opened up the possibility to integrate other data into this planning, and to provide this schedule to other processes. But in a software-driven organization, there are plenty of other opportunities. They’re now working on seven more applications, including a dynamic targeting tool. More broadly, this approach to development reduces risks of all types, especially blown-up budgets. As M. Wes Haga put it:

Previously, every time we added a new capability, we would have had to build, test, and deploy the entire IT stack. A mistake could cost $100 million, likely ending the career of anyone associated with that decision. A smaller mistake is less often a career-ender and thus encourages smart and informed risk-taking.

Contractors too…

“You gave me what I asked for, but not really what I wanted.”

Raytheon is with the program as well, having recognized the need to become more agile in its delivery practices. The software needs to evolve as quickly as possible; years-long contracts just won’t cut it. As one of Raytheon’s engineers put it: “employing Agile and DevOps is going to speed up the software lifecycle, getting new features into the hands of the men and women of the Armed Forces a lot quicker.”

They’ve been working with Pivotal to switch over to faster feedback cycles and apply DevOps practices to their software life-cycle.

Working with the Air Force, as with all these types of transformations, they started with one project, built up skills and knowledge, and have been expanding to other products. The first project was the Air Force’s Air and Space Operations Center Weapon System (AOC Pathfinder). They’re also working on one of the Air Force’s intelligence systems, the Distributed Common Ground System.

Software release cycle speed (from years to months, if not weeks) is important in these systems, but matching the evolving and emerging needs for those systems is equally — perhaps even more! — important. “The DevOps model allows our customers to ask for the products they really want,” Raytheon’s Quynh Tran said, “The results [are that] we are shortening deployment times and prioritizing work based on their needs. We’re going to be better at meeting their expectations…. Military users get their requests changed in months instead of years and see the results of continuous feedback.”

See also this interview with Keith Salisbury.

(Thanks to @dormaindrewitz who helped me track down many of the facts and figures above.)

Building trust with internal marketing, large and small

Most companies don’t realize the amount of work required to fully transform their approach to creating and caring for software. Scaling up the improvements learned and put into place by your initial teams relies on building trust and understanding in the overall organization. For whatever reason, most people in large organizations are resistant to change and, what with the frequent introduction of process improvement programs, skeptical of flavor-of-the-week syndrome. A large part of scaling up digital transformation, then, is internal marketing. And it’s a lot more than most people anticipate.

Beyond Newsletters

Once you nail down some initial, successful applications, start a program to tell the rest of the organization about those projects. This goes beyond the usual email newsletter mention, often quickly leading to internal “summits” with speakers from your organization going over lessons learned and advice for starting new cloud native projects.

You have to promote your change, educate people, and overall “sell” it to people who either don’t care, don’t know, or are resistant. These events can piggyback on whatever monthly “brown-bag” sessions you have and should be recorded for those who can’t attend. Somewhat early on, management should set aside time and budget to have more organized summits. You could, for example, do a half day event with three to four talks by team members from successful projects, having them walk through the project’s history and advice on getting started.

Building Trust

This internal marketing works hand-in-hand with starting small and building up a collection of successful projects. As you’re working on these initial projects, spend time documenting “case studies” of what worked and what didn’t, and track the actual business metrics to demonstrate how the software helped your organization. You don’t so much want to show just how fast you can now move as how doing software in this new way is strategic for the business.

Content-wise, what’s key in this process is for staff to talk with each other about your organization’s own software, market, and challenges faced. I find that organizations often think that they face unique challenges. Each organization does have unique hang-ups and capabilities, so people in those organizations tend to be most interested in how they can apply the wonders of cloud native to their jobs, regardless of whatever success they might hear about at conferences or, worse, from vendors with an obvious bias. Hearing from each other often gets beyond this sentiment that “change can’t happen here.”

Once your organization starts hearing about these successes, you’ll be able to break down some of the objections that stop the spread of positive change. As Amy Patton at SPS Commerce put it, “having enough wins, like that, really helped us to keep the momentum going while we were having a culture change like DevOps.”

Winning over process stakeholders

The IRS provides an example of using release meetings to slowly win over resistant middle-management staff and stakeholders. Stakeholders felt uncomfortable letting these detailed requirements evolve over each iteration. As with most people who’re forced, er, encouraged to move from waterfall to agile, they were skeptical that the final software would have all the features they initially wanted.

While the team was, of course, verifying these evolving requirements with actual, in-production user testing, stakeholders were uncomfortable. These skeptics were used to the comfort of lots of up-front analysis and requirements, exactly spelling out which features would be implemented. To start getting over this skepticism, the team used their release meetings to show off how well the process was working, demonstrating working code and lessons learned along the way. These meetings went from five skeptics to packed, standing-room-only meetings with over 45 attendees. As success built up and the organizational grapevine filled with tales of wins, interest grew and, with it, trust in the new system.

The next step: training by doing

As the organizations above and others like Verizon and Target demonstrate, internal marketing must be done “in the small,” like this IRS case and, eventually, “in the large” with internal summits.

Scaling up from marketing activities is often done with intensive, hands-on training workshops called “dojos.” These are highly structured, guided, but real release cycles that give participants the chance to learn the technologies and styles of development. And because they’re working on actual software, you’re delivering business value along the way: it’s training and doing.

These sessions also enable the organization to learn the new pace and patterns of cloud native development, as well as set management expectations. As Verizon’s Ross Clanton put it recently:

The purpose of the dojo is learning, and we prioritize that over everything else. That means you have to slow down to speed up. Over the six weeks, they will not speed up. But they will get faster as an outcome of the process.

Scaling up any change to a large organization is mostly done by winning over the trust of people in that organization, from individual contributors, to middle-management, to “leadership.” Because IT has been so untrustworthy for so many decades — how often are projects not only late and over-budget, but then anemic and lame when finally delivered? — the best way to win over that trust is to actually learn by doing and then market that success relentlessly.

This post is an early draft of a chapter in my book,  Monolithic Transformation.

So what exactly should IBM do, and have done?

Now that IBM has ended its revenue-losing streak, we’re ready to stick a halo on it:

There is no doubt, though, that there are signs of progress at IBM, which would not comment on its financial picture before the release of the earning report. So much attention is focused on the company’s top line because revenue is the broadest measure of the headway IBM is making in a difficult transformation toward cloud computing, data handling and A.I. offerings for corporate customers.

The new businesses — “strategic imperatives,” IBM calls them — now account for 45 percent of the company’s revenue. And though it still has a ways to go, IBM has steadily built up those operations — and gained converts.

Over all those quarters, there hasn’t been much good analysis of “what went wrong” at IBM, inasmuch as I haven’t really read much about what IBM should have been doing. What did we expect from them? What should they be doing now and in the future? I don’t know the answers, but I’m damn curious.

“State your deal.”

Since the mid-2000s, all tech companies have been shit on for not getting to and dominating public cloud faster (there are exceptions, like Adobe, that get lost in the splurty noise of said shitting on). Huge changes have happened at companies like HP/HPE and Dell/EMC/VMware (where I work happily at Pivotal, thank you very much), and you can see Oracle quarterly dance-adapting to the new realities of enterprise IT spending.

For the past 8 or 10 years, I’ve had a rocky handle on what it is that IBM sells, exactly, and in recent years their marketing around it has been fuzzy. Try to answer the question “so what is it, exactly, that IBM sells?” A good companion is, “why do customers choose IBM over other options?”

You can’t say “solutions” or “digital transformation.” (I’m aware of some black kettle over here, but I and any Pivotal person could tell you exactly the SKUs, tools, and consulting we sell, probably on an index card). I’m pretty sure some people in IBM know, but the press certainly doesn’t know how the fuck to answer that question (with some exception at The Register and from TPM, grand sage of all IBM coverage).

I’ve been a life-long follower of IBM: my dad worked at the Austin campus, it was a major focus at RedMonk, and, you know, just being in the enterprise tech industry points your face toward Armonk frequently. I feel like I know the company pretty well and have enough of an unnatural fascination to put up with spelunking through them when I get the chance; IBMers seem pleasantly bewildered when the first thing I ask them to do is explain the current IBM hierarchy and brand structure.

But I couldn’t really explain what their deal is now. I mean, I get it: enterprise outsourcing, BPaaS (or did they sell all that off?), some enterprise private cloud and the leftover public cloud stuff, mainframe, a bunch of branded middleware (MQ, WebSphere, DB2, etc.) that they seem forbidden to mention by name, and “Watson.”

There are clear products & services (right?)


When I’ve been involved in competitive situations with IBM over the years, what they’re selling is very, very straightforward: outsourcing, software, and a sense of dependability. But the way they’re talked about in the press is all buzzwordy weirdness. I’m sure blockchain and AI could be a big deal, but their on-and-off success at doing something everyday and practical with them is weird.

Or, it could just be the difficulty of covering it, explaining it, productizing, and then marketing it. “Enterprise solutions” often amounts to individually customized strategy, programs, and implementations for companies (as it should, most of the time), so you can’t really wrap a clear-cut SKU around that. It’s probably equally hard to explain it to financial analysts.

So, what’s their deal?

Cumulative capex spend by Google, Amazon, and Microsoft since 2001.
How much is that public cloud in the window?

Anyhow, I don’t come here to whatnot IBM (genuinely, I’ve always liked the company and still hope they figure it out), but more out of actual curiosity to hear what they should have been doing and what they should do now. Here’s some options:

  1. The first option is always “stay on target, stay on target,” which is to say we just need to be patient and they’ll actually become some sort of “the business of AI/ML, blockchain, and the same old, useful stuff of improving how companies run IT.” I mean, sure. In that case, going private is probably a good idea. The coda to this is always “things are actually fine, so shut the fuck up with your negativity. Don’t kill my vibe!” And if this is true, IBM just needs some new comms/PR strategies and programs.
  2. You could say they should have done public cloud better and (like all the other incumbent tech companies except Microsoft), just ate it. What people leave out of this argument is that they would have had to spend billions (and billions) of dollars to build that up over the past 10 years. Talk about a string of revenue-losing quarters.
  3. As I’m fiddling around with, they could just explain themselves better.
  4. They should have gotten into actual enterprise applications, SaaS. Done something like bought Salesforce, merged with SAP, who knows. IBM people hated it when you suggested this.
  5. The always ambiguous “management sucks.” Another dumb answer that has to be backed up not only with missed opportunities and failures (like public cloud), but also with proof that IBM could have been successful there in the first place (e.g., with public cloud, would Wall Street have put up with them losing billions for years to build up a cloud?)

I’m sure there’s other options. Thinking through all this would be illustrative of how the technology industry works (and not the so called tech industry, the real tech industry).

(Obviously, I’m in a weird position working at Pivotal who sells against IBM frequently. So, feel free to dismiss all this if you’re thinking that, now that you’ve read this swill, you need to go put on a new tin-foil hat because your current one is getting a tad ripe.)

As Rome burns, there’s plenty of money investing in attention aggregation, innovation, and…burgers?

Let the Old Gods bellow and rage in the distance. There are likes to like and pages to page-view. Swipes to swipe. Items to be ordered and thought-leaders to be thought-followed. We’ve got our own temples, up in The Cloud, to be decorated with selfies and festooned with a million paeans to ourselves, our personal brands and our experiences. Our chauffeured chariots to be summoned, literally, on-demand. The app as finger-snap. People are favoriting us as we sleep. At least, they’d better be. Google is doing the work that priests and rabbis used to do. It has answers. Curious children are learning to consult with Alexa and Siri in kindergarten. And our New Gods have found a way to extract tribute from each and every one of these activities. We’re carrying their altars in our pockets.
“American Gods,” Josh Brown

There’s a quandary in there about why the market is up despite all the craziness in DC. The two reasons seem to be: (a.) in this craziness, customers of major companies are escaping into the comfort of the golden arches, Marriott(?), and iPhones, so, (b.) the Pareto minority who actually does all the investing goes to where the customers are going. For the investing group, there’s also some brand-driven devotion to big companies.

Sure: smoke ’em if you got ’em!

Owning half of all advertising is a good business

The Attention Merchants can fill in the gaps of how companies like Google and Facebook are doing so well here: they’ve essentially gobbled up the advertising market, the life-blood of almost all business, i.e.:

Information cannot be acted upon without attention and thus attention capture and information are essential to a functioning market economy, or indeed any competitive process, like an election (unknown candidates do not win). So as a technology for gaining access to the human mind, advertising can therefore serve a vital function, making markets, elections, and everything that depends on informed choice operate better, by telling us what we need to know about our choices, ideally in an objective fashion.

You could hang a figure on the value of that over 1, 5, 20, 50 years… but let’s just say it’s a fuck-load of money and, thus, valuation in a company. Controlling what people, businesses, and governments spend their attention and money on? Priceless.

Good companies often have good products

Next, you can get the sense with this kind of talk that what’s being valued in companies is “nothing,” just a feeling, a sense. In reality, with companies like Apple and, now, Amazon, what’s being valued is long-term strategies (often risky) that result in cash-spewing machines. The iPhone and its software make a ton of money; AWS throws off cash.

Google has an 88 per cent market share in search advertising. Facebook (including Instagram, Messenger and WhatsApp) controls more than 70 per cent of social media on mobile devices. “Silicon Valley has too much power,” Rana Foroohar

In the pure “dot.com” category, it’s easy to get beguiled and think that “likes” and baby pictures in Facebook, or putting dog-faces on teens, are the things being valued. Of course they’re not; people’s attention and the ability to keep those people paying attention (“a culture of innovation”) is what’s valued. Advertising is what’s being valued, not whatever “social” is.

Trading on perception…which is built by good products

Now, I don’t actually know how investing works – I’m one of those hordes of Vanguard-drones – but it’s clear that all the interesting stuff is based on predictions about how other buyers will price/value a share. You can sit around and collect dividends (or wait for a company to be bought by another) as your “payout” in equity investing, but that seems to be the boring game (unless you’re an “activist” investor who hype-engineers those two). So, of course, paying attention to people’s perception of a company’s value is what the investing insiders get all worked up about.

But, again, if you look at “the new gods,” most of the companies have actual, valuable businesses. I can speak to the tech companies like Apple, Amazon, Google, and (a bit) Salesforce. They have good things to sell and good strategies backing them.

Netflix, for example

Netflix, which is on the list of “new gods,” is another example. First, it was a better mouse-trap to browse for DVDs online, queue up ones to get, and have them mailed to you rather than going to the rental store. Then, as streaming became technically possible (cue those endless Mary Meeker decks), simply doing that was better than living at the whim of cable companies that seemed like they were over-serving and over-charging. (And meanwhile, TiVo just sort of shit the bed on their go at this market-window – maybe the cable companies gleefully starved TiVo with their own DVRs and lack of partnerships).

And, once all of Netflix’s customers had watched that 5% of the streaming catalog that was actually good (I kid! I kid! It’s probably more like 15%, right?), Netflix had to make its own original content (and put in exclusive licensing deals). In each round, they had a good product and re-arranged their strategy accordingly (and sometimes it didn’t go well).

(If I knew this industry better, I’d know if my hunch that HBO is the Microsoft here [“fast follower” who was sort of there the whole time with a good product and even evolving, just not getting the glory] was helpful or not.)

“Old Gods” fall


HP(E) and IBM are negative examples here, and Microsoft provides a more positive example. For a long, long time, both HP and IBM were perceived as being rock-solid – their products and services were trusted, worked well, and, thus, were purchased a lot. (I’ll spare you the old IBM adage.)

They had good businesses. But over the past 10 (or even 15) years, each fell behind the times, seemingly willingly: they didn’t evolve their business models, product portfolios, and corporate strategies fast enough. They didn’t change quickly enough, and the worst mistake was that they didn’t realize they needed to change faster and, then, that management didn’t make it happen. HP also got hit with The Curse of Most M&A Doesn’t Work, But Some of It Really Doesn’t Work. In each case, the financials of the company suffered, and so did everyone’s perception of the company.

The point with HP and IBM is: in large, older tech companies you need not only a good product, but a good everything.

Microsoft’s rebirth

Microsoft shows that you can turn that around, and adds more confusion to how investors actually value companies. From what I know, Microsoft has always been a financially good company, but it languished starting in the Internet era, which it barely battled through (to much financial glory after the late 90s).


But as it continued to biff on mobile, SaaS, shoring up desktop sales (I might be wrong on this point), and even cloud (where it’s now considered one of the “top three”), the perception was that Microsoft had lost it, strategically. An early warning sign was screwing up the Danger acquisition, which was a prelude to whatever Nokia was. And, I always found Bing to be overly quixotic: why try? But, really, I suppose you’d want to try to go after that pool of “priceless” advertising money above that Google and Facebook now steel-fist, and analysts would have discounted Microsoft’s share price even more if they didn’t try for a slice of that TAM-pie.

Despite all that, Microsoft seems to have turned it around. Their perception is pretty good now, and they’re out of that share-price plateau of the 2000’s. And, again, what did they do? They made good products, they built a good business, they changed almost everything.

Luck is handy too

You can throw more negative and positive examples on the pile: Yahoo!, how SUSE blossomed after it got out from under Novell’s thumb, how AOL lost its way (though, maybe that’s getting better?), SAP & Oracle (deciding which and how each is good or bad is left as an exercise to the reader), etc.

In each case, companies just have to do the simple thing of trying to build a good business, make good products and services, and, well, catch a substantial stream of lucky breaks.

Since I don’t know burgers, payments, and hotels, I can only assume that in Josh’s list of new gods, McDonald’s, Visa, and Marriott are following a similar, annoyingly common sense approach.

Gods become “old god” because they suck versus the new gods

To hop on the American Gods metaphor train, sure, some of the old gods fell into disfavor out of whim (Johnny Appleseed don’t seem half-bad, and Easter seems pretty nice!), but most of them were dumped because they were shitty: blood sacrifice, mind-control, and otherwise treating humans like shit sure seem like a raw deal compared to TV, free-market-money, Jesus, and Paul Bunyan. The old gods stopped trying to innovate, as it were, and got all stuck on hammering in people’s heads, child sacrifice, and hanging humans.

That shit don’t sell now-a-days. So, you know, like the doctor says: don’t do that.

Meanwhile, back to the point

So, still, why’s the money-hole going so well?

You’re wondering how it could be possible that the S&P 500, the Nasdaq 100 and the Dow Jones Industrial Average could be climbing to record highs day after day, given, well, everything.
How is it that stocks can break through to new heights while the country at large seemingly sinks to new depths?

Who really knows why “the market” is “up” when it should be “troubled.” In general, the way companies are valued and the way businesses run doesn’t seem affected much by cultural strife, change, and chaos (in the short to medium term, at least). So, if the ruling hill-billy class wants to make a big to-do out of bathrooms, what does “god money” care? If anything, money likes contained chaos, constant change that makes cash turn over and change hands.

Also, of course, Republicans are in power, which makes money-focused people hopeful for tax reductions, repatriation, regulation reduction, and things that are otherwise the opposite of “Democrats wanting to use money to help poor people.” Most investor class people seem to stop reading that sentence after the word “money.”

Finally, you can’t exactly trust anything that Trump and friends say – sure, that 35% border tax would tank huge sectors of the economy, but come on, he’s caved on so many other things…well, actual important, money-related things (though, hey, how am I going to do my pivot tables if I can’t use my laptop on the way back from Zurich?)

There’s an argument to be made that if people can’t maintain steadily growing salaries, there won’t be enough consumer money sloshing around to spend on things …but if ‘400 wealthiest Americans had “more wealth than half of all Americans combined,”‘ what do they need that other half for but to package up their prepared meals and old-man groaning mattresses to be drone delivered?

More, how much influence does the social chaos of state and local government really have on “the market”? Congress doesn’t seem to actually do anything (nor want to), and we’ve got a little under two years until the next gut-wrenching election night – what if we elect more crazies, but this time they can actually get shit done?!

Don’t worry, though! There’s plenty of time to order five gallon tubs of guacamole and wrastle with (and for) carnies into the office!

Which is to say: in politics, so far, there hasn’t been much more than talk about money. It’s all been about people and culture. Investors don’t invest in people and culture (maybe they use their own cash to buy expensive art, sure), so why should the market be down?

Ode to Airports

An airport is a time pause. It’s an excuse to not stress or try. You’re trapped in the system and will eventually get there. You can’t leave or you’ll have to re-humiliate yourself through security. Airports are even powerful enough to make you cancel meetings if your flight is late, canceled…or you pretend it is. Your wedding could be delayed because of the airport and no one would really fault you.

Everyone is transiting, coming and going, and while the entry fee might exclude the very poor (and the super rich fly their own), you see everyone.

At a major hub, you’ll see people from all over: the guy with the “Ragin’ Cajun” hat, domestic and international grandmas, the harried big city lawyer, the dad-jeans set, and the local staff. People dress in all manners of business-business or super casual for comfort.

The mix of experienced and novice travelers creates a crackly dynamic, paired with either overly friendly or direct gate agents. While some can escape to airline lounges, even those environments are little different than the actual terminal: you just get much friendlier staff and free drinks and peanuts.

Airports can be calming if you look at them as escapes and the sort of delightful, enforced boredom that I understand meditation to be.

They can be toxic if you stress out about delays, lines, other people, overhead bin space, and how flight delays affect your plans outside the airport. And they can be distracting like an opium den if you let their peaceful hum shut out your real life.

Don’t ruin your time at the airport. If you let it, it’ll make sure you get back out right where you wanted to go.

Choose your TAM wisely and remember to charge a high price, RethinkDB

[O]ur users clearly thought of us as an open-source developer tools company, because that’s what we really were. Which turned out to be very unfortunate, because the open-source developer tools market is one of the worst markets one could possibly end up in. Thousands of people used RethinkDB, often in business contexts, but most were willing to pay less for the lifetime of usage than the price of a single Starbucks coffee (which is to say, they weren’t willing to pay anything at all). Link

How big is the pie?

Any company selling developer tools needs to figure out the overall market size for what they’re selling. Developers, eager to make tools for themselves (typically, in their mid to late 20s, developers work on at least one “framework” project), often fall prey to picking a market that has little to no money and, then, are dismayed when “there’s no money in it.”

What we’re looking for here is a market category and a way of finding how much money is being spent in it. As a business, you want to grab as much of that money as possible. The first thing you want to do is make sure there’s enough money for you to care. If you’re operating in a market that has only $25m of total, global spend, it’s probably not worth your while, for example.

Defining your market category, too, is important for finding out who your users and buyers are. But first, let’s look at TAM-think: finding out what the big pie of cash looks like, your Total Addressable Market.

The TAMs on the buffett

If you’re working on developer oriented tech, there are a few key TAMs:

Another interesting TAM for startups in the developer space is a combo one Gartner recently put together that shows public and private PaaS, along with “traditional” application platforms: $7.8bn in 2015. 451 has a similar TAM that combines public and private cloud at around $10bn in 2020.

I tried to come up with a public and private PaaS TAM – a very, very loose one – last year and sauntered up to something like $20 to $25bn over the next 5-10 years.

There are other TAMs, to be sure, but those are good ones to start with.

Bending a TAM to your will, and future price changes

In each case, you have to be very, very careful because of open source and public cloud. Open source means there’s less to sell upfront and that, likely, you’ll have a hard time suddenly going from charging $0 to $1,000s per unit (a unit is whatever a “seat” or “server” is: you need something to count by!). If you’re delivering your stuff over the public cloud, similar pricing problems arise: people expect it to be really cheap and are, in fact, shocked when it adds up to a high monthly bill.

But briefly: people expect infrastructure software to be free now-a-days. (Not so much applications, which have held onto the notion that they should be paid for, but the low prices in the app store depress their unit prices too.)

In both cases (open source and public cloud delivery), you’re likely talking a drastically lower unit price. If you don’t increase the overall volume of sales, you’ll whack down your TAM right quick.
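The arithmetic here is simple but worth making concrete. A back-of-the-envelope sketch in Python, with every number invented purely for illustration, shows how a unit-price drop guts a TAM unless volume grows to match:

```python
# Hypothetical numbers to illustrate how a unit-price drop shrinks a TAM
# unless sales volume grows to compensate.

def tam(units: int, unit_price: float) -> float:
    """Total addressable market: units sold times price per unit."""
    return units * unit_price

# Same number of "units" (seats, servers, whatever you count by),
# first at enterprise pricing, then at open source/cloud pricing.
legacy = tam(units=50_000, unit_price=20_000)       # $1.0bn
open_source = tam(units=50_000, unit_price=1_000)   # $0.05bn

print(f"legacy TAM:      ${legacy / 1e9:.2f}bn")
print(f"open source TAM: ${open_source / 1e9:.2f}bn")

# To keep the same TAM at 1/20th the price, you need 20x the volume:
required_units = legacy / 1_000
print(f"units needed at $1,000 each: {required_units:,.0f}")
```

None of these figures come from an analyst; the point is only the shape of the math: cut price by 20x and you need 20x the units just to stand still.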

So, you have to be really, really careful when using backward looking TAMs to judge what your TAM is. Part of the innovation you’re expected to be doing is in pricing, likely making it cheaper.

The effect is that your marketshare, based on “yesterday’s TAMs,” will look shockingly small. For example, Gartner pegged the collective revenue of NoSQL vendors (Basho, Couchbase, Datastax, MarkLogic, and MongoDB) at $364M in 2015: 1% of the overall TAM of $35.9bn! Meanwhile, the top three Hadoop vendors clocked in at $323.2M and AWS’s DB estimate was $833.6M.

Pair legacy TAMs with your own bottoms-up TAM

In my experience, the most helpful way of figuring out (really, recomputing) TAMs in “real time” is to look at the revenue that vendors in that space are getting and then to understand what software they’re replacing. That is, in addition to taking analyst TAMs into perspective, you should come up with your own, bottoms-up model and explain how it works.

If you’re doing IT-led innovation, using existing (if not “legacy”!) TAMs is a bad idea. You’ll likely end up over-estimating your growth and, worse, misjudging which category of software you’re in and who the buyers are. Study your users and your buyers and start modeling from there, not from pivot tables from the north east.
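A bottoms-up model can be as crude as naming your buyer segments and multiplying through: accounts you could plausibly reach, times an adoption rate, times a deal size. A toy sketch, with all segment names and numbers made up for illustration:

```python
# A toy bottoms-up TAM model: name the buyers you can actually identify,
# then estimate adoption and annual deal size per segment.
# Every figure here is invented for illustration.

segments = {
    # name: (target accounts, expected adoption rate, annual deal size in $)
    "Global 2000 IT shops": (2_000, 0.10, 500_000),
    "Mid-market companies": (20_000, 0.05, 50_000),
    "Startups / SMB":       (200_000, 0.01, 5_000),
}

def bottoms_up_tam(segments: dict) -> float:
    """Sum accounts x adoption x deal size across segments."""
    return sum(accounts * adoption * deal
               for accounts, adoption, deal in segments.values())

print(f"bottoms-up TAM: ${bottoms_up_tam(segments) / 1e6:.0f}M/year")
```

The value isn’t the final number, it’s that each input is arguable: someone can push back on your adoption rate or deal size, which is exactly the conversation a borrowed analyst TAM never provokes.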

The other angle here is that if you’re “revolutionizing” a market category, it means you’re redefining it. This means there will be no TAM for many years. For example, there was no “IaaS” TAM for a long time; at some point, there was no “Java app server” TAM either. In such cases, creating your own TAM is much more useful.

Finally, once you’ve figured out how big (or small!) your pie of money is, adjust your prices accordingly. More than likely you’ll find that you’ll need to charge a higher price than you think is polite…if you want to build a sustainable, revenue-driven business rather than just a good aggregation startup to be acquired by a larger company…who’ll be left to sort out how to make money.

Keeping sane at the airport

After 10 years of business travel, this is how I cope at the airport:

  • You’ll get there, even if you’re late.
  • Don’t worry about lines, just wait in them.
  • Few people know what they’re doing here, don’t let their stress stress you out.
  • There are no special snowflakes, unless you have a doctor’s note.
  • The word of airline staff is law, you can’t argue against the agent of the FAA.
  • Relax and walk slow.
  • If you want a better experience, pay more or pay your dues.

When in doubt, and even if it contradicts the above, you can always:

  • Move fast and get out of the way.

Change is hard, but possible, or, It’s still the case that you should stop hitting yourself

In the corporate clip-art game, ain’t no one’s better than geralt.

Improving is never easy, and that’s certainly true when it comes to how large organizations improve how they do software. While it can seem like a curse, I’m lucky to talk with people at organizations who are struggling to improve. By the nature of the work Pivotal does, we spend a lot of time talking with organizations who want to be more agile, shift to a DevOps approach, and otherwise (to use a buzz-phrase) become “cloud native.”

As I’m fond of putting it, they just want to get better at software. The first step is to stop hitting yourself, as we’ve discussed before. But that’s just an eye-rolling bon mot, really. You actually have to do some work.

The road to better software is paved with white collar pain. It’s like (as I’m told) when you start working out: it hurts, for, like, several years, and then you sort of start to enjoy it and maybe can live 0.7 years longer.

Let’s look at a couple of those common pains.

The Pain of legacy process (aka “culture”)

Perhaps the most frequent question is something like:

“How do we reconcile [old processes we don’t like] with DevOps [or whatever new way of doing things we now want to do]?”

Well, that’s the 23% CAGR over 5 years question right there. I’d start with understanding what DevOps (or whatever process you want to switch to) is, and why it is (DevOps wants to ensure that you can deploy software weekly so you can always be improving it, and that it actually works [has uptime and resilience]).

At that point, you ask “does our current process do that?” If not, then you have to get executives to change how the organization runs. There are no shortcuts or easy outs; you just have to do it over the course of *years*.

In contrast, to “virtualize,” you sort of just install VMware and after a few years you have huge ROI and savings. (Granted, the truth of virtualization is that it ruffled all sorts of feathers in IT departments in the 2000s and people were all up in arms and chickens without heads running around and cats and dogs living together …we just forget all that ;>)

Put another way “you’re going to change and eliminate those processes. GET READY FOR A SURPRISE!”

“Come on, man! I got five kids to feed!”

Internal selling

One of the chief characteristics of large organizations is that you have to convince the organization to actually do anything. We have these visions that executives in large organizations can actually make the trains run on time, as it were. Nope.

Thus, when it comes to change, you have to spend much of your time doing internal marketing to sell it up the chain, to your peers, and to your own organization.

To my mind, the only ways to do internal marketing well is to either (1.) already be successful, or, (2.) get executives at other companies to tell you and your organization How They Did It And That It Worked.

The first is just a recursive noop (“success breeds success” and other such nonsense). Thankfully, when it comes to the second, there’s a lot of help now-a-days, primarily in the form of the change agents who’ve gone through this themselves.

How did these executives succeed? By actually trying: picking small projects at first, learning the new way, succeeding (and hiding failures), then trying bigger things, and then telling people about it by making money. After all, success breeds success, right?

They also fire, er, “re-allocate” a lot of people, which they don’t talk about a lot in big glitzy keynotes but do over drinks in loud bars.

Of course, vendors (like myself) saying all this is pretty useless. We’re not trustworthy, after all, and are better at unicorn management and breeding programs than tending to the donkey ranches.

So, let me direct you to some “actual” people who’ve gone through all this:

There are many more “talks” that aren’t recorded; you just have to find the right people and sit down with them to chat.

How do we migrate legacy software?

There are no good answers here. This is like someone with terminal lung cancer asking for help on stopping smoking. I suppose that’s gas-lighting…but if “legacy” is what’s holding you back it means you’re not managing technical debt well. Stick all your enterprise architects in a room (maybe even have an open bar!) and gently ask them, “so…what would you say you do around here?”

Updating legacy software is hard. The problem with “lift and shift” (which many vendors like to wrap fancy slides around) is that the “cloud native” benefits you’re looking to get come not only from the platform you run your software on, but from how the software itself is written (and then, obviously, how you manage and operate it in production).

Sure, you could just dump some three tier, MVC, hairball into a WAR file and spoot it out into some container orchestrated cloud thing…but all you’ll now have is a big lever that says “reboot” on it. With brute-force migration there won’t be:

  1. All the resiliency advantages of little blue/green man deploys, canary parties, feature flag burnings, bulk-bin heads, etc.,
  2. The ability to start deploying weekly or even daily to improve how software is done (i.e., “you don’t operate in an agile way”),
  3. And, you know, you still have to make sure it all runs in production properly tomorrow.

Worse: the original problem still isn’t fixed. The next time you need to pay down your technical debt so you can improve/do things in a better way, you’ll still have the same old crap weighing you down, just with a different compression format and file extension.

I think there’s plenty of “hacks” to be had to extend legacy software’s value (that is, to avoid spending time and money on updating/refactoring/rewriting it). I hear Oracle has some bridge-themed tools if you like your current parking arrangements, and there’s always queues, amiright? You could probably do a lot worse than doing some BCG matrix trust-falls to find your low-priority, little used apps and shipping them off to an MSP, AWS, or one of those data centers that’s sitting there purring like an old cat with crusty eyes and renal issues.

The point of Whatever The New Approach You Want To Do is: when you want to write software in the best way possible, do it the new way, not the way you’ve been doing.

There’s some more instructive help from people like my pals Kenny and Rohit, to be sure. You can find plenty of content like this that speaks to how to start picking away at the scabs of legacy. As with peeling off any scab, it’s important to know that the skin underneath it is healed, or you just re-bleed. That’s probably how you should treat migrating legacy applications.

Like I said: no good answers here, just lots of work and risk of bleeding.

That sounds great, but where the hell do I start?

Getting started is vexing. Essentially, you need to pick low-risk projects that are still “material” to the business. I just happen to have a draft of some advice here “from the streets” in this little excerpt from a new booklet I’m working on.

Good luck, be sure to tell us how it goes

If you’re struggling with stopping hitting yourself, the best next step is to find other people who’re struggling and to talk with them. You then need to “see it to believe it” and, then, really, just start trying. There’s no universal bromide or DVD you can install. There is, however, a way of thinking — a process even — you can apply, namely: learning and slowly changing towards the better.

(And, you know, there’s lots of people hiring if you find yourself a rat on a sinking ship.)

Getting Started — picking your first cloud native projects, or, Every Digital Transformation Starts with One Project

This post is pretty old and possibly out of date. There’s updates on this topic and more in my book, Monolithic Transformation.

Every journey begins with a single step, they say. What they don’t tell you is that you need to pick your step wisely. And there’s also step two, and three, and then all the n + 1 steps. Picking your initial project is important because you’ll be learning the ropes of a new way of developing and running software, and hopefully of running your business.

Choosing these first projects wisely is also important for internal marketing and momentum purposes: the smell of success is the best deodorant, as they say, so you want your initial projects to be successful. And…if they’re not, you want to quietly sweep them under the rug so no one notices. Few things will ruin the introduction of a new, proven way of operating into a large organization more quickly than the foetid smell of failure. Following Larman’s Law, the organization will do anything it can — consciously and unconsciously — to stop change. One sign of weakness early, and your cloud journey will be threatened by status quo zombies.

Project picking peccadilloes

Your initial projects should be material to the business, but low risk. They should be small enough that you can quickly show success in the order of months, and also technically feasible for cloud technologies. These shouldn’t be “science projects” or automation of low value office activities: no virtual reality experiments or conference room schedulers (unless those are core to your business). On the other hand, you don’t want to do something too big, like “migrate the .com site.” As Christopher Tretina recounts Comcast’s initial cloud native ambitions:

We started out last year with a very grandiose vision. And it didn’t take us too long to realize we had bit off a little more than we could chew. So around mid-year, last year, we pivoted and really tried to hone in and focus on ‘what are just the main services we wanted to deploy that’ll get us the most benefit?’

Your initial projects should also allow you to test out the entire software lifecycle, all the way from conception, to coding, to deployment, to running in production. Learning is a key goal of these initial projects and you’ll only do that by going through the full cycle. As Home Depot’s Anthony McCulley describes the applications chosen in the first 6 or so months of their cloud native roll-out: “they were real apps, I would just say that they were just, sort of, scoped in such a way that if there was something wrong it wouldn’t impact an entire business line.” In Home Depot’s case, the applications chosen were projects like managing (and charging for!) late returns for tool rentals and running the in-store custom paint desk.

A special case for initial projects is picking a microservice to deploy. This is not as perfect a case as a full-on, human-facing project, but it will allow you to test out cloud native principles. The microservice could be something like a fraud detection or address canonicalization service. This is one approach to migrating legacy applications in reverse order: a strangler from within!

Picking projects by portfolio analysis

There are several ways to select your initial projects following the above criteria. Many Pivotal customers use a method perfected over the past 25 years by Pivotal Labs called “discovery.” In the abstract, it follows the usual BCG matrix approach but builds in intentional scrappiness to ensure that you can quickly do a portfolio analysis with the limited time and attention you can secure from all the stakeholders. The goal is to get a ranked list of projects to do based on the organization’s priorities and the “easiness” of the projects.

First, gather all the relevant stakeholders. This should include a mixture of people from “the business” and IT side, as well as the actual team that will be doing the initial projects. This discovery session is typically led by a facilitator, usually a Pivotal Labs person familiar with coaxing a room through this process.

The facilitator will hand out stacks of sticky notes and markers, asking everyone to write down projects that they think are valuable. What “valuable” is will depend on each stakeholder. We’d hope that the more business minded of them would have a list of corporate initiatives and goals in their heads (or a more formal one they brought to the meeting). One approach used in Lean is to ask management “if we could do one thing better, what would it be?” and start from there, maybe with some five why’s spelunking.

After writing down projects on sticky notes, the discovery process facilitator draws or tapes up a 2×2 matrix that looks like the following:

People in button up shirts prioritizing sticky notes.

The participants then put up their sticky notes in this quadrant, forcing themselves not to weasel out and put the notes on the lines. Once everyone has done this, you get a good sense of projects that all stakeholders think are important, sorted by the criteria I mentioned above: material to the business (“important”) and low risk (“easy”).

If all of the notes are clustered in one quadrant (usually, in the upper right, of course), the facilitator will redo the 2×2 lines to just that quadrant, forcing the decision and narrowing down on just projects to “do now.” The process might repeat itself over several rounds. To force a ranking of projects you might also use techniques like dot voting which will force the participants to really think about how they would prioritize the projects. At the end, you should have a list of projects, ranked by the consensus of the stakeholders in the room.
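The dot-voting tally at the end of that process is trivially mechanical, which is the point: it forces a ranking without debate. A sketch of the counting, with hypothetical stakeholders and project names (three dots each):

```python
# A sketch of dot voting: each stakeholder spreads a fixed number of
# "dots" across candidate projects; projects are ranked by total dots.
# Names and ballots are hypothetical.
from collections import Counter

votes = {
    "alice": ["paint desk", "paint desk", "tool rentals"],
    "bob":   ["tool rentals", "vr demo", "vr demo"],
    "carol": ["paint desk", "paint desk", "tool rentals"],
}

# Flatten every ballot into one stream of dots and count per project.
tally = Counter(dot for ballot in votes.values() for dot in ballot)
ranked = tally.most_common()

for project, dots in ranked:
    print(f"{dots} dots: {project}")
```

In practice the "algorithm" is run with markers and sticky notes, not code, but the output is the same: a consensus-ranked list of projects to take into the "do now" quadrant.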

Like I said: “scrappy.”

Planning out the initial project

Of course, you may want to refine your list even more, but to get moving, the next step is to pick the top project and start breaking down what to do next. How you proceed here is highly dependent on how your product teams break down tasks into stories, iterations, and releases (or epics, sagas, or whatever cutesy terms you like for “bucket of stuff scoped at some hierarchical level with purposefully vague responsibility and temporal connotations”).

More than likely, following the general idea of a small batch process you’ll:

  1. Create an understanding of the user(s) and the “problems” they’re trying to solve with your software through personas and approaches like scenarios or Jobs to be Done,
  2. Come up with several theories for how those problems could be solved,
  3. Distill the work to code and test these into stories,
  4. Add in more stories for “non-functional” requirements (like setting up build processes, CI/CD pipelines, testing automation, getting the new ping-pong table setup, etc.),
  5. Arrange them into iteration-sized chunks without planning too far ahead (lest you be unable to adapt your work to the user experience and productivity findings from each iteration).

Crafting your hockey stick

Starting small ensures steady learning and helps contain the risk of a “fail fast” approach. But as you learn the cloud native approach better and string together a series of successful projects, you should expect to ramp up quickly. The below shows Home Depot’s ramp up in their first year:

This chart measures application instances in Pivotal Cloud Foundry which does not map exactly to a single application. What’s important is the general shape and acceleration of this curve as they became more familiar with the approach and the platform.

Another Pivotal customer in the telco space started with about 10 unique applications at first and expanded to 100 applications just over half a year later. These were production applications used to manage millions of customer account management and billing tasks.

How do you start: simple

It all sounds simple, and that’s part of the point. When learning something new, you want to start as simple as possible, but not simpler.
