Developers often know what’s slowing them down better than executives do. If you’re a manager, don’t let that gap grow: go check on what’s actually going on in the organization instead of relying only on what’s reported in your status meetings. Get more details and the free books mentioned at cote.pizza.

And, for more of these tiny videos, see the full playlist. They’re fun!

Why change?

This post is an early draft of a chapter in my book, Monolithic Transformation.


By now, the reasons to improve how your organization does software are painfully obvious. Countless executives feel this urgency in their bones, and have been saying so for years:

“There’s going to be more change in the next five to ten years than there’s been in the last 50” — Mary Barra, CEO, GM

Intuitively, we know that business cycles are now incredibly fast: old companies die out, or are forced to dramatically change, and new companies rise to the top…soon to be knocked down by the new crop of sharp-toothed ankle biters.

Innosight’s third study of companies’ ability to maintain leadership positions estimates that, within the next ten years, 50% of the companies on the S&P 500 will drop off, replaced by competitors and new market entrants. Staying at the top of your market heap is getting harder and harder.

Professor Rita McGrath has dubbed this the age of “transient advantage,” which is an apt way of describing how long — not very! — a company can rely on yesterday’s innovations. A traditional approach to corporate strategy is too slow moving, as she says: “[t]he fundamental problem is that deeply ingrained structures and systems designed to extract maximum value from a competitive advantage become a liability when the environment requires instead the capacity to surf through waves of short-lived opportunities.” Instead, organizations must be more agile: “to win in volatile and uncertain environments, executives need to learn how to exploit short-lived opportunities with speed and decisiveness.”

Software defined businesses

“We’re in the technology business. Our product happens to be banking, but largely that’s delivered through technology.” — Brian Porter, CEO, Scotiabank

We’re now solidly in an innovation phase of the business cycle. Organizations must become faster and more agile in strategy formulation, execution, and adaptation to changing markets. Again and again, IT is at the center of how startups enter new markets (often, disruptively) and how existing enterprises retain and grow market share.

Organizations are seeking to become software defined businesses. In this mode of thinking, custom written software isn’t just a way of “digitizing” analog processes (like making still-lengthy mortgage applications or insurance claims processes “paperless”), but the mission-critical tool for executing and evolving business models.

While software has long played merely a supporting role in the business, successful organizations are now casting software as the star. “It’s no longer a business product conversation, it’s a software product that drives the business and drives the market,” McKesson’s Andy Zitney says, later adding, “[i]t’s about the business, but business equals software now.”

Retail is the most obvious example. There’s an anecdote that Home Depot realized how important innovation was to them when they found out that Amazon sold more hammers than Homer. While other retailers languish, Home Depot grew revenue 7.5% year-over-year in Q4 2017. This isn’t solely due to software, but controlling its own software destiny has played a large part. As CIO Matt Carey says of competition from Amazon, “I don’t run their roadmap; I run my roadmap.”

External competition isn’t the only reason organizations change, especially when it comes to optimizing their internal processes. Duke Energy, for example, realized that creating mobile versions of their internal applications would improve how line-workers coordinated their work in the field. A food service company improved the day-to-day reliability of cooks by introducing apps that walked staff through checklists and videos for food preparation and optimized kitchen staff’s time by better monitoring the temperature of stored food.

These cases can seem pedestrian compared to self-driving cars and AIs that will (supposedly) create cyber-doctors. However, unlike these gee-whiz technologies, these small changes work incredibly fast and have large impacts.

Organizations often focus on the process, not the software

Most large organizations have massive IT departments, and equally large pools of developers working on software. However, many of these organizations haven’t updated their software practices and technologies for a decade or more. The results are predictable, as three years of Cutter Consortium surveys show. The study found that just 30% of respondents felt that IT helped their business innovate. As the chart below shows, this has fallen from about 50 percent in 2013:

Source: “Stat of the Week: What is your IT organization’s role in business innovation?” Cutter Benchmark Review, Vol. 15, №1, July 2015.

This usefulness gap continues because IT departments are using an old approach to software. IT departments still rely on three-tier architectures, process-hardened, dedicated infrastructure, “service management” processes, and functional organizations with long release cycles to (they believe) reliably produce software. I have to assume that this “waterfall” method was highly innovative and better than alternatives at the time…years and years ago.

In trying to be reliable and cost-effective, IT departments have become excellent at process, even projects. In the 1990s, IT was in chaos with a shift from mainframes to Unix, then to Linux and Windows Server. On the desktop, the Windows GUI took over, but then the web browser hit mid-decade and added a whole new set of transitions and worries. Oh, and then there was the Internet, and the tail-end of massive ERP stand-ups that were changing core business processes. With all this chaos, IT often failed at even the simplest tasks, like changing a password. Addressing this, the IT community created a school of thought called IT Service Management (ITSM) that sought to understand and codify each thing IT did for the business, conceptualizing those things as “services”: email, supply chain management, CRM, and, yes, changing passwords. Ticket desks were used to manage each process, and project management practices were erected to lovingly cradle requests to create and change each IT service.

The result was certainly better than nothing, and much better than chaos. However, the ITSM age too often resulted in calcified IT departments that focused more on doing process perfectly than delivering useful services, that is, “business value.” The paladins of ITSM are quick to say this was never the intention, of course. It’s hard to know who’s to blame, or if we just need Jeffersonian table-flipping every ten years to hard reboot IT. Regardless, the traditional way of running IT is now a problem.

Most militaries, for example, can take anywhere from five to 12 years to roll out a new application. In this time, the nature of warfare can change many times over, generations of soldiers can churn through the ranks, and the original requirements can change. Release cycles of even a year often result in the paradox of requirements perfection. In the best-case scenario, the software you specified a year ago is delivered completely to spec, well tested, and fully functional. But now, a year later, new competitor and customer demands nullify the requirements from 12 months ago: that software is no longer needed.

Stretch this out to ten years, and you can see why the likes of the US Air Force are treating transforming their software capabilities as a top priority. As General James “Mike” Holmes, Commander, Air Combat Command, put it, “[y]ears of institutional risk aversion have led to the strategic dilemma plaguing us today: replacing our 30-year-old fleet on a 30-year timeline.”

It’s easy to dismiss this as government work at its worst, clearly nothing like private industry. I’d challenge you, though, to find a large, multinational enterprise that doesn’t suffer from a similar software malaise. This misalignment is clearly unacceptable. IT needs to drastically change or it risks slowing down its organization’s innovation.

Small Batch Thinking

“If you aren’t embarrassed by the first version of your product, you shipped too late.” — Reid Hoffman, LinkedIn co-founder and former PayPal COO

How is software done right, then? Over the past 20 years, I’ve seen successful organizations use the same general process: continuously executing small batches of software, over short iterations that put a rapid feedback loop in place. IT organizations that follow this process are delivering a different type of outcome than a set of requirements. They’re giving their organization the ability to adapt and change monthly, weekly, even daily.

By “small batches,” I mean identifying the problem to solve, formulating a theory of how to solve the problem, creating a hypothesis that can prove or disprove the theory, doing the smallest amount of application development and deployment needed to test your hypothesis, deploying the new code to production, observing how users interact with your software, and then using those observations to improve your software. The cycle, of course, repeats itself.

The small batch loop.

This whole process should take at most a week — hopefully just a day. All of these small batches, of course, add up over time to large pieces of software, but in contrast to a “large batch” approach, each small batch of code that survives the loop has been rigorously validated with actual users. Schools of thought such as Lean Startup reduce this practice to helpfully simple sayings like “think, make, check.” Meanwhile, the Observe, Orient, Decide, Act (OODA) loop breaks the cycle down with even more precision. However you label and chart the small batch cycle, make sure you’re following a hypothesis-driven cycle instead of assuming up-front that you know how your software should be implemented.
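To make the loop concrete, here’s a minimal sketch in Python of one pass through it, assuming a hypothetical feature-flag style rollout — the Hypothesis fields, the exposure rate, and the simulated measurement are all stand-ins for whatever flags and production telemetry your platform actually provides:

```python
# A minimal, hypothetical sketch of one pass through the small batch loop.
# Nothing here comes from a real product; the measurement is simulated so
# the example runs end to end.
import random
from dataclasses import dataclass


@dataclass
class Hypothesis:
    statement: str   # what we believe
    metric: str      # what we observe to prove or disprove the belief
    target: float    # the threshold that counts as success


def run_experiment(hypothesis: Hypothesis, users: list, exposure: float = 0.1) -> bool:
    """Expose the smallest possible change to a slice of users, observe, then decide."""
    exposed = [u for u in users if random.random() < exposure]
    # In a real system this observation comes from production telemetry;
    # here it's simulated.
    observed = sum(random.random() < 0.4 for _ in exposed) / max(len(exposed), 1)
    print(f"{hypothesis.metric}: observed {observed:.2f}, target {hypothesis.target:.2f}")
    return observed >= hypothesis.target


h = Hypothesis(
    statement="Showing only the amount owed keeps users off the phone",
    metric="share of exposed users who never call support",
    target=0.3,
)
if run_experiment(h, users=[f"user-{i}" for i in range(1000)]):
    print("hypothesis held: roll the change out wider")
else:
    print("hypothesis failed: pivot, rework, or drop the change")
```

The point isn’t the code; it’s that the decision to keep, rework, or drop each small batch comes from an observed metric checked against a target you stated up front, not from a hunch.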

As Liberty Mutual’s Chris Bartlow says, “document this hypothesis right because if you are disciplined in doing that you actually can have a more measurable outcome at the end where you can determine was my experiment successful or not.” This discipline gives you a tremendous amount of insight into decisions about your software — features to add, remove, or modify. A small batch process gives you a much richer, fact-based ability to drive decisions.

“When you get to the stoplight on the circle [the end of a small batch loop] and you’re ready to make a decision on whether or not you want to continue, or whether or not you want to abandon the project, or experiment [more], or whether you want to pivot, I think [being hypothesis driven] gives you something to look back on and say, ‘okay, did my hypothesis come true at all,” Bartlow says, “is it right on or is it just not true at all?”

Long-term, you more easily avoid getting stuck in the “that’s the way we’ve always done it” lazy river current. The record of your experiments will also serve as an excellent report of your progress, even something auditors will cherish once you explain that log to them. These well-documented and tracked records are also your ongoing design history that you rely on to improve your software. The log helps make even your failures valuable because you’ve proven something that does not work and, thus, should be avoided in the future. You avoid the cost and risk of repeating bad decisions.

In contrast, a “large batch” approach follows a different process: teams document a pile of requirements up front, developers code away at implementing those features, perhaps creating “golden builds” each week or two (but not deploying those builds to production!), and once all of the requirements are implemented and QA’ed, code is finally deployed to production. With the large batch approach, this pile of unvalidated code creates a huge amount of risk.

This is the realm of multi-year projects that either underwhelm or are consistently late. As one manager at a large organization put it, “[w]e did an analysis of hundreds of projects over a multi-year period. The ones that delivered in less than a quarter succeeded about 80 percent of the time while the ones that lasted more than a year failed at about the same rate.”

No stranger to lengthy projects with big, up-front analysis, the US Air Force is starting to think in terms of small batches for its software as well. “A [waterfall] mistake could cost $100 million, likely ending the career of anyone associated with that decision. A smaller mistake is less often a career-ender and thus encourages smart and informed risk-taking,” said M. Wes Haga.

Shift to user-centric design

If a small batch approach is the tool your organization now wields, a user-centric approach to software design is the ongoing activity you enable. There’s little new about taking a user-centric approach to software. What’s different is how much more efficiently and quickly good user experience and design can be created, thanks to highly networked applications and cloud-automated platforms.

When software was used exclusively behind the firewall and off networks as desktop applications, software creators had no idea how their software was being used. Well, they knew when there were errors because users fumed about bugs. Users never reported how well things were going when everything was working as planned. Worse, users didn’t report when things were just barely good enough and could be improved. This meant that software teams had very little input into what was actually working well in their software. They were left to, more or less, just make it up as they went along.

This feedback deficit was accompanied by slow release cycles. The complex, costly infrastructure of the era required a persnickety process of hardware planning, staging, release planning, and more operations work before deploying to production. Even the developers’ environments, needed to start any work, often took months to provision. Resources were scarce and expensive, and the lack of comprehensive automation across compute, storage, networking, and overall configuration required much slow, manual work.

The result of these two forces was, in retrospect, a dark age of software design. Starting in the mid-2000s, the ubiquity of always-on users and cloud automation removed these two hurdles.

Because applications were always hooked up to the network, it was now possible to observe every single interaction between a user and the software. For example, a 2009 Microsoft study found that only about one third of features added to its web properties achieved the team’s original goals — that is, were useful and considered successful. If you can quickly know which features are effective and ineffective, you can more rapidly improve your software, even eliminating bloat and the costs associated with unused but expensive-to-support code.
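As a rough illustration of that kind of observation, here’s a small Python sketch that tallies per-feature usage from interaction events so rarely used features can be flagged for removal — the event format and the 20% cutoff are invented for illustration, not taken from any particular analytics product:

```python
# Hypothetical sketch: count how often each feature is actually used so that
# unused, expensive-to-support code can be found and questioned.
from collections import Counter

# Pretend these events were streamed from production over the last week.
events = [
    {"user": "u1", "feature": "pay-balance"},
    {"user": "u2", "feature": "pay-balance"},
    {"user": "u3", "feature": "download-full-history"},
    {"user": "u1", "feature": "pay-balance"},
]

usage = Counter(event["feature"] for event in events)
total = sum(usage.values())

for feature, count in usage.most_common():
    share = count / total
    verdict = "keep investing" if share >= 0.2 else "candidate for removal"
    print(f"{feature}: {count} uses ({share:.0%}) -> {verdict}")
```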

By 2007, it was well understood that cloud automation dramatically reduced the amount of manual work needed to deploy software. The problem was evenly distributing those benefits beyond Silicon Valley and companies unfettered by the slow cycles of large enterprises. Just over 10 years later, we’re finally seeing cloud efficiencies spreading widely through enterprises. For example, Comcast realized a 75 percent lift in velocity and time to market when they used a cloud platform to automate their software delivery pipeline and production environment.

When you can gather, and thus analyze, all user interactions, as well as deploy new releases at will, you can finally put a small batch cycle in place. And with this, you can create better user interaction and product design. And as we’ve seen in the past ten years, well-designed products handily win out and bring in large profits.

Good user design practices are numerous and situational. Most revolve around talking with actual users, figuring out what their challenges are, and then iteratively working on ways to solve them.

“Instead of starting with the [preconceived] solution,” Pivotal designer Aly Blenkin says, “we start with a general understanding of the problem. We try unpacking that problem and understanding it from the user’s perspective and using that as a foundation to start building out our designs and our ideas. Once we have that foundation, it allows us to eliminate risk and we do that through a balanced team: so having designers, product managers, engineers, data scientists come together with a multi-disciplinary approach to the way we build software.”

Good design is worth spending time on. As Forrester consistently finds, organizations that focus on design tend to perform better financially than those that don’t. As such, design can be a highly effective competitive tool. Looking at the relationship between good design and revenue growth, Forrester found that organizations that focus on better design have a 14% lead on those that don’t. For example, “in two industries, cable and retail, leaders outperformed laggards by 24 and 26 percentage points, respectively.”

I haven’t done a great job at describing what exactly good design looks like, let alone what the day-to-day work is. Let’s next look at a simple case study with clear business results as an example.

Case Study: no one wants to call the IRS

From “Minimum Viable Taxes: Lessons Learned Building an MVP Inside the IRS,” Dec 2015.

You wouldn’t think big government, particularly a tax collecting organization, would be a treasure trove of good design stories, but the IRS provides a great example of how organizations are reviving their approach to software.

The IRS historically used call centers to provide basic account information and tax payment services. Call centers are expensive and error-prone: one study found that only 37% of calls were answered. Over 60% of people calling the IRS for help were simply hung up on! With the need to continually control costs and deliver good service, the IRS had to do something.

In the consumer space, solving this type of account management problem has long been taken care of. It’s pretty easy in fact; just think of all the online banking systems you use and how you pay your monthly phone bills. But at the IRS, viewing your transactions had yet to be digitized.

When putting software around this, the IRS first thought that they should show you your complete history with the IRS, all of your transactions, as seen in the “before” UI example above. This confused users and most of them still wanted to pick up the phone. Think about what a perfect failure that is: the software worked exactly as designed and intended, it was just the wrong way to solve the problem.

Thankfully, because the IRS was following a small batch process, they caught this very quickly, and iterated through different hypotheses of how to solve the problem until they hit on a simple finding: when people want to know how much money they owe the IRS, they only want to know how much money they owe the IRS. When this version of the software was tested, most people didn’t want to use the phone.

Now, if the IRS had been on a traditional 12- to 18-month cycle (or longer!), think of how poorly this would have gone: the business case would have failed, and you would probably have a dim view of IT and the IRS. But, by thinking about software in an agile, small batch way, the IRS did the right thing, not only saving money, but also solving people’s actual problems.

This project has great results: after some onerous up-front red-tape transformation, the IRS put an app in place which allows people to look up their account information, check payments due, and pay them. As of October 2017, there have been over 2 million users and the app has processed over $440m in payments. Clearly, a small batch success.

Create business agility with small batches

A small batch approach delivers value very early in the process, with incremental releases of features to production. This contrasts with a large batch approach, which waits until the very end to (attempt to) deliver all of the value in one big lump. Of course, delivering early doesn’t mean delivering a year’s worth of work in one week. Instead, it means delivering just enough software to validate your design with user feedback.

Delivering early also allows you to prioritize your backlog, the list of requirements to implement. Organizations delivering weekly often find that a feature has been implemented “enough” and further development on the feature can be skipped. For example, to give people their hotel stay invoice, just allowing them to print a stripped down webpage might suffice instead of writing the code that creates and downloads a PDF. Once further development on that feature is de-prioritized, the team can decide to bring a new feature to the top of the backlog, likely ahead of schedule. This flexibility in priorities is one of the core reasons agile software delivery makes businesses more agile and innovative.

Done properly, a small batch approach also gives you a steady, reliable release train. This concept means that each week, your product teams will deliver roughly the same amount of “value” to production. Here, “value” means whatever changes they make to the software in production: typically, this is adding code that creates new features or modifies existing ones, but it could also be performance or security improvements, or patches that ensure the software runs properly.

A functioning small batch process, then, gives you business agility and predictability. Trying out multiple ideas is now much cheaper, one of the keys to innovating new products and business models. The traditional, larger batch approach often requires millions of dollars in budget, driving the need for high-level approval, and, in turn, the need to wait through endless rounds of meetings and finance decisions, often on the annual budget cycle. This too often killed off ideas, as Allstate’s Opal Perry explains: “by the time you got permission, ideas died.” But with an MVP approach, as she contrasts, “a senior manager has $50,000 or $100,000 to do a minimum viable product” and can explore more ideas.

Case study: the lineworker knows best at Duke Energy

Duke Energy wanted to improve how line-workers coordinated their field-work. At first, the vice president in charge of the unit reckoned that a large map showing all the line-workers’ locations would help him improve scheduling and work queues.

The team working on this went further than just trusting the VP’s first instincts, doing some field research with the actual line-workers. After getting to know the line-workers, they discovered a solution that redefined the business problem. While the VP’s map would be a fine dashboard and give more information to the central office, what really helped was developing a job assignment application for line-workers. This app would let line-workers locate their peers to, for example, partner with them on larger jobs and avoid showing up at the same job. The app also introduced an Uber-like queue of work where line-workers could self-select which job to do next.

In retrospect this change seems obvious, but it became obvious only because the business paid attention to the feedback loop and user research, and then reprioritized its software plans accordingly.

Transforming is easy…right?

Putting small batch thinking in place is no easy task: how long would it take you, currently, to deploy a single line of code, from a whiteboard to actually running in production? If you’re like most people, following the official process, it’d take weeks — just getting on the change review board’s schedule would take a week or more, and hopefully key approvers aren’t on vacation. This single-line-of-code thought experiment will start to flesh out what you need to do — rather, fix — to switch over to doing small batches.

Transforming one team, one piece of software isn’t easy, but it’s often very possible. Improving two applications usually works. How do you go about switching 10 applications over to a small batch process? How about 500?

Supporting hundreds of applications and teams — plus the backing services that support these applications — is a horse of a different color, rather, a drove of horses of many different colors. There’s no comprehensive manual for doing small batches at large scale, but in recent years several large organizations have been stampeding through the thicket. Thankfully, many of them have shared their successes, failures, and, most importantly, lessons learned. We’ll look at their learnings next, with an eye, of course, toward taming your organization’s big batch bucking.

This post is an early draft of a chapter in my book, Monolithic Transformation.

We fear change – staff resistant to digital transformation

Introducing new technology in the workplace is making the majority of people – three in five – feel anxious while the same number have concerns over whether their job is safe when it comes to tasks being automated.

And half of staff express such fears about change when businesses introduce any “digital transformation” projects, the study by Goldsmiths University and YouGov found, and more than a quarter of business leaders said they meet “resistance” from employees.

And:

The report also identified that only 53 per cent of business are investing in digital transformation, despite the same amount saying their industries faced disruption over the next two years.

More from another article:

One of the more significant findings from the research is that younger workers are more fearful of digital disruption affecting their jobs than older staff are.

The study found that the people most fearful of workplace change fall into the 18-24 age range. Employees who said they feared the least about their jobs being disrupted was the 55-plus age group.

Commissioned by Microsoft, but, whatever.

Source: The way businesses are adopting technology is totally spooking staff, Microsoft study finds

 

Scaling DevOps in large organizations – My April Register column


My The Register column this month is on scaling DevOps/cloud-native teams to the entire organization. It’s easy to build one team that does software in a new and exciting way, but how can you move to two teams, five teams, and then hundreds? It goes over an amalgamation of a few case studies and plenty of over-the-top gonzo analogies, per usual.

Check it out, and check out past ones if you’re curious for more.

Often, people just won’t change, or, there is no magic to thaw the frozen mind

Under the auspices of “how can we DevOps around here?” I’ve had numerous conversations with organizations who feel like they can’t get their people to change over to the practices and mind-set that lead to doing better software. The DevOps community has spent a lot of time trying to gently bring along and even brain-hack resistant people, and there are numerous anecdotes and studies showing that doing things in a DevOps/cloud native/agile/etc. way not only makes businesses run more smoothly and profitably, but makes the actual lives of staff better (interested in leaving work at 5pm and not working on weekends?).

Still, for whatever reasons, the idea of “the frozen middle” keeps recurring. In government work, there’s often “frozen ground” as well, with staff digging in their heels.

Most of the chatter on this topic is at the team level, and usually about teams in already flexible, new organizations (like all those logos on scare-slides in presentations from people like Pivotal: UBER! AIRBNB! TESLA! SOFTWARE-IS-EATING-THE-WORLD, INC.!). If you’re already in a company that can change its culture every year, the pain of these “frozen” people is almost nil in comparison to being in government, a 50- to 100-year-old company, or otherwise “in the real world.”

Right here, I’m supposed to establish empathy with these frozen people to get them to change. I should stop calling them “frozen people” and treating them like “resources,” and think of them as people. That is all true, but what do I do after I’ve cavorted with them in empathy splash-pads for months on end and we still can’t get them to configure the firewall so we can do the DevOps? What do you do with enterprise architects who have become specialists in telling you how impossibly fucked up everything is? How do you work with all those “frozen middle-managers” who are acting as a “shit umbrella” (or maybe in this case, a “rainbow umbrella”?) to all your fanciful DevOps notions?

There’s one last-ditch tactic: money and firing. When it comes to changing the culture in a large organization, this is the big hammer that upper-level management has. That is, the middle-manager’s manager is the one who has the tools to fix the problem. By design, the surest, easiest, sanctioned way to change behavior is to change cash pay-outs and other compensation based not only on the performance of people (management and staff), but on their willingness to adapt and change to new, better ways of operating. That is: you reward and punish people based on them doing what you told them to do, or not.

There are all sorts of “little and cheap” things to do instead of bringing out the big hammer:

  • Giving out acrylic trophies, showering people in attention from high-level execs
  • Fame and glory internally and externally
  • T-shirts (no, really! They work amazingly well)
  • Sanctioning goofing off (“we may have a 15 day holiday policy, but, you know, you should leave work at 4 every day and don’t log vacation”)

Matt Curry covers most of these management-hacks in a great talk on changing large organizations. But let’s assume those aren’t working.

At this point, “leadership” (middle-management’s management) needs to reward and punish the unfreezing or freezing. If middle-management changes (“unfreezes”), they get paid more. If they don’t unfreeze, at least their pay needs to freeze, and at “best” they need to get quarantined or fired (as a tone-note: back when I was young and the idea of getting “laid-off” or “RIFed” started being used, I reprogrammed myself to only ever say “fired” in reference to losing your job: those soft, BS words obscure the fact that you’ve been, well, fired – so, feel free to s/// in whatever more HR-correct phrase you like).

If you look at the patterns of change in most companies going cloud native, they do exactly this. Organizations that successfully change so that they can start doing software better set up separate “labs” where the “digital transformation” happens. These are often logically and physically separate: they often have new names and are often “off-site” or well hidden, in the skunk-works style. They do this because if they stay mentally and physically in the “old” system, they end up getting frozen simply due to proximity. The long-term, managerial strategy here is an application of the strangler pattern to the “legacy” organization: eventually, the new “labs” organization, through value to the business and sheer size, becomes the “normal” organization.

The staff process they use – almost as a side-effect! – is to only move over people who are willing to change. I’d argue that “skills” and aptitude have little to do with the selection of people: any person in IT can learn new ways of typing and right clicking if the company trains them and helps them. We’re all smart, here.

What happens is that you cull the frozen people, either leaving them behind in the old organization (where there’s likely plenty of work to be done keeping the legacy systems alive!), or just outright firing them. That’s all brutal and not in the rainbows and sandals spirit of DevOps, but it’s what I see happening out there to successfully change the fate of large organizations’ IT departments.

You don’t really hear about this in cheery keynotes at conferences, but at dark bars and (I shit you not) in quiet parking garages I’ve talked with several executives who paint this unfreezing-by-attrition process as something that’s worked in their organization. One of them even said “we should have gotten rid of more people.”

This pattern gets more difficult in highly labor-regulated industries – like government work and government contractors – but setting up brand new organizations to shed the frozen old ways is at least something to try.

And, barring all else, as I like to say, if nothing is working, and people won’t actually change, Pivotal is always hiring.

If that’s too grim, don’t despair! My buddies take a much more light-hearted, hopeful whack at the problem in a recent discussion of ours: