🗂 Link: Goldman Sachs brings on co-CIO, CTO

The strategy has four parts, according to the presentation. The investment bank wants to:

Offer a digital client experience

Increase automation

Build scalable infrastructure

Make room for innovation

To do so, Goldman Sachs is putting 45% of its $4 billion engineering budget in 2019 toward investment. The other 55% will be used to run the bank.

In the financial services sector, banks are making huge investments in technology. JPMorgan allocated $10.8 billion last year and has earmarked $11.4 billion for technology in 2019. Bank of America spent $10 billion on tech in 2018.

Source: Goldman Sachs brings on co-CIO, CTO

🗂 Link: Seeing Around Corners — a book review

A competitive arena, as opposed to an industry, is used as the primary lens through which to understand the world and, therefore, what the effects of a potential inflection point might be. Think of an arena as a definition of the setting where customer need connects with company solution to create value. An arena can be seen as a super-sized, strategic use case that enables an organization to concentrate on what is really going on. It provides a tool with sufficient scope for meaningful change without falling into strategic paralysis. This is one idea that makes the book worth the read.

Source: Seeing Around Corners — a book review

The Strategy Bottleneck

This is a draft excerpt from a book I’m working on, tentatively titled The Business Bottleneck. If you’re interested in the footnotes, leaving a comment, and the further evolution of the book, check out the Google Doc for it. Also, the previous excerpt, “The Finance Bottleneck.”

Digital transformation is a fancy term for customer innovation and operational excellence that drive financial results. John Rymer & Jeffrey Hammond, Forrester, Feb 2019.

The traditional approach to corporate strategy is a poor fit for this new type of digital-driven business and software development. Having worked in corporate strategy, I find that fitting its function to an innovation-led business is difficult. If strategy is done on annual cycles, predicting and prescribing what the business should be doing over the next 12 months, it seems a poor match for the weekly learning you get from a small batch process. Traditionally, strategy defines where a company focuses: which market, which part of the market, what types of products, how products are sold, and, therefore, how money should be allocated. The strategy group also suggests mergers and acquisitions, M&A, that can make their plans and the existing business better. If you think of a company as a portfolio of businesses, the strategy group is constantly assessing each business in that portfolio to figure out whether to buy, sell, or hold.

The dominant strategy we care about here goes under the name “digital transformation.” Sort of. The idea that you should use software as a way of doing business isn’t new. A strategy group might define new markets and channels where software can be used: all those retail omnichannel combinations, new partnerships in open banking APIs, and new products. They also might suggest businesses to shut down or, more likely, divest to other companies and private equity firms, but that’s one of the less spoken about parts of strategy: no one likes the hand that pulls the guillotine cord.

A moment of pedantry

First, pardon a bit of strategy-splaining. Having a model of what strategy is, however, is a helpful baseline for discussing how strategy needs to change to realize all these “digital transformation” dreams. Also, I find that few people have a good grasp of what strategy is, nor of what I think it should be.

I like to think of all “markets” as flows of cash, big tubes full of money going from point A to point B. For the most part, this is money flowing from a buyer’s wallet to a merchant. A good strategy figures out how to grab as much of that cash as possible, either by being the end-point (the merchant), reducing costs (the buyer), or doing a person-in-the-middle attack to grab some of that cash. That cash grabbing is often called “participating in the market.”

When it comes to defining new directions companies can take, “payments” is a good example. We all participate in that market. Payments is one of the more precise names for a market: tools people use to, well, pay for things.

First, you need to wrap your head around the payments industry. This largely means looking at cashless transactions, because using cash requires no payment tool. “Most transactions around the world are still conducted in cash,” The Economist explains, “However, its share is falling rapidly, from 89% in 2013 to 77% [in 2019].” There’s still a lot of cash used, oddly, in the US, but that’s changing quickly, especially in Asia. In China, for example, The Economist goes on, “digital payments rose from 4% of all payments in 2012 to 34% in 2017.” That’s a lot of cash shifting and now shooting through the payments tube. So, let’s agree that “payments” is a growing, important market that we’d like to “participate” in.

There are two basic participants here:

  1. New companies enter the market by creating new ways of paying for things that compete with existing ways to pay for things. For example, new entrants are services like Alipay, Bunq, Apple Pay, and GrabPay. While this is the domain of startups in most people’s minds, large companies play this role often.
  2. Existing companies both defend their existing businesses and create new ways of paying for things. For example, Dutch banks launched iDEAL several years ago. Existing companies often partner with new entrants, for example: Goldman Sachs provides the backend for Apple Pay and Maybank partnered with GrabPay. Incumbents can also accomplish the second goal by just acquiring the new companies: in general banking, Goldman Sachs acquired Honest Dollar to help it get into consumer banking.

“Strategy,” then, is (1.) deciding to participate in these markets, and, (2.) the exact way these companies should participate, how they grab money from those tubes of cash. Defining and nailing strategy, then, is key to success and survival. For example, an estimated 3.3 trillion dollars flowed through the credit card tube of money in 2016. As new ways of processing payments gain share, they grab more and more from that huge tube of cash. Clearly, this threatens the existing credit card companies, all of whom are coming up with new ways to defend their existing businesses and new payment methods.

As an example of a general strategy for incumbents, a recent McKinsey report on payments concludes:

The pace of digital disruption is accelerating across all components of the GTB value chain, placing traditional business models at risk. If they fail to pursue these disruptive technologies, banks could become laggards servicing less lucrative portions of the value chain as digital attackers address the friction points. To avoid this fate, banks must embrace digitized transaction banking with a goal of eliminating discrepancies, simplifying payments reconciliation, and streamlining infrastructure to operate profitably at lower price points. They must take proactive strategic steps to leverage their current favorable market position, or watch new market entrants pass them by.

That is:

  1. New methods of payment, from those pesky “tech companies,” will destroy your business.
  2. So you should create some new (not credit card) payment methods.
  3. At the same time make your back-end systems more efficient so they can drive down costs for your existing credit card based business, increasing your profit margins despite overall revenue declining as “tech companies” grab more and more money out of your cash-tubes.
  4. Also, take advantage of your existing capabilities in security, fraud handling, and governance compliance to differentiate both your new, not credit card payment offerings and defend your existing credit card business.

That’s pretty good strategic direction and it comes, as you can see in the PDF, from a very deep analysis of market conditions, and trends – there’s even a Mekko chart!

[Chart: McKinsey payments Mekko chart]
Source: “Global payments 2018: A dynamic industry continues to break new ground,” McKinsey, Oct 2018.

Now, how you actually put all that into practice is what strategy is. Each company and industry has its own peccadilloes. The reason McKinsey puts out all those fine charts is to do the pre-sales work of getting you to invite them in and ask, “yes, but how?”

Getting over digital transformation fatigue

“Software is eating the world.” Pronouncements like this chestnut are by now obvious; the many Cassandras repeating them have grown hoarse over the years. As one executive put it:

We came to the realization that, ultimately, we are a technology company operating in the financial-services business. So, we asked ourselves where we could learn about being a best-in-class technology company. The answer was not other banks, but real tech firms. 

This type of thinking has gone on for years, but change in large organizations has been glacial. If you search for the phrase “digital transformation” you’ll find daily sponsored posts on tech news sites preaching this, as they so often say, “imperative.” They’re long on blood-curdling pronouncements and short on explaining what to actually do.

We’re all tired of this facile, digital genuflection. But maybe it’s still needed. 

If surveys and sentiment are any indication, digital strategies are not being rolled out broadly across organizations, as the survey below suggests. It shows that the part of the business that creates the actual thing being sold, product design and development, is being neglected:

[Chart: digital transformation focus by business process]
Source: answers to the question “Which business processes are the focus of your firm’s most recent digital transformation?” Data from “Kick-Start Your Digital Business Strategy,” Forrester, June 2019.

As with all averages, this means that half of the firms are doing better…and half of them worse. Curiously, IT is getting most of the attention here: as I say, the IT bottleneck is fixed. My anecdotes-as-data studies match up with the attention customer service is getting: as many of my examples here show, like the Orange one, early digital transformation applications focus on moving people from call centers to apps. And, indeed, “improving customer experience” is one of the top goals of most app work I see.

But it drops off after that. There’s plenty of room for improvement and much work to be done by strategy groups to direct and decide digital strategy. Let’s look at a two-part toolkit for how they might do it:

  1. Sensing your market – how to observe your market to time and plan changes.
  2. Validating strategy – a new method to safely and accurately define what your organization does.

Sensing your market

Changing enterprise strategy is costly and risky. Done too early, and you deliver perfectly on a vision but are unable to scale to more customers: the mainstream is not yet “ready.” Done too late, and you’re in a battle to win back customers, often with price cutting death spirals and comically disingenuous brand changes: you don’t have time for actual business innovation, so you put lipstick on discount pigs.

An innovation strategy relies on knowing the right time to enter the market. You need a strategy tool to continually sense and time the market. Like all useful strategy tools, it not only tells you when to change, but also when to stay the same, and how to prioritize funding and action. Based on our experience in the technology industry, we suggest starting with a simple model drawn from numerous tech market disruptions and shifts. This model is Horace Dediu’s analysis of the post-2007 PC market. 2007, of course, is the year the iPhone was introduced. I’m not sure what to call it, but the lack of a label doesn’t detract from its utility. Let’s call it The Dediu Cliff:

The Dediu Cliff. Source: “The rise and fall of personal computing,” Jan 2012, Horace Dediu.

To detect when a market is shifting, Dediu’s model emphasizes looking beyond your current definition of your market. In the PC market, this meant looking at mobile devices in addition to desktops and laptops. Microsoft Windows and x86 manufacturers had long locked down the definition and structure of the PC market. Analyst firms like IDC tracked the market based on that definition and attempted disruptors like Linux desktop aspirants competed on those terms.

When the iPhone and Android were introduced in 2007, the definition of the PC market changed without much of anyone noticing. In a short 10 years, these “phones” came to dominate the “PC” market by all measures that mattered: time spent staring at the screen, profits, share increases, corporate stability and high growth, and customer joy. Meanwhile, traditional PCs were seen mostly as work horses, as commodities like pens and copy machines bought on refresh cycles with little regard to differentiation.

Making your own charts will often require some art. For example, another way to look at the PC market changing is to look at screen time per device, that is, how much time people spend on each device:

[Chart: screen time by device]
Screen time, or “engagement” as measured by web traffic. Notice that the analysis of the US market share has iOS leading above Android. Source: Statcounter, queried 29 July 2019.

You have to find the type of data that fits your industry and the types of trends you’re looking to base strategy on. Those trends could be core assumptions that drive how your daily business functions. For example, many insurance businesses are still based on talking with an agent. So, in the insurance industry, you might chart online vs. offline browsing and buying:

[Chart: online vs. offline insurance browsing and buying]
Source: Gartner L2, July 2019.

While more gradual than Dediu’s PC market chart, this slope will still allow you to track trends. Clearly, some companies aren’t paying attention to that cliff: as the Gartner L2 research goes on to say, once people look to go from quote to purchasing, only 38% of insurance companies allow for that purchase online.

Gaining this understanding of shifts in the very definition of your market is key. Ideally, you want to create the shift. If not, you want to enter the market once the shift is validated, as early as possible, even if the new entrant has single digit market share. Deploying your corporate resources (time, attention, and money) often takes multiple years despite the “overnight success” myths of startups. 

Timing is everything. Nailing that, per industry, is fraught, especially in highly regulated industries like banking, insurance, pharmaceuticals, and other markets that can use regulations to, uh, artificially bolster barriers to entry. Don’t think that high barriers to entry will save you though: Netflix managed to wreak havoc in the cable industry, pushing top telcos even further into being dumb pipes and moving them to massive content acquisitions to compete.

I suggest the following general tactics to keep from falling off The Dediu Cliff:

  1. Know your customer – study their Jobs to be Done, maintain a good, “speaking” relationship with them.
  2. Consider Cassandras that use footnotes – track trend spotting, especially year over year (over year, over year).
  3. Try new things – experiment and incubate new ideas to continually test and participate in the market.

We’ll take a look at each of these, and then expand on how the third is generalized into your core innovation function.

Know your customer

Measuring what your customer thinks about you is difficult. Metrics like NPS and churn give you trailing indicators of satisfaction, but they won’t tell you when your customer’s expectations, and thus the market, are changing.

You need to understand how your customer spends their time and money, and what “problems” they’re “solving” each day. For most strategy groups, getting this hands-on is too expensive and not in their skill set. Frameworks like Jobs to Be Done and customer journey mapping can systematize this research. As we’ll see below, using a small batch process to implement your applications allows you to direct strategy by observing how your customers actually interact with your business day-to-day.

Case Study: “The front door of the store is in your pocket,” Home Depot

In the ever challenging retail world, The Home Depot has managed to prosper by knowing their customer in detail. The company’s omnichannel strategy provides an example. Customers expect “omnichannel” options in retail: the ability to order products online, buy them in-store, order online but pick up in-store, return items from online in-store…you get the idea. Accomplishing all of those tasks seems simple from the outside, but integrating all of those inventory, supply-chain, and payment systems is extremely difficult. Nonetheless, as Forrester has documented, The Home Depot’s concerted, hard-fought work to get better at software is delivering on their omnichannel strategy: “[a]s of fiscal year 2018, The Home Depot customers pick up approximately 50% of all online orders in the store,” along with 28% growth in online sales.

Advances in this business have been fueled by intimate knowledge of The Home Depot’s customers and in-store staff, gained by actually observing and talking with them. “Every week, my product and design teams are in people’s homes or [at] customer job sites, where we are bringing in a lot of real-time insights from the customers,” Prat Vemana, The Home Depot’s Chief Digital Officer, said at the time.

The company focuses on customer journeys, the full, end-to-end process of customers thinking about, researching, browsing, acquiring, installing, and then using a product. For example, to home in on improving the experience of buying appliances, the product team working on this application spent hours in stores studying how customers bought appliances. They also spent time with customers at home to see how they browsed appliance options. The team also traveled with delivery drivers to see how the appliances are installed.

Here, we see a company getting to know their customer and their problems intimately. This leads to new insights and opportunities to improve the buying experience. In the appliances example, the team learned that customers often wanted to see the actual appliance and would waste time trying to figure out how they could see it in person. So, the team added a feature to show which stores had the appliances they were interested in, thus keeping the customer engaged and moving them along the sales process. 

Spanning all these parts of the customer journey gives the team research-driven insights into how to deliver on The Home Depot’s omnichannel strategy. As customers increasingly start research on their phone or in social media, go in-store to browse, order online, pick up in-store, have items delivered, and so forth, many industries are figuring out their own types of omnichannel strategies.

All of those different combinations and changing options will be a fog to strategy groups unless they start to get to know their customers better. As Allianz’s Firuzan Iscan puts it: “When we think from the customer perspective, most of our customers are hybrid customers. They are starting in online, and they prefer an offline purchasing experience. So that’s why when we consider the journey end to end, we need to always take care of online and offline moments of this journey. We cannot just focus on online or offline.”

Corporate strategy didn’t sign up for this

The level of study done at The Home Depot may seem absurd for the strategy team to do. Getting out of the office may seem like a lot of effort, but the days spent doing it will give you a deep, ongoing understanding of what your customers are doing, how you’re fulfilling their needs, and how you can better their overall journey with you to keep their loyalty and sell more to them. Also, it’s a good excuse to get out of beige cubicle farms and dreary conference rooms. Maybe you can even expense some lunches!

As we’ll see, when the product teams building these applications are put in place, strategy teams will have a rich source of this customer information. In the meantime, if you’re working on strategy, you’d be wise to fill that gap however you can. We’ll discuss one method next, listening to those people yelling and screaming doom and disruption.

Consider Cassandras

An early, ignored attempt to warn about that “book seller” in Seattle.

In Western mythos, Cassandra was cursed to always have 100% accurate prophecies but never be believed. For those of us in the tech industry, cloud computing birthed many Cassandras. Now, in 2019, the success of public cloud is indisputable. The on-premises market for hardware and software is forever changed. Few believed that a “book seller” would do much here or that Microsoft could reinvent itself as an infrastructure provider, turning around a company that was easily dismissed in the post-iPhone era.

Despite this, as far back as 2007, early Cassandras were pointing out that software developers were using AWS in increasing numbers. Early on, RedMonk made the case that developers were the kingmakers of enterprise IT spend. And, if you tracked developer choice, you’d see that developers were choosing cloud. More Cassandras emerged over the years as cloud market share grew. Traditional companies heard these Cassandras, some eventually acting on those warnings.

[Chart: cloud CAPEX by company]
Source: “Follow the CAPEX: Cloud Table Stakes 2018 Edition,” Charles Fitzgerald, February 2019.

Finally, traditional companies took the threat seriously, but as Charles Fitzgerald wickedly chronicled, it was too late. As his chart above shows, entering the public cloud market at this stage would cost hundreds of billions of dollars, each year, to catch up. The traditional companies in the infrastructure market failed to sense and act on The Cliff early enough – and these were tech companies, those outfits that are supposed to outmaneuver and outsmart the market!

Now, don’t take this to mean that these barriers to entry are insurmountable. Historically, almost every tech leader has been disrupted. That’s what happened in this market. There’s no reason to think that cloud providers are immune. We just don’t know when and how they’ll succumb to new competitors or, like Microsoft, have to reinvent themselves. What’s important, rather, is for these companies to properly sense and respond to that threat.

There are similar, though rearview-mirror-oriented, stories in many industries. TK( listing or summarizing one in a non-tech company would sure be cool here ).

To consider Cassandras, you need a disciplined process that looks at year over year trends, primarily how your customers spend their time and money. Mary Meeker’s annual slide buffet is a good example: where are your customers spending their time? RedMonk’s analysis of developers is another example. A single point in time Cassandra is not helpful, but a Cassandra that reports at regular intervals gives you a good read on momentum and when your market shifts.

Finally, putting together your own Dediu Cliff can self-Cassandraize you. Doing this can be tricky as you need to imagine what your market will look like – or several scenarios. You’ll need to combine multiple market share numbers from industry analysts into a Cliff chart, updating it quarterly. Having managed such a chart, I can say it’s exhilarating (especially if someone else does the tedious work!) but can be disheartening when quarter by quarter you’re filed into an email inbox labeled “Cassandras.”
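To make that chart-building concrete, here’s a minimal sketch, in Python, of how you might build a Dediu-Cliff-style chart: pull together unit (or revenue, or screen-time) figures for the old and new definitions of your market, recompute share against the expanded definition, and plot the crossover. The numbers below are invented placeholders, not Dediu’s or any analyst’s data; swap in whatever figures you update each quarter.

```python
# A sketch of a Dediu-Cliff-style chart: share is computed against the
# *expanded* market definition (incumbent + new entrant), which is where
# the cliff shows up even while the incumbent's own numbers look flat.
# All figures below are made-up placeholders.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.DataFrame({
    "year":        [2013, 2014, 2015, 2016, 2017, 2018, 2019],
    "incumbent":   [310, 305, 290, 270, 260, 255, 250],  # e.g. PCs shipped (millions)
    "new_entrant": [190, 260, 330, 380, 420, 440, 450],  # e.g. smartphones shipped (millions)
})

total = data["incumbent"] + data["new_entrant"]
data["incumbent_share"] = data["incumbent"] / total
data["new_entrant_share"] = data["new_entrant"] / total

ax = data.plot(x="year", y=["incumbent_share", "new_entrant_share"], marker="o")
ax.set_ylabel("share of the expanded market")
ax.set_title("Market share against the redefined market (illustrative data)")
plt.show()
```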

Thus far, our methods for sensing the market have been research methods, even “assume no friction” methods. Let’s look at the final method, one that relies on actually doing work, and then at how it expands into the core of the new type of strategy and breaking The Business Bottleneck.

Try new things

The best way to understand and call market shifts is to actually be in the market, both as a customer and a producer. Being a customer might be difficult if you’re, for example, manufacturing tractors, but for many businesses being a customer is possible. It means more than trying your competitor’s products. To the point of tracking market redefinition, you want to focus on the Jobs to Be Done, problems customers are solving, and try new ways of solving those problems. If this sounds like it’s getting close to the end goal of innovation, it’s because it is: but doing it in a smaller, lower cost and lower risk way.

For example, if you’re in the utility business, become a customer of in-home IoT devices and learn how that technology can be used to steal your customer relationship, further pushing your business into a commodity position. In the PC market, some executives at PC companies made it a point of pride to never have tried, or “understood,” the appeal of small screens – that kind of willful, proud ignorance isn’t helpful when you’re trying to be innovative.

You need to know the benefits of new technologies, but also the suffering your products cause. There’s a story that management at US car manufacturers were typically given a company car and free mechanical service during the day while their car was parked in the company parking lot. As a consequence, they never experienced their cars’ quality problems first-hand. As Nassim Taleb would put it, they didn’t have any skin in the game…and they lost the game. Regularly put your skin in the game: rent a car, file an insurance claim, fill out your own expenses, travel in coach, and eat at your in-store delis.

Key to trying new things is being curious, not only in finding these things, but in thinking up new products that improve on and solve the problems you are, now, experiencing first-hand.

The goal of trying new things is to experiment with new products, using them to direct your strategy and way of doing business. If you have the capability to test new products, you can systematically sense changes in market definition. Tech companies regularly float new ideas as test products to sense customer appetite and, thus, market redefinitions. If you’ve ever used an alpha or beta app, or an invite-only app, you’ve played a part in this process. These are experiments, ways the company tries new things. We laud companies like Google for their innovation successes, but we easily forget the long list of failed experiments. The website killedbygoogle.com catalogs 171 products that Google killed. Not all of these were “experiments”; some were long-running products that were killed off. Nonetheless, once Google sensed that an experiment wasn’t working or a product was no longer viable, they killed it, moving on.

When it comes to trying things, we must be very careful about the semantics of “failure.” Usually, “failure” is bad, but when it comes to trying new things, “failure” is better thought of as “learning.” When you fail at something, you’ve learned something that doesn’t work. Feeling your way through foggy, frenetic market shifts requires tireless learning. So, in fact, “failing” is often the fastest way to success. You just need a safe, disciplined system to continually learn.

Validating strategy

Innovation requires failure. There are few guarantees that all that failure will lead to success, but without trying new things, you’ll never succeed at creating new businesses and preventing disruption. Historically, the problem with strategy has been the long feedback cycles required to tell you if your strategy “worked.”

First, budgets are allocated annually, meaning your strategy cycle is annual as well. Worse, to front-load the budget cycle, you need to figure out your strategy even earlier. Most of the time, this means the genesis of your current strategy was two, even three years ago. The innovation and business rollout cycles at most organizations are huge. TK( some long roll out figure). It can be even worse: five years, if not ten, in many military projects. Clearly, in “fast moving markets,” to use the cliché, that kind of idea-to-market timespan is damaging. Competing against companies that have shorter loops is key for organizations now. As one pharmacy executive put it, taking six months to release competitive features isn’t much use if Amazon can release them in two months.

Your first instinct might be to start trying many new things, creating an incubation program as a type of beta factory of your own. The intention is good, but the risks and costs are too high for most large organizations. Learning-as-failure is expensive and can look downright stupid and irresponsible to shareholders. Instead, you need a less costly, lower-risk way to fail than throwing a bunch of things at the wall and seeing what sticks.

The small batch cycle

The small batch cycle.

Many organizations use what we’ll call the small batch cycle. This is a feedback loop that relies on four simple steps:

  1. Identify a problem to solve.
  2. Create a theory of how to solve the problem.
  3. Validate this theory by trying it out in real life.
  4. Analyze the results to see if the theory is valid or not.

This is, essentially, the scientific method. The lean startup method and, later, lean design have adapted this model to software development. This same loop can be applied “above the code” to strategy. This is how you can use failure-as-learning to create validated strategy and, then, start innovating like a tech company.
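Just to make the loop concrete, here’s an illustrative sketch of those four steps in Python. The problems, theories, and the validate() stand-in are all invented for illustration; in practice, “validate” means shipping a small change to real customers and “analyze” means looking at what they actually did.

```python
# An illustrative sketch of the small batch cycle as a loop. Nothing here is
# a real framework; it's just the four steps written down as code.
from dataclasses import dataclass

@dataclass
class Experiment:
    problem: str  # 1. the problem to solve
    theory: str   # 2. how we think we can solve it

def validate(experiment: Experiment) -> bool:
    """3. Try the theory out in real life.

    Stand-in: replace with a real measurement, e.g. did the change move
    the metric you care about for actual users?
    """
    return False  # placeholder result

backlog = [
    Experiment("customers abandon the appliance page", "show in-store availability"),
    Experiment("call center volume is too high", "surface recent bills in the app"),
]

validated_strategy = []
for exp in backlog:
    if validate(exp):                  # 4. analyze: keep the theories that held up...
        validated_strategy.append(exp)
    # ...and treat the ones that didn't as learning, not waste
```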

As described above, due to long cycles, most corporate strategy is theoretical, at worst PowerPoint arts and crafts with cut-and-pasting from a few web searches. The implementation details can become dicey, and then there’s seeing if customers will actually buy and use the product. In short, until the first customer buys and uses the “strategy,” you’re carrying the risk of wasting all your budget and time on this strategy, often a year or more.

That risk might pay off, or it might not. Not knowing either way is why it’s a risk. A type of corporate “double up to catch up” mentality adds to the risk as well. Because the timeline is so long, the budget so high, and the risk of failure so large, managers will often seek the biggest bang possible to make the business case’s ROI “work.” Taking on a year’s time and a $10m budget must have a significant payoff. But with such high expectations, the risk increases because more must be done, and done well. And yet, the potential downside is even higher as well.

This risky mentality has been unavoidable in business for the most part – building factories, laying phone lines, manufacturing, etc. require all sorts of up-front spending and planning. Now, however, when your business relies on software, you can avoid these constraints and better control the risks. Done well, software costs relatively little and is incredibly malleable. It’s, as they say, “agile.” You just need to connect the agile nature of software to strategy. Let’s look at an example.

Case Study: Most viable strategy: Duke Energy validates RFID strategy

As an energy company, Duke Energy has plenty of strategizing to do around issues like: disintermediation from IoT devices, deregulation, power needs for electric vehicles, and improving customer experience and energy conservation. Duke has a couple of years of experience being cloud-native, getting far enough along to open up an 83,000-square-foot labs building housing 400 employees working in product teams.

They’re applying the mechanics of small batches and agile software to their strategy creation. “Journey teams” are used to test out strategies before going through the full-blown, annual planning process. “They’re small product-type teams led by design thinkers that help them really map out that new [strategic] journey and then identify [what] are the big assumptions,” Duke’s John Mitchell explained. Once identified, the journey teams test those assumptions, quickly proving or disproving the strategy’s viability.

Mitchell gives a recent example: labor is a huge part of the operating costs for a nuclear power plant, so optimizing how employees spend their time can increase profits and reduce the time it takes to address issues. For safety and compliance reasons, employees work in teams of five on each job in the plant, typically scheduled in hour-long blocks. Often, the teams finish in much less than an hour, creating spare capacity that could be used on another job.

If Duke could more quickly, in near real-time, move those teams to new jobs they could optimize each person’s time. “So the idea was, ‘How can we use technology?’” Mitchell explains. “What if we had an RFID chip on all of our workers? Not to ‘Big Brother’ check in on them,” he quickly clarifies, but to better allocate the spare capacity of thousands of people. Sounds promising, for sure.

Not so fast though, Mitchell says: “You need to validate, will that [approach] work? Will RFID actually work in the plant?” In a traditional strategy cycle, he goes on, “[You’d] order a thousand of these things, assuming the idea was good.” Instead, Duke took a validated strategy approach. As Mitchell says, they instead thought, “let’s order one, let’s take it out there and see if it actually works in plant environment.” And, more importantly, can you actually put in place the networking and software needed: “Can we get the data back in real time? What do we do with data?” The journey team tested out the core strategic theories before the company invested time and money into a longer-term project and set of risks.

Key to all this, of course, is putting these journey teams in place and making sure they have the tools needed to safely and realistically test out these prototypes. “[T]he journey team would have enough, you know, a very small amount of support from a software engineer and designer to do a prototype,” Mitchell explains. “[H]opefully, a lot of the assumptions can be validated by going out and talking to people,” he goes on, “and, in some cases there’s a prototype to be taken out and validated. And, again, it’s not a paper prototype—unless you can get away with it—[it’s] working software.”

Once the strategic assumptions are validated (or invalidated), the entire company has a lot more confidence in the corporate strategy. “Once they … validate [the strategy],” Mitchell explains, “you’ve convinced me—the leader, board, whatever—that you know what you’re talking about.”

It’s software

With software, as I laid out in Monolithic Transformation, the key ways to execute the loop are short release cycles, smaller amounts of code in each release, and the infrastructure capabilities to reliably reverse changes and maintain stability if things go wrong. 

These IT changes lead directly to positive business outcomes. Using a small batch cycle increases the design quality and cost savings of application design, directly improving your business. First, the shorter, more empirical, customer-centered cycles mean you better match what your customers actually want to do with your software. Second, because your software’s features are driven by what customers actually do, you avoid overspending on your software by putting in more features than are actually needed. 

For example, The Home Depot kept close to customers and “found that by testing with users early in the first two months, it could save six months of development time on features and functionality that customers wouldn’t use.” That’s a net four months of time and money saved, but also functionality in the software that better matches what customers want.

As you mature, these capabilities lead to even wider abilities to experiment with new features like A/B testing, further honing down the best way to match what your software does to how your customers want to use it, and, thus, engage with your business. TK( quick example would be nice here ).
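As a generic illustration of what that experimentation machinery boils down to (the conversion counts and the 5% significance threshold below are invented, not from The Home Depot or anyone else mentioned here), an A/B comparison can be as simple as:

```python
# A minimal, hedged sketch of an A/B test readout: is the new variant (B) of a
# feature converting differently than the current one (A)? Counts are invented.
from scipy.stats import chi2_contingency

# rows: variant A, variant B; columns: converted, did not convert
observed = [
    [480, 9520],   # A: 480 conversions out of 10,000 sessions
    [545, 9455],   # B: 545 conversions out of 10,000 sessions
]

chi2, p_value, dof, expected = chi2_contingency(observed)
if p_value < 0.05:  # illustrative threshold
    print(f"the difference looks real (p={p_value:.3f}); roll B out more widely")
else:
    print(f"no clear winner yet (p={p_value:.3f}); keep the experiment running")
```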

Software is the reason we call tech companies tech companies. They rely on software to run, even define, their business. Thus, it’s TK( maybe? ) software strategy that we need to look at next.

The Finance Bottleneck

This is a draft excerpt from a book I’m working on, tentatively titled The Business Bottleneck. If you’re interested in the footnotes, leaving a comment, and the further evolution of the book, check out the Google Doc for it.

The Business Bottleneck

All businesses have one core strategy: to stay alive. They do this by constantly offering new reasons for people to buy from them and, crucially, stay with them. Over the last decade, traditional businesses have been freaked by competitors that are figuring out better offerings and stealing those customers. The super-clever among these competitors innovate entirely new business models: hourly car rentals, next day delivery, short term insurance for jackets, paying for that jacket with your phone, banks with only your iPhone as a branch, incorporating real-time weather information into your reinsurance risk analysis. 

Source: Gartner L2, July 2019.

In the majority (maybe all) of these cases, surviving and innovating is done with small business and software development cycles. The two work hand-in-hand and are ineffective without each other. I’d urge you to think of them as the same thing. Instead of using PowerPoint and Machiavellian meeting tactics as their tools, business development and strategy now use software.

You innovate by systematically failing weekly, over and over, until you find the thing people will buy and the best way to deliver it. We’ve known this for a long time and enshrined it in processes like The Lean Startup, Jobs to Be Done, agile development and DevOps, and disruption theory. While these processes are known and proven, they’ve hit several bottlenecks in the rest of the organization. In the past, we had IT bottlenecks. Now we have what I’ve been thinking of as The Business Bottleneck. There are several of them. Let’s start by looking at the first and, thus, most pressingly damaging one: the bottleneck that cuts off business health and innovation before it even starts, finance.

Most software development finance is done wrong and damages business. Finance seeks to be accurate, predictable, and works on annual cycles. This is not at all what business and software development is like. 

Business & software development is chaos

Software development is a chaotic, unpredictable activity. We’ve known this for decades but we willfully ignore it, like the advice to floss each day. Mark Schwartz has a clever take on the Standish software project failure reports. Since the numbers in these reports stay basically the same each year, the chart below shows that software is difficult and that we’re not getting much better at it:

[Chart: Standish project outcomes by year]
Source: built from excerpts from the 2009 study and 2015 study.

What this implies, though, is something even more wickedly true: it’s not that these projects failed, it’s that we had false hopes. In fact, the red and yellow in the original chart actually show that software performs consistently with its true nature. Let me rework the chart to show this:

[Chart: the same Standish data, reframed]
Source: built from excerpts from the 2009 study and 2015 study.

What this second version illustrates is that the time and budget it takes to get software right can’t be predicted with any useful accuracy. The only useful accuracy is knowing that you’ll be wrong in your predictions. We call it software engineering, and even more accurately “development,” because it’s not scientific. Science seeks to describe reality, to be precise and correct – to discover truths that can be repeated. Software isn’t like that at all. There’s little science to what software organizations do; there’s just the engineering mentality of doing what works with the time and budget at hand.

Source: from Michael Alba.

What’s more, business development is chaotic as well. Who knows what new business idea, what exact feature will work and be valuable to customers? Worse, there is no science behind business innovation – it’s all trial and error, constantly trying to both sense and shape what people and businesses will buy and at what price. Add in competitors doing the same, suppliers gasping for air in their own chaos quicksand, governments regulating, and culture changing people’s tastes, and it’s all a swirling cipher.

In each case, the only hope is rigorously using a system of exploration and refining. In business, you can study all the charts and McKinsey PDFs you want, but until you actually experiment by putting a product out there, seeing what demand and pricing are, and how your competitors will respond, you know nothing. The same is true for software.

Each domain has tools for this exploration. I’m less familiar with business development, and only know the Jobs to Be Done tool. This tool studies customer behaviors to discover what products they actually will spend money on, to find the “job” they hire your company to solve, and then change the business to profit from that knowledge.

The discovery cycle in software follows a simple recipe: you reduce your release cycle down to a week and use a theory-driven design process to constantly explore and react to customer preferences. You’re looking to find the best way to implement a specific feature in the UI to maximize revenue and customer satisfaction. That is, to achieve whatever “business value” you’re after. It has many names and diagrams, but I call this process the “small batch cycle.”

The Home Depot illustrates its small batch cycle, Prat Vemana and Brooke Creef, 2018.

For example, Orange used this cycle when perfecting its customer billing app. Orange wanted to reduce traffic to call centers, lowering costs but also driving up customer satisfaction (who wants to call a call center?). By following a small batch cycle, the company found that its customers only wanted to see the last two months’ worth of bills and their current data usage. That drove 50% of the customer base to use the app, helping remove their reliance on actual call centers, driving down costs and improving customer satisfaction.

These business and software tools start with the actual customers, the people doing the buying, and use these people as the raw materials and lab to run experiments. The results of these experiments are used to validate, or more often invalidate, theories of what the business should be and do. That’s a whole other story, and the subject of my previous book, Monolithic Transformation.

We were going to talk about finance, though, weren’t we?

The Finance Bottleneck

Finance likes certainty – forecasts, plans, commits, and smooth lines. But if you’re working in the chaos of business and software development, you can’t commit to much. The only certainty is that you’ll know something valuable once you get out there and experiment. At first, all you’ll learn is that your idea was wrong. In this process, failure is as valuable as success. Knowing what doesn’t work, a failure, is the path to finding what does work, a success. You keep trying new things until you find success. To finish the absurd truth: failure creates success.

Software organizations can reliably deliver this type of learning each week. The same is true for business development. We’ve known this for decades, and many organizations have used it as their core differentiation engine.

But finance doesn’t work in these clever terms. “What the hell do you mean ‘failure creates success’? How do I put that in a spreadsheet?” we can hear the SVP of Finance saying, “Get the hell out of this conference room. You’re insane.”

Instead, when it comes to software development, finance focuses only on costs. These are easy to know: the costs of staff, the costs of their tools, and the costs of the data centers to run their software. Business development has similar easy to know costs: salary, tools, travel, etc.

When you’re developing new businesses and software, it’s impossible to know the most important number: revenue. Without that number, knowing if costs are good or bad is difficult. You can estimate revenue and, more likely, you can wish-timate it. You can declare that you’re going to have 10% of your total addressable market (TAM). You can just declare – ahem, assume – that you’re chasing a $9bn market opportunity. Over time, once you’ve discovered and developed your business, you can start to use models like consumer spending vs. GDP growth, or the effect of weather and political instability on the global reinsurance market. And, sure, that works as a static model so long as nothing ever changes in your industry.

For software development, things are even worse when it comes to revenue. No one really tells IT what the revenue targets are. When IT is asked to make budgets, they’re rarely involved in setting revenue targets, nor even given them. Of course, as laid out here, these targets in new businesses can’t be known with much precision. This pushes IT to just focus on costs. The problem here, as Mark Schwartz points out in all of his books, is that cost is meaningless if you don’t know the “value” you’re trying to achieve. You might try to do something “cheaply,” but without the context of revenue, you have no idea what “cheap” is. If the business ends up making $15m, is $1m cheap? If it ends up making $180m, is $5m cheap? Would it have been better to spend $10m if it meant $50m more in revenue?

 

IT is rarely involved in the strategic conversations that narrow down to a revenue target. Nor are they in meetings about the more useful, but abstract, notion of “business value.” So, IT is left with just one number to work with: cost. This means they focus on getting “a good buy” regardless of what’s being bought. Eventually, this just means cutting costs, building up a “debt” of work that should have been done but was “too expensive” at the time. This creates slow moving, or completely stalled out, IT.

A rental car company can’t introduce hourly rentals because the back office systems are a mess and take 12 months to modify – but, boy, you sure got a good buy! A reinsurance company can’t integrate daily weather reports into its analytics to reassess its risk profile and adjust its portfolio because the connection between simple weather APIs and rock-solid mainframe processing is slow – but, sister, we sure did get a good buy on those MIPS! A bank can’t be the first in its market to add Apple Pay support because the payments processing system takes a year to integrate with, not to mention the governance changes needed to work with a new clearinghouse, and then there’s fraud detection – but, hoss, we reduced IT costs by $5m last year – another great buy!

Worse than shooting yourself in the foot is having someone else shoot you in the foot. As one pharmacy executive put it, taking six months to release competitive features isn’t much use if Amazon can release them in two months. But, hey! Our software development processes cost a third less than the industry averages!

Business development is the same, just with different tools and people who wear wing-tips instead of toe-shoes. Hopefully you’re realizing that the distinction between business and software development is unhelpful – they’re the same thing.

The business case is wrong from the start

So, when finance tries to assign a revenue number, it will be wrong. When you’re innovating, you can’t know that number, and IT certainly isn’t going to know it. No one knows the business value that you’re going to create: you have to first discover it, and then figure out how to deliver it profitably.

As is well known, the problem here is the long cycle that finance follows: at least a year. At that scope, the prediction, discovery, and certainty cycle is sloppy. You learn only once a year, maybe with indicators each quarter of how it’s going. But, you don’t really adjust the finance numbers: they don’t get smarter, more accurate, as you learn more each week. It’s not like you can go get board approval each week for the new numbers. It takes two weeks just to get the colors and alignment of all those slides right. And all that pre-wiring – don’t even get me started!

In business and software development, each week when you release your software you get smarter. While we could tag shipping containers with RFID tags to track them more accurately, we learn that we can’t actually collect and use that data – instead, it’s more practical to have people just enter the tracking information at each port, which means the software needs to be really good. People don’t actually want to use those expensive to create and maintain infotainment screens in cars, they want to use their phones – cars are just really large iPhone accessories. When buying a dishwasher, customers actually want to come to your store to touch and feel them, but first they want to do all their research ahead of time, and then buy the dishwasher on an app in the store instead of talking with a clerk. 

These kinds of results seem obvious in hindsight, but business development people failed their way to those successes. And, as you can imagine, strategy and finance assumptions made 12 to 18 months ago that drove business cases often seem comical in hindsight.

A smaller cycle means you can fail faster, getting smarter each time. For finance, this means frequently adjusting the numbers instead of sticking to the annual estimates. Your numbers get better, more accurate over time. The goal is to make the numbers adjust to reality as you discover it, as you fail your way to success, getting a better idea of what customers want, what they’ll pay, and how you can defend against competition.

Small batch finance

Some companies are lucky enough to just ignore finance and business models. They burn venture capital funding as fuel to rocket towards stability and profitability. Uber is a big test of this model – will it become a viable business (profitable), or will it turn out that all that VC money was just subsidizing a bad business model? Amazon is a positive example here: over the past 20 years, cash-as-rocket-fuel launched it to boatloads of profit.

Most organizations prefer less expensive, less risky methods. In these organizations, what I see are programs that institutionalize these failure-driven cycles. They create new governance and financing models that enforce smaller business cycles, allowing business and software development to take on work in small batches. Allianz, for example, used 100-day cycles to discover and validate new businesses. Instead of one chance every 365 days to get it right, they have three, almost four. As each week goes by, they get smarter, there’s less waste and risk, and finance gets more accurate. If their business theory is validated, the new business is graduated from the lab and integrated back into the relevant line of business. The Home Depot, Thales, Allstate, and many others institutionalize similar practices.

Source: “The Shift to a New Digital Allianz Germany,” Dr. Daniel Poelchau, Allianz, CF Summit EU, Oct 2016.

Each of these cycles gives the business the chance to validate and invalidate assumptions. It gives finance more certainty, more precision, and, thus, fewer errors and less risk when it comes to the numbers. Finance might even be able to come up with a revenue number that’s real. That understanding makes funding business and software development less risky: you have ongoing health checks on the viability of the financial investment. You know to stop throwing good money after bad when you’ve invalidated your business idea. Or, you can change your assumptions and try again: maybe no one really wants to rent cars by the hour, maybe they want scooters, or maybe they just want a bus pass.

Business cases focused on growth, not costs

With a steady flow of business development learning, you can start making growth decisions. If you validate that you can track a team of nuclear power plant workers better with RFID badges, thus directing them to new jobs more quickly and reducing costly downtime, you can then be more confident that spending millions of dollars to do it for all plant workers will pay off. You see similar small experiments leading to massive investments in omnichannel programs at places like Dick’s Sporting Goods and The Home Depot.

Finance has to get involved in this fail-to-success cycle. Otherwise, business and software development will constantly be driven to be the cheapest provider. We saw how this generally works out with the outsourcing craze of my youth. Seeking to be the cheapest, or the synonymous phrase, the “most cost effective,” option ends up saving money but paralyzing present and future innovation.

Source: “Survey Analysis: IT Is Moving Quickly From Projects to Products,” Bill Swanton, Matthew Hotle, Deacon D.K Wan, Gartner, Oct

The problem isn’t that IT is too expensive, or can’t prove out a business case. As the Gartner study above shows, the problem is that most financing models we use to gate and rate business and software development are a poor fit. That needs to be fixed; finance needs to innovate. I’ve seen some techniques here and there, but nothing that’s widely accepted and used. And, certainly, when I hear about finance pushing back on IT business cases, it’s symptomatic of a disconnect between IT investment and corporate finance.

Businesses can certainly survive and even thrive. The small, failure-to-success learning cycles used by business and software developers work, are well known, and can be done by any organization that wills it. Those bottlenecks are broken. Finance is the next bottleneck to solve for.

I don’t really know how to fix it. Maybe you do! 

Crawl into the bottleneck

After finance, for another time, my old friends: corporate strategy. And if you peer past that blizzard of pre-wired slides and pivot tables, you can see just past the edges of the next bottleneck, that mysterious cabal called “The C-Suite.” Let’s start with strategy first.

Link: Digital, Strategy and Design

Strategy involves determining the company’s intent. Strategy is expressed in an understanding of the environment, an expression of ambition, decisions regarding the allocation of resources and plan of execution. Strategy provides a perspective on where and how the company will win from the inside out.
Design entails understanding and expressing customer intent. Expressed in terms of persona’s, needs, journey maps, touchpoints and prototypes. Design provides a perspective on how and why customers win from the outside in.

Source: Digital, Strategy and Design

Link: The Demise of Blockbuster, and Other Failure Fairy Tales

Strategy is hard; execution at the middle-management layer is harder.

What’s missing from the story is that PARC delivered on its mission. In fact, it saved Xerox from the fate of Kodak. While its copier business was disrupted by smaller Japanese competitors like Canon and Ricoh, one component of the Star system, the laser printer, replaced the revenues lost from its cash cow and Xerox continued to grow. It also earned millions from licensing technology it invented and, it should be noted, from its investment in Apple.
Original source: The Demise of Blockbuster, and Other Failure Fairy Tales

Link: High churn rate in the S&P 500

Innosight’s third study of companies’ ability to maintain leadership positions estimates that by 2018, 50% of the companies on the S&P 500 will drop off, replaced by competitors and new market entrants. Staying at the top of your market-heap is getting harder and harder.

This is often used to show how difficult the business world is now. It’s hard enough to get to the top, and hard to stay there.
Original source: High churn rate in the S&P 500


Link: Exploring the map – Wardley Maps

Wardley’s take on riding the diffusion or understanding curve:

The uncharted space is where no-one knows what is wanted which forces us to explore and experiment. Change is the norm here and any method that you use must enable and reduce the cost of change. In this part of the map, I tend to use an Agile approach that has been cut right back to the core principles, a very lightweight version of XP or SCRUM.

Of course, as a component evolves and we start to understand it more then our focus changes. Sometime during the stage of custom built we switch and start to think about creating a product. Whilst we may continue to use underlying techniques such as XP or SCRUM, our focus is now on reducing waste, improving measurements, learning and creating that first minimal viable product. We start to add artefacts to our methodology and the activity has more permanence about it as it undergoes this transition. We’ve stopped exploring the uncharted space and started concentrating on what we’ve found. Today, Lean tends to rule the waves here though back in 2005 we were struggling to find something appropriate. The component however will continue to evolve becoming more widespread and defined as it approaches the domain of industrialised volume operations. Our focus again switches but this time to mass production of good enough which means reducing deviation. At this point, Six Sigma along with formalised frameworks such as ITIL then start to rule the waves. Any significant system will have components at different stages of evolution. At any one moment in time, there is no single method that will fit all.
Original source: Exploring the map – Wardley Maps

A series of small projects, building momentum to scale

This post is an early draft of a chapter in my book,  Monolithic Transformation.

Not actually a picture of what’s described here, but it looks cool.

Every journey begins with a single step, they say. What they don’t tell you is that you need to pick your first step wisely. And there’s also step two, and three, and then all of the n + 1 steps. Picking your initial project is important because you’ll be learning the ropes of a new way of developing and running software, and hopefully, of running your business.

When it comes to scaling change, choosing your first project wisely is also important for internal marketing and momentum purposes. The smell of success is the best deodorant, so you want your initial project to be successful. And…if it’s not, you quietly sweep it under the rug so no one notices. Few things will ruin the introduction of a new way of operating into a large organization faster than an initial failure. Following Larman’s Law, the organization will do anything it can — consciously and unconsciously — to stop change. One sign of weakness early on, and your cloud journey will be threatened by status quo zombies.

In contrast, let’s look at how the series of small projects strategy played out in the US Air Force.

The USAF had been working for at least 5 years to modernize the 43 applications used in the Combined Air Operations Center (CAOC), going through several hundred million dollars. These applications managed the US’s and allies’ daily air missions throughout Iraq, Syria, Afghanistan, and nearby countries. No small task, and of no small import. The applications were in sore need of modernizing, and some weren’t even really applications: the tanker refueling scheduling team used a combination of Excel spreadsheets and a whiteboard to plan the daily jet refueling missions.

Realizing that their standard 5- to 12-year cycle to create new applications wasn’t going to cut it, the US Air Force decided to try something new: a truly agile, small batch approach. Within 120 days, a suitable version of the tanker refueling application was in production. The tanker team continued to release new features on a weekly, even daily basis. The project was considered a wild success: the time to make the tanker schedule was reduced from 8 hours to 2, the staffing from 8 airmen to 1, and the USAF ended up saving over $200,000 a day in fuel that no longer needed to be flown around as a backup for errors in the schedule.

Number of USAF CAOC transformed applications over time, starting with 0 and ending with an estimated 18. Sourced from several USAF presentations and write-ups.

The success of this initial project, called JIGSAW and delivered in April of 2017, proved that a new approach would work, and work well. This allowed the group driving change at the USAF to start another project, and then another one, eventually getting to 13 projects by May of 2018 (5 in production and 8 in development). The team estimates that by January of 2019 they’ll have 15 to 18 applications in production.

The team’s initial success, though covering just a small part of the overall 43 applications, gave them the momentum to start scaling change to the rest of the organization and more applications.

Project picking peccadilloes

Picking the right projects to start with is key. They should be material to the business, but low risk. They should be small enough that you can show success on the order of months, and technically feasible for cloud technologies. These shouldn’t be science projects or automation of low-value office activities — no augmented reality experiments or conference room schedulers (unless those are core to your business). On the other hand, you don’t want to do something too big, like migrating the .com site. Christopher Tretina recounts Comcast’s initial cloud-native ambitions this way:

We started out with a very grandiose vision… And it didn’t take us too long to realize we had bitten off a little more than we could chew. So around mid-year, last year, we pivoted and really tried to hone in and focus on what were just the main services we wanted to deploy that’ll get us the most benefit.

Your initial projects should also enable you to test out the entire software life cycle — all the way from conception to coding to deployment to running in production. Learning is a key goal of these initial projects and you’ll only do that by going through the full cycle.

The Home Depot’s Anthony McCulley describes the applications his company chose in the first six or so months of its cloud-native roll-out. “They were real apps. I would just say that they were just, sort of, scoped in such a way that if there was something wrong, it wouldn’t impact an entire business line.” In The Home Depot’s case, the applications were projects like managing (and charging for!) late tool rental returns and running the in-store custom paint desk.

A special case for initial projects is picking a microservice to deploy. Usually, such a service is a critical backend service for another application. A service that’s taken forever to actually deliver, or that has sat unchanged and ancient for years, is an impactful choice. This isn’t as perfect a use case as a full-on, human-facing project, but it will allow you to test out cloud-native principles and rack up a success to build momentum. The microservice could be something like a fraud detection or address canonicalization service. This is one approach to migrating legacy applications in reverse order, a strangler from within!
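
To make this concrete, here’s a minimal sketch of the “strangler from within” idea, assuming a hypothetical address canonicalization service. The service URL, function names, and fallback behavior are all invented for illustration, not pulled from any of the projects above:

```python
# Hypothetical sketch: route address canonicalization through the new
# microservice first, and fall back to the old in-process logic on any error.
# NEW_SERVICE_URL and canonicalize_legacy() are made-up names for illustration.
import os
import requests

NEW_SERVICE_URL = os.environ.get("NEW_SERVICE_URL", "http://addresses.internal/canonicalize")

def canonicalize_legacy(raw_address: str) -> str:
    # Stand-in for the ancient, embedded logic you're strangling out.
    return " ".join(raw_address.upper().split())

def canonicalize(raw_address: str) -> str:
    """Prefer the new service; keep the legacy path as the safety net."""
    try:
        resp = requests.post(NEW_SERVICE_URL, json={"address": raw_address}, timeout=2)
        resp.raise_for_status()
        return resp.json()["address"]
    except requests.RequestException:
        return canonicalize_legacy(raw_address)

if __name__ == "__main__":
    print(canonicalize("1600   pennsylvania   ave nw,  washington dc"))
```

The new service can win traffic incrementally this way, while the legacy path stays in place until you trust the replacement.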

Picking projects by portfolio pondering

There are several ways to select your initial projects. Many Pivotal customers use a method perfected over the past 25 years by Pivotal Labs called discovery. In the abstract, it follows the usual BCG matrix approach, flavored with some Eisenhower matrix. This method builds in intentional scrappiness to do a portfolio analysis with the limited time you can secure from all of the stakeholders. The goal is to get a ranked list of projects based on your organization’s priorities and the ease of the projects.

First, gather all of the relevant stakeholders. This should include a mix of people from the business and IT sides, as well as the actual team that will be doing the initial projects. A discovery session is typically led by a facilitator, preferably someone familiar with coaxing a room through this process.

The facilitator typically hands out stacks of sticky notes and markers, asking everyone to write down projects that they think are valuable. What “valuable” means will depend on each stakeholder. We’d hope that the more business-minded of them would have a list of corporate initiatives and goals in their heads (or a more formal one they brought to the meeting). One approach used in Lean methodology is to ask management this question: “If we could do one thing better, what would it be?” Start from there, maybe with some five-whys spelunking.

Once the stakeholders have written down projects on their sticky notes, the discovery process facilitator draws or tapes up a 2×2 matrix that looks like the following:

Participants then put up their sticky notes in the quadrants, forcing themselves not to weasel out and put the notes on the lines. Once everyone finishes, you get a good sense of the projects that all stakeholders think are important, sorted by the criteria I mentioned, primarily that they’re material to the business (important) and low risk (easy). If all of the notes cluster in one quadrant (usually the upper right, of course), the facilitator redraws the 2×2 lines within just that quadrant, forcing the decision of narrowing down to just the projects to do now. The process might repeat itself over several rounds. To enforce project ranking, you might also use techniques like dot voting, which will force the participants to really think about how they would prioritize the projects given limited resources.

At the end, you should have a list of projects, ranked by the consensus of the stakeholders in the room.
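
If it helps to see what comes out of a discovery session as data, here’s a rough sketch of that ranking: each sticky note gets importance and ease scores (the two axes of the 2×2) plus dot votes, and the candidate list is sorted on all three. The project names and numbers are invented for illustration:

```python
# Rough sketch: rank candidate projects from a discovery session by the 2x2
# axes (importance to the business, ease/low risk), breaking ties with dot votes.
# All names and scores below are made up for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    importance: int  # 1 (low) to 5 (material to the business)
    ease: int        # 1 (big, risky) to 5 (small, low risk)
    dot_votes: int   # votes from the stakeholders in the room

candidates = [
    Candidate("Late tool rental returns", importance=4, ease=4, dot_votes=7),
    Candidate("Conference room scheduler", importance=1, ease=5, dot_votes=1),
    Candidate("Migrate the .com site", importance=5, ease=1, dot_votes=3),
    Candidate("Fraud detection service", importance=4, ease=3, dot_votes=5),
]

# Upper-right quadrant first (important and easy), then the most dot votes.
ranked = sorted(candidates, key=lambda c: (c.importance + c.ease, c.dot_votes), reverse=True)

for rank, c in enumerate(ranked, start=1):
    print(f"{rank}. {c.name} (importance={c.importance}, ease={c.ease}, votes={c.dot_votes})")
```

However you score it, the point is the same: a forced, shared ranking rather than a pile of equally “critical” projects.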

Planning out the initial project

You may want to refine your list even more, but to get moving, pick the top project and start breaking down what to do next. How you proceed is highly dependent on how your product teams break down tasks into stories, iterations, and releases. More than likely, following the general idea of a small batch process, you’ll:

  1. Create an understanding of the user(s) and the challenges they’re trying to solve with your software through personas and approaches like scenarios or Jobs to be Done.
  2. Come up with several theories for how those problems could be solved.
  3. Distill the work to code and test your theories into stories.
  4. Add in more stories for non-functional requirements (like setting up build processes, CI/CD pipelines, testing automation, etc.).
  5. Arrange stories into iteration-sized chunks without planning too far ahead (lest you’re unable to adapt your work to the user experience and productivity findings from each iteration); a rough sketch of this last step follows below.
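
Here’s that sketch: one way, with hypothetical story names, to commit only the next couple of iterations and leave everything else unscheduled so each release’s findings can reshape the plan. The iteration size and planning horizon are just placeholders:

```python
# Rough sketch: commit only the next couple of iterations from a ranked backlog
# and leave the rest unscheduled, to be re-prioritized after each release.
# Story names, iteration size, and planning horizon are hypothetical.
stories = [
    "Persona: tanker scheduler views today's missions",
    "Theory: auto-suggest refueling pairings",
    "Non-functional: stand up the CI/CD pipeline",
    "Non-functional: automated acceptance tests",
    "Theory: flag schedule conflicts",
    "Theory: export the schedule to the legacy format",
]

ITERATION_SIZE = 2    # stories per (say) week-long iteration
PLANNING_HORIZON = 2  # only commit the next two iterations

planned = [
    stories[i:i + ITERATION_SIZE]
    for i in range(0, ITERATION_SIZE * PLANNING_HORIZON, ITERATION_SIZE)
]
icebox = stories[ITERATION_SIZE * PLANNING_HORIZON:]

for n, iteration in enumerate(planned, start=1):
    print(f"Iteration {n}: {iteration}")
print(f"Unscheduled, re-prioritized each week: {icebox}")
```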

Crafting your hockey stick

Starting small ensures steady learning and helps contain the risk of a fail-fast approach. But as you learn the cloud-native approach better and build up a series of successful projects, you should expect to ramp up quickly. This chart shows The Home Depot’s ramp up in the first year:

Chart shows the number of application instances, which is not 1:1 to applications. The end-point represents about 130 applications, composed of about 900 services. Source: “Cloud-Native at Home Depot, With Tony McCulley,” Pivotal Conversations #45.

The chart measures application instances in Pivotal Cloud Foundry, which don’t map one-to-one to applications. As of December 2016, The Home Depot had roughly 130 applications deployed in Pivotal Cloud Foundry. What’s important is the general shape and acceleration of the curve. By starting small, with real applications, The Home Depot learned the new process and, at the same time, delivered meaningful results that helped them scale their transformation.

See also another post of mine on this topic.

This post is an early draft of a chapter in my book,  Monolithic Transformation.

Link: Internal Product Management: the Good, the Bad and the Ugly — Black Swan Farming

For internal products, the “customer” is the enterprise, usually their strategy and execution, aka, “grind and stack.” Staff are, often sadly, meatware enablers in the value stream just like any technology. Optimize the VSM, make more paper.
Original source: Internal Product Management: the Good, the Bad and the Ugly — Black Swan Farming

Link: Do you need a corporate vision in government IT?

“In an organisation like a local authority this is especially tough as they are such disparate entities. Think about it, in what strange universe does it make sense for a single organisation to collect taxes, deliver social care, pick up bins and operate transport? None of these and many of the other services councils deliver have much to do with each other, apart from the coincidence of local delivery… Coming up with a single vision or operating model for such an organisation is pretty tricky therefore, which makes it less likely that transformation teams are going to get one. So, without a clear destination, what should they be doing?… I think the key is to think of councils – and other similar organisations – as groups of individual businesses, rather than a single cohesive organisation.”
Original source: Do you need a corporate vision in government IT?

Link: Agile Strategy: Short-Cycle Strategy Development and Execution

Kind of a good list of how to align short, agile cycles with longer, strategic planning. Key, I think, is understanding the stability and predictability needs of strategic planning, and explaining how short agile loops increase the confidence the corporation can have in its plans while giving it better intelligence about the market and what works.

“In practice, the lack of continuous feedback loops between operational units and C-suite leaders leads to the misalignment of resources. Lack of communication makes course adjustment nearly impossible.”
Original source: Agile Strategy: Short-Cycle Strategy Development and Execution

Link: Try to Resist Misinterpreting the Marshmallow Test

If you’re poor in options in the present, you’re less likely to wait for long term profits.

“Following this logic, multiple studies over the years have confirmed that people living in poverty or who experience chaotic futures tend to prefer the sure thing now over waiting for a larger reward that might never come.”
Original source: Try to Resist Misinterpreting the Marshmallow Test

Link: What Your Innovation Process Should Look Like

“Once a list of innovation ideas has been refined by curation, it needs to be prioritized. One of the quickest ways to sort innovation ideas is to use the McKinsey Three Horizons Model. Horizon 1 ideas provide continuous innovation to a company’s existing business model and core capabilities. Horizon 2 ideas extend a company’s existing business model and core capabilities to new customers, markets or targets. Horizon 3 is the creation of new capabilities to take advantage of or respond to disruptive opportunities or disruption. We’d add a new category, Horizon 0, which refers to graveyard ideas that are not viable or feasible.”
Original source: What Your Innovation Process Should Look Like

Link: U.S. CIO Suzette Kent: Don’t change IT modernization plan; ‘turbo boost’ it

When it comes to digital transformation, the goal of businesses is to drive profit, or more broadly, get more money. Finding government’s goal is a tad more tricky. Here’s a good, brief explanation:

‘The end goal of all this, Kent reminded the crowd, is to improve agencies’ ability to achieve their various missions, deliver “excellent” customer service and “be great stewards of taxpayer money.”’

Most people forget that last part: ensuring that the money is well spent.
Original source: U.S. CIO Suzette Kent: Don’t change IT modernization plan; ‘turbo boost’ it

Link: Dell Technologies’ “essential infrastructure” strategy

“We divested Dell services. We divested [VMware’s] vCloud Air, and really began to clean up the portfolio to drive forward Michael [Dell’s] vision that the world is going to need an essential infrastructure company. It might not be the sexiest play in IT, but absolutely at the end of the day, all this stuff has got to run on something. We’re proud to be that something.”
Original source: Dell Technologies’ “essential infrastructure” strategy

So what exactly should IBM do, and have done?

Now that IBM has ended its revenue-losing streak, we’re ready to stick a halo on it:

There is no doubt, though, that there are signs of progress at IBM, which would not comment on its financial picture before the release of the earning report. So much attention is focused on the company’s top line because revenue is the broadest measure of the headway IBM is making in a difficult transformation toward cloud computing, data handling and A.I. offerings for corporate customers.

The new businesses — “strategic imperatives,” IBM calls them — now account for 45 percent of the company’s revenue. And though it still has a ways to go, IBM has steadily built up those operations — and gained converts.

Over all those quarters, there hasn’t been that much good analysis of “what went wrong” at IBM, insomuch as I haven’t really read much about what IBM should have been doing. What did we expect from them? What should they be doing now and in the future? I don’t know the answers, but I’m damn curious.

“State your deal.”

Since the mid-2000s, all tech companies have been shit on for not getting to and dominating public cloud faster (there are exceptions, like Adobe, that get lost in the splurty noise of said shitting on). Huge changes have happened at companies like HP/HPE and Dell/EMC/VMware (where I work happily at Pivotal, thank you very much), and you can see Oracle quarterly dance-adapting to the new realities of enterprise IT spending.

For the past 8 or 10 years I’ve had a rocky handle on what it is that IBM sells, exactly, and in recent years their marketing around it has been fuzzy. Try to answer the question “so what is it, exactly, that IBM sells?” A good companion is, “why do customers choose IBM over other options?”

You can’t say “solutions” or “digital transformation.” (I’m aware of some black kettle over here, but I and any Pivotal person could tell you exactly the SKUs, tools, and consulting we sell, probably on an index card). I’m pretty sure some people in IBM know, but the press certainly doesn’t know how the fuck to answer that question (with some exception at The Register and from TPM, grand sage of all IBM coverage).

I’ve been a life-long follower of IBM: my dad worked at the Austin campus, it was a major focus at RedMonk, and, you know, just being in the enterprise tech industry keeps your face pointed toward Armonk frequently. I feel like I know the company pretty well and have enough of an unnatural fascination to put up with spelunking through them when I get the chance; IBMers seem pleasantly bewildered when the first thing I ask them to do is explain the current IBM hierarchy and brand structure.

But I couldn’t really explain what their deal is now. I mean, I get it: enterprise outsourcing, BPaaS (or did they sell all that off?), some enterprise private cloud and the leftover public cloud stuff, mainframe, a bunch of branded middleware (MQ, WebSphere, DB2, etc.) that they seem forbidden to mention by name, and “Watson.”

There are clear products & services (right?)

 

When I’ve been involved in competitive situations with IBM over the years, what they’re selling is very, very straightforward: outsourcing, software, and a sense of dependability. But the way they’re talked about in the press is all buzzwordy weirdness. I’m sure blockchain and AI could be a big deal, but their on-and-off success at doing something everyday and practical with them is weird.

Or, it could just be the difficulty of covering it, explaining it, productizing, and then marketing it. “Enterprise solutions” often amounts to individually customized strategy, programs, and implementations for companies (as it should, most of the time), so you can’t really wrap a clear-cut SKU around that. It’s probably equally hard to explain it to financial analysts.

So, what’s their deal?

Cumulative capex spend by Google, Amazon, and Microsoft since 2001.
How much is that public cloud in the window?

Anyhow, I don’t come here to whatnot IBM (genuinely, I’ve always liked the company and still hope they figure it out), but more out of actual curiosity to hear what they should have been doing and what they should do now. Here are some options:

  1. The first option is always “stay on target, stay on target,” which is to say we just need to be patient and they’ll actually become some sort of “the business of AI/ML, blockchain, and the same old, useful stuff of improving how companies run IT.” I mean, sure. In that case, going private is probably a good idea. The coda to this is always “things are actually fine, so shut the fuck up with your negativity. Don’t kill my vibe!” And if this is true, IBM just needs some new comms/PR strategies and programs.
  2. You could say they should have done public cloud better and (like all the other incumbent tech companies except Microsoft) just ate it. What people leave out of this argument is that they would have had to spend billions (and billions) of dollars to build that up over the past 10 years. Talk about a string of revenue-losing quarters.
  3. As I’m fiddling around with, they could just explain themselves better.
  4. They should have gotten into actual enterprise applications, SaaS. Done something like bought Salesforce, merged with SAP, who knows. IBM people hated it when you suggested this.
  5. The always ambiguous “management sucks.” Another dumb answer that has to be backed up not only with missed opportunities and failures (like public cloud), but also by proving that IBM could have been successful there in the first place (e.g., with public cloud, would Wall Street have put up with them losing billions for years to build up a cloud?).

I’m sure there are other options. Thinking through all this would be illustrative of how the technology industry works (and not the so-called tech industry, the real tech industry).

(Obviously, I’m in a weird position working at Pivotal who sells against IBM frequently. So, feel free to dismiss all this if you’re thinking that, now that you’ve read this swill, you need to go put on a new tin-foil hat because your current one is getting a tad ripe.)