The tiny video toolkit

People ask me how I do the tiny videos. I hope to do a screencast at some point, but in the meantime, here are some notes:

Video recording – I record them on my iPhone 11 Pro. I have Rode Wireless Go mics with a lav mic (these hook directly into the iPhone so the audio track is embedded in the video), a DJI Osmo Mobile gimbal (totally not needed), and a cheap tripod. I record in 4k (see below for converting it for web). When I do “in the studio” videos I use the iPhone as well, with Camo Studio and some Eve strip LED lighting. I have a black backdrop behind me. I use FiLMiC Pro on iOS to record – probably overkill, but if I ever get the remote thing working, it’ll be cool (I’d be able to control my main phone with another phone!). Their DoubleTake app is cool too – I used that for a couple Garbage Chairs of Amsterdam videos to bounce between me and the chair.

Audio – I don’t really do anything with audio now – it gets recorded into the track. It’d be nice to noise cancel, compress, level, and stuff, but, whatever. Once that gets built into LumaFusion, I’ll probably just flip those switches. Descript will level the audio, which is nice. I don’t know, man: the audio is good enough – I could stand to have more gain, but, again: whatever.

Editing – I edit in LumaFusion on iOS. I do most all editing on my iPhone, no shit. I’m often watching my daughter, feeding her, or otherwise somewhere besides a desk, so I’ve gotten really good at editing on my phone. Weird, but I like it. I’ve done it on my iPad and kind of like that less. Video editing software is very personal and all muscle memory: I make no claims that what works for me would work for you: just pick something and train your hands to do the things. I could go over my editing style as well, which, I like to think, is especially tuned for these short, quick videos.

Subtitles – I started using Descript to get subtitles. It’s good stuff. I’ve done some editing in Descript – it will delete out filler words (“uh,” “like,” etc.) and silence pretty well. I don’t like the video editing in Descript. Sometimes, if I need a Twitter length video (max 2 minutes 20 seconds), I’ll use Descript to edit it down a bit. Then I have separate subtitles for the “everything but Twitter version” and the Twitter one. Sounds like extra work, but it’s actually fine.

Thumbnails – I use Adobe Spark Post. It’s awesome and perfect for this job. I have an Adobe CC subscription, so I occasionally use stock.adobe.com to find zany things. I also have a storyblocks.com stock footage subscription that I occasionally use for silly interstitials (like clowns in my bozo bit video).

Posting – I do that all manually, per site. I did a rough analysis of where/how to post videos. My finding was that no one clicks on YouTube links: you need to publish the videos “natively” in each service: LinkedIn (best performing for my videos), Twitter, Facebook, Instagram, TikTok. The last three don’t really work well for my videos, so I’ve started ignoring them. To make this clear: you can’t just put a YouTube link in Twitter and LinkedIn for promotion: people won’t click on the link! So, I upload manually to YouTube, studio.twitter.com (a nice find I didn’t know about!), and LinkedIn. The thing with this is just knowing the various formats and subtitle expectations for each. Twitter videos need to be max 2 minutes 20 seconds, LinkedIn can be up to 10 minutes, YouTube doesn’t care. Twitter MP4s need to be 500 megs or less, so I encode those to 720p – the others will take 4k, so I upload full 4k to them.
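If you’re wondering what that conversion looks like, here’s the kind of ffmpeg one-liner I mean – a minimal sketch, assuming a 4k master called input.mov (the filenames and quality numbers are placeholders, tune to taste):

    # downscale to 720p (height 720, width auto but kept even), H.264 video, AAC audio
    ffmpeg -i input.mov -vf "scale=-2:720" \
      -c:v libx264 -crf 23 -preset medium \
      -c:a aac -b:a 128k twitter-720p.mp4

That keeps a couple-minute clip comfortably under the 500 meg cap; bump the -crf number up a bit if the file is still too fat.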

CTAs – you can put links into YouTube videos (“cards” and end frame things) – from what I can tell, no one clicks on those in my videos so I stopped doing them. You can also plop links into the YouTube description: I do this, I don’t know if they work. If you use studio.twitter.com, you can put one link that appears as an overlay to “watch more” (like, link to a full YouTube video) or “visit site” (like, go to a landing page to download my two free books). With LinkedIn, you just put the links in the post.

Promotion – dude, fuck if I know. Hashtags? I’m pretty sure the only way to get better promotion for my videos is to get people much more famous than me to point to them.

Interviewing – if I’m interviewing someone, I do it in Zoom and record the video. I figured out some settings where you can record the gallery view and the switching between active speaker view. The video quality is terrible, but I don’t ever want people to have to mess around.

Streaming – I use OBS with a few core scenes (one big head talking, sharing a screen with a head). The best tip I got on OBS was to tune down the resolution to 720p. While my Netherlands internet can take most anything, I don’t have the compute horsepower to do more. Besides, who’s going to stream 4k? When I stream, OBS records the video and then I take that video and edit it and post to YouTube. I haven’t done much streaming this year…I don’t like it.

Studio stuff – for a mic, I have an Apogee MiC 96k. It’s great! I think there’s a newer model now, probably fine. I currently use an Eve LED strip on the wall in front of me for lighting. I keep it on white at 25% brightness. I hook up my iPhone 11 Pro with Camo Studio so I can use it as a webcam. With the black backdrop I have, I found that messing around with the gamma kind of fades out the background enough (I have no idea what “gamma” is). I, of course, have those boom arm things for the iPhone/camera and mic. Mine are shit, but they work.

LIKE AND SUBSCRIBZ!

Mindfulness from blood-sport

Americans haven’t found a constructive way to discuss inequality and power distribution. We like quick, violent arguments and fights that focus on zero-sum outcomes: one group wins, another loses.

Things like GameStop throw all that inequality and weirdness on the table and so we have a chance to discuss it and act aghast.

In this instance, the aghastness is:

  1. Why can’t I have some of that money?
  2. Is this illogical system worth all the sacrifice and worship we give Finance?
  3. Is this the best thing to spend our time on?

Also, it’s a good story – entertainment with clear villains, but also ambiguous heroes.

There’s little, if anything, about race and gender in the discussion, morals even. This is a huge change from years of culture wars. This is just like watching gladiators, context free of any culture wars. It gives you that focus on one thing to the exclusion of all the stuff you’re anxious about. In a gentler system, this would be called “mindfulness”: focusing on “the now” to stop the voices in your head.


For the most part, gladiators were slaves (I think). In the case of GameStop, both sides volunteered.

What I’m saying here is that you can’t have sympathy for either side if you base giving sympathy on whether they had no choice or were somehow tricked into the negative situation.

I don’t think that means much, but it does highlight another American oddity: we don’t really think about downside as a real thing. We are raised to value the underdog, and much of our folklore is about the underdog winning. However, that doesn’t happen much. We can’t deal with the concept that people just lose, that you get defeated, that there’s no way to win. We get upset when that happens to us as individuals: it’s not fair!

I don’t know other cultures much, but my sense is that this idea that you deserve success is part of American-think.

This idea that you would be resigned to your fate is incredibly un-American. In fact, it’s perhaps the worst sin you can commit: idle hands and all that. Until very recently, American culture assumed that when people are poor and underprivileged, it’s just because they didn’t try hard enough and gave up. Bootstraps and all that.

We can’t conceptualize that most people don’t win most of the time. There must be cultures that are more aligned to this style of thinking.


That’s part of what makes mindfulness and “living in the now” so hard for me to…believe? If I’m not always struggling, planning, worried…bad things will happen. If I give up and accept things as they are, then things will go bad, I’ll lose all my money, security, etc., happiness.

This, of course, isn’t the point of mindfulness. It’s not giving up and letting yourself float around in a sea of shit. But, it’s hard to even think otherwise with this American notion that the only way to be happy is to fight, to work for it and suffer along the way.

Relearning the value of complaining

In my real life, I don’t complain. Even when I get cut.

But complaining is bi-directionally valuable.

What I get wrong is thinking that people who complain want immediate action, a fix. This makes complaint stressful, both for me to do and hear. I don’t want people doing things for me, to carry that debt on my books. And when someone complains at me, I get stressed out that I now have to work, and do the right thing. Either way, complaining just opens up another opportunity for struggle and failure.

Instead, complaining is mostly a form of blowing off steam, and even friendship.

I’ve read that angry and defeated apes will hit lower status apes to blow off steam.

In some way, complaining is that, without the hitting. You feel better, and people can also bond with you.

And with me, when I don’t complain, it metaphorically builds up until I get angry and, worse, resentful.

I don’t understand the mechanisms of it at all, and therefore find it hard to do and benefit from, but: sharing your bad feelings and experiences with someone – complaining – and being “heard” is critical for mental well-being.

Getting more eyeballs for your boring-ass enterprise tech videos – analysis and LIFE HACKS from four months of long and tiny b2b videos by channel and numbers

Looking at four months of numbers, here’s my theories of how to get more attention for my enterprise tech videos:

  1. Make short ones, each with one point – 1 minute to 10 minutes.
  2. Post the videos natively to Twitter, YouTube, or whatever channel – don’t rely on people clicking on YouTube.
  3. YouTube is, in general, the worst performer for eyeballs.
  4. LinkedIn is the best all around performer (but, I haven’t found detailed analytics, like seconds watched versus just auto-play).
  5. I haven’t done enough analysis of CTAs (“click here to go to my landing page and move further along the sales funnel to giving us CASH!”) but they’re near impossible to track – Twitter looks good, but I don’t have enough visibility into the end-to-end funnel.
  6. Thus, following 5: focus on ideas you want in people’s heads (brand, thought lording, reputation, etc.) over clicks/transactions.

Analysis

I do a lot of videos for my work – selling kubernetes and appdev stacks for enterprises, along with the services/consulting that go with it (hey! VMWARE TANZUUUUUU!). Over the past two months I shifted from longer form videos (30-50 minutes) to tiny ones.

Sort of counter-intuitively, tiny videos take just as much work as long ones – lots and lots of editing, making subtitles, making zany thumbnails, and all the usual uploading and posting around. Sometimes tiny videos take more work than just uploading a longer, uncut 45 minutes.

The results are dramatic though: the shorter videos I do get a lot more views and “engagement” than the longer ones. This fits common SEO, social/influencer hustler folklore: no one likes long form content. After over 15 years of podcasting and presenting and blogging, I know that folklore isn’t, you know, universally true.

The Charts

The following tables are incomplete; they focus on the tiny videos. See the taller table that follows for the numbers for the longer videos.

Table 01 shows the Dec 2020 and Jan 2021 tiny videos I did. I’ve been very time constrained of late (we have to – er, get to – home school a seven and a ten year old, and also need to watch a 10 month old), so I’ve shifted to doing these small videos in the time I can find, often when I’m taking my baby daughter on a walk and she finally falls asleep:



Table 01: Tanzu Talk tiny videos (and some long), Dec 2020 to Jan 2021.

Table 02 shows the tiny videos I did back in the Spring (2020). I was similarly time-constrained – technically (and, mostly – hey, my therapist has helped me recognize that I’m a workaholic, but, like, the content I produce for work is my passion – my work isn’t just yelling at supply chain people and arts and crafting PowerPoint slides and pivot-tables…OK…I’ll take a breath…) I was on paternity leave, so I had to snatch the times I could. I uploaded these videos to my personal YouTube site (the Dec/Jan ones are on the VMware Tanzu channel), so their YouTube views are shit:


Table 02: cote.pizza tiny videos, Spring 2020.

I call these “cote.pizza” videos because that’s the URL for a CTA I had.

Then, for comparison, Table 03 shows the views for all the Tanzu Talk videos – most of them are long form and were only hustled with YouTube links in Twitter, LinkedIn, etc.:



Table 03: All Tanzu Talk videos, tiny and long, 2020

Findings

There are some key findings:

  1. The short videos get a lot more traffic.
  2. Posting the videos natively to Twitter and LinkedIn gets a tremendous amount more traffic than posting links to the YouTube videos. You can see this in Table 01: the videos in December were promoted with links to YouTube, but the ones in January were posted natively to Twitter and LinkedIn. (Some videos were previews of longer ones, like the DevSecOps for Fed one).
  3. I haven’t done a video-by-video analysis, but very few people (if any) will click on a link to YouTube that I post in Twitter or LinkedIn. I don’t know if they click on CTAs either. (There’s some views from Instagram, Facebook, and even TikTok too, but I’m leaving those off from this write-up – they’re not high or consistent enough to consider – you’re better off posting Nutella videos to those channels.)
  4. I have no proof of this, but I think adding in subtitles helps. Instagram will auto-generate subtitles for you, and you can rely on YouTube’s auto-generated .srt files to upload to LinkedIn and Twitter, but I’d use something like Descript to make a “perfect” .srt file (there’s a tiny example of the format right after this list).
  5. My Minecraft Yeller Thumbnails are the raddest shit you will ever see in b2b marketing. COME AT ME. (I discovered Adobe Spark Post which is fucking awesome for this shit.)
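If you’ve never looked inside one, an .srt file is just numbered cues with start/end timestamps and the text to show – the lines below are made up, but the format itself is the real thing:

    1
    00:00:00,000 --> 00:00:02,500
    Hey, it's Coté. Tiny video time.

    2
    00:00:02,500 --> 00:00:06,000
    Today: why no one clicks your YouTube links.

That same file uploads to YouTube, LinkedIn, and Twitter’s media studio.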

Concerns/open questions

The major component I’m missing is following what happens when people click a CTA link. I encoded most all links I use for attribution to me, but I, of course, didn’t tell any of our web-funnel acquisition people this, so I don’t know how to get those numbers. This would be extremely valuable info.

On the other hand, the price range of software and services (six to seven figure deals) I help sell is so high that it’s probably worth it for just one click, or just someone having seen and been influenced by my video even though they clicked nothing trackable.

Also, I’m concerned about echo chambers. Many of the “engagements” (likes and stuff) I get are from co-workers, which I value tremendously! There is, though, a sort of knowable set of “customers” who also engage. I need more insight into how far out of the echo chamber I’m reaching.

Let me state this clearly: I have no idea if all of this is helping the business. BUT IT SURE IS FUN TO DO!

All of that aside, let me tell you a (depressing?) secret: the only thing people care about is raw views. There may be some quibbling about completion rates, CTA following, etc.: but at the end, people will just remember the raw numbers. (Still, I’d like to have more visibility into the money I’m helping bring in and retain, but, hey, as I like to say, “I get paid either way.”)

Next shit to try

  1. “Every day someone’s born who never watched The Flintstones” – Looking at the numbers, not that many people have seen my longer form videos. Very few have watched to the end. If I slice up and re-serve some of those as tiny videos, it won’t be feed-them-leftovers reposting, it’ll actually be new for many people. I think this is something that us insatiable, completist readers don’t get and why we find re-posting/ICYMI’ing so vile.
  2. People love stuff about auditors/governance and security…but, really, you can’t predict what people like.
  3. Post in LinkedIn – you’ve got ten minutes, that’s a lot more than Twitter’s 2m20s.
  4. In Twitter, you can share access/use for the videos with other people. I need to share this with the people who run @VMwareTanzu and other accounts and see what success they get with posting those videos natively. Based purely on gut feel after looking at some of the videos, this will drive a lot more eyeballs.

Oh and… HEYYYY, GUYYZZZ! Three, two one! LIKE AND SUBSCRIBE BELOW!!

Appendix

Some additional notes as I think of them:

  1. Many of the longer form videos were streamed in Twitch at first. For my stuff, there’s around, I don’t know, 30 maybe 50 or 60 views after streaming in Twitch. During, it’s like zero to five, but usually, like one or two. I don’t really consider Twitch to be, uh, the “right fit” for my content. I think my co-workers who actually code (that’s like watching someone game, right?) have much more success.

How Kubernetes adds agility in challenging times

New article of mine:

The IT outcomes of Kubernetes are clear: 95% of businesses report clear benefits from adopting Kubernetes, with more efficient resource utilization and shorter software development cycles amongst the top benefits cited. The benefits don’t stop with the IT team, though. In an era where IT mostly determines competition and growth, the more agile the technology at the heart of a business, the greater the agility of the business overall.

So, the business case for Kubernetes is clear. To those of us in IT at least.

It makes sense that empowering development teams to do more in less time has clear benefits for businesses on paper. But given the inherent complexity of Kubernetes, the way in which these benefits actually manifest may not be so clear for those outside the IT department, particularly in the early stages of implementing the technology.

Here, we look at why the tangible outcomes Kubernetes can provide across a business are worth overcoming initial challenges it may present, and what it means in the context of global events many organisations have been faced with in 2020 so far.

Read the rest!

Straddling the firewall: cloud from 2010 to 2020 (& what to do next)

I gave the ten year anniversary talk at the CloudAustin meetup. Ten years ago, I gave the first talk. Here’s a bit of the essay I wrote out as I was working on the presentation.

I want to go over the last ten years of cloud, since I gave the first talk at this meetup, back in August 2010. At the time, I was wrapping up my stint at RedMonk, though I didn’t know that until a year later. I went to work at Dell in corporate strategy, helping build the software and cloud strategies and businesses. I went back to being an analyst at 451 Research where I ran the software infrastructure team, and then thanks to my friend, Andrew Shafer, ended up where I am now, at Pivotal, now VMware. I also have three kids. And a dog. And live in Amsterdam.

Staying grounded.

Beware, I still have to pay my bills

Pic: Kadumago, Nov 2019.

You should know that I have biases. I have experiences, ideas, “facts” even that come from that bias. I work at VMware, “VMware Tanzu” to be specific which is VMware’s focus on developers and the enterprise software they write. I’ve always worked on that kind of thing, and often in the interests of “on-premises” IT.

Indeed, at the end, I’m going to tell you that developers are what’s important – in fact, that Tanzu stuff is well positioned for the future I think should exist.

So, you know…whatever. As I used to say at RedMonk: “disclaimer.”

Computers from an outlet

Back in 2013, when I was at Dell, I went on an analyst tour in New England. Matt Baker and I visited IDC, Gartner, Forrester, and some smaller shops in hotel lobbies and fancy restaurants. These kinds of analyst meetings are a bit dog and pony, but they give you a survey of what’s going on and a chance to tell analysts what you think reality looks like.

Anyhow, we walked into the Forrester office, a building that was all brand new and optimistic. Some high-level person there was way into guitars and, I don’t know, 70’s rock. Along with Forrester branded ballpoint pens, they gave us The Allman Brothers’ At Fillmore East CDs as thank you gifts. For real. (Seriously.) Conference room names were all rock-and-roll. They had an electric guitar in the lobby. Forrester: not the golden buttoned boating jackets of Gartner, or the short-sleeved button-up white shirts of Yankee-frugal IDC.

Being at Dell, we were interested in knowing one thing: when, and even if, and at what rate, will the on-premises market succumb to public cloud. That’s all any existing vendor wanted to know in the past decade. That’s what that decade was all about: exploring the theory that public cloud would take over on-premises IT.

The Forrester people obliged. The room was full of about 8 or 10 analysts, and the Forrester sales rep. When a big account like Dell comes to call, you bring a big posse. Almost before Matt could finish saying the word “cloud,” a great debate emerged between two sides of analysts. There were raised voices, gesticulating, and listing back and forth in $800 springy office chairs. All while Matt and I sort of just sat there listening like a call-center operator who uses long waits for people to pick up the phone as a moment of silence and calm.

All IT will be like this outlet here, just a utility, one analyst kept insisting. No, it’ll be more balanced, both on-premises and in cloud over time, another said. I’d never encountered an analyst so adamant about public cloud, esp. in 2013. It seemed like they were about to spring across the table and throttle their analyst opponent. I’m sure time ran out before anyone had a chance to whiteboard a good argument. Matt and I, entertained, exchanged cards, shook hands, and left with our new Allman Brothers album.

Public cloud has been slower to gobble up on-premises IT than fanatics thought it would be, even in the late 2000’s. The on-premises vendors deployed an endless amount of FUD-chaff against moving to public cloud. That fear of the new and unknown slowed things down, to be sure.

But I think more practical matters are what keeps on-premises IT alive, even useful. We used to talk about “data gravity” as the thing holding migrations to public cloud back. All that existing data can’t be easily moved to the cloud, and new apps will just be throwing off tons of new data, so that’ll just pull you down more. (I always thought, you know, that you could just go to Costco and buy a few hard-drives, run an xcopy command overnight, and then FedEx the drives to The Cloud. I’m sure there’s some CAP theorem problem there – maybe an Oracle EULA violation?) But as I’ll get to, I think there’s just tech debt gravity – probably 5 to 10 million applications[1] that run the world that just can’t crawl out of on-premises datacenters. These apps are just hard to migrate, and sometimes they work well enough at a good enough cost that there’s no business case to move them.

Public cloud grows and grows

But, back to public cloud. If we look at it by revenue, it’s clear that cloud has been successful and is used a lot.

[Being respectful of analyst’s work, the reader is asked to open up Gartner’s Nov 13th press release, entitled “Gartner Forecasts Worldwide Public Cloud Revenue to Grow 17% in 2020” and look at the table there. You can find charts as well. Be sure to add up the SaaS-y categories into just one.]

As with all such charts, this is more a fanciful illustration of our instincts. Charts are a way to turn hunches into numbers, transform the qualitative into the quantitative. You can use an even older forecast (also, see the chart…and be sure to take out “advertising,” obviously) to go all the way back to 2010. As such, don’t get hung up on the exact numbers here.

The direction and general sizing in these lines is what matters. SaaS is estimated at $176 billion in 2020, PaaS $39.7 billion, IaaS at $50b.

So. Lots of revenue there. There are a few things to draw from this chart – again, hunches that we can gussy up into charts:

  1. The categorization of SaaS, PaaS, and IaaS stuck. This was finalized in some NIST work, and if you can believe it, categorizing things like this drove a lot of discussion. I participated in at least three of those exercises, at RedMonk, Dell, and then at 451 Research. Probably more! There was a lot of talk about “bursting” and “hybrid cloud.” Traditional IT, versus private cloud. Is it “on-prem” or “on-premises” – and what does it say of your moral character if you incorrectly use the first? Whatever. We returned to the simplicity of apps, devs, and ops.
  2. SaaS is sort of “forgotten” now as a cloud thing. It looms so large that we can’t even see it when we narrow our focus on the other two parts of cloud. Us in the dev and ops crowds just think of SaaS as normal now, not part of this wild, new category of “cloud.” Every day, maybe even every hour, we all use SaaS – the same is true for enterprises. Salesforce’s revenue went from $1.3bn in 2010 to $17.1bn in 2020. We now debate Office 365 vs. GMail, never Exchange. SaaS is the huge winner. Though it’s sort of not in my interests, when people talk about cloud and digital transformation, I tell them that the most useful thing they should probably focus on is just going all SaaS. You see this with all the remote working stuff now – all those firewalls and SharePoints on intranets are annoying, and supporting your entire company working from home requires the performance and scaling of a big time SaaS company. SaaS is what’s normal, so we don’t even think about it anymore.
  3. Below that is PaaS. The strange, much maligned layer over the decade. We’re always optimistic about PaaS, as you can see in the newer forecast. It doesn’t seem to deliver on that optimism, at least in the mainstream. I certainly hope it will, and think it should. We’ll see this time. I’ve lived through enough phases of PaaS optimism and re-invention (well, all of them, I suppose) that I’m cautious. To be cynical to be optimistic, as I like to quip: it’s only stupid until it works. I’ll return to the glorious future of PaaS in a bit. But first…
  4. Below that, we have IaaS – what us tech people think of mostly as cloud. This is just a big pool of hardware and networking. Maybe some systems management and security services if you consider that “infrastructure.” I don’t know. Pretty boring. However, if you’re not buying PaaS, this is the core of what you’re buying: just a new datacenter, managed by someone else. “Just” is insulting. That “managed by someone else” is everything for public cloud. You take this raw infrastructure, and you put your stuff on it. Forklift your applications, they say, deploy new applications. This is your “datacenter” in the cloud and the way most people think about “cloud” nowadays: long rows of blinking lights in dark warehouses sometimes with rainbow colored pipes.

IT can’t matter fast enough

Let’s get back to the central question of the past decade: when will public cloud eclipse on-premises IT?

First, let’s set aside the nuance of “private cloud” versus “traditional IT” and just think of it all as “on-premises.” This matters a lot because the nature of the vendors and the nature of the work that IT does changes if you build and manage IT on your own, inside the firewall. The technology matters, but the responsibility and costs for running and maintaining it year after year, decade after decade turns into the biggest, er, headache. It’s that debate from the Forrester people: when will IT become that wall outlet that Nicholas Carr predicted long ago? When will IT become a fungible resource that people can shed when all those blinking lights start holding back their business ambitions?

What we were hunting for in the past ten years was a sudden switch over like this, the complete domination of mobile over PCs:

The Dediu Cliff.  Source: “The rise and fall of personal computing,” Jan 2012, Horace Dediu.

This is one of the most brilliant and useful strategy charts you’ll ever see. It shows how you need to look at technology changes, market share changes. Markets are changed by a new entrant that’s solving the same problems for customers, the jobs to be done, but in a different way that’s ignored by the incumbents.[2] This is sort of big “D,” Disruption theory, but more inclusive. Apple isn’t an ankle biter slowly scaling up the legs of Microsoft, they’re a seasoned, monied incumbent, just “re-defining” the market.[3]

Anyhow, what we want is a chart like this for cloud so that we can find when on-premises crests and we need to start focusing on public cloud. Rather, a few years before that so we have plenty of time to invest and shift.  When I did strategy at Dell, Seth Feder kept up this chart for us. He did great work – he was always talking about how good the “R squared” was – I still don’t know what that means, but he seemed happy with it. I wish I could share his charts, but they’re lost to Dell NDAs and shredders. Thankfully, IDC has a good enough proxy, hardware spend on both sides of the firewall:

[The reader is asked to open IDC’s April 2nd, 2020 press release titled “Cloud IT Infrastructure Spending Grew 12.4% in the Fourth Quarter, Bringing Total 2019 Growth into Positive Territory, According to IDC” and contemplate the third chart therein.]

You can’t use this as a perfect guide – it’s just hardware, and, really can you tell when hardware is used for “private cloud” versus “traditional IT”? And, beyond IaaS, we’d like to see this for the other two aaS’s: applications and developers. If only we had Seth still toiling away on his charts for the past decade.

But, once again, a chart illustrates our hunch: cloud is Hemingway’s bankruptcy thing (gradually, then suddenly), slowly racing towards a Dediu Cliff. We still don’t know when on-premises compute will suddenly drop, but we should expect it…any year now…

…or decade…

…I guess.  

¯\_(ツ)_/¯

Pre-cliff jumpers

Competition in cloud was fierce. Again, I’m leaving out SaaS – not my area, don’t have time or data. But let’s look at infrastructure, IaaS.

[The reader is asked to open up the 2010 IaaS MQ and the 2020 IaaS MQ.]

It’s worth putting these charts side-by-side. They’re Gartner Magic Quadrants, of course. As with all charts, you can hopefully predict me saying, they illustrate our intuitions. What’s magical (yes!) about the MQ’s is that they show a mixture of sentiment, understanding, and actual feature set. You can see that in play here as we figured out what cloud was.

Infamously, the first IaaS MQ in 2010 has Amazon in the lower right, and a bunch of “enterprise grade” IaaS people up and to the right. Most of us snickered at and were confused by this. But, that 2010 list and ranking reflected how people, esp. big corporate buyers were thinking about what they wanted cloud to be. They wanted it to be like what they knew and were certain they needed, but run by someone else with that capex-to-opex pixie dust people used to obsess so much about.

Over the next ten years, everyone figured out what public cloud actually was: something different, more or less. Cloud was untethered from those “enterprise grade” expectations. In fact, most companies don’t want “enterprise grade” anymore, it’s not good enough. They want “cloud grade.” Everyone wants to “run like Google,” do DevOps and SRE. Enterprise buyers are no longer focused on continuing with what they have: they want something different.

All those missing dots are the vendors who lost out. There were many, many reasons. The most common initial reason was, well, “server hugging.” Just a biased belief in on-premises IT because that’s where all the vendor’s money had always come from. People don’t change much after their initial few decades, enterprises even less.[4]

The most interesting sidebars here are Microsoft and Rackspace. Microsoft shed its Windows focus and embraced the Linux and open source stack of cloud. Rackspace tried a pre-kubernetes Kubernetes you kids may not remember, OpenStack. I’m hoping one day there’s a real raucous oral account of OpenStack. There’s an amazing history in there that we in the industry could learn from.

But, the real reason for this winnowing is money, pure and simple.

Disruptors need not apply

You have to be large to be a public cloud. You have to spend billions of dollars, every year, for a long time. Charles Fitzgerald has illustrated this over the years:

It costs a lot to save you so much. Source: “Follow the CAPEX: Cloud Table Stakes 2018 Edition,” Charles Fitzgerald, February 2019.

Most of the orange dots from 2010 just didn’t want to do this: spend billions of dollars. I talked with many of them over the past ten years. They just couldn’t wrap their head around it. We didn’t even know it cost that much, really. Instead, those orange dots fell back on what they knew, trying to differentiate with those “enterprise grade” features. Even if they wanted to and did try, they didn’t have the billions in cash that Amazon, Microsoft, and Google had. Disruption is nice, but an endless cash-gun is better.

I mean, for example, what was Dotcloud going to do in the face of this? IBM tried several times, had all sorts of stuff in its portfolio, even acquiring its way in with SoftLayer – but I think they got distracted by “Watson” and “Smart Cities.” Was Rackspace ever going to have access to that much money? (No. And in fact, they went private for four years to re-work themselves back into a managed service provider, but all “multi-cloud” now.)

There are three public clouds. The rest of us just sell software into that. That’s, more or less, exactly what all the incumbents feared as they kept stacking servers in enterprise datacenters: being at the whim of a handful of cloud providers, just selling adornments.

$80bn in adornments

Source: “Investing City” on Seeking Alpha, originally from Pivotal IPO investor presentation.

While the window for grabbing public cloud profits might have closed, there’s still what you do with all that IaaS, how you migrate your decades of IT to and fro, and what you do with all the “left overs.” There’s plenty of mainframe-like, Micro Focus-y and Computer Associates type of revenue to eke out of on-premises, forever.

Let’s look at “developers,” though. That word – developers – means a lot of things to people. What I mean, here, is people writing all those applications that organizations (mostly large ones) write and run on their own. When I talk about “developers,” I more mean whatever people are in charge of writing and running an enterprise’s (to use that word very purposefully) custom-written software.

Back when Pivotal filed to IPO in March of 2018, we estimated that the market for all of that would be $80.4bn, across PaaS and on-premises.

This brings us back to PaaS. No one says “PaaS” anymore, and the phrase is a bit too narrow. I want to suggest, sort of, that we stop obsessing over that narrow definition, and instead focus on enterprise developers and in-house software. That’s the stuff that will be installed on, running on, and taking advantage of cloud over the next ten years. With that wider scope, an $80bn market doesn’t seem too far fetched.

And it’s real: organizations desperately want to get good at software. They’ve said this for many years, first fearing robot dogs – Google, Amazon, AirBnB, Tesla…whatever. After years of robot dog FUD, they’ve gotten wiser. Sure, they need to be competitive, but really modernizing their software is just table stakes now to stay alive and grow.

What’s exciting is that organizations actually believe this now and understand software enough to understand that they need to be good at it.

Source: “Improving Customer Experience And Revenue Starts With The App Portfolio,” Forrester Consulting, commissioned by VMware, March, 2020.

We in IT might finally get what we want sometime soon: people actually asking us to help them. Maybe even valuing what we can deliver when we do software well.

Beyond the blinking cursor

Obsessing over that Dediu Cliff for cloud is important, but no matter when it happens, we’ll still have to actually do something with all those servers, in the public cloud or our own datacenters. We’ve gotten really good at building blinking cursor boxes over the past ten years. Blinking cursor boxes?

IT people build things: developers write code, operations people put together systems. They also have shaky budgets – most organizations are not eager to spend money on IT. This often means IT people are willingly forced to tinker with building their own software and systems rather than purchasing and reusing others.[5] They love building blinking cursor boxes instead of focusing on moving pixels on the screen.

A blinking cursor box is yet another iteration of the basic infrastructure needed to run applications. Applications are, of course, the software that actually moves pixels around the screen: the apps people use to order groceries, approve loan applications, and other thrilling adventures in computing. Applications are the things that actually, like, are useful…that we should focus on. But, instead, we’re hypnotized by that pulsing line on a blank screen.

Us vendors don’t help the situation. We compete and sell blinking cursor boxes! So we each try to make one, constantly. Often a team of people makes one blinking cursor box, gets upset at the company they work for, grabs a bunch of VC money, and then goes and makes another blinking cursor box. So many blinking cursors. The public clouds are, of course, big blinking cursor boxes. There was a strange time when Eucalyptus tried to fight this trend and do a sort of Turing Test on Amazon’s blinking cursor. That didn’t go well for the smaller company. Despite that, slowly, slowly us vendors have gotten closer to a blinking cursor layer of abstraction. We’re kind of controlling our predilections here. It seems to be going well.

Month 13: now, the real work can begin.

Over the past ten years, we’ve seen many blinking cursor boxes come and go: OpenStack, Docker, “The Datacenter of the Future,” and now (hopefully not going) Kubernetes. (There was also still virtualization, Windows, Linux, and mainframes. Probably some AS/400s if we dig around deep enough.) Each of these new blinking cursor boxes had fine intentions to improve on the previous blinking cursor boxes. These boxes are also open source, meaning that theoretically IT people in organizations could download the code, build and configure their own blinking cursor boxes, all without having to pay vendors. This sometimes works, but more often than not what I hear of are 12 month or more projects to stand up the blinking cursor box du jour…that don’t even manage to get the cursor blinking. The screen is just blank.

In larger organizations, there are usually multiple blinking cursor box programs in place, doubling, even tripling that time and money burn. These efforts often fail, costing both time and millions in staff compensation. An almost worse effect is when one or more of the efforts succeeds, kicking off another year of in-fighting between competing sub-organizations about which blinking cursor box should be the new corporate standard. People seem to accept such large-scale, absurdly wasteful corporate hijinks – they probably have more important things to focus on like global plagues, supply chain issues, or, like, the color palette of their new logo.

As an industry, we have to get over this desire to build new blinking cursor boxes every five or so years, both at vendors and enterprises. At the very least we should collaborate more: that seems to be the case with Kubernetes, finally.

Even in a world where vendors finally standardize on a blinking cursor box, the much more harmful problem is enterprises building and running their own blinking cursor boxes. Think, again, of how many large organizations there are in the world, F500, G2,000 – whatever index you want to use. And think of all the time and effort put in for a year to get a blinking cursor box (now, probably Kubernetes) installed from scratch. Then think of the need in three months to update to a new version; six months at the longest, or you’ll get trapped like people did in the OpenStack years and, next thing you know, you’re running a blinking cursor box from the Victorian era. Then think of the effort to add in new databases and developer frameworks (then new versions of those!), security, and integrations to other services. And so on. It’s a lot of work, duplicated at least 2,000 times, more when you include those organizations that allow themselves to build competing blinking cursor boxes.

Obviously, working for a vendor that sells a blinking cursor box, I’m biased. At the very least, consider the costs over five to ten years of running your own cloud, essentially. Put in the opportunity cost as well: is that time and money you could instead be spending to do something more useful, like moving pixels around on your customer’s screen?

Once you free up resources from building another blinking cursor box, you can (finally) start focusing on modernizing how you do software. Next: one good place to start.

Best practices, do them

As I’ve looked into how organizations manage to improve how they do software over the years I’ve noticed something. It’ll sound uselessly simple when you read it. Organizations that are doing well follow best practices and use good tools. Organizations that are struggling don’t. This is probably why we call them best practices.

Survey after survey of agile development usage will show this, every year. Simple practices and tools like unit testing are widely followed, but adherence to other practices quickly falls off. How many times have you heard people mention a “sit down stand-up meeting,” or say something like “well, we don’t follow all the agile practices we learned in that five day course – we adapted them to fit us”?

I like to use CI/CD usage as a proxy for how closely people are following best practices.

One of the most important tools people have been struggling to use is continuous integration and continuous delivery (CI/CD[6]). The idea that you can automate the drudgery of building and testing software, continuous integration, is obviously good. The ability to deploy software to production every week, if not daily, is vital for staying competitive with new features and getting fast feedback from users on your software’s usefulness.
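To make “doing CI” concrete: every CI server (Jenkins, GitHub Actions, Concourse, pick your poison) more or less boils down to running something like this on every commit – a minimal sketch with made-up script names and a placeholder repo URL, not anyone’s actual pipeline:

    set -e                   # stop at the first failure
    git clone https://example.com/your-app.git && cd your-app
    ./build.sh               # compile and package the app
    ./run-unit-tests.sh      # fail the whole build if any test fails
    ./publish-artifact.sh    # hand the tested build off for delivery

Continuous delivery is the same idea pushed further: that artifact at the end actually goes out to production every time, instead of sitting around for a quarterly release.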

Very early on, if you’re not doing CI/CD your strategy to improve how you’re doing software – to progress with your cloud strategy, even – is probably going to halt. It’s important! Despite this, for the past ten plus years, usage has been poor:

Source: State of Agile Surveys, 3rd through 14th, VersionOne/CollabNet/digital.ai. CI/CD not tracked in 5th/2009. Over the years, definitions change, “delivery” and “deployment” are added; but, these numbers are close enough to other surveys to be useful. See more CI/CD surveys: Forrester survey (2019), DZone CD reports (2014, 2015, 2016, 2017, 2019).

Automating builds and tests with continuous integration is clearly easier (or seen as more valuable?) than continuous delivery. And there’s been an encouraging rise in CD use over the past ten years.

Still, these numbers are not good. Again, think of those thousands of large organizations across the world, and then that half of them are not doing CI, and then that 60% of them are not doing CD. This seems ludicrous. Or, you know, great opportunity for improvement…and all that.

Take a look at what you’re doing – or not doing! – in your organization. Then spend time to make sure you’re following best practices if you want to perform well.

Stagnant apps in new clouds

Over the past ten years, many discussions about cloud centered around technologies and private or public cloud. Was the cloud enterprise grade enough? Would it cost less to run on premises? What about all my data? Something-something-compliance. In recent years, I’ve heard less and less of those conversations. What people worry about now is what they already have: thousands and thousands of existing applications that they want to move to cloud stacks.

To start running their business with software, and start innovating how they do their businesses, they need to get much better at software: be like the “tech companies.” However, most organizations (“most all,” even!) are held back by their existing portfolio of software: they need to modernize those thousands and thousands of existing applications.

Modernizing software isn’t particularly attractive or adventurous. You’re not building something new, driving new business, or always working with new technologies. And when you’re done, it can seem like you’ve ended up with exactly the same thing from the outside. However, management is quickly realizing that maintaining the agility of their existing application portfolio is key if they want future business agility.

In one survey, 76% of senior IT leaders said they were too invested in legacy applications to change. This is an embarrassing situation to be in: no one sets off to be trapped by the successes of yesterday. And, indeed, many of these applications and programs were probably delivered on the promise of being agile and adaptable. And now, well, they’re not. They’re holding back improvement and killing off strategic optionality. Indeed, in another survey on Kubernetes usage, 49% of executives said integrating new and existing technology is the biggest impediment to developer productivity.[7]

After ten years…I think we’ve sort of decided on IaaS, on cloud. And once you’ve got your cloud setup, your biggest challenge to fame and glory is modernizing your applications. Otherwise, you’ll just be sucking up all the same, old, stagnating water into your shiny new cloud.


[1] This is not even an estimate, just a napkin figure. AirFrance-KLM told me they’re modernizing over 2,000 applications. Let’s take the so called Fortune 500, and multiply it out: 500 x 2,000 = 1,000,000. Now, AirFrance-KLM is a big company, but not the biggest by far. A company like JPMC has many more applications. Also, there are governments, militaries, and other large organizations out there beyond the F500. Or you could have started with the Global 2,000. So, let’s assume that there are “millions” of apps out there. (Footnote to footnote: what is an “app”? If you’re asking this, let’s call it a “workload” and see if that satisfies you enough to go back to the main text.)

[2] Yes, yes – this isn’t strictly true. Taking out costs can change markets because you’ve freed up so much cash-flow, lowered prices, etc. There’s “globalization.” Regulations can dramatically change things (AT&T, unbundling, abdicating moral responsibility in publishing, etc.). Unexpected, black swans can destroy fragile parts of the market, table-flipping everything. And more what have you’s &co.’s. Thank you for your feedback; we appreciate your business; this call will now disconnect.

[3] For more on how to use this kind of chart in the strategy world, see Rita McGrath’s books, most recently, Seeing Around Corners.

[4] Some years ago, it was popular to cite such studies as “at the current churn rate, about half of S&P 500 companies will be replaced over the next ten years.” I, dear reader, have been guilty of using such weird mental gymnastics, a sort of Sugar Rush grade, neon-track of logic. (Also, there’s private equity, M&A, the 2008 financial collapse, globalism, cranky CEOs, and all manner of other things that change that list regardless of the, well, business acumen of the victims – pouring gasoline onto the scales of the fires of creative destruction.)

[5] Public cloud has been an interesting inroad against this: you have to pay for public cloud, there’s no way around it. Still, people seem to like it more than paying larger, upfront software licensing fees. In public cloud, the initial process is often much more pleasant than a traditional acquisition process. The sales process of signing up without talking to a person, testing out the software, using it for real…all without having to set up servers, networking, install and upgrade software.

[6] There’s a lot of hair-splitting between “continuous delivery” versus “continuous deployment.” I don’t know. At a certain level of organizational management, the distinction becomes less useful than spending your mental effort on more pressing strategic and management riddles. I think it’s notable that the jargon we use is “CI/CD” not “CI/CD/CD” (or maybe, more delightfully, CI/CD²?)

[7] I’m fond of citing this survey for another reason that shows how misaligned executive and developer beliefs can be: 29% of developers said that “Access to infrastructure is the biggest impediment to developer productivity”…but only 6% of executives agreed.

Things to do in Austin for foreigners short on time

An older post from my newsletter, but probably still helpful.

Breakfast tacos at tamalehouse

I navigate life, mostly, through food, rather, eating. Thus, my suggestions for things to do in Austin are primarily about things to eat. Also, I have a nine year old and have lived in Amsterdam for a year. So, my knowledge of “the hot spots” is about a decade out of date. Some of the places I recommend below may even be closed!

Nonetheless, here’s what I would do, and try to do, when I go back:

  • Eat breakfast tacos in the morning – the breakfast taco is a tortilla with scrambled eggs, cheese, and other things, usually bacon. If you’re vegan, get potatoes and beans, maybe guacamole. I would start with the basic, which is both the benchmark and the standard: eggs, cheese, bacon. Breakfast tacos must be served in a flour tortilla, white flour preferably. Breakfast tacos on corn tortillas are a fraud and should be stomped on. As a bonus round, try a migas breakfast taco. In fact, I would suggest pairing a standard breakfast taco with a migas one: eat the standard first, and then the migas one. If you’ve never had a migas breakfast taco, you want to prime your mouth with the standard, a sort of palate cleansing. Order these at a Taco Deli, a Torchy’s, or any restaurant with a Spanish word and a number in its name, e.g., “Arranda’s #4.” If you can go a little out of the way, go to Tamale House or Mi Madre’s. I would recommend ordering extra salsa to put on the tacos (see below).

The ethics of selling software, the find-the-good-first test

Pic: “Riot!” – an Instagram post from Michael Coté (@bushwald).

Should you be selling software to ICE? How about companies that make missiles, canned food that’s bought and eaten by drone pilots? How about only divisions of defense companies that make the defensive weapons, like anti-aircraft radar? Even authoritarian regimes need asset management software that tracks what type of lightbulbs are used in the tourist department’s waiting room.

This is an ethical question programmers ask themselves sometimes, especially programmers employed by large enterprise software vendors killing time at an open source conference, wondering how they got so much mud on their pants cuffs.

I don’t know the answer, really. I have a philosophy degree from 2000, which means I’m from the school of philosophy that says “can you ever know anything? Let’s go get some pho.”

Some software is like canned food: a commodity item with faceless consumers. You’re not going to stop making canned food (or deodorant) because people you disagree with use it. You won’t kill httpd because some evil actor installs it and uses it to help oppress or kill people. Why? I mean, because it’s impossible to know who uses it?

Things get more complicated with a SaaS, like GMail. In theory you could investigate who’s using it. This could be like finding copyrighted material or porn on YouTube, a lot of work, but a business priority, so you figure out how to do it. You could deny email to bad actors.

But then what about, like, kubernetes? Should we be excited or depressed that it can run on fighter jets? Do we distinguish between defense vs. attack? Do we need to read some big ethics of war reading list to figure out if a first strike is actually defensive or offensive?

Are we going to use some observability to detect when the fighter jet is defending versus attacking, or just doing some air show for retirees and kids, and shut down kubernetes according to our ethics (air shows use a lot of fuel after all, and those kids on grandpa’s shoulders are going to suffer the consequences of that fun afternoon in later years – oh, and also the scenario where you kill people)?

Like I said, I don’t know the answer. (Yay gen-x, or whatever.)

I would just suggest a different approach to analyzing the question, different than I see most people discussing it. Most people ask the question “how do we identify when our customer is evil and, thus, we should deny them our awesome software?”

Instead, I would ask it slightly differently, “prove that the software will be used for good by the customer.” That is, instead of finding the evil, prove that the customer will do good.

I’m not suggesting ignoring evil done – you look for that too after finding the good. But, you’ll have a different approach and less continuous discussion if you start with “before you can use my software, you have to show me the good you’ll do, prove that you’re going to use it in ways I agree with.”

Theoretically, you’ll arrive at the same place as if you started with finding the evil. Whether you start by focusing on how delicious the punch is first, or start with there’s a turd in the bowl first, you should reach the same conclusion.

Make your conference talk about one small thing

The content at most conferences is middling. Most talks should be focused on one small thing, not an overview of everything (with the rare exception of an opening, level-setting talk, perhaps). Tech people fall prey to laundry listing a bunch of things because we get excited to learn new stuff, we love tools and new things and concepts – and usually we want to share that excitement, or at least show off and have the joy of hearing yourself talk (an underrated joy). But, listing everything you can think of on a given topic is usually a bad approach for a talk. A series of lectures, or a book, is where to be comprehensive.

Sometimes, a laundry list of tactics is good, a table of contents for further work. A live demo is another thing too: you usually want to see the full-cycle of how an idea gets coded into an application, deployed, debugged, etc.

A story can be good if it’s a case study, but even then you probably want to conclude with one thing: “what we found was that we should have involved procurement at the beginning.”

Memorable talks usually have one idea, though.

A talk should be mostly the conclusion for all those laundry lists, those lecture series: here’s the one thing we found after all that work we described in chapters one to fifteen, or, the one idea we had, the one problem we solved that we didn’t even know we had, the idea you are (too) comfortable with that you need to change to stop suffering/unlock your potential, the one action I want you to take.

As with all people who give advice, I don’t follow it.

Governance hacks – business cases

Cut from my writing up of AirFrance-KLM’s modernization strategy for its 2,000+ apps.

For each major decision (like modernizing an application, moving an application team to a new toolchain, putting a new platform in place, and other major changes to how you do software), always have a business case. You have to avoid local optimization too: make sure you focus on the big picture, looking at dev, ops, and the overall business outcome. What does it mean to speed up the release cycle? Does introducing new services and capabilities make your daily business run more efficiently, or attract new customers? Does it help prevent security problems or add in more reliability? As they say, “what is this in service of?”

This is especially important for avoiding gratuitous transformation, gold-plating, and other fix-it-if-it-ain’t-broke anti-patterns. Also, it will help you show people why it’s worth changing if everything seems to be going well. “[T]eams have applications who are working and sometimes are working quite, quite well,” Jean-Pierre Brajal says. “So people come to us and say, ‘well, why do I have to cancel my application? It’s already running.’” You can use a business case to show the benefits.

Also, he notes, it’s important to make sure you’re improving the process end-to-end, not just one component. Let’s say you make automated testing of the software better, but don’t address deploying the software. Now, when you’ve moved the team off their old system – one that was working – you’ve introduced a new problem, a new bottleneck that they didn’t have to deal with before.

There’s a lot more in the talk.

Successful pundit tactics

  • Make shooting fish in a barrel seem interesting and insightful. Facebook is evil, Amazon is rapacious.
  • Layer financial analysis into your big claims – talking about valuation, share price, cash flows is impressive.
  • Stick to your stock phrases and concept models. Eventually, they’ll stick. Or, if people don’t laugh at them and repeat them, you can come up with new ones. Own a category and the associated words.
  • Make definitive statements, e.g., Microsoft will bury Slack with Teams, like they did Netscape.
  • Find a contrary position that promotes a social good. Or just a contrarian position.
  • Find a position that other pundits don’t have. The “blue ocean” thing. Scott Galloway likes the idea that tech people could easily decide to do social good, but don’t because there’s no profit and there’s no punishment.
  • Point out that people can just actually do things, that it’d be easy to solve problems if you tried. This is a Matthew Yglesias rhetorical trick. The unspoken implication is that they choose not to and are hypocrites or, at best, flaccid.
  • Convert a year’s worth of blog posts, newsletter missives, etc. into a book.
  • Make predictions, wild ones. People love predictions and it truly doesn’t matter if they come true or not.
  • Be independent, not unbiased. To make all these wild claims you need financial security (or to not care about it) so you can lash out, er, comment on every opportunity.
  • Always unmask motivations instead of attacking a person’s character – explain why people (or movements) are motivated to do something, not that they’re bad people. Once you’ve explained the motivation, you can then:
  • Point out how the rival position leads to unintended consequences, often contradicting the original goals. Too much NIMBY leads to a housing shortage, pushing less wealthy people out of the neighborhood, increasing gentrification, driving wealthy people into your neighborhood and developers to only make sure bets instead of novel ones – now you’re the bourgeoisie!
  • Pointing out that “the medium is the message” (people’s desires and how they express/pursue them in the world shapes what they do and create, regardless of the Truth of the matter) gets them agog every time.
  • Most importantly: never answer the question you were asked, answer the question you wished you were asked and that you have an answer for.
  • However: 15% to 20% of the time let yourself make shit up and just go into a screed of disjoint nonsense that’s poetic and invigorating. We value and respect the insane, inchoate genius more than we’d like to admit. People need a break from being serious all the time to stay sane.

See most all of them in action here.

Write every day even if it’s not on topic

I hear people say they write every day, they make a habit of it. I always assumed this meant writing on topic, on whatever your projects are.

Really, it just means write something. Even complaining about yourself or writing a description of a bird. The point is to be practicing, to stay in training, to keep your sword-mind sharp, as Tyrion once explained when it comes to reading (the other daily practice).

Often, nothing for your deadlines will come. For some people it comes in little slices that will add up over time and be edited into perfection, for others (like me) it’s all at once in a deluge of words that need to be typed down before they flow away down the gutter.

Writing every day gives you the chance to start, and to get closer to finishing. You can rely on the muse to inspire your writing, but the muse needs to know when and where to find you.

And, even if it’s just journaling, you’ll help your psychology, and create a log of what happened that day – filling the tanks for when the muse does come. Think of artists doodling and doing “studies” all the time. So many good books are just worked over diary entries and collections of anecdotes.

(The above is pulled – like those little slices – from writing advice I’ve read elsewhere [esp. the muse metaphor]. Related writing tip: don’t worry about hyperlinks if you don’t have the time: click publish and move on.)

Banking “disruption,” or whatever – part 01

There’s near universal sentiment that traditional banks need to shift to improve and protect their businesses against financial startups, so called “FinTechs.” These startups create banks that are often 100% online, even purely as a mobile app. The release of the Apple Card highlights how these banks are different: they’re faster, more customer-experience focused, and innovate new features.

The core reason FinTechs can do all of this is that they’re good at creating well designed software, software that feels natural to people and allows them to optimize the banking experience and even start innovating new features. People like banking with them!

These FinTechs are growing quickly. For example, N26 grew from 100,000 accounts in 2015 to 3.5m this year. Still, existing banks don’t seem to be feeling too much pain. In that same period, JPMC went from 39.2m digital accounts to 49m, adding 9.8m accounts. Even if it’s small or hard to chart, market share is being lost and existing banks are eager to respond. And, of course, the FinTechs are eager to take advantage of slower moving banks with the $128bn of VC funding that’s fueled FinTech growth.

I wanted to get a better handle on all this, so I’ve put together this “hot take” on digital banking, FinTechs, whatever. My conclusion is that these new banks take advantage of having a clean slate – a lack of legacy baggage in business models and technology stacks – to focus most of their attention on customer experience, doing software really well. This is at the heart of most “tech companies’” operational differentiation, and it’s no different in banking.

Large, existing banks may be “slow moving,” but they have deep competitive advantages if they can address the legacy of past success: those big, creaking backend systems and a culture of product development that, well, isn’t product development. Thankfully, there are several instances and case studies of banks transforming how they do business.

That Apple Card sure looks cool

As with you, I’m sure, I’m curious about the excitement around the Apple Card. It looks cool, with features like quick activation and tight (perhaps too tight!) integration with the iPhone. The card benefits aren’t too great compared to what’s widely available: the Apple Card gives you 1% to 3% cash back on purchases, with 3% only for Apple purchases.

Two other features got me thinking though.

The cash back amounts show up in your account by the end of the day. In contrast, while many credit cards offer cash back, it can take weeks or even months for it to show up in your account – and that cash back period is, perhaps not surprisingly, hard to find for most cards.

The Apple Card has a really quick activation process. Traditionally, getting your account set up and activating a card can take days to weeks – usually, you need a card snail-mailed to you. But once you set up your Apple Card account, you can start using tap-to-pay with your phone. When I moved to Amsterdam, I set up an ABN AMRO account, and last week I set up an N26 account. In both instances, I had to wait several days to get a physical debit card. I could start transferring money instantly, however.

There’s no guarantee that the Apple Card will be a competitive monster. Per usual, the huge customer base and trust Apple has boost their chances. As Patrick McGee at The Financial Times notes: “[A] JD Power survey published last week, before the card was even available, found that 52 per cent of those aged between 18 and 29 were aware of it; of those, more than half were likely to apply.” Apple usually has a great attach rate between the iPhone and new products. Signs point to the Apple Card working out well for Apple and their partners.

Shifting the market with innovation…right?

That snazzy UI and zippy features make me wonder, though: why is this new? Why aren’t these boring, commodified features in banking yet? Let’s broaden this question to banking in general, mostly retail or consumer banking for the discussion here.

Perhaps we have an innovation gap in banking, one that’s likely been ignored by existing banks for many years. These FinTechs, and other innovation-focused companies like Apple, have been using innovation as a crowbar to take market share, coming up with better ways of servicing customers and new features.

Is that innovation getting FinTechs new business and sucking away customers from existing banks? To get a handle on that kind of market share shift I like to use a chart I call The Dediu Cliff to think about startups vs. incumbents. It’s a simple, quick way of showing how market share shifts between those two, how startups gain share and incumbents lose it. You chart out as many years as you can in a 100% area graph showing the shift in market share between the various players. Getting that data for banking has so far proved difficult, but let’s take a swag at it anyhow.
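For what it’s worth, the mechanics of the chart are simple; it’s the data that’s hard to get. Here’s a minimal sketch, with every number below made up purely for illustration:

```python
# Minimal sketch of a "Dediu Cliff"-style chart: a 100% area graph of market
# share over time. All numbers are invented for illustration; collecting the
# real banking data is the hard part.
import matplotlib.pyplot as plt

years = [2015, 2016, 2017, 2018, 2019]
incumbents = [97, 95, 92, 88, 84]            # hypothetical share, percent
fintechs = [100 - share for share in incumbents]

fig, ax = plt.subplots()
ax.stackplot(years, incumbents, fintechs, labels=["Incumbent banks", "FinTechs"])
ax.set_ylim(0, 100)
ax.set_ylabel("Market share (%)")
ax.set_title("Startups vs. incumbents (illustrative)")
ax.legend(loc="lower left")
plt.show()
```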

Being lazy, I found a pre-made data set that shows this shift, in Sweden, thanks to McKinsey:

[Chart: market share shift from “universal” banks to “specialists” in Sweden]
Sources: “Disruption in European consumer finance: Lessons from Sweden,” Albion Murati, Oskar Skau, and Zubin Taraporevala, McKinsey, April 2018; “New rules for an old game: Banks in the changing world of financial intermediation,” Miklos Dietz, Paul Jenkins, Rushabh Kapashi, Matthieu Lemerle, Asheet Mehta, Luisa Quetti, McKinsey, Nov 2018. 

As the report notes, Sweden is very advanced in digital banking. In comparison, they estimate that in the UK the “specialist” firms have less than 20% share. In this dataset, “specialist” isn’t exactly all new and fun FinTech startups, but this chart shows the shift from “universal,” traditional banks to new types of banks and services. There’s a market shift.

If I had more time, I’d want to make a similar Dediu Cliff for more than just Sweden. As a bad, but quick, example, here’s a comparison of JPMC’s retail banking customer growth to N26’s:

[Chart: 100% area chart of JPMC vs. N26 retail banking customer growth]
Sources: “How JPMorgan Is Preparing For The Next Generation Of Consumer Banking,” CBInsights, August, 2018; JPMC 2018 annual report; “N26 is now one of the highest valued FinTechs globally,” N26 Blog, July, 2019.

 

This chart is not too useful, though, because it compares just one bank to one FinTech. And JPMC is much lauded for its innovation abilities. As of the summer of 2019, JPMC has 62m household customers, with 49m being “digital,” and N26 has 3.5m, all “digital” we should assume. Here’s the breakdown:

 

[Chart: bar chart of JPMC vs. N26 customers, total and digital, summer 2019]
Sources: “How JPMorgan Is Preparing For The Next Generation Of Consumer Banking,” CBInsights, August, 2018; JPMC 2018 annual report; “N26 is now one of the highest valued FinTechs globally,” N26 Blog, July, 2019.

Growth, as you’d expect, is something else: JPMC had a CAGR of 8%, while N26’s was 227%. If N26 survives, that of course means their growth will flatten, eventually.
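The CAGR math itself is simple enough to check. The sketch below assumes a roughly three-year window, which is what reproduces both figures from the account numbers cited above:

```python
# Quick check of the growth figures above: CAGR = (end / start) ** (1 / years) - 1.
# A three-year window is assumed here because it reproduces the 8% and 227%
# numbers from the account counts cited in the sources above.
def cagr(start: float, end: float, years: float) -> float:
    return (end / start) ** (1 / years) - 1

print(f"JPMC digital accounts: {cagr(39.2e6, 49e6, 3):.0%}")  # ~8%
print(f"N26 accounts:          {cagr(0.1e6, 3.5e6, 3):.0%}")  # ~227%
```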

Even if it’s hard to chart well, we should take it that the new breed of FinTechs is taking market share. Financial services executives seem to think so, as one PWC survey found: 73% of those executives “perceive consumer banking as the one most [banking products] likely to be disrupted by FinTech.”

To compound the fogginess, as in the original Dediu Cliff, which charted the dramatic shift from PCs to smart phones, the threat often comes from completely unexpected competitors. The market is redefined, from just PCs, for example, to PCs and smart phones. This leaves existing businesses (PC manufacturers) blindsided because their markets are redefined. Customers’ desires and buying habits change: they want to spend their computer share of wallet and time on iPhones, not Wintels.

Taking this approach in banking, there are numerous FinTechs going after underserved markets that are “underbanked” and usually deprioritized by existing banks. This is a classic, “Big D” disruption strategy. One of the more fascinating examples is ride-sharing companies that become de facto banks because they handle the money otherwise bankless drivers earn.

There’s also a hefty threat from behemoth tech companies outside of banking that are stumbling into finance. Companies like Alibaba and WeChat have huge presences in payments and Facebook is always up to something. These entrants could prove to be the most threatening long term if they redefine what the market is and how it operates.

Differentiating by focusing on people

So, there is a shift going on. What are these FinTechs doing? Let’s simplify to three things:

  1. Mobile – an emphasis on mobile as the core branch and workflow, often 100% mobile.
  2. Speed – from signing up, to transferring money, to, as with the Apple Card, faster cash back. While it’ll take a while to get my card, actually signing up with N26 was quick, including taking pictures of my Netherlands residency card for ID verification. I signed up at 11:29am and was ready to go at 4:05pm, on a Sunday no less.
  3. Innovation – sort of. It’s not really about new features, but innovations in how people interact with the banks. N26 lets you create “spaces,” which are just sub-accounts used to organize budgets and reports; bunq lets you create 25 new accounts; many FinTechs (like the Apple Card) bundle in transaction-type reporting and budgeting tools. All of those are interesting, but not groundbreaking…yet.

From a competitive analysis standpoint, what’s frustrating is that, feature-by-feature, traditional banks and FinTechs seem to be on par. Throw in services like mint.com and all the supposedly new features that FinTechs offer don’t look so unique anymore. Paying with your phone is amazing, to be sure, but existing banks have long done that.

For all the charts and surveys you can pile on, the difference amounts to a subjective leap of faith. FinTech companies are more customer centric, focusing on the customer experience. When you look at the broader “tech companies” that enterprises aspire to imitate, customer experience is one of the primary differentiators. Their software is really good. More precisely, how their software helps people accomplish tasks is well designed and ever improving.

There’s a sound vision to be plucked from that for banks: “Live more, bank less,” as DBS Bank  in Singapore puts it.

Unshackled

Responding to all of this seems easy on the face of it: if these FinTechs can do it, why can’t banks, with their thousands of developers and bank-sized budgets, do it?

As ever, banks suffer from the shackles of success: all the existing processes, IT, and thought technologies that were wildly successful and drive their billions in revenue…but haven’t been modernized in years, or even decades.

In part 2, we’ll look at what banks can do to unshackle themselves, and maybe slip on some new shackles for the next ten years.

(There are some footnotes that didn’t get over here.  For those, and if you want to see me wrastlin’ through part two, or leave a comment, check out the raw Google Doc of this.)

The Finance Bottleneck

This is a draft excerpt from a book I’m working on, tentatively titled The Business Bottleneck. If you’re interested in the footnotes, leaving a comment, and the further evolution of the book, check out the Google Doc for it.

The Business Bottleneck

All businesses have one core strategy: to stay alive. They do this by constantly offering new reasons for people to buy from them and, crucially, stay with them. Over the last decade, traditional businesses have been freaked by competitors that are figuring out better offerings and stealing those customers. The super-clever among these competitors innovate entirely new business models: hourly car rentals, next day delivery, short term insurance for jackets, paying for that jacket with your phone, banks with only your iPhone as a branch, incorporating real-time weather information into your reinsurance risk analysis. 

Source: Gartner L2, July 2019.

In the majority (maybe all) of these cases, surviving and innovating is done best with small business and software development cycles. The two work hand-in-hand and are ineffective without each other. I’d urge you to think of them as the same thing. Instead of using PowerPoint and Machiavellian meeting tactics as their tools, business development and strategy now use software.

You innovate by systematically failing weekly, over and over, until you find the thing people will buy and the best way to deliver it. We’ve known this for a long time and enshrined it in processes like The Lean Startup, Jobs to Be Done, agile development and DevOps, and disruption theory. While these processes are known and proven, they’ve hit several bottlenecks in the rest of the organization. In the past, we had IT bottlenecks. Now we have what I’ve been thinking of as The Business Bottleneck. There are several of them. Let’s start by looking at the first and, thus, most pressingly damaging one, the bottleneck that cuts off business health and innovation before it even starts: finance.

Most software development finance is done wrong and damages the business. Finance seeks to be accurate and predictable, and works on annual cycles. This is not at all what business and software development are like.

Business & software development is chaos

Software development is a chaotic, unpredictable activity. We’ve known this for decades but we willfully ignore it, like the advice to floss each day. Mark Schwartz has a clever take on the Standish software project failure reports. Since the numbers in these reports stay basically the same each year, the chart below shows that software is difficult and that we’re not getting much better at it:

[Chart: Standish software project outcomes over time]
Source: built from excerpts from the 2009 study and 2015 study.

What this implies, though, is something even more wickedly true: it’s not that these projects failed, it’s that we had false hopes. In fact, the red and yellow in the original chart actually show that software performs consistently with its true nature. Let me rework the chart to show this:

[Chart: Standish software project outcomes, reworked]
Source: built from excerpts from the 2009 study and 2015 study.

What this second version illustrates is that the time and budget it takes to get software right can’t be predicted with any useful accuracy. The only useful accuracy is knowing that you’ll be wrong in your predictions. We call it software engineering, and even more accurately “development,” because it’s not scientific. Science seeks to describe reality, to be precise and correct – to discover truths that can be repeated. Software isn’t like that at all. There’s little science to what software organizations do; there’s just the engineering mentality of what works with the time and budget we have.

Source: from Michael Alba.

What’s more, business development is chaotic as well. Who knows what new business idea, what exact feature will work and be valuable to customers? Worse, there is no science behind business innovation – it’s all trial and error, constantly trying to both sense and shape what people and businesses will buy and at what price. Add in competitors doing the same, suppliers gasping for air in their own chaos quicksand, governments regulating, and culture changing people’s tastes, and it’s all a swirling cipher.

In each case, the only hope is rigorously using a system of exploration and refining. In business, you can study all the charts and McKinsey PDFs you want, but until you actually experiment by putting a product out there, seeing what demand and pricing are, and how your competitors respond, you know nothing. The same is true for software.

Each domain has tools for this exploration. I’m less familiar with business development, and only know the Jobs to Be Done tool. This tool studies customer behaviors to discover what products they actually will spend money on, to find the “job” they hire your company to solve, and then change the business to profit from that knowledge.

The discovery cycle in software follows a simple recipe: you reduce your release cycle down to a week and use a theory-driven design process to constantly explore and react to customer preferences. You’re looking to find the best way to implement a specific feature in the UI to maximize revenue and customer satisfaction. That is, to achieve whatever “business value” you’re after. It has many names and diagrams, but I call this process the “small batch cycle.”

[Diagram] The Home Depot illustrates its small batch cycle. Part Vemana and Brooke Creef, 2018.

For example, Orange used this cycle when perfecting its customer billing app. Orange wanted to reduce traffic to call centers, lowering costs but also driving up customer satisfaction (who wants to call a call center?). By following a small batch cycle, the company found that its customers only wanted to see the last two months’ worth of bills and their current data usage. That drove 50% of the customer base to use the app, helping remove their reliance on actual call centers, driving down costs and addressing customer satisfaction.

These business and software tools start with the actual customers, the people who are doing the buying, and use those people as the raw materials and lab to run experiments. The results of these experiments are used to validate, or more often invalidate, theories of what the business should be and do. That’s a whole other story, and the subject of my previous book, Monolithic Transformation.

We were going to talk about finance, though, weren’t we?

The Finance Bottleneck

Finance likes certainty – forecasts, plans, commits, and smooth lines. But if you’re working in the chaos of business and software development, you can’t commit to much. The only certainty is that you’ll know something valuable once you get out there and experiment. At first, all you’ll learn is that your idea was wrong. In this process, failure is as valuable as success. Knowing what doesn’t work, a failure, is the path to finding what does work, a success. You keep trying new things until you find success. To finish the absurd truth: failure creates success.

Software organizations can reliably deliver this type of learning each week. The same is true for business development. We’ve known this for decades, and many organizations have used it as their core differentiation engine.

But finance doesn’t work in these clever terms. “What the hell do you mean ‘failure creates success’? How do I put that in a spreadsheet?” we can hear the SVP of Finance saying, “Get the hell out of this conference room. You’re insane.”

Instead, when it comes to software development, finance focuses only on costs. These are easy to know: the costs of staff, the costs of their tools, and the costs of the data centers to run their software. Business development has similar easy to know costs: salary, tools, travel, etc.

When you’re developing new businesses and software, it’s impossible to know the most important number: revenue. Without that number, knowing if costs are good or bad is difficult. You can estimate revenue and, more likely, you can wish-timate it. You can declare that you’re going to have 10% of your total addressable market (TAM). You can just declare – ahem, assume – that you’re chasing a $9bn market opportunity. Over time, once you’ve discovered and developed your business, you can start to use models like consumer spending vs. GDP growth, or the effect of weather and political instability on the global reinsurance market. And, sure, that works as a static model so long as nothing ever changes in your industry.

For software development, things are even worse when it comes to revenue. No one really tells IT what the revenue targets are. When IT is asked to make budgets, they’re rarely involved in setting revenue targets, nor even told what they are. Of course, as laid out here, these targets in new businesses can’t be known with much precision. This pushes IT to just focus on costs. The problem here, as Mark Schwartz points out in all of his books, is that cost is meaningless if you don’t know the “value” you’re trying to achieve. You might try to do something “cheaply,” but without the context of revenue, you have no idea what “cheap” is. If the business ends up making $15m, is $1m cheap? If it ends up making $180m, is $5m cheap? Would it have been better to spend $10m if it meant $50m more in revenue?

 

IT is rarely involved in the strategic conversations that narrow down to a revenue number. Nor are they in meetings about the more useful, but abstract, notion of “business value.” So, IT is left with just one number to work with: cost. This means they focus on getting “a good buy” regardless of what’s being bought. Eventually, this just means cutting costs, building up a “debt” of work that should have been done but was “too expensive” at the time. This creates slow-moving, or completely stalled-out, IT.

A rental car company can’t introduce hourly rentals because the back office systems are a mess and take 12 months to modify – but, boy, you sure got a good buy! A reinsurance company can’t integrate daily weather reports into its analytics to reassess its risk profile and adjust its portfolio because the connection between simple weather APIs and rock-solid mainframe processing is slow – but, sister, we sure did get a good buy on those MIPS! A bank can’t be the first in its market to add Apple Pay support because the payments processing system takes a year to integrate with, not to mention the governance changes needed to work with a new clearinghouse, and then there’s fraud detection – but, hoss, we reduced IT costs by $5m last year – another great buy!

Worse than shooting yourself in the foot is having someone else shoot you in the foot. As one pharmacy executive put it, taking six months to release competitive features isn’t much use if Amazon can release them in two months. But, hey! Our software development processes cost a third less than the industry averages!

Business development is the same, just with different tools and people who wear wing-tips instead of toe-shoes. Hopefully you’re realizing that the distinction between business and software development is unhelpful – they’re the same thing.

The business case is wrong from the start

So, when finance tries to assign a revenue number, it will be wrong. When you’re innovating, you can’t know that number, and IT certainly isn’t going to know it. No one knows the business value that you’re going to create: you have to first discover it, and then figure out how to deliver it profitably.

As is well known, the problem here is the long cycle that finance follows: at least a year. At that scope, the prediction, discovery, and certainty cycle is sloppy. You learn only once a year, maybe with indicators each quarter of how it’s going. But, you don’t really adjust the finance numbers: they don’t get smarter, more accurate, as you learn more each week. It’s not like you can go get board approval each week for the new numbers. It takes two weeks just to get the colors and alignment of all those slides right. And all that pre-wiring – don’t even get me started!

In business and software development, each week when you release your software you get smarter. While we could tag shipping containers with RFID tags to track them more accurately, we learn that we can’t actually collect and use that data – instead, it’s more practical to have people just enter the tracking information at each port, which means the software needs to be really good. People don’t actually want to use those expensive to create and maintain infotainment screens in cars, they want to use their phones – cars are just really large iPhone accessories. When buying a dishwasher, customers actually want to come to your store to touch and feel them, but first they want to do all their research ahead of time, and then buy the dishwasher on an app in the store instead of talking with a clerk. 

These kinds of results seem obvious in hindsight, but business development people failed their way to those successes. And, as you can imagine, strategy and finance assumptions made 12 to 18 months ago that drove business cases often seem comical in hindsight.

A smaller cycle means you can fail faster, getting smarter each time. For finance, this means frequently adjusting the numbers instead of sticking to the annual estimates. Your numbers get better, more accurate over time. The goal is to make the numbers adjust to reality as you discover it, as you fail your way to success, getting a better idea of what customers want, what they’ll pay, and how you can defend against competition.

Small batch finance

Some companies are lucky enough to just ignore finance and business models. They burn venture capital funding as fuel to rocket towards stability and profitability. Uber is a big test of this model – will it become a viable business (profitable), or will it turn out that all that VC money was just subsidizing a bad business model? Amazon is a positive example here: over the past 20 years, cash-as-rocket-fuel launched them to boatloads of profit.

Most organizations prefer less expensive, less risky methods. In these organizations, what I see are programs that institutionalize these failure-driven cycles. They create new governance and financing models that enforce smaller business cycles, allowing business and software development to take work in small batches. Allianz, for example, used 100-day cycles to discover and validate new businesses. Instead of one chance every 365 days to get it right, they have three, almost four. As each week goes by, they get smarter, there’s less waste and risk, and finance gets more accurate. If their business theory is validated, the new business is graduated from the lab and integrated back into the relevant line of business. The Home Depot, Thales, Allstate, and many others institutionalize similar practices.

[Diagram: Allianz digital factory MVP process]
Source: “The Shift to a New Digital Allianz Germany,” Dr. Daniel Poelchau, Allianz, CF Summit EU, Oct 2016.

Each of these cycles gives the business the chance to validate and invalidate assumptions. It gives finance more certainty, more precision, and, thus, fewer errors and less risk when it comes to the numbers. Finance might even be able to come up with a revenue number that’s real. That understanding makes funding business and software development less risky: you have ongoing health checks on the viability of the financial investment. You know when to stop throwing good money after bad when you’ve invalidated your business idea. Or, you can change your assumptions and try again: maybe no one really wants to rent cars by the hour, maybe they want scooters, or maybe they just want a bus pass.

Business cases focused on growth, not costs

With a steady flow of business development learning, you can start making growth decisions. If you validate that you can track a team of nuclear power plant workers better with RFID badges, thus directing them to new jobs more quickly and reducing costly downtime, you can then increase your confidence that spending millions of dollars to do it for all plant workers will pay off. You see similar small experiments leading to massive investments in omnichannel programs at places like Dick’s Sporting Goods and The Home Depot.

Finance has to get involved in this fail-to-success cycle. Otherwise, business and software development will constantly be driven to be the cheapest provider. We saw how this generally works out with the outsourcing craze of my youth. Seeking to be the cheapest, or the synonymous phrase, the “most cost effective,” option ends up saving money, but paralyzing present and future innovation.

[Chart]
Source: “Survey Analysis: IT Is Moving Quickly From Projects to Products,” Bill Swanton, Matthew Hotle, Deacon D.K Wan, Gartner, Oct

The problem isn’t that IT is too expensive, or can’t prove out a business case. As the Gartner study above shows, the problem is that most financing models we use to gate and rate business and software development are a poor fit. That needs to be fixed; finance needs to innovate. I’ve seen some techniques here and there, but nothing that’s widely accepted and used. And, certainly, when I hear about finance pushing back on IT business cases, it’s symptomatic of a disconnect between IT investment and corporate finance.

Businesses can certainly survive and even thrive. The small, failure-to-success learning cycles used by business and software developers work, are well known, and can be done by any organization that wills it. Those bottlenecks are broken. Finance is the next bottleneck to solve for.

I don’t really know how to fix it. Maybe you do! 

Crawl into the bottleneck

After finance, and for another time, come my old friends: corporate strategy. And if you peer past that blizzard of pre-wired slides and pivot tables, you can see just past the edges of the next bottleneck, that mysterious cabal called “The C-Suite.” Let’s start with strategy first.

An unused executive dinner speech

I hosted an executive dinner a few weeks ago. I’d put together this opening talk, introducing the customer who was kind enough to come and go through their story. I didn’t really get a chance to give it, which was probably for the best. Maybe next time.

Thanks for coming – we’re glad y’all took the time. I know it’s hard.

My favorite thing about Pivotal is that I get to meet new people, computer people. My wife is always befuddled that I’m a wallflower in most company, but then, turn into an extrovert around computer people. So, it’s nice to meet more people like myself.

I’ve been at Pivotal almost five years and I’ve seen people like yourselves go through all sorts of transformations. They’re getting better at doing software. That’s setting them up to change how they run their business, to change what their business is, even. You can call it innovation, or whatever. Anyhow, I collect these stories – especially the parts where things go wrong – and in the Pivotal spirit of being kind and doing the right thing, try to help people who’re getting started by telling them the lessons learned.

Tonight, I’m hoping we can just get to know each other, especially amongst yourselves – us Pivotal people know each other well already!

Most organizations feel like they’re the only ones suffering. They think their problems are unique. I get to talk with a lot of organizations, so I see the opposite. In general, everyone has the same problems and usually the same solutions.

Given that, I’d encourage you to talk with each other about what you’re planning, or going through. Chances are someone right next to you is in the same spot, or, if you’re lucky, has gotten past it.

As an example of that, we’re lucky that [customer who’s here] wanted to share what’s been going on at [enterprise]. There’s lots of great stories in there…

So, let’s hear them…then let’s eat!

Discussing the common “CIO agenda”

I get asked to talk with “executives” more and more. That’s part of why Pivotal moved me over to Europe. People make lots of claims about what executives want to hear and the conversations you can have with them as a vendor. They don’t have time. You have to be concise. They don’t want to hear the details. They just want to advance their careers.
None of those are really my style, or even part of my core epistemes. When I have a good conversation with anyone, it’s because we’re both curious about something we don’t know. The goal is to understand it, sort of hold it out on a meat-selfie-stick and look at it from all angles. I find that most people, especially people in management positions charged with translating corporate strategy into cash, enjoy this. Some don’t, of course.
Anyhow, I’ve been writing down some common themes and “unknowns” for IT executives:
  1. Innovation – use IT to help change how the current business functions and create new businesses. Rental car companies want to streamline the car pick-up process, governments want to go from analog and phone driven fulfillment to software, insurers want to help ranchers better track and protect the insured cows. Innovation is now a vacuous term, but when an organization can reliably create and run well designed software, innovation can actually mean something real, revenue producing, and strategic.
  2. Keep making money – organizations already have existing, revenue producing businesses, often decades old. The IT supporting those businesses has worked for all that time – and still works! While many people derisively refer to this as “keeping the lights on,” it’s very difficult to work in the dark. Ensuring that the company can keep making money from their existing IT assets is vital – those lights need to stay on.
  3. Restoring trust in IT’s capabilities – organizations expect little from IT and rarely trust it with critical business functions, like innovating, after decades of cost cutting, outsourcing, and managing IT like a series of projects instead of a continuous stream of innovation. The IT organization has to rebuild itself from top to bottom – how it runs infrastructure, how it develops and runs software, and the culture of IT. Once that trust is built, the business needs to re-set its expectations of what IT can do, re-integrating IT back into everyday business.
What happens next is the fun part: how do executives reprogram their organization to do the above?
That’s my take on “talking with executives,” then: learning what they’re doing, even validating my assumptions like the above. This is, of course, filled in with all sorts of before/after performance anecdotes (“proof points” and “cases”). Those are just conversational accelerants, though. They’re the things that move the narrative forward by keeping the reader engaged, so to speak, by keeping you interested (myself as well).
Anyhow. Even all this is a theory on my part, something to be validated. As I have more of these conversations, we’ll see what happens.

DevOps, monolithic architectures, craftsmanship – an unpublished interview

I’m too wordy when I reply to reporters. This is mostly true everywhere I produce content. I don’t like trite, simple answers. Brevity and clarity make me suspicious, especially on topics I know well. As a consequence, I don’t think this interview by email was ever published.


What’s a DevOps advocate?

If you mean what I do, it means studying people and organizations who are trying to improve how they do software, summarizing all of that, ongoing, into several different types of content, and then trying to help, advise, and educate people on how they can improve how they do software. A loop of learning and then trying to teach, in a limited way. For example, I’m working on finishing up a book that contains a lot of this stuff that I’ve found over the past couple of years.

What is the foundation of DevOps: automation, agility, tools, continuous or all of them?

Yes, those are the core tools. The traditional foundation is “CALMS,” which means Culture, Automation, Lean, Measurement, and Sharing. Ultimately, these are things any innovation-driven process follows, but they’re called out explicitly because traditional IT has lost its way and doesn’t usually focus on these common sense things. A lot of what DevOps is trying to do is just get people to follow better software development and delivery practices…ones they should have been doing all along but got distracted from with outsourcing, SLAs, cost cutting, and the idea of treating IT like a service, or utility, rather than an innovation engine for “the business.”

Anyhow, CALMS means:
  • Culture – the norms, processes, and methodology IT follows. You want to shift from a project delivery culture to a product culture, from service management to innovation. Defining “culture,” let alone how to change it and how to use it, is slippery. I wrote up what I’ve figured out so far here.
  • Automation – this is the easiest to understand of all the DevOps things. It means focusing on automating as much as possible. If you find yourself manually doing some configuration or whatever, or relying on people opening a ticket to get something (like a database, etc.), figure out how to automate that instead.
  • Lean – software development has been borrowing a lot from Lean for the past 15 years. DevOps takes most all of it, but the key concepts it brings in are eliminating waste (effort spent that has “no value” to customers – in IT, often wait time for things like setting up servers and such) and working in incremental, more frequent (like weekly) releases rather than big, yearly releases.
  • Measurement – DevOps, like agile, is actually very disciplined if done properly. In addition to monitoring your applications and such in production, in order to continuously improve, DevOps is interested in measuring metrics around the process. How many bugs are in each release? How frequently do we deploy software? And so forth. The point is to use these measurements to indicate areas of improvement and figure out if you’re actually improving or not (see the sketch after this list).
  • Sharing – this was added after the initial four concepts. It’s straightforward and means that people across groups and even across organizations should share knowledge with each other. It also means, within organizations, having more unified teams of people rather than different groups that try to work with each other.
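To make the Measurement idea concrete, here’s a tiny sketch of the kind of process metrics a team might track. The releases and bug counts are invented purely for illustration:

```python
# Hypothetical process metrics of the kind "Measurement" in CALMS is after:
# deployment frequency and bugs per release. All values are invented.
releases = [
    {"version": "1.4.0", "bugs_found": 3},
    {"version": "1.4.1", "bugs_found": 1},
    {"version": "1.5.0", "bugs_found": 2},
]
weeks_observed = 3  # the window these releases shipped in

deploys_per_week = len(releases) / weeks_observed
avg_bugs_per_release = sum(r["bugs_found"] for r in releases) / len(releases)

print(f"Deploys per week: {deploys_per_week:.1f}")              # 1.0
print(f"Average bugs per release: {avg_bugs_per_release:.1f}")  # 2.0
```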
Today, we can ship every day. What impact for the teams and developers?

Shipping more frequently means you have more input on the usefulness of your software, and it also adds much more stability and predictability into your software process. Because you’re shipping weekly, or daily, you can observe how people use your software and make very frequent changes to improve it. There’s a loop of trying out a new feature, releasing it and observing how people use it, and then coming up with a new way to solve that problem better.

Stability and predictability are introduced because you establish a realistic rate of feature delivery each week. When you’re delivering each week, you quickly learn how much code (or features) you can do each week. This means that rather than having developers estimate how many features they can deliver in a year, for example, you learn how much they can actually deliver each week. Estimates are pretty much always wrong, and complete folly. But, once you calibrate and know how many features the team can deliver each week, they’re predictable and the overall process is more stable.

“Monolithic” architecture vs. “modular” approach. Are we talking microservices? Containers?

Yes, a monolithic architecture implies software that’s made of many different parts, but that all depend strongly on each other. To be frank, it also means software that’s complex, poorly tested, and, thus, not well understood. “Monolith” is often used for “software I’m scared to change,” that is, “legacy software.” In contrast, if you’re fine to change software and don’t fear doing so, you just call it “software.”

A microservice architecture is the current approach to breaking up “monoliths” into more independent components – different services that evolve on their own but are composed together into an application. Buying a product online is a classic example. If you look at the product page, it could be composed of many different services: pictures of the product, figuring out the pricing for your region, checking inventory for the product, listing reviews, etc. A monolithic architecture would find all of that information at once, in “one” piece of code. An application following a microservices architecture would treat all of these things as third-party services not under your control and compose the page by calling all those services.

To over simplify it, we used to call this idea “mashups” in the Web 2.0 era: pulling data from a lot of different sources and “mashing” that data up into a web page. All the rotating ads and suggested content you see on news sites are a metaphoric example as well: each of those components are pulled in from some other service rather than managed and collected together by the news site CMS. This is why the ads and suggested content are often awful, of course: there’s no editorial control over them.
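If it helps, here’s a rough sketch of that product-page example. The gateway URL and response shapes are hypothetical, and a real implementation would call the services concurrently and degrade gracefully if one of them fails:

```python
# Rough sketch of the product-page example: the page is composed by calling
# several independently owned services instead of one monolith doing it all.
# The gateway URL and response shapes are hypothetical.
import requests

BASE = "https://api.example.com"  # hypothetical API gateway

def product_page(product_id: str) -> dict:
    return {
        "details":   requests.get(f"{BASE}/catalog/{product_id}").json(),
        "price":     requests.get(f"{BASE}/pricing/{product_id}").json(),
        "inventory": requests.get(f"{BASE}/inventory/{product_id}").json(),
        "reviews":   requests.get(f"{BASE}/reviews/{product_id}").json(),
    }
```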

Infra as Code? Another thing?

“Infrastructure as code” means using automation tools for building and configuring servers (the software parts, not the hardware) and other “infrastructure,” and then treating those automation workflows as if they were software code: you check them into version control and track them like a version of your application. This means that you can check out, for example, a version of the server you’re configuring and automatically create it. The point of doing this is to get more visibility and control over that configuration by removing manual, human-driven configuring and such. Humans create errors, forget how things were done, have bad hair days, and otherwise foul things up. Computers don’t (unless those annoying humans tell them to).
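A conceptual sketch, if it helps. This isn’t any particular tool, just the shape of the idea: the desired server state is data you keep in version control, and an automated, repeatable step applies it (real setups use tools like Ansible, Terraform, or Chef rather than hand-rolled scripts):

```python
# Conceptual sketch of "infrastructure as code," not any real tool: the desired
# server state lives in version control as data, and an automated, repeatable
# step applies it instead of a human clicking around or filing a ticket.
DESIRED_STATE = {
    "hostname": "web-01",                      # hypothetical server
    "packages": ["nginx", "openjdk-11-jre"],
    "nginx": {"worker_processes": 4, "listen_port": 443},
}

def ensure_package(name: str) -> None:
    # A real tool would check whether the package is installed and only act if not.
    print(f"ensuring package is installed: {name}")

def write_config(path: str, settings: dict) -> None:
    # A real tool would render a template and only rewrite the file on change.
    print(f"writing {path} from {settings}")

def apply(state: dict) -> None:
    # Idempotent by intent: running this twice converges to the same result.
    for package in state["packages"]:
        ensure_package(package)
    write_config("/etc/nginx/nginx.conf", state["nginx"])

apply(DESIRED_STATE)
```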

For you, what is the ideal architecture?

An annoying, though accurate, answer would be “it depends.” I don’t really code anymore, so I couldn’t really say. Usually, you start with the minimum needed and just add in more complex architectures as needed. That sounds like the opposite of architecture, but it’s worse to end up with something like all those giant, built-out cities that end up having few people living in them.

Kanban, craftsmanship: friend or enemy of DevOps?

Kanban is used a lot in DevOps, though maybe not the full method. But the idea of having cards that represent a small feature, a backlog that contains those cards ranked by some priority, and then allowing people to pull those cards and put them in columns marked something like “working on” and “complete” is used all the time.

I’m not sure what “craftsmanship” is in this context, but if it means perfecting things like some master furniture maker, most DevOps people would encourage you to instead “release” the cabinets more frequently to find out how they should be designed, rather than assuming you knew what was needed and working on it all at once: maybe they want brutalist square legs instead of elegant rounded legs topped with a swan.

 

And, of course, if “craftsmanship” means “doing a good job and being conscious of how you’re evolving your trade,” well, everyone would say they do that, right? :)

You own it

This post is an early draft of a chapter in my book,  Monolithic Transformation.

Anywhere there is lack of speed, there is massive business vulnerability:

Speed to deliver a product or service to customers.

Speed to perform maintenance on critical path equipment.

Speed to bring new products and services to market.

Speed to grow new businesses.

Speed to evaluate and incubate new ideas.

Speed to learn from failures.

Speed to identify and understand customers.

Speed to recognize and fix defects.

Speed to recognize and replace business models that are remnants of the past.

Speed to experiment and bring about new business models.

Speed to learn, experiment, and leverage new technologies.

Speed to solve customer problems and prevent reoccurrence.

Speed to communicate with customers and restore outages.

Speed of our website and mobile app.

Speed of our back-office systems.

Speed of answering a customer’s call.

Speed to engage and collaborate within and across teams.

Speed to effectively hire and onboard.

Speed to deal with human or system performance problems.

Speed to recognize and remove constructs from the past that are no longer effective.

Speed to know what to do.

Speed to get work done.

— John Mitchell, Duke Energy.

When enterprises need to change urgently, in most cases, The Problem is with the organization, the system in place. Individuals, like technology, are highly adaptable and can change. They’re both silly putty that wiggle into the cracks as needed. It’s the organization that’s obstinate and calcified.

How the organization works, its architecture, is totally the responsibility of the leadership team. That team owns it just like a product team owns their software. Leadership’s job is to make sure the organization is healthy, thriving, and capable.

DevOps’ great contribution to IT is treating culture as programmable. How your people work is as agile and programmable as the software. Executives, management, and enterprise architects — leadership — are product managers, programmers, and designers. The organization is their product. They pay attention to their customers — the product teams and the platform engineers — and do everything possible to get the best outcomes, to make the product, the organization, as productive and well designed as possible.

I’ve tried to collect together what’s worked for numerous organizations going through — again, even at the end, gird your brain-loins, and pardon me here — digital transformation. Of course, as in all of life, the generalized version of Orwell’s sixth rule applies: “break any of these rules rather than doing anything barbarous.”

As you discover new, better ways of doing software, I’d ask you to share those learnings as widely as possible, especially outside of your organization. There’s very little written on the topic of how regular, large organizations manage the transformation to becoming software-driven enterprises.

Know that if your organization is dysfunctional, is always late and over budget, it’s your fault. Your staff may be grumpy, may seem under-skilled, and your existing infrastructure and applications may be pulling you down like a black hole. All of that is your product: you own it.

As I recall, a conclusion is supposed to be inspirational instead of a downer. So, here you go. You have the power to fix it. Hurry up and get to work.

This post is an early draft of a chapter in my book,  Monolithic Transformation.

Enterprise architecture still matters

This post is an early draft of a chapter in my book,  Monolithic Transformation.

A typical enterprise CAB.

We had assumed that alignment would occur naturally because teams would view things from an enterprise-wide perspective rather than solely through the lens of their own team. But we’ve learned that this only happens in a mature organization, which we’re still in the process of becoming. — Ron van Kemenade, ING.

The enterprise architect’s role in all of this deserves some special attention. Traditionally, in most large organizations, enterprise architects define the governance and shared technologies. They also enforce these practices, often through approval processes and review boards. An enterprise architect (EA) is seldom held in high regard by developers in traditional organizations. Teams (too) often see EAs as “enterprise astronauts,” behind on current technology and methodology, meddling too much in day-to-day decisions, sucking up time with change-advisory boards (CABs), and forever working on work that’s irrelevant to “the real work” done in product teams.

It’s popular, even, for the DevOps community to poke fun at them, going so far as to show that the traditional, change advisory board methods of governance actually damage the organization. “Using external change approval processes such as a change advisory board, as opposed to peer-based code review techniques,” Jez Humble writes summarizing the 2014 DevOps Report, “significantly impacts throughput while doing almost nothing to improve stability.”

If cruel, this sentiment often has truth to it. “If I’m doing 8 or 15 releases a week,” HCSC’s Mark Ardito says, “how am I going to get through all those CABs?” While traditional EAs may do “almost nothing” of value for high performing organizations, the role does play a significant part in cloud native leadership.

First, and foremost, EAs are part of leadership, acting something like the engineer to the product manager on the leadership team. An EA should intimately know the current and historic state of the IT department, and also should have a firm grasp on the actual business IT supports.

While EAs are made fun of for forever defining their enterprise architecture diagrams, that work is a side-effect of meticulously keeping up with the various applications, services, systems, and dependencies in the organization. Keeping those diagrams up-to-date is a hopeless task, but the EAs who make them at least have some knowledge of your existing spaghetti of interdependent systems. As you clean up this bowl of noodles, EAs will have more insights into the overall system. Indeed, tidying up that wreckage is an underappreciated task.

The EA’s dirty hands

I like to think of the work EAs do as gardening the overall organization. This contrasts with the more top-down idea of defining and governing the organization, down to the technologies and frameworks used by each team. Let’s look at some of an EA’s gardening tasks.

Setting technology & methodology defaults

Even if you take an extreme, developer friendly position, saying that you’re not going to govern what’s inside each application, there are still numerous points of governance about how the application is packaged, deployed, how it interfaces and integrates with other applications and services, how it should be instrumented to be managed, and so on. In large organizations, EAs should play a large role in setting these “defaults.” There may be reasons to deviate, but they’re the prescribed starting points.

As Stuart Charlton explains:

I think that it’s important that as you’re doing this you do have to have some standards about providing a tap, or an interface, or something to be able to hook anything you’re building into a broader analytics ecosystem called a data-lake — or whatever you want to call it — that at least allows me to get at your data. It’s not you know, like “hey I wrote this thing using a gRPC and golang and you can’t get at my data!” No you got to have something where people can get at it, at the very least.

Beyond software, EAs can also set the defaults for the organization’s meatware – all the process, methodology, and other “code” that actual people execute. Before Home Depot started standardizing their process, Tony McCully says, “everyone was trying to be agile and there was this very disjointed, fragmented sort of approach to it. You know, I joke that we know we had 40 scrum teams and we were doing it 25 different ways.” Clearly, this is not ideal, and standardizing how your product teams operate is better.

It may seem constricting at first, but setting good defaults leads to good outcomes like Allstate reporting going from 20% developer productivity to over 80%. As someone once quipped: they’re called “best practices” because they are the best practices.

Gardening product teams

First, someone has to define all the applications and services that all those product teams form around. At a small scale, the teams themselves can do this, but as you scale up to thousands of people and hundreds of teams, gathering together a Star Wars-scale Galactic Senate is folly. EAs are well suited to define the teams, often using domain-driven design (DDD) to first find and then form the “domains” that define each team. A DDD analysis can turn quickly into its own crazy wall of boxes and arrows, of course. Hopefully, EAs can keep the lines as helpfully straight as possible.

It’s always spaghetti.

Rather than checking in on how each team is operating, EAs should generally focus on the outcomes these teams have. Following the rule of team autonomy (described elsewhere in this booklet), EAs should regularly check on each team’s outcomes to determine any modifications needed to the team structures. If things are going well, whatever’s going on inside that black box must be working. Otherwise, the team might need help, or you might need to create new teams to keep the focus small enough to be effective.

Gardening microservices

Most cloud native architectures use microservices, hopefully to safely remove the dependencies that can deadlock each team’s progress as they wait for a service to update. At scale, it’s worth defining how microservices work as well. For example: are they event-based? How is data passed between services? How should service failures be handled? How are services versioned?

@pczarkowski asks, “do you even microservice?”
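As one hedged illustration of what those defaults might pin down, here’s a sketch of a common event envelope that answers “how is data passed” and “how are services versioned” in one place, plus a consumer-side default for handling versions it doesn’t understand. The field names and version policy are hypothetical.

```python
# A sketch of microservice defaults: a shared event envelope (data passing
# and versioning) and one agreed failure-handling rule on the consumer side.
# All field names and the version policy are hypothetical.
import json
import uuid
from datetime import datetime, timezone

def make_event(event_type: str, schema_version: str, payload: dict) -> str:
    """Wrap a payload in the organization-wide event envelope."""
    return json.dumps({
        "id": str(uuid.uuid4()),
        "type": event_type,
        "schema_version": schema_version,   # consumers can adapt or reject
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    })

def handle_event(raw: str) -> None:
    """Consumer default: skip schema versions you don't understand
    instead of crashing the whole service."""
    event = json.loads(raw)
    major = event["schema_version"].split(".")[0]
    if major != "1":
        print(f"skipping unsupported schema {event['schema_version']}")
        return
    print(f"processing {event['type']}: {event['payload']}")

if __name__ == "__main__":
    handle_event(make_event("order.created", "1.2", {"order_id": 42}))
    handle_event(make_event("order.created", "2.0", {"order_id": 43}))
```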

Again, a senate of product teams can work at a small scale, but not at galactic scale. EAs clearly have a role in establishing the guidance for how microservices are done and what kind of policy is followed. As ever, this policy shouldn’t be a straitjacket. The era of SOA and ESBs has left the industry suspicious of EAs defining services: those systems became cumbersome and slow moving, not to mention expensive in both time and software licensing. We’ll see if microservices avoid that fate, but keeping the overall system lightweight and nimble is clearly a gardening task that EAs are well suited for.

Platform operations

As we’ll discuss later, at the center of every cloud native organization is a platform. This platform standardizes and centralizes the runtime environment, how software is packaged and deployed, and how it’s managed in production, and otherwise removes the toil and sloppiness of traditional, bespoke enterprise application stacks. Most of the platform case studies I’ve been using, for example, are from organizations using Pivotal Cloud Foundry.

Occasionally, EAs become the product managers for these platforms. The platform embodies the organization’s actual enterprise architecture; evolving the platform thus evolves the architecture. Just as each product team orients its weekly software releases around helping its customers and users, the platform operations team runs the platform as a product.

EAs might also get involved with the tools groups that provide the build pipeline and other shared services and tools. Again, these tools embody part of the overall enterprise architecture, more of the running cogs behind all those boxes and arrows.

As a side-effect of product managing the platform and tools, EAs can establish and enforce governance. The packaging, integration, runtime, and other “opinions” expressed in the platform can be crafted to force policy compliance. That’s a command-and-control way of putting it, though, and you certainly don’t want your platform to be restrictive. Instead, by implementing the best possible service or tool, you get product teams to follow policy and best practices by bribing them with ease of use and toil reduction.
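Here’s a small sketch of that “bribery” in practice, assuming a hypothetical helper the platform team publishes: a product team that uses it gets the required logging setup and health endpoint for free, so the compliant path is also the easiest path. The module and function names are made up, not from Cloud Foundry or any real platform.

```python
# Governance by ease of use: a hypothetical platform-provided helper that
# pre-wires the organization's defaults, so teams comply without trying.
import logging
from flask import Flask

def create_platform_app(name: str) -> Flask:
    """Create a Flask app with the platform's defaults already wired in."""
    # Policy: a standard log format at INFO, and a standard health endpoint.
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
    )
    app = Flask(name)

    @app.route("/health")
    def health():
        return {"status": "UP"}

    return app

# A product team's code: compliance comes along for the ride.
app = create_platform_app("payments")

@app.route("/")
def index():
    return "hello from the paved road"

if __name__ == "__main__":
    app.run(port=8080)
```

The governance lives in the helper and the platform, not in a review meeting.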

It’s the same as always

I’ve highlighted just a few of the areas EAs contribute to in a cloud native organization. There are more, many of which will depend on the peccadilloes of your organization, for example:

  • Identifying and solving sticky cultural change issues is one such situational topic. EAs will often know individuals’ histories and motivations, giving them insight into how to deal with grumps who want to stall change.
  • EA groups are well positioned to track, test, and recommend new technologies and methodologies. This can become an “enterprise astronaut” task of drifting too far afield of actual needs and not understanding what teams need day-to-day, of course. But, coupled with being a product manager for the organization’s platform, scouting out new technologies can be grounded in reality.
  • EAs are well positioned to negotiate with external stakeholders and blockers. For example, as covered later, auditors often end up liking the new, small batch and platform-driven approach to software because it affords more control and consistency. Someone has to work with the auditors to demonstrate this and be prepared to attend the endless meetings that product team members are ill-suited and ill-tempered for.

What I’ve found is that EAs do what they’ve always done. But, as with other roles, EAs are now equipped with better process and technology to do their jobs. They don’t have to be forever struggling eyes in the sky and can actually get to the job of architecting, refactoring, and programming the enterprise architecture. Done well, this architecture becomes a key asset for the organization, often the key asset of IT.

Though he poses it in terms of the CIO’s responsibility, Mark Schwartz describes the goals of enterprise architects well:

The CIO is the enterprise architect and arbitrates the quality of the IT systems in the sense that they promote agility in the future. The systems could be filled with technical debt but, at any given moment, the sum of all the IT systems is an asset and has value in what it enables the company to do in the future. The value is not just in the architecture but also in the people and the processes. It’s an intangible asset that determines the company’s future revenues and costs and the CIO is responsible for ensuring the performance of that asset in the future.

Hopefully the idea of architecting and then actually creating and gardening that enterprise asset is attractive to EAs. In most cases, it is. Like all technical people, they pine for the days when they actually wrote software. Now’s their chance to get back to it.

Check out the video version of this:

This post is an early draft of a chapter in my book, Monolithic Transformation.