Companies that lose billions have a hard time being successful

How are all these unprofitable companies sustaining high valuations:

Bending reality today has three elements: a vision, fast growth, and financing.

But:

A few firms other than Amazon have defied the odds. Over the past 20 years Las Vegas Sands, a casino firm, Royal Caribbean, a cruise-line company, and Micron Technology, a chip-maker, each lost $1bn or more for two consecutive years and went on to prosper. But the chances of success are slim. Of the current members of the Russell 1000 index, since 1997 only 37 have lost $1bn or more for at least two years in a row. Of these, 21 still lose money.

Source: Schumpeter: Firms that burn up $1bn a year are sexy but statistically doomed

Docker CEO Steve Singh Interview: All About That Migration To Cloud

The single biggest one is the move to public cloud, and this is where Docker is focused today. This is the number one area that we are putting all our investment in. We have this great container platform that allows you to do a lot of things, but just like any company, we need to pick an area of focus and for us, helping customers take legacy apps, moving them to the Docker platform, and allowing them to run on any infrastructure because it’s a hybrid cloud world, does a couple of things — it drives massive savings for customers, typically 50 percent cost reduction in a cost structure, but it also opens up real opportunities for the customer and our partners to innovate within that environment.

Also, this is an insanely good example of a fluffy leather chair conference interview, plus The Channel filter.

More:

Where does the 50 percent savings come from? A few different areas. The biggest is, honestly, in the mass reduction in number of VMs [virtual machines] and that’s not good or bad, it’s just the reality. The other is that there is a massively increased density factor on compute, and so we can put a lot more workloads on a fewer number of servers. If you are a [company like] Nestle, and you are going to take a bunch of information and business systems and move it to the public cloud, doing a one-to-one move is not necessarily all that advantageous.
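The consolidation arithmetic behind claims like this is simple enough to sketch. Here’s a minimal back-of-the-envelope model in Python; every input below is a made-up illustration, not a number from the interview:

```python
# Back-of-the-envelope VM-to-container consolidation math.
# All inputs are hypothetical illustrations, not Docker's (or Nestle's) numbers.

workloads = 400                       # legacy apps to migrate
apps_per_vm = 1                       # classic one-app-per-VM sprawl
apps_per_container_host = 8           # higher density: many containers per host
vm_monthly_cost = 150.0               # $/month per VM
container_host_monthly_cost = 600.0   # $/month per (bigger) container host

vm_bill = (workloads / apps_per_vm) * vm_monthly_cost
container_hosts = -(-workloads // apps_per_container_host)  # ceiling division
container_bill = container_hosts * container_host_monthly_cost

savings = 1 - (container_bill / vm_bill)
print(f"VM bill: ${vm_bill:,.0f}/mo vs. container bill: ${container_bill:,.0f}/mo")
print(f"Savings: {savings:.0%}")  # 50% with these made-up inputs
```

The point isn’t the specific numbers – it’s that the “density factor” (apps per host) is the variable doing almost all the work in that 50 percent claim.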

Partners:

When I joined Docker I had a good conversation with someone over at Microsoft that said ‘I’d love to partner with you.’ His view was, the more people move to Docker, the more business they get on Azure. In fact, for every dollar we generate, he generates $7.

Momentum and the EBIT(A) chase:

We’re growing at 150 percent-plus year over year and expect that to continue for at least another few years. I’m hoping to get to profitability in mid-2019, and that’s important.

Source: Docker CEO Steve Singh On The VMware Relationship, Security, And The Opportunities Around Containers For Partners

More movement usually means more business: the growth imperative

I was CEO of the Wireless Industry Association and I was proud of the job that I did. But the least proud moment of my public policy life was when I opposed the commission’s efforts to allow people to take their phone numbers with them when they switched from, say, AT&T to T-Mobile. [When arguing against this policy], I couldn’t go out and say, “We think it’s a really bad idea because in the current situation consumers are trapped with their carrier and can’t leave us without giving up their phone numbers.” That’s not a real winner. So the argument I made was, “This is going to take money that should be spent on infrastructure and expanding connectivity.” I regret that argument. Saying, “It is going to slow down our incentive to invest,” is everybody’s first line of defense. It’s balderdash. The reason you invest is to get a return. Companies don’t say, “Well, I’m not going to invest because I might trigger some regulations.” Their question is: “Am I going to make a return off of this?” Broadband is a high-margin operation. You can make a return off of it. The facts speak for themselves. Since the Open Internet rule was put in place, broadband investment is up, fiber connections are up, usage of broadband is up, investment in companies that use broadband is up, and revenues in the broadband providers are up, because people are using it more.
Tom Wheeler

There are two good points here:
1. When people have a high churn rate between services, it may be annoying from a “lazy”/predictable perspective, but it means there are more chances to sell old and new things to them when they switch.
2. Complaints from companies that amount to “this new regulation/tax/etc. will make us not want to invest” are largely crap. Companies have to invest in new businesses or they die. Whatever the friction, they’ll figure it out, and if they can’t, they can die so that new players can have a go. Businesses don’t need to be eternal.

WTF is “digital transformation”? Beyond AI and VR for practical, software-driven innovation – My January Register Column

At the top of the year, companies are setting their IT agendas. Most high-level executives seem to be lusting for “digital transformation,” but that phrase is super-squishy. In my Register column this month, I offered my advice on what it should be: simply “digitizing” existing, manual workflows by perfecting how you do software.

This, of course, is the core of what I work on at Pivotal; see my wunderkammer of knowledge, the soon-to-be-PDF’ed “Crafting your cloud native strategy,” for example.

What do these opportunities look like in businesses? Here’s a chunk that was cut out of the piece that provides some examples:

A project to “digitize” the green card replacement program in the US provides a good example of the simple, pragmatic work IT departments should be curating for 2017. Before injecting software into the process, it’d “cost about $400 per application, it took end user fees, it took about six months, and by the end, your paper application had traveled the globe no less than six times. Literally traveled the globe as we mailed the physical papers from processing center to processing center.”

After discovering agile and cleaning up the absurd government contracting scoping (a seven-year project costing $1.2bn, before accounting for the inevitable schedule and budget overruns), a team of five people successfully tackled this paper-driven, human process. It’s easy to poke fun at government institutions, but if you’ve applied for a mortgage, life insurance, or even tried to order takeout food from the corner burger-hut, you’ll have encountered plenty of human-driven processes that could easily be automated with software.

After talking with numerous large organizations about their IT challenges, to me, this kind of example is what “digital transformation” should mostly be about, not introducing brain-exploding, Minority Report-style innovation. And why not? McKinsey recently estimated that, at best, only 29% of a worker’s day-to-day work requires creativity. Much of that remaining 71% is likely just paid-for monotony that could be automated with some good software slotted into place.

That last figure is handy for thinking about the opportunity. You can call it “automation” and freak out about job stealing, but it looks like a huge percentage of work can be “digitized.”

Check out the full piece.

Making new products out of nonconsumption

“Too often, organizations are myopic. They only look for growth in the customer base they already serve. But by looking for nonconsumers and exploring what they are trying to accomplish — rather than focusing on their personal characteristics, purchasing patterns, or product preferences — organizations can discover the potential for new growth.”

Source: The Power of Designing Products for Customers You Don’t Have Yet

Deciding where the Docker ecosystem will make money

The Docker forking hoopla is providing an interesting example, in real time, of how open communities figure out monetization.

#RealTalk: Open communities are not immune to C.R.E.A.M.

One of the most important decisions an open source community makes is where and how it will make money. I always liked Eclipse’s take because they’re mega clear on this topic; the ASF plays this goofy game where they try really hard to pretend they don’t need to answer the question, which itself is an answer, resulting in only the occasional quagmire; Linux has a weird situation where Red Hat figured out This One Cool Trick to circumvent the anti-commercial leanings of the GPL; MySQL has a weird dual licensing model that I still don’t fully grasp the strategic implications of; RIP Sun.

Standards play another defining role when it comes to monetization. Think of Java/J(2)EE, vs. .Net, vs. PHP (a standard-less standard?), vs. HTML and WS-*, vs. the IETF/ISOC RFC-scape that defines how the internet works. While not always, by far, standards are often used tactically to lessen the commercial value (or zero it out completely) of any given component “lower” in the stack, pushing the money “up” the stack to the software that implements, uses, or manages the standard. Think of how HTML itself is “of no value” (and was strategically pushed that way early on), but the entire SaaS market is something like a $37.7bn market, part of the overall $90.3bn that, arguably, uses HTML as one of the core technologies in the stack, at the UI layer (along with native mobile apps now).

How the open source, the standards, and the closed source around them get defined, and who “controls” each, is one of the key strategic processes in infrastructure software.

The Docker ecosystem is sorting out monetization

Right now, you can see this process in action in the Docker ecosystem. Product management decisions at Docker, Inc. are forcing the community to wrestle with how ecosystem members will make money, including Docker Inc. itself.

By “ecosystem,” I mean “all the people and companies that are involved in coding up Docker and/or selling Docker-based products and services.” Actual end-users play a role, of course, but historically don’t have as much power as we’d like at this stage of an open community’s formation.

End-users have to vote with their feet and, if they have them, their wallets – whether wearing expensive loafers (enterprise) or cracked sandals (paying with nothing but the pride of ubiquity) – which, by definition, is hard to do until a monetization strategy is figured out, or abandoned altogether.

Looking just at the “vendors,” then, each ecosystem member is trying to define which layers of the “stack” will be open, and thus free, and which layers will be closed, and thus charged for. Intermixed with this line drawing is determining who has control over features and standards (at which level) and, as a result, the creation of viable business models around Docker.

Naturally, Docker, Inc. wants as big a slice of that pie as possible. The creator of any open technology has to spend a lot of nail-biting time essentially deciding how much money and market-share it wants to give up to others, even competitors. “What’s in it for me?” other vendors in the ecosystem are asking…and Docker Inc.’s answer is usually either some strategic shoe-gazing or, pretty straightforwardly, the reply “less than you’d like.”

As a side note, while I don’t follow Docker, Inc. as an analyst anymore (so I’m not mega up-to-date), it seems like the company consistently puts the end-users first. They’re looking to play the Tron role in this ecosystem most valiantly. This role doesn’t really conflict at all with elbowing for the biggest slice of the pie.

[Chart: vendors focused on deployment platforms, orchestration, and developer tools]
From The New Stack’s Docker & Container Ecosystem research

Similar to Docker Inc.’s incentives to maintain as much control as possible, the “not-Docker, Inc.” members of the ecosystem want to commoditize/open the lower levels of the stack (the “core”) and keep the upper layers as the point of monetization. This is the easiest, probably most consistently successful business model for infrastructure software: sell proprietary software that manages the “lower,” usually low-cost-to-free, layers in the stack. From this perspective, not-Docker, Inc. members want to fence in the core Docker engine and app packaging scheme as the “atomic unit” of the Docker ecosystem. Then, the not-Docker, Inc.’s want to keep the management layer above that atomic unit for themselves to commercialize (here “orchestration,” configuration management, and the usual systems management stuff). But, of course, Docker Inc. is all like “nope! That’s my bag o’ cash.”

As explained by one of those ecosystem vendors, who works at Red Hat:

And while I personally consider the orchestration layer the key to the container paradigm, the right approach here is to keep the orchestration separate from the core container runtime standardization. This avoids conflicts between different layers of the container runtime: we can agree on the common container package format, transport, and execution model without limiting choice between e.g. Kubernetes, Mesos, Swarm.
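To make that layering concrete: the standardized “atomic unit” is the image – its package format, registry transport, and execution model – and anything can drive it. Here’s a minimal sketch using the docker-py SDK (the image name and port are arbitrary placeholders):

```python
# Minimal sketch of the "atomic unit" idea with the docker-py SDK
# (pip install docker). The image is the standardized layer; whether
# this script, Swarm, Kubernetes, or Mesos schedules containers from
# it is the separate, competitive orchestration layer above.
import docker

client = docker.from_env()

# Transport: pull the standardized package from a registry.
client.images.pull("nginx", tag="latest")

# Execution: run it. An orchestrator makes essentially this same call,
# just across many hosts, with scheduling policy layered on top.
container = client.containers.run("nginx:latest", detach=True,
                                  ports={"80/tcp": 8080})
print(container.short_id)

container.stop()
container.remove()
```

Everyone can agree on the bottom half of that sketch; the fight is over who owns the loop that calls it at scale.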

We saw similar dynamics – though by no means open source – in the virtualization market. VMware started with the atomic unit of the hypervisor (remember when we were obsessed with that component in the stack and people used that word a lot?), allowing the ecosystem to build out management on-top of that “lower” unit.

Then, as VMware looked to grow its TAM, revenue, and, thus, share price and market-cap, it expanded upward into management. At this point, VMware is, more or less, the complete suite (or “solution,” as we used to call it) of software you need for virtualization. E.g., they use phrases like “Software Defined Datacenter” rather than “virtualization,” indicative of the intended full scope of their product strategy. (I’m no storage expert, but I think storage, and maybe networking, are the last things VMware hasn’t “won” hands down.)

“What, you don’t like money?”

From one of Donnie’s recent presentations.

All of this is important because over the next 10-15 years, we’re talking about a lot of money. The market window for “virtualization” is open, and wildcatters are sniffing at the wafting smell of the money flitting through. Well, unless AWS and Azure just snatch it all up, or the likes of Google decide to zero out the market.

We used to debate the VMware to Docker Inc. comparison and competitive angle a lot. There was some odious reaction to the idea that Docker Inc. was all about slipping in and taking over VMware’s C.R.E.A.M. At one point, that was plausible from a criss-cross applesauce state of the market, but now it’s pretty clear that, at least from an i-banker spreadsheet’s perspective, VMware’s TAM is the number you’re doinking around with.

Figuring out that TAM and market size gives you a model for any given ecosystem member’s potential take over the next 10 years. That’s a tricky exercise, though, because the technology stack and market are being re-defined. You’ve got the core virtualization and container technology, then the management layer, and, depending on whether you’re one of the mega-tech vendors that does software and hardware, you’ve got actual server, storage, and networking revenue that’s dragged along by new spend on “containers,” and then you’ve got the bogie of whatever the “PaaS-that-we-shall-not-call-PaaS” market becomes (disclaimer: that’s the one I work in, care a great deal about, am heavily incentivized to see win, and am rooting for – roll in the bias droids!).
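If you want to doink around with the model yourself, its shape is simple even if every input is a guess. A toy sketch, with all figures invented purely for illustration:

```python
# Toy, layered TAM model. Every figure is a hypothetical placeholder --
# the point is the structure (layers x share x horizon), not the numbers.
layer_annual_tam_bn = {
    "core virtualization/containers": 5.0,
    "management/orchestration": 8.0,
    "dragged hardware (server/storage/network)": 12.0,
    "PaaS-that-we-shall-not-call-PaaS": 10.0,
}

# One hypothetical ecosystem member's share of each layer.
vendor_share = {
    "core virtualization/containers": 0.05,
    "management/orchestration": 0.15,
    "dragged hardware (server/storage/network)": 0.00,
    "PaaS-that-we-shall-not-call-PaaS": 0.10,
}

years = 10
take_bn = sum(tam * vendor_share[layer]
              for layer, tam in layer_annual_tam_bn.items()) * years
print(f"10-year take: ${take_bn:,.1f}bn")  # $24.5bn with these made-up inputs
```

The zero-sum fight described below is over the share column: every point Docker Inc. wins in the management layer comes out of somebody else’s row.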

I skipped figuring out the market size last year when I tried to round up the Docker market. Needless to say, I’d describe it as “fucking-big-so-stop-asking-questions-and-ride-the-God-damn-rocket.”

Looking at it from a “that giant sucking sound” perspective, almost all of the members in the Docker ecosystem will be in a zero-sum position if Docker Inc moves into, and wins, the upper management layers. Hence, you see them fighting tooth-and-nail to make sure Docker Inc is, from their perspective, kept in its place.

To be effective, strategy needs living context

[T]he art of strategy based upon situational awareness remains one of those topics which are barely covered in business literature. The overwhelming majority depends upon alchemist tools such as storytelling, meme copying and magic frameworks like SWOTs. It is slowly changing though and every day I come across encouraging signs.

Other than the “why don’t you tell me how you really think” tone there at the end (hey, I clearly see nothing wrong with that kind of dismissive style), that fits my experience working on strategy.

Your strategy team is forced to freeze time at the launch of the process, looking at their industry as unchanging (value chain diagrams, anyone?). As most strategy work takes 3-6 months at best (if not a year to a year and a half, to fit into the corporate budget cycle and then get through the New Year’s hangover: no one really starts working again until February, then the business units have to plan, allocate budget, and execute), you’re behind: you’re looking at an understanding of the world that’s around a year out of date.

Worse than this, strategy teams are rarely given the tools (time, money, authority, and staff) to actually test out any theories, let alone learn from and adapt to the results of those tests. There’s no room for OODA/PDCA/lean startup, small batch thinking.

Centralized strategy in a large company is weird in how unhelpful it can be for industries that are constantly changing or threatened by competitors. Like so many other corporate functions, the fix looks to be shortening the cycle time and getting as close to the actual work and customers as possible.

That’s a long way from the drab cubes of strategy drones and the luxurious double cubes with round tables of their bosses.

Source: What makes a map?

Mature Software Is Hard: HPE Looking to Divest?

Rumors are HPE is looking to sell off some older software assets: Autonomy, Mercury, and Vertica. Acquisition prices from Bloomberg:

  • Autonomy: $10.3bn in 2011
  • Mercury: $4.5bn in 2006
  • Vertica: ~$350m in 2011

It’s that bugbear cloud, as James over at RedMonk said back in June in his report on the company’s big conference:

Make no mistake – Cloud is a forcing factor for pretty much all of the issues facing incumbent enterprise suppliers today. Cloud is putting pressure on all enterprise software markets – applications, hardware, networking, security, services, software, storage etc.

That said, I’d theorize that these are all reliable businesses with reliable customer bases. Their revenue may be declining and they may not be all “SaaS-y,” but for the right price PE firms could probably do alright.


Thinking wrong about knowledge workers screws up their productivity

…or: “Knowledge work is a lot more like cloud than traditional IT.”

Of course, it is most certainly not in the interest of knowledge workers to go to their bosses and declare that they have “spare capacity.” At best, they might then be judged in performance reviews as having an easy job and being not very productive. At worst, the bosses might decide that these employees could be cut. Thus it is to every knowledge worker’s benefit to look busy all the time. There is always a report to write, a memo to generate, a consultation to run, a new idea to explore. And it is in support of this perceived survival imperative that the second driver of productivity—knowledge transfer—gets perverted.

The rest of the piece is good stuff. Notice how much of the thinking follows the same opex-vs.-capex pattern as cloud, along with the somewhat similar notions of continuous delivery. I’d also add that if you follow a small batch approach (smaller amounts of work delivered more frequently, rather than big projects delivered once), you’re given more opportunity to re-allocate your “knowledge workers” to different projects. As the author points out, this means you have to rejigger how HR/roles and responsibilities work; staff policies don’t currently favor moving people from project to project like you see in (management) consulting.

Couple this with the “you need to constantly be coming up with new businesses” pressure from Transient Advantage, and you have a good operating theory.

Solving the conundrums of our fathers’ strategies

So here we are, as of this writing a good twenty-nine years after the “hatchet job,” and Kodak has declared bankruptcy. The once-humming factories are literally being blown up, and the company’s brand, which Interbrand had valued at $14.8 billion in 2001, fell off its list of the top one hundred brands in 2008, with a value of only $3.3 billion. It really bothered me that the future was so visible in 1980 at Kodak, and yet the will to do anything about it did not seem to be there. I asked Gunther recently why, when he saw the shifts coming so clearly, he did not battle harder to convince the company to take more forceful action. He looked at me with some surprise. “He asked me my opinion,” he said, “and I gave it to him. What he did beyond that point was up to him.” Which is entirely characteristic of scientists like Gunther. They may see the future clearly, but are often not interested in or empowered to lead the charge for change. Why do I know this story so well? He happens to be my father. —The End of Competitive Advantage, Rita McGrath.

You don’t get a sudden, personal turn like that in business books much. It evoked one of the latent ideas in my head: much of my interest in “business” and “strategy” comes from my dad’s all-too-typical career at IBM in the 80s and 90s.

Sometime in the early 80s – or even late 70s? – my dad started working at IBM in Austin on the factory floor, making printed circuit boards, I believe. He’d tell me that he’d work the late shift, third shift, and at 6am, stop by 7-11 with his buddies to get a six-pack and wait in the parking lot of the Poodle Dog bar for it to open at 8.

He moved up to management, and eventually into planning and forecasting. All for hardware. I remember he got really excited in the late 80s when he got a plotter at home so he could work on foils, that is, transparencies. We call these “slides” now: you can still get that battlefield-twinkle out of old IBM’ers’ eyes if you say “foils.”

Eventually, he lived the dictum of “I’ve Been Moved” and went up to the Research Triangle for a few years, right before IBM divested his part of the company, selling it to Multek (at least he got to return to Austin).

As you can guess, his job changed from a long-term one where his company had baseball fields and family fun days (where we now have an outdoor mall, The Domain) to the usual transient, productivity-harvesting job. He eventually moved over to Polycom, where he spent the rest of his career helping manage planning and shipping, on late-night phone calls to manufacturers in Thailand.

In addition to always having computers around – IBM PCs, of course! – there was always this background sense of how a large tech company evolves and operates. At the time, I don’t think I paid much attention to it, but it’s a handy reference now that I spend most of my time focused on the business of tech.