Choose your TAM wisely and remember to charge a high price, RethinkDB


[O]ur users clearly thought of us as an open-source developer tools company, because that’s what we really were. Which turned out to be very unfortunate, because the open-source developer tools market is one of the worst markets one could possibly end up in. Thousands of people used RethinkDB, often in business contexts, but most were willing to pay less for the lifetime of usage than the price of a single Starbucks coffee (which is to say, they weren’t willing to pay anything at all). Link

How big is the pie?

Any company selling developer tools needs to figure out the overall market size for what it’s selling. Developers, eager to build tools for themselves (typically, in their mid-to-late 20s, developers work on at least one “framework” project), often fall prey to picking a market that has little to no money and are then dismayed when “there’s no money in it.”

What we’re looking for here is a market category and a way of finding how much money is being spent in it. As a business, you want to grab as much of that money as possible. The first thing you want to do is make sure there’s enough money for you to care. If you’re operating in a market that has only $25m of total, global spend, for example, it’s probably not worth your while.

Defining your market category is important, too, for finding out who your users and buyers are. But first, let’s look at TAM-think: figuring out what the big pie of cash looks like, your Total Addressable Market.

The TAMs on the buffet

If you’re working on developer-oriented tech, there are a few key TAMs to look at.

One interesting TAM for startups in the developer space is a combo one Gartner recently put together that shows public and private PaaS, along with “traditional” application platforms: $7.8bn in 2015. 451 has a similar TAM that combines public and private cloud, at around $10bn in 2020.

I tried to come up with a public and private PaaS TAM – a very, very loose one – last year and sauntered up to something like $20 to $25bn over the next 5-10 years.

There are other TAMs, to be sure, but those are good ones to start with.

Bending a TAM to your will, and future price changes

In each case, you have to be very, very careful because of open source and public cloud. Open source means there’s less to sell upfront and that, likely, you’ll have a hard time suddenly going from charging $0 to $1,000s per unit (a unit is whatever a “seat” or “server” is: you need something to count by!). If you’re delivering your stuff over the public cloud, similar pricing problems arise: people expect it to be really cheap and are, in fact, shocked when it adds up to a high monthly bill.

But briefly: people expect infrastructure software to be free now-a-days. (Not so much applications, which have held onto the notion that they should be paid for, but the low prices in the app store depress their unit prices, too.)

In both cases (open source and public cloud delivery), you’re likely talking a drastically lower unit price. If you don’t increase the overall volume of sales, you’ll whack down your TAM right quick.
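
To make that concrete, here’s a back-of-the-envelope sketch in Python – every number here is made up for illustration – showing how a price drop whacks a TAM unless volume grows to match:

```python
# Hypothetical numbers: what happens to a TAM when open source or
# cloud delivery cuts the unit price but sales volume stays flat.
old_unit_price = 10_000   # $ per server/seat under "legacy" pricing (made up)
new_unit_price = 1_000    # $ per server/seat under open source/cloud pricing (made up)
units_sold = 100_000      # units the whole market buys per year (made up)

old_tam = old_unit_price * units_sold
new_tam = new_unit_price * units_sold
print(f"old TAM: ${old_tam / 1e9:.1f}bn, new TAM: ${new_tam / 1e9:.1f}bn")
# old TAM: $1.0bn, new TAM: $0.1bn

# To hold the TAM steady, volume must grow by the same factor the price dropped:
required_units = old_tam / new_unit_price
print(f"units needed at the new price: {required_units:,.0f} ({required_units / units_sold:.0f}x)")
# units needed at the new price: 1,000,000 (10x)
```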

So, you have to be really, really careful when using backward looking TAMs to judge what your TAM is. Part of the innovation you’re expected to be doing is in pricing, likely making it cheaper.

The effect is that your market share, based on “yesterday’s TAMs,” will look shockingly small. For example, Gartner pegged the collective revenue of NoSQL vendors (Basho, Couchbase, Datastax, MarkLogic, and MongoDB) at $364M in 2015: 1% of the overall TAM of $35.9bn! Meanwhile, the top three Hadoop vendors clocked in at $323.2M, and AWS’s DB estimate was $833.6M.

Pair legacy TAMs with your own bottoms-up TAM

In my experience, the most helpful way of figuring out (really, recomputing) TAMs in “real time” is to look at the revenue that vendors in that space are generating and then to understand what software they’re replacing. That is, in addition to taking analyst TAMs into perspective, you should come up with your own, bottoms-up model and explain how it works.
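
A bottoms-up model doesn’t have to be fancy: it’s a handful of multiplied assumptions that you write down, explain, and defend. Here’s a minimal sketch, where every input is a placeholder to be swapped for your own research, not a real figure:

```python
# A minimal bottoms-up TAM model: target accounts x adoption x units x price.
# Every input below is an assumption to document and defend, not a fact.
target_orgs = 2_000      # say, the Global 2,000 as your buyer pool (assumption)
adoption_rate = 0.30     # share of those orgs you think will buy this category (assumption)
units_per_org = 50       # seats/servers/apps per buying org (assumption)
price_per_unit = 20_000  # $ per unit per year (assumption)

tam = target_orgs * adoption_rate * units_per_org * price_per_unit
print(f"bottoms-up TAM: ${tam / 1e9:.2f}bn/year")
# bottoms-up TAM: $0.60bn/year
```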

If you’re doing IT-led innovation, using existing (if not “legacy”!) TAMs is a bad idea. You’ll likely end up over-estimating your growth and, worse, misjudging which category of software you’re in and who the buyers are. Study your users and your buyers and start modeling from there, not from pivot tables from the Northeast.

The other angle here is that if you’re “revolutionizing” a market category, it means you’re redefining it. This means there will be no TAM for many years. For example, there was no “IaaS” TAM for a long time; at some point, there was no “Java app server” TAM either. In such cases, creating your own TAM is much more useful.

Finally, once you’ve figured out how big (or small!) your pie of money is, adjust your prices accordingly. More than likely you’ll find that you need to charge a higher price than you think is polite…if you want to build a sustainable, revenue-driven business rather than just a good user-aggregation startup to be acquired by a larger company…who’ll be left to sort out how to make money.

Moving beyond the endless debate on bi-modal IT

I get all ants-in-pants about this whole bi-modal discussion because I feel like it’s a lot of energy spent talking about the wrong things.

This came up recently when I was asked about “MVP” in a way that basically said, “our stuff is dangerous [oil drilling], so ‘minimal’ sounds like it’d be less safe.” I tried to focus them on the “V” and figure out what “viable” was for their situation. The goal was to reinforce that the point of all this mode 2/small batch/DevOps/PDCA/cloud native/OODA nonsense is to keep iterating to get to the right code.

Part of the continual consternation around bi-modal IT – sad/awesome mode – is misalignment around that “viability” and scoping on “enterprise” projects. This is just one seam-line where the splits in the discussion become unhelpful.

Bi-strawperson

The awesome mode people are like:

You should divide the work into small chunks that you release to production as soon as possible – DevOps, Agile, MVP, CI/CD – POW! You have no idea what or how you should implement these features, so you need to do it iteratively, cf. projectcartoon.com.

And the sad mode folks are like:

Yes, but we have to implement all this stuff all at once and can’t do it in small slices. Plus, mainframes and ITIL.

Despite often coming off as a sad mode apologist, I don’t even know what the sad mode people are thinking. There’s this process-hugger syndrome that, well, on both sides really, creates strawpeople. The goal of both methods is putting out software that makes users more productive, including having it actually work, and not overpaying for the whole thing.

The Enemy is any activity that doesn’t support those goals; the job is finding it and eliminating it as much as possible. On this front, there was some good scrabbling from the happy mode people laughing at ITSM-think early on, but at this point the sad people have gotten the message, have been reminded of their original goal, and are now trying to adapt. In fact, I think there’s a lot the “sad mode” people could bring to the table.

To play some lexical hopscotch, I don’t think there is a “mode 1.” I think there’s just people doing a less-than-awesome job and hiding behind a process-curtain. Sure, it may not be their choice and, thus, not their fault. “Shitty jobs are being done,” if you prefer the blameless-veil of passive voice.

Fix your shit

When I hear objections to fixing this situation, I try to be nice and helpful. After all, I’m usually there as part of an elaborate process to get money from these folks in exchange for helping them. When they go all Eeyore on me, I have to reframe the room’s thinking a little bit without getting too tough-love-y.

“When I put these lithium batteries in this gas car, it doesn’t seem to work. So electric cars are stupid, right?”

You want to walk people to asking “how do we plan out the transition from The Old Way That Worked At Some Point to The New Way That Sucks Less?” They might object with a sort of “we don’t need to change” or the even more snaggly “change is too hard” counter-point.

I’m not sure there are systems that can just be frozen in place and resist the need to change. One day, in the future, any system (even the IRS’s!) will likely need to change, and if you don’t already have it set up to change easily (awesome mode), you’re going to be in a world of hurt.

The fact that we discuss how hard it is to apply awesome mode to legacy IT is evidence that that moment will come sooner than you think.

(Insert, you know, “where’s my mobile app, Nowakowski?” anecdote of flat-footedness here.)

ITIL end in tears(tm)

The royal books of process, ITIL, are another frequent strawperson that frothy-mouthed agents of change like to light up. Few things are more frustrating than a library of books that cost £100 each. There’s a whole lot in there, and the argument that the vendors screw it all up is certainly appetizing. Something like ITIL, though, even poorly implemented, falls under the “at least it’s an ethos” category.

Climbing the Value Chain

I’m no IT Skeptic or Charles T. Betz, but I did work at BMC once. As with “bi-modal,” I really don’t want to go re-read my ITIL books (I only have the v2 version; can someone spare a few £100s so I can read v3/4?), but I’m pretty sure you could “do DevOps” in an ITIL context. You’d just have to take out the time-consuming implementation of it (service desks, silo’d orgs, etc.).

Most of ITIL could probably be done with the metaphoric (or literal!) post-it notes, retrospectives, and automated audit-log stuff that you’d see in DevOps. For certain, it might be a bunch of process gold-plating, but I’m pretty sure there are no unmovable peas under all those layers of books that would upset any slumbering DevOps princes and princesses too badly.

Indeed, my recollection of ITIL is that it merely specifies that people should talk with each other and avoid doing dumb shit, while trying to improve and make sure they know the purpose/goals of any “service” that’s deployed. They just made a lot of flow charts and checklists to go with it. (And, yeah: vendors! #AmIrightohwaitglasshouse.)

Instead of increasing the volume, help spray away the shit

That gets us back to the people. The meatware is what’s rotting. Most people know they’re sad, and in their objections to happiness, you can find the handholds to start helping:

Yes, awesome mode people, that sounds wonderful, just wonderful. But, I have 5,000 applications here at REALLYSADMODECOGLOBAL, Inc. – I have resources to fix 50 of them this year. YOUR MOVE, CREEP!

Which is to say, awesome mode is awesome: now how do we get started applying it at large organizations that are several fathoms under the seas of sad?

The answer can’t be “all the applications,” because then we’ll just end up with 5,000 different awesome modes (OK, maybe more like 503?) – like, do we all use Jenkins, or CircleCI, or Travis? PCF, Docker, BlueMix, OpenShift, AWS, Heroku, that thing Bob in IT wrote in his spare time, etc.

Thus far, I haven’t seen a lot of commentary on planning out and staging the application of mode 2. Gartner, of course, has advice here. But it’d be great to see more from the awesome mode folks. There’s got to be something more helpful than just “AWESOME ALL THE THINGS!”

Thanks to Bridget for helping draw all this blood out while I was talking with her about the bi-modal piece she contributed to.

Questioning DRY

tl;dr

Recently, I’ve been in conversations where people throw some doubt on DRY. In the cloud native, microservices mode of operating, where independent teams are chugging along mostly decoupled from other teams, duplicating code and functionality tends to come more naturally, even necessarily. And the benefits of DRY (reuse, and reducing the bugs/inconsistency that come from multiple implementations of the same thing), theoretically, are no longer worth more than the effort put into DRYing off.

That’s the theory a handful of people are floating, at least. I have no idea if it’s true. DRY is such an unquestionable tenet of all programming-think that it’s worth tracking its validity as new modes of application development and deployment are hammered out. Catching when old taboos flip to new truths is always handy.
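
To make the trade-off concrete, here’s a toy Python illustration (nobody’s real code): the DRY version shares one implementation across teams, while the duplicated version lets a team’s copy drift at its own pace.

```python
# Toy illustration of the DRY trade-off in a microservices world.

# --- shared_lib/validation.py (DRY: one implementation, many dependents) ---
def is_valid_email(address: str) -> bool:
    """One shared check: a bug fix here fixes every team at once, but no
    team can change it without coordinating a shared-library release."""
    return "@" in address and "." in address.split("@")[-1]

# --- orders service (WET: its own copy, evolved locally) ---
def orders_is_valid_email(address: str) -> bool:
    """The orders team's diverged copy: it also rejects empty local parts,
    a change they could ship without asking anyone."""
    local, _, domain = address.partition("@")
    return bool(local) and "." in domain

# Duplication costs consistency (the two checks now disagree on "@b.co")
# but buys independent release cycles; that coordination cost of DRY is
# exactly what the doubters say no longer pays off.
print(is_valid_email("@b.co"), orders_is_valid_email("@b.co"))  # True False
```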

The Problem with PaaS Market-sizing


Figuring out the market for PaaS has always been difficult. At the moment, I tend to estimate it at $20-25bn sometime in the future (5-10 years from now?) based on the model of converting the existing middleware and application development market. Sizing this market has been something of an annual bugbear for me across my time at Dell doing cloud strategy, at 451 Research covering cloud, and now at Pivotal.

A bias against private PaaS

This number is in contrast to the numbers you usually see from analysts, which are in the single-digit billions. Most analysts think of PaaS only as public PaaS, tracking just Force.com, Heroku, and parts of AWS, Azure, and Google. This is mostly due, I think, to historical reasons: several years ago “private cloud” was seen as goofy and made-up, and I’ve found that many analysts still view it as such. Thus, their models started off as just public PaaS and have largely remained so.

I was once a “public cloud bigot” myself, but having worked more closely with large organizations over the past five years, I now see that much of the spending on PaaS is on private PaaS. Indeed, if you look at the history of Pivotal Cloud Foundry, we didn’t start making major money until we gave customers what they wanted to buy: a private PaaS platform. The current product/market fit for PaaS in large organizations, then, seems to be private PaaS.

(Of course, I’d suggest a wording change: when you end-up running your own PaaS you actually end-up running your own cloud and, thus, end up with a cloud platform.)

How much do you have budgeted?

With this premise – that people want private PaaS – I then look at existing middleware and application development market-sizes. Recently, I’ve collected some figures for that:

  • IDC’s Application Development forecast puts the application development market (which includes ALM tools and platforms) at $24bn in 2015, growing to $30bn in 2019. The commentary notes that the influence of PaaS will drive much growth here.
  • Recently from Ovum: “Ovum forecasts the global spend on middleware software is expected to grow at a compound annual growth rate (CAGR) of 8.8 percent between 2014 and 2019, amounting to $US22.8 billion by end of 2019.”
  • And there’s my old pull from a Goldman Sachs report that pulled from Gartner, where middleware is $24bn in 2015 (that’s from a Dec 2014 forecast).

When dealing with large numbers like this and so much speculation, I prefer ranges. Thus, the PaaS TAM I tend to use now-a-days is something like “it’s going after a $20-25bn market, you know, over the next 5 to 10 years.” That is, the pot of current money PaaS is looking to convert is somewhere in that range. That’s the amount of money organizations are currently willing to spend on this type of thing (middleware and application development), so it’s a good estimate of how much they’ll spend on a new type of this thing (PaaS) to help solve the same problems.
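
Stated as arithmetic, the range is just the to-be-converted markets times an assumed conversion share. Here’s a loose reconstruction in Python using the figures quoted above; the weights are purely my assumptions, not anything from the analyst reports:

```python
# Rough reconstruction of the $20-25bn range. The market sizes come from
# the forecasts quoted above; the conversion weights are pure assumption.
middleware_2019 = 22.8  # $bn, Ovum middleware forecast for 2019
app_dev_2019 = 30.0     # $bn, IDC application development forecast for 2019

# Assume PaaS converts most of middleware but only a slice of appdev
# (much of appdev spend is ALM tooling that a PaaS doesn't replace).
low = 0.80 * middleware_2019 + 0.05 * app_dev_2019
high = 0.90 * middleware_2019 + 0.15 * app_dev_2019
print(f"PaaS TAM range: ${low:.0f}-{high:.0f}bn")
# PaaS TAM range: $20-25bn
```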

Things get slightly dicey depending on whether you include databases, ALM tools, and the underlying virtualization and infrastructure software: some PaaSes include some, none, or all of these in their products. Databases are a huge market (~$40bn), as is virtualization (~$4.5bn). The other ancillary buckets are relatively small. I don’t think “PaaS” eats too much database, but probably some “virtualization.”

So, if you accept that PaaS is both public and private PaaS and that it’s going after the middleware and appdev market, it’s a lot more than a few billion dollars.

(Ironic-clipart from my favorite source, geralt.)

Solving the conundrums of our father’s strategies

So here we are, as of this writing a good twenty-nine years after the “hatchet job,” and Kodak has declared bankruptcy. The once-humming factories are literally being blown up, and the company’s brand, which Interbrand had valued at $14.8 billion in 2001, fell off its list of the top one hundred brands in 2008, with a value of only $3.3 billion. It really bothered me that the future was so visible in 1980 at Kodak, and yet the will to do anything about it did not seem to be there. I asked Gunther recently why, when he saw the shifts coming so clearly, he did not battle harder to convince the company to take more forceful action. He looked at me with some surprise. “He asked me my opinion,” he said, “and I gave it to him. What he did beyond that point was up to him.” Which is entirely characteristic of scientists like Gunther. They may see the future clearly, but are often not interested in or empowered to lead the charge for change. Why do I know this story so well? He happens to be my father. —The End of Competitive Advantage, Rita McGrath.

You don’t get a sudden, personal turn like that in business books much. It evoked one of the latent ideas in my head: much of my interest in “business” and “strategy” comes from dad’s all too typical career at IBM in the 80s and 90s.

Sometime in the early 80s – or even late 70s? – my dad started working at IBM in Austin on the factory floor, printed circuit boards I believe. He’d tell me that he’d work the late shift, third shift, and at 6am stop by 7-11 with his buddies to get a six pack and wait in the parking lot of the Poodle Dog bar for it to open at 8.

He moved up to management, and eventually into planning and forecasting. All for hardware. I remember he got really excited in the late 80s when he got a plotter at home so he could work on foils, that is, transparencies. We call these “slides” now: you can still get that battlefield-twinkle out of old IBM’ers’ eyes if you say “foils.”

Eventually, he lived the dictum of “I’ve Been Moved” and went up to the Research Triangle for a few years, right before IBM divested his part of the company, selling it to Multek (at least he got to return to Austin).

As you can guess, his job changed from a long-term one where his company had baseball fields and family fun days (held where we now have an outdoor mall, The Domain) to the usual transient, productivity-harvesting job. He moved over to Polycom eventually, where he spent the rest of his career helping manage planning and shipping, on late-night phone calls to Thailand manufacturers.

In addition to always having computers around – IBM PCs of course! – there was always this thing of how a large tech company evolves and operates. At the time, I don’t think I paid much attention to it, but it’s a handy reference now that I spend most of my time focused on the business of tech.

Getting collaboration right in Agile & DevOps – Press Pass

I don’t do press passes as much as I did when I was an analyst, but here’s one from a recent email interview for a ProjectsAtWork story:

Q: What’s your favorite tip to improve collaboration when an organization moves to agile and DevOps?

A: I think the core DevOps thing with collaboration is getting people to trust each other. Most corporate cultures are not built on people trusting each other and feeling comfortable: they’re based on competitive, zero-sum structures or command-and-control management, at best.

Organizations that are looking to DevOps for help are likely trying to innovate new software and services and so they have to shift to a mode of operating that encourages collaboration and creativity. Realizing that is a critical step: we want to create and run new software, so we need to understand and become a software producing organization.

In contrast, you operate differently if you’re just driving down costs each quarter and not creating much with IT. We’d counter-argue that if you’re a large organization and you’re not worrying about software, then you’ll be creamed by your competition who is becoming a software organization.

If forced to pick one tip to increase collaboration I would say: do it by starting to work. How you do this is to pick a series of small projects and slowly expand the size of the projects. These projects should be low profile, but have direct customer/revenue impact so that they’re real. It’s important for these projects to be actual applications that people use, not just infrastructure and back-end stuff. It will help the team understand the new way of operating and at the same time help build up momentum and success for company wide transformation later down the road.

As a basic tactic, Andrew Shafer has a fun, effective one: having each person on the team write fantasy press releases about the others, to start to build trust.

(See the full piece by Will Kelly over on the site.)

Dealing with “disposable software” for enterprises

With consumer SaaSes and mobile apps coming and going, I’ve been thinking of the idea of “disposable software”: apps that last a year or so, but aren’t guaranteed to last longer. In the consumer space, there’s rarely been a guarantee that free software will last – that’s part of the “price” you pay for free.

This mentality is getting into business software more and more, however, and I don’t think “enterprises” are prepared for it. Part of the premium you pay for enterprise software should include the guarantee that it will have a longer life-cycle, but it’s worth asking if it does.

Also, it’s good for enterprises to be aware that vendors, particularly open source driven ones, are putting out code that might be “disposable.” The prevailing product management think nowadays encourages experimenting and trying things out: abandoning “failed” experiments and continuing successful ones. Clearly, if you’re a “normal” enterprise, you want to avoid those failed experiments and, at best, properly control and govern your use of them.

Of course, there are trade-offs:

  • With consumer, experiment-driven software, you’re always getting the newest thinking, which might turn out to be a good idea and provide your business with differentiating, “secret sauce”; or it might be a failed experiment that gets canceled.
  • With “enterprise,” stable software you can generally count on it existing and being supported next year; but you’ll often be behind the curve on innovation, meaning you’ll have to layer on the “secret sauce” on your own.

It’s good to engage with both types of strategies; you just have to manage the approach to hedge the risks of each.

The many meanings of “cloud broker”

From coverage of a keynote at Gartner DC:

That new role has less to do with managing disparate bits of infrastructure and more to do with selecting the best infrastructure strategy to provide a specific service. The toolbox they can select from includes on-premise or colocation data centers and cloud – private, public, or hybrid, on-prem or outsourced.

For as long as cloud has been around, the idea of a “cloud broker” has existed. For a while, it meant software (or an “as a Service”) acting as a marketplace, like an App Store, that people would select IT services from.

It can also mean a market where you’re continually buying at the cheapest price, sort of arbitraging between the ever-lowering costs of public clouds, somehow magically moving workloads cheaply and quickly enough between these clouds to save money and time. You’ll hear people say “bursting” a lot here.

Of late I’ve noticed a more normal definition: the act of the IT department serving as a curator, service provider, and accountant for cloud services from vendors. I mean, that’s a large part of what IT has done all along, so it makes sense.

“The role of IT is shifting to become an intermediary between the customer and the data center and the service provider,” Bittman, a Gartner VP and distinguished analyst, said. “The service provider might be you, but it might be Google, or it might be Salesforce. It comes down to delegating responsibility.”

Today, digital business capabilities drive 18 percent of enterprise revenue, Raymond Paquet, a managing VP at Gartner, said. The analysts expect that portion to grow to 25 percent in two years and more than double by 2020, reaching 41 percent.

This last bit is what feels like the more dramatic change: IT being called on to help run the business, not just keep the lights on.

Use agile for speed, not cutting costs – Agile survey from Gartner AADI


I’m at Gartner AADI this year, the first time I’ve been to a Gartner conference. One of the sessions was a read-out of a recent survey about Agile. While a small sample set – “167 IT leaders in 33 countries” – it was screened for people who were familiar with or doing agile of some kind. As with all types of surveys like this, it’s interesting to look at the results both with respect to “what people are doing” and how they’re thinking about what they’re doing. Here’s some slides I took pictures of during the talk:

(Slides: Organization’s profile; Why organizations do agile; Agile methodologies in use; When agile practices were adopted; Agile stumbling blocks.)

My first take-aways are:

  • Well, Scrum is popular.
  • Most of the “stumbling blocks” are, of course, meatware problems: people and “culture.”
  • Pair programming, as always, gets no respect.
  • Organizations want to use agile for speed, not for cutting costs.

Pivotal Cloud Foundry 1.6, getting beyond the blinking cursor into the application layer

(Diagram: the Pivotal Cloud Foundry platform.)

There’s a new release of Pivotal Cloud Foundry out this week. We’ve been seeing great pick-up from customers, and the nature of the conversations I have while visiting them has been changing from operations, IaaS-driven topics to discussions about improving application development and delivery. This release also reflects that shift “up the stack.” Here’s my brief take on how things are going for Pivotal Cloud Foundry.

The most typical path to using Pivotal Cloud Foundry

First, this is how I see most customers arriving at Pivotal Cloud Foundry:

Who does Pivotal see as their toughest competition? According to Watters, that distinction belongs to AWS. Cloud customers often believe that AWS itself is enough. [James] Watters says that there wouldn’t even be the concept of cloud-native apps without Amazon, but “people need more than just Amazon to be successful.” Watters believes that some of Pivotal’s best customers are those who first tried to create platforms themselves, but then asked “what’s the right thing to do for my organization?”

The rest of the piece is a good, brief overview of the new features in Pivotal Cloud Foundry 1.6.

What I see in this release is a movement “up the stack” to address application architecture and development concerns. You can see this in the incorporation of Spring Cloud (which supports, among many other things, a microservices approach), support for .Net (almost every large organization wants and needs this for the way they develop applications), and the numerous integrations with ALM tools (like Cloudbees, GitLabs, etc.).

For many years – and still! – the focus of “cloud” has been on the infrastructure layer: setting up the “operating system” for the cloud, your big datacenter, and everything that results in that magical blinking cursor:

I think of this as the “blinking cursor” problem. You know that softly pulsing cursor: it’s the result of millions—if not billions!—of dollars spent on cloud projects. These “private cloud” projects see companies redoing how their IT department provides infrastructure. They move from physical to virtual management; move from manual ticket processing to self-service, automated provisioning; and after efforts that must have seemed like building all of the furniture for a new IKEA store with just a pocket knife, they might end up with their own cloud. And then, after all of this, they’ve gotten the blinking cursor up! The servers are ready to use! Now the hard work of designing, developing, deploying, and managing the applications that run the business starts. There is little wonder that 95% of folks in [a poll asking “what went wrong with your private cloud project?”] were not completely satisfied with their private cloud projects.

I still see much of the conversation centering on getting the blinking cursor up, and too little on how to create and manage good applications. So, obviously, I like our new positioning “up the stack”: providing not only application-centric services and cloud-ified middleware, but also the operations capabilities needed to keep those applications up and running.

In addition to the actual product, you can see this reflected on the team (the evangelist/advocate/community team) I’m on where we’ve added people who focus on explaining how to do better software development, in addition to the more operations-centric people we started with.

Momentum: customer and ecosystem growth and character

Momentum-wise, I measure Pivotal Cloud Foundry based on customers and the overall Cloud Foundry ecosystem.

Customer-wise, we’ve gone from about $40m in bookings in 2014 to a $100m annual bookings run-rate this year. Those are two slightly different types of numbers, but you can get a feel for the amount of business we’ve been doing and, more important, the high growth and fast traction we’re getting. What I like about our customer base is that they’re everyday, big brands and companies. This not only means I can better explain what I do to my non-tech friends and relatives, but also means we have a sustainable customer base: these Global 2,000 customers aren’t going away anytime soon, esp. if they keep up the strategy that brought them to Pivotal Cloud Foundry: transforming to a software-defined business.

There’s a Cloud Foundry Summit this week in Berlin, and it evidenced the ecosystem momentum around Cloud Foundry, the open source project that Pivotal Cloud Foundry is based on. There are now just north of 50 members. When you look at those logos, notice how many non-tech companies are on there: it’s still mostly tech companies who want to use or extend Cloud Foundry, but there’s a delightful number of non-tech companies who want to support the platform that’s supporting their business. And, of course, the work with Microsoft to support .Net brings that whole ecosystem very close as well. As I mentioned above, many of the everyday organizations I talk with really want .Net support. Another interesting thing to watch is growth in the use of Azure; that’s an option I hear companies exploring a lot now-a-days, and, indeed, as Microsoft said in the press around this release, “[t]he demand for Azure was so high that we already have Fortune 100 customers building their next-generation applications with Pivotal Cloud Foundry on Azure.”

Obviously, working at Pivotal I’m highly biased on all this. Still, I think there’s good evidence that things are panning out. My main hope, as always, is that we can help improve the state of software, globally, and, thus, improve how organizations are operating.

More on Pivotal Cloud Foundry 1.6: