Scaling DevOps in large organizations – My April Register column


My column at The Register this month is on scaling DevOps/cloud-native teams to the entire organization. It’s easy to build one team that does software in a new and exciting way, but how do you move to two teams, five teams, and then hundreds? It’s an amalgamation of a few case studies and, per usual, plenty of over-the-top gonzo analogies.

Check it out, and check out past ones if you’re curious for more.

Are tech H-1B visas actually that big of a deal? How do we even evaluate the question?



Over the decades, the number of H-1B workers allowed into the US each year has grown. With the 1998 update, the visa cap lifted to 115,000. In 2000, the limit was boosted again, this time up to 195,000. That year, the law was also tweaked so that renewals no longer counted toward the cap. In 2004, the cap was reset to 65,000, but an exemption was added for 20,000 students graduating from US institutions with master’s degrees. Exemptions were also added for workers affiliated with academic institutions, which can include schools and teaching hospitals. According to Ron Hira, a professor of Public Policy at Howard University who has studied the H-1B issue and testified about it before the Senate, the actual number of visas handed out each year has been around 135,000 over the last five years. Link

There’s a good rant on the relative importance of all of this in last week’s Political Gabfest. While we “on-shore” workers in the tech industry may see that 135,000 as a threat to our cashflow, it’s a drop in the bucket of employment in America. As Adam Davidson argues well, therefore, worrying about H-1B visas should be pretty low on the list of ways to set up more people with good jobs:

The question of H-1B visas has rhetorical importance far beyond its actual economic relevance. The unemployment rate for computer and mathematical occupations is, currently, 2.1 per cent. This is what economists consider full employment, meaning that pretty much everyone who wants a job has a job or is in a brief hiatus between positions. The number of jobs in those fields is growing fast—by about twelve per cent a year—and the number of qualified workers is not growing enough to catch up. In short, the plight of computer professionals is on few people’s list of urgent concerns…. According to the Bureau of Labor Statistics, ten thousand computer professionals start a new job every working day. In this context, the eighty-five thousand foreigners given H-1B visas each year represent little more than statistical noise.

He goes off on a political jag after this, saying that the H-1B discussion is a proxy for “fear of brown people,” which certainly has appeal to leftist people like myself. There’s a business question here, too, though: are H-1B visas a good idea and why? Are they ethical and effective?

What types of jobs?

Also, some interesting analysis of the types of jobs H-1B visas are used for, mostly jobs at outsourcing firms:

But it’s how H-1B visas are being used by applicants that’s really changed. Data from the 2016 batch of H-1B petitions show that the top 10 sponsors of H-1B visa workers in the US are all corporations with large outsourcing businesses: Indian companies like Infosys, Tata, and Wipro, which pioneered the business, and US-based firms like IBM, Accenture, and Cognizant, which saw the success of the Indian contractors and began offering their own competing outsourcing programs. Those 10 firms have more workers currently employed through the program than the next 90 companies combined, a group that includes all of America’s largest tech companies and banks.

So, the discussion about H-1B visas in tech is, by bulk, about the 60,000+ jobs in IT outsourcing. This is in addition to the estimated 1.7m off-shore jobs in outsourcing that already exist.

In theory, most of these are “lower value” jobs where you’re more operating IT (help-desks, managing the daily operations of enterprise applications) rather than creating it (like programming). Anecdotally, there’s still programming running around in there, esp. when it comes to modernizing applications. The going theory is that you can’t just slot in workers on higher-value IT work like writing custom software.

How do you think about all this?

There’s an odd ethical-versus-business-sense argument scurrying about as well that I’ve never seen addressed. One, you’d seem to be happy that the H-1B visa worker is getting work: by the nature of accepting the job and uprooting themselves, it must be good for them, or at least better than the other alternatives. Also, if it’s actually cheaper to get the same services/output from an H-1B visa worker, why would you pay more for a “native” worker? On the other hand, it’s equally confusing to figure out what companies “owe” the workers they’re firing in favor of the H-1B visa workers.

Tech companies like to skirt all that by talking about how “we have to hire from a global pool,” which is fine if you’re hiring an individual with unique skills. However, the divide between outsourcing firms and tech companies suggests that the bulk of H-1B visa hires in tech are not for the super-unique AI experts who may not live on-shore. Then again, it’s insulting to even think that: why do I value one set of people over another in any context?

Businesses say they’re not satisfied

However we figure out how to talk about this, it’s clear from surveys that companies are dodgy on the value of outsourcing. As I wrote when summarizing some HfS work recently:

Outsourcers too often do exactly what the contract (from five to ten years ago) says instead of helping you innovate and keep the business growing. It’s little wonder that in a recent study, more than 75% of senior executives said they want to replace their legacy outsourcers because those providers are so unwilling to change to new models.


If we take Adam Davidson’s perspective, it’s not really even a problem worth thinking about (versus all the other hair-on-fire issues we have). However, when it comes to outsourcing (which I’ve shifted to because so many H-1B visa workers end up at outsourcing firms), it’s clear that we could be doing much better.

Spanning goes private, what might happen next?

Long ago, Spanning Sync was the only viable way to synchronize your Gmail calendar and contacts with the (then) OS X iCal and Address Book. It was great! I also know one of the original founders, Charlie Wood, and we’d talk from time to time about the growing company. At some point, it became a Google Apps (now “G Suite”) back-up service that had a clever value prop: cloud storage, sure, but it’s not redundant, you know; you gotta do the basics.

Anyhow, I always kept a close eye on the company. It was a little odd to see EMC buy them back in 2014: as VMware demonstrated with its Dropbox-competitor products years ago, as Apple is pretty goofy at here, and as even Google has demonstrated over the years, large software companies are pretty bad at long-term plays for individual software; Microsoft is of course an exception with Office and sort of proves the rule.

We’ll see what Insight Venture Partners does with them. I’m guessing if you just left Spanning alone, more or less, it’d turn into a cash machine at some point. That said, I don’t think Dropbox and Box are exactly profitable. Here’s Box’s last four financial years:

…but it seems like a back-up service could control costs better and do a lot less marketing: Box and Dropbox have been acquiring companies and re-positioning themselves as they go from just cloud storage to something like “sort of Office, but not really, but maybe – or like Trello… er… let’s acquire another company and go to a conference where we have wooden floors and free espresso in the booth and think about this at next year’s company retreat in Italy.” (I KID! I KID!)

Spanning Momentum

Here’s some Spanning momentum from one of the write-ups:

Spanning has seen 70 percent year-over-year revenue growth and more than 7,000 customers, according to a press release. It restored around 18 million items for customers in 2016, and expects to continue growth with its global data center expansion, and distribution agreements with major channel partners.

A wet-finger-in-the-wind business case

It’s hard to quickly find pricing for Spanning on their page (smells like enterprise software!), but a few searches, particularly from Spiceworks, say it’s something like $35 a month.

There are certainly discounts for some of those customers, but let’s say revenue would max out at $2,940,000 a year (7,000 customers × $35/month × 12), down to something like $1.5m on the low end if you do all sorts of discounting on clusters of users.

Now, 70% y/y growth is pretty impressive, but not too insane for a relatively new offering. Let’s say they do that two more years and then it goes down to like 30 or 40% for any length of out-years we care about.

Then, let’s just take a swag at storage costs. Who knows if they use S3, but let’s assume they can get down to similar pricing; we’ll take S3’s mid-tier: $0.0125/GB/month. My work Google Drive says it’s 22 GB, but I save a lot more stuff than most people do. Let’s just go with 20 GB as an average. Then let’s assume you at least duplicate it, so you’re paying for 40 GB a month (across two cloud zones), which is $6/year. (Let’s ignore networking transfer charges – adding that in is left as an exercise for the reader!)
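If you want to double-check that swag (or swap in your own numbers), here it is as a few lines of Python. Every input is one of my guesses above, not anything from Spanning:

```python
# Back-of-the-envelope storage cost per customer (all assumed numbers).
GB_PER_CUSTOMER = 20          # guessed average data set per customer
REPLICAS = 2                  # duplicated across two cloud zones
PRICE_PER_GB_MONTH = 0.0125   # S3 mid-tier-ish pricing

monthly = GB_PER_CUSTOMER * REPLICAS * PRICE_PER_GB_MONTH
print(f"${monthly:.2f}/month, ${monthly * 12:.2f}/year per customer")
# -> $0.50/month, $6.00/year per customer
```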

Then you need all the meat-sacks. You could probably get by with 6 to 12 product staff (programmers, product manager – you probably outsource design at this point as needed).

You need the CEO, HR, CFO, and probably 1-2 people to work for them (6 people max); you could probably cut out HR depending on how Insight likes to run HR (outsourced or pooled across companies). Maybe the CFO, but probably not.

I’m no enterprise SaaS business expert, but I’m guessing it’s marketing and sales heavy, so:

Then you need probably 2-3 people in marketing (if you were slick, you could outsource a lot of this, esp. for something as easy to understand as “backup”); 5-10 face-to-face enterprise hustlers; and let’s say a team of 5 “inside/web” sales people who send all those annoying “Re: catching up. I see you read our white paper on BACKUP. Would you like to talk more? Are you the right person at your organization?” emails. So, max 18.

That’s around 36 people, which seems really low to me. But, if you were, I don’t know, a private equity firm, you’d probably think that was OK, if not a little heavy for a company that basically just copies files from one place to the other (yes, I’m being MBA-fatuous).

Without getting a spreadsheet to do some clustering, doing salary cost across such a diverse set is hard. Many of them are in Austin (I assume, still), so let’s just go with $150,000 all-in per head (I’m sure the admin staff and your “strategic account” sales people get paid well plus extra comp, and the more senior tech staff get paid more). So, that’s something like $5,400,000 in people expenses. Then there’s going to conferences, probably a large ad budget, that nice office they have in downtown Austin (which I think is an EMC office, so they’ll get the boot?) which means buying a lot of organic beef-jerky and craft beer, etc., then there’s flying those 5-10 enterprise hustlers around and their $70-100 a day per diems, plus wining and dining. Let’s just throw in another million and go to $6.5m.

So, with some mumbo jumbo business casing (I grow revenue by 70% for two years, then level it off to 30% for the last two years; I grow staff up to 60 people max), you have something like this:

[Screenshot: the wet-finger-in-the-wind business case spreadsheet, 2017-04-23]
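If you’d rather poke at the assumptions in code than in a spreadsheet, here’s a toy Python version of the same model. Every number is one of my guesses from above, not Spanning’s actual books:

```python
# A rough sketch of the mumbo jumbo business case: all inputs are guesses.
customers = 7_000
price_per_customer_month = 35.0       # Spiceworks-ish list price, pre-discount
storage_per_customer_year = 6.0       # from the S3 swag above
headcount, cost_per_head = 36, 150_000
other_costs = 1_000_000               # conferences, ads, per diems, beef jerky

growth = [0.70, 0.70, 0.30, 0.30]     # 70% y/y for two years, then 30%
for year, g in enumerate(growth, start=1):
    customers = int(customers * (1 + g))
    headcount = min(60, int(headcount * (1 + g / 2)))   # grow staff slower, cap at 60
    revenue = customers * price_per_customer_month * 12
    costs = (headcount * cost_per_head
             + customers * storage_per_customer_year
             + other_costs)
    print(f"Year {year}: {customers:>6} customers, "
          f"revenue ${revenue:>12,.0f}, costs ${costs:>12,.0f}, "
          f"profit ${revenue - costs:>12,.0f}")
```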

Those storage costs look insanely off. And from their press release, they claim to have actual data centers (probably co-lo’d racks that are, at best, caged for compliance reasons, far from “having data centers”), which sounds like building your own, which might actually be as cheap as cloud, or slightly more expensive.

Who knows. Cloud storage is insanely cheap, so maybe that figure isn’t so bonkers. Of course, you need networking transfer charges, etc. So, double, even quadruple the cost if you care to: still “nothing,” relative to the other numbers.

With this kind of Sunday-morning armchair analysis, there’s no end of flaws. Like, I should have found the comparable costs, growth, TAM, and staffing for Box, Backblaze, etc., and even made sure I actually understand Spanning’s business model, but: ¯\_(ツ)_/¯

Over a few years, that’s a pretty small gap to close to be profitable, and there are a lot of things to play with in the spreadsheet (can we fire most all the sales and marketing people and go pure channel, hiring up a biz dev team of 2-3 people to land 5 or so key channel partners?).

It’s probably even easier to bundle up the company for sale to another large company after a few years. Someone like Microsoft or Salesforce might even want them to add that functionality to their own products, or any company that’s concerned about filling in its “enterprise SaaS” strategy gaps.

I’ve always liked Spanning (RIP, Sync). I hope it works out well!

Advice on introducing DevOps from Merrill Corp & SPS Commerce – Highlights

Nicely moderated by Bridget. Some of my notes and highlights:

  • Amy talks about pace of change, sustaining it in the beginning, etc.
    • The amount of time it took us to get going was a surprise – it took longer than expected.
    • If you can start to show results early, it helps build up momentum. “Having enough wins, like that, really helped us to keep the momentum going while we were having a culture change like DevOps.”
    • It takes the right people to keep that energy going, but also to be able to go back to the business to show why we are putting these changes in place.
    • You’re going to be able to see the changes to the business right away.
  • Peg – tools, don’t try to fix the old ones, like ITIL service desk tools. Instead we just had Jenkins open tickets and such, automating the toil of dealing with old tools
  • Global/offshore tactics, from Amy:
    • What with all the retrospective stuff, you need to be able to get teams together, physically. The collaboration angles are much better in person
    • Set up each “shore” as an architectural and management island; make them as independent as possible. They also need their own context, not held up by time zones, so they don’t need to wait 24-48 hours for authorizations and collaboration. [To my mind, this means taking advantage of the organizational de-coupling you can get with microservices.]
  • Starting change, even when the company needs it. Amy: You have to start with the business need, what’s the big driver behind a change like DevOps. [Managers often don’t make sure they figure this out, let alone disseminate it to staff.]

We’re getting exactly the government IT we asked for

If there’s one complaint that I hear consistently in my studies of IT in large organizations, it’s that government IT, as traditionally practiced, is fucked. Compared to the private sector, the amount of paperwork, the role of contractors, and the seeming separation between doing a good job and working software drives all sorts of angst and failure.

Mark Schwartz’s book on figuring out “business value” in IT is turning out to be pretty amazing and refreshing, especially on the topic of government IT. He’s put together one of the better “these aren’t the Droids you’re looking for” responses to ROI for IT.

You know that answer: you just want to figure out the business case, ROI, or whatever numbers-driven thing, and all the DevOps-heads are like “doo, doo, doo-doo – driving through a tunnel, can’t hear you!” and then they pelt you with Goldratt and Deming books, blended in with some O’Reilly books and The Phoenix Project. “Also, your argument is invalid, because reasons.”

A Zen-like calm comes over them, they close their eyes and breathe in, and then start repeating a mantra like some cowl-bedecked character in a Lovecraft story: “survival is not mandatory. Survival is not mandatory. Survival is not mandatory!”

Real helpful, that lot. I kid, I jest. The point of their maniacally confusing non-answers is, accurately, that your base assumptions about most everything are wrong, so before we can even approach something as precise as ROI, we need to really re-think what you’re doing. (And also, you do a lot of dumb shit, so let’s work on that.)

But you know, no one wants to hear they’re broken in the first therapy session. So you have to throw out some beguiling, mind-altering Lemarchand’s boxes to change the state of things and make sure they come to the next appointment.

Works as Designed

Anyhow, back to Schwartz’s book. I’ll hopefully write a longer book review over at The New Stack when I’m done with it, but this one passage is an excellent representation of what motivates the book pelters and also a good unmasking of why things are the way they are…because we asked for them to be so:

The US government is based on a system of “checks and balances”—in other words, a system of distrust. The great freedom enjoyed by the press, especially in reporting on the actions of the government, is another indication of the public’s lack of trust in the government. As a result, you find that the government places a high value on transparency. While companies can keep secrets, government is accountable to the public and must disclose its actions and decisions. There is a business need for continued demonstrations of trustworthiness, or we might as well say a business value assigned to demonstrating trustworthiness. You find that the government is always in the public eye—the press is always reporting on government actions, and the public is quick to outrage. Government agencies, therefore, place a business value on “optics”—how something appears to the observant public. In an oversight environment that is quick to assign blame, government is highly risk averse (i.e., it places high business value on things that mitigate risk).

And then summarized as:

…the compliance requirements are not an obstacle, but rather an expression of a deeper business need that the team must still address.

Which is to say: you wanted this, and so I am giving it to you.

The Agile Bureaucracy

The word “bureaucracy” is something like the word “legacy.” You only describe something as legacy software when you don’t like the software. Otherwise, you just call it your software. Similarly, as Schwartz outlines, agile (and all software processes) are insanely bureaucratic, full of rules, norms, and other “governance.” We just happen to like all those rules, so we don’t think of them as bureaucracy. As he writes:

While disavowing rules, the Agile community is actually full of them. This is understandable, because rules are a way of bringing what is considered best practices into everyday processes. What would happen if we made exceptions to our rules—for instance, if we entertained the request: “John wants to head out for a beer now, instead of fixing the problem that he just introduced into the build?” If we applied the rules capriciously or based on our feelings, they would lose some of their effectiveness, right? That is precisely what we mean by sine ira et studio in bureaucracy. Mike Cohn, for example, tells us that “improving technical practices is not optional.” The phrase not optional sounds like another way of saying that the rule is to be applied “without anger or bias.” Mary Poppendieck, coauthor of the canonical works on Lean software development, uses curiously similar language in her introduction to Greg Smith and Ahmed Sidky’s book on adopting Agile practices: “The technical practices that Agile brings to the table—short iterations, test-first development, continuous integration—are not optional.” I’ve already mentioned Schwaber and Sutherland’s dictum that “the Development Team isn’t allowed to act on what anyone else [other than the product owner] says.” Please don’t hate me for this, Mike, Mary, Ken, and Jeff, but that is the voice of the command-and-control bureaucrat. “Not optional,” “not allowed” – I don’t know about you, but these phrases make me think of No Parking and Curb Your Dog signs.

These are the kind of thought-trains that only ever evoke “well, of course my intention wasn’t anything awful!” from the other side. It’s like with the ITIL people and the NRA gun-nut people: their goal wasn’t to put in place a thought-technology that harmed people, far from it.

Gently nestled in his wry tone and style (which you can imagine I love), you can feel some hidden hair-pulling about the unintended consequences of Agile confidence and decrees. I mean, the dude is the CIO of a massive government agency, so he must be throwing process optimism against brick walls daily and late into the night.

Learning bureaucracies

The cure, as ever, is not only to be smart and introspective, but to make evolution and change part of your bureaucracy:

Rules become set in stone and can’t change with circumstances. Rigidity discourages innovation. Rules themselves come to seem arbitrary and capricious. Their original purpose gets lost and the rules become goals rather than instruments. Bureaucracies can become demoralizing for their employees.

So, you know, make sure you allow for change. It’s probably good to have some rules and governance around that too.

Making mainframe applications more agile, Gartner – Highlights

In a report giving advice to mainframe folks looking to be more Agile, Gartner’s Dale Vecchio and Bill Swanton give some pretty good advice for anyone looking to change how they do software.

Here are some highlights from the report, entitled “Agile Development and Mainframe Legacy Systems – Something’s Got to Give”:

Chunking up changes:

  1. Application changes must be smaller.
  2. Automation across the life cycle is critical to being successful.
  3. A regular and positive relationship must exist between the owner of the application and the developers of the changes.

Also:

This kind of effort may seem insurmountable for a large legacy portfolio. However, an organization doesn’t have to attack the entire portfolio. Determine where the primary value can be achieved and focus there. Which areas of the portfolio are most impacted by business requests? Target the areas with the most value.

An example of possible change:

About 10 years ago, a large European bank rebuilt its core banking system on the mainframe using COBOL. It now does agile development for both mainframe COBOL and “channel” Java layers of the system. The bank does not consider that it has achieved DevOps for the mainframe, as it is only able to maintain a cadence of monthly releases. Even that release rate required a significant investment in testing and other automation. Fortunately, most new work happens exclusively in the Java layers, without needing to make changes to the COBOL core system. Therefore, the bank maintains a faster cadence for most releases, and only major changes that require core updates need to fall in line with the slower monthly cadence for the mainframe. The key to making agile work for the mainframe at the bank is embracing the agile practices that have the greatest impact on effective delivery within the monthly cadence, including test-driven development and smaller modules with fewer dependencies.

It seems impossible, but you should try:

Improving the state of a decades-old system is often seen as a fool’s errand. It provides no real business value and introduces great risk. Many mainframe organizations Gartner speaks to are not comfortable doing this much invasive change and believing that it can ensure functional equivalence when complete! Restructuring the existing portfolio, eliminating dead code and consolidating redundant code are further incremental steps that can be done over time. Each application team needs to improve the portfolio that it is responsible for in order to ensure speed and success in the future. Moving to a services-based or API structure may also enable changes to be done effectively and quickly over time. Some level of investment to evolve the portfolio to a more streamlined structure will greatly increase the ability to make changes quickly and reliably. Trying to get faster with good quality on a monolithic hairball of an application is a recipe for failure. These changes can occur in an evolutionary way. This approach, referred to in the past as proactive maintenance, is a price that must be paid early to make life easier in the future.

You gotta have testing:

Test cases are necessary to support automation of this critical step. While the tooling is very different, and even the approaches may be unique to the mainframe architecture, they are an important component of speed and reliability. This can be a tremendous hurdle to overcome on the road to agile development on the mainframe. This level of commitment can become a real roadblock to success.

Another example of an organization gradually changing:

When a large European bank faced wholesale change mandated by loss of support for an old platform, it chose to rewrite its core system in mainframe COBOL (although today it would be more likely to acquire an off-the-shelf core banking system). The bank followed a component-based approach that helped position it for success with agile today by exposing its core capabilities as services via standard APIs. This architecture did not deliver the level of isolation the bank could achieve with microservices today, as it built the system with a shared DBMS back-end, as was common practice at the time. That coupling with the database and related data model dependencies is the main technical obstacle to moving to continuous delivery, although the IT operations group also presents cultural obstacles, as it is satisfied with the current model for managing change.

A reminder: all we want is a rapid feedback cycle:

The goal is to reduce the cycle time between an idea and usable software. In order to do so, the changes need to be smaller, the process needs to be automated, and the steps for deployment to production must be repeatable and reliable.

The ALM technology doesn’t support mainframes, and mainframe ALM stuff doesn’t support agile. A rare case where fixing the tech can likely fix the problem:

The dilemma mainframe organizations may face is that traditional mainframe application development life cycle tools were not designed for small, fast and automated deployment. Agile development tools that do support this approach aren’t designed to support the artifacts of mainframe applications. Modern tools for the building, deploying, testing and releasing of applications for the mainframe won’t often fit. [Adapting] existing mainframe software version control and configuration management tools for a new agile approach to development will take some effort — if they will work at all.

Use APIs to decouple the way, norms, and road-map of mainframes from the rest of your systems:

wrapping existing mainframe functions and exposing them as services does provide an intermediate step between agile on the mainframe and migration to environments where agile is more readily understood.
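To make the wrapping idea concrete, here’s a minimal, purely hypothetical sketch in Python/Flask. The gateway call (`call_cics_transaction`) and the `BALQ` transaction are made-up placeholders for however you’d actually reach the host (z/OS Connect, an MQ bridge, a vendor gateway, and so on), not anything from the report:

```python
# A thin HTTP facade in front of an existing mainframe transaction, so the
# rest of the estate codes against a plain REST API instead of the
# mainframe's own interfaces and release cadence.
from flask import Flask, jsonify

app = Flask(__name__)

def call_cics_transaction(tran_id: str, account_id: str) -> dict:
    """Hypothetical adapter: invoke the legacy transaction and map the
    copybook-style response into a plain dict."""
    # ...real integration code would go here...
    return {"account_id": account_id, "balance": "1234.56", "currency": "USD"}

@app.route("/accounts/<account_id>/balance")
def get_balance(account_id: str):
    # Consumers see a stable, versionable HTTP contract; the mainframe's
    # change process stays hidden behind this seam.
    return jsonify(call_cics_transaction("BALQ", account_id))

if __name__ == "__main__":
    app.run(port=8080)
```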

Contrary to what you might be thinking, the report doesn’t actually advocate moving off the mainframe willy-nilly. From my perspective, it’s just trying to suggest using better processes and, as needed, updating your ALM and release management tools.

Read the rest of the report over behind Gartner’s paywall.

The state of Java – My February Register Column

This month, my column at The Register is on the state of Java and the evolving nature of J(2)EE:

Despite all the inside-bickering, lawsuits, a shotgun wedding to Oracle, drawn-out releases, and rivals from PHP, to Rails, to Swift, Java is still in wide use and shows no signs of finally dying. Jobs-wise, you’d be hard-pressed to find a better language than Java as your primary programming language if you wanted to switch from dropping off hot-pies to writing code.

Check out the rest!

Source: Java? Nah, I do JavaScript, man. Wise up, hipster, to the money • The Register

Hardware layoffs at Oracle

Oracle claims the company isn’t closing the Santa Clara facility with this reduction in force. Instead, “Oracle is refocusing its Hardware Systems business, and for that reason, has decided to lay off certain of its employees in the Hardware Systems Division.”

Those hardware employees appear to have been Oracle’s failing SPARC hardware department staffers. In mid 2016, Oracle claimed its new SPARC S7 processor would be offered on Oracle Cloud. The cloud is Oracle’s new revenue hope since its new software licensing revenue plummeted by 20 percent in its last quarter ended December 15. At the same time, Oracle’s hardware revenue had fallen 13 percent.

Link

Choose your TAM wisely and remember to charge a high price, RethinkDB

[O]ur users clearly thought of us as an open-source developer tools company, because that’s what we really were. Which turned out to be very unfortunate, because the open-source developer tools market is one of the worst markets one could possibly end up in. Thousands of people used RethinkDB, often in business contexts, but most were willing to pay less for the lifetime of usage than the price of a single Starbucks coffee (which is to say, they weren’t willing to pay anything at all). Link

How big is the pie?

Any company selling developer tools needs to figure out the overall market size for what they’re selling. Developers, eager to make tools for themselves (typically, in their mid-to-late 20s, developers work on at least one “framework” project), often fall prey to picking a market that has little to no money and, then, are dismayed when “there’s no money in it.”

What we’re looking for here is a market category and a way of finding how much money is being spent in it. As a business, you want to grab as much of that money as possible. The first thing you want to do is make sure there’s enough money for you to care. If you’re operating in a market that has only $25m of total, global spend, it’s probably not worth your while, for example.

Defining your market category, too, is important to find out who your users and buyers are. But, let’s look at TAM-think: finding what the big pie of cash looks like, your Total Addressable Market.

The TAMs on the buffet

If you’re working on developer-oriented tech, there are a few key TAMs:

Another interesting TAM for startups in the developer space is a combo one Gartner recently put together that shows public and private PaaS, along with “traditional” application platforms: $7.8bn in 2015. 451 Research has a similar TAM that combines public and private cloud at around $10bn in 2020.

I tried to come up with a public and private PaaS TAM – a very, very loose one – last year and sauntered up to something like $20 to $25bn over the next 5-10 years.

There are other TAMs, to be sure, but those are good ones to start with.

Bending a TAM to your will, and future price changes

In each case, you have to be very, very careful because of open source and public cloud. Open source means there’s less to sell upfront and that, likely, you’ll have a hard time suddenly going from charging $0 to $1,000s per unit (a unit is whatever a “seat” or “server” is: you need something to count by!). If you’re delivering your stuff over the public cloud, similar pricing problems arise: people expect it to be really cheap and are, in fact, shocked when it adds up to a high monthly bill.

But briefly: people expect infrastructure software to be free nowadays. (Not so much applications, which have held onto the notion that they should be paid for, but the low prices in the app stores depress their unit prices too.)

In both cases (open source and public cloud delivery), you’re likely talking a drastically lower unit price. If you don’t increase the overall volume of sales, you’ll whack down your TAM right quick.

So, you have to be really, really careful when using backward-looking TAMs to judge what your TAM is. Part of the innovation you’re expected to be doing is in pricing, likely making it cheaper.

The effect is that your marketshare, based on “yesterday’s TAMs,” will look shockingly small. For example, Gartner pegged the collective revenue of NoSQL vendors (Basho, Couchbase, DataStax, MarkLogic, and MongoDB) at $364M in 2015: 1% of the overall TAM of $35.9bn! Meanwhile, the top three Hadoop vendors clocked in at $323.2M and AWS’s DB estimate was $833.6M.

Pair legacy TAMs with your own bottoms-up TAM

In my experience, the most helpful way of figuring out a TAM (really, of recomputing TAMs in “real time”) is to look at the revenue that vendors in that space are actually pulling in and then to understand what software they’re replacing. That is, in addition to taking analyst TAMs into perspective, you should come up with your own, bottoms-up model and explain how it works.
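To illustrate what a bottoms-up model can look like, here’s a toy sketch. Every segment and number below is a placeholder I made up, not real market data:

```python
# An illustrative bottoms-up TAM sketch (all numbers are placeholders):
# count the buyers you can actually name, guess units per buyer and a
# realistic unit price, and see what pie that adds up to -- then compare
# against the analyst, "yesterday's TAM" figure.
segments = {
    # segment: (number of target organizations, avg units each, $/unit/year)
    "global 2000 replacing legacy app servers": (2_000, 40, 1_500),
    "mid-market going cloud-native":            (15_000, 10,   500),
    "startups (mostly free/open source tier)":  (50_000,  5,     0),
}

bottoms_up_tam = sum(orgs * units * price for orgs, units, price in segments.values())
print(f"Bottoms-up TAM: ${bottoms_up_tam / 1e9:.2f}bn/year")  # roughly $0.2bn/year

# Price sensitivity: open source / cloud delivery usually means a lower unit
# price, so the same unit volume supports a much smaller TAM.
for discount in (0.5, 0.9):
    print(f"At {int(discount * 100)}% lower unit prices: "
          f"${bottoms_up_tam * (1 - discount) / 1e9:.2f}bn/year")
```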

If you’re doing IT-led innovation, using existing (if not “legacy”!) TAMs is a bad idea. You’ll likely end up over-estimating your growth and, worse, misjudging which category of software you’re in and who the buyers are. Study your users and your buyers and start modeling from there, not from pivot tables out of the Northeast.

The other angle here is that if you’re “revolutionizing” a market category, it means you’re redefining it. This means there will be no TAM for many years. For example, there was no “IaaS” TAM for a long time; at some point, there was no “Java app server” TAM either. In such cases, creating your own TAM is much more useful.

Finally, once you’ve figured out how big (or small!) your pie of money is, adjust your prices accordingly. More than likely you’ll find that you’ll need to charge a higher price than you think is polite…if you want to build a sustainable, revenue-driven business rather than just a good aggregation startup to be acquired by a larger company…who’ll be left to sort out how to make money.

“the obsolescence of Java EE” – Notebook

Bottom line: Java EE is not an appropriate framework for building cloud-native applications.

In preparation for this week’s Pivotal Conversations, I re-read the Gartner write-up on the decline of traditional JEE and the flurry of responses to it. Here’s a “notebook” entry for all that.

From Gartner’s “Market Guide for Application Platforms”

This is the original report from Anne Thomas and Aashish Gupta, Nov 2016. Pivotal has it for free in exchange for lead-gen’ing yourself.

What is an “application platform” vs. aPaaS, etc.?

Application platforms provide runtime environments for application logic. They manage the life cycle of an application or application component, and ensure the availability, reliability, scalability, security and monitoring of application logic. They typically support distributed application deployments across multiple nodes. Some also support cloud-style operations (elasticity, multitenancy and self-service).

An “aPaaS” is a public-cloud-hosted PaaS, of which they say: “By 2021, new aPaaS deployments will exceed new on-premises deployments. By 2023, aPaaS revenue will exceed that of application platform software.”

On the revenue situation:

[Chart: application platform software vs. aPaaS revenue, from the Gartner report]

Commercial Java Platform, Enterprise Edition (Java EE) platforms’ revenue declined in 2015, indicating a clear shift in the application platform market…. Application platform as a service (aPaaS) revenue is currently less than half of application platform software revenue, but aPaaS is growing at an annual rate of 18.5%, and aPaaS sales will supersede platform software sales by 2023.

And:

Currently, the lion’s share of application platform software revenue comes from license sales of Java EE application servers. From a revenue perspective, the application platform software market is dominated by just two vendors: Oracle and IBM. Their combined revenues account for more than three-quarters of the market.

Decline in revenue for the current market leaders, IBM and Oracle, over the last three years (4.5% and 9.5%, respectively); meanwhile, an uptick from Red Hat, AWS, and Pivotal (33.3%, 50.6%, and 22.7%, respectively).

Decline/shifting is driven by:

given the high cost of operation, the diminishing skill pool and the very slow pace of adoption of new technologies, a growing number of organizations — especially at the low end of the market — are migrating these workloads to application servers or cloud platforms, or replacing them with packaged or SaaS applications.

And:

Java EE has not kept pace with modern architectural trends. Oracle is leading an effort to produce a new version of Java EE (version 8), which is slated to add a host of long-overdue features; however, Oracle announced at Oracle OpenWorld 2016 that Java EE 8 has been delayed until the end of 2017. By the time Java EE catches up with basic features required for today’s applications, it will be at least two or three years behind the times again.

Target for cloud native:

Design all new applications to be cloud-native, irrespective of whether or not you plan to deploy them in the cloud…. If business drivers warrant the investment, rearchitect existing applications to be cloud-native and move them to aPaaS.

Vendor selection:

Give preference to vendors that articulate a platform strategy that supports modern application requirements, such as public, private and hybrid cloud deployment, in-memory computing, multichannel clients, microservices, event processing, continuous delivery, Internet of Things (IoT) support and API management.

Responses

Oracle and Java: confusing

Oracle’s stewardship of Java has been weird of late:

It’s all about WebLogic and WebSphere

I think this comment from Ryan Cuprak best sums it all up: “What this report is trying to do is attack Oracle/IBM via Java EE.”

I wouldn’t say “attack,” but rather show that their app servers are in decline, as well as TP processing things. The report is trying to call the shift to both a new way of development (cloud native) and the resulting shifts in product marketshare, including new entrants like Pivotal.

I can’t speak to how JEE is changing itself, but given past performance, I’d assume it’ll be a sauntering follower when it comes to adopting new technologies; the variable this time is Oracle’s proven ambivalence about Java and JEE and, thus, the question of whether the change can be funded fast enough to keep apace with everything else.