A recent rendition of one of my standard talks at the Austin DevOps Meetup. See the slides as well.
A few weeks back my book review of two “the robots are taking over” books came out over on The New Stack. Here are some responses, and also some highlights from a McKinsey piece on automation.
Don’t call it “automation”
There is much more to this topic. Nick Carr’s book, The Glass Cage, has a different perspective. The ramifications of new technology (don’t call it automation) are notoriously difficult to predict, and what we think are foregone conclusions (e.g., unemployment of truck drivers, even though self-driving tech needs to see much more diversity of conditions before it can reach 99%+ accuracy) are not.
Lisanne Bainbridge in her seminal 1983 paper outlines what is still true today.
From that paper:
This paper suggests that the increased interest in human factors among engineers reflects the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator.
When things go wrong, humans are needed:
To take over and stabilize the process requires manual control skills, to diagnose the fault as a basis for shut down or recovery requires cognitive skills.
But their skills may have deteriorated:
Unfortunately, physical skills deteriorate when they are not used, particularly the refinements of gain and timing. This means that a formerly experienced operator who has been monitoring an automated process may now be an inexperienced one. If he takes over he may set the process into oscillation. He may have to wait for feedback, rather than controlling by open-loop, and it will be difficult for him to interpret whether the feedback shows that there is something wrong with the system or more simply that he has misjudged his control action.
There’s a good case made not only for the need for humans, but for keeping humans fully trained and involved in the process to handle error states.
Hiring not abating
Vinnie, the author of one of the books I reviewed, left a comment on the review, noting:
For the book, I interviewed practitioners in 50 different work settings – accounting, advertising, manufacturing, garbage collection, wineries etc. Each one of them told me where automation is maturing, where it is not, how expensive it is etc. The litmus test to me is are they stopping the hiring of human talent – and I heard NO over and over again even for jobs for which automation tech has been available for decades – UPC scanners in groceries, ATMs in banking, kiosks and bunch of other tech in postal service. So, instead of panicking about catastrophic job losses we should be taking a more gradualist approach and moving people who do repeated tasks all day long and move them into more creative, dexterous work or moving them to other jobs.
I think Avent’s worry is that the approach won’t be gradual and that, as a society, we won’t be able to change norms, laws, and “work” fast enough.
For more context, check out this overview of McKinsey’s own study and analysis from a 2015 McKinsey Quarterly article:
The jobs don’t disappear, they change:
Our results to date suggest, first and foremost, that a focus on occupations is misleading. Very few occupations will be automated in their entirety in the near or medium term. Rather, certain activities are more likely to be automated, requiring entire business processes to be transformed, and jobs performed by people to be redefined, much like the bank teller’s job was redefined with the advent of ATMs.
our research suggests that as many as 45 percent of the activities individuals are paid to perform can be automated by adapting currently demonstrated technologies… fewer than 5 percent of occupations can be entirely automated using current technology. However, about 60 percent of occupations could have 30 percent or more of their constituent activities automated.
Most work is boring:
Capabilities such as creativity and sensing emotions are core to the human experience and also difficult to automate. The amount of time that workers spend on activities requiring these capabilities, though, appears to be surprisingly low. Just 4 percent of the work activities across the US economy require creativity at a median human level of performance. Similarly, only 29 percent of work activities require a median human level of performance in sensing emotion.
So, as Vinnie also suggests, you can automate all that stuff and have people focus on the “creative” things, e.g.:
Financial advisors, for example, might spend less time analyzing clients’ financial situations, and more time understanding their needs and explaining creative options. Interior designers could spend less time taking measurements, developing illustrations, and ordering materials, and more time developing innovative design concepts based on clients’ desires.
The most recent offshoring survey from Horses for Sources suggests that companies will have less use for traditional IT outsourcing.
When it comes to IT services and BPO, it’s no longer about “location, location, location”, it’s now all about “skills, skills, skills”.
Instead of “commodity” capabilities (things like password resets, routine programming changes, etc.), companies want more highly-skilled, innovative capabilities. Either offshorers need to provide this, or companies will in-source those skills.
Because offshorers typically don’t focus on such “open ended” roles, analysis of the survey suggests offshorers will have less business, at least new business:
…aspirations for offshore use between the 2014 and 2017 State of the Industry studies, we see a significant drop, right across the board, with plans to offshore services.
an increasing majority of customers of traditional shared services and outsourcing feel they have wrung most of the juice offshore has to offer from their existing operations, and aren’t looking to increase offshore investments.
Given the large volume of IT outsourcing companies do, and how this outsourcing tends to control/limit IT capabilities, paying attention to these trends can help you predict the ongoing “nature of IT” in large organizations.
This fits the offshoring and outsourcing complaining I hear from almost all software teams in large organizations.
For the Sun: WTF? files:
Gerstner questioned whether three or four years from now any proprietary version of Unix, such as Sun’s Solaris, will have a leading market position.
One of the more popular theories for the decline of Sun is that they accepted Linux way, way too late. As a counter-example, there’s IBM saying that somewhere around 2006 you’d see the steep decline of the Unix market, including Solaris, of course.
If I ever get around to writing that book on Sun, a chart showing server OS market-share from 2000 to 2016 would pair well with that quote.
If you’ve read Stephen’s fine book, The New Kingmakers, you may recall this relevant passage:
In 2001, IBM publicly committed to spending $1 billion on Linux. To put this in context, that figure represented 1.2% of the company’s revenue that year and a fifth of its entire 2001 R&D spend. Between porting its own applications to Linux and porting Linux to its hardware platforms, IBM, one of the largest commercial technology vendors on the planet, was pouring a billion dollars into the ecosystem around an operating system originally written by a Finnish graduate student that no single entity — not even IBM — could ever own. By the time IBM invested in the technology, Linux was already the product of years of contributions from individual developers and businesses all over the world.
How did this investment pan out? A year later, Bill Zeitler, head of IBM’s server group, claimed that they’d made almost all of that money back. “We’ve recouped most of it in the first year in sales of software and systems. We think it was money well spent. Almost all of it, we got back.”
The open source–based data integration (basically, evolved ETL) company Talend IPO’ed this week. It’s a ten-year-old company, based on open source, with a huge French tie-in. Interesting all around. Here are some details on them:
- “1,300 customers include Air France, Citi, and General Electric.” That’s way up from 400 back in 2009, seven years ago.
- In 2015 “Talend generated a total revenue of $76 million. Its subscription revenue grew 39% year over year, representing $62.7 million of the total. The company isn’t profitable: it reported a net loss of $22 million for 2015.”
- “…much of that [loss] thanks to the $49 million it spent on sales and marketing,” according to Julie Bort.
- “Subscription revenue rose 27% to $63m while service fees stayed flat at $13m,” according to Matt Aslett.
- It looks like the IPO performed well, up ~50% from the opening price.
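A quick sanity check on those figures, as a sketch: the inputs are from the quotes above, but the subscription share and the implied 2014 subscription base are my back-of-envelope derivations, not reported numbers (note I use the 39% growth figure from the first quote; Aslett’s 27% would give a different base).

```python
# Talend's reported 2015 figures, taken from the quotes above ($m).
total_revenue = 76.0        # total 2015 revenue
subscription = 62.7         # subscription portion of that revenue
sub_growth = 0.39           # 39% year-over-year subscription growth

# Subscription as a share of total revenue: ~82%, i.e. mostly a
# subscription business rather than a services business.
sub_share = subscription / total_revenue

# Implied 2014 subscription revenue, working the 39% growth backwards.
# This is a derived estimate, not a figure Talend reported.
implied_2014_subscription = subscription / (1 + sub_growth)

print(f"subscription share of revenue: {sub_share:.0%}")
print(f"implied 2014 subscription revenue: ${implied_2014_subscription:.1f}m")
```

That works out to roughly an 82% subscription mix and a ~$45m subscription base the year before.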
By this point, I’m sure Talend messes around in other TAMs, but way back when I used to follow the business intelligence and big data market more closely, I recall that much of the growth – though small in TAM – was in ETL. People always like to gussy it up as “data integration”: sure thing, hoss.
That still seems to be the case, as spelled out in a recent Magic Quadrant of the space (courtesy of the big dog in the space, Informatica):
Gartner estimates that the data integration tool market was worth approximately $2.4 billion in constant currency at the end of 2014, an increase of 6.9% from 2013. The growth rate is above the average for the enterprise software market as a whole, as data integration capability continues to be considered of critical importance for addressing the diversity of problems and emerging requirements. A projected five-year compound annual growth rate of approximately 7.7% will bring the total to more than $3.4 billion by 2019
In comparison, here’s the same from the 2011 MQ:
Gartner estimates that the data integration tools market amounted to $1.63 billion at the end of 2010, an increase of 20.5% from 2009. The market continues to demonstrate healthy growth, and we expect a year-on-year increase of approximately 15% in 2011. A projected five-year compound annual growth rate of approximately 11.4% will bring the total to $2.79 billion by 2015.
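The compounding in both forecasts checks out; here’s a minimal sketch that grows each base figure at the quoted CAGR (the function and figures are just the arithmetic from the two quotes above, nothing more):

```python
def project(base, cagr, years):
    """Compound a market-size estimate forward at a constant annual growth rate."""
    return base * (1 + cagr) ** years

# Recent MQ: $2.4bn at the end of 2014, 7.7% CAGR over five years.
print(round(project(2.4, 0.077, 5), 2))   # ~3.48, i.e. "more than $3.4 billion by 2019"

# 2011 MQ: $1.63bn at the end of 2010, 11.4% CAGR over five years.
print(round(project(1.63, 0.114, 5), 2))  # ~2.8, matching the projected $2.79bn by 2015
```

Worth noticing: the projected growth rate dropped from 11.4% to 7.7% between the two reports, even as the absolute market kept growing.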
The OpenStack Summit is in Austin this year, finally! So, I of course submitted several talks. Go over and vote for them – I think that does something helpful, who the hell knows?
Here are the talks:
- DevOps for Normals – what’s happening as donkeys adopt DevOps – I gave one of my first “state of DevOps” style talks back at the Atlanta OpenStack Summit in 2014. We’d just done a little DevOps study at 451 Research. Now I give these types of talks a lot, updating them each time with the latest collection of charts and advice.
- Cloud Native Promises in the Land of Continuously Delivered Microservices – this is the talk I have going over exactly what a “cloud platform” is, why you’d care, and what it does for you. More than anything, it’s one of many attempts to frame up what cloud is: a stack of stuff to help make software delivery better; put another way, the “infrastructure” that makes continuous delivery possible.
- Developer Marketing and Relations: Convincing the “Kingmakers” to give a crap about you – I’ve been trying to put together a panel to talk about developer relations for a while now. As you may recall, I brain-dumped on that topic into the one (public) long-form report I did at 451 Research. For this panel, I picked a developer (Charles Lowell, The Frontside), a tech journo (Alex Williams, The New Stack), a marketer (Melissa Smolensky, CoreOS), a straight up developer relations person (David Flanders, OpenStack Foundation), and whatever it is I do. This seemed like a good bunch to go over why you’d want to do developer relations, what people do, and what works and doesn’t work.
I’ll be at the Summit regardless, but it’d sure be dandy to do some of the above too.
I started a new column at The Register, on the topic of DevOps. I used the first column to lay out the case that DevOps is a thing, and to baseline how much adoption there currently is (enough, but not a lot – a “glass almost half full” type of situation). I was surprised by how many comments it kicked up!
Next up, I’ll try to pick key concepts and explain them, along with best and worst practices for adoption of those concepts. Or whatever else pops up to fill 800 words. Tell me if you have any ideas!
(You may recall I had a brief column at The Register back when I was at 451 Research.)
Figuring out the market for PaaS has always been difficult. At the moment, I tend to estimate it at $20-25bn sometime in the future (5-10 years from now?) based on the model of converting the existing middleware and application development market. Sizing this market has been something of an annual bug-bear for me across my time at Dell doing cloud strategy, at 451 Research covering cloud, and now at Pivotal.
A bias against private PaaS
This number is in contrast to the numbers you usually see from analysts, which are in the single-digit billions. Most analysts think of PaaS only as public PaaS, tracking just Force.com, Heroku, and parts of AWS, Azure, and Google. This is mostly due, I think, to historical reasons: several years ago “private cloud” was seen as goofy and made-up, and I’ve found that many analysts still view it as such. Thus, their models started off being just public PaaS and have largely remained so.
I was once a “public cloud bigot” myself, but having worked more closely with large organizations over the past five years, I now see that much of the spending on PaaS is on private PaaS. Indeed, if you look at the history of Pivotal Cloud Foundry, we didn’t start making major money until we gave customers what they wanted to buy: a private PaaS platform. The current product/market fit for PaaS in large organizations, then, seems to be private PaaS.
(Of course, I’d suggest a wording change: when you end-up running your own PaaS you actually end-up running your own cloud and, thus, end up with a cloud platform.)
How much do you have budgeted?
With this premise – that people want private PaaS – I then look at existing middleware and application development market-sizes. Recently, I’ve collected some figures for that:
- IDC’s Application Development forecast puts the application development market (which includes ALM tools and platforms) at $24bn in 2015, growing to $30bn in 2019. The commentary notes that the influence of PaaS will drive much growth here.
- Recently from Ovum: “Ovum forecasts the global spend on middleware software is expected to grow at a compound annual growth rate (CAGR) of 8.8 percent between 2014 and 2019, amounting to $US22.8 billion by end of 2019.”
- And there’s my old pull from a Goldman Sachs report that pulled from Gartner, where middleware is $24bn in 2015 (that’s from a Dec 2014 forecast).
When dealing with large numbers like this and so much speculation, I prefer ranges. Thus, the PaaS TAM I tend to use nowadays is something like “it’s going after a $20-25bn market, you know, over the next 5 to 10 years.” That is, the pot of current money PaaS is looking to convert is somewhere in that range. That’s the amount of money organizations are currently willing to spend on this type of thing (middleware and application development), so it’s a good estimate of how much they’ll spend on a new type of this thing (PaaS) to help solve the same problems.
Things get slightly dicey depending on including databases, ALM tools, and the underlying virtualization and infrastructure software: some PaaSes include some, none, or all of these in their products. Databases are a huge market (~$40bn), as is virtualization (~$4.5bn). The other ancillary buckets are pretty small, relatively. I don’t think “PaaS” eats too much database, but probably some “virtualization.”
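That adjacency question can be sketched as a simple sensitivity calculation. To be clear, the market sizes are the ones cited above, but the conversion shares (how much of database and virtualization spend a PaaS might absorb) are purely my illustrative assumptions, not figures from any analyst:

```python
# Base range from the middleware + appdev pot discussed above ($bn,
# over the next 5-10 years).
tam_low, tam_high = 20.0, 25.0

# Adjacent buckets that some PaaSes partially include ($bn).
database_market = 40.0
virtualization_market = 4.5

# Illustrative conversion shares -- assumptions, not analyst figures:
# PaaS doesn't eat much database spend, but probably a good chunk of
# virtualization spend.
db_share = 0.05
virt_share = 0.5

adjustment = database_market * db_share + virtualization_market * virt_share
adjusted = (tam_low + adjustment, tam_high + adjustment)
print(f"adjusted TAM range: ${adjusted[0]:.2f}-{adjusted[1]:.2f}bn")
```

Even modest assumptions about those adjacent buckets nudge the range up by a few billion, which is why the “what counts as PaaS” question matters so much to the sizing.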
So, if you accept that PaaS is both public and private PaaS and that it’s going after the middleware and appdev market, it’s a lot more than a few billion dollars.
(Ironic-clipart from my favorite source, geralt.)
But what we had not fully processed – and perhaps no one else did, either – is that at that moment Software Group, for all intents and purposes, was gone except as an amalgamated category for financial reporting to Wall Street.
So suggests TPM in his coverage of Steve Mills retiring.
The notion that some in the media – who usually have no specific knowledge about Yahoo – have recklessly put forward that Yahoo is “unfixable” and that it should be simply “chopped up” and handed over for nothing to private equity or strategies is insulting to all long-term public shareholders.
- Check out how they make their case
- Use visuals and charts
- The informal nature of their language, e.g., they use the word “stuff” frequently
- Their citations, e.g., citing themselves (I always love a good “Source: Me!”) and citing “Google Images”
These things, in my view, are neither good nor bad: I’m more interested in the study of the rhetoric, which I find fascinating in investment banker documents/presentations like this.
Not only that, it’s a classic “Word doc accidentally printed in landscape.” The investment community can’t help themselves.
As another note, there’s no need to be such a parenthetical dick, below, to prove the point of a poor M&A history; just let the outcomes speak for themselves, not the people who do them.
They actually do a better job in the very next slide, but that kind of pettiness doesn’t really help their argument. (Their argument is: she’s acquiring her friends.)
This is a type of reverse halo effect: we assume that tree-standing goofiness has something to do with the business: an ad hominem attack. But I think most billionaires probably have pictures of themselves in trees, wearing those silly glove shoes, roasting their own coffee, only eating meat they kill themselves, or any number of other affectations that have nothing to do with profit-making, good or bad.