“The research effort included a total of 1,024 individuals, all of whom have a role that involves daily use of Spring.”
77% of the respondents have been using Spring Boot for 3 years or more. So, these are people very familiar with Spring and Java.
Industries: technology companies (30%) and financial services companies (20%). All major sectors are represented, including retail (8%), services (6%), and healthcare (5%).
37% work at organizations of 5,000 to 10,000+ staff; within that, 28% are at 10,000+ organizations.
So, a bit heavy on tech companies, but a good enough spread across industries and a diverse range of organization sizes.
“52% of developers surveyed use Spring Boot as their only or primary development platform.”
No slow-down in use: “75% of respondents expect Spring Boot usage to grow over the next 2 years.”
Uses, lots of API use, interesting:
Lots of public cloud only use: “When asked where they deploy their Spring Boot apps, 57% of respondents were either deploying exclusively to public cloud (21%) or in a hybrid mode with both on-prem/private and public cloud deployments.”
– Most running in containers – 65% containerize their apps, 30% planning to.
…to run in Kubernetes: 44% already running in Kubernetes, 31% plan to in the next 12 months.
Air France KLM recently modernized their payments service, EPASS. This is a 12-year-old system that provides the backend for processing purchases of airline tickets (and other things, I guess) from numerous front-ends: the web, mobile apps, and social apps as well. The system was difficult to scale: it required manually adding new servers and had a long development cycle. As more and more people want to interact with Air France KLM through software (phones, online, in WhatsApp, or whatever other “channel”), they want to be able to evolve their software quickly. They want to use software as a core innovation tool for improving customer experience and, thus, the business. So, here we see one of their first experiences modernizing their backend and transforming how they do software.
Talk presented by Oya Ünlü Duygulu and Patrick Zijlstra.
– Rick in intro: transformed payments platform in 6 weeks.
– [Corporate vision:] “our purpose as an airline group is to create memorable experiences for our passengers.”
– So, they want to focus on (1) customer centricity, (2) innovation, and (3) efficiencies in their processes.
– “Digital” as the primary channel is on the rise. People want to interact through apps and such. So, KLM needs to meet the customers there… “As an airline, we want to be where our passengers are” (~3:00)
– Some examples of digital features: “About 10 percent of the ideas actually end up on the market. A recent example of this is the hand baggage check in the KLM app. Through augmented reality travelers can see whether their hand luggage meets the set dimensions. This function went live last month.” Six months ago, a 3D rendering of the business class seats was also shown when checking in online: “This with the idea of stimulating the sale of these chairs.”
– For example, listening and interacting with customers in social media [something I’ve done many times – it’s great to chat with someone (or a bot?) in WhatsApp, Twitter, etc. instead of a phone call]. (~3:40) Social media is now “our closest connection to passengers.” And in China: “For instance, Chinese travelers rely immensely on mobile devices. How can these personal devices be used for authentication – identity management, payment etc. to streamline the journey wherever possible? In China, the whole landscape is different, and we need to ensure we aren’t relying overtly on drawing customers only to our touch-points.”
– (~3:10) Merged the KLM and Air France backends together to get less complexity in the back-end and a unified experience in the front-end for customers.
– They use SAFe release trains (which they call “release planes”), mapped to customer value journeys, e.g., sales, paid products, or airport.
– In the digital department, they have about 50 product teams.
– Planning happens every three months: they come together, get a roadmap from the business, and all the teams plan together. Then they start sprinting bi-weekly.
– There’s also a shared services and practices team.
– Their goals:
– (~5:50) “We are designing our products focused on time to market, innovation, robustness, and security.”
– Focusing on getting CI/CD in place.
– Also, reducing complexity and speeding up business value [realization], so they are moving towards a microservices architecture.
– [Business stuff:] EPASS handles payments from many places, created 12 years ago. They wanted to modernize [not sure why]. They worked with Pivotal Labs on modernization for a six week project.
– EPASS app – made 12 years ago, handles about 37,000 payments transactions per day. Takes care of all online revenue.
– Six week engagement with Pivotal Labs. This brought expertise from the outside, combined with their existing skills.
– “Six weeks is very ambitious for such a project, but getting this expertise from Pivotal and their dedication we made a success story at the end.”
– Modernization road-map for EPASS.
– They wanted to speed up release cycles, which were then one month. [Move to single piece work-flow: whenever a user story is ready, it can go live.]
– In six weeks, all they could focus on was transforming the app and moving it to the new platform [PCF]. But, they could also modernize their skills by adding in TDD and pair programming.
– Switches to Patrick.
– (~10:55) – they go over their way of working.
– Inception to set expectations. Outception to look back at what was achieved. Some blocker removal meetings. And the usual agile meetings.
– Two teams: one does modernization, the other delivers business features.
– Worked in one week iterations.
– Doing pair programming. “We noticed that this really increases the code quality that we deliver.”
– (~12:30) EPASS architecture. It was hosted on a bare-metal Tomcat server. To scale, they had to add a new server and put the EPASS software on it. This was becoming a hassle, and fixing that was a motivation to move to VMware Tanzu.
– (~14:00) new architecture – five different components. Three in Tanzu Application Service.
– After, the majority of things were put in VMware Tanzu…
– [Picked some small things at first to test stuff out, hardcoded secrets but later fixed that – used CredHub – in long term will move to Vault.]
– (~16:00) Used Spring Boot, adding health check [this is good to highlight, that it gets instrumented/observable “for free”].
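As a sketch of what that “for free” means (my illustration, not from the talk): in a Spring Boot app, pulling in the Actuator starter is all it takes to expose a health endpoint, no application code required:

```xml
<!-- pom.xml: adding the Actuator starter exposes /actuator/health
     with no application code; the platform can then poll it -->
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```

Hitting `/actuator/health` on the running app then returns `{"status":"UP"}`, which the platform can use for its automated health checks.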
– Used Bamboo, added in automation stuff for deployment…
– Problems: mostly networking issues.
– Benefits: response times improved by 10%; “all the power for scaling is within the product team itself” instead of having to work with other groups, file tickets, etc. Also, time to patch is within 72 hours (3 days).
– (~21:08) “The experience was very positive. It was invigorating to work with the Pivotal experts. And, now there’s more confidence within the team to continue to improve the application.”
– The project has been finished for a few months. No more components run on bare-metal Tomcat.
– “From the organization side, there is no more fear of big changes. If such an old application as EPASS can transform, then it’s possible for any application.”
– “So more and more and more applications will be moving to TAS [Tanzu Application Service].”
The report goes over how software development needs and programs should adapt to the urgency that COVID brings. Highlights:
Obviously, teams are working from home more now. This exposes all of the face-to-face, undocumented processes that were happening (“manual processes address handoffs across departmental boundaries”). If there were too many of those, work can’t happen as well anymore when everyone is working from home.
“The trend catapulted use of Zoom, a videoconference service, from 10 million average daily users to 200 million during March 2020 and introduced us to our coworkers’ tastes in home décor.1 But it also separated millions of workers from the paper files they require to complete their missions, breaking millions of business processes. Paper files are an obvious point of failure, but manual processes based on desktop tools like excel and email lack visibility and tracking that are vital to remote workers.”
Paperless efforts are really on the front-burner now: “any enterprise relying on paper to advance its processes needs to automate just to continue to function.”
Also: “Topping the list for most now are tracking and tracing applications: tracking of employees, people entering hospitals and other sensitive buildings, equipment, facilities, patients, tests, research results, and on and on. These applications are not throwaways; they’re business-critical. And in the public sector, throw in new apps to manage new support, recovery, and stimulus programs or support existing programs straining under unprecedented volumes. Then in financial services, you have new servicing apps: servicing debt, defaults, new rescue programs, moratoriums, etc. (rather than new customers and new products and services).”
And, more app types:
This urgency is driving business people to (finally) start getting involved more in IT/software: “as organizations scramble to fix processes and rapidly automate to keep them running, it becomes clear that businesspeople are the primary source of operations insight…. Bringing businesspeople closer to the development process through iterative, rapid prototyping and sometimes allowing them to develop solutions on their own offers promise for much faster and more agile responses to business needs.”
With apps being used remotely (as a SaaS, over the internet), organizations will likely discover new scaling needs – when the apps run outside the corporate network, and are home grown. “Scale becomes a vital focus. Many development teams are getting their first glimpse at what massive scaling looks like for their applications.”
Companies are overworking staff: “a us hospital network made Java developers scramble 24×7 for 20 days to create a visitor registration application to extend its hospital administration system.” [This isn’t sustainable and if done too much will leave a bad taste in staff’s mouth about “agile” and “digital transformation.”]
The authors really like and recommend low-code stuff. This probably makes sense to get a lot of line-of-business people to start putting together wizards and UI-driven workflows around databases, Excel/CSV, and APIs to ERP systems.
Because of a lack of skills, they set up a team (named Stratus) to centralize [and also market?] the new platform.
(~4:10) – Building a centralized, standardized platform is important for efficiencies (reducing duplicated effort) but also for governance and maintainability, long-term. “We are building a solid container platform. Why would a team not do it themselves? If you have about 600 teams that want to use containers and everyone would need to build their own unique container platform, it would become unmanageable; as a company you cannot control it, and it would be a lot of reinventing the wheel.” And that’s something you don’t want because it wastes the teams’ valuable time; it’s better if one team builds a strong foundation that the other teams can build on top of. They also need to enforce policies/governance/security on the teams’ work.
They do the typical platform team stuff – building the platform, running it, evolving it, consulting for it, etc.
They basically use Azure, with some AWS it seems:
~7:30 figuring this all out on your own can be very time consuming – maybe a couple years of learning, PoC’ing, [vendor negotiations], etc. We needed to do it in one. We started with the CNCF reference diagram.
[I didn’t pay close enough attention beyond this point to take good notes.]
I was on vacation last week, so this notebook is a little stale. Perishable news. (JOKES!)
The deal size is $13.7bn, a 30% premium; expected to close in the second half of this year (Todd Bishop)
Highly likely to remain independent: “Reading between the lines of Bezos’ statement, Amazon is signaling that it doesn’t plan to disrupt what Whole Foods is doing with a major shakeup of the retailer’s infrastructure or strategy in the near term. Amazon has a history of allowing acquired companies — from Audible to Twitch to Zappos — to continue operating with relative independence, with some product and feature integrations.” (Ibid.)
Not good for competition
Investors really believe in that AMZN magic: “In total, those five grocery chains [Target, CostCo, Kroger, Walmart, SuperValu] shed about $26.7 billion in market capitalization between the market’s close Thursday and Friday morning, as investors worried that Amazon deeper push into the industry could be a death knell for some.”
EU too: “The worries weren’t just contained to U.S. markets. Some investors in the U.K. and Europe also saw the purchase as a sign that Amazon could take its grocery ambitions global. Shares of French retailer Carrefour fell sharply on the news, about 4%, while in London, Tesco shed 6% and Sainsbury dropped 5%.”
Like Amazon, Whole Foods is big into private label: “Whole Foods generates $2.3 billion worth of private label and exclusive brand sales per year; its private label products account for 32% of items in Instacart’s food category, taking up far more of the shelf than Walmart Grocery (16%) and Peapod (6%).”
(Further) driving down supplier costs: “It’s also possible that Amazon will use Whole Food’s partnerships with suppliers to get more of them on the Amazon platform. Amazon and Whole Foods will be tough negotiators, but the lure of the 300 million customer accounts on Amazon.com, in addition to all of its other CPG-related programs, will be tough to turn down.”
More: “[T]he scale at which Amazon is making use of this strategy should force CPG brands and Big Box retailers to make some major changes to their distribution strategies.”
“The truth, though, is that Amazon is buying a customer — the first-and-best customer that will instantly bring its grocery efforts to scale.”
“What I expect Amazon to do over the next few years is transform the Whole Foods supply chain into a service architecture based on primitives: meat, fruit, vegetables, baked goods, non-perishables”
“At its core Amazon is a services provider enabled — and protected — by scale.”
This should remind you of the “middle-man”/unpaid-for-inventory-in-my-warehouse/drop-ship type of advanced retail play that the likes of Dell made famous.
I want pizza and baby-wipes, not software – this kind of argument (though not really “invalid”) makes me bristle. It’s like a pizza company saying they’re a technology company. As long as the pizza comes in the box and the baby-wipes come in the mail, they can call themselves whatever they want…but the pizza shop and Amazon are, to me, a pizza company and a retail company. How they get the pizza into my mouth is not my problem. Since I’m a paying customer in these instances, it’s not like the “you are the product” epiphany of .com, eye-ball companies.
“Grocery remains the most under-penetrated e-commerce category, with less than 5% of sales happening online. However, with 20% of grocery sales estimated to begin online by 2025, brands investing in digital will reap the rewards.” (Elisabeth Rosen)
Online groceries penetration: “The online grocery business is still in its infancy. Last month, for example, 7% of U.S. consumers ordered groceries online, according to Portalatin. Of this group, 52% already has an Amazon Prime account. Groceries represent “the final frontier for Amazon — they haven’t quite cracked the code on that, but they already have a relationship with consumers.”
Mint says that last year, my family of two adults and two kids spent ~$15,000 at the grocery store. So that’s around what your upper-middle-class people (or whatever I am, somewhere in the 90th percentile) spend, I guess.
For us consumers…
Many predict either free or highly discounted delivery fees for Amazon Prime members. That certainly makes sense, as Amazon Video and Music, and Prime Now, show.
New types of software and delivery mechanisms (SaaS, mobile) mean new problems and scale:
“We were so used to dealing with tens of servers and suddenly it was hundreds and thousands of servers,” which in turn created more work for the development teams.
“The digital expansion of business equals more work and firefighting,” Cox said.
Less time spent doing dumb-shit:
employees used to spend the eight hours the park was closed every night manually updating each server. Now only one person is needed to update the whole fleet, in 30 minutes.
Some guiding principles and management challenges:
Cox said that leading a change of this order of magnitude involved three crucial ingredients:
1. Collaboration: break down silos, mutual objectives.
2. Curiosity: keep experimenting.
3. Courage: candor, challenge, no blaming or witch-hunting.
But these can come with their own leadership challenges, including:
• The politics of command and control.
• How new leadership can take a company in a new direction.
• The blame bias of who versus what.
And, some good motivation:
We keep moving forward, opening up new doors, doing more things because we’re curious.
As you can imagine, things like the so-called “bathroom bill” drive me crazy. It also makes me sad about whatever happened to my fellow Texans who support it, that they’d be this cruel, uninformed, and ignorant. And, of course, there’s the people affected.
Stealing some of Matt Ray’s notes for our Software Defined Talk recording, here’s a notebook and highlights on the topic.
It’s really depressing how aggressively stupid Texas is sometimes. I don’t blame anyone avoiding it.
“The consequences of this bill are beyond severe. Not only can transgender people be arrested and jailed for using the bathroom, but they will be assumed to be pedophiles, and be put on the Texas sexual predator watch list. So not only is there the possibility of being hauled off to jail during a conference, the arrest will ruin the rest of your life. Just because you need to pass some water.”
More: “The differences on the bathroom bill are substantial. The Senate would require transgender Texans to use the restrooms in publicly owned buildings that match their biological sex and would bar local governments from adopting or maintaining their own laws on the subject. The House version would apply only to elementary and secondary schools; after it passed last weekend, Patrick and others criticized it as a change that does very little.”
How’d it go in North Carolina?
AP analysis of economic effect in North Carolina, from March 2017:
Losses of $313m a year – “$3.76 billion in lost business over a dozen years.”
Some examples, not just bleeding-heart tech companies: “Those include PayPal canceling a 400-job project in Charlotte, CoStar backing out of negotiations to bring 700-plus jobs to the same area, and Deutsche Bank scuttling a plan for 250 jobs in the Raleigh area. Other companies that backed out include Adidas, which is building its first U.S. sports shoe factory employing 160 near Atlanta rather than a High Point site, and Voxpro, which opted to hire hundreds of customer support workers in Athens, Georgia, rather than the Raleigh area.”
Most of it is from businesses like Paypal and Deutsche Bank pulling out – good for them!
“Bank of America CEO Brian Moynihan — who leads the largest company based in North Carolina — said he’s spoken privately to business leaders who went elsewhere with projects or events because of the controversy, and he fears more decisions like that are being made quietly.”
For context, The North Carolina economy: “In 2010 North Carolina’s total gross state product was $424.9 billion. In 2011 the civilian labor force was at around 4.5 million with employment near 4.1 million. The working population is employed across the major employment sectors.”
So, a rough estimate of the economic impact is a decrease of 0.07%/year (this is a rough number since it’s based on 2010 GDP and other forward-looking estimates; however, it gives you a ball-park sense). However, see the scenario for larger future impact below (I mean, not to mention being dick-heads and treating people as subhuman for no good reason other than being fucking social-idiots):
For Toyota, this means something on the order of 1,000 new jobs in Texas, with an estimated 2,800 existing employees who’ll move to Texas. That’s a lot of new HEB customers, home buyers, and taxpayers.
Now, think of other G2000 companies that would want to move to Texas, or beef up their existing presence. The companies will be deciding what to do in the next 2-3 years, and if they skip on Texas, that will be decades of lost cash, not to mention new Texans.
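Going back to that 0.07%/year figure, the back-of-the-envelope arithmetic is just the AP’s estimated annual loss divided by the 2010 gross state product, both cited above:

```python
# Rough ball-park: AP's estimated annual loss in business vs. North
# Carolina's 2010 gross state product (figures from the articles above).
annual_loss = 313e6        # ~$313m/year in lost business
gsp_2010 = 424.9e9         # NC gross state product, 2010
share = annual_loss / gsp_2010
print(f"{share:.2%}")      # ≈ 0.07% of state GDP per year
```

Again, a crude comparison (2010 GDP vs. forward-looking loss estimates), but it gives the order of magnitude.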
So, what’s the big deal for those in favor of it in the first place? Well, obviously, the idea that there’s “wide-stances” going on is bunk (more).
One can only conclude that supporters are confused (and, thus, afraid): there’s a fundamental disagreement about gender and sexuality. But, also, they’re just downright discriminatory. We’ve lived through this before with the gay marriage movement in the past 20 years and know how to spot veiled discrimination.
As one ACLU person put it: “[supporters of bathroom bills] fundamentally just don’t think of transgender people as humans, and they try to erase trans people from existence.”
The Economist describes the people affected: ‘The heart of the bill is its concept of “biological sex”; lawmakers define it as “the physical condition of being male or female, which is stated on a person’s birth certificate”. This definition is fraught for several reasons. First, as many as 1 in 1,500 babies are born with ambiguous genitalia that qualify them as “intersex”, though that designation was only used for the first time last week, when a Brooklyn-born, 55-year-old California resident received a revised birth certificate from New York City in the mail. Second, thousands of the 1.4m transgender Americans have had sex-reassignment surgery, which means that many people who were designated as male or female at birth now have “the physical condition” of being another gender. And for transgender people who retain the biological markers of their original gender identification (because they choose not to undergo surgery or cannot afford it), the fact of their sense of themselves remains. Many transgender women and men feel not only uncomfortable but endangered when being forced to use a bathroom that does not mesh with their identity. In a 2013 paper, Jody Herman, a scholar at the UCLA law school’s Williams Institute, discussed a survey finding that 70% of transgender people “reported being denied access, verbally harassed, or physically assaulted in public restrooms”.’ (More from CNN.)
Is there anything to actually worry about? The article continues: “No similar research bears out the theory that opening bathrooms to transgender people spurs sexual predators to put on lipstick and a dress to target women and young girls in public facilities. Last year, a coalition of organisations dedicated to preventing the abuse of women issued a letter addressing Mr Patrick’s worry. “As rape crisis centers, shelters, and other service providers who work each and every day to meet the needs of all survivors and reduce sexual assault and domestic violence throughout society”, they wrote, “we speak from experience and expertise when we state that these claims are false”. Texas Republicans say that strict gender segregation in public bathrooms is “common sense”, but their appeal to conventional wisdom is not borne out by the evidence. A police department official in Des Moines, Iowa, said he doubts that bathroom tolerance for trans people would “encourage” illicit behaviour. Sex offenders, he said, will find victims “no matter what the laws are”.”
Meanwhile, bathroom bill thinking shows a misunderstanding of the realities of sexual assault: ‘[Laura Palumbo, communications director at the National Sexual Violence Resource Center] said she believes people “must understand the facts about sexual assault,” adding that in 8 out of 10 cases the victim already knows the person who sexually assaulted them, citing Justice Department statistics. However, 64 percent of transgender people will experience sexual assault in their lifetime, she said, citing a study by the National Gay and Lesbian Task Force and National Center for Transgender Equality.’
All of this said, other than “there is no evidence,” it’s surprisingly hard to find any numbers and reports on the question of “is this actually a problem,” based on past crime and incidents. This is true for both sides of the issue!
That said, the conclusion would, thus, be that there’s no evidence based on historical data that there’s anything close to a material, actual problem (sexual assault) going on here. This is not only intellectually (and socially) frustrating, it also means that all the effort spent on bathroom bills is wasted and should have been spent on fixing real problems that could prevent actual sexual assault.
There’s a few stories out about Canonical, likely centered around a PR campaign: they’re seeking to IPO at some point and are shifting the company around appropriately. Here’s some highlights from the recent spate of news around Canonical.
Testing the Red Hat Theory, competing for the cloud-native stack
Why care? Aside from Canonical just being interesting – they’ve been first and/or early to many cloud technologies and containers – there’d finally be another Red Hat if they were public.
Most of the open source thought-lords agree that “there can never be another Red Hat,” so we’ll see if the Ubuntu folks can pull it off. Or, at the very least, how a pure open source company wangles it out otherwise.
That said, SUSE (part of HPE/Micro Focus) has built an interesting business around Linux, OpenStack, and related stuff. Ever since disentangling from Novell, SUSE has had impressive growth (usually somewhere between 20 and 25% a year in revenue). All of which is to say: the Red Hat model actually is being used successfully by SUSE, which, arguably, just suffered from negative synergies (or, for those who don’t like big words, “shit the bed”) when it was owned by Novell.
As I’m perhaps too fond of contextualizing, it’s also good to remember that Red Hat is still “just” a $2.5bn company, by revenue. Revenue was $1.5bn in 2014, so that’s still very impressive growth; but it’s been a long, 24-year journey.
Shuttleworth said “in the last year, Ubuntu cloud growth had been 70 percent on the private cloud and 90 percent on the public cloud.” In particular, “Ubuntu has been gaining more customers on the big five public clouds.”
Its OpenStack cloud division has been profitable, said Shuttleworth, since 2015.
[Canonical] now has more than 700 paying customers and sees a $1bn business for its OS, applications and IT operations software. Time will tell if this goal is realized.
Canonical claims some 700 customers paying for its support services on top of Ubuntu and other offerings (double the 350 it had three years ago), and to have achieved more than $100m in bookings in its last financial year…. [Overall, it’s] not yet a profitable business (although its Ubuntu unit is). We estimate GAAP revenue of about $95m.
More than 80 workers at Ubuntu-maker Canonical are facing the chop as founder Mark Shuttleworth takes back the role of chief executive officer…. 31 or more staffers have already left the Linux distro biz ahead of Shuttleworth’s rise, with at least 26 others now on formal notice and uncertainty surrounding the remainder
Back to Al on the Job to Be done, building and supporting those new cloud-native platforms:
Rather than offering ways to support legacy applications, the company has placed bets on its Ubuntu operating system for cloud-native applications, OpenStack IaaS for infrastructure management, and Docker and Kubernetes container software.
And, it seems to be working:
Supporting public cloud providers has been a success story for Canonical – year-over-year revenue grew 91% in this area…. Per Canonical, 70% of the guest OS images on AWS and 80% of the Linux images on Microsoft Azure are Ubuntu. Its bare-metal offering, MaaS (Metal as a Service), is now used on 80,000 physical servers.
On OpenStack in particular:
Canonical claims to be building 4,000 OpenStack deployments a month at some 180 vendors…. It claims multiple seven-figure deals (through partners) for its BootStrap managed OpenStack-as-a-service offering, and that the average deal size for OpenStack is trending upward.
Still, there is “no timeline for the IPO.” First, Shuttleworth wants all parts of the slimmed down Canonical to be profitable. Then “we will take a round of investment.” After that, Canonical will go public.
Thus far, it seems like the large banks are fending off digital disruption, perhaps embracing some of it on their own. The Economist takes a look:
“Peer-to-peer lending, for instance, has grown rapidly, but still amounted to just $19bn on America’s biggest platforms and £3.8bn in Britain last year”
“last year JPMorgan Chase spent over $9.5bn on technology, including $3bn on new initiatives”
From a similar piece in the NY Times: “The consulting firm McKinsey estimated in a report last month that digital disruption could put $90 billion, or 25 percent of bank profits, at risk over the next three years as services become more automated and more tellers are replaced by chatbots.”
But: “Much of this change, however, is now expected to come from the banks themselves as they absorb new ideas from the technology world and shrink their own operations, without necessarily losing significant numbers of customers to start-ups.”
Back to The Economist piece: “As well as economies of scale, they enjoy the advantage of incumbency in a heavily regulated industry. Entrants have to apply for banking licences, hire compliance staff and so forth, the costs of which weigh more heavily on smaller firms.”
Regulations and customer loyalty are less in China, resulting in more investment in new financial tech in Asia:
As another article puts it: “China has four of the five most valuable financial technology start-ups in the world, according to CB Insights, with Ant Financial leading the way at $60 billion. And investments in financial technology rose 64 percent in China last year, while they were falling 29 percent in the United States, according to CB Insights.”
Why? “The obvious reason that financial start-ups have not achieved the same level of growth in the United States is that most Americans already have access to a relatively functional set of financial products, unlike in Africa and China.”
There’s some commentary on how blockchain can reduce multi-day bank transfers (and payments) to, I assume, minutes. Thus: ‘“Blockchain reduces the cost of trust,” says Mr Lubin of ConsenSys.’
Fixing legacy problems with new platforms, not easy
Mainframes are a problem, as a Gartner report from last year puts it: “The challenge for many of today’s modernization projects is not simply a change in technology, but often a fundamental restructuring of application architectures and deployment models. Mainframe hardware and software architectures have defined the structure of applications built on this platform for the last 50 years. Tending toward large-scale, monolithic systems that are predominantly customized, they represent the ultimate in size, complexity, reliability and availability.”
But, unless/until there’s a crisis, changes won’t be funded: “Banks need to be able to justify the cost and risk of any modernization project. This can be difficult in the face of a well-proven, time-tested portfolio that has represented the needs of the banking system for decades.”
Sort of in the “but wasn’t that always the goal?” vein, but from that same article, Gartner suggests the vision for new fintech: ‘Gartner, Hype Cycle for Digital Banking Transformation, 2015, says, “To be truly digital, banks must pair an emphasis on customer-facing capabilities with investment in the technical, architectural, analytic and organizational foundations that enable participation in the financial services ecosystem.”’
A bit correlation-y, but still useful, from that BCG piece: “While past performance is no guarantee of future results, and even though all the company’s results cannot be entirely attributed to BBVA’s digital transformation plan, so far many signs are encouraging. The number of BBVA’s digital customers increased by 68% from 2011 to 2014, reaching 8.4 million in mid-2014, of which 3.6 million were active mobile users. Because of the increasing use of digital channels and efforts to reconfigure the bank’s branch network—creating smaller branches that emphasize customer self-service and larger branches that provide higher levels of personalized advice through a remote cross-selling support system—BBVA achieved a reduction in costs of 8% in 2014, or €340 million, in the core business in Spain. Meanwhile, the bank’s net profits increased by 26% in 2014, reaching €2.6 billion.”
And a more recent write-up of JPMC’s cloud-native programs, e.g.: ‘“We aren’t looking to decrease the amount of money the firm is spending on technology. We’re looking to change the mix between run-the-bank costs versus innovation investment,” he said. “We’ve got to continue to be really aggressive in reducing the run-the-bank costs and do it in a very thoughtful way to maintain the existing technology base in the most efficient way possible.” …Dollars saved by using lower-cost cloud infrastructure and platforms will be reinvested in technology, he said.’ JPMC, of course, is a member of the Cloud Foundry Foundation which means, you know, they’re into that kind of thing.
One of the more common questions I’ve had over the years is: “but, surely, everyone is just in the public cloud, right?” I remember having a non-productive debate with a room full of Forrester analysts back in about 2012 where they were going on and on about on-premises IT being dead. There was much talk about electricity outlets. To be fair, the analysts were somewhat split, but the public cloud folks were adamant. You can see this same sentiment from analysts (including, before around 2011, myself!) in things like how long it’s taken to write about private PaaS (e.g., the PaaS Magic Quadrant has only covered public PaaS since inception).
Some 65% of enterprise workloads reside in enterprise owned and operated data centers—a number that has remained stable since 2014, the report found. Meanwhile, 22% of such workloads are deployed in colocation or multi-tenant data center providers, and 13% are deployed in the cloud, the survey found….
On-prem solutions remain dominant in the enterprise due to massive growth in business critical applications and data for digital transformation, Uptime Institute said
That survey also found that nearly half of enterprises were still dedicating at least 70% of their yearly budget to traditional, on-premise applications, potentially pointing to growing demand for a hybrid infrastructure….
Nearly 75% of companies’ data center budgets increased or stayed consistent in 2017, compared to 2016, the survey found.
Metrics, KPIs, and what organizations are focusing on (uptime):
More than 90% of data center and IT professionals surveyed said they believe their corporate management is more concerned about outages now than they were a year ago. And while 90% of organizations conduct root cause analysis of an IT outage, only 60% said that they measure the cost of downtime as a business metric, the report found.
Demographics: “responses from more than 1,000 data center and IT professionals worldwide.”
Pretty much all Pivotal Cloud Foundry customers run “private cloud.” Many of them want to move to public cloud in a “multi-cloud” (I can’t make myself say “hybrid cloud”) fashion, or mostly public cloud, over the next five to ten years. That’s why we support all the popular public clouds. Most of them are doing plenty of things in public cloud now – though, not anywhere near “a whole lotta” – and there are, of course, outliers.
This does bring up a nuanced but important point: I didn’t check the types of workloads in the survey. I’d suspect that much of the on-premises workload is packaged software. There’s no doubt plenty of custom-written applications run on-premises – even the majority of them, per my experience with the Pivotal customer base. However, I’d still suspect that more custom-written applications are running in the public cloud than other workload types. Just think of all the mobile apps and marketing apps out there.
[Q3, FY 2017] Also of note, we closed our second OpenShift deal over $10 million and another OpenShift deal over $5 million. And significantly, we actually had over 50 OpenShift deals alone that were six or seven figures, so really strong traction. [Q4, FY 2017] With our largest deals in Q4, approximately one-third had an OpenShift Container Platform component.
During the final three months of last year, subscription revenue for Red Hat’s application development-related [JBoss, etc] and other emerging technologies – which includes OpenShift – hit $125 million, a 40 percent increase from the same period in 2015, and revenue for the group accounted for about 20 percent of Red Hat’s overall revenues for the fourth quarter.
Today, we also announced that Barclays Bank, the Government of British Columbia’s Office of the CIO, and Macquarie Bank are also using Red Hat OpenShift Container Platform to modernize application development…. airplane manufacturer Airbus about their DevOps journey, and digital travel platform Amadeus about their transformation of handling 2,000x the number of online transactions…. how Amsterdam’s Schiphol Airport (AMS) is using OpenShift to redefine the in-terminal travel experience, how Miles & More GmbH is better managing rewards programs for travelers, and how ATPCO is rethinking how they publish fare-related data to the airline and travel industry.
Much of the write-up focuses on community momentum, true to Red Hat, open source form:
The OpenShift Commons community has 260+ member organizations….
Red Hat engineers lead or co-lead in 10 of the 24 Kubernetes SIG activities.
Finally, some commentary on their strategic shift to Kubernetes:
The huge architectural shift that we made a few years ago in adopting open standards for containers and the Kubernetes container scheduler has allowed us to deliver a unified platform to containerize existing applications and deliver agility and scalability for cloud-native applications and microservices. We call this combination Enterprise Kubernetes+, or Enterprise-Ready Kubernetes.
I’m doing a podcast with Comcast in a few weeks, so I’ve been going over all their public talks on their cloud-native efforts. They’ve been working with Pivotal since around 2014 and are one of the more impressive customer cases, with over 1,000 applications now on Pivotal Cloud Foundry.
Here are some highlights from the talks I’ve been watching. As always, things I put in square brackets are my own comments, the rest are quotes or summaries of what people said:
(17:00) Every deployment to production took at least 6 weeks, but most commonly around 2 months end-to-end. Which also means you need to plan capacity much in advance.
We started to use virtualization and containerization “well, well before Docker existed… it was some success, we had some improvements, but those improvements were marginal.”
Traditionally, it’d take at least 4-6 months to set up your dev/test infrastructure. But, luckily, virtualization came along.
(9:20) Business drivers… Comcast phone service, set-top boxes get DVRs, VoD, etc. All of these require apps on the backend, so the portfolio of apps starts to grow, and with the way they were before, it meant they had to build a new datacenter every six months. Virtualization helped here, of course.
Also, virtualization allowed us to put a service layer [think “platform”] on-top of the infrastructure.
It’d take 4-6 weeks for testing environment, but now it takes 10-15 minutes in a self-service portal.
Demo of using Pivotal Cloud Foundry for much of the automation needed to deploy and scale an application.
(~32:00) We used to have things like “order servers” and “make load-balancer changes” and somewhere in the bottom of the backlog was “write some code and do some testing.” [That is, they were focusing on items with low business value, below “the value line,” rather than customer features.]
“What Cloud Foundry essentially helped us with was to get all those unnecessary user stories out of our backlog so we can focus on the writing code, on testing, and deploying rather than managing infrastructure.”
Would take 3 months to get a server useful, from moment of purchasing to using.
“Over a 100 services run by development teams.”
In functional, silo roles.
(3:45) “We knew we had that large, rigid infrastructure. [Pivotal] Cloud Foundry and its adoption really enables us to change that to gain the agility, to gain the elasticity at scale.”
Taking away siloed roles to reduce finger-pointing and all the negative stuff, and unifying the team, of course.
(7:35) Anecdote of Nick going from “ops guy” to writing code and liking coding.
(12:18) The ESP router: a small router written in Go to translate SOAP requests as part of a strangler pattern. They had a decades-old SOA layer that they wanted to modernize, but they couldn’t just strip it out – that would take too long. So, they duck-type as SOA but do REST and microservices underneath: the strangler pattern. That’s what the ESP router does: it marshals and unmarshals between the microservices and the SOAP stuff. New things, though, get done in the new style.
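That SOAP-to-REST translation is the heart of the strangler pattern here. As a rough sketch in Go – the request shapes and field names below are my own invention for illustration, not Comcast’s actual ESP router code – the marshalling layer might look like:

```go
package main

import (
	"encoding/json"
	"encoding/xml"
	"fmt"
)

// A hypothetical legacy SOAP-style request body the old SOA callers send.
type soapGetAccount struct {
	XMLName   xml.Name `xml:"GetAccountRequest"`
	AccountID string   `xml:"AccountId"`
}

// The JSON shape a hypothetical new REST microservice speaks.
type restAccount struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// toREST translates the legacy XML request into a JSON payload for the
// new microservice -- the "marshal" half of the router's job.
func toREST(soapBody []byte) ([]byte, error) {
	var req soapGetAccount
	if err := xml.Unmarshal(soapBody, &req); err != nil {
		return nil, err
	}
	return json.Marshal(restAccount{ID: req.AccountID})
}

// toSOAP translates the microservice's JSON response back into the XML
// the legacy callers still expect -- the "unmarshal" half.
func toSOAP(jsonBody []byte) ([]byte, error) {
	var acct restAccount
	if err := json.Unmarshal(jsonBody, &acct); err != nil {
		return nil, err
	}
	type soapResp struct {
		XMLName xml.Name `xml:"GetAccountResponse"`
		ID      string   `xml:"AccountId"`
		Status  string   `xml:"Status"`
	}
	return xml.Marshal(soapResp{ID: acct.ID, Status: acct.Status})
}

func main() {
	// A legacy caller sends XML; the router forwards JSON to the new service.
	legacy := []byte(`<GetAccountRequest><AccountId>42</AccountId></GetAccountRequest>`)
	jsonReq, _ := toREST(legacy)
	fmt.Println(string(jsonReq))

	// Pretend the new microservice answered; translate back for the caller.
	xmlResp, _ := toSOAP([]byte(`{"id":"42","status":"active"}`))
	fmt.Println(string(xmlResp))
}
```

The point of the pattern is that the old SOAP clients never notice: they keep calling what looks like the SOA layer while, service by service, the implementations behind the router move to REST and microservices.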
Also, “de-mingling data,” moving off Oracle RAC/GoldenGate for multi-site. Some simpler CRUD services to front the data.
(~15:00) Used to take a week+ to deploy the entire stack, but with Pivotal Cloud Foundry it takes minutes. It gives us a great deal of velocity that we’ve never had before. “Sometimes we’ll deploy multiple times an hour.”
(17:00) From thousands of lines of bash to deploy out to various WebLogic clusters, most of which has now moved to Cloud Foundry.
Improving production updates: bringing new node up and shutting old node down slowly; canary updates, with a CI test suite, then switching over to a production install.
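A minimal sketch of the canary traffic-shifting idea – purely illustrative, not Comcast’s or Cloud Foundry’s actual router logic: send a fixed percentage of requests to the new version, keep the rest on the stable one, and only cut over fully once the CI suite and production behavior look good. A deterministic splitter makes the idea concrete:

```go
package main

import "fmt"

// canarySplitter routes roughly `percent` of requests to the canary
// version and the rest to the stable one. It is deterministic (no
// randomness) so the split is spread evenly across the request stream.
type canarySplitter struct {
	percent int // target canary share, 0..100
	seen    int // requests routed so far
}

// route returns which version the next request should go to. Request i
// goes to the canary exactly when it bumps the running canary share up
// toward the target percentage.
func (c *canarySplitter) route() string {
	c.seen++
	if (c.seen*c.percent)/100 > ((c.seen-1)*c.percent)/100 {
		return "canary"
	}
	return "stable"
}

func main() {
	// 10% canary: out of 100 requests, 10 hit the new version.
	s := &canarySplitter{percent: 10}
	counts := map[string]int{}
	for i := 0; i < 100; i++ {
		counts[s.route()]++
	}
	fmt.Println(counts["canary"], counts["stable"]) // prints: 10 90
}
```

In practice you’d ratchet `percent` upward as confidence grows, then drain and shut down the old nodes – the “bring the new node up, shut the old node down slowly” motion described above.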
August 1st, 2016 – James Taylor – The Power of Partnership & Building a Cloud Native Tier-1 Platform
“Helping someone put a smile on their face is one of the greatest gifts we can give each other.”
Their VP provides the feedback loop of things to focus on. Right now: reducing technical debt, reducing incidents, increasing velocity, experimentation.
(~6:30) “You can’t move forward – innovate – if you don’t have time to try new things.”
(~18:35) “If you’re spending time configuring a Docker container, that’s time you’re not spending coding or solving a problem.”
(13:51): “At the end of the day, [business] value is what puts money in everyone’s pocket. If our company, Comcast, can’t create something of value, no one’s gonna pay for us…if we can’t create value. So it’s important for us to understand ‘how can you create value?’”
(~22:02, starting epic rant!) “Who is our customer and what value do we bring to our customers…”
If you’re spending money on support, that’s cutting into your margins. A call coming in costs $8 right off the bat, then more as it takes longer. So you want to figure out preventing customer support problems… which points to understanding your customers more.
[A good overview of thinking about “value” in the context of a specific application, their customer activation center, Sparrow.] “If you have a [support] call rate of 30%, you’re probably cutting out all the value… So we try to figure out, how do we prevent calls?” [Very similar to IRS cloud-native story.]
Sparrow: 5 junior Java developers… we built it from scratch in parallel while existing teams maintained the platform… we then had to integrate the processes together… figure out decomposing the monolith platforms, etc….then we had to just cut off stuff when it was too much of a hassle.
August 17th, 2016 – Greg Otto SpringOne Platform keynote
“If Comcast has a problem to solve, there are three possible approaches: solve it themselves by making an investment in teams and resources; solve it through a commercial vendor that could build a product for them; or work with the open source community.”
OpenStack: “In addition to Linux, Comcast is a heavy user of OpenStack. They use a KVM hypervisor, and then a lot of data center orchestration is done through OpenStack for the coordination of storage and networking resources with compute and memory resources. Muehl said that Comcast has roughly a petabyte of memory and around a million virtual CPU cores that they are running under the OpenStack umbrella. As an operator, Comcast does a lot of things around operations, and they use Ansible to deploy and manage OpenStack at scale.”
Cloud Foundry: “They also use Cloud Foundry, but according to Muehl that work is in the very early stages at Comcast.”
May 2015 – Running Cloud Foundry at Comcast talk
Neville George, Sam Guerrero, Tim Leong, Sergey Matochkin
They wanted to make custom URLs.
Used Puppet for stuff.
(~8:30) Their requirements for a platform:
A lot of emphasis on self-service and the microservices benefit of operating independently, product-management wise.
They use OpenStack, Docker, and [Pivotal] Cloud Foundry.
Pre-provisioning resources for a pool of containers that are ready to go, etc.
(~27) a couple applications in production today… we’ll be ramping up quickly.
(Either this video or the 2016 one, a few minutes from the end) Q, training mode. A, Sergey: “I can’t say we have a really good training model…. We do brown-bags to have people aware. We focus on the 12-factor application model… on the overall microservices model, not just to shape the application, but also the data. Developers need to understand how they [do] applications for PaaS instead of traditional.”
What with all the retrospective stuff, you need to be able to get teams together, physically. The collaboration angles are much better in person.
Set up each “shore” as an architectural and management island; make them as independent as possible. They also need their own context, not held up by time zones, so they don’t need to wait 24-48 hours for authorizations and collaboration. [To my mind, this means taking advantage of the organizational de-coupling you can get with microservices.]
Starting change, even when the company needs it. Amy: you have to start with the business need – what’s the big driver behind a change like DevOps? [Managers often don’t make sure they figure this out, let alone disseminate it to staff.]
Video: “In 2017 Amazon is expected to spend $4.5bn on television and film content, roughly twice what HBO will spend. But it has a big payoff.”
Prime momentum: “Mr Nowak reckons the company had 72m Prime members last year, up by 32% from 2015.”
Cloud: “Last year AWS’s revenue reached $12bn, up by more than 150% since 2014.”
Anti-trust, in the US: “If competitors fail to halt Amazon’s whirl of activities, antitrust enforcers might yet do so instead. This does not seem an imminent threat. American antitrust authorities mainly consider a company’s effect on consumers and pricing, not broader market power. By that standard, Amazon has brought big benefits.”
Automation across the life cycle is critical to being successful.
A regular and positive relationship must exist between the owner of the application and the developers of the changes.
This kind of effort may seem insurmountable for a large legacy portfolio. However, an organization doesn’t have to attack the entire portfolio. Determine where the primary value can be achieved and focus there. Which areas of the portfolio are most impacted by business requests? Target the areas with the most value.
An example of possible change:
About 10 years ago, a large European bank rebuilt its core banking system on the mainframe using COBOL. It now does agile development for both mainframe COBOL and “channel” Java layers of the system. The bank does not consider that it has achieved DevOps for the mainframe, as it is only able to maintain a cadence of monthly releases. Even that release rate required a significant investment in testing and other automation. Fortunately, most new work happens exclusively in the Java layers, without needing to make changes to the COBOL core system. Therefore, the bank maintains a faster cadence for most releases, and only major changes that require core updates need to fall in line with the slower monthly cadence for the mainframe. The key to making agile work for the mainframe at the bank is embracing the agile practices that have the greatest impact on effective delivery within the monthly cadence, including test-driven development and smaller modules with fewer dependencies.
It seems impossible, but you should try:
Improving the state of a decades-old system is often seen as a fool’s errand. It provides no real business value and introduces great risk. Many mainframe organizations Gartner speaks to are not comfortable doing this much invasive change and believing that it can ensure functional equivalence when complete! Restructuring the existing portfolio, eliminating dead code and consolidating redundant code are further incremental steps that can be done over time. Each application team needs to improve the portfolio that it is responsible for in order to ensure speed and success in the future. Moving to a services-based or API structure may also enable changes to be done effectively and quickly over time. Some level of investment to evolve the portfolio to a more streamlined structure will greatly increase the ability to make changes quickly and reliably. Trying to get faster with good quality on a monolithic hairball of an application is a recipe for failure. These changes can occur in an evolutionary way. This approach, referred to in the past as proactive maintenance, is a price that must be paid early to make life easier in the future.
You gotta have testing:
Test cases are necessary to support automation of this critical step. While the tooling is very different, and even the approaches may be unique to the mainframe architecture, they are an important component of speed and reliability. This can be a tremendous hurdle to overcome on the road to agile development on the mainframe. This level of commitment can become a real roadblock to success.
Another example of an organization gradually changing:
When a large European bank faced wholesale change mandated by loss of support for an old platform, it chose to rewrite its core system in mainframe COBOL (although today it would be more likely to acquire an off-the-shelf core banking system). The bank followed a component-based approach that helped position it for success with agile today by exposing its core capabilities as services via standard APIs. This architecture did not deliver the level of isolation the bank could achieve with microservices today, as it built the system with a shared DBMS back-end, as was common practice at the time. That coupling with the database and related data model dependencies is the main technical obstacle to moving to continuous delivery, although the IT operations group also presents cultural obstacles, as it is satisfied with the current model for managing change.
A reminder: all we want is a rapid feedback cycle:
The goal is to reduce the cycle time between an idea and usable software. In order to do so, the changes need to be smaller, the process needs to be automated, and the steps for deployment to production must be repeatable and reliable.
The ALM technology doesn’t support mainframes, and mainframe ALM stuff doesn’t support agile. A rare case where fixing the tech can likely fix the problem:
The dilemma mainframe organizations may face is that traditional mainframe application development life cycle tools were not designed for small, fast and automated deployment. Agile development tools that do support this approach aren’t designed to support the artifacts of mainframe applications. Modern tools for the building, deploying, testing and releasing of applications for the mainframe won’t often fit. Adapting existing mainframe software version control and configuration management tools for a new agile approach to development will take some effort — if they will work at all.
Use APIs to decouple the way, norms, and road-map of mainframes from the rest of your systems:
wrapping existing mainframe functions and exposing them as services does provide an intermediate step between agile on the mainframe and migration to environments where agile is more readily understood.
There is much more to this topic. Nick Carr’s book, The Glass Cage, has a different perspective. The ramifications of new technology (don’t call it automation) are notoriously difficult to predict, and what we think are foregone conclusions – like the unemployment of truck drivers, even though the tech for self-driving cars needs to see much more diversity of conditions before it can get to 99%+ accuracy – are not.
This paper suggests that the increased interest in human factors among engineers reflects the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator.
When things go wrong, humans are needed:
To take over and stabilize the process requires manual control skills, to diagnose the fault as a basis for shut down or recovery requires cognitive skills.
But their skills may have deteriorated:
Unfortunately, physical skills deteriorate when they are not used, particularly the refinements of gain and timing. This means that a formerly experienced operator who has been monitoring an automated process may now be an inexperienced one. If he takes over he may set the process into oscillation. He may have to wait for feedback, rather than controlling by open-loop, and it will be difficult for him to interpret whether the feedback shows that there is something wrong with the system or more simply that he has misjudged his control action.
There’s a good case made not only for the need for humans, but for keeping humans fully trained and involved in the process to handle error states.
For the book, I interviewed practitioners in 50 different work settings – accounting, advertising, manufacturing, garbage collection, wineries etc. Each one of them told me where automation is maturing, where it is not, how expensive it is etc. The litmus test to me is are they stopping the hiring of human talent – and I heard NO over and over again even for jobs for which automation tech has been available for decades – UPC scanners in groceries, ATMs in banking, kiosks and bunch of other tech in postal service. So, instead of panicking about catastrophic job losses we should be taking a more gradualist approach and moving people who do repeated tasks all day long and move them into more creative, dexterous work or moving them to other jobs.
I think Avent’s worry is that the approach won’t be gradual and that, as a society, we won’t be able to change norms, laws, and “work” over fast enough.
Our results to date suggest, first and foremost, that a focus on occupations is misleading. Very few occupations will be automated in their entirety in the near or medium term. Rather, certain activities are more likely to be automated, requiring entire business processes to be transformed, and jobs performed by people to be redefined, much like the bank teller’s job was redefined with the advent of ATMs.
our research suggests that as many as 45 percent of the activities individuals are paid to perform can be automated by adapting currently demonstrated technologies… fewer than 5 percent of occupations can be entirely automated using current technology. However, about 60 percent of occupations could have 30 percent or more of their constituent activities automated.
Most work is boring:
Capabilities such as creativity and sensing emotions are core to the human experience and also difficult to automate. The amount of time that workers spend on activities requiring these capabilities, though, appears to be surprisingly low. Just 4 percent of the work activities across the US economy require creativity at a median human level of performance. Similarly, only 29 percent of work activities require a median human level of performance in sensing emotion.
So, as Vinnie also suggests, you can automate all that stuff and have people focus on the “creative” things, e.g.:
Financial advisors, for example, might spend less time analyzing clients’ financial situations, and more time understanding their needs and explaining creative options. Interior designers could spend less time taking measurements, developing illustrations, and ordering materials, and more time developing innovative design concepts based on clients’ desires.
When it comes to IT services and BPO, it’s no longer about “location, location, location”, it’s now all about “skills, skills, skills”.
Instead of “commodity” capabilities (things like password resets, routine programming changes, etc.), companies want more highly-skilled, innovative capabilities. Either offshorers need to provide this, or companies will in-source those skills.
Because offshorers typically don’t focus on such “open ended” roles, analysis of the survey suggests offshorers will have less business, at least new business:
aspirations for offshore use between the 2014 and 2017 State of the Industry studies, we see a significant drop, right across the board, with plans to offshore services.
an increasing majority of customers of traditional shared services and outsourcing feel they have wrung most of the juice offshore has to offer from their existing operations, and aren’t looking to increase offshore investments.
Given the large volume of IT work that companies offshore, and how this outsourcing tends to control/limit IT capabilities, paying attention to these trends can help you predict the ongoing “nature of IT” at large organizations.
For the full 2016 year, IBM’s revenues were off 2.1 percent to $79.85 billion, but its “real” systems business, which includes servers, storage, switching, systems software, databases, transaction monitors, and tech support and financing for its own iron, fell by 8.3 percent to $26.1 billion.
Changing the revenue mix:
IBM’s efforts to promote SoftLayer cloud and Watson cognitive computing, mobile and social and marketing software and tools, and security wares – what it calls its strategic imperatives – are almost filling in the gap left behind as the core businesses shrink. IBM wanted these strategic imperative businesses to reach $40 billion and 40 percent of revenues by 2018, and in this quarter it already hit the 40 percent mark, with $33 billion in revenues for 2016–as much because of its overall revenue decline as for the growth in these businesses.
And, some info on their hardware revenue:
IBM sold just over $8 billion in Systems products, and brought $934 million to the middle line as pre-tax income
Schroeter said that Linux-based Power Systems machines now drove 15 percent of revenues, and that is pretty good considering that two years ago it was a few percent of sales.
One could have predicted Mr. Thiel’s affinity for Mr. Trump by reading his 2014 book, “Zero to One,” in which he offers three prongs of his philosophy: 1) It is better to risk boldness than triviality. 2) A bad plan is better than no plan. 3) Sales matter just as much as product.
Some Rumsfeldian-level poetry… except the Rumsfeld one was easier to decipher:
When I ask about the incestuous amplification of the Facebook news feed, he muses: “There’s nobody you know who knows anybody. There’s nobody you know who knows anybody who knows anybody, ad infinitum.”
Avoiding being all rainbows:
“If you’re too optimistic, it sounds like you’re out of touch,” he says. “The Republicans needed a far more pessimistic candidate. Somehow, what was unusual about Trump is, he was very pessimistic but it still had an energizing aspect to it.”
What exactly do they do?
“One of the things that’s striking about talking to people who are politically working in D.C. is, it’s so hard to tell what any of them actually do,” he says. “It’s a sort of place where people measure input, not output. You have a 15-minute monologue describing a 15-page résumé, starting in seventh grade.”
Small systems are more flexible and malleable; therefore, experiments are easier. Some experiments would work well, others wouldn’t. Because they would keep things small and flexible, however, it would be easy to throw away the mistakes. This would enable the team to pivot, meaning they could change direction based on recent results. It is better to pivot early in the development process than to realize well into it that you’ve built something nobody likes.
Google calls this “launch early and often.” Launch as early as possible even if that means leaving out most of the features and launching to only a few select users. What you learn from the early launches informs the decisions later on and produces a better service in the end.