The Bathroom Bill, Texas SB6 – Notebook

As you can imagine, things like the so-called “bathroom bill” drive me crazy. It also makes me sad about whatever happened to my fellow Texans who support it that they’d be this cruel, uninformed, and ignorant. And, of course, there are the people affected.

Stealing some of Matt Ray’s notes for our Software Defined Talk recording, here’s a notebook and highlights on the topic.

  • The Hillbillies are obsessed with bathrooms
    • It’s really depressing how aggressively stupid Texas is sometimes. I don’t blame anyone avoiding it.
    • “The consequences of this bill are beyond severe. Not only can transgender people be arrested and jailed for using the bathroom, but they will be assumed to be pedophiles, and be put on the Texas sexual predator watch list. So not only is there the possibility of being hauled off to jail during a conference, the arrest will ruin the rest of your life. Just because you need to pass some water.”
  • Current status: the bill is having trouble in the Senate; part of it is about removing a requirement to provide multi-user bathrooms in schools.
    • More: “The differences on the bathroom bill are substantial. The Senate would require transgender Texans to use the restrooms in publicly owned buildings that match their biological sex and would bar local governments from adopting or maintaining their own laws on the subject. The House version would apply only to elementary and secondary schools; after it passed last weekend, Patrick and others criticized it as a change that does very little.”

How’d it go in North Carolina?

  • AP analysis of economic effect in North Carolina, from March 2017:
    Losses of $313m a year – “$3.76 billion in lost business over a dozen years.”
    Some examples, not just bleeding-heart tech companies: “Those include PayPal canceling a 400-job project in Charlotte, CoStar backing out of negotiations to bring 700-plus jobs to the same area, and Deutsche Bank scuttling a plan for 250 jobs in the Raleigh area. Other companies that backed out include Adidas, which is building its first U.S. sports shoe factory employing 160 near Atlanta rather than a High Point site, and Voxpro, which opted to hire hundreds of customer support workers in Athens, Georgia, rather than the Raleigh area.”
    Most of it is from businesses like PayPal and Deutsche Bank pulling out – good for them!

    • “Bank of America CEO Brian Moynihan — who leads the largest company based in North Carolina — said he’s spoken privately to business leaders who went elsewhere with projects or events because of the controversy, and he fears more decisions like that are being made quietly.”
  • For context, The North Carolina economy: “In 2010 North Carolina’s total gross state product was $424.9 billion. In 2011 the civilian labor force was at around 4.5 million with employment near 4.1 million. The working population is employed across the major employment sectors.”
  • So, a rough estimate of the economic impact is a decrease of 0.07% a year (a bad number, since it’s based on 2010 GDP and other forward-looking estimates, but it gives you a ball-park sense; the arithmetic is sketched below). And see the scenario for larger future impact below (I mean, not to mention being dick-heads and treating people as subhuman for no good reason other than being fucking social-idiots).
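For the curious, that ball-park figure is just the AP’s annual-loss estimate divided by North Carolina’s (stale, 2010) gross state product. A minimal check of the arithmetic, using only the numbers quoted above:

    # Back-of-the-envelope: AP's estimated annual loss vs. NC's 2010 gross state product.
    annual_loss = 313e6    # $313m a year, per the AP analysis
    gsp_2010 = 424.9e9     # $424.9bn, 2010 NC gross state product
    print(f"{annual_loss / gsp_2010:.2%} of GSP per year")  # -> 0.07% of GSP per year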

Money and jobs prospects for Texas

  • Back to Texas, the next 10 years are critical for North Texas. Many large, international enterprises are setting up big campuses up there in DFW.
    For example, Toyota relocated their NA headquarters there.

    • For Toyota, this means something on the order of 1,000 new jobs in Texas, with an estimated 2,800 existing employees who’ll move to Texas. That’s a lot of new HEB customers, home buyers, and taxpayers.
  • Now, think of other G2000 companies that would want to move to Texas, or beef up their existing presence. Those companies will be deciding what to do in the next 2-3 years, and if they pass on Texas, that will be decades of lost cash, not to mention lost new Texans.
  • Also, from Texas Association of Business: “The business group released a study last month warning that legislation like the transgender bathroom bill could cost the state economy up to $8.5 billion a year and threaten 185,000 jobs.” (Meanwhile, that organization has “remained neutral.”)

Why in the first place?

  • So, what’s the big deal for those in favor of it in the first place? Well, obviously, the idea that there’s “wide-stance” activity going on is bunk (more).
  • One can only conclude that supporters are confused (and, thus, afraid): there’s a fundamental disagreement about gender and sexuality. But, also, it’s just downright discriminatory. We’ve lived through this before with the gay marriage movement in the past 20 years and know how to spot veiled discrimination.
  • As one ACLU person put it: “that fundamentally [supporters of bathroom bills] just don’t think of transgender people as humans, and they try to erase trans people from existence.”
  • The Economist describes the people affected: ‘The heart of the bill is its concept of “biological sex”; lawmakers define it as “the physical condition of being male or female, which is stated on a person’s birth certificate”. This definition is fraught for several reasons. First, as many as 1 in 1,500 babies are born with ambiguous genitalia that qualify them as “intersex”, though that designation was only used for the first time last week, when a Brooklyn-born, 55-year-old California resident received a revised birth certificate from New York City in the mail. Second, thousands of the 1.4m transgender Americans have had sex-reassignment surgery, which means that many people who were designated as male or female at birth now have “the physical condition” of being another gender. And for transgender people who retain the biological markers of their original gender identification (because they choose not to undergo surgery or cannot afford it), the fact of their sense of themselves remains. Many transgender women and men feel not only uncomfortable but endangered when being forced to use a bathroom that does not mesh with their identity. In a 2013 paper, Jody Herman, a scholar at the UCLA law school’s Williams Institute, discussed a survey finding that 70% of transgender people “reported being denied access, verbally harassed, or physically assaulted in public restrooms”.’ (More from CNN.)
  • Is there anything to actually worry about? The article continues: “No similar research bears out the theory that opening bathrooms to transgender people spurs sexual predators to put on lipstick and a dress to target women and young girls in public facilities. Last year, a coalition of organisations dedicated to preventing the abuse of women issued a letter addressing Mr Patrick’s worry. “As rape crisis centers, shelters, and other service providers who work each and every day to meet the needs of all survivors and reduce sexual assault and domestic violence throughout society”, they wrote, “we speak from experience and expertise when we state that these claims are false”. Texas Republicans say that strict gender segregation in public bathrooms is “common sense”, but their appeal to conventional wisdom is not borne out by the evidence. A police department official in Des Moines, Iowa, said he doubts that bathroom tolerance for trans people would “encourage” illicit behaviour. Sex offenders, he said, will find victims “no matter what the laws are”.”
  • Meanwhile, bathroom bill thinking shows a misunderstanding of the realities of sexual assault: ‘[Laura Palumbo, communications director at the National Sexual Violence Resource Center] said she believes people “must understand the facts about sexual assault,” adding that in 8 out of 10 cases the victim already knows the person who sexually assaulted them, citing Justice Department statistics. However, 64 percent of transgender people will experience sexual assault in their lifetime, she said, citing a study by the National Gay and Lesbian Task Force and National Center for Transgender Equality.’
  • All of this said, other than “there is no evidence,” it’s surprisingly hard to find any numbers and reports on the question of “is this actually a problem,” based on past crime and incidents. This is true for both sides of the issue!
  • That said, the conclusion would thus be that there’s nothing close to a material, actual problem (sexual assault) going on here, based on the historical record. This is not only intellectually (and socially) frustrating, but it also means that all the effort spent on bathroom bills is wasted and should have been spent on fixing real problems that could prevent actual sexual assault.

Canonical refocusing on IPO’ing, momentum in cloud-native – Highlights

Canonical Party

There are a few stories out about Canonical, likely centered around a PR campaign: they’re seeking to IPO at some point and are shifting the company around appropriately. Here’s some highlights from the recent spate of news around Canonical.

Testing the Red Hat Theory, competing for the cloud-native stack

Why care? Aside from Canonical just being interesting – they’ve been first and/or early to many cloud technologies and containers – there’d finally be another Red Hat if they were public.

Most of the open source thought-lords agree that “there can never be another Red Hat,” so we’ll see if the Ubuntu folks can pull it off – or, at the very least, how a pure open source company wangles it out otherwise.

That said, SUSE (part of HPE/Micro Focus) has built an interesting business around Linux, OpenStack, and related stuff. Ever since disentangling from Novell, SUSE has had impressive growth (usually somewhere between 20 and 25% a year in revenue). All of which is to say: the Red Hat model actually is being used successfully by SUSE, which, arguably, just suffered from negative synergies (or, for those who don’t like big words, “shit the bed”) when it was owned by Novell.

As I’m perhaps too fond of contextualizing, it’s also good to remember that Red Hat is still “just” a $2.5bn company, by revenue. Revenue was $1.5bn in 2014, so that’s still very impressive growth; but it’s been a long, 24-year journey.

All these “Linux vendors,” like pretty much everyone else in the infrastructure software market, are battling for control over the new platform: that stack of cloud-y software that is defining “cloud-native,” using containers, and trying to enable the process/mindset/culture of DevOps. This is all in response to enterprises’ growing desire to be more strategic with IT.

Canonical momentum

From Steven J. Vaughan-Nichols:

Shuttleworth said “in the last year, Ubuntu cloud growth had been 70 percent on the private cloud and 90 percent on the public cloud.” In particular, “Ubuntu has been gaining more customers on the big five public clouds.”

And:

Its OpenStack cloud division has been profitable, said Shuttleworth, since 2015.

Al Sadowski has an extensive report on Canonical, mentioning:

[Canonical] now has more than 700 paying customers and sees a $1bn business for its OS, applications and IT operations software. Time will tell if this goal is realized.

And:

Canonical claims some 700 customers paying for its support services on top of Ubuntu and other offerings (double the 350 it had three years ago), and to have achieved more than $100m in bookings in its last financial year…. [Overall, it’s] not yet a profitable business (although its Ubuntu unit is). We estimate GAAP revenue of about $95m.

Strategy

On focusing the portfolio, shoring it up for better finances for an IPO:

we had to cut out those parts that couldn’t meet an investor’s needs. The immediate work is to get all parts of the company profitable.

To that end, as Alexander J. Martin reports:

More than 80 workers at Ubuntu-maker Canonical are facing the chop as founder Mark Shuttleworth takes back the role of chief executive officer…. 31 or more staffers have already left the Linux distro biz ahead of Shuttleworth’s rise, with at least 26 others now on formal notice and uncertainty surrounding the remainder

Back to Al on the job to be done, building and supporting those new cloud-native platforms:

Rather than offering ways to support legacy applications, the company has placed bets on its Ubuntu operating system for cloud-native applications, OpenStack IaaS for infrastructure management, and Docker and Kubernetes container software.

And, it seems to be working:

Supporting public cloud providers has been a success story for Canonical – year-over-year revenue grew 91% in this area…. Per Canonical, 70% of the guest OS images on AWS and 80% of the Linux images on Microsoft Azure are Ubuntu. Its bare-metal offering, MaaS (Metal as a Service), is now used on 80,000 physical servers.

On OpenStack in particular:

Canonical claims to be building 4,000 OpenStack deployments a month at some 180 vendors…. It claims multiple seven-figure deals (through partners) for its BootStrap managed OpenStack-as-a-service offering, and that the average deal size for OpenStack is trending upward.

On IPO’ing

The Vaughan-Nichols piece outlines Shuttleworth’s IPO plans:

Still, there is “no timeline for the IPO.” First, Shuttleworth wants all parts of the slimmed down Canonical to be profitable. Then “we will take a round of investment.” After that, Canonical will go public.

However, Al’s report says:

It is not seeking additional funding at this time.

Probably both are true, and the answer, as Shuttleworth says, is “well, in a few years once we get the company to be profitable.”


Banks are handling disruption well – Highlights

Thus far, it seems like the large banks are fending off digital disruption, perhaps embracing some of it on their own. The Economist takes a look:

  • “Peer-to-peer lending, for instance, has grown rapidly, but still amounted to just $19bn on America’s biggest platforms and £3.8bn in Britain last year”
  • “last year JPMorgan Chase spent over $9.5bn on technology, including $3bn on new initiatives”
  • From a similar piece in the NY Times: “The consulting firm McKinsey estimated in a report last month that digital disruption could put $90 billion, or 25 percent of bank profits, at risk over the next three years as services become more automated and more tellers are replaced by chatbots.”
  • But: “Much of this change, however, is now expected to come from the banks themselves as they absorb new ideas from the technology world and shrink their own operations, without necessarily losing significant numbers of customers to start-ups.”
  • Back to The Economist piece: “As well as economies of scale, they enjoy the advantage of incumbency in a heavily regulated industry. Entrants have to apply for banking licences, hire compliance staff and so forth, the costs of which weigh more heavily on smaller firms.”
  • Regulations and customer loyalty are weaker in China, resulting in more investment in new financial tech in Asia.
  • As another article puts it: “China has four of the five most valuable financial technology start-ups in the world, according to CB Insights, with Ant Financial leading the way at $60 billion. And investments in financial technology rose 64 percent in China last year, while they were falling 29 percent in the United States, according to CB Insights.”
  • Why? “The obvious reason that financial start-ups have not achieved the same level of growth in the United States is that most Americans already have access to a relatively functional set of financial products, unlike in Africa and China.”
  • There’s some commentary on how the speed of sharing blockchain updates can reduce multi-day bank transfers (and payments) to, I assume, minutes. Thus: ‘“Blockchain reduces the cost of trust,” says Mr Lubin of ConsenSys.’

Fixing legacy problems with new platforms, not easy

  • The idea of building banking platforms to clean up the decades of legacy integration problems.
  • Mainframes are a problem, as a Gartner report from last year puts it: “The challenge for many of today’s modernization projects is not simply a change in technology, but often a fundamental restructuring of application architectures and deployment models. Mainframe hardware and software architectures have defined the structure of applications built on this platform for the last 50 years. Tending toward large-scale, monolithic systems that are predominantly customized, they represent the ultimate in size, complexity, reliability and availability.”
  • But, unless/until there’s a crisis, changes won’t be funded: “Banks need to be able to justify the cost and risk of any modernization project. This can be difficult in the face of a well-proven, time-tested portfolio that has represented the needs of the banking system for decades.”
  • Sort of in the “but wasn’t that always the goal?” category, from that same article, Gartner suggests the vision for new fintech: ‘Gartner, Hype Cycle for Digital Banking Transformation, 2015, says, “To be truly digital, banks must pair an emphasis on customer-facing capabilities with investment in the technical, architectural, analytic and organizational foundations that enable participation in the financial services ecosystem.”’
  • BCG has a prescriptive piece for setting the strategy for all this, from Nov. 2015.

Case studies

  • A bit correlation-y, but still useful, from that BCG piece: “While past performance is no guarantee of future results, and even though all the company’s results cannot be entirely attributed to BBVA’s digital transformation plan, so far many signs are encouraging. The number of BBVA’s digital customers increased by 68% from 2011 to 2014, reaching 8.4 million in mid-2014, of which 3.6 million were active mobile users. Because of the increasing use of digital channels and efforts to reconfigure the bank’s branch network—creating smaller branches that emphasize customer self-service and larger branches that provide higher levels of personalized advice through a remote cross-selling support system—BBVA achieved a reduction in costs of 8% in 2014, or €340 million, in the core business in Spain. Meanwhile, the bank’s net profits increased by 26% in 2014, reaching €2.6 billion.”
  • And a more recent write-up of JPMC’s cloud-native programs, e.g.: ‘“We aren’t looking to decrease the amount of money the firm is spending on technology. We’re looking to change the mix between run-the-bank costs versus innovation investment,” he said. “We’ve got to continue to be really aggressive in reducing the run-the bank costs and do it in a very thoughtful way to maintain the existing technology base in the most efficient way possible.” …Dollars saved by using lower-cost cloud infrastructure and platforms will be reinvested in technology, he said.’ JPMC, of course, is a member of the Cloud Foundry Foundation which means, you know, they’re into that kind of thing.

On-premise IT holding steady around 65% of enterprise workloads – Highlights

[Image: barfing cloud]

One of the more common questions I’ve had over the years is: “but, surely, everyone is just in the public cloud, right?” I remember having a non-productive debate with a room full of Forrester analysts back in about 2012 where they were going on and on about on-premise IT being dead. There was much talk about electricity outlets. To be fair, the analysts were somewhat split, but the public cloud folks were adamant. You can see this same sentiment from analysts (including, before around 2011, myself!) in things like how long it’s taken to cover private PaaS (e.g., the PaaS Magic Quadrant has only covered public PaaS since inception).

Along these lines, the Uptime Institute has some survey numbers out. Here’s some highlights:

Some 65% of enterprise workloads reside in enterprise owned and operated data centers—a number that has remained stable since 2014, the report found. Meanwhile, 22% of such workloads are deployed in colocation or multi-tenant data center providers, and 13% are deployed in the cloud, the survey found….

On-prem solutions remain dominant in the enterprise due to massive growth in business critical applications and data for digital transformation, Uptime Institute said.

Public cloud workload penetration:

Some 95% of IT professionals said they had migrated critical applications and IT infrastructure to the cloud over the past year, according to another recent survey from SolarWinds.

Budgets:

That survey also found that nearly half of enterprises were still dedicating at least 70% of their yearly budget to traditional, on-premise applications, potentially pointing to growing demand for a hybrid infrastructure….

Nearly 75% of companies’ data center budgets increased or stayed consistent in 2017, compared to 2016, the survey found.

Metrics, KPIs, and what organizations are focusing on (uptime):

More than 90% of data center and IT professionals surveyed said they believe their corporate management is more concerned about outages now than they were a year ago. And while 90% of organizations conduct root cause analysis of an IT outage, only 60% said that they measure the cost of downtime as a business metric, the report found.

Demographics: “responses from more than 1,000 data center and IT professionals worldwide.”

Pretty much all Pivotal Cloud Foundry customers run “private cloud.” Many of them want to move to public cloud in a “multi-cloud” (I can’t make myself say “hybrid cloud”) fashion, or mostly public cloud, over the next five to ten years. That’s why we support all the popular public clouds. Most of them are doing plenty of things in public cloud now – though not anywhere near “a whole lotta” – and there are, of course, outliers.

This does bring up a nuanced but important point: I didn’t check the types of workloads in the survey. I’d suspect that much of the on-premises workload is packaged software. There’s no doubt plenty of custom-written applications run on-premises – even the majority of them, per my experience with the Pivotal customer base. However, I’d still suspect that, proportionally, more custom-written applications are running in the public cloud than other kinds of workloads. Just think of all the mobile apps and marketing apps out there.

Also, see some qualitative statements from CIO types.

So, the idea that it’s all public cloud in enterprise IT, thus far, is sort of like, you know: ¯\_(ツ)_/¯

Red Hat OpenShift Momentum – Highlights

Brian Gracely of Red Hat (and formerly an analyst who did some of the best “cloud-native”/cloud platform work early on) has a momentum post on OpenShift. Here’s my highlights:

Sizing up revenue and deal-size:

[Q3, FY 2017] Also of note, we closed our second OpenShift deal over $10 million and another OpenShift deal over $5 million. And significantly, we actually had over 50 OpenShift deals alone that were six or seven figures, so really strong traction. [Q4, FY 2017] with our largest deals in Q4 approximately one-third had an OpenShift container platform component.

Red Hat hasn’t yet been too clear on OpenShift revenue, so you have to tea-leaf out these revenue spreads, which I haven’t really done. Earlier in April, Jeffrey Burt at The Next Platform had this to say:

During the final three months of last year, subscription revenue for Red Hat’s application development-related [JBoss, etc.] and other emerging technologies – which includes OpenShift – hit $125 million, a 40 percent increase from the same period in 2015, and revenue for the group accounted for about 20 percent of Red Hat’s overall revenues for the fourth quarter.

And, on customer momentum:

Today, we also announced that Barclays Bank, the Government of British Columbia’s Office of the CIO, and Macquarie Bank are also using Red Hat OpenShift Container Platform to modernize application development…. airplane manufacturer Airbus about their DevOps journey, and digital travel platform Amadeus about their transformation of handling 2,000x the number of online transactions…. how Amsterdam’s Schiphol Airport (AMS) is using OpenShift to redefine the in-terminal travel experience, how Miles & More GmbH is better managing rewards programs for travelers, and how ATPCO is rethinking how they publish fare-related data to the airline and travel industry.
Much of the write-up focuses on community momentum, true to Red Hat’s open source form:

The OpenShift Commons community has 260+ member organizations….

Red Hat engineers lead or co-lead in 10 of the 24 Kubernetes SIG activities.
Finally, some commentary on their strategic shift to Kubernetes:

The huge architectural shift that we made a few years ago in adopting open standards for containers and the Kubernetes container scheduler has allowed us to deliver a unified platform to containerize existing applications and deliver agility and scalability for cloud-native applications and microservices. We call this combination Enterprise Kubernetes+, or Enterprise-Ready Kubernetes.
Red Hat’s OpenShift is, of course, a competitor to us over at Pivotal.

Cloud-native at Comcast, working with Pivotal – Highlights

I’m doing a podcast with Comcast in a few weeks, so I’ve been going over all their public talks on their cloud-native efforts. They’ve been working with Pivotal since around 2014 and are one of the more impressive customer cases, with over 1,000 applications now on Pivotal Cloud Foundry.
Here are some highlights from the talks I’ve been watching. As always, things I put in square brackets are my own comments, the rest are quotes or summaries of what people said:

August, 2016 – Empowering Devops with Cloud Foundry – Sergey Matochkin, Neville George; Comcast

  • Sergey Matochkin.
  • Slides.
  • (17:00) Every deployment to production took at least 6 weeks, but most commonly around 2 months end-to-end. Which also means you need to plan capacity much in advance.
  • We started to use virtualization and containerization “well, well before Docker existed… it was some success, we had some improvements, but those improvements were marginal.”
  • Traditionally, it’d take at least 4-6 months to set up your dev/test infrastructure. But, luckily, virtualization came along.
  • (9:20) Business drivers… Comcast phone service, set-top boxes get DVRs, VoD, etc. All of these require apps on the backend, so the portfolio of apps starts to grow, and the way they were working before meant they had to build a new datacenter every six months. Virtualization helped here, of course.
  • Also, virtualization allowed us to put a service layer [think “platform”] on-top of the infrastructure.
  • It’d take 4-6 weeks for a testing environment, but now it takes 10-15 minutes in a self-service portal.
  • Demo of using Pivotal Cloud Foundry for much of the automation needed to deploy and scale an application.
  • (~32:00) We used to have things like “order servers” and “make load-balancer changes” and somewhere in the bottom of the backlog was “write some code and do some testing.” [That is, they were focusing on items with low business value, below “the value line,” rather than customer features.]
  • “What Cloud Foundry essentially helped us with was to get all those unnecessary user stories out of our backlog so we can focus on the writing code, on testing, and deploying rather than managing infrastructure.”
  • (33:45) Momentum/proof-points:
  • 9 PCF instances; 900+ developers; 2,000+ active apps, “most of which are in the critical path of our customer experience”; 4,100 application instances; 2,000 requests per second.
  • Lots of Slack/ChatOps usage for monitoring and such.

August 3rd, 2016 – Transforming the monolith at 20M tph – Nick Beenham, Comcast

  • Slides.
  • Existing state:
    • 250m transactions per day.
    • It would take 3 months to get a server usable, from the moment of purchase to use.
    • “Over a 100 services run by development teams.”
    • Organized in functional, siloed roles.
  • (3:45) “We knew we had that large, rigid infrastructure. [Pivotal] Cloud Foundry and its adoption really enables us to change that to gain the agility, to gain the elasticity at scale.”
  • Taking away siloed roles to reduce finger-pointing and all the negative stuff, and unifying the team, of course.
  • (7:35) Anecdote of Nick going from “ops guy” to writing code and liking coding.
  • (12:18) The ESP router: a small router written in Go that translates SOAP requests, as part of a strangler pattern. They had a decades-old SOA layer they wanted to modernize, but couldn’t strip it out; that would take too long. So they duck-type it as SOA: it still looks like SOAP to callers, with REST and microservices underneath. The ESP router marshals and unmarshals between the microservices and the SOAP stuff, while anything new gets done in the new style. [A minimal sketch of this pattern follows this list.]
  • Also, “de-mingling data,” moving off Oracle RAC/GoldenGate for multi-site. Some simpler CRUD services to front the data.
  • (~15:00) Used to take a week+ to deploy the entire stack, but with Pivotal Cloud Foundry it takes minutes. It gives us a great deal of velocity that we’ve never had before. “Sometimes we’ll deploy multiple times an hour.”
  • (17:00) From thousands of lines of bash deploying out to various WebLogic clusters, most of which has now moved to Cloud Foundry.
  • Improving production updates: bringing a new node up and shutting the old node down slowly; canary updates, with a CI test suite, then switching over to the production install.
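As mentioned in the ESP router note above, here’s a minimal sketch of that strangler-pattern idea: a thin facade that keeps speaking SOAP to decades-old callers while the real work moves to REST microservices behind it. To be clear, this is my own illustration in Python/Flask, not Comcast’s Go code; the endpoint, field names, and service URL are all invented.

    # A minimal strangler-pattern facade: old clients keep POSTing SOAP to a
    # legacy-looking endpoint, while the actual work is delegated to a new REST
    # microservice. All names and URLs here are hypothetical.
    import requests
    from flask import Flask, Response, request
    from xml.etree import ElementTree as ET

    app = Flask(__name__)
    ACCOUNTS_API = "http://accounts.internal/api/v1/accounts"  # hypothetical microservice

    @app.route("/soap/AccountService", methods=["POST"])
    def account_service():
        # Unmarshal just enough of the SOAP envelope to route the request.
        envelope = ET.fromstring(request.data)
        account_id = envelope.findtext(".//accountId")  # assumes an un-namespaced element

        # Delegate to the new microservice that replaced the monolith's SOAP handler.
        status = requests.get(f"{ACCOUNTS_API}/{account_id}", timeout=5).json()["status"]

        # Marshal the answer back into the SOAP shape old callers expect.
        body = (
            '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
            f"<soap:Body><accountStatus>{status}</accountStatus></soap:Body>"
            "</soap:Envelope>"
        )
        return Response(body, mimetype="text/xml")

    if __name__ == "__main__":
        app.run(port=8080)

New features get plain REST endpoints directly; only the legacy SOAP surface goes through the facade, which shrinks as callers migrate off of it.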

August 1st, 2016 – James Taylor – The Power of Partnership & Building a Cloud Native Tier-1 Platform

  • @jctbmwi8
  • “Sparrow, Service Activation Platform.”
  • “Helping someone put a smile on their face is one of the greatest gifts we can give each other.”
  • Their VP provides the feedback loop of things to focus on. Right now: reducing technical debt, reducing incidents, increasing velocity, experimentation.
  • (~6:30) “You can’t move forward – innovate – if you don’t have time to try new things.”
  • (~18:35) “If you’re spending time configuring a Docker container, that’s time you’re not spending coding or solving a problem.”
  • (13:51): “At the end of the day, [business] value is what puts money in everyone’s pocket. If our company, Comcast, can’t create something of value, no one’s gonna pay for us…if we can’t create value. So it’s important for us to understand ‘how can you create value?’”
  • (~22:02, starting epic rant!) “Who is our customer and what value do we bring to our customers…”
  • If you’re spending money on support, that’s cutting into your margins. A call coming in costs $8 right off the bat, then more as it takes longer. So you want to figure out preventing customer support problems… which points to understanding your customers more.
  • [A good overview of thinking about “value” in the context of a specific application, their customer activation center, Sparrow.] “If you have a [support] call rate of 30%, you’re probably cutting out all the value… So we try to figure out, how do we prevent calls?” [Very similar to IRS cloud-native story.]
  • “We’ve been holding technical workshops”: Internal training things every month with Pivotal people, leveraging Pivotal knowledge. With our development teams every month: webinar, or on-site visit.
  • Sparrow: 5 junior Java developers… we built it from scratch in parallel while existing teams maintained the platform… we then had to integrate the processes together… figure out decomposing the monolith platforms, etc….then we had to just cut off stuff when it was too much of a hassle.

August 17th, 2016 – Greg Otto SpringOne Platform keynote

  • Slides.
  • X1 boxes – a new release about once a month.
  • Processing tens of millions of transactions daily on the new platform, Pivotal Cloud Foundry.
  • “About a 75% lift in velocity as well as time to market, and the business is really feeling it.”
  • Developer reactions:
  • [Image: Comcast slide, “what customers are saying”]
  • Momentum Stats:
  • [Image: Comcast key stats, from Otto’s keynote]
    • 40 apps to 900 apps, 2015 to 2016
    • 300 AIs to 4,100 AIs, 2015 to 2016
  • All with “zero outbound marketing from my team, this all word of mouth from all those happy developers.”

June 9th, 2016 – Greg Otto CF Summit keynote

  • “Late last year in 2015” – live in production [on Pivotal Cloud Foundry] with business critical systems from our back-office systems on our Cloud Foundry environment.
  • We put Pivotal Cloud Foundry directly in the customer critical path.
  • Applications doing 30,000 events a second on Cloud Foundry.
  • Started in 2014, met with Pivotal.
  • Had sort of thrown all the people into the Pivotal Cloud Foundry pool, they had to do a lot of research and such.
  • But, people were really interested in the ease of working with the platform [the productivity improvements].
  • Successful prototype app 30 days after getting the platform.
  • Idea to feature, before/after: “several weeks, at least” vs. “2-3 days.”
  • Time-line and summary:
  • [Image: Comcast time-line and summary slide]

June, 2016 – Open source at Comcast story

  • Write-up.
  • “If Comcast has a problem to solve, there are three possible approaches: solve it themselves by making an investment in teams and resources; solve it through a commercial vendor that could build a product for them; or work with the open source community.”
  • OpenStack: “In addition to Linux, Comcast is a heavy user of OpenStack. They use a KVM hypervisor, and then a lot of data center orchestration is done through OpenStack for the coordination of storage and networking resources with compute and memory resources. Muehl said that Comcast has roughly a petabyte of memory and around a million virtual CPU cores that they are running under the OpenStack umbrella. As an operator, Comcast does a lot of things around operations, and they use Ansible to deploy and manage OpenStack at scale.”
  • Cloud Foundry: “They also use Cloud Foundry, but according to Muehl that work is in the very early stages at Comcast.”

May 2015 – Running Cloud Foundry at Comcast talk

  • Neville George, Sam Guerrero, Tim Leong, Sergey Matochkin
  • They wanted to make custom URLs.
  • Used Puppet for stuff.
  • (~8:30) Their requirements for a platform:
  • [Image: Comcast platform requirements slide]
  • A lot of emphasis on self-service and the microservices benefit of operating independently, product-management-wise.
  • They use OpenStack, Docker, and [Pivotal] Cloud Foundry.
  • Pre-provisioning resources for a pool of containers that are ready to go, etc.
  • (~27) a couple applications in production today… we’ll be ramping up quickly.
  • (Either this video or the 2016 one, a few minutes from the end) Q: training model? A, Sergey: “I can’t say we have a really good training model…. We do brown-bags to have people aware. We focus on the 12-factor application model… on the overall microservices model, not just to shape applications, but also data. Developers need to understand how they [build] applications for PaaS instead of traditional.”

Advice on introducing DevOps from Merrill Corp & SPS Commerce – Highlights

Nicely moderated by Bridget. Some of my notes and highlights:

  • Amy talks about pace of change, sustaining it in the beginning, etc.
    • The amount of time it took us to get going was a surprise – it was longer than expected.
    • If you can start to show results early, it helps build up momentum. “Having enough wins, like that, really helped us to keep the momentum going while we were having a culture change like DevOps.”
    • It takes the right people to keep that energy going, but you also have to be able to go back to the business to show why you’re putting these changes in place.
    • You’re going to be able to see the changes to the business right away.
  • Peg – tools: don’t try to fix the old ones, like ITIL service desk tools. Instead, we just had Jenkins open tickets and such, automating the toil of dealing with the old tools. (A sketch of that idea follows this list.)
  • Global/offshore tactics, from Amy:
    • What with all the retrospective stuff, you need to be able to get teams together, physically. The collaboration angles are much better in person.
    • Set up each “shore” as an architectural and management island; make them as independent as possible. They also need their own context, not held up by time zones, so they don’t need to wait 24-48 hours for authorizations and collaboration. [To my mind, this means taking advantage of the organizational de-coupling you can get with microservices.]
  • Starting change, even when the company needs it. Amy: you have to start with the business need, what’s the big driver behind a change like DevOps. [Managers often don’t make sure they figure this out, let alone disseminate it to staff.]
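On Peg’s point above about not fixing the old ITIL tools: the trick is letting the pipeline do the service-desk paperwork itself. Here’s a minimal sketch of that idea; the service-desk URL, payload fields, and credentials are all invented for illustration:

    # Hypothetical post-deploy step a Jenkins job could run: file the ITIL
    # change ticket automatically instead of a human clicking through the UI.
    import os
    import requests

    SERVICE_DESK = "https://servicedesk.example.com/api/tickets"  # invented endpoint

    def file_change_ticket(app_name: str, version: str) -> str:
        """Open a standard-change ticket for an automated deploy; return its ID."""
        resp = requests.post(
            SERVICE_DESK,
            json={
                "type": "standard_change",
                "summary": f"Automated deploy of {app_name} {version}",
                "requested_by": "jenkins",
            },
            auth=(os.environ["SD_USER"], os.environ["SD_TOKEN"]),
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["ticket_id"]

    if __name__ == "__main__":
        # Jenkins exposes build metadata as environment variables, e.g. GIT_COMMIT.
        print(file_change_ticket("billing-api", os.environ.get("GIT_COMMIT", "dev")))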

The Economist on Amazon – Highlights

  • Video: “In 2017 Amazon is expected to spend $4.5bn on television and film content, roughly twice what HBO will spend. But it has a big payoff.”
  • Prime momentum: “Mr Nowak reckons the company had 72m Prime members last year, up by 32% from 2015.”
  • Cloud: “Last year AWS’s revenue reached $12bn, up by more than 150% since 2014.”
  • Anti-trust, in the US: “If competitors fail to halt Amazon’s whirl of activities, antitrust enforcers might yet do so instead. This does not seem an imminent threat. American antitrust authorities mainly consider a company’s effect on consumers and pricing, not broader market power. By that standard, Amazon has brought big benefits.”

Are investors too optimistic about Amazon?

Making mainframe applications more agile, Gartner – Highlights

In a report giving advice to mainframe folks looking to be more Agile, Gartner’s Dale Vecchio and Bill Swanton give some pretty good advice for anyone looking to change how they do software.

Here’s some highlights from the report, entitled “Agile Development and Mainframe Legacy Systems – Something’s Got to Give”

Chunking up changes:

  1. Application changes must be smaller.
  2. Automation across the life cycle is critical to being successful.
  3. A regular and positive relationship must exist between the owner of the application and the developers of the changes.

Also:

This kind of effort may seem insurmountable for a large legacy portfolio. However, an organization doesn’t have to attack the entire portfolio. Determine where the primary value can be achieved and focus there. Which areas of the portfolio are most impacted by business requests? Target the areas with the most value.

An example of possible change:

About 10 years ago, a large European bank rebuilt its core banking system on the mainframe using COBOL. It now does agile development for both mainframe COBOL and “channel” Java layers of the system. The bank does not consider that it has achieved DevOps for the mainframe, as it is only able to maintain a cadence of monthly releases. Even that release rate required a significant investment in testing and other automation. Fortunately, most new work happens exclusively in the Java layers, without needing to make changes to the COBOL core system. Therefore, the bank maintains a faster cadence for most releases, and only major changes that require core updates need to fall in line with the slower monthly cadence for the mainframe. The key to making agile work for the mainframe at the bank is embracing the agile practices that have the greatest impact on effective delivery within the monthly cadence, including test-driven development and smaller modules with fewer dependencies.

It seems impossible, but you should try:

Improving the state of a decades-old system is often seen as a fool’s errand. It provides no real business value and introduces great risk. Many mainframe organizations Gartner speaks to are not comfortable doing this much invasive change and believing that it can ensure functional equivalence when complete! Restructuring the existing portfolio, eliminating dead code and consolidating redundant code are further incremental steps that can be done over time. Each application team needs to improve the portfolio that it is responsible for in order to ensure speed and success in the future. Moving to a services-based or API structure may also enable changes to be done effectively and quickly over time. Some level of investment to evolve the portfolio to a more streamlined structure will greatly increase the ability to make changes quickly and reliably. Trying to get faster with good quality on a monolithic hairball of an application is a recipe for failure. These changes can occur in an evolutionary way. This approach, referred to in the past as proactive maintenance, is a price that must be paid early to make life easier in the future.

You gotta have testing:

Test cases are necessary to support automation of this critical step. While the tooling is very different, and even the approaches may be unique to the mainframe architecture, they are an important component of speed and reliability. This can be a tremendous hurdle to overcome on the road to agile development on the mainframe. This level of commitment can become a real roadblock to success.

Another example of an organization gradually changing:

When a large European bank faced wholesale change mandated by loss of support for an old platform, it chose to rewrite its core system in mainframe COBOL (although today it would be more likely to acquire an off-the-shelf core banking system). The bank followed a component-based approach that helped position it for success with agile today by exposing its core capabilities as services via standard APIs. This architecture did not deliver the level of isolation the bank could achieve with microservices today, as it built the system with a shared DBMS back-end, as was common practice at the time. That coupling with the database and related data model dependencies is the main technical obstacle to moving to continuous delivery, although the IT operations group also presents cultural obstacles, as it is satisfied with the current model for managing change.

A reminder: all we want is a rapid feedback cycle:

The goal is to reduce the cycle time between an idea and usable software. In order to do so, the changes need to be smaller, the process needs to be automated, and the steps for deployment to production must be repeatable and reliable.

The ALM technology doesn’t support mainframes, and mainframe ALM stuff doesn’t support agile. A rare case where fixing the tech can likely fix the problem:

The dilemma mainframe organizations may face is that traditional mainframe application development life cycle tools were not designed for small, fast and automated deployment. Agile development tools that do support this approach aren’t designed to support the artifacts of mainframe applications. Modern tools for the building, deploying, testing and releasing of applications for the mainframe won’t often fit. Existing mainframe software version control and configuration management tools for a new agile approach to development will take some effort — if they will work at all.

Use APIs to decouple the ways, norms, and road-map of mainframes from the rest of your systems:

wrapping existing mainframe functions and exposing them as services does provide an intermediate step between agile on the mainframe and migration to environments where agile is more readily understood.

Contrary to what you might be thinking, the report doesn’t actually advocate moving off the mainframe willy-nilly. From my perspective, it’s just trying to suggest using better processes and, as needed, updating your ALM and release management tools.

Read the rest of the report over behind Gartner’s paywall.

More on “grim” automation – Notebook

A few weeks back, my book review of two “the robots are taking over” books came out over on The New Stack. Here’s some responses, and also some highlights from a McKinsey piece on automation.

Don’t call it “automation”

From John Allspaw:

There is much more to this topic. Nick Carr’s book, The Glass Cage, has a different perspective. The ramifications of new technology (don’t call it automation) are notoriously difficult to predict, and what we think are forgone conclusions (unemployment of truck drivers even though the tech for self-driving cars needs to see much more diversity of conditions before it can get to the 99%+ accuracy) are not.

Lisanne Bainbridge in her seminal 1983 paper outlines what is still true today.

From that paper:

This paper suggests that the increased interest in human factors among engineers reflects the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator.

When things go wrong, humans are needed:

To take over and stabilize the process requires manual control skills, to diagnose the fault as a basis for shut down or recovery requires cognitive skills.

But their skills may have deteriorated:

Unfortunately, physical skills deteriorate when they are not used, particularly the refinements of gain and timing. This means that a formerly experienced operator who has been monitoring an automated process may now be an inexperienced one. If he takes over he may set the process into oscillation. He may have to wait for feedback, rather than controlling by open-loop, and it will be difficult for him to interpret whether the feedback shows that there is something wrong with the system or more simply that he has misjudged his control action.

There’s a good case made not only for the need for humans, but for keeping humans fully trained and involved in the process to handle error states.

Hiring not abating

Vinnie, the author of one of the books I reviewed, left a comment on the review, noting:

For the book, I interviewed practitioners in 50 different work settings – accounting, advertising, manufacturing, garbage collection, wineries etc. Each one of them told me where automation is maturing, where it is not, how expensive it is etc. The litmus test to me is are they stopping the hiring of human talent – and I heard NO over and over again even for jobs for which automation tech has been available for decades – UPC scanners in groceries, ATMs in banking, kiosks and bunch of other tech in postal service. So, instead of panicking about catastrophic job losses we should be taking a more gradualist approach and moving people who do repeated tasks all day long and move them into more creative, dexterous work or moving them to other jobs.

I think Avent’s worry is that the approach won’t be gradual and that, as a society, we won’t be able to change norms, laws, and “work” fast enough.

McKinsey

As more context, check out this overview of their own study and analysis from a 2015 McKinsey Quarterly article:

The jobs don’t disappear, they change:

Our results to date suggest, first and foremost, that a focus on occupations is misleading. Very few occupations will be automated in their entirety in the near or medium term. Rather, certain activities are more likely to be automated, requiring entire business processes to be transformed, and jobs performed by people to be redefined, much like the bank teller’s job was redefined with the advent of ATMs.

Further:

our research suggests that as many as 45 percent of the activities individuals are paid to perform can be automated by adapting currently demonstrated technologies… fewer than 5 percent of occupations can be entirely automated using current technology. However, about 60 percent of occupations could have 30 percent or more of their constituent activities automated.

Most work is boring:

Capabilities such as creativity and sensing emotions are core to the human experience and also difficult to automate. The amount of time that workers spend on activities requiring these capabilities, though, appears to be surprisingly low. Just 4 percent of the work activities across the US economy require creativity at a median human level of performance. Similarly, only 29 percent of work activities require a median human level of performance in sensing emotion.

So, as Vinnie also suggests, you can automate all that stuff and have people focus on the “creative” things, e.g.:

Financial advisors, for example, might spend less time analyzing clients’ financial situations, and more time understanding their needs and explaining creative options. Interior designers could spend less time taking measurements, developing illustrations, and ordering materials, and more time developing innovative design concepts based on clients’ desires.