Ode to Airports

An airport is a time pause. It’s an excuse to not stress or try. You’re trapped in the system and will eventually get there. You can’t leave or you’ll have to re-humiliate yourself through security. Airports are even powerful enough to make you cancel meetings if your flight is late, canceled…or you pretend it is. Your wedding could be delayed because of the airport and no one would really fault you.

Everyone is transiting, coming and going, and while the entry fee might exclude the very poor (and the super rich fly their own), you see everyone.

At a major hub, you’ll see people from all over: the guy with the “Ragin’ Cajun” hat, domestic and international grandmas, the harried big city lawyer, the dad-jeans set, and the local staff. People dress in all manners of business-business or super casual for comfort.

The mix of experienced and novice travelers creates a crackly dynamic, paired with gate agents who are either overly friendly or bluntly direct. While some can escape to airline lounges, even those environments are little different from the actual terminal: you just get much friendlier staff and free drinks and peanuts.

Airports can be calming if you look at them as escapes and the sort of delightful, enforced boredom that I understand meditation to be.

They can be toxic if you stress out about delays, lines, other people, overhead bin space, and how flight delays affect your plans outside the airport. And they can be distracting like an opium den if you let their peaceful hum shut out your real life.

Don’t ruin your time at the airport. If you let it do its thing, it’ll make sure you eventually pop back out right where you wanted to go.

Are tech H-1B visas actually that big of a deal? How do we even evaluate the question?



Over the decades, the number of H-1B workers allowed into the US each year has grown. With the 1998 update, the visa cap lifted to 115,000. In 2000, the limit was boosted again, this time up to 195,000. That year, the law was also tweaked so that renewals no longer counted toward the cap. In 2004, the cap was reset to 65,000, but an exemption was added for 20,000 students graduating from US institutions with master’s degrees. Exemptions were also added for workers affiliated with academic institutions, which can include schools and teaching hospitals. According to Ron Hira, a professor of Public Policy at Howard University who has studied the H-1B issue and testified about it before the Senate, the actual number of visas handed out each year has been around 135,000 over the last five years. Link

There’s a good rant on the relative importance of all of this in last week’s Political Gabfest. While we “on-shore” workers in the tech industry may see that 135,000 as a threat to our cashflow, it’s a drop in the bucket of employment in America. As Adam Davidson argues well, then, worrying about H-1B visas should be pretty low on the list of ways to set more people up with good jobs:

The question of H-1B visas has rhetorical importance far beyond its actual economic relevance. The unemployment rate for computer and mathematical occupations is, currently, 2.1 per cent. This is what economists consider full employment, meaning that pretty much everyone who wants a job has a job or is in a brief hiatus between positions. The number of jobs in those fields is growing fast—by about twelve per cent a year—and the number of qualified workers is not growing enough to catch up. In short, the plight of computer professionals is on few people’s list of urgent concerns…. According to the Bureau of Labor Statistics, ten thousand computer professionals start a new job every working day. In this context, the eighty-five thousand foreigners given H-1B visas each year represent little more than statistical noise.

He goes off on a political jag after this, saying that the H-1B discussion is a proxy for “fear of brown people,” which certainly has appeal to leftist people like myself. There’s a business question here, too, though: are H-1B visas a good idea, and why? Are they ethical and effective?

What types of jobs?

Also, there’s some interesting analysis of the types of jobs H-1B visas are used for. Mostly, they’re jobs at outsourcing firms:

But it’s how H-1B visas are being used by applicants that’s really changed. Data from the 2016 batch of H-1B petitions show that the top 10 sponsors of H-1B visa workers in the US are all corporations with large outsourcing businesses: Indian companies like Infosys, Tata, and Wipro, which pioneered the business, and US-based firms like IBM, Accenture, and Cognizant, which saw the success of the Indian contractors and began offering their own competing outsourcing programs. Those 10 firms have more workers currently employed through the program than the next 90 companies combined, a group that includes all of America’s largest tech companies and banks.

So, the discussion about H-1B visas in tech is, by bulk, about the 60,000+ jobs in IT outsourcing. This is in addition to the estimated 1.7m off-shore outsourcing jobs that already exist.

In theory, most of these are “lower value” jobs where you’re operating IT (help-desks, managing the daily operations of enterprise applications) rather than creating it (like programming). Anecdotally, there’s still programming running around in there, esp. when it comes to modernizing applications. The going theory is that you can’t just slot in workers on higher-value IT work like writing custom software.

How do you think about all this?

There’s an odd ethical vs. business-sense argument scurrying about as well that I’ve never seen addressed. On one hand, you’d think you’d be happy that the H-1B visa worker is getting work. By nature of accepting the job and uprooting themselves, it must be good for them: or, at least, better than the alternatives. Also, if it’s actually cheaper to get the same services/output from an H-1B visa worker, why would you pay more for a “native” worker? On the other hand, it’s equally confusing to figure out what companies “owe” the workers they’re firing in favor of the H-1B visa workers.

Tech companies like to skirt all that by saying “we have to hire from a global pool,” which is fine if you’re hiring an individual with unique skills. However, the divide between outsourcing firms and tech companies suggests that the bulk of H-1B visa hires in tech are not for the super-unique AI experts who may not live on-shore. Then again, it’s insulting to even think that way: why would I value one set of people over another in any context?

Businesses say they’re not satisfied

However we end up talking about this, it’s clear from surveys that companies are dodgy on the value of outsourcing. As I put it when summarizing some HfS work recently:

Outsourcers too often do exactly what the contract (from five to ten years ago) says instead of helping you innovate and keep the business growing. Itʼs little wonder that in a recent study, more than 75% of senior executives said they want to replace their legacy outsourcers because those providers are so unwilling to change to new models.


If we take Adam Davidson’s perspective, it’s not really even a problem worth thinking about (versus all the other hair-on-fire issues we have). However, when it comes to outsourcing (which I’ve shifted to because so many H-1B visa workers end up at outsourcing firms), it’s clear that we could be doing much better.

Spanning goes private, what might happen next?

Long ago, Spanning Sync was the only viable way to synchronize your Gmail calendar and contacts with the (then) OS X iCal and Address Book. It was great! I also know one of the original founders, Charlie Wood, and we’d talk from time to time about the growing company. At some point, it became a Google Apps (now “G Suite”) back-up service with a clever value prop: cloud storage, sure, but it’s not redundant, you know; you still gotta do the basics.

Anyhow, I always kept a close eye on the company. It was a little odd to see EMC buy them back in 2014: as VMware demonstrated with their Dropbox-competitor products years ago, as Apple is pretty goofy here, and as even Google has demonstrated over the years, large software companies are pretty bad at long-term plays for individual software; Microsoft is of course an exception with Office and sort of proves the rule.

We’ll see what Insight Venture Partners does with them. I’m guessing if you just left Spanning alone, more or less, it’d turn into a cash machine at some point. That said, I don’t think Dropbox and Box are exactly profitable. Here’s Box’s last four financial years:

…but it seems like a back-up service could control costs better and do a lot less marketing: Box and Dropbox have been acquiring companies and re-positioning themselves as they go from just cloud storage to something like “sort of Office, but not really, but maybe – or like Trello… er… let’s acquire another company and go to a conference where we have wooden floors and free espresso in the booth and think about this at next year’s company retreat in Italy.” (I KID! I KID!)

Spanning Momentum

Here’s some Spanning momentum from one of the write-ups:

Spanning has seen 70 percent year-over-year revenue growth and more than 7,000 customers, according to a press release. It restored around 18 million items for customers in 2016, and expects to continue growth with its global data center expansion, and distribution agreements with major channel partners.

A wet-finger-in-the-wind business case

It’s hard to quickly find pricing for Spanning on their page (smells like enterprise software!), but a few searches, particularly from Spiceworks, say it’s something like $35 a month.

There are certainly discounts for some of those customers, but let’s say revenue would max out around $2,940,000 a year, down to something like $1.5m on the low end if you do all sorts of discounting on clusters of users.
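As a quick sanity check, here’s that arithmetic in a few lines of Python. The customer count and list price are the guesses above, and the discounting level is a pure assumption on my part:

```python
# Back-of-the-envelope revenue range, using the guesses above.
customers = 7_000
list_price_per_month = 35.0   # rough list price found via Spiceworks

max_annual_revenue = customers * list_price_per_month * 12
discounted_annual_revenue = max_annual_revenue * 0.5   # assume ~50% effective discounting

print(f"max:        ${max_annual_revenue:,.0f}")         # ~$2,940,000
print(f"discounted: ${discounted_annual_revenue:,.0f}")  # ~$1,470,000
```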

Now, 70% y/y growth is pretty impressive, but not too insane for a relatively new offering. Let’s say they do that for two more years and then it goes down to like 30 or 40% for however many out-years we care about.

Then, let’s just take a swag at storage costs. Who knows if they use S3, but let’s assume they can get down to similar pricing; we’ll take S3’s mid-tier: $0.0125/GB/month. My work Google Drive says it’s 22 GB, but I save a lot more stuff than most people do. Let’s just go with 20 GB as an average. Then let’s assume you at least duplicate it, so you’re paying for 40 GB a month (across two cloud zones), which is $6/year. (Let’s ignore networking transfer charges – adding that in is left as an exercise for the reader!)
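Worked out, with every number being one of the assumptions above (S3-like pricing, 20 GB per user, duplicated across two zones):

```python
# Per-user storage cost sketch, using the assumptions above.
price_per_gb_month = 0.0125   # assumed S3-like mid-tier pricing
avg_user_data_gb = 20         # guessed average per user
replication_factor = 2        # duplicated across two cloud zones

stored_gb = avg_user_data_gb * replication_factor
annual_storage_cost_per_user = stored_gb * price_per_gb_month * 12
print(f"${annual_storage_cost_per_user:.2f} per user per year")  # $6.00
```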

Then you need all the meat-sacks. You could probably get by with 6 to 12 product staff (programmers, product manager – you probably outsource design at this point as needed).

You need the CEO, HR, CFO, and probably 1-2 people to work for them (6 people max); you could probably cut out HR depending on how Insight likes to run HR (outsourced or pooled across companies). Maybe the CFO, but probably not.

I’m no enterprise SaaS business expert, but I’m guessing it’s marketing and sales heavy, so:

Then you need probably 2-3 people in marketing (if you were slick, you could outsource a lot of this, esp. for something as easy to understand as “backup”); 5-10 face-to-face enterprise hustlers; and, let’s say, a team of 5 “inside/web” sales people who send all those annoying “Re: catching-up. I see you read our white paper on BACKUP. Would you like to talk more? Are you the right person at your organization?” emails. So, max 18.

That’s around 36 people, which seems really low to me. But, if you were, I don’t know, a private equity firm, you’d probably think that was OK, if not a little heavy for a company that basically just copies files from one place to the other (yes, I’m being MBA-fatuous).

Without getting a spreadsheet to do some clustering, doing salary costs across such a diverse set is hard. Many of them are in Austin (I assume, still), so let’s just go with $150,000 all-in per head (I’m sure the admin staff and your “strategic account” sales people get paid well plus extra comp, and the more senior tech staff get paid more). So, that’s something like $5,400,000 in people expenses. Then there’s going to conferences, probably a large ad budget, that nice office they have in downtown Austin (which I think is an EMC office, so they’ll get the boot?), which means buying a lot of organic beef jerky and craft beer, etc., then there’s flying those 5-10 enterprise hustlers around and their $70-100 a day per diems, plus wining and dining. Let’s just throw in another million and go to $6.5m.

So, with some mumbo-jumbo business casing (I grow revenue by 70% for two years, then level it off to 30% for the last two years; I grow staff up to a max of 60 people), you have something like this:

[Screenshot: back-of-the-envelope five-year revenue vs. cost projection, 2017-04-23]
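For the curious, here’s a rough sketch of that same projection in code. Every number is one of the guesses above (or an interpolation of them, like the headcount ramp), not an actual Spanning figure:

```python
# Five-year wet-finger-in-the-wind projection, using the guesses above:
# revenue grows 70% for two years, then 30%; headcount ramps from 36 to 60;
# storage cost scales with the customer count. All numbers are assumptions.
revenue = 2_940_000                       # year-one max revenue guess
customers = 7_000.0
growth_rates = [0.70, 0.70, 0.30, 0.30]   # growth applied going into years 2-5
headcount = [36, 45, 52, 58, 60]          # assumed ramp toward the 60-person max
cost_per_head = 150_000
other_costs = 1_000_000                   # conferences, ads, office, travel, etc.
storage_cost_per_customer = 6.0           # per-user figure above, hand-waving one user per customer

for year in range(5):
    people_cost = headcount[year] * cost_per_head
    storage_cost = customers * storage_cost_per_customer
    total_cost = people_cost + other_costs + storage_cost
    print(f"year {year + 1}: revenue ${revenue:,.0f}, "
          f"costs ${total_cost:,.0f}, net ${revenue - total_cost:,.0f}")
    if year < len(growth_rates):
        revenue *= 1 + growth_rates[year]
        customers *= 1 + growth_rates[year]
```

Run as-is, it paints the same picture as the screenshot: losses up front, with the gap closing in the out-years.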

Those storage costs look insanely off. And from their press release, they claim to have actual data centers (probably co-lo’d racks that are, at best, caged for compliance reasons, far from “having data centers”), which sounds like building your own, which might actually be about as cheap, or slightly more expensive.

Who knows. Cloud storage is insanely cheap, so maybe that figure isn’t so bonkers. Of course, you need networking transfer charges, etc. So, double, even quadruple the cost if you care to: still “nothing,” relative to the other numbers.

With this kind of Sunday-morning, armchair analysis, there’s no end of flaws. Like, I should have found the comparable costs, growth, TAM, and staffing for Box, Backblaze, etc., and even made sure I actually understand Spanning’s business model, but: ¯\_(ツ)_/¯

Over those years, that’s a pretty small gap to close to be profitable, and there are a lot of things to play with in the spreadsheet (can we fire almost all the sales and marketing people and go pure channel, hiring up a biz-dev team of 2-3 people to land 5 or so key channel partners?).

It’s probably even easier to bundle up the company for sale to another large company after a few years. Someone like Microsoft or Salesforce might even want them to add that functionality to their own products, as might any company that’s concerned about filling in its “enterprise SaaS” strategy gaps.

I’ve always liked Spanning (RIP, Sync). I hope it works out well!

The news from Docker-land, plus the money being fought over – Notebook

With DockerCon this week, there’s no end of Docker quotables and items. Here’s my collection:

General momentum

Once Docker lands in an account, its usage grows, their CEO says:

There has also been expansion within customers, with organizations that start with Docker expanding their usage on average by five times within six months

Way back in 2015, the (now annual?) DataDog study of Docker usage among their customers said that 2/3 of companies that try Docker adopt it. Which is all to say: once it gets in, it spreads.

Moby

A toolkit for putting together Docker stacks:

In essence, Moby is the build system that creates Docker Community Edition, which is akin to Fedora, and Docker Enterprise is derived from Moby and is akin to Red Hat Enterprise Linux. Link

People got all freaked out. I’d even say “freaked the fuck out.” Competitors, of course, gloated, if only in silence. Criticism of how the announcement was handled aside (ideally, you wouldn’t want to kick up a stink), it felt more like a tempest in a teapot to me.

Docker momentum/penetration and types of applications/workloads

Global 2000 customers have somewhere on the order of thousands to tens of thousands of applications, and across these major firms, less than 5 percent of the applications have been containerized so far. While somewhere between 5 percent and 10 percent of the applications that are being containerized are net-new, microservices-style applications that everyone is talking about all the time, the other 90 percent to 95 percent are just lifting and shifting legacy applications from bare metal or virtual machines to containers. Link

VMware threat…or just legacy gobbling?

Docker bounces back and forth between “replacement for VMware” and “a different thing, so don’t worry about VMware.” In this round of Docker news, there’s been some strong pull towards the “replacement for VMware” camp. To be fair, it’s more like doing both:

In general, says Johnston, customers who move from bare metal or VMs to Docker containers can provision, scale, and deploy applications up to 75 percent faster, and those moving from bare metal to containers can save 50 percent on compute and those who are moving from VMs will save around 25 percent. Link

This might also come from the obvious move to start gobbling up legacy (more accurately, “existing”) applications. Here, Docker had two customer references:

Northern Trust, a leading international financial services company, experienced  deployment times that were 4X faster and noted a 2X improvement in infrastructure utilization

And, Microsoft IT:

Microsoft is not only a partner in this program; their IT organization is also a beta customer.  Microsoft IT increased app density 4X with zero impact to performance and were able to reduce their infrastructure costs by a third.

There was also a story of Visa using Docker:

Kocherlakota said Visa is aiming to move as many workloads as it can to the container model to help improve overall efficiency.

See more on this legacy migration stuff and the program with Avanade, Cisco, HP, and Microsoft from Docker’s Scott Johnson.

Major vendors

Other tech companies are often cautious about working with Docker. They’re not really certain about how it helps or threatens their position in the IT stack and, therefore, their ability to sell higher profit margin products and services. No one wants to become the x86 manufacturer of the cloud (read: low margin, commodity).

I’ve noticed this cautiousness slightly melting as more and more vendors are at least putting their stuff in Docker images and, on the public cloud front, supporting the use of Docker. My company, Pivotal, ingests Docker images.

A brief whack at why Microsoft cares, from Christopher Tozzi:

Although there remains work to do to get Docker on Windows ready for prime time, the platform will be important in helping Windows Server stay as nimble as Linux environments in hosting the workloads of the future…. Microsoft’s interest in Docker may seem strange. Microsoft already offers traditional virtual machine products, most notably Hyper-V. In some respects, Docker containers compete with virtual machine platforms…. But that’s not necessarily the case. Depending on how they’re used, containers can complement virtual machines, rather than replace them. If you use virtual machines to host the environment in which Docker runs, your Docker environment becomes more scalable and portable than it would be if it ran on bare metal. That’s likely the type of use case Microsoft envisions for containers on Windows.

More from Nick Martin on Microsoft and Docker.

Oracle bundling middleware in Docker containers:

Oracle becomes the latest enterprise IT vendor to jump on the Docker container bandwagon as it seeks to expand its reach in the public cloud market. Among the container-based application, middleware and development tools made available on the container platform are Oracle’s MySQL database and its WebLogic server. Those tools are in addition to the more than 100 images of Oracle products already available on Docker Hub, its cloud-based image registry.

So, what’s going on here? Staking a claim on The New Stack

I’m often asked to explain all the various cloud stacks, to help Pivotal buyers sort out what CaaS, PaaS, cloud-native, and “cloud strategy” mean. They’re trying to figure out their plans for building out new IT, for “doing DevOps.” It’s a mess out there w/r/t figuring all this out if you’re not a vendor or analyst who’s steeped in this shoggoth every day.

In all the Docker, container, and cloud-native wars, the revenue battle for vendors is mostly about two things:

  1. The pool of money in simply migrating VMware workloads to a new, more efficient layer (hence the ongoing attention to “the VMware threat” that Docker poses). I’m not sure how big this market is because, as a disruptive shift (cf. Linux vs. UNIX vs. Windows vs. z), part of it is reducing the overall spend through lower prices and more efficient usage. But the existing virtualization market is best described as “fucking huge.”
  2. Fighting over who “owns” (and therefore collects the most profit from) the stack that companies are using to build and run their software. By my estimate, this is somewhere around a $20-25bn market in the future. You can see a Spanish Civil War-like precursor going on in the Java application server market; it’s spreading to a “World War” with respect to all custom software stacks.

On that second point, here’s my latest attempt to describe how things are shaking out, category/definition-wise:

Of all the SPI cloud categories, PaaS is the most problematic place, as all of us vendors hate the PaaS term and are trying to re-define what it means. I would currently break PaaS into two categories: (1) container orchestration, and (2) cloud platform.

Container orchestration takes an IaaS and manages the installation and configuration of container images on your new cloud. By “images” here, I mean that you’ve chosen to put your software (probably custom-written software, not packaged software) into containers (or the delegated way we do it with buildpacks in CF), specified how all the different nodes are wired together with all the ACLs and configuration, and then given it over to the orchestration software to deploy those containers, set the configuration, and do the ongoing health-checks/remediation.

Ideally, the orchestration platform should also have “day 2” tools to help you monitor and manage (“fix”) problems that happen in production. I assume things like Kubernetes, the Docker/Moby constellation of things, Mesosphere, etc. fit here.
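To make that hand-off concrete, here’s a minimal sketch of declaring a desired state and giving it to an orchestrator, assuming the official Kubernetes Python client, a cluster reachable via your kubeconfig, and a made-up container image and labels (all my own assumptions, not anything specific to Docker or Pivotal). You describe what you want running (image, replicas, port); the orchestrator handles placement, wiring, and ongoing health-checks:

```python
# Hand a desired state to an orchestrator (Kubernetes, in this sketch).
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig

container = client.V1Container(
    name="web",
    image="example/web-app:1.0",  # hypothetical image name
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the orchestrator keeps three healthy copies running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```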

People are obsessed with container orchestration now and it’s pretty much all anyone talks about. I think all this is what’s becoming known as “CaaS” – Containers as a Service.

(On this next section, I’m extremely monetarily biased, of course.) A cloud platform either has or depends on an orchestration layer, but adds in integrated middleware, ALM tools (from basics like “cf push” on up), and an overall programming and deployment model with all the tools and enforcements. Heroku is the classic example here in public cloud, and now Cloud Foundry (CF) has taken over this model in public and private cloud, the second of which (it seems) is where most of the usage and money is, at least in the enterprise space. I’d argue that CF is the enterprise market leader (by revenue at least, but increasingly by penetration in the F500 – while Pivotal has impressive numbers, throw in the other CF distros and it’s even larger, no doubt); at the very least, it has the highest growth and the most enterprise production usage. That all depends on how you slice it, and of course my slicing favors me.

A cloud platform “pulls together” everything into a fully working “cloud” that deploys and provisions the servers, builds/maintains/deploys the containers, takes care of your networking configuration and concerns (inc. firewalls, etc.), and configures/manages all the middleware needed (e.g., “I want a database” means you just ask for it, instead of having to configure it, make container images of it, and specify how it all works together).

The end goal of a cloud platform is the original end-goal of a PaaS: developers don’t have to “setup” any of the infrastructure or, really, middleware (databases, queues, etc.) that they use: they just write the “business logic” of their applications.

All this standardization is technically “restrictive” (developers can’t just install anything they download off the Internet, it has to be integrated into the platform). This is why we often call this model “opinionated,” but it follows the same contract/promises model that Google SREs follow: we promise we can support your applications in production if you use only the things we support, otherwise it’s all on you.

However, the benefit of such opinions is a huge jump in productivity, as we see at all our customers: one Pivotal customer manages 1,000+ applications (all angled toward very frequent, DevOps-style releases for fast feedback loops and all that small-batch stuff) with just 4 PCF operations staff, etc.

Our DIY white paper makes the case that snow-flaking this all out is a bad idea. At the very least, if you build your own platform, you should try to have just one used organization-wide.

In comparing CaaS and cloud platform, the key distinction to me is that a cloud platform bundles and integrates together all your middleware and “services” frameworks. For example, if you want to do microservices with all the bulkheads and such, that functionality should be built into the cloud platform – you shouldn’t have to go read up on how to set most of that up. PCF, of course, has Spring Cloud and more for that. All of the systems management tools (things used in production to detect and fix problems) should also be built in, or the cloud platform should be instrumented so deeply that third-party tools can do the managing as well.

Now, these two categories are likely to converge, and then the discussion will just be which cloud platforms are more featureful and better. It’ll be like battling Java application servers.

I haven’t made one of my own “burger” stacks of all this in a long time, but I think (again, highly biased) the ones we use for PCF are pretty good.

More

In case you don’t know, working at Pivotal, I obviously have a stake in how all this turns out, so I’m biased on multiple angles of the above whether I want to be or not. 

Picking off the slow-movers: $15bn for tech PE now sloshing around at Silver Lake, more to come

Silver Lake plans to announce on Tuesday that it has closed its fifth buyout fund at $15 billion, one of the biggest ever dedicated to technology deals. That exceeds the $12.5 billion fund-raising target that the firm had previously aimed for and brings the firm’s total assets and committed capital to about $39 billion.

They seem to get good returns:

Silver Lake’s fourth fund, with $10.5 billion under management, currently boasts returns of nearly 31 percent, according to the data provider PitchBook.

Meanwhile, as Dan Primack mentioned, you can expect $100bn from SoftBank.

What this means is that more older, lower-growth software companies will be taken private. More than likely, their day-to-day operations will be optimized to get their cash flow fixed up and increase profits. These companies can then act as cash machines and find some exit after the PE owners “fix” management and operations problems at the company.

That usually means consolidation, which results in firing people, but also fixing stubborn “frozen middle” problems that have prevented each product line from evolving and getting a better ongoing product/market fit, meaning: being something that customers want to use and keep buying. There can also just be a lot of “bloat” in older product lines, esp. when it comes to effective product management, marketing, and developers following old, slow, but comfortable processes.

And, sometimes, as you see at IBM, you just have to shut down old businesses in favor of building new ones. This means a top-line revenue hit, which means slowing or killing quarterly growth. As IBM has been demonstrating for 20 quarters, when you’re public, ain’t nobody got time for that. In theory, when private, you can choose that option.

As Brenon at 451 has noted, going-private deals like these are growing much more than “corporate” acquisitions (like when Microsoft, Cisco, IBM, etc. buy a company to integrate into their product portfolio rather than optimize the company as discussed here). It doesn’t always work, but that “nearly 31 percent” return indicates that it works more than often enough.

Link

With no competition, government websites often have no incentive to be good

In contrast to agile, private-sector companies, the public sector does not face any pressure from competition. When it comes time to renew your license, there is only one place for you to do that: and, unfortunately for Americans, that’s the DMV. With no competitive forces, government agencies do not have to innovate or take bold risks when it comes to digital.

And, as ever, being smart about using updated tools and new methods yields huge productivity results:

While running technology for Obama’s WhiteHouse.gov, open-source solutions enabled our team to deliver projects on budget and up to 75% faster than alternative proprietary-software options. More than anything, open-source technology allows governments to utilize a large ecosystem of developers, which enhances innovation and collaboration while driving down the cost to taxpayers.

While open source has a different cost dynamic, I’d suggest that simply switching to new software, to get the latest features and the mindset that the software imbues, gives you a boost. Open source, when picked well, will come with that community and an ongoing focus on updates; older software that has long been abandoned by the community and vendors will stall out and become stale, open or not.

With most large organizations, and especially government, simply doing something will give you a huge boost in all your KPIs in the short term. Picking a thriving, vibrant stack is critical for long-term success. Otherwise, five or ten years from now, whether using open or closed source, you’ll end up in the same spot: dead in the water and sucking.

Link

Pivotal Conversations: How microservices enable DevOps, with Josh Long by Pivotal Conversations

It was a pretty good episode:

In preparation for his DevOpsDays Atlanta talk, Josh and Coté (well, mostly Coté) talk about the relationship between microservices and DevOps. They use the CAMS framing to go over how microservices could provide the architectural requirements to make DevOps possible.

Coders work from home more often than those in other jobs

In 2015, an estimated 300,000 full-time employees in computer science jobs worked from home in the US. (This figure also includes related professions such as actuaries and statisticians, but the vast majority are programmers.) Although not the largest group of remote employees in absolute numbers, that’s about 8% of all programmers, which is a significantly larger share than in any other job category, and well above the average for all jobs of just under 3%.

8% is not really that much, but the proportion versus other jobs is large, “more than double.” Of course, doubling, even tripling, small numbers doesn’t really get you that far.
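A quick back-of-the-envelope on those figures (the 300,000, 8%, and roughly 3% numbers come from the quote above; the rest is just arithmetic):

```python
# Rough check of the remote-work proportions quoted above.
remote_workers = 300_000     # full-time remote workers in computer/math occupations
remote_share = 0.08          # ~8% of that job category works from home
all_jobs_share = 0.03        # just under 3% across all jobs

implied_total_in_category = remote_workers / remote_share
ratio_vs_all_jobs = remote_share / all_jobs_share

print(f"implied workers in the category: ~{implied_total_in_category:,.0f}")  # ~3.75 million
print(f"remote share vs. all jobs: ~{ratio_vs_all_jobs:.1f}x")                # ~2.7x, i.e. "more than double"
```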

Link

DIUx working on streamlining IT projects at the DoD

Since May 2016, DIUx has completed 21 contracts using other transaction (OT) authority and the average time is 78 days, Shah said at the New America Foundation Future of War summit in Washington.

The mission of DIUx, he said, “is to do agile culture change.…We are never going to be the acquisition arm of the Department of Defense, we’re not the R&D arm of the department.”
DIUx has so far comprised $42 million in program funding, which Shah characterized as a “rounding error of a rounding error” of the DOD budget.

Hey, they’re trying over there in the government. It ain’t easy. I’ve met with some of the folks there, and they sure seem genuine about fixing things up and curious to work more closely with the civilian IT world.

When I meet with military people they use the word “agile” over and over: meaning, they’re incredibly interested in modernizing. It’s just the tiny matter of figuring out how to get from here to there.

Link

Frequent flyer programs drive billions(?!) in revenue

Delta Air Lines Inc., the world’s second-largest carrier, said it expects that its American Express partnership will yield $4 billion in revenue per year by 2021, rising by more than $300 million annually until then. Those sums translate to a very high margin of profit, Delta executives have acknowledged, but they’ve declined to specify further. At an investor presentation on March 29, Alaska Air Group Inc. said its Mileage Plan relationship with Bank of America will account for $900 million in annual cash flow, once the airline has fully combined with Virgin America Inc.

Billions per year seems crazy, but I assume they’re not lying.

Link