Link: AT&T’s ‘Public-Cloud First’ Proclamation a Stake in the Ground

For AT&T to now start the process of adopting the public cloud for what are admittedly “non-network applications” is a big move. It shows that even the stodgiest industry verticals are on board with moving to the public cloud. This will provide a significant new revenue stream for those cloud providers, while at the same time allowing for the greater scale that could drive down prices.

Source: AT&T’s ‘Public-Cloud First’ Proclamation a Stake in the Ground

How Sainsbury’s uses AWS

On Sainsbury’s move to AWS and its use of serverless and DevOps:

“Our relationship with AWS really kicked off at the point we decided to take our groceries online business and rebuild it in the cloud. This was effectively taking a WebSphere e-commerce monolith with an Oracle RAC database, and moving it, and modularising it, and putting it into AWS,” Sainsbury’s CIO Phil Jordan told the audience.

“That movement of RAC to RDS and that big database migration was all done using AWS services, and now we have a fully fledged cloud-native-ish service that runs groceries online across all of our business. Today, we run about 80 per cent of our groceries online with EC2, and 20 per cent is serverless.”
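Jordan doesn’t name the exact services, but AWS’s Database Migration Service is the usual tool for that kind of RAC-to-RDS move. Here’s a rough, hypothetical sketch of what kicking off such a migration looks like with boto3; the ARNs and schema name are made up, not anything from Sainsbury’s:

```python
import json
import boto3

# Hypothetical ARNs: in a real migration these point at the Oracle
# source endpoint, the RDS target endpoint, and a DMS replication instance.
SOURCE_ARN = "arn:aws:dms:eu-west-1:123456789012:endpoint:oracle-rac-source"
TARGET_ARN = "arn:aws:dms:eu-west-1:123456789012:endpoint:rds-target"
INSTANCE_ARN = "arn:aws:dms:eu-west-1:123456789012:rep:migration-instance"

dms = boto3.client("dms")

# Migrate the existing data, then keep replicating changes (CDC)
# so the cutover window stays small.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="groceries-rac-to-rds",
    SourceEndpointArn=SOURCE_ARN,
    TargetEndpointArn=TARGET_ARN,
    ReplicationInstanceArn=INSTANCE_ARN,
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "all-grocery-tables",
            "object-locator": {"schema-name": "GROCERIES", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```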

In total, the company migrated more than 7TB of data into the cloud. As a result, or so Jordan claimed, the mart spends 30 per cent less on infrastructure, and regularly sees a 70-80 per cent improvement in the performance of website interactions and batch processing. So far, there have been no “major” outages, said the CIO, without defining “major”.

Moving to the cloud has also helped Sainsbury’s into the warm infinity-looped embrace of DevOps. The company has moved from five to six releases per year to multiple releases per day, said the CIO.

Source: Holy high street, Sainsbury’s! Have you forgotten Bezos’ bunch are the competition?

Check out their talk; scrub to about 24:10.

Related, the Sainsbury’s tech blog is pretty good.

And, from elsewhere and unrelated to Sainsbury’s, a clearer notion that “serverless” forces an event-driven architecture:

So why can’t we just write an event-driven system for our corporate infrastructure? Our world is event-driven, and generally, we reduce the complexity of our systems by just defining events. “When there’s an access to the FTP service for upload … do this …”, “When there’s an access on a column in a database … do this …”. In an IoT world, with billions of disparate devices, it is the only way to go. And if we are to create truly citizen-focused systems, we need to define the events which trigger actions. How many organisations could crisply define the operation of their infrastructure and all the interactions that happen?

Rather than just defining a server running Exchange, we could have some code which triggers on “When Bob logs in, open up his mailbox”, or “When Alice changes the marks for her students, send an update to the exams office”. This is a world where the complexity of servers moves us towards “The Cloud” as a computation resource. In this way we write rules based on events and enact them in the Cloud. There’s no concept of running Exchange or web servers.
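To make that concrete, here’s a minimal sketch of what the “Alice changes the marks” rule might look like as a Lambda handler. The event shape, topic name, and trigger are all hypothetical; assume something like an EventBridge rule watching the student-records system:

```python
import json
import boto3

sns = boto3.client("sns")

# Hypothetical topic the exams office subscribes to.
EXAMS_OFFICE_TOPIC = "arn:aws:sns:eu-west-1:123456789012:exams-office-updates"

def handler(event, context):
    """Fires on a hypothetical 'marks changed' event from the
    student-records system; there is no server sitting around
    polling for changes."""
    detail = event["detail"]
    if detail.get("change-type") == "marks-updated":
        sns.publish(
            TopicArn=EXAMS_OFFICE_TOPIC,
            Subject=f"Marks updated by {detail['teacher']}",
            Message=json.dumps(detail["marks"]),
        )
```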

Link: Comic Relief switched from multi-cloud to serverless with AWS and saw a 93% cost reduction

As a team, going serverless has given us a lot more velocity. We can rapidly release, we can test the same infrastructure we’re deploying in production in a pull request environment or a staging environment, and we can rapidly retest ideas. And every developer can do that because we’re using Lambda to load test, so the power it gives you as a developer and engineering team is pretty amazing.
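The load testing bit is a nice trick: instead of provisioning a fleet of load-generating boxes, you fan out invocations of a load-generating function. A minimal sketch with boto3, assuming a hypothetical “load-test-worker” Lambda that hammers a target URL:

```python
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lam = boto3.client("lambda")

def fire(worker_id):
    # Each async invocation of the (hypothetical) worker function
    # sends some number of requests at the target environment.
    return lam.invoke(
        FunctionName="load-test-worker",  # hypothetical function name
        InvocationType="Event",           # async: fan out, don't wait
        Payload=json.dumps({
            "target": "https://staging.example.org/donate",
            "requests": 500,
            "worker": worker_id,
        }),
    )

# 200 workers x 500 requests each = 100k requests of load,
# with no load-test fleet to provision or tear down afterwards.
with ThreadPoolExecutor(max_workers=50) as pool:
    list(pool.map(fire, range(200)))
```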

Source: Comic Relief switched from multi-cloud to serverless with AWS and saw a 93% cost reduction

Link: Standard Bank contracts with AWS for mass migration to the cloud

The bank has selected AWS as its preferred cloud provider, with the intention of porting its production workloads, including its customer-facing platforms and strategic core banking applications, to the cloud.

From what I can tell talking with banks, they’re over that 2010 thing of “public cloud isn’t secure enough.” Now it’s a scramble to move their shit up there.

Source: Standard Bank contracts with AWS for mass migration to the cloud

Link: AWS’s Snowball Edge

A private cloud box from Amazon:

The Snowball Edge Compute Optimized with GPU includes an on-board GPU that you can use to do real-time full-motion video analysis & processing, machine learning inferencing, and other highly parallel compute-intensive work. You can launch an sbe-g instance to gain access to the GPU.

It has Lambda and EC2 capability, targeted at data manipulation and getting it into (and out of?) AWS. There’s a lot of IoT stuff in AWS now, opening their platform up to things like smart cities, power grid management, and thermostats and lights and shit.
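Compute on the device is driven through an EC2-compatible endpoint on the box itself, rather than the normal regional API. A hypothetical sketch of launching one of those GPU instances with boto3; the device IP, AMI ID, and instance size are all placeholders:

```python
import boto3

# The device exposes an EC2-compatible endpoint on the local network;
# the IP here is a placeholder for your own Snowball Edge.
ec2 = boto3.client(
    "ec2",
    endpoint_url="http://192.0.2.10:8008",  # the device, not an AWS region
    region_name="snow",  # placeholder; calls go to the box, not a region
)

# Launch a GPU-backed instance on the box itself.
resp = ec2.run_instances(
    ImageId="s.ami-0123456789abcdef0",  # placeholder: an AMI loaded onto the device
    InstanceType="sbe-g.4xlarge",       # assumed size in the GPU-equipped sbe-g family
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```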
Original source: AWS’s Snowball Edge

Link: This is the Amazon everyone should have feared — and it has nothing to do with its retail business

“the massive online retailer once again posted its largest quarterly profit in history — $2.5 billion for the quarter — on the back of two businesses that were afterthoughts just a few years ago: Amazon Web Services, its cloud computing unit, as well as its fast-growing advertising business.”

Good charts, too.
Original source: This is the Amazon everyone should have feared — and it has nothing to do with its retail business