You probably still want to know who actually built a given container and what’s running in it.
From an interview with Jeffrey Hammond and Marc Cecere on developer skills gaps. Here, the trend of starting projects with people working together in person and then (slowly) migrating the work back “home”:
[Hammond:] One of the things I think you see is it– so many companies have used the words, partnering model, for years, and it’s been more or less lip service. But you do see a little bit more of a partnering and more highly tailored model. As an example, if you look at some of the projects that we see companies like a Pivotal or an IBM running these days, they may actually start in a garage that is near the organization so in San Francisco, or in London, or in New York City, but they start off-site. And the client’s developers, the client’s business personnel, go to those rooms, those war rooms if you will, and start work. Now, over time, some of that work may migrate offshore, or migrate back to the client. In the case of Pivotal, they’ll run multiple teams through their centers. Allstate is a really good example where they have I think over 100 developers and have kind of been through that process now, and they’ve drained their local talent pool. But it’s much less a, “This is the work we need to do and here’s the requirements and here’s the scope and let’s put this out to bid.” It’s much more a business transformation or a re-engineering type of project that is very high-touch. And I don’t see companies being able to do that if they’re not at least down the street from their clients and from their development shops. So I think it changes the nature of the types of engagement. I think it’s one of the reasons that you’ve seen so many of the large systems integrators buying agency talent as quickly as they can, because when you look at the sort of design experience techniques that are used, journey mapping, ethnography, those sorts of things, at least right now they still tend to be very custom – almost a manual process. You see sticky notes up on walls. You see war rooms. You see an environment that is kind of hard to capture from a remote, tool-based sort of delivery model.
Source: The Battle For Talent, Forrester
I assume this is across distros, including use of just the open source stack.
An overview of how Bloomberg is looking at the likes of Pivotal Container Service:
“Many Kubernetes distributions are good on day one, when they’re first deployed,” said Andrey Rybka, technical architect in the office of the CTO at Bloomberg, the global finance, media and tech company based in New York. “But what happens on day two, when something fails? Kubernetes doesn’t [automatically] address things like failures at the physical node level.”
The roadmap for Cloud Foundry Container Runtime includes support for stateful applications based on the StatefulSets feature that became available with Kubernetes 1.7 in June. The foundation also plans to integrate the Istio project, founded by IBM, Google and Lyft in May, which helps to manage network communications between microservices.
The three new Puppet products based on Distelli’s technology are Puppet Pipelines for Apps, which automates key application development and delivery tasks; Puppet Pipelines for Containers, which enables users to build Docker images from a repository and deploy them to Kubernetes clusters; and Puppet Container Registry, which gives developers a comprehensive view of their Docker images across all repositories.
Consider the case of the connected cows.
The grand unified, cloud/AI/IoT/serverless theory:
That was the essence of the Build keynote: The cloud interprets IoT telemetry, in real time, with AI. And that AI can, in turn, instruct other IoT devices to do things based on its interpretation.
451 Research’s data points suggest that some workloads are likely to remain on private cloud regardless of any disruptor’s attack. And even with hungry cloud providers eyeing private workloads, growth is likely to continue across all cloud models, not just public cloud.
Whole bunch of survey numbers tryin’ to figure out how many workloads will stay on private cloud.
Paying premium bucks to hire influencers for the big cloud migration wars.
Source: What is Microsoft Doing?
Oracle all over that public Kubernetes service.
The single biggest one is the move to public cloud, and this is where Docker is focused today. This is the number one area that we are putting all our investment in. We have this great container platform that allows you to do a lot of things, but just like any company, we need to pick an area of focus and for us, helping customers take legacy apps, moving them to the Docker platform, and allowing them to run it on any infrastructure because it’s hybrid cloud world, does a couple of things — it drives massive savings for customers, typically 50 percent cost reduction in a cost structure, but it also opens up real opportunities for the customer and our partners to innovate within that environment
Also, this is an insanely good example of a fluffy leather chair conference interview, plus, The Channel filter.
Where does the 50 percent savings come from? A few different areas. The biggest is, honestly, in the mass reduction in number of VMs [virtual machines] and that’s not good or bad, it’s just the reality. The other is that there is a massively increased density factor on compute, and so we can put a lot more workloads on a fewer number of servers. If you are a [company like] Nestle, and you are going to take a bunch of information and business systems and move it to the public cloud, doing a one-to-one move is not necessarily all that advantageous.
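A back-of-the-envelope sketch of where that 50 percent could come from. The numbers here are hypothetical, not from the interview; the only assumption carried over is that container density lets you run the same workloads on roughly half the servers:

```python
# Hypothetical fleet, not Docker's figures: one VM per app today,
# consolidated onto containers at an assumed 2x density.
apps = 400
vms_per_host_before = 10
hosts_before = apps / vms_per_host_before        # 40 hosts, one VM per app

containers_per_host = 20                         # assumed 2x density
hosts_after = apps / containers_per_host         # 20 hosts

savings = 1 - hosts_after / hosts_before
print(f"hosts: {hosts_before:.0f} -> {hosts_after:.0f}, "
      f"infrastructure savings: {savings:.0%}")
```

The claimed savings is entirely a function of the density multiplier, which is why a one-to-one VM move to public cloud (as in the Nestle example) wouldn’t capture it.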
When I joined Docker I had a good conversation with someone over at Microsoft that said ‘I’d love to partner with you.’ His view was, the more people move to Docker, the more business they get on Azure. In fact, for every dollar we generate, he generates $7.
Momentum and the EBIT(A) chase:
we’re growing at 150 percent-plus year over year and expect that to continue for at least another few years. I’m hoping to get to profitability in mid-2019, and that’s important
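Worth noting what “150 percent-plus year over year” compounds to, since it’s easy to misread as merely 1.5x. With a made-up starting revenue index (Docker hasn’t disclosed one):

```python
# 150% year-over-year growth means each year is 2.5x the last.
revenue = 1.0      # arbitrary index: year-0 revenue = 1.0
growth = 1.5       # 150% growth rate => (1 + 1.5) = 2.5x multiplier
for year in range(1, 4):
    revenue *= (1 + growth)
    print(f"year {year}: {revenue:.2f}x starting revenue")
```

Three years of that pace is roughly 15x the starting point, which is the kind of curve you’d need to be on for a mid-2019 profitability target to be plausible.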
The details of the acquisition were not disclosed, but we would be surprised if Cisco made back any of the $180m it paid for Composite Software in 2013. Cisco did at least manage to grow the data virtualization business during its ownership. The company told us in September 2016 that it had 250 paying customers for what was then Cisco Data Virtualization (up from 200 at the time of its acquisition of Composite Software). The deal is expected to close in the coming weeks.
Some momentum updates.
Lots of growth, it’s all just public cloud, though.
“We’re seeing a big trend among customers to move cloud stacks inside customer’s data center for security, performance and governance,” Wang told TechCrunch.
There’s not really any quantitative (market share, penetration, or surveys – all pretty easy to lmgtfy) bits here, but I’d take it more as a slightly eyebrow-raising thing along the lines of “if even TechCrunch whiffs out private cloud, maybe there’s some fire there.”
Plus, analyst quotes.
The $2.7 million contract involved in the program is between the Air Force and a Silicon Valley company, Pivotal Inc., that has often worked with large corporations such as Ford and Home Depot. The effort is expected to reach beyond the operations center in Qatar to eventually assist in similar U.S. military facilities across the world.
It was a project to digitize refueling aircraft, from the previously analog approach:
The visitors, part of then-Defense Secretary Ash Carter’s new Defense Innovation Board, were surprised to see that the Air Force used a white marker board to plan the elaborate daily effort to refuel aircraft involved in the war in Iraq and Syria, said Joshua Marcuse, the board’s executive director.
The next project will focus on improving the coordination and management of airstrikes. An initial version could be available by next month, and DIUx is hopeful deployed airmen could use it within a few months, Oti said. Other programs planned will focus on compiling analytical data about airstrikes and studying potential targets.
Some BOM’ing of Azure Stack:
Azure Stack is made of two basic components: the underlying infrastructure that customers purchase from one of Microsoft’s certified partners (initially Dell EMC, HPE and Lenovo), and software that is licensed from Microsoft.

The software includes basic IaaS functions that make up a cloud, such as virtual machines, storage and virtual networking. Azure Stack includes some platform-as-a-service (PaaS) application-development features, including the Azure Container Service and Microsoft’s Azure Functions serverless computing software, plus MySQL and SQL Server support. It comes with Azure Active Directory for user authentication. Customers also have access to a wide range of third-party apps from the Azure Marketplace, including OS images from companies like Red Hat and SuSE, and templates that can be installed to run programs like Cloud Foundry, Kubernetes and Mesosphere.

On the hardware side, Azure Stack runs on a hyperconverged infrastructure stack that Microsoft and its hardware vendors have certified. The smallest production-level Azure Stack deployment is a four-server rack with three physical switches and a lifecycle management server host. Individual racks can scale up to 12 servers, and eventually, multiple racks can be scaled together. Dell EMC, HPE and Lenovo are initial launch partners; Cisco plans to offer a certified Azure Stack platform based on its UCS hardware line by the end of 2017, and Huawei will roll out Azure Stack support by the end of 2018.

IDC Data Center Networking Research Analyst Brad Casemore says he believes customers will need to run at least 10 Gigabit Ethernet cabling with dual-port mixing. Converged network interface cards, support for BGP and data center bridging are important too. Microsoft estimates that a full-sized, 12-server rack of Azure Stack can supply about 400 virtual machines with 2 CPUs and 7 GB of RAM, with resiliency.
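Dividing Microsoft’s capacity estimate out per server gives a feel for the commitment per node. This is rough arithmetic on the figures above only (400 VMs of 2 vCPUs / 7 GB across a 12-server rack), ignoring whatever headroom the resiliency guarantee actually reserves:

```python
# Per-server share of the estimated full-rack capacity.
servers = 12
vms = 400
vcpus_per_vm = 2
ram_gb_per_vm = 7

total_vcpus = vms * vcpus_per_vm      # 800 vCPUs across the rack
total_ram_gb = vms * ram_gb_per_vm    # 2,800 GB across the rack
print(f"per server: ~{total_vcpus / servers:.0f} vCPUs, "
      f"~{total_ram_gb / servers:.0f} GB RAM committed")
```

Call it roughly 67 vCPUs and 233 GB of RAM committed per server, which lines up with the dense, hyperconverged boxes the certified partners are shipping.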
And Lydia explains the “people want private cloud ¯\_(ツ)_/¯” angle:
“This is definitely a plus in the Microsoft portfolio,” says Gartner VP and Distinguished Analyst Lydia Leong, but she says it’s not right for every customer. “I don’t think this is a fundamental game-changer in the dynamics of the IaaS market,” she notes, but “this is going to be another thing to compel Microsoft-centric organizations to use Azure.”
Leong expects this could be beneficial for customers who want to use Azure but are prevented from using the public cloud for some reason, such as regulations, data sensitivity, or the location of their data. If a customer has sensitive data they’re not willing to put in the public cloud, they could deploy Azure Stack behind their firewall to process that data, then relatively easily interact with applications and data in the public cloud.
The writing in this book is good, and I’m always a sucker for noir.
But it gets tiresome after a while, all the balls-out crazy stuff and topics.
There’s a lot to study about fiction dynamics here, though, fueled by the picador plotting: lots of interesting characters, lots of mini-plots; pairing characters; the weak male/strong female trope; unlimited budget; a snarky but weary direct-address tone to the reader; maybe world building, but just as the back-story for the various characters you meet (the serial killer on the airplane, the Roanokes, but the Bob character is ignored/anemic in this respect); social commentary as asides (from Trix, often); sex for titillation.
Obviously I liked it enough to quickly read it.
Good round-up of AWS’s private cloud stuff:
- AWS added on-premises support to its CodeDeploy continuous-delivery service in 2015.
- AWS introduced the Snowball storage server companies could use to copy data and then ship it to the cloud in 2015.
- AWS added on-premises support to its EC2 Run Command tool for running shell scripts on many machines at once in 2016.
- AWS unveiled the Snowmobile truck for copying even larger supplies of data and then hauling it off to Amazon in 2016.
- This past November AWS released a container image of its Amazon Linux server operating system for use on corporate servers.