Mirantis eyes continuous integration of all the things

Mirantis therefore thinks it can do a similar job for other combinations of open source software, and that users will welcome such oft-updated bundles: anything that makes developers more productive and infrastructure more secure should be welcome.

Source: Mirantis eyes continuous integration of all the things

Puppet’s new pipeline & Kubernetes tools

The three new Puppet products based on Distelli’s technology are Puppet Pipelines for Apps, which automates key application development and delivery tasks; Puppet Pipelines for Containers, which enables users to build Docker images from a repository and deploy them to Kubernetes clusters; and Puppet Container Registry, which gives developers a comprehensive view of their Docker images across all repositories.

Source: Puppet Launches Barrage Of Products To Enable ‘New Age’ Of Software Automation And DevOps
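
To make that concrete: here’s a rough sketch of the generic build-and-deploy loop a product like Pipelines for Containers automates, written against the plain docker and kubectl CLIs. The image name, registry, and Deployment name are all hypothetical, and this is not Puppet’s API; the product’s pitch is running this loop on every commit and tracking the resulting images across repositories.

```python
# Hypothetical sketch of the build-and-deploy loop a container pipeline runs
# on each commit. Assumes docker and kubectl are installed and authenticated;
# registry.example.com, myteam/myapp, and deployment/myapp are made-up names.
import subprocess

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop the pipeline if any step fails

def build_and_deploy(git_sha: str) -> None:
    image = f"registry.example.com/myteam/myapp:{git_sha}"
    run("docker", "build", "-t", image, ".")  # build the image from the repo checkout
    run("docker", "push", image)              # publish it to the registry
    # Repoint the running Deployment at the new image; Kubernetes rolls it out.
    run("kubectl", "set", "image", "deployment/myapp", f"myapp={image}")

if __name__ == "__main__":
    build_and_deploy("abc1234")  # normally the pipeline passes in the commit SHA
```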

Gary Gruver interview on scaling DevOps

I always like his focus on speeding up the release cycle as a forcing function for putting continuous integration in place, both leading to improvements in how an organization builds and delivers software:

I try not to get too caught up in the names. As long as the changes are helping you improve your software development and delivery processes then who cares what they are called. To me it is more important to understand the inefficiencies you are trying to address and then identify the practice that will help the most. In a lot of respects DevOps is just the agile principle of releasing code on a more frequent basis that got left behind when agile scaled to the Enterprise. Releasing code in large organizations with tightly coupled architectures is hard. It requires coordinating different code, environment definitions, and deployment processes across lots of different teams. These are improvements that small agile teams in large organizations were not well equipped to address. Therefore, this basic agile principle of releasing code to the customer on a frequent basis got dropped in most Enterprise agile implementations. These agile teams tended to focus on problems they could solve like getting signoff by the product owner in a dedicated environment that was isolated from the complexity of the larger organization.

And:

You can hide a lot of inefficiencies with dedicated environments and branches, but once you move to everyone working on a common trunk and more frequent releases, those problems will have to be addressed. When you are building and releasing the Enterprise systems at a low frequency, your teams can brute force their way through similar problems every release. Increasing the frequency will require people to address inefficiencies that have existed in your organization for years.

On how organization size changes your managerial tactics:

If it is a small team environment, then DevOps is more about giving them the resources they need, removing barriers, and empowering the team because they can own the system end to end. If it is a large complex environment, it is more about designing and optimizing a large complex deployment pipeline. This is not the type of challenge that a small empowered team can or will address. It takes a more structured approach with people looking across the entire deployment pipeline and optimizing the system.

The rest of the interview is good stuff. Also, I reviewed his book back in November; the book is excellent.

Link

Slow down, cowboy: start with just integrating your code regularly and fixing the bugs you find

From Gary Gruver, one of the better “how to do agile and DevOps stuff in large organizations” authors:

For these organizations, implementing DevOps principles (the ability to release code to the customer on a more frequent basis while maintaining or improving stability and quality) is more about creating a well-designed deployment pipeline that builds up a more stable enterprise system on a regular basis, so it is much easier to release the code on a more frequent basis. This is done by creating a deployment pipeline that integrates the code across the enterprise system on a much more frequent basis, with automated testing to ensure that new functionality is not breaking existing code and the code quality is kept much closer to release quality.

From my perspective, this approach to DevOps is rather different from the more unicorn-type approach described in this article. It does, though, address the biggest opportunity for improvement that exists in most large traditional organizations, which is coordinating the work across teams. In these cases, the working code in the deployment pipeline is the forcing function used to coordinate work and ensure alignment across the organization. If the code from different teams won’t work together, or it won’t work in production, the organization is forced to fix those issues immediately, before too much code is written that will not work together in a production environment. Addressing these issues early and often in a deployment pipeline is one of the most important things large traditional organizations can and should be doing to improve the effectiveness of their development and deployment processes.

Source: DevOps killing outsourcing? Another point of view – DevOps.com
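
To make the forcing function concrete, here’s a minimal sketch of the gate such a pipeline runs on every change: does the code still merge with trunk, and do the automated tests still pass against the integrated result? The git and pytest commands are illustrative assumptions, not Gruver’s tooling; the point is that a failure stops work immediately instead of piling up until release.

```python
# Minimal sketch of an "integrate early and often" gate: a change only counts
# as done if it merges cleanly with trunk and the automated tests pass on the
# combined code. git/pytest are assumed stand-ins for whatever the org uses.
import subprocess
import sys

def sh(*cmd: str) -> int:
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode

def integration_gate() -> bool:
    if sh("git", "fetch", "origin") != 0:
        return False
    # Fail fast if the change no longer merges with everyone else's work.
    if sh("git", "merge", "--no-edit", "origin/main") != 0:
        print("FAIL: change does not integrate with trunk; fix before writing more code")
        return False
    # Fail fast if the integrated system breaks the automated tests.
    if sh("pytest", "-q") != 0:
        print("FAIL: tests broke at integration; fix before writing more code")
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if integration_gate() else 1)
```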

CloudBees launches certification and new private SaaS offering backed by Jenkins 2.0

CloudBees has been focused on Jenkins since 2014, when the company pivoted away from its public PaaS offering. That seems to have been the right move: headcount has grown from 60 to 164 since then, and revenue increased 150% year over year in 2015.

There’s pricing in there too, and some notes on enterprise customers, if you have 451 access.

Source: CloudBees launches certification and new private SaaS offering backed by Jenkins 2.0

Axel Springer | Case Study | Pivotal

“Together, the teams were able to reduce deployment times from 14 hours to 14 minutes, facilitated by Pivotal Cloud Foundry’s integration with Jenkins and Gradle build systems. Since this pilot, Pivotal Cloud Foundry has had zero downtime. It is being maintained by just two operators, using their preferred tools: Logstash, DataDog and PagerDuty. Furthermore, it runs in Axel Springer’s chosen datacenter on European soil.”

Axel Springer | Case Study | Pivotal

CoreLogic | Case Study | Pivotal

14 months down to 6 months, 16 staff down to 8 staff: “[w]hen planning the first product developed on Pivotal Cloud Foundry, CoreLogic allocated a team of 12 engineers with four quality assurance software engineers and a management team. The goal was to deliver the product in 14 months. Instead, the project ultimately required only a product manager, one user experience designer and six engineers who delivered the desired product in just six months.”

CoreLogic | Case Study | Pivotal