“We’re not hand-crafting dovetail joints here. To be ethical engineers in a hyperscale world we need to reason critically about what we build, on a feature-by-feature basis, and stand by our reasoning if it is sound.”
Original source: Ethics? Yeah, that’s great, but do they scale?
“By deploying applications to cloud.gov, agencies can take care of 269 of the 325 controls required by a moderate-impact system, significantly reducing the compliance burden and the time it takes to receive an ATO.”
Original source: CI/CD is possible
As ever, just getting a build pipeline in place is the big, important first step that most teams need to take: “Continuous integration remains a top priority for development teams with 63 percent of respondents saying they plan to invest in CI tools in 2018. Nearly half of all respondents (47 percent) strongly agree that practicing continuous integration alleviates blockers in the development process. In addition to CI, automation is increasingly top of mind for software professionals as half of respondents report delays in testing, while 58 percent report delays in planning. As a result, 36 percent of IT managers plan to invest in automation tools in 2018 to alleviate these pain points.”
Vendor survey from GitLab of 5,296 software developers, CTOs, and other software professionals.
Original source: DevOps success is about culture, culture, culture
“The rush is on for enterprises to build and deploy better software faster, and that’s going to drive a doubling of PaaS adoption — both on premises and in the cloud — in the next 18 months,” Bartoletti said. “In some industries, like financial services and retail, leaders are already differentiating by how well they release high-quality experiences, and many of them are using a Cloud Foundry- or Kubernetes-based container development platform to speed up even further.”
Original source: App development teams brace for big change in 2018
‘Datical automatically examines SQL scripts created by developers and aligns them with a common object model. “We create a package so you have an immutable artifact that goes from development to test to production just like your app code,” Reeves said. The software checks for inefficiencies, such as the use of multiple indices or joins, and flags them before changing the schema…. Datical’s containerized image can be run with Concourse as part of a testing pipeline to enable application development teams to push application and database changes through the release cycle at the same time. The companies will cross-sell each other’s products, although the arrangement isn’t exclusive, Reeves said.’
Link to original
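The immutable-artifact idea in the quote above is worth pausing on: the database change scripts are packaged once, then the very same package moves through dev, test, and production, so nobody can quietly edit a script between stages. Here’s a minimal from-scratch sketch of that idea (content-hashed manifest inside a zip); the function names and packaging format are my own illustration, not Datical’s actual implementation:

```python
# Sketch of an "immutable database artifact": bundle SQL change scripts
# with a manifest of SHA-256 hashes, so any modification after packaging
# is detectable at every later stage of the pipeline.
import hashlib
import json
import zipfile

def package_changes(sql_scripts, out_path="db-changes.zip"):
    """Zip the scripts plus a manifest of content hashes.

    sql_scripts: dict mapping script filename -> script text.
    Returns the manifest so callers can log or sign it.
    """
    manifest = {name: hashlib.sha256(body.encode()).hexdigest()
                for name, body in sql_scripts.items()}
    with zipfile.ZipFile(out_path, "w") as zf:
        for name, body in sql_scripts.items():
            zf.writestr(name, body)
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
    return manifest

def verify_package(path):
    """Recompute each script's hash; False if anything was tampered with."""
    with zipfile.ZipFile(path) as zf:
        manifest = json.loads(zf.read("manifest.json"))
        return all(hashlib.sha256(zf.read(name)).hexdigest() == digest
                   for name, digest in manifest.items())
```

A real tool would add signing, ordering of migrations, and rollback scripts, but the core guarantee — test exactly what you ship — is just this.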
Mirantis therefore thinks it can do a similar job for other combinations of open source software, and that users will welcome such oft-updated bundles: anything that makes developers more productive and infrastructure more secure should be welcome.
Source: Mirantis eyes continuous integration of all the things
The three new Puppet products based on Distelli’s technology are Puppet Pipelines for Apps, which automates key application development and delivery tasks; Puppet Pipelines for Containers, which enables users to build Docker images from a repository and deploy them to Kubernetes clusters; and Puppet Container Registry, which gives developers a comprehensive view of their Docker images across all repositories.
Source: Puppet Launches Barrage Of Products To Enable ‘New Age’ Of Software Automation And DevOps
I always like his focus on speeding up the release cycle as a forcing function for putting continuous integration in place, both of which improve how an organization builds software:
I try not to get too caught up in the names. As long as the changes are helping you improve your software development and delivery processes then who cares what they are called. To me it is more important to understand the inefficiencies you are trying to address and then identify the practice that will help the most. In a lot of respects DevOps is just the agile principle of releasing code on a more frequent basis that got left behind when agile scaled to the Enterprise. Releasing code in large organizations with tightly coupled architectures is hard. It requires coordinating different code, environment definitions, and deployment processes across lots of different teams. These are improvements that small agile teams in large organizations were not well equipped to address. Therefore, this basic agile principle of releasing code to the customer on a frequent basis got dropped in most Enterprise agile implementations. These agile teams tended to focus on problems they could solve like getting signoff by the product owner in a dedicated environment that was isolated from the complexity of the larger organization.
You can hide a lot of inefficiencies with dedicated environments and branches, but once you move to everyone working on a common trunk and more frequent releases those problems will have to be addressed. When you are building and releasing Enterprise systems at a low frequency your teams can brute force their way through similar problems every release. Increasing the frequency will require people to address inefficiencies that have existed in your organization for years.
On how organization size changes your managerial tactics:
If it is a small team environment, then DevOps is more about giving them the resources they need, removing barriers, and empowering the team because they can own the system end to end. If it is a large complex environment, it is more about designing and optimizing a large complex deployment pipeline. These are not the type of challenges that a small empowered team can or will address. It takes a more structured approach with people looking across the entire deployment pipeline and optimizing the system.
The rest of the interview is good stuff. Also, I reviewed his book back in November; the book is excellent.
From Gary Gruver, one of the better “how to do agile and DevOps stuff in large organizations” authors:
For these organizations implementing DevOps principles (the ability to release code to the customer on a more frequent basis while maintaining or improving stability and quality) is more about creating a well-designed deployment pipeline that builds up a more stable enterprise system on a regular basis so it is much easier to release the code on a more frequent basis. This is done by creating a deployment pipeline that integrates the code across the enterprise system on a much more frequent basis with automated testing to ensure that new functionality is not breaking existing code and the code quality is kept much closer to release quality.
From my perspective this approach to DevOps is rather different from the more unicorn-type approach described in this article. It does, though, address the biggest opportunity for improvement in most large, traditional organizations, which is coordinating the work across teams. In these cases the working code in the deployment pipeline is the forcing function used to coordinate work and ensure alignment across the organization. If the code from different teams won’t work together or it won’t work in production, the organization is forced to fix those issues immediately before too much code is written that will not work together in a production environment. Addressing these issues early and often in a deployment pipeline is one of the most important things large traditional organizations can and should be doing to improve the effectiveness of their development and deployment processes.
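Gruver’s “forcing function” boils down to one mechanical rule: every stage of the pipeline must pass before the next one runs, so integration failures surface immediately instead of piling up until release time. A minimal sketch of that rule (the stage names and gates are illustrative, not from any specific CI tool):

```python
# Minimal deployment-pipeline runner: run stages in order, stop at the
# first failure. The hard gate is what forces teams to fix integration
# problems now, before more incompatible code is written.

def run_pipeline(stages):
    """stages: list of (name, check) pairs, where check() returns bool."""
    for name, check in stages:
        if not check():
            return f"FAILED at {name}: fix before writing more code"
    return "release candidate: all integration gates passed"

# Illustrative gates for an enterprise system built from many teams' code.
stages = [
    ("build",             lambda: True),  # everything compiles on the common trunk
    ("unit tests",        lambda: True),  # each team's code works in isolation
    ("integration tests", lambda: True),  # the teams' code works *together*
    ("deploy to staging", lambda: True),  # it works in a production-like environment
]

print(run_pipeline(stages))
```

The interesting property isn’t the loop, it’s the frequency: run this on every trunk commit and cross-team breakage is caught within hours, which is exactly the coordination mechanism Gruver describes.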
Source: DevOps killing outsourcing? Another point of view – DevOps.com