From Gary Gruver, one of the better “how to do agile and DevOps stuff in large organizations” authors:
For these organizations, implementing DevOps principles (the ability to release code to the customer more frequently while maintaining or improving stability and quality) is more about creating a well-designed deployment pipeline that builds up a more stable enterprise system on a regular basis, so that it is much easier to release code more frequently. This is done by creating a deployment pipeline that integrates code across the enterprise system much more frequently, with automated testing to ensure that new functionality is not breaking existing code and that code quality is kept much closer to release quality.
From my perspective, this approach to DevOps is rather different from the unicorn-style approach described in this article. It does, however, address the biggest opportunity for improvement in large traditional organizations: coordinating work across teams. In these cases, working code in the deployment pipeline is the forcing function used to coordinate work and ensure alignment across the organization. If code from different teams won’t work together, or won’t work in production, the organization is forced to fix those issues immediately, before too much code is written that will not work together in a production environment. Addressing these issues early and often in a deployment pipeline is one of the most important things large traditional organizations can and should do to improve the effectiveness of their development and deployment processes.
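The forcing-function idea above can be sketched in a few lines: stages run in order, and the first failure halts the pipeline, so integration problems must be fixed before anything downstream (including more code on top of the conflict) proceeds. This is a minimal illustration, not Gruver’s implementation; all stage names and pass/fail outcomes below are hypothetical.

```python
# Minimal sketch of a gated deployment pipeline: each stage must pass
# before the next runs, so cross-team integration problems surface
# immediately instead of accumulating. Stage names are hypothetical.

from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run stages in order; stop at the first failure (the forcing function)."""
    results = []
    for name, check in stages:
        if check():
            results.append(f"{name}: pass")
        else:
            results.append(f"{name}: FAIL - fix before writing more code")
            break  # halt: later stages never run against broken integration
    return results

# Example: the integration stage fails, so deployment is never attempted.
stages = [
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("enterprise integration tests", lambda: False),  # teams' code conflicts
    ("deploy to staging", lambda: True),
]

for line in run_pipeline(stages):
    print(line)
```

The point of the sketch is the `break`: real pipelines (Jenkins, GitHub Actions, and similar tools behave this way by default) stop at the first failed stage, which is exactly what forces the organization to address integration issues early and often.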