Making mainframe applications more agile, Gartner – Highlights

In a report giving advice to mainframe folks looking to be more Agile, Gartner’s Dale Vecchio and Bill Swanton give some pretty good advice for anyone looking to change how they do software.

Here are some highlights from the report, entitled “Agile Development and Mainframe Legacy Systems – Something’s Got to Give”:

Chunking up changes:

  1. Application changes must be smaller.
  2. Automation across the life cycle is critical to being successful.
  3. A regular and positive relationship must exist between the owner of the application and the developers of the changes.

Also:

This kind of effort may seem insurmountable for a large legacy portfolio. However, an organization doesn’t have to attack the entire portfolio. Determine where the primary value can be achieved and focus there. Which areas of the portfolio are most impacted by business requests? Target the areas with the most value.

An example of possible change:

About 10 years ago, a large European bank rebuilt its core banking system on the mainframe using COBOL. It now does agile development for both mainframe COBOL and “channel” Java layers of the system. The bank does not consider that it has achieved DevOps for the mainframe, as it is only able to maintain a cadence of monthly releases. Even that release rate required a significant investment in testing and other automation. Fortunately, most new work happens exclusively in the Java layers, without needing to make changes to the COBOL core system. Therefore, the bank maintains a faster cadence for most releases, and only major changes that require core updates need to fall in line with the slower monthly cadence for the mainframe. The key to making agile work for the mainframe at the bank is embracing the agile practices that have the greatest impact on effective delivery within the monthly cadence, including test-driven development and smaller modules with fewer dependencies.

It seems impossible, but you should try:

Improving the state of a decades-old system is often seen as a fool’s errand. It provides no real business value and introduces great risk. Many mainframe organizations Gartner speaks to are not comfortable doing this much invasive change, nor confident that they can ensure functional equivalence when complete! Restructuring the existing portfolio, eliminating dead code and consolidating redundant code are further incremental steps that can be done over time. Each application team needs to improve the portfolio that it is responsible for in order to ensure speed and success in the future. Moving to a services-based or API structure may also enable changes to be done effectively and quickly over time. Some level of investment to evolve the portfolio to a more streamlined structure will greatly increase the ability to make changes quickly and reliably. Trying to get faster with good quality on a monolithic hairball of an application is a recipe for failure. These changes can occur in an evolutionary way. This approach, referred to in the past as proactive maintenance, is a price that must be paid early to make life easier in the future.

You gotta have testing:

Test cases are necessary to support automation of this critical step. While the tooling is very different, and even the approaches may be unique to the mainframe architecture, they are an important component of speed and reliability. This can be a tremendous hurdle to overcome on the road to agile development on the mainframe. This level of commitment can become a real roadblock to success.

Another example of an organization gradually changing:

When a large European bank faced wholesale change mandated by loss of support for an old platform, it chose to rewrite its core system in mainframe COBOL (although today it would be more likely to acquire an off-the-shelf core banking system). The bank followed a component-based approach that helped position it for success with agile today by exposing its core capabilities as services via standard APIs. This architecture did not deliver the level of isolation the bank could achieve with microservices today, as it built the system with a shared DBMS back-end, as was common practice at the time. That coupling with the database and related data model dependencies is the main technical obstacle to moving to continuous delivery, although the IT operations group also presents cultural obstacles, as it is satisfied with the current model for managing change.

A reminder: all we want is a rapid feedback cycle:

The goal is to reduce the cycle time between an idea and usable software. In order to do so, the changes need to be smaller, the process needs to be automated, and the steps for deployment to production must be repeatable and reliable.

Agile ALM tools don’t support mainframes, and mainframe ALM stuff doesn’t support agile. A rare case where fixing the tech can likely fix the problem:

The dilemma mainframe organizations may face is that traditional mainframe application development life cycle tools were not designed for small, fast and automated deployment. Agile development tools that do support this approach aren’t designed to support the artifacts of mainframe applications. Modern tools for building, deploying, testing and releasing applications often won’t fit the mainframe. Adapting existing mainframe software version control and configuration management tools to a new agile approach to development will take some effort, if they will work at all.

Use APIs to decouple the way, norms, and road-map of mainframes from the rest of your systems:

wrapping existing mainframe functions and exposing them as services does provide an intermediate step between agile on the mainframe and migration to environments where agile is more readily understood.
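In practice, much of such a wrapper is translation: turning fixed-width mainframe records into a shape a JSON/HTTP facade can serve. Here’s a minimal, hypothetical Python sketch of just that translation layer; the copybook-style record layout (10-character account id, 9-digit balance with two implied decimals, 1-character status) is invented for illustration:

```python
# Hypothetical adapter: converts a fixed-width mainframe account
# record into a dict an HTTP/JSON service facade could return.
# The field layout below is invented for illustration.

def parse_account_record(record: str) -> dict:
    account_id = record[0:10].strip()
    balance_cents = int(record[10:19])   # implied two-decimal point
    status = {"A": "active", "C": "closed"}.get(record[19], "unknown")
    return {
        "accountId": account_id,
        "balance": balance_cents / 100,
        "status": status,
    }

raw = "ACCT001234000012550A"  # as read from the mainframe data set
print(parse_account_record(raw))
# {'accountId': 'ACCT001234', 'balance': 125.5, 'status': 'active'}
```

Keeping the translation in a thin, well-tested layer like this is what lets the COBOL side and the consumers of the API evolve on their own cadences.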

Contrary to what you might be thinking, the report doesn’t actually advocate moving off the mainframe willy-nilly. From my perspective, it’s just trying to suggest using better processes and, as needed, updating your ALM and release management tools.

Read the rest of the report over behind Gartner’s paywall.

Gary Gruver interview on scaling DevOps

I always like his focus on speeding up the release cycle as a forcing function for putting continuous integration in place, both of which lead to improving how an organization does software:

I try not to get too caught up in the names. As long as the changes are helping you improve your software development and delivery processes then who cares what they are called. To me it is more important to understand the inefficiencies you are trying to address and then identify the practice that will help the most. In a lot of respects DevOps is just the agile principle of releasing code on a more frequent basis that got left behind when agile scaled to the Enterprise. Releasing code in large organizations with tightly coupled architectures is hard. It requires coordinating different code, environment definitions, and deployment processes across lots of different teams. These are improvements that small agile teams in large organizations were not well equipped to address. Therefore, this basic agile principle of releasing code to the customer on a frequent basis got dropped in most Enterprise agile implementations. These agile teams tended to focus on problems they could solve like getting signoff by the product owner in a dedicated environment that was isolated from the complexity of the larger organization.

And:

You can hide a lot of inefficiencies with dedicated environments and branches, but once you move to everyone working on a common trunk and more frequent releases those problems will have to be addressed. When you are building and releasing the Enterprise systems at a low frequency your teams can brute force their way through similar problems every release. Increasing the frequency will require people to address inefficiencies that have existed in your organization for years.

On how organization size changes your managerial tactics:

If it is a small team environment, then DevOps is more about giving them the resources they need, removing barriers, and empowering the team because they can own the system end to end. If it is a large complex environment, it is more about designing and optimizing a large complex deployment pipeline. These are not the type of challenges that a small empowered team can or will address. It takes a more structured approach with people looking across the entire deployment pipeline and optimizing the system.

The rest of the interview is good stuff. Also, I reviewed his book back in November; the book is excellent.

Link

Saving $20m and going agile in the process

From an interesting sounding panel on government IT:

“We do discovery on a small chunk and then development, and then while that’s going on, we’re starting discovery on the next small chunk, and so on and so forth,” Smith said. “And then when the development is done, we loop back and we do user testing on that piece that’s done. But we don’t release it. That’s … one of the differences between agile and the way we did it. At the end of the phase we release everything.”

Also, some fun notes on consolidating legacy systems and resistance to going agile.

Agile Development’s Biggest Failure Point—and How to Fix It

Companies commonly make one of two mistakes when selecting a product owner. Often they tap a junior employee with limited experience and therefore a limited understanding of how the project fits into the larger mission. Product owners need enough seniority to inspire and motivate peers across multiple business units. By earning the respect of teams in customer experience, enterprise architecture, and risk and compliance, for example, the product owner can help ensure that projects move smoothly without costly bottlenecks. Other companies err in the opposite direction, selecting a senior executive who is too harried to devote adequate time and may not adapt well to the highly responsive, iterative nature of agile development.
So what should companies look for when appointing product owners? In our view, the key is to find people who think and behave like entrepreneurs.

Much of the advice here falls under the category of “if you do good things, good things happen”:

success comes from simply managing a sound process: conducting market research, understanding the customer’s needs, identifying where the product will create the most value, prioritizing the most important features, testing ideas, capturing customer feedback, and continuously refining their vision over time.

The task is setting up an environment, processes, even a “culture” that encourages and rewards good behavior like this. And then protecting that structure from corporate barbarians. That’s the job, and the responsibility, of management. So, perhaps it’s good to get some management consulting advice on what good looks like.

Source: Agile Development’s Biggest Failure Point—and How to Fix It

There’s still a lot of agile to be rolled out

Diego Lo Giudice, vice president and principal analyst with Forrester Research, is one of those who thinks that companies haven’t really adopted Agile as they should have done. “I would say that we’re going to see more Agile because we haven’t done it well enough yet,” he told The Reg. “Organisations say they’re doing it, but they’re struggling to scale it.”

One of the more shocking “turns out” moments in most of my talks is just this: just under 20 years later, the industry still doesn’t do that much agile.

Source: The dev-astating truth: What’s left to develop? Send in the machines

Bimodal IT leads to technical debt that must be paid, with interest

Instead, it has created a great divide, said Pucciarelli. “This siloed, divided approach brings frustration, disappointment and failure in multiple ways.” For one thing, it doesn’t support healthy team spirit, he said, and the innovation side tends to operate fast to deliver business solutions without the accountability around reliability, quality and security that is expected from the traditional IT side. “It leads to redundancy and inefficiency.”

Source: Bimodal IT leads to technical debt that must be paid, with interest

What is “waterfall”?

“Waterfall” comes up a lot in my line of work: talking with organizations about how to do software better. People like myself take the definition of “waterfall” for granted and usually don’t spend too much time talking about it, let alone when it might be a good idea or how to do it well.

Someone asked me the simple question “what is waterfall?” as a follow-up to one of my talks recently. Here’s what I typed back:

Continue reading What is “waterfall”?

Silos are often defined and enforced by workstream (in)visibility

The primary measure of progress for the Scrum Team is working software, but the primary measure of progress for the Product Owner is user stories that are “ready,” which is a very nebulous concept.

a Kanban board can make a cross-functional team feel more like a reality. In a large organization, where there are specialist systems, software, hardware, and test engineers, the various disciplines are usually not going to start performing work in each other’s disciplines, no matter whether or not they are on the same agile team. But by working off the same board, at least they are in the position of being able to visualize each other’s work and see how what they are doing contributes to the success of the whole program.

the advantage of lean and Kanban approaches is that they make explicit and visible the full state of a program in a way that encourages people to make good decisions for improvement to the process as a whole rather than optimizing for one part of the process. This includes the poor product owner, whose work is finished before most agile boards start.
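The “one shared board” idea is simple enough to sketch in code. Below is a hypothetical Python model of a Kanban board with per-column WIP limits; the limits are what keep any one stage from silently piling up, making the full state of the program visible. Column names and limits are invented for illustration:

```python
# Hypothetical sketch of a shared Kanban board: every discipline's
# work lives in one structure, and WIP limits stop any column from
# quietly accumulating work. Names and limits are invented.

class KanbanBoard:
    def __init__(self, columns, wip_limits=None):
        self.columns = {name: [] for name in columns}
        self.wip_limits = wip_limits or {}

    def add(self, column, card):
        limit = self.wip_limits.get(column)
        if limit is not None and len(self.columns[column]) >= limit:
            raise ValueError(f"WIP limit reached in {column!r}")
        self.columns[column].append(card)

    def move(self, card, src, dst):
        self.columns[src].remove(card)
        self.add(dst, card)  # still subject to the WIP limit

board = KanbanBoard(
    ["discovery", "ready", "in progress", "test", "done"],
    wip_limits={"in progress": 2},
)
board.add("discovery", "API facade story")
board.move("API facade story", "discovery", "ready")
print(board.columns["ready"])  # ['API facade story']
```

Note that `move` delegates to `add`, so a full downstream column blocks the pull; that blockage is the visible signal that prompts the whole-process improvement the quote describes.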

From: “The Case for Lean: Capturing Business Work”