You've decided that you need to connect your mainframe data to the cloud. The previous article in this series outlined several reasons why you might want to do so, and now you're ready to find out how. In this article, we'll explore common, traditional approaches—and their downsides. We'll also look at strategic, tactical, and technical mistakes that companies make when heading down this path so that you can avoid them in your own organization.
First, let's acknowledge that extending a mainframe application in any way can be a frightening proposition. These applications simply aren't designed for extensibility. In some cases, business logic is contained entirely within text‐based user interfaces, making it difficult to automate data entry and retrieval without bypassing all of that business logic.
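To see why "bypassing the business logic" is the natural temptation, consider how integration against a text-based interface usually works: data on the screen lives at fixed row/column positions, so automation often amounts to slicing fields out of a captured screen buffer. The snippet below is a toy illustration, not a real terminal-emulator session; the screen layout and field positions are invented for the example.

```python
# Toy illustration: extracting fixed-position fields from a captured
# text-mode screen buffer. Slicing the buffer this way retrieves the data
# but bypasses any validation the application performs during
# interactive entry -- which is exactly the risk described above.
SCREEN_WIDTH = 80  # classic 80-column terminal screen

def field(screen: str, row: int, col: int, length: int) -> str:
    """Extract a fixed-position field from a flattened screen buffer."""
    start = row * SCREEN_WIDTH + col
    return screen[start:start + length].strip()

# Hypothetical captured screen: a customer number on row 0, a name on row 1.
screen = (" " * 10 + "C0012345").ljust(SCREEN_WIDTH) + \
         "NAME: ACME SUPPLY CO".ljust(SCREEN_WIDTH)

print(field(screen, 0, 10, 8))   # the customer number
print(field(screen, 1, 6, 20))   # the customer name
```

The brittleness is obvious: move one field on one screen, and every downstream consumer breaks silently.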
Mainframe applications typically sit right at the heart of the business. They're one of the most mission‐critical aspects of the business's technology infrastructure, running the applications that make the business work. Messing with that kind of critical component is extremely risky: One wrong move, and you're out of business.
Let's look at three traditional approaches that all seek to extend the mainframe application's data. For each, we'll examine the details and downsides, and see which ones meet the most important business criteria for this kind of project. Typically, businesses are looking for a solution that can be implemented quickly, carries little business risk, costs relatively little, avoids a performance impact on the mainframe, and can be reused across future projects.
Let's see how three traditional approaches measure up.
With the first approach—call it the "top‐down" approach—companies seek to identify everything they might need to integrate their legacy data with, and to build a solution—either based on the mainframe itself, or integrated from a PC‐based server—that will meet all of those needs. On paper, this solution is elegant: Solve the problem once, in a consistent and well‐planned way. This might involve creating an entire adjunct to the mainframe application that is capable of communicating with external applications via some data‐interchange protocol. The idea is to ensure that all the necessary business logic is built into that connectivity application, and to enable it to access every aspect of the mainframe data.
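A minimal sketch makes the scope problem concrete. Suppose the connectivity application routes generic requests to mainframe transactions; every operation it exposes needs its own re-implemented validation rules. The operation names and the validation rule below are entirely hypothetical—the point is that each handler duplicates logic that already lives inside the mainframe application.

```python
# Sketch of a "unified integration" adjunct: a dispatcher that routes
# external requests to mainframe transactions. Each handler must
# re-implement business rules the mainframe app already enforces.
def validate_customer_update(payload: dict) -> None:
    # Duplicated (hypothetical) rule: customer IDs are 8 chars, prefix "C".
    cust = payload.get("customer_id", "")
    if len(cust) != 8 or not cust.startswith("C"):
        raise ValueError("invalid customer_id")

HANDLERS = {
    "customer.update": validate_customer_update,
    # ...one entry, and one duplicated rule set, per integration scenario...
}

def dispatch(operation: str, payload: dict) -> str:
    HANDLERS[operation](payload)  # re-run the duplicated validation
    # In a real system, this is where the mainframe call would happen.
    return f"routed {operation} to mainframe transaction"

print(dispatch("customer.update", {"customer_id": "C0012345"}))
```

Multiply that handler table by every data element and every conceivable consumer, and the "massive undertaking" described next follows directly.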
In reality, this approach is generally one of those projects that never really get off the ground. To begin, it's a massive undertaking and will generally involve extensive discovery. You'll need to determine the full extent of the data that will require integration, and you'll need to consider all the potential ways in which someone might want to integrate with it from external systems. You'll likely be duplicating a great deal of business logic, giving you another whole mainframe application that has to be maintained (which is certainly a nontrivial concern in and of itself).
It'll be such a massive, all‐hands effort that executive commitment will usually be required, making it much harder to get the necessary approvals to get the project underway. It'll be a high‐profile project, with a lot more eyes on what's happening and a lot more red tape and process to slow things down. You'll be messing with the mainframe itself, which is rightly a cause for concern and caution.
The worst aspect of this approach is that it'll really hold up any specific projects that need this kind of integration. While your programmers and architects design and build this "unified integration system," projects that desperately need to access just a few bits of data will simply have to wait—jeopardizing business opportunities and holding back the organization as a whole.
In terms of the common‐sense business criteria, how does this "top‐down" approach measure up?
That's not looking like an ideal approach: it's slow to deliver, expensive, and high‐risk, and it delays the very projects that need the integration most.
A more moderate approach is to simply focus on the project at hand. Rather than trying to create a one‐size‐fits‐all integration layer for the mainframe app, you modify the mainframe app to accommodate the most immediate need.
This approach is certainly not without its downsides. In a best‐case scenario, where you own the source code to the application and have experienced developers to do the work, you're looking at a time‐consuming project that entails an incredible amount of business risk—you are, after all, modifying an application that sits in the most mission‐critical position possible. A less‐than‐best‐case scenario will require you to go to the application vendor for modifications or to otherwise outsource the project—which will be expensive. Even if you're using your own development resources, you're likely looking at a long wait because most companies have a significant project list stacked up for their mainframe programmers.
If well‐written, the modifications can often avoid a negative performance impact because they'll be integrated directly into the native application. However, this approach also offers a lower return on investment (ROI) because it is project‐specific. The next project that comes along will require its own modifications. Eventually, you'll have a heavily modified application that's been tweaked to meet the needs of each project that comes along. That kind of heavily customized application carries its own significant business risks in terms of long‐term support and operational costs.
How does this approach meet our business criteria?
Not so well. It's still time‐consuming, incredibly risky, and even with your own developers doing the work on your own source code, it's an expensive proposition with poor reusability.
Another approach, one made popular in recent years by vendors who play in this space, is to add a Web Services layer to the mainframe itself. Doing so turns the mainframe into a standards‐compliant Web Services platform—and that obviously has some advantages. A single implementation project can give you a highly reusable solution for sharing mainframe data, using industry‐standard protocols and techniques.
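To make "industry-standard protocols" concrete, here's roughly what a request to such a layer might look like. The service name, XML namespace, and field names are hypothetical stand-ins—in practice, vendor tooling generates this mapping between the SOAP interface and the underlying transactions.

```python
# Sketch of a SOAP request an external system might send to a
# mainframe-hosted Web Services layer. Service and field names are
# hypothetical; real ones come from the vendor-generated WSDL.
def build_envelope(customer_id: str) -> str:
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetCustomer xmlns="urn:example:mainframe">
      <CustomerId>{customer_id}</CustomerId>
    </GetCustomer>
  </soap:Body>
</soap:Envelope>"""

envelope = build_envelope("C0012345")
print(envelope)
```

The appeal is that any SOAP-capable client, on any platform, can now reach the mainframe data without knowing anything about green screens or transaction codes.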
On the downside, these projects are often extremely expensive because the vendors that implement them know that you're desperate and that you have few options. Because you're modifying the mainframe itself, this type of project will again require executive commitment, which may or may not be forthcoming. Project implementations are often lengthy, as you'll need to create maps and provide business logic between the Web Services layer and the underlying applications and data. In many ways, this approach shares the same disadvantages as the "top‐down" approach, although this approach generally provides better cross‐project reusability, so there's a better chance for capturing a good ROI in the long run.
There's also a real performance risk to the mainframe because you'll be asking it to perform a kind of work that many mainframe operating systems (OSs) weren't specifically designed for. You'll potentially be loading a great deal of additional transactional workload onto the system, so you'll need to be very careful about estimating that traffic and its impact on the core of your business.
Overall, how does this approach stack up?
It's probably the best approach so far, but it's still expensive, and because it touches the mainframe directly, it's risky in terms of business impact and in terms of performance hit.
Looking at these three approaches, can we construct a wish list that avoids some of their common disadvantages while perhaps including some of their advantages? Of course we can. We want an approach that leaves the mainframe itself untouched, can be implemented quickly and inexpensively, minimizes business and performance risk, and can be reused across multiple projects.
As you'll learn in the next article in this series, we can definitely meet these needs.