In the previous two articles in this series, we looked at why companies might want to cloud-enable the mainframe-based data contained in their legacy applications, and we examined some of the more common, traditional approaches to doing so, along with their downsides. In this article, we'll look at newer techniques and technologies that seek to achieve the most important business goals. Specifically, we want an approach that delivers integration quickly, scales to future projects, and requires no changes to the mainframe itself.
The right approach is to use a mainframe-connected, cloud-enabled middleman: an integration system that knows how to talk to your mainframe system's applications and access their data, and that can expose that data through industry standards such as Web Services, Java Beans, or Microsoft .NET Framework assemblies. In most cases, this middleman will also serve as a conductor of sorts, turning multiple mainframe back-end operations into a single cloud-exposed transaction.
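To make the "conductor" idea concrete, here is a minimal sketch in Java of a middleman façade that packages two separate back-end operations into one service-style call. The `MainframeGateway` interface, its operations, and the class names are all hypothetical illustrations, not a real product API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for whatever mechanism actually reaches the mainframe.
interface MainframeGateway {
    String lookupCustomer(String customerId);   // back-end operation 1
    String lookupBalance(String customerId);    // back-end operation 2
}

public class IntegrationFacade {
    private final MainframeGateway gateway;

    public IntegrationFacade(MainframeGateway gateway) {
        this.gateway = gateway;
    }

    // One cloud-exposed "transaction" that conducts multiple back-end calls.
    public Map<String, String> getAccountSummary(String customerId) {
        Map<String, String> summary = new HashMap<>();
        summary.put("name", gateway.lookupCustomer(customerId));
        summary.put("balance", gateway.lookupBalance(customerId));
        return summary;
    }
}
```

A caller sees only the single `getAccountSummary` operation; the fact that two mainframe interactions happened behind the scenes is hidden inside the middleman, which is exactly what makes this pattern attractive to cloud-side developers.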
Of course, this approach isn't limited to bringing your mainframe data to the cloud. It can enable many types of interactions with external business partners' systems, expose mainframe-based data to external applications, and even help integrate systems within your own data center.
As Figure 3.1 shows, this "middleman" integration server can be deployed on a per-project basis, enabling you to quickly spin up integration for your immediate needs. It also protects your longer-term return on investment (ROI) by easily expanding to include future projects.
Figure 3.1: Using a "middleman" integration server.
Best of all, this "middleman" approach requires no changes to your mainframe.
By not modifying your mainframe or its applications in any way, this approach eliminates most of the downsides of the more traditional approaches we examined in the previous article. It's less likely, for example, that a given project will need executive-level commitment because the mainframe isn't impacted. You're not putting yourself onto your mainframe programmers' long waiting list of projects. You don't even necessarily need to heavily involve your mainframe team because this middleman solution—as we'll see— accesses the mainframe just like a user would.
This approach allows you to quickly focus on integration for a single project—and then integrate other projects as they come up in the future. You don't need an all-encompassing master plan; you can focus on today's need, but you're not giving up extensibility, and you're not starting from scratch with every new little integration project that comes along. Because this approach accesses the mainframe through existing interfaces, it can be implemented very quickly with little or no complex programming. From a performance perspective, it doesn't impact the mainframe any more than an equivalent number of human beings accessing the same data through your existing interfaces.
Let's bring back the comparison chart from the previous article, and see how this approach fits:
That's a pretty good picture.
A benefit of this approach is that you don't have to expose all of your mainframe data, something few companies want to do anyway. You expose only the pieces of information needed for a particular project. That shortens implementation time, getting you up and running even more quickly than if you were embarking on an application-wide modification to expose every possible piece of data and process.
With a solution that provides good support for industry standards—Web Services is probably the most popular, although Java Beans and .NET Framework assemblies are also good to have—you'll have flexibility in how you expose that data, opening it up to other systems. You'll also open that data to developers experienced with today's modern, rapid application development frameworks, platforms, and architectures.
There are a number of ways in which your mainframe data can be accessed safely and securely, without bypassing business logic. One approach is to simply map data fields to the application screens used for manual data entry. The integration solution can coordinate multiple screens of data so that, for example, what is normally a multi-screen entry operation can be "packaged" as a single Web Services transaction. As Figure 3.2 shows, the integration server is essentially receiving a Web Services transaction, then "manually" entering that data into the same entry screens a human being would use—all automatically, and all behind the scenes.
Figure 3.2: Turning a Web Services transaction into a mainframe operation.
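The flow in Figure 3.2 can be sketched in a few lines of Java. Here, `ScreenSession` is a hypothetical stand-in for a real terminal-emulation or screen-scraping API (a 3270-style library, for example); the screen and field names are invented for illustration. The point is that what is normally a two-screen manual entry arrives as one service call and is "typed" into the screens automatically:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a terminal-emulation session; it just records
// the "keystrokes" the integration server would send to each entry screen.
class ScreenSession {
    private final List<String> keystrokes = new ArrayList<>();

    void enterField(String screen, String field, String value) {
        keystrokes.add(screen + "." + field + "=" + value);  // simulate typing
    }

    List<String> submitted() {
        return keystrokes;
    }
}

public class OrderEntryService {
    // One cloud-exposed call that drives a multi-screen entry operation.
    public static List<String> createOrder(ScreenSession session,
                                           String customerId, String item) {
        session.enterField("ORDHDR", "CUSTOMER", customerId);  // first screen
        session.enterField("ORDDTL", "ITEM", item);            // second screen
        return session.submitted();
    }
}
```

Because the middleman uses the same screens a person would, the mainframe application's built-in validation and business logic still run on every field, unchanged.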
In other cases, you may map fields to a business data layer of the application, utilizing existing business objects and leveraging their embedded business logic to automate the application—using exactly the same components that make the application work today. And you don't have to map the entire mainframe application; you only map the bits of the application that you need to expose. That makes for faster implementation, easier maintenance, and easier long-term operational support.
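The "map only the bits you need" idea can be sketched as a simple projection in Java. `LegacyCustomer` below is a hypothetical placeholder for an existing business object; the mapper exposes only the fields a given project requires and deliberately omits the rest:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for an existing business object on the mainframe side.
class LegacyCustomer {
    String name = "ACME Corp";
    String taxId = "00-0000000";   // internal field we choose not to expose
    String region = "EMEA";
}

public class PartialMapper {
    // Expose only the subset of fields this integration project needs.
    public static Map<String, String> toExposedRecord(LegacyCustomer c) {
        Map<String, String> exposed = new HashMap<>();
        exposed.put("name", c.name);
        exposed.put("region", c.region);
        return exposed;  // taxId deliberately left out
    }
}
```

Keeping the mapping this narrow is what makes implementation fast and maintenance cheap: when the legacy application changes, only the handful of mapped fields need to be revisited.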
What you've done is given your mainframe a new lease on life. You've extended the ways in which you can use your legacy application data. You've freed that data, unlocking it from the application and making it available, through standards-based protocols, wherever and whenever you might need it. You've turned your mainframe into a back-end for your own data integration cloud—and you've done it without modifying the mainframe in any way.