Friday, August 27, 2010

SOA everything! Really?

Late in 1999, while I was still working for Insession, our pre-sales group advised me of an opportunity they had run across while working with a telephone company. At the time Insession’s sole product was ICE, an implementation of IBM’s SNA APPN stack on NonStop, and the opportunity involved adding support for a non-SNA protocol to ICE, along with services to pull apart the message payloads that would flow in and out of the stack.

The prospect wanted to modernize the user experience and, to do so, wanted to access NonStop applications from an industry-standard browser. They wanted Insession to add support to ICE for new protocols and formats, including HTTP and HTML. The rest is history: WebGate was created and went on to include not only an HTTP server but a SOAP server as well. This represented the first tentative steps taken to ensure applications running on the NonStop server could be easily integrated, using modern user interfaces, with applications on clients as well as on other servers.

Today the NonStop community has many options available when it comes to modernization and yet embracing SOA, and utilizing modern protocols and services such as SOAP and Web services, has made its presence felt on only a relative handful of NonStop servers. There are still many IT executives who simply aren’t aware of the capabilities on offer, or who have elected not to include the NonStop server in their plans for SOA. Sad, considering the NonStop server is perhaps the most appropriate choice when it comes to aggregating information and ensuring mission-critical data is never lost or compromised.

I have already touched briefly on the subject of SOA and Web services in postings to this blog. In the posting of July 7th, “Just to keep the business functioning?”, I wrote of how the “services” model evolved and how, very quickly, astute companies realized that adding Web services-powered front-ends to legacy applications bought them time to selectively upgrade business solutions while facilitating modern usage of solutions that kept the business running … (becoming) an effective way of externalizing a wealth of applications to audiences that hadn’t even existed back in the early ‘90s – the business could now go “on-line” and compete globally essentially for free, thanks to the easily accessible internet!

NonStop servers have undergone a tremendous transformation of late, and the support of run-time environments, including both the Java and .NET models, is empowering many companies to take even greater interest in SOA. Businesses that had relied on terminal emulation and application pass-through are beginning to capitalize on the wealth of product offerings, so much so that, of late, the discussion has turned to whether there’s a need to externalize everything as a service! After all, once the flexibility that comes with SOA becomes known, architects begin to consider using SOA for any and all interactions, no matter how short-lived the connections may be.

Where should the lines be drawn, and can you go too far in pursuing a services approach when it comes to externalizing your business logic to customers, business partners, and the world in general? Should SOA and Web services also be extended to cover inter-system access to business logic, even by adjacent servers in the data center?

In the posting of July 7th, I went on to add that Web services are not a panacea, much less a silver bullet, and there remains a need to be judicious about when to deploy them. But for companies looking to reverse the 70:30 ratio (support for legacy systems versus pursuing innovation), Web services represent the best place to start on a path leading to the type of transformation, and innovation, business is so anxiously looking for!

I have had a number of discussions since that post, and I have heard of many companies looking to balance the productivity gains they see coming with SOA and Web services deployments against the performance criteria they must also meet as part of Service Level Agreements (SLAs). It’s no secret that moving to Web services can generate messages ten, and often a hundred, times bigger than the original message. Bandwidth capacity has certainly grown dramatically over the past decade, and the arrival of multi-core chip technology has contributed to how we handle bigger workloads – but are we pushing a model best suited to one purpose into areas where it may be less than optimal? Are we trying to push the proverbial square peg into too many round holes?
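To see where those multiples come from, consider a minimal sketch (the message layout, field names, and namespace are all hypothetical, and Python is used purely for illustration) comparing a fixed-format legacy message with the same three fields wrapped in a bare SOAP 1.1 envelope:

```python
import struct

# A legacy fixed-format message: 16-byte account number, 4-byte transaction
# code, and a 4-byte amount in cents (big-endian) -- 24 bytes on the wire.
legacy = struct.pack(">16s4si", b"0123456789012345", b"BALQ", 125000)

# The same three fields carried in a minimal SOAP 1.1 envelope.
soap = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <BalanceInquiry xmlns="http://example.com/banking">
      <AccountNumber>0123456789012345</AccountNumber>
      <TransactionCode>BALQ</TransactionCode>
      <AmountInCents>125000</AmountInCents>
    </BalanceInquiry>
  </soap:Body>
</soap:Envelope>"""

print(len(legacy))                     # 24 bytes
print(len(soap))                       # several hundred bytes
print(round(len(soap) / len(legacy)))  # an order of magnitude already
```

And that is before WS-* headers, schema-qualified types, and HTTP framing are layered on top – which is how the hundred-fold figure is reached.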

In my most recent discussions with comForte presales folks, I have begun to see the value that comes with having multiple options for externalizing business logic. This becomes most obvious in exchanges between applications residing within the data center, or where applications only need to be externalized for in-house use. There are alternatives to full Web services deployment, and in those cases the performance gains quickly outweigh any advantage that global standardization on SOAP may provide. Being able to take advantage of different options, and to leverage different feature sets, can bring considerable benefits.

In the latest paper from The Standish Group, “Roadmap to the Megaplex”, you will find a reference in Step 2: User Experience Modernization that suggests there are two ways to modernize the user experience – rewrite all the screens, using a GUI creation tool for instance, or use a conversion tool. There is even more that can be done, and The Standish Group calls out four distinct usage scenarios in this paper – reuse (of what’s already deployed, including user screens and inputs), integration (of the old with the new, where current business logic is leveraged in support of new applications), migration (when it’s time to phase out a legacy application in favor of a new product), and core (where critical business logic and data can be externalized to new architectures, including J2EE and ESB implementations).

Modernization of applications will never be pursued by everyone in the same way – the value and history of the business logic and data involved may dictate quite different approaches. Extensions to Web services that some companies consider important, such as WS-Security and WS-Addressing, which add real value when SOAP messages are exchanged between clients and servers, may offer other companies no discernible benefit whatsoever. Worse, the fine granularity that comes with the exchange of SOAP messages may overwhelm the network bandwidth and CPU capacity of the systems involved.
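As a rough illustration of that weight, here is the kind of header block WS-Addressing and WS-Security typically contribute to every single message (a hedged sketch – the endpoint, action, and token values are invented), independent of how small the business payload may be:

```python
# Representative WS-Addressing routing fields plus a WS-Security username
# token; headers like these travel with every request and response.
ws_headers = """<soap:Header xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:wsa="http://www.w3.org/2005/08/addressing"
    xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
  <wsa:To>http://example.com/banking</wsa:To>
  <wsa:Action>http://example.com/banking/BalanceInquiry</wsa:Action>
  <wsa:MessageID>urn:uuid:6b29fc40-ca47-1067-b31d-00dd010662da</wsa:MessageID>
  <wsse:Security>
    <wsse:UsernameToken>
      <wsse:Username>branch-042</wsse:Username>
      <wsse:Password>secret</wsse:Password>
    </wsse:UsernameToken>
  </wsse:Security>
</soap:Header>"""

# Hundreds of bytes of addressing and security plumbing, often several
# times the size of the business payload itself.
print(len(ws_headers))
```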

When it comes to modernization and enriching the user experience, SOA will play an important role in transforming business; of that I have no doubt. It has become the single most widely accepted technology for integrating applications and externalizing them to the world at large. But the degree of adoption will be predicated on where we come from and the work that’s already been done in separating business logic and data from presentation services. SOA comes with a price, and it may not always be the best choice; knowing there are options, and ways to mix and match adjacent technologies, can only help businesses better chart their own course to becoming more innovative and competitive.

2 comments:

  1. (Please pardon the duplication if this already appeared.)

    Is the problem inherent in the SOA design approach or is it just that the protocols used to implement SOA are inefficient?

    The only specific problem I think you mentioned is message sizes becoming much larger when using SOA, which sounds like a protocol efficiency problem, which might be easy to address. However, if there is something about SOA that requires messages to contain much more information or that requires many more messages, that probably is a much harder problem to address.

    - Keith Dick

  2. David Finnie posted the following to the LinkedIn group HP NonStop Tandem Professionals, and it's probably worth replicating here as it adds another perspective – one I tend to support:

    Richard, yeah - I agree with your suggestion.

    I think that XML is an incredibly powerful, flexible solution which has the advantage that it is human readable. Having said that, large and complex transactions start to become hard to read without the help of an XML reader, simply due to their size.

    But... I think we (as a computing profession) need to always consider horses for courses. Sadly I think historically we have been guilty of implementing "flavour of the month" technologies too universally, and I agree with your suggestion that this is also the case for XML.

    The flexibility of XML is unquestionable. Different versions of application message protocols can be implemented in XML by simply relying on the presence or not of a particular XML element or attribute. But is XML the only flexible message exchange pattern? Obviously not. It certainly is one of the most lengthy and computationally expensive ones, though.

    What if you have a message protocol that you are pretty certain will change very infrequently, if at all? Surely a compact binary message with a single version number at the beginning is a good candidate for the message definitions? Easy to build, transmit, and parse. Harder for a human to read, of course - but how often do you need that capability past the initial development effort?

    TCP/IP headers are not in XML format, and there is a very good reason for that - performance. OK, OK, there's another reason - XML hadn't been invented when the initial TCP and IP protocols were being developed. But would they be implemented in XML if we had the time over? I'm guessing no... Although hardware and compiler technology have improved by orders of magnitude over the years, so have transaction volumes and functionality requirements. Performance is still an important consideration in any computing project.

    XML also has a more subtle advantage - it has the ability to ease the changes required by unforeseen events, such as the merging of corporate environments when one company buys another. If everyone is using a common messaging format like XML and SOAP, for example, then presumably data centres should be able to be integrated more easily. But there are many application message protocols that don't even require consideration of that sort of event.

    Small to medium transaction volumes, a requirement for flexible message content, and possibly cross-application integration? XML is probably a good solution. High volumes, unlikely to change often? XML is probably massive overkill and, worse still, its demands on hardware might just make the project fail...

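For readers who want to picture the compact binary alternative Finnie describes, a minimal sketch follows (the message layout and field names are hypothetical, with Python again used purely for illustration): a single leading version number lets a parser dispatch cleanly, at a fraction of the size and parsing cost of the equivalent XML.

```python
import struct

# Version 1 layout: 1-byte version, 16-byte account number, 4-byte
# transaction code, 4-byte amount in cents (big-endian) -- 25 bytes total.
V1_FORMAT = ">B16s4si"

def build_v1(account: bytes, tx_code: bytes, amount_cents: int) -> bytes:
    """Pack a version-1 message with the version number leading."""
    return struct.pack(V1_FORMAT, 1, account, tx_code, amount_cents)

def parse(message: bytes) -> dict:
    """Dispatch on the leading version byte, then unpack the known layout."""
    version = message[0]
    if version == 1:
        _, account, tx_code, amount = struct.unpack(V1_FORMAT, message)
        return {"account": account.decode(), "tx_code": tx_code.decode(),
                "amount_cents": amount}
    # A future version 2 would add its own layout here; older parsers
    # reject unknown versions cleanly instead of misreading the bytes.
    raise ValueError(f"unsupported message version {version}")

msg = build_v1(b"0123456789012345", b"BALQ", 125000)
print(len(msg))    # 25 bytes, versus hundreds for the XML equivalent
print(parse(msg))
```

Dispatching on that first byte preserves room for future versions without the per-message overhead of a self-describing format.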