Cloud, SaaS Legacy, and AI
Here, we reflect on the reasoning behind the past, current, and future trends: Cloud deployments, SaaS (Software-as-a-Service), and AI (Artificial Intelligence). The first two can’t be considered trends anymore; they have been the reality for more than a decade. AI is more speculative: we know it impacts the industry, but we have yet to find out to what extent.
Empire of the Clouds
Software modernization has recently accelerated for several interconnected reasons: Cloud, then SaaS, and related derivatives such as PaaS (Platform-as-a-Service), IaaS (Infrastructure-as-a-Service), etc. For simplicity, let’s call them XaaS (X-as-a-Service). That resulted in DevOps, which, in turn, launched a technology race.
Despite all the skepticism a decade ago, most enterprises moved to Cloud infrastructure. We will not discuss the pros and cons of such a move or the multiple factors that led to it. I can only pay my respects to the tremendous marketing efforts of AWS, Azure, Google Cloud, and smaller cloud vendors to convince business folks to take that step.
We focus on one thesis: Cloud infrastructure works better with services built around Microservice Architecture (MSA). You can run a monolithic application in the Cloud, but you cannot effectively benefit from scaling, resilience, and the tooling built around the infrastructure.
Legacy doesn’t always mean a monolith. However, most legacy systems are monolithic just because that was the way to design software in the past. So, decomposing a monolith into microservices began as a prerequisite or one of the phases of modernization triggered by the Cloud migration. That has not necessarily been the case everywhere, but there was and still is a trend for such decomposition.
Also, the Cloud caused the rise of a new XaaS market across multiple industries. That shift implied adopting MSA to keep up with the rapid pace of business and technology changes. So, organizations had to embrace the new approach and adapt their software to a new way of doing business.
Another factor is the technological shift. Cloud also spawned the DevOps movement/philosophy, which changed the software development and operations landscape. A new class of tooling emerged in the growing market to develop, deploy, and run software in a new way.
That is quite a simplified explanation, but the sequence is precise: Cloud caused XaaS as a new business form and DevOps as a technological paradigm. All that, also implying MSA, pushed the need to replace legacy systems, sometimes decades in operation, with something new and shiny. No matter what.
It’s Almost Easy
I don’t have numbers on my side, but as an industry, we are not very successful with total replacement. Even if someone had numbers to prove or refute my point, I would doubt them. My point is purely subjective, based on personal and professional experience and outlook.
The key point is that we struggle to get rid of Legacy entirely, for various reasons such as:
- new systems will become outdated sooner or later anyway
- high costs that the benefits do not cover
- legacy environment impacting a replacement system
The first two points are clear: it is difficult to compete with time, and the complexity of the migration might not meet ROI (return on investment) expectations for the next decade. And it is a general problem: we are more focused on current and near-future outcomes. We care little about benefits that are too distant and difficult to reach, and would rather choose a more accessible and quicker resolution. Thus, if a legacy migration project exceeds its budget and timeline (a common issue, I think), it will likely be cut off.
We should remember that a software application co-exists with others. It is always a part of a larger system, which is a part of a higher-level system, and so on. A system operates in a specific environment that might itself be legacy. If we replace a legacy system, the environment might not be impacted in the expected way.
Another aspect of a legacy environment is that some lousy design decisions might be imposed for the sake of compatibility. In my practice, while wrapping a COBOL application with a REST API layer supporting the JSON format, the team had to support an intermediary XML format conversion. Supporting such a three-way conversion caused multiple problems down the road.
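For illustration, here is a minimal sketch of what such a three-way adapter looks like, using Jackson for both formats (the CustomerRequest DTO and the callLegacy stub are hypothetical, not the actual project code):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;

public class LegacyAdapter {
    // Hypothetical DTO mirroring the fields of the legacy copybook.
    public static class CustomerRequest {
        public String customerId;
        public String operation;
    }

    private final ObjectMapper jsonMapper = new ObjectMapper(); // REST side: JSON
    private final XmlMapper xmlMapper = new XmlMapper();        // legacy side: XML

    public String handle(String jsonBody) throws Exception {
        // 1. Parse the incoming JSON from the REST layer.
        CustomerRequest request = jsonMapper.readValue(jsonBody, CustomerRequest.class);
        // 2. Re-serialize it as the intermediary XML the legacy integration expects.
        String xmlRequest = xmlMapper.writeValueAsString(request);
        // 3. Call the legacy system (stubbed here) and receive XML back.
        String xmlResponse = callLegacy(xmlRequest);
        // 4. Convert the XML response back to JSON for the API consumer.
        CustomerRequest response = xmlMapper.readValue(xmlResponse, CustomerRequest.class);
        return jsonMapper.writeValueAsString(response);
    }

    private String callLegacy(String xml) {
        return xml; // placeholder for the actual mainframe integration call
    }
}
```

Every field now exists in three representations, and each mapping between them is a place where a defect can hide.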
Replacing a legacy environment is a perilous and costly endeavor. That is why it usually ends with sub-optimizing certain parts; let’s call it modernization.
So, we understand we can’t fully win that battle. What can we do:
- Decompose only certain “reasonable” parts into microservices and keep the remaining Legacy wrapped with a modern API interface. I covered that approach in Part 2 and recently learned it is called the Strangler Fig pattern (a sketch follows this list).
- Start building a new system/environment from scratch so you have complete freedom. There are two significant concerns: does the organization possess the expertise and resources to create a new system, and how will it migrate existing customers? Especially if the Legacy still generates considerable income.
- Replace some parts or an entire legacy with one or several SaaS vendors. Let’s talk about that option in detail.
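To make the first option concrete, here is a minimal sketch of a Strangler Fig facade (the URLs and the migrated-route registry are hypothetical assumptions): requests for already-extracted capabilities go to the new microservices, while everything else falls through to the Legacy behind its modern API wrapper.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Set;

public class StranglerFacade {
    // Hypothetical backends: the new microservices and the wrapped Legacy.
    private static final String NEW_SERVICE = "https://new.example.com";
    private static final String LEGACY_WRAPPER = "https://legacy.example.com";

    // Routes already extracted into microservices; this set grows as migration proceeds.
    private static final Set<String> MIGRATED = Set.of("/orders", "/customers");

    private final HttpClient client = HttpClient.newHttpClient();

    public String route(String path) throws Exception {
        // Pick the backend: new service if the route was migrated, Legacy otherwise.
        String base = MIGRATED.stream().anyMatch(path::startsWith)
                ? NEW_SERVICE : LEGACY_WRAPPER;
        HttpRequest request = HttpRequest.newBuilder(URI.create(base + path)).build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

As more routes migrate, the Legacy is “strangled” one capability at a time, without a big-bang cutover.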
Legacy SaaS
Can SaaS become a legacy? Sure, it can.
Developing a custom application, in-house or outsourced, is only one of the options. If a legacy was built internally, you can likely find an appropriate SaaS vendor for a relatively small or mid-size legacy application. For giant multifunctional monoliths, there are fewer chances to find an all-in-one vendor; it would be moguls like Salesforce or SAP with various customizations and integrations, or a set of specialized SaaS vendors.
If the legacy was a vendor product, it will most likely be replaced with another vendor. In the case of a relatively small, non-complex solution, and given enough business domain and technical expertise, it can be substituted with an internally built service.
There are still open questions about choosing, integrating, and onboarding an organization with a SaaS vendor, plus the business risk of vendor lock-in. So, it is all about trade-offs, and, in fact, SaaS vendors might not turn out much cheaper or more successful.
With several vendors, integrating and operating several SaaS services as a unified system is challenging. Even if you succeed with the initial implementation, a further question arises: how do you replace a SaaS vendor? There might be various reasons behind such a move, such as increased costs, newly imposed regulations, failure to meet the SLA (Service Level Agreement), or a damaged reputation.
Thus, we face a new type of Legacy: not an old monolith but a cloud-native, microservice-oriented SaaS. That requires a different approach, such as the Composable approach (aka Composable Commerce), which composes different vendors into reusable, replaceable modules. But even that requires heavy effort in designing, integrating, and operating such a system. We will definitely return to that topic in the future.
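One way to keep such a composition replaceable is to hide every vendor behind an internal interface and plug vendors in as adapters. A minimal sketch (the PaymentProvider port and both adapters are hypothetical, not real vendor APIs):

```java
// The internal "port" the rest of the system depends on.
public interface PaymentProvider {
    String charge(String customerId, long amountCents);
}

// Each SaaS vendor is wrapped in an adapter implementing the shared port.
class VendorAAdapter implements PaymentProvider {
    public String charge(String customerId, long amountCents) {
        return "vendor-a-tx"; // would call vendor A's SDK here
    }
}

class VendorBAdapter implements PaymentProvider {
    public String charge(String customerId, long amountCents) {
        return "vendor-b-tx"; // would call vendor B's SDK here
    }
}

// Business logic depends only on the port, so replacing a vendor means
// writing one new adapter instead of rewriting every integration point.
class CheckoutService {
    private final PaymentProvider payments;
    CheckoutService(PaymentProvider payments) { this.payments = payments; }
    String pay(String customerId, long cents) { return payments.charge(customerId, cents); }
}
```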
Deus Ex MachInA
Considering the complexity of replacement/modernization activities, you would expect the industry to develop tools to help with monolith decomposition. That is not an easy task, considering the outdated stacks, lack of knowledge, and inaccessibility of non-public enterprise code. And this is where AI can flourish.
IBM seems to be a leader in leveraging AI to modernize legacy systems. Unsurprisingly, IBM suits that role: to this day, they provide the mainframes that run legacy software for their customers, and that seems to be a considerable part of their business nowadays.
How AI can help:
- refactor the code
- generate legacy code documentation
- summarize the code and explain how the system works
- generate new code
- translate legacy code to another programming language
IBM’s open-source project, “Minerva,” aims to refactor legacy code to move from a monolith toward MSA. It is based on the CARGO algorithm (Context-sensitive lAbel pRopaGatiOn), which analyzes how pieces of code are connected and suggests how to split the code into microservices.
So, it does not do the decomposition for you, but it aids understanding with suggestions on how to proceed. It is currently limited to Java but might expand to other languages in the future.
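For intuition, here is a heavily simplified, generic label propagation over a class dependency graph (the graph is made up, and the rule is far cruder than CARGO’s context-sensitive version): each class starts in its own candidate service and repeatedly adopts the most common label among its neighbors.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LabelPropagation {
    public static void main(String[] args) {
        // Undirected "class depends on class" graph as adjacency lists (made up).
        Map<String, List<String>> graph = Map.of(
                "OrderService", List.of("OrderRepo", "PaymentClient"),
                "OrderRepo", List.of("OrderService"),
                "PaymentClient", List.of("OrderService", "PaymentService"),
                "PaymentService", List.of("PaymentClient"));

        // Start with every class in its own candidate microservice.
        Map<String, String> label = new HashMap<>();
        graph.keySet().forEach(n -> label.put(n, n));

        // Repeatedly adopt the most common neighbor label until stable
        // (capped iterations; ties are broken arbitrarily in this toy version).
        boolean changed = true;
        for (int it = 0; it < 10 && changed; it++) {
            changed = false;
            for (String node : graph.keySet()) {
                Map<String, Long> counts = new HashMap<>();
                for (String nb : graph.get(node))
                    counts.merge(label.get(nb), 1L, Long::sum);
                String best = counts.entrySet().stream()
                        .max(Map.Entry.comparingByValue()).get().getKey();
                if (!best.equals(label.get(node))) {
                    label.put(node, best);
                    changed = true;
                }
            }
        }
        // Classes sharing a label form one candidate microservice.
        System.out.println(label);
    }
}
```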
Alternatively, IBM offers COBOL-to-Java translation with their code assistant. Considering that COBOL is still widely used in critical infrastructure, and most of those organizations, I assume, are IBM clients, it is a win-win and tons of money for IBM, as the assistant is not open source.
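To give a flavor of what such a translation produces, here is an illustrative toy example (hand-written, not actual assistant output): a COBOL computation and a Java equivalent.

```java
// COBOL original (toy example):
//   COMPUTE GROSS-PAY = HOURS-WORKED * HOURLY-RATE.
//   IF GROSS-PAY > 1000
//       MOVE 'Y' TO BONUS-FLAG
//   ELSE
//       MOVE 'N' TO BONUS-FLAG.
import java.math.BigDecimal;

public class Payroll {
    public static char computeBonusFlag(BigDecimal hoursWorked, BigDecimal hourlyRate) {
        // COMPUTE GROSS-PAY = HOURS-WORKED * HOURLY-RATE
        BigDecimal grossPay = hoursWorked.multiply(hourlyRate);
        // IF GROSS-PAY > 1000 ... MOVE 'Y'/'N' TO BONUS-FLAG
        return grossPay.compareTo(BigDecimal.valueOf(1000)) > 0 ? 'Y' : 'N';
    }
}
```

The hard part in real systems is not this local translation but preserving decades of implicit business rules and data semantics (hence BigDecimal rather than floating point).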
AI tackles only the code, which is specific and reflects only the current functionality. Modernization should enable further paths to embracing new opportunities for the software. So here we are again at the crossroads, deciding what should go into the new system.
Final Words
So, to conclude this series: what is next?
For sure, we will not replace all Legacy software. Before long, our recently developed software will become the new Legacy. At the current pace of progress, this is a never-ending process, unless it stops being economically feasible; that, however, is unlikely, given AI capabilities that will keep maturing. There is still a long way to go to self-improving software, but AI will definitely play a key role in modernizing Legacy.
The concern is that enterprise legacy code is closed, so vendors such as IBM, with their extensive book of business, can leverage it more than anyone. However, other AI tools, like MS Copilot, will compete with them, and new players might appear.
As we complete migrating on-premise monoliths, we enter the next circle and start migrating organizations off Legacy SaaS solutions. How exactly that will unfold is debatable, but I think MACH architecture (Microservices, API-first, Cloud-native, Headless) and the Composable approach are driven by the ability to painlessly replace SaaS vendors with new, what they call “best-of-breed,” solutions.
So, this will be my next research focus.