Intesa Sanpaolo Accelerates Microservice & PaaS Transformation of Mission Critical Applications with vFunction
Italy’s largest bank, Intesa Sanpaolo, transformed mission-critical monoliths into microservices helping them achieve three objectives: cost control, better stability and scalability, and greater customer satisfaction.
Microservices vs. Monoliths: Why Every Developer Must Also Be a Cybersecurity Professional
When Watts Humphrey stated that every business is a software business, organizations realized that their survival depended on software. Today, developers also need to view cybersecurity as part of their responsibilities. It’s not enough to add security as an afterthought.
A global shortage of cybersecurity talent continues, with an estimated 3.4 million positions unfilled in 2022. At the same time, cyberattacks intensify. Forty-eight percent of companies reported a cyberattack in the last year. As the gap between supply and demand widens, organizations will look to software providers for better ways to secure applications and minimize risk.
According to Hiscox’s 2022 report, many organizations are using the US National Institute of Standards and Technology’s (NIST) SP 800-160 standard as a blueprint for strengthening security defenses. Part of that standard offers a framework for incorporating security measures into the development process.
Patching security weaknesses after release is a little like shutting the barn door after the animals have escaped. Developers chase after the elusive vulnerability, trying to corral and correct it. No matter how hard they try, developers can’t make an existing system as secure as one built with security best practices in mind.
When modernizing legacy systems, developers often adopt a microservices architecture. However, making that the default choice means ignoring the associated security risks. They must assess the potential risks and mitigation methods of monolithic vs. microservice designs to determine the most secure implementation.
Security Risks: Microservices vs. Monoliths
Security, like time, is relative. Is a monolith application less secure than microservices? Not always. For example, a simple monolith application with a small attack surface may be more secure than the same application using microservices.
An attack surface is a set of points on the boundary of a system where bad actors can gain access. A monolith application often has a smaller attack surface than its microservice-based counterpart.
That said, attack surfaces are not the only security concerns facing developers as they look to incorporate security into the design of an application. Other areas to consider include coupling, authentication, and containerization.
Security Concern #1: Coupling vs. Decoupling
Legacy software may have thousands of lines of code wrapped into a single application. The individual components are interconnected, creating a tightly coupled piece of software. Microservices, by design, are loosely coupled. Each service is self-contained, resulting in fewer dependencies.
When bad actors compromise monoliths, they gain access to the entire application. The damage can be catastrophic. With microservices, a single compromise does not guarantee access to multiple services.
Related: Monoliths to Microservices: 4 Modernization Best Practices
Once exploitation is detected, it can take months to contain. IBM’s latest Cost of a Data Breach report found that the average time to containment was 75 days. The shorter the data breach lifecycle, the lower the cost.
Given the inherent coupling of a monolith, finding the vulnerability can be challenging, especially if dead code has accumulated. The discrete nature of microservices makes it easier for programmers to locate a possible breach, reducing its lifecycle length and associated costs.
Security Concern #2: Attack Surface Sizes
As mentioned above, attack surfaces are points on the boundary of a system where an unauthorized user can gain access. The larger the boundary, the higher the risk. While modernization efforts may default to microservices, that architecture may not be the most secure in every instance.
For example, a monolithic application with a few modules will have a smaller total attack surface than the multiple microservices required to deliver the same functionality. Each individual microservice presents a smaller surface, but the application’s total surface is larger.
At the same time, a monolith application can become difficult to manage if it becomes too complex. Most legacy monoliths are complex, with multiple functions, modules, and subroutines. Developers must weigh attack surfaces against complexity when designing an application.
When a vulnerability is identified, it may take hours or even days to locate and patch the weakness in a monolithic application. Microservices are discrete components that enable programmers to find and correct flaws quickly.
Security Concern #3: Authentication Complexity
Monoliths use one-and-done authentication. Since accessing different resources occurs within the same application, identifying the requesting source in each module is redundant. However, that same approach shouldn’t be applied when migrating to a microservices design.
Microservices communicate through application programming interfaces, called APIs, when they need access to another microservice. Every request is an opportunity for compromise. That’s why microservices must incorporate authentication and authorization functionality in their design.
Adding this level of security as an afterthought creates its own set of vulnerabilities. Ensuring that each microservice has an authentication code in place can be challenging, depending on the number of services. If multiple developers are involved, implementation can vary. Finally, programmers from a monolith environment may overlook the requirement if it’s not part of their coding mindset.
Making an application less vulnerable is an essential feature of security by design. Application designs should include robust authentication and authorization code, whether monolith or microservices. Developers should consider a zero-trust implementation that requires continuous verification.
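To make this concrete, below is a minimal sketch of per-request verification using only the JDK’s built-in HTTP server. The endpoint, port, and token check are hypothetical placeholders; a production service would validate a signed JWT (signature, expiry, issuer, audience) rather than merely checking for a bearer prefix.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class SecureServiceSketch {

    // Hypothetical placeholder: a real service would verify a signed JWT here.
    static boolean isValidToken(String authHeader) {
        return authHeader != null && authHeader.startsWith("Bearer ");
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/messages", exchange -> {
            // Zero-trust: authenticate every request, even from internal callers.
            String auth = exchange.getRequestHeaders().getFirst("Authorization");
            if (!isValidToken(auth)) {
                exchange.sendResponseHeaders(401, -1); // reject unauthenticated calls
                exchange.close();
                return;
            }
            byte[] body = "[]".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

The point is architectural: in a microservices design, this check (or a shared equivalent, such as an API gateway or service mesh policy) must guard every service boundary, not just the application’s front door.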
Security Concern #4: Container Weaknesses
Moving applications into containers provides portability, lower resource consumption, and consistent operation. Both microservices and monoliths can operate in containers. However, containerized environments add another layer that must be secured and managed correctly. Common security weaknesses include privileges, images, and visibility. Any application running in a container—whether monolith or microservice—shares these risks.
Privileges
Containers often run as users with root privileges because doing so minimizes potential permission conflicts. When containerized applications need access to resources within the container, developers do not need to worry about installation or read/write failures caused by permissions.
However, running containers with root privileges elevates security risks. If the container is compromised, cybercriminals have access to everything in the container. Developers must consider using a rootless implementation or a least-privilege model to restrict access for both microservice and monolithic applications.
Images
A secure pipeline for containerized application images is essential for both monoliths and microservices. Using secured private registries and fixed image tags can reduce the risk of a container’s contents being compromised. Once a compromised image reaches production, the security risk increases dramatically.
Visibility
Tracking weaknesses during a container’s lifecycle can mitigate security risks for monoliths and microservices. Developers can deploy scanning and analysis tools to look for code vulnerabilities. They can also use tools for visibility into open-source components or applications.
In 2021, visibility concerns led the US federal government to issue container scanning requirements. The guidance outlines the tools needed to assess the container pipeline and images, and recommends real-time container monitoring.
Security Concern #5: Monitoring Complexity
Limited runtime visibility is another security risk. Applications should include event logging and monitoring to record potential threats. Alerts should be part of any visibility tool so unusual behavior can be assessed.
Monoliths often have real-time logging in place. This feature was added to help troubleshoot problems in highly complex applications. Writing error messages to a log with identifiers can significantly reduce the time needed to research a weakness and create a fix.
Putting real-time monitoring in place for microservices is far more time-consuming. Logging must be written not for one large application but for many smaller ones. Many development teams skimp on or even skip monitoring, assuming each microservice is so small that problems will be easy to find. Unfortunately, in the midst of an attack, the weakness is rarely easy to find.
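As a rough illustration, the sketch below tags every log line with a correlation ID so that a single request can be traced across many small services. The service name and event fields are invented for the example; teams typically standardize them across all microservices.

```java
import java.util.UUID;
import java.util.logging.Logger;

public class CorrelatedLogging {
    // Hypothetical service name; each microservice would use its own.
    private static final Logger LOG = Logger.getLogger("chat-service");

    public static void handleRequest(String incomingCorrelationId) {
        // Reuse the caller's ID if present; otherwise start a new trace.
        String correlationId = (incomingCorrelationId != null)
                ? incomingCorrelationId
                : UUID.randomUUID().toString();

        LOG.info("correlationId=" + correlationId + " event=request_received");
        try {
            // ... business logic would run here ...
            LOG.info("correlationId=" + correlationId + " event=request_completed");
        } catch (RuntimeException e) {
            // The shared identifier lets responders find every hop a request touched.
            LOG.severe("correlationId=" + correlationId + " event=error msg=" + e.getMessage());
            throw e;
        }
    }
}
```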
Security By Design
Although improved cybersecurity may not be the motivating factor behind modernizing legacy software, it is an opportunity that should not be wasted. Recent white-hat testing by Positive Technologies found that 93% of attacks successfully breached a company’s internal network. Targets were drawn from the finance, fuel and energy, government, industrial/manufacturing, and IT sectors.
Compromised credentials, including administrator passwords, were successfully used in 71% of attacks. The testers were also able to exploit vulnerabilities in software (60%) and web (40%) applications. Their results highlight the need to strengthen security in deployed applications, whether they are monoliths or microservices.
Security can no longer be an afterthought when it comes to software design. Every developer needs to look at their code through a cybersecurity lens. Neither architecture is perfect, but developers must weigh their advantages and disadvantages to ensure a secure application.
To improve application security, consider including security professionals and using automated tools during development.
- Security Professionals. If an organization has access to security professionals, use them. They can identify methods and tactics that cybercriminals use to compromise systems. With this knowledge, applications can be designed with security in mind.
- Automated Tools. Tools exist to help with migrating legacy applications, securing code under development, and monitoring performance in production. These tools can help developers decide which architecture is appropriate for a given application and facilitate making it as secure as possible.
Just as every company realizes how essential software is to their survival, developers need to acknowledge that cybersecurity must be part of their toolset.
vFunction’s modernization platform for Java applications provides the tools needed to migrate legacy applications. Our Modernization Hub helps move monoliths to microservices and uses AI-based tools to track behaviors. The Hub also performs static code inspection of binaries. These resources make it possible for developers to spend more time ensuring that security protocols and best practices are incorporated as part of the design. Request a demo to learn more about how vFunction can help with your modernization needs.
Correla Modernises their Critical Gemini Application that Supports UK’s National Gas
Energy tech leader Correla engaged vFunction and Wipro to accelerate and de-risk the modernisation of the Gemini Entry application, and to identify a modernisation plan for transforming Gemini into a set of microservices.
Leading financial analysis and education provider modernizes 20-year-old monolith with vFunction and AWS
Executive summary
With over $6 billion in annual revenues and 13,000+ employees, this US-based business and financial services company is a leading provider of financial analysis software, consulting, training and software services.
One of the company’s highly profitable and business-critical brands is an educational and training business platform that offers licensing courses, advanced certifications, continuing education and custom training for financial services professionals.
Over 500,000 students have taken courses on this platform, which is an aging monolithic enterprise application built nearly two decades ago. The monolithic architecture is now challenging their engineering teams with a lack of flexibility, development velocity, and accurate documentation, as well as a “black box” scenario resulting from functionality built by multiple teams with various development and release strategies over the years.
The challenges
Aging architecture
The application is a Java monolith developed over nearly two decades across multiple teams with different development practices and project sprints. Various development and release processes made it difficult to comprehend the service footprint and functionalities.
Documentation gaps
Over the years, documentation about the system became muddled. Without a clear grasp of how each part of the application functions, the team was unable to accurately scope and plan modernization efforts without committing to years of reverse engineering.
Stalled manual efforts
As a result of decades of mixed development and release patterns plus a general lack of accurate documentation, modernization efforts stalled. The team estimates that as much as $1 million was spent trying to manually update and support this critical application–without success.
The solution with vFunction
Transformation to microservices
The team’s strategy depended on the ability to transform a monolithic application to a decoupled microservices architecture deployed to AWS–all while shielding customers from negative impacts due to refactoring. Using a combination of vFunction Modernization Hub and AWS Migration Hub Refactor Spaces, the team started by decomposing and extracting functionality for a single critical microservice.
Automation and AI with vFunction
The team used vFunction Modernization Hub to automatically assess, analyze, and extract a single microservice from their monolith without making changes to their existing code base. This allowed them to create fresh APIs to this isolated service, replacing existing entry points, and to begin a broader modernization refactoring effort.
Routing and provisioning from AWS
Using AWS Migration Hub Refactor Spaces, the team provisioned a new environment to operate alongside the existing monolithic application. This included routing for the newly extracted microservice, and a set of default routes and services that matched existing API calls for the monolith.
The results
Clearly defined scope and resources
vFunction gave the team insights into the system architecture, suggested domains for new microservices, and the ability to refactor code at the individual class level. This enabled them to accurately determine the scope of the modernization effort, identify the resources, time, and expertise needed, and locate specific areas of functionality where refactoring would provide the largest impact.
Application scalability
By moving from a complex, on-premise monolith to individual microservices in the cloud, they were able to achieve higher levels of scalability for critical services. Leveraging AWS technologies like EC2 lets the team flexibly manage and provision compute resources to meet their ideal targets without risks to the user experience.
Future risk reduction
This industry leader now has a clear path forward and the right tools to analyze, extract, provision, and host additionally decomposed microservices going forward. The team is confident that this “factory model” approach to modernization will enable them to eliminate risk due to aging code, frameworks, and complexity. This is especially valuable for incorporating future IT functionality as well as new acquisitions for their application portfolio.
US federal agency uses vFunction to successfully modernize apps for cloud mandate
Executive summary
This customer is a government agency headquartered in Washington, DC with more than 75,000 employees providing services to millions of US citizens. The agency was given a mandate to modernize over 100 Java applications and migrate them to the cloud, and after three years, had modernized and migrated only two applications.
They were severely challenged by the time and complexity it was taking to manually modernize each app, and were not hopeful about meeting the cloud mandate. They needed another way.
By partnering with Accenture and using vFunction’s application modernization platform, they were able to successfully decompose one of their monolithic applications. When they compared using vFunction’s app modernization platform against their previous manual efforts, they saw a 10X increase in the speed of their modernization project. Best of all, they now have a path forward to modernizing the remaining apps in their organization using the vFunction platform for a repeatable process, allowing them to meet the mandate.
The challenges
Mandate for the cloud
Seeking operational efficiencies and aiming to reduce their on-premises and data center footprint, this customer has a firm goal to transition over 100 Java applications to the cloud. This includes refactoring existing apps to become fully cloud-enabled and migrating them to the cloud.
Slow and complicated projects
Despite executive support to move to the cloud, the engineering team was unable to reach the velocity required to meet 2022 goals. In the last three years, only two applications had been successfully migrated to cloud infrastructure.
Manual efforts with lack of tools
The challenge to their velocity lay in the team’s lack of modern tooling. At a rate of 1.5 years per app, manual efforts have been unsatisfactory, encouraging the customer to look to automation and AI to speed up their modernization efforts.
The solution with vFunction
Measure acceleration (manual vs automated)
Prior to engaging with vFunction, the customer had established a baseline estimate for their manual efforts by assigning a senior developer to assess and refactor the target application. Then, using vFunction’s automation and AI, they were able to compare the overall time needed, repeatability, and scalability of both the automated and manual processes.
Apply automated testing
The vFunction agent analyzed the application as the team ran automated test scripts in a preproduction environment. The tests covered regression scenarios that exercised 90% of the application code, providing enough visibility to identify dependencies and recommend a new reference topology.
Identify classes and refactor into microservices
During the analysis phase, the vFunction platform analyzed the dynamic and static results to identify specific domains, entry points, and boundaries. Using vFunction Studio, the team was able to visualize and manually refactor the monolith into a set of 10 individual services eligible for future extraction.
The results
10X time savings
The entire analysis and refactoring effort combined took only 33 hours: 23 hours of automated dynamic analysis to attain 90% coverage by tests in preproduction, and a further 10 hours for refining the reference architecture provided by vFunction. They estimated that this was at least 10X faster than previous manual efforts.
Success confirmed
From the 10 services identified by vFunction, the customer selected a single service for further extraction. Using vFunction, the modernization team successfully built, deployed, and tested this service outside of the monolith–significantly faster than they’d experienced with manual efforts.
Repeatable modernization process
Backed by the successful extraction and deployment of their first service, the team is applying the same methodology across 100+ Java applications slated for cloud modernization. Best of all, they report that “refactoring can be successful even with a team that is not familiar with the target application”.
Breaking Bad Code: Automating the Strangler Fig Pattern
In this webinar, learn how vFunction + AWS Migration Hub Refactor Spaces automate the strangler fig pattern for modernization of complex and legacy .NET and Java monolithic applications.
Monoliths to Microservices: 4 Modernization Best Practices
This post was originally featured on TheNewStack, sponsored by vFunction.
When it comes to refactoring monolithic applications into microservices, most engineering teams have no idea where to start. Additionally, a recent survey revealed that 79% of modernization projects fail, at an average cost of $1.5 million and 16 months of work.
In other articles, we discussed the necessity of developing competencies for assessing your application landscape in a data-driven way to help you prioritize your first big steps. Factors like technical debt accumulation, cost of innovation and ownership, complexity and risk are important to understand before blindly embarking on a modernization project.
Event storming exercises, domain-driven design (DDD), the Strangler Fig Pattern and others are all helpful concepts to follow here, but what do you as an architect or developer actually do to refactor a monolithic application into microservices?
There is a large spectrum of best practices for getting the job done, and in this post, we look at some specific actions for intelligently decomposing your monolith into microservices.
These actions include identifying service domains, merging two services into one, renaming services to something more accurate and removing services or classes as candidates for microservice extraction. The best part: Instead of trying to do any of this manually, we’ll be using artificial intelligence (AI) plus automation to achieve our objectives.
Best Practice #1: Automate the Identification of Services and Domains
Surveys show that manually analyzing a monolith using sticky notes on whiteboards takes too long, costs too much and rarely ends in success. Which architect or developer on your team has the time and ability to stop what they’re doing to review millions of lines of code and tens of thousands of classes by hand? Large monolithic applications need an automated, data-driven way to identify potential service boundaries.
The Real-World Approach
Let’s select a readily available, real-world application as the platform in which we’ll explore these best practices. As a tutorial example for Java developers, Oracle offers a medical records (MedRec) application — also known as the Avitek Medical Records application, which is a traditional monolith using WebLogic and Java EE.
Using vFunction, we will initiate a “learning” phase using dynamic analysis, static analysis and machine learning based on the call tree and system flows to identify ideal service domains.
In Image 1, we see a services graph in which services are shown as spheres of different sizes and colors, as well as lines (edges) connecting them. Each sphere represents a service that vFunction has automatically identified as related to a specific domain. These services are named and detailed on the right side of the screen.
The size of the sphere represents the number of classes contained within the service. The colors represent the level of class “exclusivity” within each service, referring to the percentage of classes that exist only within that service, as opposed to classes shared across multiple services.
Red represents low exclusivity, blue medium exclusivity and green high exclusivity. Higher class exclusivity indicates better boundaries between services, fewer interdependencies and less code duplication. Taken together, these traits indicate that it will be less complex to refactor highly-exclusive services into microservices.
The solid lines here represent common resources that are shared across the services (Image 2). Common resources include things like beans, synchronization objects, read-only DB transactions and tables, read-write DB transactions and tables, websockets, files and embedded files. The dashed lines represent method calls between the services (Image 3).
The black sphere in the middle represents classes still in the monolith, which contains classes and resources that are not specific to any particular domain, and thus have not been selected as candidates for extraction.
By using automation and AI to analyze and expose service boundaries previously hidden in the black box of the monolith, you can begin manipulating services inside a suggested reference architecture, clearing the way for better decisions based on data-driven analysis.
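To make the exclusivity metric concrete, here is a rough sketch of how it could be computed from a service-to-class mapping. The mapping below is invented for illustration; vFunction derives the real one from dynamic and static analysis.

```java
import java.util.Map;
import java.util.Set;

public class ExclusivitySketch {

    // Percentage of a service's classes that appear in no other candidate service.
    static double exclusivity(String service, Map<String, Set<String>> serviceClasses) {
        Set<String> own = serviceClasses.get(service);
        long exclusive = own.stream()
                .filter(cls -> serviceClasses.entrySet().stream()
                        .noneMatch(e -> !e.getKey().equals(service)
                                && e.getValue().contains(cls)))
                .count();
        return 100.0 * exclusive / own.size();
    }

    public static void main(String[] args) {
        // Invented example: two chat services sharing two of three classes each.
        Map<String, Set<String>> serviceClasses = Map.of(
                "PatientChat", Set.of("ChatSession", "PatientView", "MessageCodec"),
                "PhysicianChat", Set.of("ChatSession", "PhysicianView", "MessageCodec"));
        System.out.printf("PatientChat exclusivity: %.0f%%%n",
                exclusivity("PatientChat", serviceClasses)); // prints 33%
    }
}
```

Merging the two services would leave their shared classes exclusive to the merged service, which is why exclusivity rises after consolidation, as described in the next section.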
Best Practice #2: Consolidate Functionality and Avoid Duplication
When everything was in the monolith, your visibility was somewhat limited. If you’re able to expose the suggested service boundaries, you can begin to make decisions and test design concepts — for example, identifying overlapping functionality in multiple services.
The Real-World Approach
When does it make sense to consolidate disparate services with similar functionality into a single microservice? The most basic example is that, as an architect, you may see an opportunity to combine two services that appear to overlap — and we can identify these services based on the class names and level of class exclusivity.
In the services graph (Image 4), we see two similar chat services outlined with a white ring: PatientChatWebSocket and PhysicianChatWebSocket. We can see that the physician chat service (red) has 0% dynamic exclusivity and that the patient chat service (blue) has slightly higher exclusivity at 33%.
Neither of these services is using any shared resources, which indicates that we can merge these into a single service without entangling anything by our actions.
By merging two similar services, you are able to consolidate duplicate functionality as well as increase the exclusivity of classes in the newly merged service (Image 5). As we’re using vFunction Platform in this example, everything needed to logically bind these services is taken care of — classes, entry points and resources are intelligently updated.
Merging services is as simple as dragging and dropping one service onto the other, and after vFunction Platform recalculates the analysis of this action, we see that the sphere is now green, with a dynamic exclusivity of 75% (Image 6). This indicates that the newly-merged service is less interconnected at the class level and gives us the opportunity to extract this service with less complexity.
Best Practice #3: Create Accurate and Meaningful Names for Services
We all know that naming things is hard. When dealing with monolithic services, we can really only use the class names to figure out what is going on. With this information alone, it’s difficult to accurately identify which classes and functionality may belong to a particular domain.
The Real-World Approach
In our example, vFunction has automatically derived service domain names from the class names, shown on the right side of the screen in Image 7. As an architect, you need to be able to rename services according to your preferences and requirements.
Let’s now go back to the two chat services we merged in the last section. Whereas previously we had a service for both the patient and physician chat, we now have a single service that represents both profiles, so the name PatientChatWebSocket is no longer accurate, and may cause misunderstandings for other developers working on this service in the future. We can decide to select a better name, such as ChatService (Image 7).
In Image 8, we can see another service named JaxRSRecordFacadeBroker (+2). The (+2) part here indicates that we have entry points belonging to multiple classes. You may find this name unnecessarily descriptive, so you can change it simply to RecordBroker.
By renaming services in a more accurate and meaningful way, you can ensure that your engineering team can quickly identify and work with future microservices in a straightforward way.
Best Practice #4: Identify Functionality That Shouldn’t Be a Separate Microservice
What qualities suggest that functionality previously contained in a monolith deserves to be a microservice? Not everything should become a microservice, so when would you want to remove a service as a candidate for separation and extraction?
Well, you may decide that some services don’t actually belong in a separate domain, for example, a filter class that simply filters messages. Because this isn’t exclusive to any particular service, you can decide to move it to a common library or another service in the future.
The Real-World Approach
When removing functionality as a candidate for future extraction as a microservice, you are deciding not to treat this class as an individual entry point for receiving traffic. Let’s look at the AuthenticatingAdministrationController service (Image 9), which is a simple controller class.
In Image 9, we can see that the selected class has low exclusivity, indicated by its red color, and that it is a very small service, containing only one dynamic class, one static class and no resources. You can decide that this should not be a separate service by itself and remove it by dragging and dropping it onto the black sphere in the middle (Image 10).
By relocating this class back to the monolith, we have decided that this particular functionality does not meet the requirements to become an individual microservice.
In this post, we demonstrated some of the best practices that architects and developers can follow when refactoring a monolithic application into bounded contexts and accurate domains for future microservice extraction.
By using the vFunction Platform, much of the heavy lifting and manual efforts have been automated using AI and data-driven analysis. This ensures that architects and development teams can spend time focusing on refining a reference architecture based on intelligent suggestions, instead of spending thousands of hours manually analyzing small chunks of code without the appropriate “big picture” context to be successful.
Ten AWS Products for Modernizing Your Monolithic Applications
In today’s rapidly changing marketplace environment, companies face an imperative to modernize their business-critical legacy applications. That’s why, as the State of the CIO Study 2022 notes, modernizing legacy systems and applications is currently among the top priorities of corporate CIOs.
In most instances such modernization involves transferring legacy apps to the cloud, which is now the seedbed of technological innovation. Once housed in the cloud, and adapted to conform to the technical norms of that environment, legacy apps can improve their functionality, performance, flexibility, security, and overall usefulness by tapping into a sophisticated software ecosystem that offers a wide variety of preexisting services.
Amazon Web Services (AWS), with a 33% share of the market, is the most widely used cloud service platform. AWS provides users with a wide range of fully managed cloud services that can make modernizing legacy apps far easier than it otherwise would be. These include container management services, Kubernetes services, database and database migration services, application migration services, API and security management services, support for serverless functions, and more.
In this article, we want to take a brief look at ten of these key AWS services that companies should research and test to determine how they can best be used in modernizing the organization’s suite of legacy apps. But before looking at the AWS services themselves, we need to understand exactly what modernization aims to achieve.
What Application Modernization is All About: Transforming Monoliths into Microservices
Gartner describes application modernization this way:
“Application modernization services address the migration of legacy to new applications or platforms, including the integration of new functionality to provide the latest functions to the business.”
The major problem with most legacy applications is that the way they are architected makes “the integration of new functionality” extremely difficult. That’s because such apps are typically monolithic, meaning that the codebase is basically a single unit with functions and dependencies interwoven throughout.
Any single functional change could ripple through the code in unexpected ways, which makes adapting the app to add new functions or to integrate with other systems very difficult and risky.
A microservices architecture, on the other hand, is expressly designed to make updating the application easy. Each microservice is a separate piece of code that performs a single task; it is deployed and changed independently of any others. This approach allows individual functions to be quickly and easily updated to meet new requirements without impacting other portions of the application.
The fundamental purpose of legacy application modernization, then, is to restructure the application’s codebase from a monolith to microservices.
Related: Migrating Monolithic Applications to Microservices Architecture
The Importance of Refactoring
How does that restructuring take place? In most instances it begins with refactoring. The Agile Alliance defines refactoring this way:
“Refactoring consists of improving the internal structure of an existing program’s source code, while preserving its external behavior.”
Refactoring allows developers to transform a legacy codebase into a cloud-native microservices architecture while not altering its external functionality or user interface. But because the refactored application can fully interoperate with other resources in the cloud ecosystem, updates that were previously almost impossible now become easy. For that reason, refactoring will normally be a key element of any legacy application modernization process.
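A toy Java example makes the definition tangible. Both classes below return identical results for identical inputs; only the internal structure changes, which is exactly what refactoring means. The invoice logic is invented for illustration.

```java
// Before: one tangled method mixing validation, pricing, and formatting.
class InvoiceBefore {
    String total(double unitPrice, int qty) {
        if (qty <= 0) throw new IllegalArgumentException("qty must be positive");
        double t = unitPrice * qty;
        if (t > 100) t = t * 0.9; // bulk discount buried in the flow
        return String.format("$%.2f", t);
    }
}

// After: identical external behavior, decomposed into single-purpose
// methods whose boundaries could later become service boundaries.
class InvoiceAfter {
    String total(double unitPrice, int qty) {
        validate(qty);
        return format(applyDiscount(unitPrice * qty));
    }
    private void validate(int qty) {
        if (qty <= 0) throw new IllegalArgumentException("qty must be positive");
    }
    private double applyDiscount(double t) {
        return t > 100 ? t * 0.9 : t;
    }
    private String format(double t) {
        return String.format("$%.2f", t);
    }
}
```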
The Migration “Lift and Shift” Trap
A report from McKinsey highlights a disturbing reality:
“Thus far, modernization efforts have largely failed to generate the expected benefits. Despite migrating a portion of workloads to the cloud, around 80 percent of CIOs report that they have not attained the level of agility and business benefits that they sought through modernization.”
To a significant degree this failure can be attributed to organizations confusing migration with modernization. Far too often companies have focused on simply getting their legacy applications moved to the cloud, as if that in itself constituted a significant level of modernization. That is most emphatically not the case.
The problem is that just removing an application from a data center and rehosting it in the cloud (often called a “lift and shift”) does nothing to change the fundamental nature of the codebase. If it was a monolith before being migrated, it remains a monolith once it gets to the cloud, and retains all the disadvantages of that architecture.
It’s only when a legacy application is not only migrated to the cloud but is refactored from a monolith to a microservices architecture that true modernization can begin. That’s why the modernization services provided by AWS must be evaluated in light of how they facilitate not just the migration, but more importantly the transformation of legacy applications.
Related: Accelerate AWS Migration for Java Applications
Key Modernization Services from AWS
For each of these important AWS services, we’ll provide a brief description along with a link for further information.
1. Amazon EC2 (Elastic Compute Cloud)
Amazon EC2 provides an unlimited number of virtual servers to run your apps. If, for example, you’ve had a particular application running on a physical server in your data center, you can migrate that application to the cloud by launching an EC2 server instance to run it. Rather than having to purchase and maintain your own server hardware, you pay Amazon by the second for each server instance you invoke.
2. Amazon ECS (Elastic Container Service)
Amazon ECS is a container orchestration service that allows you to run containerized apps in the cloud without having to configure an environment for the code to run in. It can be particularly helpful in running microservices apps by facilitating integration with other AWS services. Although container management is normally complex and error-prone, the distinguishing feature of ECS is its “powerful simplicity” that allows users to easily deploy, manage, and scale containerized workloads in the AWS environment.
3. Amazon EKS (Elastic Kubernetes Service)
Kubernetes is an open-source container-orchestration system with which you can automate your containerized application deployments. Amazon EKS allows you to run Kubernetes on AWS without having to install, operate, or maintain your own Kubernetes infrastructure. Applications running in other Kubernetes environments, whether in an on-premises data center or the cloud, can be directly migrated to EKS with no modifications to the code.
4. Amazon VPC (Virtual Private Cloud)
Amazon VPC allows you to define a virtual network (similar to a traditional network you might run out of your data center) within an isolated section of the AWS cloud. Other AWS resources, such as EC2 instances, can be enabled within the network, and you can optionally connect your VPC network with other networks or the internet. All AWS accounts created after December 4, 2013 come with a default VPC that has a default subnet (range of IP addresses) in each Availability Zone. You can also create your own VPC and define your own subnet IP address ranges.
5. AWS Database Migration Service (DMS)
AWS DMS allows you to migrate your databases quickly and securely to AWS. Both homogeneous (e.g. Oracle to Oracle) and heterogeneous (e.g. Oracle to MySQL) migrations are supported. You can set DMS up for either a one-time migration or for continuing replication in which changes to the source DB are continuously applied in real time to the target DB.
6. Amazon S3 / Aurora / DynamoDB / RDS
AWS provides a range of database and data storage services that can simplify the process of migrating data to the cloud:
Amazon S3 (Simple Storage Service) is a high-speed, highly scalable data storage service designed for online backup and archiving in AWS.
Amazon Aurora is “a fully managed relational database engine that’s compatible with MySQL and PostgreSQL.”
Amazon DynamoDB is “a fully managed, serverless, key-value NoSQL database” that provides low latency and high scalability.
Amazon RDS (Relational Database Service) is a managed SQL database service that supports the deployment, operation, and maintenance of seven relational database engines: Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server.
7. Amazon API Gateway
Amazon API Gateway enables developers to securely create, publish, and manage APIs to connect non-AWS software to AWS-native applications and resources. That kind of integration, which can substantially enhance the functionality of legacy applications, is a fundamental element of the application modernization process.
8. AWS IAM (Identity and Access Management)
AWS IAM allows you to securely manage AWS access permissions for both users and workloads. You can use IAM policies to specify who (or what workloads) can access specific services and resources, and under what conditions. IAM is a feature of your AWS account, and there is no charge to use it.
9. AWS Lambda
AWS Lambda is an event-driven compute service that lets you run code as stateless functions without provisioning or managing servers or storage–also known as Function as a Service (FaaS). With those tasks performed automatically, developers can focus on their application code. Lambda supports several popular programming languages, including C#, Python, Java, and Node.js. Lambda runs a function only when triggered by an appropriate event, and can automatically scale to handle anything from a few requests per day to thousands of requests per second.
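As a minimal sketch, a Java Lambda function is simply a class implementing the RequestHandler interface from the aws-lambda-java-core library; the class name and payload shape here are hypothetical.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Lambda invokes handleRequest once per triggering event; the function
// itself is stateless, and AWS provisions and scales the underlying
// compute automatically.
public class GreetingHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String name, Context context) {
        context.getLogger().log("invoked with: " + name);
        return "Hello, " + name;
    }
}
```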
10. AWS Migration Hub Refactor Spaces (MHRS)
Amazon describes Migration Hub Refactor Spaces as “the starting point for customers looking to incrementally refactor applications to microservices.” MHRS orchestrates AWS services to create an environment optimized for refactoring, allowing modernization teams to easily set up and manage an infrastructure that supports the testing, staging, deployment, and management of refactored legacy applications.
How vFunction Works with MHRS
vFunction and MHRS work together to refactor monolithic legacy applications into microservices and to safely stage, migrate, and deploy those microservice applications to AWS. Developers use MHRS to set up and manage the environment in which the refactoring process is carried out, while the vFunction Platform uses its AI capabilities to substantially automate both the analysis and refactoring of legacy applications. The result of this collaboration is a significant acceleration of the process of modernizing legacy apps and safely deploying them to the AWS cloud. To experience first-hand how vFunction and AWS can work together to help you modernize your legacy applications, schedule a demo today.
The CIO Guide to Modernizing Monolithic Applications
As the pace of technological change continues to accelerate, companies are being put under more and more pressure to improve their ability to quickly react to marketplace changes. And that, in turn, is putting corporate CIOs on the hot seat.
In a recent McKinsey survey, 71% of responding CIOs said that the top priority of their CEO was “agility in reacting to changing customer needs and faster time to market.” Those CEOs are looking to digital technology to enable their companies to keep ahead of competitors in a constantly evolving market environment.
CIOs are tasked with providing the IT infrastructure and tools needed to drive the marketplace innovation and agility required to accomplish that goal.
But in many cases CIOs are facing a seemingly intractable problem—they’ve inherited a suite of legacy applications that are indispensable to the company’s daily operations, but which also have very limited capacity for the upgrades necessary for them to be effective in the cloud-native, open-source technological landscape of today.
As a recent report by Forrester puts it,
“Most legacy core software systems are too inflexible, outdated, and brittle to give businesses the flexibility they need to win, serve, and retain customers.”
But because such systems are still critical for day-to-day operations, CIOs can’t just get rid of them. Rather, a way must be found to provide them with the flexibility and adaptability that will enable them to be full participants in the modern technological age.
The Problem with Monoliths
The fundamental cause of the brittleness and inflexibility that characterize most legacy systems is their monolithic architecture. That is, the codebase (which may have millions of lines of code) is a single entity with functionalities and dependencies interwoven throughout. Such applications are extremely difficult to update because a change to any part of the code can ripple through the application, causing unintended operational changes or failures in seemingly unrelated parts of the codebase.
Because they are inflexible and brittle, such applications cannot be easily updated with new features or functions—they were not designed with that capability in mind. A much broader transformation is required, one in which the application’s codebase is restructured in ways that allow it to be upgraded while maintaining the original scope. That broad restructuring is referred to as application modernization.
Application Modernization and The Cloud
What, exactly, is application modernization? Gartner provides this description:
“Application modernization services address the migration of legacy to new applications or platforms, including the integration of new functionality to provide the latest functions to the business.”
There are two key aspects of this definition: migration and integration.
Because the cloud is where the technological action is today, most application modernization efforts involve, as a first step, migrating legacy apps from their original host setting to the cloud. As McKinsey says of this trend:
“CIOs see the cloud as a predominant enabler of IT architecture and its modernization. They are increasingly migrating workloads and redirecting a greater share of their infrastructure spending to the cloud.”
The report goes on to note that McKinsey expects that by 2022, 75% of corporate IT workloads will be housed in the cloud.
That leads to the second element of the Gartner definition: integration. If legacy applications are to be effective in the cloud environment, they must be integrated into the open services-based cloud ecosystem.
That means it’s not enough to simply migrate applications to the cloud. They must also be transformed or restructured so that integration with cloud-native resources is not just possible, but easy and natural.
The fundamental purpose of application modernization is to restructure legacy code so that it is easily understandable to developers, and can be quickly updated to meet new business requirements.
Transitioning From a Monolithic Architecture to Microservices
What does it take to transform legacy apps so that they are not only cloud-enabled but also fit as naturally into the cloud landscape as cloud-native systems do?
As we’ve seen, the fundamental problem causing the rigidity and inflexibility that must be overcome in transforming legacy apps is their monolithic architecture. Monolithic applications are self-contained and aren’t always easy to integrate with other applications or systems. The codebase is a single entity in which all the functions are tightly coupled and interdependent. Such an app is, in essence, a “black box” as far as the outside world is concerned—its inputs and outputs can be observed, but its internal processes are entirely opaque.
If an app is to be integrated into the cloud’s open-source ecosystem, its functions must somehow be separated out so that they can interoperate with other cloud services. The way that’s normally accomplished is by refactoring the legacy code into microservices.
Related: Migrating Monolithic Applications to Microservices Architecture
What are Microservices?
Microsoft provides a useful description of the microservices concept:
“A microservices architecture consists of a collection of small, autonomous services. Each service is self-contained and should implement a single business capability.”
The key terms here are “small” and “autonomous.” Microservices may or may not be literally small, but they should be independent and loosely coupled, each covering a specific piece of functionality. Each is a separate codebase that performs only a single task, and each can be deployed and updated independently of the others. Microservices communicate with one another and with other resources only through well-defined APIs—there is no external visibility into, or coupling with, their internal functions.
Advantages of the microservices architecture include:
- Agility: Because each microservice is small and independent, it can be quickly updated to meet new requirements without impacting the entire application.
- Scalability: To scale any feature of a monolithic application when demand increases, the entire application must be scaled. In contrast, each microservice can be scaled independently without scaling the application as a whole. In the cloud environment, not having to scale the entire app can yield substantial savings in operating costs.
- Maintainability: Because each microservice is small and does only one thing, maintenance is far easier than with a monolithic codebase, and can be handled by a small team of developers.
The key task of legacy application modernization is to decompose a monolithic codebase into a collection of microservices while maintaining the functionality of the original application.
But how is that to be accomplished with legacy code that is little understood and probably not well documented?
Options for Transforming Monolithic Code to Microservices
Gartner has identified seven options for migrating and upgrading legacy systems.
- Encapsulate: Connect the app to cloud-based resources by providing API access to its existing data and functions. Its internal structure and operations remain unchanged.
- Rehost (“Lift and Shift”): Migrate the application to the cloud as-is, without significantly modifying its code.
- Replatform: Migrate the application’s code to the cloud, incorporating small changes designed to enhance its functionality in the cloud environment, but without modifying its existing architecture or functionality.
- Refactor: Restructure and optimize the app’s code to a microservices architecture without changing its external behavior.
- Rearchitect: Create a new application architecture that enables improved performance and new capabilities.
- Rewrite: Rewrite the application from scratch, retaining its original scope and specifications.
- Replace: Completely eliminate the original application, and replace it with a new one.
All of these options are sometimes characterized as “modernization” methodologies. Actually, while encapsulating, rehosting, or replatforming do migrate an app (or in the case of encapsulation, its interfaces) to the cloud, no restructuring of the codebase takes place. If the app was monolithic in its original environment, it’s still monolithic once it’s housed in the cloud. So, these methods cannot rightly be called modernization options at all.
Neither does replacement qualify as a modernization option since rather than restructuring the legacy codebase, it throws it out completely and replaces it with something entirely new.
So, to truly modernize a legacy application from a monolith to microservices will involve the use of some combination of refactoring, rearchitecting, and rewriting. Let’s take a brief look at each of these:
- Refactoring: Refactoring will be the first step in almost any process of modernizing monolithic legacy applications. By converting the codebase to a cloud-native, microservices structure, refactoring enables the app to be fully integrated into the cloud ecosystem. And once that’s accomplished, developers can easily update the app with new features to meet specific requirements.
- Rearchitecting: Rearchitecting is usually employed to enable improvements in areas such as performance and scalability, or to add features that are not supported by the original design. Because rearchitecting makes fundamental changes to the structure and operation of the code, it is more complex, time-consuming, and risky than simply refactoring.
- Rewriting: Completely rewriting the legacy code is the most complex, time-consuming, and risky of all the modernization options. It is usually resorted to when developers wish to avoid spending the time and effort required to deconstruct the existing code to understand how it works. Because a rewrite carries the highest risk of causing disruptions to a company’s business operations, it is normally used only as a last resort.
Although rearchitecting or rewriting may be appropriate for some cases, refactoring should always be the starting point because it produces a codebase that developers can easily upgrade with new features or functionality. As McKinsey puts it:
“It [is] critical for many applications to refactor for modern architecture.”
Challenges of Modernization
All of the true modernization options (refactoring, rearchitecting, and rewriting) require extensive changes to the legacy application’s codebase. That’s not a task to be undertaken lightly. Legacy apps typically hold onto their secrets very tightly due to several common realities:
- The developers who wrote and maintained the original code, which in some cases is decades old, have by now retired or are otherwise unavailable.
- Documentation, both of the original requirements and modifications made to the code through the years, is often incomplete, misleading, or missing entirely.
- Patches to the code that handle low-frequency exceptions or boundary conditions may not be documented at all, and can be understood only by minute examination of the code.
- Similarly, changes to business process workflows may have been incorporated through code patches that were never adequately documented or covered by tests. If such workflows are not discovered and accounted for in a modernization effort, important functions of the application may be lost.
Any modernization approach involves a high degree of complexity and requires significant time and expertise. McKinsey quotes one technology leader as saying,
“We were surprised by the hidden complexity, dependencies and hard-coding of legacy applications, and slow migration speed.”
Building a Modernization Roadmap
If you’re trying to drive to someplace you’ve never been before, it’s very helpful to have a map. That’s especially the case if you’re driving toward modernization of your legacy applications. You need a roadmap.
The first stop on your modernization roadmap will be an assessment of the goals of your business, where you currently stand in relation to those goals, and what you need from your technology to enable you to achieve those goals.
Then you’ll want to develop an understanding of exactly what you want your modernization process to achieve. You’ll analyze your current application portfolio in light of your business and technology goals, and determine which apps must be modernized, what method should be used, and what priority each app should have.
To learn more about creating a modernization roadmap, take a look at the following resource:
Related: Succeed with an Application Modernization Roadmap
Why Automation is Required for Successful Modernization
Converting a monolithic legacy app to a microservices architecture is not a trivial exercise. It is, in fact, quite difficult, labor-intensive, time-consuming, and risky. At least, it is if you try to do it manually.
It’s not unusual for a legacy codebase to have millions of lines of code and thousands of classes, with embedded dependencies and hidden flows that are far from obvious to the human eye. That’s why using a tool that automates the process is a practical necessity.
By intelligently performing static and dynamic code analyses, a state-of-the-art, AI-driven automation tool can, in just a few hours, uncover functionalities, dependencies, and hidden business flows that might take a human team months or years to unravel by manual inspection of the code.
And not only can a good modernization tool analyze and parse the monolithic codebase, it can actually refactor and rearchitect the application automatically, saving the untold hours that a team of highly skilled developers would otherwise have to put into the project.
According to McKinsey, companies that display a high level of agility in their marketplaces have dramatically higher rates of automation than those characterized as the “laggards” in their industries.
The vFunction Application Modernization Platform
The vFunction platform was built from scratch to be exactly the kind of automation tool that’s needed for any practical application modernization effort. It has advanced AI capabilities that allow it to automatically analyze huge monolithic codebases, both statically and during the actual execution of the code.
As the vFunction Assessment Hub crawls through your code, it automatically builds a lightweight assessment of your application landscape that helps you prioritize and make a business case for modernization. Once you’ve selected the right application to modernize, the vFunction Modernization Hub takes over, analyzing and automatically converting complex monolithic applications into extracted microservices.
vFunction has been demonstrated to speed up the modernization process by a factor of 15 or more, which can reduce the time required by such projects from months or years to just a few weeks.
If you’d like to experience firsthand how vFunction can help your company modernize its monolithic legacy applications, schedule a demo today.