At the beginning of the software revolution, computer systems were built as monolithic applications. But as they inevitably began to sprawl, the behemoths they grew into became unmanageable. Organizations soon realized that traditional monolithic applications were unsustainable for handling large and complex software applications.
The software industry eventually made the transition to microservices. A 2018 survey revealed that 63% of enterprises were adopting microservices architectures even though converting a monolithic application into microservices is a challenging and daunting task.
In this article, we will explore how organizations can effectively convert monolithic applications to microservices with the least amount of friction.
Converting a Monolithic Application to Microservices: Context and Problem
Typically, a monolithic application is a single, tightly knit codebase that controls all of its events and data objects. Business logic resides both within the client application and the server codebase. Data is often stored in a centralized database that functions and methods throughout the codebase access directly.
This type of software is inefficient and a nightmare to maintain.
Those who want to move away from monolithic applications aspire to steer themselves toward microservice architectures. This is because microservices allow software development to accelerate through the combined practices of continuous integration and continuous delivery or deployment (CI/CD).
The microservice architecture enables an organization to operate with small, cross-functional teams using the agile methodology. The overriding objective is to make the software engineering process nimble enough so teams can deliver value independently and parallel to each other.
Microservices simplify software development in two main ways:
● They structure software engineering teams into small (usually about 5-10 members), manageable, autonomous teams, with each team responsible for one or more services
● They allow components to be deployed independently, thereby simplifying testing
But identifying the context in which the problem exists is only part of the equation. What organizations want is the process of converting a monolithic application to microservices to be efficient, effective, and as painless as possible.
Using selective refactoring and iterative modernization
The best way for organizations to convert from monolithic applications to microservices is through the use of what we call selective refactoring and iterative modernization.
This essentially means that organizations shouldn’t be under obligation or pressure to do this migration all at once. In addition, this technique also advocates that organizations should look toward leveraging existing automation from cloud providers to ease the migration process.
Almost all techniques that adopt the selective refactoring paradigm include, in one way or another, the following methods:
- Transform: This involves creating a parallel process to ensure there’s a smooth transition that doesn’t disrupt existing business operations. This step might involve creating a parallel site in the existing environment or the cloud, but it doesn’t have to be so.
- Co-exist: The new and the old legacy systems are allowed to run concurrently. However, this entails implementing functionality to redirect to the new system when necessary.
- Eliminate: This is the iterative process in which the old functionality is gradually removed and/or modernized. Some techniques, such as the Strangler facade, control when traffic is redirected away from the old site until its features are entirely replaced.
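The co-exist step above can be sketched as a thin router that sends each request either to the legacy handler or to its new replacement, depending on which features have already migrated. All names here (the handlers and the `migrated` set) are illustrative assumptions, not a prescribed API.

```python
# Minimal sketch of the "co-exist" step: redirect to the new system once a
# feature has migrated, and fall back to the legacy system otherwise.
# Handler names and the `migrated` set are hypothetical examples.

def legacy_handler(feature, payload):
    return f"legacy:{feature}"

def new_handler(feature, payload):
    return f"new:{feature}"

# Features that have already been moved to the new microservices
migrated = {"orders", "catalog"}

def route(feature, payload=None):
    """Redirect to the new system when the feature has migrated."""
    if feature in migrated:
        return new_handler(feature, payload)
    return legacy_handler(feature, payload)
```

As features are eliminated from the legacy system, they simply move into the `migrated` set until nothing routes to the old handler anymore.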
Deciding How to Break Down Monolithic Systems into Microservices
The best approach is to use tested design patterns and methods to incrementally migrate legacy systems. Design patterns in software engineering provide us with repeatable patterns, so developers don’t have to reinvent the wheel.
We’ve now established that a (micro)service must be small enough for easy and quick testing. The methods detailed below provide various techniques for doing this gracefully: in essence, replacing old pieces of functionality with new applications and services.
Decompose by domain lines
Domain-Driven Design (DDD) is a software development methodology that focuses on development through the domain model. Its advantage lies in the fact that the domain model represents real-world entities and their relationships.
As a design pattern, domain modeling is highly cognizant of the domain space in which the application operates. This is important as there’s often a gap between understanding the problems of a domain and interpreting its requirements that hampers software development.
DDD allows the development process to focus “attention at the heart of the application, focusing on the complexity that is intrinsic to the business domain itself.”
Because practitioners can identify pertinent domain entities and their relationships, it provides an effective basis for the incremental development of testable and maintainable systems.
Decompose by subdomain context
Building on decomposition by domain, this approach addresses the challenge of how to decompose the application into services. We’ve already highlighted domains in the preceding section, emphasizing how DDD is concerned with an application’s problem space, which is nothing more than its core business.
Now, domains are composed of subdomains, and each subdomain corresponds to a different part of the business. Though an application might have multiple subdomains, they can be broadly classified as follows:
● Core: The most valuable part of the application, reflecting the key differentiator for the business
● Supporting: This isn’t the core but is nevertheless related to what the business does. As such, it can either be implemented in-house or outsourced
● Generic: The general, garden-variety functionality that isn’t specific to the business and can therefore be implemented with off-the-shelf software
Some examples of subdomains include inventory management, delivery management, order management, and product catalog.
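One hypothetical way to make this classification concrete is a simple map from subdomains to their category, which teams can use to decide what to build in-house versus buy. The subdomain names and labels below are illustrative assumptions for a retail business, not a prescription.

```python
# Illustrative classification of example subdomains into core, supporting,
# and generic (hypothetical labels for a retail business).
SUBDOMAINS = {
    "product catalog": "core",            # key business differentiator
    "order management": "core",
    "inventory management": "supporting", # related, could be outsourced
    "delivery management": "supporting",
    "notifications": "generic",           # off-the-shelf candidate
}

# Core subdomains are where in-house microservice investment pays off most
core_services = [name for name, kind in SUBDOMAINS.items() if kind == "core"]
```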
One of the tightropes developers must walk is how to convert or decompose the application in such a way that newly changed requirements impact only a single service in the system. This is especially salient because when a single change affects multiple services, coordination across multiple teams is required.
To solve the riddle of decomposing an application into services, it’s helpful to adopt some object-oriented design (OOD) principles, namely:
● The Single Responsibility Principle (SRP): The core of this principle is that a class in a codebase should have only one reason to change
● The Common Closure Principle (CCP): This is a corollary to the SRP and states that classes that change for the same reason should be in the same package
The overarching objective of this method is to ensure that when business rules change, as they most often do, developers are only tasked with making code changes in a small number of areas. This ideally should be in only one package at a time.
However, to decompose by subdomain context, the following is required:
● A stable architecture with cohesive services
● An architecture in which services implement and execute a small set of strongly related functions
● The services are loosely coupled, wherein each service has an API that encapsulates its implementation so the underlying code can be changed without affecting the service’s clients
● A service is testable with measurable outcomes
● Each service’s team comprises only about 6–10 members
● These teams should be autonomous
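The loose-coupling requirement in the list above can be sketched as a service whose public API is its only contract: the storage behind it can change without any client noticing. The service and method names here are hypothetical.

```python
# Sketch of a loosely coupled service: clients use only the public API,
# so the storage behind it (a dict here) could be swapped for a database
# without affecting any client. All names are illustrative.

class InventoryService:
    def __init__(self):
        self._stock = {}  # private implementation detail, hidden by the API

    # --- public API: the service's only contract with its clients ---
    def add_stock(self, sku, qty):
        self._stock[sku] = self._stock.get(sku, 0) + qty

    def available(self, sku):
        return self._stock.get(sku, 0)

svc = InventoryService()
svc.add_stock("sku-1", 5)
```

Because clients never see `_stock`, replacing it is an internal change, which is exactly what makes the service independently deployable and testable.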
The results of executing this process properly are:
● The development of services that are cohesive and loosely coupled
● Creation of autonomous and cross-functional teams organized around business, rather than technical value
● Stable software architecture built on stable subdomains
Barriers to successful decomposition by subdomain method
However, software teams must also be aware of the problems they’ll likely encounter with this method and prepare themselves adequately:
● How to identify the subdomain
● How to approach the organizing principle for organization structure
● Addressing the high-level domain model since each subdomain needs to have a crucial domain object
Decompose using the Strangler Fig pattern
At the beginning of this section, we proposed using an incremental, piecemeal process for converting a monolithic application to microservices. In practice, this means running the old and new systems in parallel.
In the Strangler Fig pattern, you use the facade design pattern to gradually migrate to a new system while keeping parts of the old legacy system in place to handle existing features that have yet to migrate.
The name stems from the fact that this method eventually replaces or “strangles” the old system into oblivion. The distinguishing feature of this architecture is placing the old system behind a facade, which breaks the migration risk into small pieces. In this method, the facade represents the entry points into the existing system.
This method involves creating a Strangler facade that intercepts requests going to the backend of the legacy system. The facade subsequently decides whether to route the request either to the new services or the legacy application.
Using this pattern helps to reduce the risk emanating from the migration process. It also makes it easier to spread the developmental process over time as the legacy application is also allowed to continue to function.
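A minimal sketch of the facade described above: it intercepts each incoming request and decides, by route, whether to send it to a new microservice or to the legacy backend. The routes, paths, and backend names are all illustrative assumptions.

```python
# Minimal Strangler facade sketch: intercept every request and route it
# either to a new microservice or to the legacy backend.
# All route prefixes and backend names are hypothetical.

def legacy_backend(path):
    return f"legacy handled {path}"

def orders_service(path):
    return f"orders-service handled {path}"

# Routes already "strangled" out of the monolith into new services
STRANGLED_ROUTES = {"/orders": orders_service}

def facade(path):
    """Single entry point placed in front of the legacy system."""
    for prefix, service in STRANGLED_ROUTES.items():
        if path.startswith(prefix):
            return service(path)   # request goes to the new service
    return legacy_backend(path)    # everything else stays on the monolith
```

As more functionality migrates, entries are added to `STRANGLED_ROUTES` until the legacy backend receives no traffic at all and can be retired.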
The Strangler pattern is most useful in large systems, so avoid using it in small codebases with low complexity.
Like any implementation method, it has its strong suits and weak points. Here are the pros and cons of using the Strangler architecture pattern:
Pros:
● An effective way to reduce risk during system transformation
● Nimbly adds new services while the legacy system is still in use
● No service interruption, as the old system is kept in play while being refactored into the new, updated versions
Cons:
● Demands a lot of ongoing attention, especially in the areas of routing and network management
● Runs the risk of falling into “adaptation hell,” because special logic is needed to reroute from the old service to the new one. This workload and the ensuing complexity mount when dealing with dozens or even hundreds of services.
● Migration and conversion are rarely smooth, so something will inevitably go wrong. As a result, DevOps teams need a backup and rollback plan for each refactored instance.
Automated Event Storming
Automated event storming sounds like a technique coming from a multiplayer video game. While it emanates from the principles of domain-driven design (DDD), its usefulness extends beyond the domain of software development and can be used for converting a monolithic application to microservices.
Part of its uniqueness lies in its ability to quicken group learning and accelerate the pace of development teams. Event storming has several advantages, such as providing a fun, lightweight, and rapid process for development teams.
How to implement event storming
The traditional approach to event storming involves a facilitated workshop where everyone engages, and a facilitator guides the process toward a resolution. The goal is to complete a model of the domain.
The group identifies aggregates and bounded contexts. The role of aggregates is to accept commands and produce events. As this progresses, the team begins to group aggregates into bounded contexts.
Eventually, the relationships discovered around bounded contexts are used to create a context map. This context map is essentially a model, which is then implemented in code to validate and verify it.
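The aggregate/command/event relationship surfaced during event storming can be sketched in a few lines: an aggregate accepts a command and records the resulting domain event. The names (`Order`, `place`, `OrderPlaced`) are hypothetical examples of what a workshop might identify, not a fixed vocabulary.

```python
# Sketch of an event-storming outcome: an aggregate that accepts a
# command and records the domain event it produces.
# Aggregate, command, and event names are illustrative.

class Order:  # aggregate in a hypothetical "Ordering" bounded context
    def __init__(self, order_id):
        self.order_id = order_id
        self.events = []           # domain events produced so far

    def place(self, items):        # handle the "PlaceOrder" command
        self.events.append(("OrderPlaced", self.order_id, tuple(items)))

order = Order("o-1")
order.place(["book"])
```

In a real context map, the `OrderPlaced` event is what links this aggregate to other bounded contexts that react to it, such as delivery or billing.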
Alternatively, automated approaches eliminate the need for such a time-consuming and resource-intensive investment. Organizations that tried this approach quickly found that they couldn’t justify locking all their key architects and developers up in a conference room for weeks, often months at a time.
The approach was unrealistic for the business and unscalable across enterprises that often had hundreds of these applications to modernize. vFunction is an example of an automated approach to event storming that saves time and is built for repeatability and scalability.
The appeal of “go big or go home” might be suitable for other areas of life, but it doesn’t apply to converting monolithic applications to microservices.
One of the greatest challenges of converting a monolithic application to microservices is managing risk. Because monolithic applications tend to be tightly coupled, any change is fraught with booby traps and unanticipated hazards.
The selective refactoring and iterative modernization methods mentioned here will help reduce overall systemic risk, because they aim to reduce disruption through small, discrete episodes of change.
At vFunction, we understand these challenges because we’re the first and only platform that has taken up the challenge of automatically transforming complex monolithic Java applications into microservices. Contact us today to help your organization eliminate the hardship, risks, and cost constraints of converting monolithic applications to microservices.