Cloud Modernization After Refactoring: A Continuous Process

Bob Quillin

May 13, 2022

Refactoring is a popular and effective way of modernizing legacy applications. However, to get the maximum benefit from modernization, we should not stop after refactoring. Instead, we should keep modernizing after refactoring as part of a process of Continuous Modernization, a term coined by a leading cloud modernization platform.

Continuous Modernization: Modernization after Refactoring

Businesses constantly adapt and improve to handle new opportunities and threats. Similarly, they must continuously upgrade their enterprise software applications. Over time, every enterprise application accumulates technical debt. Often, the only way to repay that debt is to refactor the application and move it to the cloud. This process of application modernization provides significant benefits.

A “megalith” is a large traditional monolithic application with over 5 million lines of code and 5,000 classes. Companies that maintain megaliths often choose the less risky approach of incremental modernization. At any given point, part of the application may have been modernized to microservices running in the cloud and deployed by CI/CD pipelines, while the remaining portion of the legacy app remains untouched.

Modernization is an Ongoing Process

Three of the most popular approaches to modernization are rehosting, re-platforming, and refactoring.

Rehosting (or Lift and Shift): This involves moving applications to the cloud as-is or with minimal changes. Essentially, you change the place where the application runs. Often, this means migrating your application to the cloud, whether to shared servers, a private cloud, or a public cloud.

Re-platforming: This approach takes a newer runtime platform and inserts the old functionality, leaving you with a mosaic that mixes the old in with the new. From the end user’s perspective, the program operates the same way it did before modernization, so they don’t need to learn much in the way of new features. At the same time, your legacy application will run faster than before and be easier to update or repair.

Refactoring: Refactoring is the process of reorganizing and optimizing existing code. It lets you get rid of outdated code, reduce significant technical debt, and improve non-functional attributes such as performance, security, and usability. Refactoring also helps you adapt to changing requirements, since cloud-native, microservices-based architectures make it possible to add new features or modify existing ones quickly.

Of these, refactoring requires the most effort and yields the most benefits. In addition to code changes, refactoring also includes process-related enhancements like CI/CD to unleash the full power of modernization. Modernization, however, is not a once-and-done activity. In fact, modernization after refactoring is a continuous process.

The Role of DevOps in Modernization

Application modernization and DevOps go hand in hand. DevOps (Development + Operations) is a set of processes, practices, and tools that enable an organization to deliver applications and updates at high velocity. DevOps enables previously siloed groups – developers and operations – to coordinate and produce better products.

Continuous integration (CI) and continuous delivery/deployment (CD) are the two central tenets of DevOps. For maximum benefits, modernization after refactoring should include CI and CD.

Continuous Integration: Overview, History, and How It Works

Software engineers work on “branches,” which are private copies of the code that only they can access. They make these copies from a central code repository, often called the “mainline” or “trunk.” After making changes on their branch and testing them, they must “merge” (integrate) their changes back into the central repository. This merge can fail if, in the meantime, another developer has changed the same files. The result is a “merge conflict,” which must be resolved, often a laborious process.

Continuous integration (CI) is a DevOps practice in which software developers frequently merge their code changes into the central repository. Because developers check in code so often, merge conflicts stay small and rare. Each merge triggers an automated build and test cycle, and developers fix all problems immediately. CI’s goals are to reduce integration issues, find and resolve bugs sooner, and release software updates faster.

Grady Booch first used the phrase Continuous Integration in his book, “Object-Oriented Analysis and Design with Applications”, in 1994. When Kent Beck proposed the Extreme Programming development process, he included twelve programming practices he felt were essential for developing quality software. Continuous integration was one of them.

How Does Continuous Integration Work?

There are several prerequisites and requirements for adopting CI.

Maintain One Central Source Code Repository

A central source code repository (or repo) under a version control system is a prerequisite for Continuous Integration. When a developer works on the application, they check out the latest code from the repo. After making changes, they merge their changes back to the repo. So, the repo contains the latest, or close to the latest, code at all times.
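
As an illustration, here is a minimal sketch of that check-out/merge cycle driven from Python with plain git commands. It assumes trunk-based development with a remote named origin and a mainline branch named main; real teams usually run this through their version control client or CI tooling.

```python
# Minimal sketch of the check-out / merge cycle against the central repo.
# Assumes a remote named "origin" and a mainline branch named "main".
import subprocess

def git(*args: str) -> None:
    subprocess.run(["git", *args], check=True)  # stop if any step fails

git("checkout", "main")
git("pull", "--rebase", "origin", "main")   # start from the latest mainline
# ... edit files and run the tests locally ...
git("add", "-A")
git("commit", "-m", "small, frequent change")
git("pull", "--rebase", "origin", "main")   # re-sync; resolve any conflicts now
git("push", "origin", "main")               # merge the change back to the repo
```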

Automated Build Process

It should be possible to kick off the build with a single command. The build process should do everything – generate the executables, libraries, databases, and anything else needed – to get the system up and running.
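
To make “one command” concrete, here is a hypothetical build.py for a Python project that chains the compile, test, and packaging steps and exits non-zero the moment any step fails. The specific steps (pytest, the build package) are assumptions; substitute your project’s real commands.

```python
#!/usr/bin/env python3
"""Single-command build: `python build.py` does everything."""
import subprocess
import sys

# Hypothetical steps for a Python project; substitute your real commands.
STEPS = [
    ["python", "-m", "compileall", "src"],      # sanity-compile the sources
    ["python", "-m", "pytest", "tests", "-q"],  # run the automated test suite
    ["python", "-m", "build", "--wheel"],       # produce a deployable package
]

def main() -> int:
    for step in STEPS:
        print("running:", " ".join(step))
        if subprocess.run(step).returncode != 0:
            print("BUILD FAILED at:", " ".join(step))
            return 1  # a non-zero exit marks the build as unusable
    print("BUILD SUCCEEDED")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```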

Automated Testing

Include automated tests in the build process. The test suite should verify most, if not all, of the functionality in the build. A report should tell you how many tests passed at the end of the test run. If any test fails, the system should mark the build as failed, i.e., unusable.
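
Even a test as small as the following, written here with Python’s standard unittest module, can gate a build: if it fails, the test runner exits non-zero and the build is marked failed. The function and test names are made up for illustration.

```python
# tests/test_pricing.py -- a tiny illustrative test; all names are invented.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Toy stand-in for real application code."""
    return round(price * (1 - percent / 100), 2)

class PricingTests(unittest.TestCase):
    def test_discount_is_applied(self):
        self.assertEqual(apply_discount(100.0, 15), 85.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(42.50, 0), 42.50)

if __name__ == "__main__":
    # A failing test produces a non-zero exit code, failing the whole build.
    unittest.main()
```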

A Practice of Frequent Code Commits

As mentioned earlier, a key goal of CI is to find and fix merge problems as early as possible. Therefore, developers must merge their changes to the mainline at least once a day. This way, merge issues don’t go undetected for more than a day at the most.

Every Commit Should Trigger a Build

Every code commit should trigger a build on an integration machine. The commit is a success only if the resulting build completes and all tests pass. The developer should monitor the build and fix any failures immediately. This practice ensures that the mainline is always in a healthy state.
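
Conceptually, the trigger is just a webhook from the repository to the CI server. A toy version, assuming the build.py script sketched earlier, might look like this; real CI servers such as Jenkins or GitLab handle this for you, plus payload parsing and authentication.

```python
# Toy webhook listener: every push notification kicks off a build.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class PushHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Drain the request body; a real handler would parse and verify it.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(202)  # acknowledge receipt
        self.end_headers()
        # Start the single-command build sketched earlier.
        subprocess.Popen(["python", "build.py"])

if __name__ == "__main__":
    HTTPServer(("", 8080), PushHandler).serve_forever()
```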

Fast Build Times

The build time is the time taken to complete the build and run all tests. What is an acceptable build time? Developers commit code to the mainline several times every day. The last thing they want to do after committing is to sit around twiddling their thumbs. Approximately 10 minutes is usually acceptable.
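
To keep that feedback loop honest, it helps to measure every build against the budget. A minimal sketch, again assuming the hypothetical build.py from above:

```python
# Sketch: time the build and flag it when it blows the ~10-minute budget.
import subprocess
import time

BUDGET_SECONDS = 10 * 60

start = time.monotonic()
result = subprocess.run(["python", "build.py"])  # the single-command build
elapsed = time.monotonic() - start

print(f"build finished in {elapsed:.0f}s with exit code {result.returncode}")
if elapsed > BUDGET_SECONDS:
    print("WARNING: over the 10-minute budget; consider parallelizing "
          "tests or caching dependencies.")
```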

Fix Build Breaks Immediately

A goal of CI is to have a release-quality mainline at all times. So, if a commit breaks the build, the goal is not being met. The developer must fix the issue immediately; an easy way to do this is to revert the commit. The team should treat correcting a broken build as a high-priority task, and team members should be careful to check in only tested code.

The Integration Test Environment Should Mirror the Production Environment

The goal of testing is to discover any potential issues that may appear in production before deployment. So, the test environment must be as similar to the production environment as possible. Every difference adds to the risk of defects escaping to production.
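
A lightweight way to enforce that similarity is to compare the two environments’ configurations automatically and flag any drift. A toy sketch, with settings and values invented for illustration:

```python
# Toy drift check: compare test and production settings, report mismatches.
test_env = {"java_version": "17", "db_engine": "postgres-14", "heap": "4g"}
prod_env = {"java_version": "17", "db_engine": "postgres-13", "heap": "8g"}

for key in sorted(test_env.keys() | prod_env.keys()):
    if test_env.get(key) != prod_env.get(key):
        print(f"drift in {key}: test={test_env.get(key)!r} "
              f"prod={prod_env.get(key)!r}")
```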


Continuous Delivery/Deployment

CD stands for both continuous delivery and continuous deployment. They differ only in the degree of automation.

Continuous delivery is the next step after continuous integration. The pipeline automatically builds the newly integrated code, tests the build, and keeps the deployment packages ready. Manual intervention is needed to deploy the build to a testing or production environment.

In continuous deployment, the entire process is automated. Every successful code commit results in deploying a new version of the application to production without human involvement.

CI streamlines the code integration process, while CD automates application delivery.
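
To make the distinction concrete, here is a toy pipeline with a single flag separating the two models. The build.py script is the one sketched earlier, while deploy.py and the stage names are hypothetical placeholders.

```python
# Toy pipeline contrasting continuous delivery and continuous deployment.
import subprocess

def run_stage(name: str, cmd: list) -> None:
    print(f"--- {name} ---")
    subprocess.run(cmd, check=True)  # abort the pipeline on any failure

def pipeline(continuous_deployment: bool) -> None:
    run_stage("build and test", ["python", "build.py"])
    if not continuous_deployment:
        # Continuous delivery: the artifact is ready, but a human approves.
        if input("deploy to production? [y/N] ").strip().lower() != "y":
            print("release held; the package stays ready to deploy")
            return
    # Continuous deployment: every good build goes straight to production.
    run_stage("deploy", ["python", "deploy.py", "--env", "production"])

if __name__ == "__main__":
    pipeline(continuous_deployment=False)
```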

Popular CI/CD Tools

There are many CI/CD tools available. Here are the leading ones.

Jenkins

Jenkins is arguably the most popular CI/CD tool today. It is open-source, free, and supports almost all languages and operating systems. Moreover, it comes with hundreds of plugins that make it easy to automate any building, testing, or deployment task.

AWS CodeBuild

CodeBuild is a CI/CD tool that compiles code, runs tests, and produces ready-to-deploy software packages. It takes care of provisioning and managing your build servers, scales automatically, and runs concurrent builds. It comes with an IDE (Integrated Development Environment).

GitLab

GitLab is another powerful CI/CD tool. An interesting feature is its ability to show performance metrics of all deployed applications. A pipeline graph feature shows the status of every task. GitLab makes it easy to manage Git repositories. It also comes with an IDE.

GoCD

GoCD from ThoughtWorks is a mature CI/CD tool. It is free and open-source. GoCD visually shows the complete path from check-in to deployment, making it easy to analyze and optimize the process. This tool has an active user community.

CircleCI

CircleCI is one of the world’s largest CI/CD platforms. The simple UI makes it easy to set up projects. It integrates smoothly with GitHub and Bitbucket. You can conveniently identify failing tests from the UI. It has a free tier of service that you can try out before committing to the paid version.

You should select the CI/CD tool that helps you optimize your software development process.


The Benefits of CI and CD

The complete automation of releases — from compiling to testing to the final deployment — is a significant benefit of the CI/CD pipeline. Other benefits of the CI/CD process include:

  • Reduction of deployment time: Automated testing makes the development process very efficient and reduces the length of the software delivery process. It also improves quality.
  • Increase in agility: Continuous deployment allows a developer’s changes to the application to go live within minutes of making them.
  • Saving time and money: Automation speeds up development, testing, and deployment. The time saved translates into cost savings and leaves more time for innovation. Code reviewers also save time because they can focus on the code itself rather than manually verifying functionality.
  • Continuous feedback loop: The CI/CD pipeline is a continuous cycle of building, testing, and deployment. Every time the tests run and find issues, developers can quickly take corrective action, resulting in continuous improvement of the product.
  • Address issues earlier in the cycle: Developers commit code frequently, so merge conflicts surface early. Every check-in generates a build. The automated test suite runs on each build, so the team catches integration issues quickly.
  • Testing in a production-like environment: You mitigate risks by setting up a production environment clone for testing.
  • Improving team responsiveness: Everyone on the team can change code, respond to feedback, and respond promptly to any issues.

These are some notable benefits of CI and CD.

CI and CD: Differences

There are fundamental differences between continuous integration and continuous deployment.

For one, CI happens more frequently than CD.

CI is the process of automating the build and testing code changes. CD is the process of automating the release of code changes.

CI is the practice of merging all developer code to the mainline several times a day. CD is the practice of automatically building the changed code and testing and deploying it to production.

Continuous Modernization after Refactoring

We started this article by stating that application modernization is often the only way software teams can pay off their technical debt. We also mentioned continuous modernization. Companies are increasingly leaning toward continuous modernization. They constantly monitor technical debt, make sure they have no dead code, and ensure good test coverage. Their goal is to prevent the modernized code from regressing.  

How to Build Continuous Modernization Into Your CI/CD Pipeline

We have seen the many benefits that CI/CD provides. As more and more companies realize the benefits of continuous integration and deployment, expectations keep rising: companies now expect every successful dev commit to be available in production within minutes. For large teams, this can mean hundreds or even thousands of deployments every day. Let’s look at how to continuously modernize CI/CD pipelines so that they don’t become a bottleneck.

  • Keep scaling the CI/CD platforms: You must continuously scale the infrastructure needed to provide fast builds and tests for all team members.
  • Support for new technologies: As the team starts using new languages, databases, and other tools, the CI/CD platform must keep up.
  • Reliable tests: You should have confidence in the automated tests. All tests must be consistent. You must optimize the number of tests to control test execution time.
  • Rapid pipeline modification: The team should be able to reconfigure pipelines rapidly to keep up with changing requirements.

Next Steps Toward Continuous Modernization

vFunction, which has developed an AI- and data science-powered platform to transform legacy applications into microservices, helps companies on their path toward continuous modernization. The platform includes two related tools:

  • vFunction Assessment Hub, an assessment tool for decision-makers that analyzes the technical debt of a company’s monolithic applications, accurately identifies the sources of that debt, and measures its negative impact on innovation.
  • vFunction Modernization Hub, an AI-driven modernization solution that automatically transforms complex monolithic applications into microservices, restoring engineering velocity, increasing application scalability, and unlocking the value of the cloud.

These tools help organizations manage their modernization journey.

vFunction Assessment Hub measures app complexity based on code modularity and dependency entanglements, measures the risk of changes impacting stability based on the depth and length of dependency chains, and aggregates these measures to assess the overall technical debt level. It then benchmarks debt, risk, and complexity against the rest of the organization’s application estate, while identifying aging frameworks that could pose future security and licensing risks. vFunction Assessment Hub integrates seamlessly with vFunction Modernization Hub, which takes the next step of refactoring, re-architecting, and rewriting applications.

vFunction Modernization Hub utilizes both deep domain-driven observability, via a passive JVM agent, and sophisticated static analysis. It analyzes architectural flows, classes, usage, memory, and resources to detect and unearth critical business domain functions buried within a monolith.

Whether your application is still on-premises or you have already lifted and shifted to the cloud, the world’s most innovative organizations are applying vFunction to their complex “megaliths” – large monoliths that often total over 10 million lines of code and consist of thousands of classes – to untangle the complex, hidden, dense dependencies of business-critical applications. Conveniently, all of this happens behind a single screen: you don’t need several tools to perform the analysis or manage the migration. Contact vFunction to request a demo and learn more.

Bob Quillin

Bob Quillin is the Chief Ecosystem Officer for vFunction, responsible for developer advocacy, marketing, and cloud ecosystem engagement. Bob was previously Vice President of Developer Relations for Oracle Cloud Infrastructure (OCI). Bob joined Oracle as part of the StackEngine acquisition by Oracle in December 2015, where he was co-founder and CEO. StackEngine was an early cloud native pioneer with a platform for developers and devops teams to build, orchestrate, and scale enterprise-grade container apps. Bob is a serial entrepreneur and was previously CEO of Austin-based cloud monitoring SaaS startup CopperEgg (acquired by IDERA in 2013) and held executive and startup leadership roles at Hyper9, nLayers, EMC, and VMware.

Get started with vFunction

See how vFunction can accelerate engineering velocity and increase application resiliency and scalability at your organization.