
Risk-Adjusted ROI Model for Modernization Projects

Over the past few years, we at vFunction have been focusing on the most significant problem inhibiting the third wave of cloud adoption: app modernization.

The first wave of cloud adoption consisted of new apps written for the cloud. The second wave focused on lifting and shifting the low-hanging fruit: apps that could be migrated to the cloud relatively easily, without code or architectural changes. The third wave, which we are experiencing today, involves modernizing massive legacy IT estates to take advantage of modern cloud services.

When we say modernization, we refer to refactoring or rewriting applications to transform them from a monolithic architecture to microservices, allowing organizations to eliminate technical debt, increase engineering velocity, onboard new developers faster, and increase the scalability of applications.

In recent research we conducted, we found what many executives know firsthand: over 70% (!!) of application modernization projects fail, they last at least 16 months (with 30% taking longer than 24 months), and they cost more than $1.5m on average.

No wonder executives are reluctant to put their careers on the line and embark on these projects. The problem is that they are stuck between a rock and a hard place. If they don’t modernize, they may lose their jobs because they can’t address the business needs, their development isn’t agile, and they are not supporting their business’s vital need to be competitive. If they do embark on these modernization projects and fail, they may lose their jobs as well.

We believe that modernization that is assisted by AI and automation dramatically disrupts the above convention, and saves executives from the difficult dilemma of “modernize or die.” We’ve created a model that we believe supports this claim. 

The research reveals that executives struggle with both the length and the cost of projects. We find that using AI and automation to power modernization reduces cost by 50%-66% and accelerates time to market by 10x-15x; we see this with our customers, and our case studies bear it out. However, the research also reveals that risk is a very real obstacle for modernization projects, and a cost-and-speed ROI model alone doesn’t capture the significant risk reduction that comes with AI-assisted modernization.

One could argue that even without incorporating the risk factor, the savings and acceleration of AI and automation justify the project, and I would agree with that; but once the risk factor is incorporated, it becomes a no-brainer.

Let’s use some numbers to substantiate this claim (see the chart below for the calculation). 

Let’s say a medium-sized modernization project, for an app of 7,000 classes (medium complexity; the number of classes is a very good proxy for application complexity and potential technical debt), would cost about $1.8m, based on 6 FTEs for 2 years, which falls within the average modernization cost and length found in the Wakefield research.

The same project, using modern AI and automation tools, takes only 1 year and requires only 2 FTEs (a third of the resources and half the time), based on our experience at vFunction.

When comparing the total cost of the two projects, ignoring the risk factor, the AI- and automation-powered project is less than half the price. That seems compelling; however, if we incorporate the risk factor to calculate the risk-adjusted cost, we get very different numbers.

The $1.8m manual project has only a 30% success rate (conservatively, based on the research), which means we need to divide the $1.8m by 0.3 to get the true risk-adjusted cost; that yields a $6m project cost.

The intuitive meaning of this higher cost is that the project will most likely not finish in 2 years with 12 person-years of effort, but rather take double that time and considerably more resources, bringing the true cost to $6m.

When calculating the cost of the AI- and automation-powered project, we should assume a 90% success rate; the $765,000 cost divided by 0.9 yields a true project cost of $850,000.

Now, comparing $6m to $850K…yields a massive ROI of 700%.
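To make the arithmetic explicit, here is a minimal sketch of the calculation in Python. The raw costs and success rates are the figures from this post; the helper function is simply the divide-by-probability-of-success rule, and everything else is illustrative.

```python
# Minimal sketch of the risk-adjusted cost model described above.
# Rule: expected (risk-adjusted) cost = raw cost / probability of success.

def risk_adjusted_cost(raw_cost: float, success_rate: float) -> float:
    """Expected true cost once the probability of failure is priced in."""
    return raw_cost / success_rate

manual = risk_adjusted_cost(raw_cost=1_800_000, success_rate=0.30)    # ~$6.0m
assisted = risk_adjusted_cost(raw_cost=765_000, success_rate=0.90)    # ~$850K

print(f"manual project (risk-adjusted):      ${manual:,.0f}")
print(f"AI-assisted project (risk-adjusted): ${assisted:,.0f}")
print(f"cost ratio: {manual / assisted:.1f}x")  # roughly 7x
```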

[Chart: Risk-adjusted ROI model for modernization projects]

Modernization is indeed risky, lengthy, and costly, but incorporating AI and automation radically changes the economics and risk of these projects and can assist CIOs and CTOs in embarking on modernization projects that are controlled, measured, and have significantly higher chances of success.

Using Machine Learning to Measure and Manage Technical Debt

This post was originally featured on TheNewStack, sponsored by vFunction.

If you’re a software developer, then “technical debt” is probably a term you’re familiar with. Technical debt, in plain words, is an accumulation over time of lots of little compromises that hamper your coding efforts. Sometimes, you (or your manager) choose to handle these challenges “next time” because of the urgency of the current release.

This is a cycle that continues for many organizations until a true breaking point or crisis occurs. If software teams decide to confront technical debt head on, these brave software engineers may discover that the situation has become so complex that they do not know where to start.

The difficult part is that decisions about technical debt have to balance its short-term and long-term implications, which underscores the need to properly assess and address it when planning development cycles.

The real-world implications of this are seen in a recent survey of 250 senior IT professionals, in which 97% predicted organizational pushback to app modernization projects, with the primary concern of both executives and architects being “risk.” For architects, we can think of this as “technical risk” — the threat that making changes to part of an application will have unpredictable and unwelcome downstream effects elsewhere.

The Science Behind Measuring Technical Debt

In their seminal article from 2012, “In Search of a Metric for Managing Architectural Technical Debt”, authors Robert L. Nord, Ipek Ozkaya, Philippe Kruchten and Marco Gonzalez-Rojas offer a metric to measure technical debt based on dependencies between architectural elements. They use this method to show how an organization should plan development cycles while taking into account the effect that accumulating technical debt will have on the overall resources required for each subsequent version released.

Though this article was published nearly 10 years ago, its relevance today is hard to overstate. Earlier this March, it received the “Most Influential Paper” award at the 19th IEEE International Conference on Software Architecture.

In this post, we will demonstrate that not only is technical debt key to making decisions regarding any specific application, it is also important when attempting to prioritize work between multiple applications — specifically, modernization work.

Moreover, we will show a method that can be used to not only compare the performance of different design paths for a single application, but also compare the technical debt levels of multiple applications at an arbitrary point in their development life cycle.

Accurately Measuring Systemwide Technical Debt

In the IEEE article mentioned above, calculating technical debt is done using a formula that mainly relies on the dependencies between architectural elements in the given application. It is worth noting that the article does not define what constitutes an architectural element or how to identify these elements when approaching an application.

We took a similar approach and devised a method to measure the technical debt of an application based on the dependency graph between its classes. The dependency graph is a directed graph G = ⟨V, E⟩, in which V = {c1, c2, …} is the set of all classes in the application, and an edge e = ⟨c1, c2⟩ ∈ E exists between two vertices if class c1 depends on class c2 in the original code. We perform multifaceted analysis on the graph to eventually arrive at a score that describes the technical debt of the application. Here are some of the metrics we extract from the raw graph:

  1. Average/median outdegree of the vertices on the graph.
  2. Top N outdegree of any node in the graph.
  3. Longest paths between classes.

Using standard clustering algorithms on the graph, we can identify communities of classes within the graph and measure additional metrics on them, such as:

  1. Average outdegree of the identified communities.
  2. Longest paths between communities.

The hypothesis here is that by using these generic metrics on the dependency graphs, we can identify architectural issues that represent real technical debt in the original code base. Moreover, by analyzing dependencies on these two levels — class and community — we give a broad interpretation of what an architectural element is in practice without attempting to formally define it.
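As a rough illustration of what extracting these metrics can look like in practice, here is a simplified sketch using networkx. This is not vFunction’s implementation, and the class names and edges are made up for the example.

```python
import networkx as nx
from statistics import mean, median

# Class-dependency graph: an edge c1 -> c2 means class c1 depends on class c2.
G = nx.DiGraph()
G.add_edges_from([
    ("OrderService", "OrderRepo"),
    ("OrderService", "PaymentClient"),
    ("PaymentClient", "HttpUtil"),
    ("OrderRepo", "HttpUtil"),
    ("ReportJob", "OrderRepo"),
])

out_degrees = [d for _, d in G.out_degree()]
print("average outdegree:", mean(out_degrees))
print("median outdegree:", median(out_degrees))
print("top-3 outdegrees:",
      sorted(G.out_degree(), key=lambda x: x[1], reverse=True)[:3])

# Longest dependency path: condense cycles into strongly connected components
# first so the longest-path computation runs on a DAG.
longest = nx.dag_longest_path(nx.condensation(G))
print("longest path length:", len(longest) - 1)

# Communities of classes via a standard clustering algorithm.
communities = nx.algorithms.community.greedy_modularity_communities(G.to_undirected())
print("communities:", [sorted(c) for c in communities])
```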

To test this method, we created a data set of over 50 applications from a variety of domains — financial services, eCommerce, automotive and others — and extracted the aforementioned metrics from them. We used this data set in two ways.

First, we correlated specific high-ranking outdegrees and long paths with local issues in the code, for example, identifying god classes by their high outdegree. This proved effective and increased our confidence that the approach is valid for identifying local technical debt issues.

Second, we attempted to provide a high-level score that can be used not only to identify technical debt in a single application, but also to compare technical debt between applications and to use it to help prioritize which should be addressed and how. To do that, we introduced three indexes:

  1. Complexity — represents the effort required to add new features to the software.
  2. Risk — represents the potential risk that adding new features has on the stability of existing ones.
  3. Overall Debt — represents the overall amount of extra work required when attempting to add new features.

From Graph Theory to Actionable Insights

We manually analyzed the applications in our data set, employing the expert knowledge of the individual architects and developers in charge of product development, and scored each application’s complexity, risk and overall debt on a scale of 1 to 5, where a score of 1 represents little effort required and 5 represents high effort. We used these benchmarks to train a machine learning model that correlates the values of the extracted metrics with the indexes and normalizes them to a score of 0 to 100.
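A hedged sketch of what such a model could look like follows; the feature set, the choice of regressor, and the normalization are assumptions for illustration, not a description of our production pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# One row per benchmark application:
# [avg_outdegree, top_outdegree, longest_path, community_avg_outdegree]
X_train = np.array([
    [2.1, 15, 7, 3.0],
    [4.8, 90, 22, 6.5],
    [1.4, 6, 4, 1.9],
    # ... metrics for the rest of the ~50 benchmark applications
])
y_complexity = np.array([2, 5, 1])  # expert-assigned scores on the 1-5 scale

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_complexity)

def complexity_index(metrics: list[float]) -> float:
    """Predict the 1-5 complexity score and normalize it to 0-100."""
    raw = float(model.predict(np.array(metrics).reshape(1, -1))[0])
    return (raw - 1.0) / 4.0 * 100.0

print(complexity_index([3.2, 40, 12, 4.1]))
```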

This allows us to use the ML model to issue a score per index for any new application we encounter, enabling us to analyze entire portfolios of applications and compare them to one another and to our precalculated benchmarks. The following graph depicts a sample of 21 applications, demonstrating the relationship between the aforementioned metrics:

[Chart: Relationship between the aforementioned metrics across a sample of 21 applications]

The overall debt levels were then converted into currency units, depicting the level of investment required to add new functionality into the system. For example, for each $1 invested in application development and innovation, how much goes specifically to maintaining architectural technical debt? This is intended to help organizations build a business case for handling and removing architectural technical debt from their applications.

We have shown a method to measure the technical debt of applications based on the dependencies between their classes. We have successfully used this method both to identify local issues that cause technical debt and to provide a global score that can be compared across applications. By employing this method, organizations can successfully assess the technical debt in their software, which can lead to improved decision-making around it.

Cloud Modernization Approaches: Choosing Between Rehost, Replatform, or Refactor

In an era when continual digital transformation is forcing marketplaces to evolve with lightning speed, companies can’t afford to be held back by functionally limited and inflexible legacy systems that don’t adapt well to today’s requirements. Software applications that are hard to maintain and support, and that cannot easily incorporate new features or integrate with other systems are a drag on any company’s marketplace agility and ability to innovate.

Yet, many legacy applications are still performing necessary and business-critical functions. Because they remain indispensable to the organization’s daily operations, they cannot simply be abandoned. As a result, companies face a very real imperative to modernize aging applications to meet the rapidly shifting requirements of the marketplace. And for a growing number of them, that means modernizing those applications for the cloud.

Why Companies are Modernizing for the Cloud

Today the cloud is where the action is—where the leading edge of technological innovation is taking place, and where there is an established ecosystem that software can tap into to make use of infrastructure hosting, scaling, and security capabilities that don’t have to be programmed into the application itself.

It’s that ability to leverage a wide-ranging and technically sophisticated ecosystem that makes the cloud the perfect avenue for modernizing a company’s legacy applications.

Gartner estimates that by 2025, 90% of current monolithic applications will still be in use, and that compounded technical debt will consume more than 40% of the current IT budget.

Because software that cannot interoperate in that environment will lose much of its utility, modernizing legacy applications is an urgent imperative for most companies today.

When legacy applications are moved to the cloud and modernized so that they become cloud-enabled, they gain improvements in scalability, flexibility, security, reliability, and availability. What’s more, they also gain the ability to tap into a multitude of already existing cloud-based services, so that developers don’t have to continually reinvent the wheel.

Related: Why Cloud Migration Is Important

Once a company decides that modernization of its legacy applications is a high priority, the next question is how to go about it.

Approaches to Cloud Modernization of Legacy Applications

Gartner has identified seven options that may be useful for modernizing legacy systems in the cloud: encapsulate, rehost, replatform, refactor, re-architect, rebuild, and replace. Experience has shown that for companies beginning their modernization journey, the most viable options are rehosting, replatforming, and refactoring. Let’s take a brief look at each of these.

1. Rehosting (“Lift and Shift”)

Rehosting is the most commonly used approach for bringing legacy applications to the cloud. It involves transferring the application as-is, without changing the code at all, from its original environment to a cloud hosting platform.

For example, one of the most frequently performed rehosting tasks is moving an application from a physical server in an on-premises data center to a virtual server hosted on a cloud service such as AWS or Azure.

Rehosting is the simplest, easiest, quickest, and least risky cloud migration method because there’s no new code to be written and tested. And the demands for technical expertise in the migration team are minimal.

The downside to rehosting is the flipside of its advantages—because no changes are made to the code or functionality of the application, even though it now runs in the cloud it is no more able to take advantage of cloud-native capabilities than it was in its original environment.

On the other hand, simply by being hosted in the cloud the application gains some significant advantages:

Advantages of Rehosting

  • Enhanced security—cloud service providers (CSPs) provide superior data security because their business model depends on it.
  • Greater reliability—CSPs typically commit to very high availability (often 99.9% or better) in their Service Level Agreements.
  • Global access—since the user interface (UI) of a web-hosted application is normally delivered through a browser (although the look and operation of the UI may be unchanged), users are no longer tied to dedicated terminals, but, with proper authorization, can access the system through any internet-enabled device anywhere in the world.
  • Minimum risk—because there are no changes to the codebase, there’s little chance of new bugs being introduced into the application during migration.
  • It’s a good starting point—having the application already hosted in the cloud is a good first step toward further modernization efforts.

Disadvantages of Rehosting

  • No improvements in functionality—the code runs exactly as it always has. There are no upgrades in functionality or in the ability to integrate with other cloud-based systems or take advantage of the unique capabilities available to cloud-enabled applications. For example, although cloud-native applications are inherently highly scalable, a legacy application rehosted to the cloud may lack the ability to scale by auto-provisioning additional resources as needed.
  • Potential latency and performance issues—when moving an application unchanged from an on-premises data center to the cloud, latency and performance issues may arise due to inherent cloud network communication delays.
  • Potentially higher costs—while running applications in the cloud that are not optimized for that environment may decrease CapEx (you don’t have to purchase or maintain hardware), it may actually increase monthly OpEx spending because of excessive cloud resource usage.

When to Use Rehosting

Rehosting may be the best choice for companies that:

  • are just beginning to migrate applications to the cloud, or
  • need to move the application to the cloud as quickly as possible, or
  • have a high level of concern that migration hiccups might disrupt the workflows served by the application

Because of its simplicity, rehosting is most commonly adopted by companies that are just beginning to move applications to the cloud.

2. Replatforming

As with rehosting, replatforming moves legacy applications to the cloud basically intact. But unlike rehosting, minimal changes are made to the codebase to enable the application to take advantage of some of the advanced capabilities available to cloud-enabled software, such as the adoption of containers, DevOps best practices, and automation, as well as improvements in functionality or in the ability to integrate with other cloud resources.

For example, changes might be instituted during replatforming to enable the application to access a modern cloud-based database management system or to increase application scalability through autoscaling.

Advantages of Replatforming

Because it’s basically “rehosting-plus,” replatforming shares the advantages associated with rehosting. Its greatest additional advantage is that it enables the application to be modestly more cloud-compatible, though still falling far short of cloud-native capabilities. But even relatively small improvements, such as the ability to automatically scale as needed, can have a significant impact on the performance and usability of the application.

Replatforming allows you to upgrade an application’s functionality or integration with other systems through a series of small, incremental changes that minimize risk.

Disadvantages of Replatforming

Changes to the codebase bring with them a risk of introducing new code that might disrupt operations. Avoiding such mistakes requires a higher level of expertise in the modernization team, with regard to both the original application and the cloud environment onto which it is being replatformed. It’s easy to get into trouble when inexperienced migration teams attempt to replace functions in the original codebase with supposedly equivalent cloud functions they don’t really understand.

When to Use Replatforming

Replatforming is a good option for organizations that want to work toward increasing the cloud compatibility of their legacy applications on an incremental basis, and without the risks associated with more comprehensive changes.

3. Refactoring

According to Agile Alliance’s definition:

“Refactoring consists of improving the internal structure of an existing program’s source code, while preserving its external behavior.”

Whereas rehosting and replatforming shift an application to the cloud without changing its fundamental nature, refactoring goes much further. Its purpose is to transform the codebase to take full advantage of the cloud’s capabilities while maintaining the original external functionality and user interface.

Most legacy applications have serious defects caused by their monolithic architecture (a monolithic codebase is organized basically as a single unit). Because various functions and dependencies are interwoven throughout the code, it can be extremely difficult to upgrade or alter specific behaviors without triggering unintended and often unnoticed changes elsewhere in the application.

Refactoring eliminates that problem by helping software transition to a cloud-native microservices architecture. This produces a modern, fully cloud-native codebase that can now be adapted, upgraded, and integrated with other cloud resources far more easily than before the refactoring.
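As a toy-scale illustration of that principle (the code and names below are invented for this post, not drawn from a real modernization project), note how the external behavior of a checkout function stays identical while the billing logic is pulled out behind its own boundary, making it a candidate for extraction into an independent service:

```python
# Before: billing rules are interwoven with order handling.
def checkout_before(order):
    total = sum(item["price"] * item["qty"] for item in order["items"])
    if order["customer"]["tier"] == "gold":  # billing rule buried in the flow
        total *= 0.9
    return {"order_id": order["id"], "charge": round(total, 2)}

# After: same external behavior, billing behind a clear interface.
class BillingService:
    """Candidate microservice: owns pricing and discount rules only."""
    def charge_for(self, items, customer_tier):
        total = sum(item["price"] * item["qty"] for item in items)
        if customer_tier == "gold":
            total *= 0.9
        return round(total, 2)

def checkout_after(order, billing=None):
    billing = billing or BillingService()
    charge = billing.charge_for(order["items"], order["customer"]["tier"])
    return {"order_id": order["id"], "charge": charge}

order = {"id": 7, "customer": {"tier": "gold"},
         "items": [{"price": 100.0, "qty": 2}]}
assert checkout_before(order) == checkout_after(order)  # behavior preserved
```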

Advantages of Refactoring

  • Enhanced developer productivity—productivity rises when developers work in a cloud-native environment, with code that can be clearly understood, and with the ability to integrate their software with other cloud resources, thereby leveraging existing functions rather than coding them into their own applications.
  • Eliminated technical debt—by correcting all the quick fixes, coding shortcuts, compromises, and just plain bad programming that typically seep into legacy applications over the years, refactoring can eliminate technical debt.
  • Better maintenance—whereas a monolithic codebase can be extremely difficult to parse and understand, refactored code is far more understandable. That makes a huge difference in the application’s maintainability.
  • Simpler integrations—because a microservice architecture is fully cloud-enabled, refactored applications can easily integrate with other cloud-based resources.
  • Greater adaptability—in a microservice-based codebase each function can be addressed independently, allowing modifications to be made cleanly and iteratively, without fear that one change might ripple through the entire system.
  • High scalability—because the codebase has been reshaped into a cloud-native architecture, autoscaling can be easily implemented.
  • Improved performance—the refactoring process optimizes the code for the functions it performs. This usually results in fewer bottlenecks and greater throughput.

Disadvantages of Refactoring

The main disadvantage of the refactoring approach is that it is far more complex, time-consuming, resource-intensive, and risky than rehosting or replatforming. That’s because the code is extensively modified. Refactoring must be done extremely carefully, by experts who know what they are doing, to avoid introducing difficult-to-find bugs or behavioral anomalies into the code. And that increases costs in both time and money.

On the other hand, the automated, AI-driven refactoring tools available today can take much of the complexity, time, cost, and risk out of the refactoring process.

When to Use Refactoring

Companies that need maximum flexibility and agility to keep pace with the demands of customers and the challenges of competitors will typically find that refactoring is their best choice. Though the up-front costs of refactoring are the greatest of the options we’ve considered, the ability of microservices-based applications to use only the cloud resources needed at a particular time will keep long-term operating expenses much lower than can be achieved with the other options.

Choosing Your Modernization Approach

How can you determine the approach you should use for modernizing your legacy applications? Here are some steps you should take:

1. Understand Your Business Strategy and Goals

Why are you considering modernizing your legacy applications? What business interests will be served by doing so? The only way to determine which applications should be modernized and how is to examine how each serves the goals your business is trying to achieve.

2. Assess Your Applications

In light of your business goals, determine which applications are in greatest need of modernization, and what the end-product of that upgrade needs to be.

3. Decide Whether to Truly Modernize or Just Migrate

Rehosting and replatforming are not really about modernizing applications. Rather, their focus is on simply getting them moved to the cloud. That can be the first step in a modernization effort, but just migrating an application to the cloud pretty much as-is does little to enable it to become a full participant in the modern cloud ecosystem.

In general, migration is a short-term, tactical approach, while modernization is a more long-term solution.

4. Repeat Steps 1-3 Again, and Again, and…

Application modernization is not a one-and-done deal. As technology continues to evolve at a rapid pace, you’ll need to periodically revisit these assessments of how well your business-critical applications are contributing to current business objectives, and what improvements might be needed. Otherwise, the software you so carefully modernize today might become, after a few years, your new legacy applications.

Related: Preventing Monoliths: Why Cloud Modernization is a Continuum

Making the Choice

As we’ve seen, rehosting or re-platforming are the quickest, easiest, and least costly ways to bring monolithic application services at least partially into the cloud. But those applications are still hamstrung as far as taking advantage of the cloud’s extensive capabilities is concerned.

Refactoring, on the other hand, is more expensive and time-consuming at the beginning, but positions applications to function as true cloud-native resources that can be much more easily adapted as requirements change. 

If you’ve got an executive mandate to move beyond just a “quick fix” approach to your legacy applications, you should strongly consider refactoring. And remember that by employing today’s sophisticated, AI-driven application modernization tools, the time and cost gaps between refactoring on the one hand, and rehosting or re-platforming on the other, can be significantly narrowed.

A good example of such a tool is the vFunction Platform. It’s a state-of-the-art application modernization platform that can rapidly assess monolithic legacy applications and transform them into microservices. It also provides decision-makers with data-driven assessments of legacy code that allow them to determine how to proceed with their modernization efforts. To see how vFunction can help your company get started on its journey toward legacy application modernization, schedule a demo today.

Modernizing Legacy Code: Refactor, Rearchitect, or Rewrite?

If your company is like most, you have legacy monolithic applications that are indispensable for everyday operations. Valuable as they are, due to their traditional architecture those applications are almost certainly hindering your company’s ability to display the agility, flexibility, and responsiveness necessary to keep up with the rapidly shifting demands of today’s marketplace. That’s why refactoring legacy code should be high on your priority list.

Almost by definition, legacy apps lack the functionality and adaptability required for them to seamlessly integrate with the modern, cloud-based ecosystem that defines today’s technological landscape. In an era when marketplace requirements are constantly evolving, continued dependence on apps with such limitations is a recipe for eventual disaster. That’s why the pressure to modernize is growing by the day.

Why Refactoring Legacy Code is Critical

Most enterprises today realize that they must do something to modernize the legacy apps on which they still depend. In fact, in CIO Magazine’s 2022 State of the CIO survey, 40% of CIOs say modernizing infrastructure and applications is their focus. But what will it take to make legacy app modernization a reality?

The basic issue that makes most legacy applications so ill-suited to fully participate in today’s cloud ecosystem is that they have a monolithic architecture. That means that the code is organized essentially as a single unit, with various functions and dependencies interwoven throughout.

Such code is brittle, inflexible, and hard to understand; modifying its functionality to meet new requirements is typically an extremely difficult and risky process.

As long as an application retains its monolithic structure, there’s little hope of any significant modernization. So, the first step in most efforts to modernize legacy applications is to transform them from a monolithic structure to a cloud-native, microservices architecture. And the first step in accomplishing that transformation is refactoring.

Related: Migrating Monolithic Applications to Microservices Architecture

The refactoring process restructures and optimizes an application’s code to meet modern coding standards and allow full integration with other cloud-based applications and systems.

But why “cloud-based”?

The Importance of the Cloud

The cloud has become the focal point of intense and continuous technological innovation—most software advancements are birthed and deployed in the cloud. That’s why Gartner projects that by 2025, 95% of new digital workloads will be cloud-native. What’s more, according to Forbes, 77% of enterprises, and 73% of all organizations, already have at least some of their IT infrastructure in the cloud.

The cloud is critical to modernization because it provides a well-established software ecosystem that allows newly cloud-enabled legacy apps to tap into a wide range of existing functional capabilities that don’t have to be programmed into the app itself.

That’s why today’s norm for modernizing legacy apps is to start by moving them to the cloud. Once relocated to the cloud and adapted to interoperate in that environment, such applications gain some substantial advantages, including improvements in performance, scalability, security, agility, flexibility, and operating costs.

But the degree to which such benefits are realized depends on how that cloud transfer is accomplished—will the app be optimized for the cloud environment, or just shifted basically intact from its original environment?

Migration vs Modernization

Many companies begin their modernization journey by simply migrating legacy software to the cloud. An app is transferred, pretty much as-is, without altering its basic internal structure. Some minor changes may be made to meet specific needs, but for the most part, the app functions exactly as it did in its original environment.

Because the app retains its original structure and functionality, it also retains the defects that undermine its usefulness in the modern technological context. For example, if the codebase was monolithic before migration, it remains monolithic once it reaches the cloud. Such apps bring with them all the limitations that plague the monolithic architectural pattern, including an inability to integrate with other cloud-based systems.

Migration represents an essentially short-term, tactical approach that aims at alleviating immediate pain points without making fundamental changes to the codebase.

Modernization, on the other hand, is a more long-term, strategic approach to updating legacy apps. The application isn’t simply shifted to the cloud. Rather, as part of the migration process much of the original code is significantly altered to meet cloud-native technical standards.

That enables the app to fully interoperate with other applications and systems within the cloud ecosystem, and thereby reap all the benefits that cloud-native apps inherit.

Options for Modernizing Legacy Applications

Gartner identifies seven options for upgrading legacy systems. These may be grouped into two broad categories:

  • Migration options that simply transfer the software to the cloud essentially as-is
  • Modernization options that not only migrate the application to the cloud but which, as an essential part of the migration process, adapt it to function in that environment as cloud-native software 

Let’s examine Gartner’s list of options in light of that distinction:

Migration methods

  • Encapsulate: Connect the app to cloud-based resources by providing API access to its existing data and functions. Its internal structure and operations remain unchanged.
  • Rehost (“Lift and Shift”): Migrate the application to the cloud as-is, without significantly modifying its code.
  • Replatform: Migrate the application’s code to the cloud, incorporating small changes designed to enhance its functionality in the cloud environment, but without modifying its existing architecture or functionality.

Modernization methods

  • Refactor: Restructure and optimize the app’s code to meet modern standards without changing its external behavior.
  • Rearchitect: Create a new application architecture that enables improved performance and new capabilities.
  • Rewrite: Rewrite the application from scratch, retaining its original scope and specifications.
  • Replace: Completely eliminate the original application, and replace it with a new one. This option requires such an extreme investment, in terms of time, cost, and risk, that it is normally used only as a last resort.

Since our concern in this article is with truly modernizing legacy apps rather than just migrating them to the cloud or entirely replacing them, we’ll limit our consideration to the modernization options: refactoring, rearchitecting, and rewriting.

Related: Legacy Application Modernization Approaches: What Architects Need to Know

Refactoring vs Rearchitecting vs Rewriting

Let’s take a closer look at each of these modernization options.

Refactoring

As we’ve seen, refactoring legacy code is fundamental to the modernization process. According to the Agile Alliance, one of the major benefits of refactoring is that it

“improves objective attributes of code (length, duplication, coupling and cohesion, cyclomatic complexity) that correlate with ease of maintenance.”

As a result of those improvements, refactored legacy code is simpler and cleaner; it’s also easier to understand, update with new features, and integrate with other cloud-based resources. Plus, the app’s performance will typically improve.

Because no functional changes are made to the app, the risk of new bugs being introduced during refactoring is low.

One significant advantage of refactoring is that it can (and should) be an incremental, iterative process that proceeds in small steps. Developers operate on small segments of the code and work to ensure that each is fully tested and functioning correctly before it is incorporated into the codebase.

As a result, when refactoring is done correctly, the operation of the overall system is never disrupted. This also eliminates the necessity of maintaining two separate codebases for the original and the updated code.

The fundamental purpose of refactoring legacy code is to convert it to a cloud-native structure that allows developers to easily adapt the application to meet changing requirements. A valuable byproduct of the process is the elimination of technical debt through the removal of the coding compromises, shortcuts, and ad hoc patches that often characterize legacy code.

Rearchitecting

Rearchitecting is used to restructure the application’s codebase to enable improvements in areas such as performance and scalability. It’s often employed when business requirements change and the application needs to add functionality that its current structure doesn’t support. Rearchitecting allows such changes to be incorporated without developers having to rewrite the app from scratch.

Because it goes beyond refactoring by making fundamental changes to the structure and operation of the code, rearchitecting is more complex and time-consuming, and it carries a higher risk of introducing bugs or business process errors into the code.

One of the major risk factors associated with rearchitecting (and with rewriting as well) is that with most legacy applications, documentation of not just the original requirements but also of how the code has been modified along the way (and for what reasons) is inadequate or missing entirely.

For that reason, any rearchitecting or rewriting effort must be preceded by a thorough assessment of the original code so that developers gain a deep level of understanding before making changes. Otherwise, there is a high risk that even if the new code is technically bug-free, important business processes may be omitted or inadvertently changed because developers overlooked their implementations in the original code.

Rewriting

Full rewrites most often occur with legacy applications that are specialized and proprietary. Usually, the intent is not to modify the functionality or user interface in major ways, but to move to a modern (usually microservices) architecture without having to deconstruct the existing code to understand how it works.

Rewriting allows developers to start with a clean slate and implement the application requirements using modern technologies and coding standards.

As with rearchitecting, rewriting brings with it a significant danger of overlooking business process workflows that are implicit in the legacy code because of ad hoc patches and modifications made over the years, but which were never explicitly documented. Developers also shouldn’t forget that the legacy app is still in use because it works—it will have been heavily debugged and patched through time so that even low probability or extreme operational conditions are handled, if not gracefully, at least adequately.

For these reasons, developers involved in a rewrite must be extremely careful to ensure that all of the application’s use scenarios, whether documented or not, are uncovered and explicitly implemented in the new code.

One of the greatest dangers with a rewrite is that until it is completed, it may be necessary to freeze the functionality of the original app—otherwise, the rewrite is chasing a moving target. And in today’s environment of ever-accelerating technological change, that can be a recipe for disaster.

Joel Spolsky, formerly a Program Manager at Microsoft, and now Chairman of the Board at Glitch, cites a case in point. Netscape was once the leader in the internet browser market, but it made a fatal mistake by attempting a full rewrite of its browser code.

That effort took three years, during which Netscape was unable to update the functionality of its product because the original codebase was frozen. Competitors forged ahead with innovations, and Netscape’s market share plummeted. The company never recovered. According to Spolsky,

Netscape made “the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.”

Doing a complete rewrite of a legacy application may be necessary in some cases, but such a project should not be undertaken without a full evaluation of the associated costs and risks. It’s tempting to just clear the decks and start over without all the complexities of dealing with inherited code. But, as experts like Spolsky are quick to say, doing so is usually a mistake.

Refactoring is Key

Refactoring, rearchitecting, and rewriting are not mutually exclusive options. They can be seen as points along a continuum in the process of modernizing legacy applications:

  1. Start by refactoring legacy code into microservices. This gives the app an essentially cloud-enabled codebase that can be easily integrated with other cloud-based resources and positions it for further updates and improvements.
  2. If the application needs new functionality or performance levels that can’t be achieved with its original structure, rearchitecting may be in order.
  3. If rearchitecting to achieve the required functionality appears to be too complex or risky, starting from scratch by completely rewriting the app may be the best option.

Whichever option is ultimately pursued, refactoring should be the starting point because it produces a codebase that’s far easier for developers to understand and work with than was the original.

Plus, refactoring will unveil hidden dependencies and business process workflows buried in the code that may be missed if a development team goes straight to rearchitecting or rewriting as their initial step.

Note that all of these options require a substantial investment of time and expertise, especially if they are pursued through a mostly manual process using tools that were never designed for application modernization. But that need not, and should not, be the case.

Simplify Legacy App Modernization

The vFunction platform is specifically designed for AI-driven, cloud-native modernization of legacy applications. With it, designers can rapidly and incrementally modernize their legacy apps, and unlock the power of the cloud to innovate and scale.

The vFunction Assessment Hub uses its AI capabilities to automatically assess your legacy applications estate to help you prioritize and make a business case for modernization of a particular app or set of applications. This analysis provides a data-driven assessment of the levels of complexity, risk, and technical debt associated with the application.

Once this assessment has been performed, the vFunction Modernization Hub can then, under the direction of architects and developers, automatically transform complex monolithic applications into microservices. Through the use of these industry-leading vFunction capabilities, the time, complexity, risk, and cost of a legacy app modernization project can be substantially reduced. To see how vFunction can smooth the road to legacy application modernization at your company, schedule a demo today.

The CIO Guide to Modernizing Monolithic Applications

As the pace of technological change continues to accelerate, companies are being put under more and more pressure to improve their ability to quickly react to marketplace changes. And that, in turn, is putting corporate CIOs on the hot seat.

In a recent McKinsey survey, 71% of responding CIOs said that the top priority of their CEO was “agility in reacting to changing customer needs and faster time to market.” Those CEOs are looking to digital technology to enable their companies to keep ahead of competitors in a constantly evolving market environment.

CIOs are tasked with providing the IT infrastructure and tools needed to drive the marketplace innovation and agility required to accomplish that goal.

But in many cases CIOs are facing a seemingly intractable problem—they’ve inherited a suite of legacy applications that are indispensable to the company’s daily operations, but which also have very limited capacity for the upgrades necessary for them to be effective in the cloud-native, open-source technological landscape of today.

As a recent report by Forrester puts it,

“Most legacy core software systems are too inflexible, outdated, and brittle to give businesses the flexibility they need to win, serve, and retain customers.”

But because such systems are still critical for day-to-day operations, CIOs can’t just get rid of them. Rather, a way must be found to provide them with the flexibility and adaptability that will enable them to be full participants in the modern technological age.

The Problem with Monoliths

The fundamental cause of the brittleness and inflexibility that characterize most legacy systems is their monolithic architecture. That is, the codebase (which may have millions of lines of code) is a single entity with functionalities and dependencies interwoven throughout. Such applications are extremely difficult to update because a change to any part of the code can ripple through the application, causing unintended operational changes or failures in seemingly unrelated parts of the codebase.

Because they are inflexible and brittle, such applications cannot be easily updated with new features or functions—they were not designed with that capability in mind. A much broader transformation is required, one in which the application’s codebase is restructured in ways that allow it to be upgraded while maintaining the original scope. That broad restructuring is referred to as application modernization.

Application Modernization and The Cloud

What, exactly, is application modernization? Gartner provides this description:

“Application modernization services address the migration of legacy to new applications or platforms, including the integration of new functionality to provide the latest functions to the business.”

There are two key aspects of this definition: migration and integration.

Because the cloud is where the technological action is today, most application modernization efforts involve, as a first step, migrating legacy apps from their original host setting to the cloud. As McKinsey says of this trend:

“CIOs see the cloud as a predominant enabler of IT architecture and its modernization. They are increasingly migrating workloads and redirecting a greater share of their infrastructure spending to the cloud.”

The report goes on to note that McKinsey expects that by 2022, 75% of corporate IT workloads will be housed in the cloud.

That leads to the second element of the Gartner definition: integration. If legacy applications are to be effective in the cloud environment, they must be integrated into the open services-based cloud ecosystem.

That means it’s not enough to simply migrate applications to the cloud. They must also be transformed or restructured so that integration with cloud-native resources is not just possible, but easy and natural.

The fundamental purpose of application modernization is to restructure legacy code so that it is easily understandable to developers, and can be quickly updated to meet new business requirements.

Transitioning From a Monolithic Architecture to Microservices

What does it take to transform legacy apps so that they are not only cloud-enabled, but they fit as naturally into the cloud landscape as do cloud-native systems?

As we’ve seen, the fundamental problem that causes the rigidity and inflexibility that must be overcome in transforming legacy apps is their monolithic architecture. Monolithic applications are self-contained and aren’t always easy to integrate with other applications or systems. The codebase is a single entity in which all the functions are tightly coupled and interdependent. Such an app is, in essence, a “black box” as far as the outside world is concerned—its inputs and outputs can be observed, but its internal processes are entirely opaque.

If an app is to be integrated into the cloud’s open-source ecosystem, its functions must somehow be separated out so that they can interoperate with other cloud services. The way that’s normally accomplished is by refactoring the legacy code into microservices.

Related: Migrating Monolithic Applications to Microservices Architecture

What are Microservices?

Microsoft provides a useful description of the microservices concept:

“A microservices architecture consists of a collection of small, autonomous services. Each service is self-contained and should implement a single business capability.”

The key terms here are “small” and “autonomous.” Microservices may or may not be small, but they should be independent and loosely coupled, each covering a specific piece of functionality. Each is a separate codebase that performs only a single task, and each can be deployed and updated independently of the others. Microservices communicate with one another and with other resources only through well-defined APIs—there is no external visibility into, or coupling with, their internal functions.
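For a concrete (and deliberately simplified) picture of what that means, here is a sketch of a single-capability service that exposes its one function only through a small HTTP API. The service name, routes, and data are illustrative, and Flask is used purely for brevity.

```python
from flask import Flask, jsonify

app = Flask("inventory-service")

# Internal state is private to this service; other services never reach into
# it directly, they can only go through the API below.
_stock = {"sku-123": 42, "sku-456": 0}

@app.route("/inventory/<sku>", methods=["GET"])
def get_stock(sku):
    """Single business capability: report the stock level for one SKU."""
    if sku not in _stock:
        return jsonify({"error": "unknown sku"}), 404
    return jsonify({"sku": sku, "available": _stock[sku]})

if __name__ == "__main__":
    # Deployed, scaled, and updated independently of every other service.
    app.run(port=8081)
```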

Advantages of the microservices architecture include:

  • Agility: Because each microservice is small and independent, it can be quickly updated to meet new requirements without impacting the entire application.
  • Scalability: To scale any feature of a monolithic application when demand increases, the entire application must be scaled. In contrast, each microservice can be scaled independently without scaling the application as a whole. In the cloud environment, not having to scale the entire app can yield substantial savings in operating costs.
  • Maintainability: Because each microservice is small and does only one thing, maintenance is far easier than with a monolithic codebase, and can be handled by a small team of developers.

The key task of legacy application modernization is to decompose a monolithic codebase into a collection of microservices while maintaining the functionality of the original application.

But how is that to be accomplished with legacy code that is little understood and probably not well documented?

Options for Transforming Monolithic Code to Microservices

Gartner has identified seven options for migrating and upgrading legacy systems.

  1. Encapsulate: Connect the app to cloud-based resources by providing API access to its existing data and functions. Its internal structure and operations remain unchanged.
  2. Rehost (“Lift and Shift”): Migrate the application to the cloud as-is, without significantly modifying its code.
  3. Replatform: Migrate the application’s code to the cloud, incorporating small changes designed to enhance its functionality in the cloud environment, but without modifying its existing architecture or functionality.
  4. Refactor: Restructure and optimize the app’s code to a microservices architecture without changing its external behavior.
  5. Rearchitect: Create a new application architecture that enables improved performance and new capabilities.
  6. Rewrite: Rewrite the application from scratch, retaining its original scope and specifications.
  7. Replace: Completely eliminate the original application, and replace it with a new one.

All of these options are sometimes characterized as “modernization” methodologies. Actually, while encapsulating, rehosting, or replatforming do migrate an app (or in the case of encapsulation, its interfaces) to the cloud, no restructuring of the codebase takes place. If the app was monolithic in its original environment, it’s still monolithic once it’s housed in the cloud. So, these methods cannot rightly be called modernization options at all.

Neither does replacement qualify as a modernization option since rather than restructuring the legacy codebase, it throws it out completely and replaces it with something entirely new.

So, to truly modernize a legacy application from a monolith to microservices will involve the use of some combination of refactoring, rearchitecting, and rewriting. Let’s take a brief look at each of these:

  • Refactoring: Refactoring will be the first step in almost any process of modernizing monolithic legacy applications. By converting the codebase to a cloud-native, microservices structure, refactoring enables the app to be fully integrated into the cloud ecosystem. And once that’s accomplished, developers can easily update the app with new features to meet specific requirements.
  • Rearchitecting: Rearchitecting is usually employed to enable improvements in areas such as performance and scalability, or to add features that are not supported by the original design. Because rearchitecting makes fundamental changes to the structure and operation of the code, it is more complex, time-consuming, and risky than simply refactoring.
  • Rewriting: Completely rewriting the legacy code is the most complex, time-consuming, and risky of all the modernization options. It is usually resorted to when developers wish to avoid spending the time and effort required to deconstruct the existing code to understand how it works. Because a rewrite carries the highest risk of causing disruptions to a company’s business operations, it is normally used only as a last resort.

Although rearchitecting or rewriting may be appropriate for some cases, refactoring should always be the starting point because it produces a codebase that developers can easily upgrade with new features or functionality. As McKinsey puts it:

“It [is] critical for many applications to refactor for modern architecture.”

Challenges of Modernization

All of the modernization options (refactoring, rearchitecting, and rewriting) require extensive changes to the legacy application’s codebase. That’s not a task to be undertaken lightly. Legacy apps typically hold onto their secrets very tightly due to several common realities:

  • The developers who wrote and maintained the original code, which in some cases is decades old, have by now retired or are otherwise unavailable.
  • Documentation, both of the original requirements and modifications made to the code through the years, is often incomplete, misleading, or missing entirely.
  • Patches to the code to handle low frequency-of-occurrence exceptions or boundary conditions may not be documented at all, and can be understood only by a minute examination of the code.
  • Similarly, changes to business process workflows may have been incorporated through code patches that were never adequately documented or covered by tests. If such workflows are not discovered and accounted for in a modernization effort, important functions of the application may be lost.

Any modernization approach will involve a high degree of complexity, time, and expertise. McKinsey quotes one technology leader as saying,

“We were surprised by the hidden complexity, dependencies and hard-coding of legacy applications, and slow migration speed.”

Building a Modernization Roadmap

If you’re trying to drive to someplace you’ve never been before, it’s very helpful to have a map. That’s especially the case if you’re driving toward modernization of your legacy applications. You need a roadmap.

The first stop on your modernization roadmap will be an assessment of the goals of your business, where you currently stand in relation to those goals, and what you need from your technology to enable you to achieve those goals.

Then you’ll want to develop an understanding of exactly what you want your modernization process to achieve. You’ll analyze your current application portfolio in light of your business and technology goals, and determine which apps must be modernized, what method should be used, and what priority each app should have.

To learn more about creating a modernization roadmap, take a look at the following resource:

Related: Succeed with an Application Modernization Roadmap

Why Automation is Required for Successful Modernization

Converting a monolithic legacy app to a microservices architecture is not a trivial exercise. It is, in fact, quite difficult, labor-intensive, time-consuming, and risky. At least, it is if you try to do it manually.

It’s not unusual for a legacy codebase to have millions of lines of code and thousands of classes, with embedded dependencies and hidden flows that are far from obvious to the human eye. That’s why using a tool that automates the process is a practical necessity.

By intelligently performing static and dynamic code analyses, a state-of-the-art, AI-driven automation tool can, in just a few hours, uncover functionalities, dependencies, and hidden business flows that might take a human team months or years to unravel by manual inspection of the code.

And not only can a good modernization tool analyze and parse the monolithic codebase, it can actually refactor and rearchitect the application automatically, saving the untold hours that a team of highly skilled developers would otherwise have to put into the project.

According to McKinsey, companies that display a high level of agility in their marketplaces have dramatically higher rates of automation than those characterized as the “laggards” in their industries.

The vFunction Application Modernization Platform

The vFunction platform was built from scratch to be exactly the kind of automation tool that’s needed for any practical application modernization effort. It has advanced AI capabilities that allow it to automatically analyze huge monolithic codebases, both statically and during the actual execution of the code.

As the vFunction Assessment Hub crawls through your code, it automatically builds a lightweight assessment of your application landscape that helps you prioritize and make a business case for modernization. Once you’ve selected the right application to modernize, the vFunction Modernization Hub takes over, analyzing and automatically converting complex monolithic applications into extracted microservices.

vFunction has been demonstrated to speed up the modernization process by a factor of 15 or more, which can reduce the time required by such projects from months or years to just a few weeks.

If you’d like to experience firsthand how vFunction can help your company modernize its monolithic legacy applications, schedule a demo today.

Survey: 79% of Application Modernization Projects Fail

In the recent report “Why App Modernization Projects Fail”, vFunction partnered with Wakefield Research to gather insights from 250 IT professionals at a director level or higher in companies with at least 5,000 employees and one or more monolithic systems.

Application modernization is not a new concept: if a company develops software, at some point it will need to modernize it. As a code base grows, it becomes more complex, and engineering velocity slows down.

So what is elevating app modernization to a top priority for so many companies now? We see two major trends that are driving forces in the market:

  1. Digital Transformation – Many companies expedited these initiatives in response to the COVID-19 pandemic
  2. Shift to the Cloud – The benefits of cloud platforms have driven more companies to institute an executive mandate to move to the cloud

We also see competitive pressures increasingly driving companies to embark on modernization projects. Digital natives whose software was built for the cloud, with modern (cloud-native) architectures and stacks, can respond rapidly to the market with innovative features and functionality, whereas established companies are struggling with scalability and reliability issues. This brings heavy competitive pressure in the fight for customer loyalty.

Today, companies spend years mired in complex, lengthy, and inefficient app modernization projects, manually trying to untangle monolithic code.

So, it is not surprising that 79% of app modernization projects fail, averaging a cost of $1.5 million and a 16-month timeline.

There are many reasons for this. CIOs, whose role has evolved into one of the most strategic on the executive team, are under immense pressure to meet business objectives.

Undoubtedly, this role comes with changing priorities and limited resources. Additionally, architects are charged with modernizing monolithic apps, but often only have limited tools, teams, and time. Given the stakes, it is imperative that the C-Suite has a clear understanding of why modernization projects fail, and how investing in these modernization projects now benefits the company’s present and future. 

To provide this insight, we partnered with Wakefield Research to survey 250 technology professionals (leaders, architects, and developers at a director level or above) who are responsible for maintaining at least one monolithic app in a company of at least 5,000 employees.

The insights we gleaned say as much about the changing definition of successful outcomes as they do about cultures and how teams are organized to support these projects. The long-held notion of “lift and shift” is no longer considered a successful modernization outcome, and successful projects require a change in organizational structure to support the targeted modernized architecture.

We hope that this report will not only serve as valuable insight for those responsible for app modernization initiatives—but also as a reminder that having the proper tools in use plays an invaluable role in the success (or failure) of every venture.

Legacy Application Challenges and How Architects Can Face Them

Legacy system architectures pose quite a challenge when measured against today’s capabilities, and working with them can be frustrating. Software architects head the list of the frustrated because of the numerous struggles they experience with applications designed a decade ago or more.

Many of these problems stem from their monolithic architecture. In contrast to today’s architectures, legacy applications contain tight coupling between classes and complex interdependencies, leading to lengthy test and release cycles.

A lack of agility and slow engineering velocity make it onerous to meet customer requirements. Performance limitations imposed by the architecture result in a poor customer experience. Operational expenses are high.

All this leads to a competitive disadvantage for the company and its clients. 

Overall, legacy systems are hard to maintain and extend. Their design stifles digital modernization initiatives and hinders the adoption of new technologies like AI, ML, and IoT. Security is another crucial concern. There are no straightforward solutions to these problems. These are just a few reasons architects feel pain with legacy systems.

Let’s examine some issues that legacy systems have and see how modern applications fare in those same areas.

Scalability

The architecture of non-cloud-native applications makes them difficult and expensive to scale. A mature application is still deployed as a monolith, even if parts of it have been “lifted and shifted” to the cloud.

If some parts of the application experience load or performance issues, you cannot scale only those parts. You must scale up the entire application. This would require starting an additional large (or extra-large) compute instance, which can become expensive.

The situation is even more challenging for monolithic applications hosted on-premises or in data centers. It can take weeks to procure new hardware to scale up. There is no elasticity. Once provisioned, you are stuck with the hardware, even in times of low usage.

Organizations generate data at ever-increasing rates. They must store the data safely and securely in their servers. The cost of acquiring more storage is prohibitive.

Contrast this with a modern application built with microservices. If overall system performance needs a boost, it’s possible to scale only those microservices needed. Because the microservices are decoupled from the monolithic app, it’s possible to use compute instances more efficiently to keep costs from spiraling out of control. 

Modern applications hosted on the cloud can add capacity to handle spikes in demand in seconds. Cloud computing offers elasticity. You can automatically free up excess capacity in times of low usage. So, you trade fixed costs (on-premise data centers and servers) for variable expenses based on your consumption. The latter expenditure is low because cloud operators enjoy economies of scale.

Long Release Cycles

Software development on monolithic applications typically involves long release cycles. Teams tend to follow traditional development processes like Waterfall. The product team first documents all change requirements.

All concerned must review and sign off on the changes. The architecture is tightly coupled, so many groups are involved. They need to exercise due diligence to avoid undesired side effects because of the lack of modularity. Subsequent changes in requirements result in redoing the entire process.

After the developers have made the changes, the QA team tests extensively to ensure that there are no breakages. The release process itself is lengthy. You must deploy the entire application, even if you have changed only a minor part.

If you find any issues post-deployment, you must roll back the release, fix the problem, and repeat the release process. Technical debt makes integrating CI/CD into the legacy application workflow difficult. All this contributes to long release cycles.

Modern application developers, however, follow agile processes. Every team works only on one microservice, which they understand well. Microservices are autonomous and decoupled, enabling teams to work independently. Each microservice can be changed and deployed alone. A failure in one microservice does not impact the whole application.

Microservices can run in containers, making them easier to deploy, test, and port to different platforms. Many teams use DevOps techniques like Continuous Integration and Continuous Delivery (CI/CD). Consequently, developers make releases quickly, sometimes several times a day.

Most importantly, teams get out of the traditional mindset of long release cycles and into a mode where IT is aligned closely with business priorities. 

Accelerating the development process has many advantages. You can go to market faster, gaining a competitive advantage. Customers benefit because they get new features at a rapid clip. The workforce finds more satisfaction in their work.

Long Test Cycles

Legacy applications require a lot of testing effort. There are many reasons for this.

Mature monolithic applications are often poorly documented and understood by only a few employees. As the application ages, the team periodically adds new functionality, but this often happens in a silo, and updates to documentation and to the rest of the organization are not guaranteed. Hence, the domain knowledge – what functionality the application contains – is not shared by all the testers on the team.

This, together with tight coupling and dependencies between different classes in the application, means that testers cannot focus only on testing the changes. They must also verify functionality far afield from the change, because unexpected side effects could have occurred.

Software teams usually develop legacy applications without writing automated unit tests, so they don’t have a safety net to rely on when making changes. They must proceed cautiously.

In general, test automation for monolithic applications is insufficient or missing entirely. Any automated tests that do exist were typically written long after the code, and they cover only a small part of the functionality. Creating automated tests for a legacy application is a long-drawn-out process with an unclear ROI, and it’s rarely taken up. Therefore, testers must manually test everything. For large applications, this could take days or even weeks.

Another common issue with legacy applications is the existence of dead code. Dead code refers to code that is not used anymore but is still lurking in the system. Dead code is problematic on many levels.

It makes it difficult for newcomers to understand the application flow. The inactive code could inadvertently become live with catastrophic results. Dead code is also an indicator of poor development culture.
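
To make this concrete, here is a minimal sketch of what dead code often looks like in a Java monolith (the class and method names are hypothetical):

    public class InvoiceService {

        // Still called from the rest of the application.
        public double calculateTotal(double net, double taxRate) {
            return net * (1 + taxRate);
        }

        // Dead code: this legacy pricing rule is no longer referenced anywhere,
        // yet it ships with every build, confuses newcomers, and could be
        // reactivated by mistake.
        private double applyLegacyDiscount(double net) {
            return net > 1000 ? net * 0.95 : net;
        }
    }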

Hence, testing legacy applications is more of an art than a science, and it is a risky affair.

Testing microservices is a lot easier. Developers write unit tests alongside creating new features. The tests catch any breakages that result from code changes, and they are then added to the CI/CD pipelines and reused on every build.

Hence, the process automatically tests all new builds. It is easy to plug gaps in testing, as they are few. The testing time is compressed into the build and deployment cycle and is usually short. Manual testers can focus on exploratory testing. Overall, the product quality is a lot better.
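
As an illustration, a unit test of the kind a microservice team writes alongside a new feature might look like the following JUnit 5 sketch (the PriceCalculator class is a hypothetical example, defined inline to keep the sketch self-contained):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {

        // Minimal class under test, defined inline for the sake of the example.
        static class PriceCalculator {
            double grossPrice(double net, double taxRate) {
                return net * (1 + taxRate);
            }
        }

        @Test
        void addsTaxToNetPrice() {
            PriceCalculator calculator = new PriceCalculator();
            // 100.00 net with a 20% tax rate should yield 120.00 gross.
            assertEquals(120.00, calculator.grossPrice(100.00, 0.20), 0.001);
        }
    }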

Related: Four Advantages of Refactoring That Java Architects Love

Security and Compliance

Teams working with legacy applications may be unable to implement security best practices and consequently face more vulnerabilities and cybersecurity risks. The application design may make it incapable of supporting features like multi-factor authentication, audit trails, or encryption. 

Legacy applications can also pose a security threat because they use outdated technology versions that their manufacturers or vendors no longer support. Security updates are vital to keeping systems secure, yet interdependencies may make it impossible to upgrade from unsupported older operating systems and other software. Therefore, the IT team may not be able to address even known security issues.

There is extensive documentation on known security flaws in legacy applications and platforms. These systems are no longer supported, hence don’t receive patches and updates. They are particularly exposed. Hackers are well aware of their vulnerabilities and can attack them.

If a security breach happens, damage to a company’s reputation takes ages to repair. The public perception that your brand is unsafe never goes away completely. 

Many countries have introduced privacy standards like GDPR to protect personal data. Non-compliance with these standards can cause hefty penalties and fines. Hence, organizations must modernize their legacy systems to adhere to these requirements. They must change how they acquire, store, transmit and use personal data. It is a hard task.

Modern applications live on the cloud. Cloud providers make substantial investments to offer state-of-the-art security. They comply with security requirements prescribed by regulators and auditors, such as PCI DSS, SOC 1, and SOC 2. They also promptly apply the latest security patches to all their systems.

Related: “Java Monoliths” – Modernizing an Oxymoron

Inability to Meet Business Requirements

We have seen that legacy applications cannot meet customer demands on time because of long and unpredictable testing and release cycles. Additionally, the monolithic architecture does not scale efficiently when demand surges. Maintaining legacy applications is costly and laborious, and it can take a toll on team morale. It is not easy to find people who are interested or skilled in working with these aging technologies. Any new capability you want to offer, such as analytics or IoT, you must build yourself.

All this results in customer dissatisfaction. Clients look for more nimble and agile alternatives. Lack of performance, reliability, and agility, plus high costs, cause a loss of competitive advantage. They impact productivity and profitability both in your organization and for the products and services you deliver to your customer. You cannot meet your business goals.

The ability to respond swiftly to changing conditions is a key differentiator. IT leaders would like to increase agility and provide better quality service while reducing cost and risk.

Companies with cloud-native applications have a significant advantage because their deployment processes are automated end-to-end. They can release code into production hundreds or even thousands of times every day. So, they can rapidly offer their customers new features.

Cloud providers offer several built-in services that you can leverage. AWS, for instance, provides over 200 services like computing, storage, networking, security, and databases. You can start using them in minutes for a pay-as-you-use fee and quickly scale up your app’s functionality.

Poor Customer Experience

Today’s consumers expect a high-quality user experience. Exceptional customer experience allows a product to stand out in a crowded marketplace. It leads to higher brand awareness and increases customer retention, Average Order Value (AOV), and Customer Lifetime Value (CLTV).

Consumers expect immediacy and convenience in their interactions with customer service. Whether they are looking for information, wanting to report an issue, or seeking an update on an earlier request, they want fast and accurate answers. A long wait time has a negative impact.

Slow performance and high latency plague legacy applications even during simple transactions. To this, add buggy app experiences and inadequately addressed business requirements. This leads to a poor customer experience. 

Legacy systems suffer from compatibility issues. They work with data formats that may be out of fashion or obsolete. For example, they may generate reports as text files instead of PDFs, and may not integrate with monitoring, observability, and tracing technologies. 

Customers are now used to accessing services online using a device of their choice at a time that suits them. But most legacy systems don’t support mobile applications. Modern applications can provide 24/7 customer service using AI-powered chatbots.

They provide RPA (Robotic Process Automation) to automate the mechanical parts of a call center employee’s job. Businesses with legacy applications must invest in call centers staffed by expensive personnel or provide support only during business hours.

Legacy applications might serve their customers from on-premises servers located far away. Cloud-based applications can be deployed to servers close to customers anywhere, reducing latency.

Mitigating Architects’ Pains with Legacy Systems

We have seen the pains architects have with legacy systems and how modern applications alleviate these problems. But migrating from a monolith to microservices is an arduous undertaking. 

Every organization’s needs are different, and a one-size-fits-all approach won’t work. Architects must start with an assessment of their existing systems. It may be possible to rehost some parts of the application, whereas others will first need refactoring. In addition, they must also implement the process and tool-related modernization changes like containers and CI/CD pipelines.

So, modernizing is a complicated process, especially when done manually and from the ground up. 

Instead, it is much more predictable and less risky when performed with an automated modernization platform. A platform-centric approach provides a framework with proven best practices that have worked with multiple organizations. vFunction is an application modernization platform built with intuitive algorithms and an AI engine. It is the first and only platform for developers and architects that automatically decomposes legacy monolithic Java applications into microservices. The platform helps in overcoming all the issues inherent in monolithic applications.

Engineering velocity increases, customer experience improves, and the business enjoys all the benefits of being cloud-native. vFunction enables the reuse of the resulting microservices across many projects. The platform approach leads to a scalable and repeatable factory model. To see how we can help with your modernization needs, request a demo today.

Start Your Kubernetes Journey for Legacy Java Applications

Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. It is like the operating system of the cloud. A Kubernetes cluster comprises a control plane (or brain) and worker nodes that run your workloads. Here we’ll discuss if it is worth starting the modernization journey with Kubernetes for legacy Java applications.

Start Your App Modernization Journey with Kubernetes for Legacy Java Applications

We can look at how organizations have traditionally deployed applications to understand how Kubernetes is useful.

In earlier days, organizations ran their applications directly on physical servers. This resulted in resource contention and conflicts between applications. One application could monopolize a system resource (CPU, memory, or network bandwidth), starving other applications. So, the performance of those applications would suffer.

One solution to this problem was to run each application on a separate server. But this approach had two disadvantages – underutilization of compute resources and escalating server costs.

Another solution was virtualization. Virtualization involves creating Virtual Machines (VMs). A VM is a virtual computer that is allocated a share of the host system’s physical resources and runs its own operating system.

A VM runs only one application. Several VMs can run on a server. The VMs isolate applications from each other. Virtualization offers scalability, as one can add or remove VMs when needed. There is also a high utilization of server resources. Hence, it is an excellent solution and is still popular.

Next, containers appeared on the market, with Docker becoming the most popular container platform. Containers are like VMs, but they share the host operating system with other containers. Hence, they are comparatively lightweight. A container is independent of the underlying infrastructure (server). It is portable across different clouds and operating systems. So, containers are a convenient way to package and deploy applications.

In a production environment, engineers must manage the containers running their apps. They must add containers to scale up and replace or restart a container that has gone down. They must regularly upgrade the application. If all this could be done automatically, life would be easier, especially when dealing with thousands or millions of containers.

This is where Kubernetes comes in. It provides a framework that can run containers reliably and resiliently. It automates the deployment, scaling, upgrading, backup and restoration, and general management of containers. Google, the developer of Kubernetes, has used it to deploy millions of containers per hour. In the past few years, Kubernetes has become the number one container management platform.

What is a Kubernetes Operator?

Managing stateful applications running on Kubernetes is difficult. The Kubernetes Operator helps handle such apps. A Kubernetes Operator is an automated method that packages, maintains, and runs a stateful Kubernetes application. The Operator uses Kubernetes APIs to manage the lifecycle of the software it controls.

An Operator can manage a cluster of servers. It knows the configuration details of the applications running on these servers. So, it can create the cluster and deploy the applications. It can monitor and manage the applications, update them with newer versions, and automatically restart them if they fail. The Operator can take regular backups of the application data.

In short, the Kubernetes Operator replaces a human operator who would otherwise have performed these tasks.

How Should You Run Kubernetes?

There are many options for running Kubernetes. Keep in mind that you will not just need to set up the Kubernetes clusters one time, but you’ll also need to make frequent changes and upgrades.

Adopting Kubernetes for Legacy Java Technologies

Let’s look at how using Kubernetes (or Kubernetes Operators) alone, instead of completely modernizing your applications, makes it easier to work with many traditional Java frameworks, application servers, and databases.

Kubernetes for Java Frameworks

Spring Boot, Quarkus, and Micronaut are popular frameworks for working with enterprise Java applications in a modern way.

Using Spring Boot with Kubernetes

Spring Framework is a popular, open-source, enterprise-grade framework for creating standalone, production-ready applications that run on the Java Virtual Machine (JVM). Spring Boot is a tool that uses Spring to build web applications and microservices quickly and easily, letting you create a Spring app with minimal configuration.

Deploying a Spring Boot application to Kubernetes involves a few simple steps:

1.   Create a Kubernetes cluster, either locally or on a cloud provider.

2.   If you already have a Spring Boot application, clone its repository in the terminal. Otherwise, create a new application. Make sure that the application has some HTTP endpoints.

3.   Build the application. A successful build results in a JAR file.

4.   Containerize the application in Docker using Maven, Gradle, or your favorite tool.

5.   You need a YAML specification file to run the containerized app in Kubernetes. Create the YAML manually or generate it with the kubectl command.

6.   Now deploy the app on Kubernetes, again using kubectl.

7.   You can check whether the app is running by invoking the HTTP endpoints using curl.

Check out the Spring documentation for a complete set of the commands needed.
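
For step 2, a minimal Spring Boot application with a single HTTP endpoint could look like the sketch below (a hypothetical class, assuming the spring-boot-starter-web dependency is on the classpath):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @RestController
    public class DemoApplication {

        public static void main(String[] args) {
            SpringApplication.run(DemoApplication.class, args);
        }

        // Endpoint used in step 7 to verify the deployment, for example:
        // curl http://<service-address>/hello
        @GetMapping("/hello")
        public String hello() {
            return "Hello from Kubernetes";
        }
    }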

Using Quarkus with Kubernetes

Quarkus aims to combine the benefits of the feature-rich, mature Java ecosystem with the operational advantages of Kubernetes. Quarkus auto-generates Kubernetes resources based on defaults and user-supplied configuration for Kubernetes, OpenShift, and Knative. It creates the resource files using Dekorate (a tool that generates Kubernetes manifests).

Quarkus then deploys the application to a target Kubernetes cluster by applying the generated manifests to the target cluster’s API Server. Finally, Quarkus can create a container image and save it before deploying the application to the target platform.

The following steps describe how to deploy a Quarkus application to a Kubernetes cluster on Azure. The steps for deploying it on other cloud platforms are similar.

1.   Create a Kubernetes cluster on Azure.

2.   Install the Kubernetes CLI on your local computer.

3.   From the CLI, connect to the cluster using kubectl.

4.   Azure expects web applications to run on port 80. Update the Dockerfile.native file to reflect this.

5.   Rebuild the Docker image.

6.   Install the Azure Command Line Interface.

7.   Deploy the container image either to the Kubernetes cluster or to Azure App Service on Linux Containers (the latter option provides scalability, load balancing, monitoring, logging, and other services).

Quarkus includes a Kubernetes Client extension that enables it to unlock the power of Kubernetes Operators.
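
For reference, the application being deployed is an ordinary Quarkus REST resource. A minimal hypothetical sketch follows (on Quarkus versions before 3.0, the javax.ws.rs package is used instead of jakarta.ws.rs):

    import jakarta.ws.rs.GET;
    import jakarta.ws.rs.Path;
    import jakarta.ws.rs.Produces;
    import jakarta.ws.rs.core.MediaType;

    // A simple endpoint to verify the deployment once the container is running.
    @Path("/hello")
    public class GreetingResource {

        @GET
        @Produces(MediaType.TEXT_PLAIN)
        public String hello() {
            return "Hello from Quarkus on Kubernetes";
        }
    }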

Using Kubernetes with Micronaut

Micronaut is a modern, full-stack Java framework that supports the Java, Kotlin, and Groovy languages. It tries to improve over other popular frameworks, like Spring and Spring Boot, with a fast startup time, reduced memory footprint, and easy creation of unit tests.

The Micronaut Kubernetes project simplifies the integration between the two by offering the following facilities:

  • It contains a Service Discovery module that allows Micronaut clients to discover Kubernetes services.
  • The Configuration module can read Kubernetes ConfigMap and Secret instances and make them available as PropertySources in the Micronaut application. Any bean can then read the configuration values using @Value (or any other method); a minimal sketch follows this list. The Configuration module also monitors changes in the ConfigMaps, propagates them to the Environment, and refreshes it, so changes become available in the application immediately, without a restart.
  • The Configuration module also provides a KubernetesHealthIndicator that exposes information about the pod in which the application is running.
  • Overall, the library makes it easy to deploy and manage Micronaut applications on a Kubernetes cluster.
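
Here is a minimal sketch of the @Value usage mentioned above (a hypothetical bean and property name; on older Micronaut versions, javax.inject.Singleton is used instead of jakarta.inject.Singleton):

    import io.micronaut.context.annotation.Value;
    import jakarta.inject.Singleton;

    @Singleton
    public class GreetingConfig {

        // Resolved from a PropertySource that the Kubernetes Configuration
        // module populates from a ConfigMap; "Hello" is the fallback default.
        @Value("${greeting.message:Hello}")
        private String message;

        public String getMessage() {
            return message;
        }
    }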

Kubernetes for Legacy Java Application Servers

Java application servers host Java EE applications. They provide services specified by Java EE, such as security, transaction support, load balancing, and distributed system management.

Popular Java EE compliant application servers include Apache Tomcat, Red Hat JBoss EAP and WildFly, Oracle WebLogic, and IBM WebSphere. Businesses have been using them for years to host their legacy Java applications. Let’s see how you can use them with Kubernetes.

Using Apache Tomcat with Kubernetes

Here are the steps to install and configure Java Tomcat applications using Kubernetes:

1.       Build the Tomcat Operator with the source code from GitHub.

2.       Push the image to a Docker registry.

3.       Deploy the Operator image to a Red Hat OpenShift cluster.

4.       Now deploy your application using the operator’s custom resources.

You can also deploy an existing WAR file to the Kubernetes cluster.

Using Red Hat OpenShift / JBoss EAP / WildFly with Kubernetes

Red Hat OpenShift is a Kubernetes platform that offers automated operations and streamlined lifecycle management. It helps operations teams provision, manage, and scale Kubernetes platforms. The platform can bundle all required components like libraries and runtimes and ship them as one package.

To deploy an application in Kubernetes, you must first create an image with all the required components and containerize it. The JBoss EAP Source-to-Image (S2I) builder tool creates these images from JBoss EAP applications. Then use the JBoss EAP Operator to deploy the container to OpenShift.

The JBoss EAP Operator simplifies operations while deploying applications. You only need to specify the image and the number of instances to deploy. It supports critical enterprise functionality like transaction recovery and EJB (Enterprise Java Beans) remote calls.

Some benefits of migrating JBoss EAP apps to OpenShift include reduced operational costs, improved resource utilization, and a better developer experience. In addition, you get the Kubernetes advantages in maintaining, running, and scaling application workloads.

Thus, using OpenShift simplifies legacy application development and deployment.

Using Oracle WebLogic with Kubernetes

Oracle’s WebLogic server runs some of the most mission-critical Java EE applications worldwide.

You can deploy the WebLogic server in self-hosted Kubernetes clusters or on Oracle Cloud. This combination offers the advantages of automation and portability. You can also easily customize multiple domains. The Oracle WebLogic Server Kubernetes Operator simplifies creating and managing WebLogic Servers in Kubernetes clusters.

The operator enables you to package your WebLogic Server installation and application into portable images. This, along with the resource description files, allows you to deploy them to any Kubernetes cluster where you have the operator installed.

The operator supports CI/CD processes. It facilitates the integration of changes when deploying to different environments, like test and production.

The operator uses Kubernetes APIs to perform provisioning, application versioning, lifecycle management, security, patching, and scaling.

Using IBM WebSphere with Kubernetes

IBM WebSphere Application Server is a flexible and secure Java server for enterprise applications. It provides integrated management and administrative tools, centralized logging, monitoring, and many other features.

IBM Cloud Pak for Applications is a containerized software solution for modernizing legacy applications. The Pak comes bundled with WebSphere and Red Hat OpenShift. It enables you to run your legacy applications in containers and deploy and manage them with Kubernetes.

Related: The Best Java Monolith Migration Tools

Kubernetes with Legacy Java Databases

For orchestrators like Kubernetes, managing stateless applications is a breeze. However, they find it challenging to create and manage stateful applications with databases. Here is where Operators come in.

In most organizations, Database Administrators create database clusters in the cloud and secure and scale them. They watch out for patches and upgrades and apply them manually. They are also responsible for taking backups, handling failures, and monitoring load and efficiency.

All this is tedious and expensive. But Kubernetes Operators can perform these tasks automatically without human involvement.

Let us look at how they help with two popular database platforms, MySQL and MongoDB.

Using MySQL with Kubernetes

Oracle has released the open-source Kubernetes Operator for MySQL. It is a Kubernetes Controller that you install inside a Kubernetes cluster. The MySQL Operator uses Custom Resource Definitions to extend the Kubernetes API. It watches the API server for Custom Resources relating to MySQL and acts on them. The operator makes running MySQL inside Kubernetes easy by abstracting complexity and reducing operational overhead. It manages the complete lifecycle, with automated setup, maintenance, upgrades, and backup.

Here are some tasks that the operator can automate:

  • Create and scale a self-healing MySQL InnoDB cluster from a YAML file
  • Back up a database and archive it in object storage
  • List backups and fetch a particular backup
  • Back up databases according to a defined schedule

If you are planning to deploy MySQL inside Kubernetes, the MySQL Operator can do the heavy lifting for you.

Using MongoDB with Kubernetes Operator

MongoDB is an open-source, general-purpose, NoSQL (non-relational) database manager. Its data model allows users to store unstructured data. The database comes bundled with a rich set of APIs.

MongoDB is very popular with developers. However, manually managing MongoDB databases is time-consuming and difficult.

MongoDB Enterprise Kubernetes Operator: MongoDB has released the MongoDB Enterprise Operator. The operator enables users to deploy and manage database clusters from within Kubernetes. You can specify the actions to be taken in a declarative configuration file. Here are some things you can do with MongoDB using the Enterprise Operator:

  • Deploy and scale MongoDB clusters of any size
  • Specify cluster configurations like security settings, resilience, and resource limits
  • Enable centralized logging

Note that the Enterprise Operator performs all its activities through Ops Manager, the MongoDB management platform.

Running MongoDB using Kubernetes is much easier than doing it manually.

MongoDB comes in many flavors. The MongoDB Enterprise Operator supports the MongoDB Enterprise version, the Community Operator supports the Community version, and the Atlas Operator supports the cloud-based database-as-a-service Atlas version.
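
Once an operator has provisioned a cluster, application code connects to it through the cluster’s Kubernetes service name, just as it would to any other MongoDB deployment. Here is a minimal sketch using the MongoDB Java sync driver (the connection string is a hypothetical example; the actual host depends on how the operator deployed the cluster):

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoDatabase;

    public class MongoConnectionExample {

        public static void main(String[] args) {
            // Hypothetical in-cluster service address created by the operator.
            String uri = "mongodb://my-mongodb-svc.mongodb.svc.cluster.local:27017";

            try (MongoClient client = MongoClients.create(uri)) {
                MongoDatabase db = client.getDatabase("orders");
                System.out.println("Connected to database: " + db.getName());
            }
        }
    }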

How Do Kubernetes (and Docker) Make a Difference with Legacy Apps?

Using Kubernetes provides tactical modernization benefits but not strategic gains (re-hosting compared to refactoring). 

When legacy Java applications use Kubernetes (or Kubernetes Operators) with Docker, they immediately get some benefits. To recap, they are:

Improved security: Container platforms have security capabilities and processes baked in. One example is the concept of least privilege. It is easy to add additional security tools. Containers provide data protection facilities, like encrypted communication between containers, that apps can use right away.

Simplified DevOps: Deploying legacy apps to production is error-prone because the team must individually deploy executables, libraries, configuration files, and other dependencies. Failing to deploy even one of these dependencies or deploying an incorrect version can lead to problems.

But when using containers, developers build an image that includes the code and all other required components. Then they deploy these images in containers. So, nothing is ever left out. With Kubernetes, container deployment and management are automated, simplifying the DevOps process.

This approach has some drawbacks. There is no code-level modernization and no architectural change: the original monolith stays intact. There is no reduction of technical debt, because the code remains untouched. Scalability is limited, since the entire application (now inside a container) has to be scaled.

With modern applications, we can scale individual microservices. With a containerized monolith created long ago on older technology versions, the drawbacks of unexpected linkages and poorly understood code remain. Hence, there is no increase in development velocity.

Using Kubernetes is a Start, but What If You Want to Go Further?

Enterprises use containers to build applications faster, deploy to hybrid cloud environments, and scale automatically and efficiently. Container platforms like Docker provide a host of benefits. These include increased ease and reliability of deployment and enhanced security. Using Kubernetes (directly or with Operators) with containers makes the process even better.

We have seen that using Kubernetes with unchanged legacy applications has many advantages. But these advantages are tactical.

Consider a more transformational form of application modernization to get the significant strategic advantages that will keep your business competitive. This would include breaking up legacy Java applications into microservices, creating CI/CD pipelines for deployment, and moving to the cloud. In short, it involves making your applications cloud-native. Carrying out such a full-scale modernization can be risky. There are many options to consider and many choices to make. It involves a lot of work. You’ll want some help with that.

vFunction has created a repeatable platform that can transform legacy applications to cloud-native, quickly, safely, and reliably. Request a demo to see how we can make your transformation happen.

Cloud Modernization After Refactoring: A Continuous Process

Refactoring is a popular and effective way of modernizing legacy applications. However, to get the maximum benefits of modernization, we should not stop after refactoring. Instead, we should continue modernization after refactoring as part of a process of Continuous Modernization, a term coined by a leading cloud modernization platform.

Continuous Modernization: Modernization after Refactoring

Businesses constantly adapt and improve to handle new opportunities and threats. Similarly, they must also continuously keep upgrading their enterprise software applications. With time, all enterprise applications are susceptible to technical debt accumulation. Often, the only way to repay the debt is to refactor and move to the cloud. This process of application modernization provides significant benefits.

A “megalith” is a large traditional monolithic application that has over 5 million lines of code and 5,000 classes. Companies that maintain megaliths often choose the less risky approach of incremental modernization. So, at a point in time, part of their application may have been modernized to microservices running in the cloud and deployed by CI/CD pipelines. The remaining portion of the legacy app remains untouched. 

Modernization is an Ongoing Process

Three of the most popular approaches to modernization are rehosting, re-platforming, and refactoring.

Rehosting (or Lift and Shift): This involves moving applications to the cloud as-is or with minimal changes. Essentially, you change the place where the application runs. Often, this means migrating your application to the cloud. However, you can move it to shared servers, a private cloud, or a public cloud. 

Re-platforming: The approach takes a newer runtime platform, and inserts the old functionality. You’ll end up with a mosaic that mixes the old in with the new. From the end user’s perspective, the program operates the same way it was before modernization, so they don’t need to learn much in the way of new features. At the same time, your legacy application will run faster than before and be easier to update or repair.

Refactoring:  Refactoring is the process of reorganizing and optimizing existing code. It lets you get rid of outdated code, reduce significant technical debt, and improve non-functional attributes such as performance, security, and usability. By refactoring, you can also adapt to changing requirements since cloud-native, and microservice architectures make it possible for applications to add new features or modify existing ones right away.

Of these, refactoring requires the most effort and yields the most benefits. In addition to code changes, refactoring also includes process-related enhancements like CI/CD to unleash the full power of modernization. Modernization, however, is not a once-and-done activity. In fact, modernization after refactoring is a continuous process.

The Role of DevOps in Modernization

Application modernization and DevOps go hand in hand. DevOps (Development + Operations) is a set of processes, practices, and tools that enable an organization to deliver applications and updates at high velocity. DevOps facilitates previously siloed groups – developers and operations – to coordinate and produce better products.

Continuous integration (CI) and continuous delivery/deployment (CD) are the two central tenets of DevOps. For maximum benefits, modernization after refactoring should include CI and CD.

Continuous Integration: Overview, History, and How It Works

Software engineers work on “branches,” which are private copies of code only they can access. They make the copies from a central code repository, often called a “mainline” or “trunk”. After making changes to their branch and testing, they must “merge” (integrate) their changes back into the central repository. This process could fail if, in the meantime, another developer has also changed the same files. Here, a “merge conflict” results and must be resolved, often a laborious process.

Continuous integration (CI) is a DevOps practice in which software developers frequently merge their code changes into the central depository. Because developers check in code very often, there are minimal merge conflicts. Each merge triggers an automated build and test cycle. Developers fix all problems immediately. CI’s goals are to reduce integration issues, find and resolve bugs sooner, and release software updates faster.

Grady Booch first used the phrase Continuous Integration in his book, “Object-Oriented Analysis and Design with Applications”, in 1994. When Kent Beck proposed the Extreme Programming development process, he included twelve programming practices he felt were essential for developing quality software. Continuous integration was one of them.

How Does Continuous Integration Work?

There are several prerequisites and requirements for adopting CI.

Maintain One Central Source Code Repository

A central source code repository (or repo) under a version control system is a prerequisite for Continuous Integration. When a developer works on the application, they check out the latest code from the repo. After making changes, they merge their changes back to the repo. So, the repo contains the latest, or close to the latest, code at all times.

Automated Build Process

It should be possible to kick off the build with a single command. The build process should do everything – generate the executables, libraries, databases, and anything else needed –to get the system up and running.

Automated Testing

Include automated tests in the build process. The test suite should verify most, if not all, of the functionality in the build. A report should tell you how many tests passed at the end of the test run. If any test fails, the system should mark the build as failed, i.e., unusable.

A Practice of Frequent Code Commits

As mentioned earlier, a key goal of CI is to find and fix merge problems as early as possible. Therefore, developers must merge their changes to the mainline at least once a day. This way, merge issues don’t go undetected for more than a day at the most.

Every Commit Should Trigger a Build

Every code commit should trigger a build on an integration machine. The commit is a success only if the resulting build completes and all tests pass. The developer should monitor the build, and fix any failures immediately. This practice ensures that the mainline is always in a healthy state.

Fast Build Times

The build time is the time taken to complete the build and run all tests. What is an acceptable build time? Developers commit code to the mainline several times every day. The last thing they want to do after committing is to sit around twiddling their thumbs. Approximately 10 minutes is usually acceptable.

Fix Build Breaks Immediately

A goal of CI is to have a release-quality mainline at all times. So, if a commit breaks the build, the goal is not being met. The developer must fix the issue immediately. An easy way to do this is to revert the commit. Also, the team should consciously prioritize the correction of a broken build as a high-priority task. Team members should be careful to only check in tested code.

The Integration Test Environment Should Mirror the Production Environment

The goal of testing is to discover any potential issues that may appear in production before deployment. So, the test environment must be as similar to the production environment as possible. Every difference adds to the risk of defects escaping to production.

Related: Succeed with an Application Modernization Roadmap

Continuous Delivery/Deployment

CD stands for both continuous delivery and continuous deployment. They differ only in the degree of automation.

Continuous delivery is the next step after continuous integration. The pipeline automatically builds the newly integrated code, tests the build, and keeps the deployment packages ready. Manual intervention is needed to deploy the build to a testing or production environment.

In continuous deployment, the entire process is automated. Every successful code commit results in deploying a new version of the application to production without human involvement.

CI streamlines the code integration process, while CD automates application delivery.

Popular CI/CD Tools

There are many CI/CD tools available. Here are the leading ones.

Jenkins

Jenkins is arguably the most popular CI/CD tool today. It is open-source, free, and supports almost all languages and operating systems. Moreover, it comes with hundreds of plugins that make it easy to automate any building, testing, or deployment task.

AWS CodeBuild

CodeBuild is a CI/CD tool from AWS that compiles code, runs tests, and generates ready-to-deploy software packages. It takes care of provisioning, managing, and scaling your build servers. CodeBuild automatically scales and runs concurrent builds. It comes with an IDE (Integrated Development Environment).

GitLab

GitLab is another powerful CI/CD tool. An interesting feature is its ability to show performance metrics of all deployed applications. A pipeline graph feature shows the status of every task. GitLab makes it easy to manage Git repositories. It also comes with an IDE.

GoCD

GoCD from ThoughtWorks is a mature CI/CD tool. It is free and open-source. GoCD visually shows the complete path from check-in to deployment, making it easy to analyze and optimize the process. This tool has an active user community.

CircleCI

CircleCI is one of the world’s largest CI/CD platforms. The simple UI makes it easy to set up projects. It integrates smoothly with GitHub and Bitbucket. You can conveniently identify failing tests from the UI. It has a free tier of service that you can try out before committing to the paid version.

You should select the CI/CD tool that helps you optimize your software development process.

Related: Cloud vs Cloud-Native: Taking Legacy Java Apps to the Next Level

The Benefits of CI and CD

The complete automation of releases — from compiling to testing to the final deployment — is a significant benefit of the CI/CD pipeline. Other benefits of the CI/CD process include:

  • Reduction of deployment time: Automated testing makes the development process very efficient and reduces the length of the software delivery process. It also improves quality.
  • Increase in agility: Continuous deployment allows a developer’s changes to the application to go live within minutes of making them.
  • Saving time and money: Automation results in fast development, testing, and deployment. The saving in time translates to a cost-saving. More time is available for innovation. Code reviewers save time because they can now focus on code instead of functionality.
  • Continuous feedback loop: The CI/CD pipeline is a continuous cycle of building, testing, and deployment. Every time the tests run and find issues, developers can quickly take corrective action, resulting in continuous improvement of the product.
  • Address issues earlier in the cycle: Developers commit code frequently, so merge conflicts surface early. Every check-in generates a build. The automated test suite runs on each build, so the team catches integration issues quickly.
  • Testing in a production-like environment: You mitigate risks by setting up a production environment clone for testing.
  • Improving team responsiveness: Everyone on the team can change code, respond to feedback, and respond promptly to any issues.

These are some notable benefits of CI/CD.

CI and CD: Differences

There are fundamental differences between continuous integration and continuous deployment.

For one, CI happens more frequently than CD.

CI is the process of automating the build and testing code changes. CD is the process of automating the release of code changes.

CI is the practice of merging all developer code to the mainline several times a day. CD is the practice of automatically building the changed code and testing and deploying it to production.

Continuous Modernization after Refactoring

We started this article by stating that application modernization is often the only way software teams can pay off their technical debt. We also mentioned continuous modernization. Companies are increasingly leaning toward continuous modernization. They constantly monitor technical debt, make sure they have no dead code, and ensure good test coverage. Their goal is to prevent the modernized code from regressing.  

How to Build Continuous Modernization Into Your CI/CD Pipeline

We have seen the many benefits that CI/CD provides. As more and more companies realize the benefits of continuous integration and deployment, expectations keep increasing. Companies expect every successful dev commit to be available in production in minutes. For large teams, this could imply several hundred or even thousands of deployments every day. Let’s look at how to continuously modernize the CI/CD pipelines so that they don’t become a bottleneck.

  • Keep scaling the CI/CD platforms: You must continuously scale the infrastructure needed to provide fast builds and tests for all team members.
  • Support for new technologies: As the team starts using new languages, databases, and other tools, the CI/CD platform must keep up.
  • Reliable tests: You should have confidence in the automated tests. All tests must be consistent. You must optimize the number of tests to control test execution time.
  • Rapid pipeline modification: The team should be able to reconfigure pipelines rapidly to keep up with changing requirements.

Next Steps Toward Continuous Modernization

vFunction, which has developed an AI and data science-powered platform to transform legacy applications into microservices, helps companies on their path towards continuous modernization. There are two related tools:

  • vFunction Assessment Hub, an assessment tool for decision-makers that analyzes the technical debt of a company’s monolithic applications, accurately identifies the source of that debt, and measures its negative impact on innovation
  • vFunction Modernization Hub, an AI-driven modernization solution that automatically transforms complex monolithic applications into microservices, restoring engineering velocity, increasing application scalability, and unlocking the value of the cloud.

These tools help organizations manage their modernization journey.

vFunction Assessment Hub measures app complexity based on code modularity and dependency entanglements, measures the risk of changes impacting stability based on the depth and length of the dependency chains, and then aggregates these to assess the overall technical debt level. It benchmarks debt, risk, and complexity against the organization’s own estate, while identifying aging frameworks that could pose future security and licensing risks. vFunction Assessment Hub integrates seamlessly with vFunction Modernization Hub, which can take the results directly into refactoring, re-architecting, and rewriting applications.

vFunction Modernization Hub utilizes both deep domain-driven observability, via a passive JVM agent, and sophisticated static analysis. It analyzes architectural flows, classes, usage, memory, and resources to detect and unearth critical business domain functions buried within a monolith.

Whether your application is on-premises or you have already lifted and shifted to the cloud, the world’s most innovative organizations are applying vFunction to their complex “megaliths” (large monoliths) to untangle the complex, hidden, and dense dependencies of business-critical applications that often total over 10 million lines of code and consist of thousands of classes. The convenience of this approach lies in the fact that all of this happens behind a single screen. You don’t need to use several tools to perform the analysis or manage the migration. Contact vFunction to request a demo and learn more.

Quality Testing Legacy Code – Challenges and Benefits

Many of the world’s businesses are running enterprise applications that were developed a decade ago or more. Companies built the apps using a monolithic application architecture and hosted them in private data centers. With time, these applications have become mission-critical for the business; however, they come with many challenges as they age. Testing legacy code uncovers some of these flaws.

In many cases, companies developed the apps without following commonly accepted best practices like TDD (Test Driven Development), unit tests, or automated testing. The testers usually created a test-plan document that listed all potential test cases. But as the developers added new features and changed old ones, testing use cases may not have kept up with the changes. As a result, tests were no longer in sync with the application functionality.

Thus, testing became a hit-or-miss approach, relying mainly on the domain knowledge of a few veteran employees. And when these employees left the organization, this knowledge departed with them. The product quality suffered. Customers became unhappy, and employees lost morale. This is especially salient these days, in what is being called The Great Resignation.

Poor Code Quality Affects Business: Prevent It By Testing Legacy Code

Poor code quality can lead to critical issues in the product’s functionality. In extreme cases, these issues can cause accidents or other disasters and even lead to deaths. The company’s reputation takes a hit as the quality of its products plummets.

Poorly written code results in new features taking longer to develop. The product does not scale as usage increases, leading to unpredictable performance. Product reliability is a big question mark. Security flaws make the product vulnerable, inviting the unwelcome attention of cyber-attackers.

Current users leave, and new prospects stay away. The company spends more on maintaining technical debt than on the innovation needed to boost consumer and employee confidence.

Ultimately, the company’s standing suffers, as do its revenues. Thus, code quality directly affects a company’s reputation and financial performance.

How Do We Define Code Quality?

How do we go about testing legacy code quality, and what characteristics does good code have? There is no straightforward answer, as coding is part art and part science. Therefore, estimating code quality can be a subjective matter. Nevertheless, we can measure software quality in two dimensions: qualitatively and quantitatively.

Qualitative Measurement of Code Quality

We cannot conveniently or accurately assess these qualitative aspects with tools. Instead, we must measure them by other means, such as code reviews by experts, or indirectly, by observing the product’s performance. Here are some parameters that help us evaluate code quality.

Extensibility

Software applications must keep changing in response to market and competitor requirements. So, developers should be able to add new features and functionality without affecting other parts of the system. Extensibility is a measure of whether the design of the software easily allows this. 

Maintainability

Maintainability refers to the ease of making code changes and the associated risks. It depends on the size and complexity of the code. The Halstead complexity score is one measure of maintainability. (Note that extensibility refers to adding large chunks of code to implement brand new features, whereas maintainability refers to making comparatively minor changes).

Testability

Testability is a function of the number of test cases needed to test the system by covering all code paths. It measures how easy it is to verify all possible use cases. The cyclomatic complexity score is an indicator of how testable your app is.

Portability

Portability shows how easily the application can run on a different platform. You can plan for portability from the start of development. Keep compiling and testing on target operating systems, set compiler warning levels to the highest to flag compatibility issues, follow a coding standard, and perform frequent code reviews.

Reusability

Sometimes developers use the same functionality in many places across the application. Reusability refers to the ease with which developers can share code instead of rewriting it many times. It is easier to reuse assets that are modular and loosely coupled. We estimate reusability by identifying the interdependencies in the system.

Reliability

Reliability is the probability that the system will run without failing for a given period of time. It is closely related to, but distinct from, availability, which also accounts for how quickly the system can be repaired. A common measure of reliability is Mean Time Between Failures (MTBF).
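
As a rough illustration (the figures below are hypothetical), MTBF can be combined with Mean Time To Repair (MTTR) to estimate availability:

```java
// Illustrative only: relating MTBF and MTTR to availability.
public class AvailabilityDemo {
    public static void main(String[] args) {
        double mtbfHours = 720.0;  // mean time between failures (hypothetical)
        double mttrHours = 2.0;    // mean time to repair (hypothetical)

        // Availability = MTBF / (MTBF + MTTR)
        double availability = mtbfHours / (mtbfHours + mttrHours);
        System.out.printf("Availability: %.3f%%%n", availability * 100); // ~99.723%
    }
}
```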

To summarize, these parameters are difficult to quantify, and we must determine them by observation over a period. If the application performs well on all these measures, it is likely to be high quality.

Related: How to Conduct an Application Assessment for Cloud Migration

Quantitative Measures of Code Quality

In addition, there are several quantitative metrics for measuring code quality.

Defect Metrics

Quality experts use historical data (pertaining to the organization) to predict how good or bad the software is. They use metrics like defects per hundred lines of code and escaped defects per hundred lines of code to quantify their findings.
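
With hypothetical figures, both metrics are simple ratios, as the sketch below shows:

```java
// Hypothetical example of two common defect metrics.
public class DefectMetricsDemo {
    public static void main(String[] args) {
        int linesOfCode = 50_000;
        int defectsFoundInTesting = 400;
        int defectsFoundInProduction = 25;   // "escaped" defects

        double defectDensity = defectsFoundInTesting / (linesOfCode / 100.0);
        double escapedDensity = defectsFoundInProduction / (linesOfCode / 100.0);

        System.out.printf("Defects per 100 LOC: %.2f%n", defectDensity);          // 0.80
        System.out.printf("Escaped defects per 100 LOC: %.2f%n", escapedDensity); // 0.05
    }
}
```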

Cyclomatic Complexity

The Cyclomatic Complexity metric describes the complexity of a method (or function) with a single number. In simple terms, it is the number of linearly independent execution paths through the code and hence the minimum number of test cases needed to cover it. The higher the cyclomatic complexity, the lower the readability and the harder the code is to maintain.
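
The hypothetical Java method below illustrates the idea: it contains three decision points (one loop and two if statements), so its cyclomatic complexity is 4, and at least four test cases are needed to cover every independent path.

```java
// Illustrative only: cyclomatic complexity = decision points + 1.
public class ComplexityDemo {
    static int score(int[] values, int threshold) {
        int total = 0;
        for (int v : values) {            // decision point 1
            if (v < 0) {                  // decision point 2
                continue;                 // ignore invalid values
            }
            if (v > threshold) {          // decision point 3
                total += 2;
            } else {
                total += 1;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(score(new int[] {-1, 3, 10}, 5)); // prints 3
    }
}
```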

Halstead Metrics

The Halstead metrics comprise a set of several measurements. Their basis is the number of operators and operands in the application. The metrics represent the difficulty in understanding the program, the time required to code, the number of bugs testers should expect to find, and others.
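
The sketch below computes the core Halstead measures from operator and operand counts; the counts themselves are hypothetical, since in practice a static-analysis tool extracts them from the source code.

```java
// Illustrative only: core Halstead measures from (hypothetical) counts.
public class HalsteadDemo {
    public static void main(String[] args) {
        int n1 = 12;   // distinct operators
        int n2 = 20;   // distinct operands
        int N1 = 80;   // total operator occurrences
        int N2 = 95;   // total operand occurrences

        int vocabulary = n1 + n2;                                        // n
        int length = N1 + N2;                                            // N
        double volume = length * (Math.log(vocabulary) / Math.log(2));   // V = N * log2(n)
        double difficulty = (n1 / 2.0) * ((double) N2 / n2);             // D
        double effort = difficulty * volume;                             // E = D * V

        System.out.printf("Vocabulary=%d, Length=%d, Volume=%.1f, Difficulty=%.1f, Effort=%.1f%n",
                vocabulary, length, volume, difficulty, effort);
    }
}
```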

Weighted Micro Function Points (WMFP)

The WMFP is a modern-day successor to classical code sizing methods like COCOMO. WMFP tools parse the entire source code to calculate several code complexity metrics. The metrics include code flow complexity, the intricacy of arithmetic calculations, overall code structure, the volume of comments, and much more.

There are many other quantitative measures that the industry uses in varying degrees. They include Depth of Inheritance, Class Coupling, Lines of Source Code, Lines of Executable Code, and other metrics.

The Attributes of Good Code Quality

We have seen that it is difficult to capture code quality in a single number. However, there are some common-sense attributes of good quality:

  • The code should be functional. It should do what users expect it to do.
  • Every line of code plays a role. There is no bloating and no dead code.
  • Frequently run automated tests are available. They provide assurance that the code is working.
  • There is a reasonable amount of documentation.
  • The code is readable and has sufficient comments. It has well-chosen names for variables, methods, and classes. The design is modular.
  • Making changes, and adding new features, is easy.
  • The product is not vulnerable to cyber-attacks.
  • Its speed is acceptable.

What is Technical Debt?

Technical debt results from a software team prioritizing speedy delivery over perfect code. The team must correct or refactor the imperfect code later.

Technical debt, like the financial version, is not always bad. There are benefits to borrowing money to pay for things you cannot yet afford. Similarly, there is value in releasing code that is not perfect: you gain experience and feedback, and you repay the debt later, though at a higher cost. But because technical debt is not as visible to business leaders as financial debt, people often ignore it.

There are two types of technical debt. A team consciously takes on intentional debt as a strategic decision. Unintentional debt, by contrast, accumulates inadvertently, typically as a monolithic codebase grows and degrades over time.

Again, like financial debt, technical debt is manageable to some extent. Once it grows beyond a point, it affects your business. Then you have no choice but to address it. Technical debt is difficult to measure directly. However, a host of issues inevitably accompany technical debt. You either observe them or find them while testing. Here are some of them:

The Pace of Releasing New Features Slows Down

At some point, teams start spending more time on reducing tech debt (refactoring the code to get it to a state where adding features is not very difficult) than on working on new features. As a result, the product lags behind the competition.

Releases Take Longer

Code suffering from tech debt is difficult to read and understand. Developers who add new features to such a codebase find the work difficult and time-consuming. Release cycle times increase.

Poor Quality Releases

Thanks to technical debt, developers take longer than planned to deliver builds to the QA team. Testers have insufficient time to test thoroughly; therefore, they cut corners. The number of defects that escape to production increases.

Regression of Issues

As technical debt increases, the code base becomes unstable. Adding new code almost inevitably breaks some other functionality. Previously resolved defects resurface.

When you face these issues in your organization, you can be sure that you have incurred a significant amount of technical debt and must pay it off immediately.

How to Get Rid of Technical Debt

The best way of paying off technical debt is to stop adding new features and focus only on refactoring and improving the code. List out all your problems and resolve them one by one. Map sets of fixes to releases so that the team continues its cadence of rolling out regular updates.

When these issues get out of hand, focus exclusively on paying off the technical debt. It is time to stop maintaining and start modernizing.

Related: What is Refactoring in Cloud Migration? Judging Legacy Java Applications for Refactoring or Discarding

Differences in Testing Legacy Code vs. New Code: Best Practices for Testing Change Over Time

Often, the only way to pay off tech debt for enterprise applications is to modernize them. But then, how can the team make sure that tech debt does not accumulate again in the modernized apps? It is less likely to, because testing a modern application differs from testing a legacy app. Let’s look at some of these differences.

Testing Legacy Code

  • Testers have difficulty understanding the complexity of large monolithic applications.
  • Fixing defects may have unintended consequences, so testers often expend a lot of effort to verify even minor code changes. The team must constantly test for regression.
  • Automated testing is beneficial but often has to be built from scratch. Unit tests may not make sense. Instead, integration or end-to-end tests may be more suitable (see the sketch after this list). The team should prioritize the areas to be automated.
  • Developers should add automated unit tests when they work on new features.
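
One widely used technique for building that automated safety net from scratch is the characterization test: a test that records what the legacy code currently does, rather than what a specification says it should do. The sketch below shows the idea; the names are hypothetical and JUnit 5 is assumed as the test framework.

```java
// Hypothetical characterization test: it pins down the current behavior of a
// legacy routine before refactoring, instead of asserting a "correct" spec.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class LegacyPricingCharacterizationTest {

    // Stand-in for an existing legacy routine we do not want to change yet.
    static double legacyDiscount(double orderTotal, boolean loyaltyMember) {
        double discount = orderTotal > 100 ? 0.10 : 0.0;
        if (loyaltyMember) {
            discount += 0.05;
        }
        return orderTotal * (1 - discount);
    }

    @Test
    void recordsCurrentBehaviorForLargeLoyaltyOrder() {
        // Expected value captured from the current implementation, not from a spec.
        assertEquals(102.0, legacyDiscount(120.0, true), 0.001);
    }
}
```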

Testing Modern Applications: Challenges and Advantages

  • Modern applications are often developed as cloud-native microservices. Testing them requires special skills.
  • The software needs to run on several devices, operating systems, and browsers, so managers should plan for this.
  • Setting up a test environment with production-like test data is challenging. Testing must cover performance and scalability.
  • Test teams need to be agile. They must complete writing test plans, automating tests, running them, and generating bug reports within a sprint.
  • UI/UX matters a lot. Testers must pay a lot of attention to usability and look-and-feel.
  • Developers follow Test Driven Development (TDD), and Continuous Integration/Continuous Delivery pipelines run the automated tests on every build. Together, these practices maintain quality and reduce the burden on test teams (a test-first sketch follows this list).
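
As a small illustration of the test-first style (hypothetical names, JUnit 5 assumed), the tests below would be written before the production method and then run automatically on every commit by the CI/CD pipeline.

```java
// Hypothetical TDD-style unit tests for a small order-totaling helper.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class OrderTotalsTest {

    // Production code under test; in TDD this is written only after the tests fail.
    static int sumCents(int[] lineItemCents) {
        int total = 0;
        for (int cents : lineItemCents) {
            if (cents < 0) {
                throw new IllegalArgumentException("negative line item");
            }
            total += cents;
        }
        return total;
    }

    @Test
    void sumsLineItems() {
        assertEquals(1250, sumCents(new int[] {1000, 200, 50}));
    }

    @Test
    void rejectsNegativeAmounts() {
        assertThrows(IllegalArgumentException.class, () -> sumCents(new int[] {-1}));
    }
}
```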

Determining Code Quality: The Easy Way

As we have seen, testing legacy code and assessing its quality is a complex undertaking. We have described some parameters and techniques for qualitatively and quantitatively appraising the quality of the code.

We must either use tools to measure these parameters or make manual judgments, say, by doing code reviews. But each tool only throws light on one parameter. So, we need to use several tools to get a complete picture of the quality of the legacy app.

So, evaluating the quality of legacy code and deciding whether it is worth modernizing requires several tools. And after we have partially or fully modernized the application, we want to calculate the ROI by measuring the quality of the modernized code. Again, this requires multiple tools and is an expensive and lengthy process.

vFunction offers an alternative approach: a custom-built platform that provides the tools to drive modernization assessment projects from a single pane of glass. vFunction can analyze architectural flows, classes, usage, memory, dead code, class linkages, and resources even in megaliths, and uses this analysis to recommend whether modernization makes sense. Contact vFunction today to see how they can help you assess the feasibility of modernizing your legacy applications.