Ten Microsoft Azure Products for Modernizing Applications

The Microsoft Azure Cloud is a popular destination for companies seeking to modernize their legacy applications. Azure provides a variety of fully managed services that make application modernization much easier by shifting much of the infrastructure management workload from developers to the platform itself.

These services include Infrastructure as a Service (IaaS) offerings that allow legacy apps to be directly migrated to the cloud without making major changes to the code. This approach allows developers to take advantage of cloud characteristics, such as increased scalability, performance, stability, and security, without making major investments of time and money.

Azure also offers a suite of fully managed Platform as a Service (PaaS) tools that help developers restructure legacy apps for easy integration into the cloud ecosystem.

To take full advantage of the Azure platform, companies should be aware of the modernization services it provides, and closely examine them to determine how they can best be used. But to put those services into proper perspective, let’s first establish what application modernization is all about.

The Need for Application Modernization

Many companies still use applications developed years or decades ago for some of their most business-critical processing. Often, such applications were built at a time when software development standards were far less sophisticated than they are today, and when apps were not expected to interact with other software. As a result, many legacy apps are limited in two crucial areas:

  • Their architecture is monolithic, meaning that the codebase is essentially a single unit with function implementations and dependencies interwoven throughout. Upgrading such applications is extremely difficult because a single change to any function or feature might ripple through the code in unexpected ways, possibly causing the entire app to fail.
  • Legacy apps are usually self-contained, with little ability to interact with other applications. In today’s open-source, cloud-centered environment, an inability to tap into the functionalities provided by other cloud resources can be a crippling disadvantage.

Legacy application modernization aims at overcoming these deficiencies by converting an app’s codebase from a monolith to microservices.

The Importance of Microservices

For true modernization to occur, a monolithic legacy app codebase must be restructured into a cloud-native, microservices architecture.

Microservices are small chunks of code that operate autonomously and perform a single task. Because they can be deployed and updated independently of one another, microservices allow individual functions to be easily upgraded to meet new requirements without impacting other portions of the application. When a legacy app is restructured into microservices, upgrading it becomes far easier.

Because microservices communicate using APIs based on published, standardized interface definitions, legacy apps that have been converted to microservices can easily be integrated into the cloud ecosystem and tap into the multitude of open-source services it offers.
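To make the idea concrete, here is a minimal sketch of a single-purpose "microservice" exposing a small HTTP/JSON API that any other service can call. It uses only Python's standard library, and the service's endpoint and payload shape are invented for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceHandler(BaseHTTPRequestHandler):
    """A tiny one-task service: look up a price for a SKU (hypothetical API)."""
    def do_GET(self):
        body = json.dumps({"sku": self.path.strip("/"), "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep example output quiet

# Run the service on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), PriceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any other service that knows the published interface can consume it.
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/ABC123") as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)  # {'sku': 'ABC123', 'price': 9.99}
```

Because the contract is just HTTP and JSON, the consumer needs no knowledge of the service's internals, which is what makes independent deployment and upgrades possible.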

Related: Migrating Monolithic Applications to Microservices Architecture

App Modernization Strategies

Most companies involved in legacy application modernization make use of three major approaches:

1. Migrate (Rehost, Replatform)

Because it’s usually not possible to fully modernize all of their legacy apps in one fell swoop, companies often rehost or replatform many of those apps to the cloud with minimal changes. This is the easiest means of reaping some of the benefits of the cloud environment, such as improvements in performance and scalability, without making wholesale alterations to the code.

The problem is that although such “lift and shift” efforts tap into some cloud capabilities, the app itself remains essentially unchanged—if it was monolithic in the data center, it’s monolithic in the cloud, with all the disadvantages of that architecture.

2. Replace

Companies typically replace only a small percentage of their legacy apps, usually when the complexities of upgrading the original code to meet new requirements are too great and starting from scratch seems simpler. But because this option is the most extreme in terms of time, cost, and risk, it’s normally chosen only as a last resort.

3. Modernize (Refactor, Rearchitect, or Rewrite)

Forward-looking companies recognize that their most business-critical legacy apps should not simply be migrated to the cloud, but should be fully modernized by restructuring them as cloud-native microservices. This is normally done using one of the following approaches:

  • Refactoring: Restructure the app’s code to microservices without changing its external behavior. Refactoring allows legacy apps to fit comfortably into a cloud-first environment.
  • Rearchitecting: Create a new application architecture that enables improved performance, greater scaling, and enhanced capabilities.
  • Rewriting: Rewrite the application from scratch, retaining its original scope and specifications.

Refactoring is normally the starting point because it transforms monolithic code into a form that’s simpler, cleaner, and easier for developers to understand. That makes it much easier to update refactored apps with new features and integrate them into the cloud ecosystem. Even if a legacy app must ultimately be rearchitected or rewritten to obtain the needed functionality or performance, the first step is usually to refactor it so that developers can more easily work with it.
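As a toy illustration (the pricing logic below is hypothetical, not from any real application), refactoring extracts tangled concerns into small, independently testable functions while leaving external behavior unchanged:

```python
# Monolithic style: validation, pricing, and discounting tangled together.
def invoice_total_v1(items):
    total = 0.0
    for name, qty, price in items:
        if qty <= 0 or price < 0:
            raise ValueError(f"bad line item: {name}")
        total += qty * price
    if total > 100:
        total *= 0.9  # bulk discount
    return round(total, 2)

# Refactored: each concern in its own small function (a first step toward
# service boundaries); external behavior is identical.
def validate(items):
    for name, qty, price in items:
        if qty <= 0 or price < 0:
            raise ValueError(f"bad line item: {name}")

def subtotal(items):
    return sum(qty * price for _, qty, price in items)

def apply_discount(total):
    return total * 0.9 if total > 100 else total

def invoice_total_v2(items):
    validate(items)
    return round(apply_discount(subtotal(items)), 2)

order = [("widget", 10, 12.5), ("gadget", 2, 3.0)]
assert invoice_total_v1(order) == invoice_total_v2(order) == 117.9
```

The preserved assertion is the essence of refactoring: the same inputs produce the same outputs before and after, so existing tests continue to pass while the structure becomes easier to change.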

Related: The CIO Guide to Modernizing Monolithic Applications

Azure App Modernization Offerings

Now that we understand what legacy app modernization is all about, let’s look at the tools Azure offers to facilitate that process.

Azure App Service (AAS)

Azure App Service enables migration of apps directly to the cloud. As a PaaS platform, Azure App Service takes over management of all infrastructure functions, such as the operating system and runtime or middleware components, relieving users of such concerns. AAS provides a set of tools that allow applications written in the most popular programming languages, including .NET, Java, Ruby, Node.js, PHP, and Python, to be adapted into an essentially Azure-native form.

Azure Spring Cloud

Azure Spring Cloud, now renamed to Azure Spring Apps, is a fully managed PaaS offering that facilitates seamless integration of Spring apps with Azure. It is especially important in restructuring monolithic legacy apps as microservices. As the digital publication AiThority notes:

“Spring is the most powerful and widely used Java framework for building custom Cloud-based microservices for web and app… [Azure Spring Cloud] enables any coder to run complex Spring apps at scale, completely removing the pain and risk of managing Spring architecture in virtualized setup.”

Azure Kubernetes Service (AKS)

According to Microsoft, cloud-native apps “are often composed of distributed microservices hosted in containers.” The most widely used container orchestration platform is Kubernetes. However, Kubernetes deployment and management can be quite complex. Azure Kubernetes Service relieves developers of much of the operational workload and is, Microsoft says, the quickest way to develop and deploy cloud-native apps.

Azure Container Apps (ACA)

Azure Container Apps is a fully managed serverless container service that enables users to build, deploy, and run microservices and containerized apps at scale in a serverless environment. ACA automatically manages infrastructure and complex container orchestrations. ACA applications, as well as individual microservices, can dynamically scale up or down, in or out, to meet changing requirements.

Azure SQL Database / Azure Cosmos DB

Azure SQL Database is Azure’s native SQL database, while Azure Cosmos DB is a serverless NoSQL database. Both are fully managed PaaS services that handle all infrastructure-related issues, such as updates, patches, backups, and monitoring.

  • Azure SQL Database is essentially a DBaaS (database as a service) offering. It uses the same underlying DB engine as SQL Server, allowing easy migration from an on-premises SQL Server database to Azure.
  • Azure Cosmos DB provides APIs compatible with popular NoSQL databases such as MongoDB and Apache Cassandra, while offering single-digit-millisecond performance, instant and automatic scalability, and an SLA-backed 99.999% availability guarantee.

Azure Database Migration Service

Azure Database Migration Service is a fully managed service that enables seamless migration of data from relational database sources to Azure. Supported databases include SQL Server, MySQL, PostgreSQL, and MongoDB.

Azure API Management (APIM)

Azure API Management is a multi-cloud API management platform. C# Corner describes it this way:

“API Management (APIM) is a way to create consistent and modern API gateways for existing back-end services. API Management helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services”

With APIM, your previously isolated legacy apps can be fully integrated into the cloud ecosystem.

Azure DevOps

Azure DevOps is a SaaS (Software as a Service) offering that provides services and tools to support DevOps teams in building cloud-native applications. Microsoft describes it this way:

“Azure DevOps supports a collaborative culture and set of processes that bring together developers, project managers, and contributors to develop software.”

Azure Application Gateway

Azure Application Gateway is a PaaS web traffic (layer 7) load balancer that maximizes throughput by automatically distributing incoming web traffic across multiple backend servers.

Azure Key Vault

Azure Key Vault allows centralized, secure storage of information (such as passwords, API keys, cryptographic keys, and security certificates) for which access must be controlled. Key Vault eliminates the need to include security information (such as key strings) as part of an application’s code.

How vFunction Accelerates Azure App Modernization

Ideally, a company would want to modernize all of its important legacy apps. But because traditional refactoring is so time-, labor-, and risk-intensive, developers often settle for migrating many critical apps to the cloud basically as-is. By doing so they forgo most of the benefits of the cloud ecosystem.

Now, however, vFunction has changed that equation. Its AI-driven modernization platform substantially automates the process, significantly increasing speed, lowering risk, and allowing a greater proportion of a company’s legacy app portfolio to be refactored into microservices.

In early 2022, Microsoft explicitly recognized the value vFunction brings to the Azure app modernization table. According to a recent devops.com article:

“Microsoft has teamed up with vFunction to make it easier to convert monolithic Java applications into a set of microservices that can be deployed on the Microsoft Azure cloud… the goal is to eliminate the heavy lifting currently required to shift Java applications into the cloud.”

If you’d like to experience first-hand how the Azure/vFunction partnership can help modernize your company’s legacy applications, please schedule a demo today.

Ten AWS Products for Modernizing Your Monolithic Applications

In today’s rapidly changing marketplace environment, companies face an imperative to modernize their business-critical legacy applications. That’s why, as the State of the CIO Study 2022 notes, modernizing legacy systems and applications is currently among the top priorities of corporate CIOs.

In most instances such modernization involves transferring legacy apps to the cloud, which is now the seedbed of technological innovation. Once housed in the cloud, and adapted to conform to the technical norms of that environment, legacy apps can improve their functionality, performance, flexibility, security, and overall usefulness by tapping into a sophisticated software ecosystem that offers a wide variety of preexisting services.

Amazon Web Services (AWS), with a 33% share of the market, is the most widely used cloud service platform. AWS provides users with a wide range of fully managed cloud services that can make modernizing legacy apps far easier than it otherwise would be. These include container management services, Kubernetes services, database and DB migration services, application migration services, API and Security management services, support for serverless functions, and more.

In this article, we want to take a brief look at ten of these key AWS services that companies should research and test to determine how they can best be used in modernizing the organization’s suite of legacy apps. But before looking at the AWS services themselves, we need to understand exactly what modernization aims to achieve.

What Application Modernization is All About: Transforming Monoliths into Microservices

Gartner describes application modernization this way:

“Application modernization services address the migration of legacy to new applications or platforms, including the integration of new functionality to provide the latest functions to the business.”

The major problem with most legacy applications is that the way they are architected makes “the integration of new functionality” extremely difficult. That’s because such apps are typically monolithic, meaning that the codebase is basically a single unit with functions and dependencies interwoven throughout.

Any single functional change could ripple through the code in unexpected ways, which makes adapting the app to add new functions or to integrate with other systems very difficult and risky.

A microservices architecture, on the other hand, is expressly designed to make updating the application easy. Each microservice is a separate piece of code that performs a single task; it is deployed and changed independently of any others. This approach allows individual functions to be quickly and easily updated to meet new requirements without impacting other portions of the application.

The fundamental purpose of legacy application modernization, then, is to restructure the application’s codebase from a monolith to microservices.

Related: Migrating Monolithic Applications to Microservices Architecture

The Importance of Refactoring

How does that restructuring take place? In most instances it begins with refactoring. The Agile Alliance defines refactoring this way:

“Refactoring consists of improving the internal structure of an existing program’s source code, while preserving its external behavior.”

Refactoring allows developers to transform a legacy codebase into a cloud-native microservices architecture while not altering its external functionality or user interface. But because the refactored application can fully interoperate with other resources in the cloud ecosystem, updates that were previously almost impossible now become easy. For that reason, refactoring will normally be a key element of any legacy application modernization process.

The Migration “Lift and Shift” Trap

A report from McKinsey highlights a disturbing reality:

“Thus far, modernization efforts have largely failed to generate the expected benefits. Despite migrating a portion of workloads to the cloud, around 80 percent of CIOs report that they have not attained the level of agility and business benefits that they sought through modernization.”

To a significant degree this failure can be attributed to organizations confusing migration with modernization. Far too often companies have focused on simply getting their legacy applications moved to the cloud, as if that in itself constituted a significant level of modernization. That is most emphatically not the case.

The problem is that just removing an application from a data center and rehosting it in the cloud (often called a “lift and shift”) does nothing to change the fundamental nature of the codebase. If it was a monolith before being migrated, it remains a monolith once it gets to the cloud, and retains all the disadvantages of that architecture.

It’s only when a legacy application is not only migrated to the cloud but is refactored from a monolith to a microservices architecture that true modernization can begin. That’s why the modernization services provided by AWS must be evaluated in light of how they facilitate not just the migration, but more importantly the transformation of legacy applications.

Related: Accelerate AWS Migration for Java Applications

Key Modernization Services from AWS

For each of these important AWS services, we’ll provide a brief description along with a link for further information.

1. Amazon EC2 (Elastic Compute Cloud)

Amazon EC2 provides on-demand, resizable virtual servers to run your apps. If, for example, you’ve been running a particular application on a physical server in your data center, you can migrate it to the cloud by launching an EC2 instance to run it. Rather than purchasing and maintaining your own server hardware, you pay Amazon by the second for each instance you run.

2. Amazon ECS (Elastic Container Service)

Amazon ECS is a container orchestration service that allows you to run containerized apps in the cloud without having to configure an environment for the code to run in. It can be particularly helpful in running microservices apps by facilitating integration with other AWS services. Although container management is normally complex and error-prone, the distinguishing feature of ECS is its “powerful simplicity” that allows users to easily deploy, manage, and scale containerized workloads in the AWS environment.

3. Amazon EKS (Elastic Kubernetes Service)

Kubernetes is an open-source container-orchestration system with which you can automate your containerized application deployments. Amazon EKS allows you to run Kubernetes on AWS without having to install, operate, or maintain your own Kubernetes infrastructure. Applications running in other Kubernetes environments, whether in an on-premises data center or the cloud, can be directly migrated to EKS with no modifications to the code.

4. Amazon VPC (Virtual Private Cloud)

Amazon VPC allows you to define a virtual network (similar to a traditional network you might run out of your data center) within an isolated section of the AWS cloud. Other AWS resources, such as EC2 instances, can be enabled within the network, and you can optionally connect your VPC network with other networks or the internet. All AWS accounts created after December 4, 2013 come with a default VPC that has a default subnet (range of IP addresses) in each Availability Zone. You can also create your own VPC and define your own subnet IP address ranges.
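The subnet arithmetic involved is ordinary CIDR math. As a sketch (using a hypothetical `10.0.0.0/16` VPC block and only Python's standard library), carving a VPC address range into per-Availability-Zone subnets looks like this:

```python
import ipaddress

# Carve a hypothetical VPC CIDR block into /24 subnets,
# e.g. one subnet per Availability Zone.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))               # 256 possible /24 subnets in a /16
print(subnets[0])                 # 10.0.0.0/24
print(subnets[0].num_addresses)   # 256 addresses per /24
```

Note that in an actual AWS subnet, five of those addresses (the first four and the last) are reserved by AWS and are not usable by your instances.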

5. AWS Database Migration Service (DMS)

AWS DMS allows you to migrate your databases quickly and securely to AWS. Both homogeneous (e.g. Oracle to Oracle) and heterogeneous (e.g. Oracle to MySQL) migrations are supported. You can set DMS up for either a one-time migration or for continuing replication in which changes to the source DB are continuously applied in real time to the target DB.

6. Amazon S3 / Aurora / DynamoDB / RDS

AWS provides a range of database and data storage services that can simplify the process of migrating data to the cloud:

  • Amazon S3 (Simple Storage Service) is a high-speed, highly scalable object storage service, widely used for data lakes, online backup, and archiving in AWS.
  • Amazon Aurora is “a fully managed relational database engine that’s compatible with MySQL and PostgreSQL.”
  • Amazon DynamoDB is “a fully managed, serverless, key-value NoSQL database” that provides low latency and high scalability.
  • Amazon RDS (Relational Database Service) is a managed SQL database service that supports the deployment, operation, and maintenance of seven relational database engines: Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server.

7. Amazon API Gateway

Amazon API Gateway enables developers to securely create, publish, and manage APIs to connect non-AWS software to AWS-native applications and resources. That kind of integration, which can substantially enhance the functionality of legacy applications, is a fundamental element of the application modernization process.

8. AWS IAM (Identity and Access Management)

AWS IAM allows you to securely manage AWS access permissions for both users and workloads. You can use IAM policies to specify who (or what workloads) can access specific services and resources, and under what conditions. IAM is a feature of your AWS account, and there is no charge to use it.
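For illustration, an IAM identity-based policy is just a JSON document. A minimal read-only S3 policy might look like the following, built here as a Python dict (the bucket name is hypothetical):

```python
import json

# A minimal IAM policy granting read-only access to one S3 bucket.
# "example-bucket" is a placeholder, not a real resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Attached to a user, group, or role, a policy like this answers IAM's three questions: who (the attached principal), what (the listed actions), and on which resources.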

9. AWS Lambda

AWS Lambda is an event-driven compute service that lets you run code as stateless functions without provisioning or managing servers or storage, an approach known as Function as a Service (FaaS). With those tasks handled automatically, developers can focus on their application code. Lambda supports several popular programming languages, including C#, Python, Java, and Node.js. Lambda runs a function only when triggered by an appropriate event, and can automatically scale to handle anything from a few requests per day to thousands of requests per second.
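A minimal Python Lambda handler is sketched below; AWS invokes the function with an event dict and a context object. The event shape here is invented for illustration, and the invocation is simulated locally (in AWS, a trigger such as an API Gateway request or an S3 event would supply the event):

```python
import json

# Minimal Lambda handler: receives an event dict and a context object,
# returns a response; there is no server to provision or manage.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally simulated invocation for illustration:
result = lambda_handler({"name": "dev"}, None)
print(result["body"])  # {"message": "Hello, dev!"}
```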

10. Amazon Migration Hub Refactor Spaces (MHRS)

Amazon describes Migration Hub Refactor Spaces as “the starting point for customers looking to incrementally refactor applications to microservices.” MHRS orchestrates AWS services to create an environment optimized for refactoring, allowing modernization teams to easily set up and manage an infrastructure that supports the testing, staging, deployment, and management of refactored legacy applications.

How vFunction Works with MHRS

vFunction and MHRS work together to refactor monolithic legacy applications into microservices and to safely stage, migrate, and deploy those microservice applications to AWS. Developers use MHRS to set up and manage the environment in which the refactoring process is carried out, while the vFunction Platform uses its AI capabilities to substantially automate both the analysis and refactoring of legacy applications.

The result of this collaboration is a significant acceleration of the process of modernizing legacy apps and safely deploying them to the AWS cloud. To experience first-hand how vFunction and AWS can work together to help you modernize your legacy applications, schedule a demo today.

Five Insights for CIOs to Understand Application Modernization

This ADTMag webinar, with guest speakers Moti Rafalin, CEO of vFunction, and KellyAnn Fitzpatrick, Senior Analyst at RedMonk, reveals critical insights, backed by survey data, that can help CIOs and architects better understand, plan, prioritize, and succeed with application modernization projects.


Insight #1 – Application Modernization Must Be A Dynamic, Not Static, Imperative


Kelly’s first insight emphasized that modernizing applications is not a “one-and-done” project. It’s an ever-moving target: if you modernized a 15-year-old application in 2018, you can expect to need further modernization initiatives to catch up to the expectations, technologies, and platforms of 2022 and beyond.

The term “continuous modernization” is key here. It refers to a highly valuable set of capabilities that elite software organizations have internalized: the ability to maintain fast innovation and development cycles, rapidly detect and eliminate technical debt, and avoid poor architectural decisions and coding patterns.

Insight #2 – In the Cloud ≠ Cloud Native


Kelly then dug deeper into why it’s not enough to simply have workloads running in the cloud. Migrating existing applications to the cloud, often called “lift and shift,” is a short-term tactical action that doesn’t solve the major challenges of development velocity, technical debt accumulation, and speed of innovation. Migration solves certain problems, such as hosting, baked-in security, and cost controls, but it also introduces new ones if the application is still a monolith, now simply running in the cloud.

The adoption of containers, managed services, serverless workloads, and new paradigms for building, integrating, and deploying applications means that something more substantial is needed: actual modernization (refactoring, rewriting, or rearchitecting) of logical domains, business logic, APIs, and more is where strategic value is achieved.

Insight #3 – Microservices Have Trade-offs That Are Worth It


Next, Kelly focused on why modernizing a monolith into a microservices architecture isn’t easy: it requires a major mental shift in how development, testing, and deployment are done. The benefits of microservices have been discussed ad nauseam for years: increased velocity, better flexibility, and faster development and deployment.

However, these benefits come with a new set of challenges that IT organizations didn’t have to worry about as much before: overall application complexity, API management, event-driven communication, and distributed data management are just a few examples. Despite these trade-offs, elite technology organizations have made it a priority to succeed with a microservices architecture (when it makes sense).

Insight #4 – Technology Is Great, But Have You Tried Talented People?


If the decision to modernize didn’t directly impact the development team around it, things might be simpler, Kelly shared. However, we cannot ignore the human aspect: in fact, 97% of survey respondents said that organizational pushback against modernization is to be expected. Whether the cause is cost, risk, fear of change, or fear of failure, it’s often difficult to get full team support for a large-scale modernization project.

How does the monolithic structure of your business applications and organization influence the hiring and onboarding of new employees? Are they excited to spend their first 6 months on the job trying to understand a 15-year-old monolith with 10 million lines of code? More importantly, what does this mean for retaining valuable staff in the days of the Great Resignation?  

Insight #5 – Java (and .NET) Are Still Vital


Finally, Kelly reminded us that Java is still vital and evolving, adapting to modern cloud architectures in new and innovative ways. Newer programming languages like Scala, Kotlin, and Go may be popular choices for greenfield projects, and they have indeed been used at some of the world’s best-known companies: Twitter, Airbnb, Google, and many others have embraced alternative languages to address specific challenges.

Yet, as RedMonk’s programming language research continues to show, new investments and advancements in Java, combined with the fact that the majority of monolithic enterprise systems were written in languages like Java (plus .NET and C#), keep Java highly relevant today. These languages are vital to many leaders in the finance, healthcare, automotive, and manufacturing industries. When you’re a financial services provider processing $1 billion in transactions every day, you cannot simply turn everything off and adopt a new set of technologies. This is where application modernization, and the strategic, long-term impact of refactoring and rearchitecting, pays off.

Next Steps

To learn more, we invite you to read the 2022 research report Why App Modernization Projects Fail, and to check out the vFunction Architectural Observability Platform, a purpose-built tool that analyzes, measures, and identifies technical debt so you can prioritize your efforts and build a business case for modernization.

Risk-Adjusted ROI Model for Modernization Projects

Over the past few years, we at vFunction have been focusing on the most significant problem inhibiting the third wave of cloud adoption: app modernization.

The first wave of cloud adoption included new apps written for the cloud. The second wave focused on lifting and shifting the low-hanging fruit: apps that could relatively easily be migrated to the cloud without code or architectural changes. The third wave, which we are experiencing today, involves modernizing massive legacy IT estates to take advantage of modern cloud services.

When we say modernization, we refer to refactoring or rewriting applications to transform them from a monolithic architecture to microservices, allowing organizations to eliminate technical debt, increase engineering velocity, onboard new developers faster, and increase the scalability of applications.

In recent research we conducted, we found what many executives know firsthand: over 70% of application modernization projects fail, they last at least 16 months (with 30% lasting longer than 24 months), and they cost more than $1.5M on average.

No wonder executives are reluctant to put their careers on the line and embark on these projects. The problem is that they are stuck between a rock and a hard place. If they don’t modernize, they may lose their jobs because they can’t address business needs, their development isn’t agile, and they aren’t supporting the business’s vital need to stay competitive. If they do embark on these modernization projects and fail, they may lose their jobs as well.

We believe that modernization assisted by AI and automation dramatically disrupts that convention and rescues executives from the difficult dilemma of “modernize or die.” We’ve created a model that we believe supports this claim.

The research reveals that executives struggle with the length of projects, as well as the cost of projects. We find that using AI and automation to power modernization reduces the cost by 50%-66% and accelerates time to market by 10x-15x. We see this with our customers and have case studies to show this to be true (see our case studies). However, the research also reveals that risk is a very real obstacle for modernization projects, and this ROI model doesn’t address the significant risk reduction that comes with AI-assisted modernization. 

One could argue that even without incorporating the risk factor the savings and acceleration of AI and automation justify the project, and I would agree with that, but when incorporating the risk factor it becomes a no-brainer.

Let’s use some numbers to substantiate this claim (see the chart below for the calculation). 

Let’s say a medium-sized modernization project involves an app of about 7,000 classes (medium complexity; the number of classes is a good proxy for application complexity and potential technical debt). Done manually, it would cost about $1.8M, based on 6 FTEs for 2 years, which falls within the average modernization cost and length reported in the Wakefield research.

The same project, using modern AI and automation tools, takes only 1 year and requires only 2 FTEs (one-third the resources in half the time), based on our experience at vFunction.

Comparing the total cost of the two projects while ignoring the risk factor, the AI- and automation-powered project costs less than half as much. That alone seems compelling, but if we incorporate the risk factor to calculate the risk-adjusted cost, we get very different numbers.

The $1.8M manual project has only a 30% success rate (conservatively, based on the research), which means we divide $1.8M by 0.3 to get the true risk-adjusted cost: a $6M project.

Intuitively, this higher cost means the project will most likely not finish in 2 years with 6 FTEs (12 FTE-years of effort), but will instead take roughly double the time with far more resources, arriving at a true cost of $6M.

For the AI- and automation-powered project, we can assume a 90% success rate; dividing the $765,000 cost by 0.9 yields a true project cost of $850,000.

Now, comparing $6m to $850K yields a massive ROI of roughly 700%.
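The arithmetic above can be sketched in a few lines of code. The figures (costs, success rates) are the ones assumed in this post, not universal constants:

```python
# Risk-adjusted cost model for a modernization project, using this post's figures.
def risk_adjusted_cost(nominal_cost, success_rate):
    """Divide the nominal project cost by the probability of success."""
    return nominal_cost / success_rate

manual_cost = 1_800_000   # 6 FTEs x 2 years (assumed ~$150K per FTE-year)
ai_cost     = 765_000     # AI/automation-assisted project cost used in this post

manual_adjusted = risk_adjusted_cost(manual_cost, 0.30)  # 30% success rate
ai_adjusted     = risk_adjusted_cost(ai_cost, 0.90)      # 90% success rate

print(f"Manual, risk-adjusted: ${manual_adjusted:,.0f}")       # $6,000,000
print(f"AI-assisted, risk-adjusted: ${ai_adjusted:,.0f}")      # $850,000
print(f"Cost ratio: {manual_adjusted / ai_adjusted:.1f}x")     # ~7.1x
```

Dividing by the success rate is a simple expected-value adjustment: a project with a 30% chance of success effectively costs its budget several times over before it lands.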

[Chart: risk-adjusted ROI model for modernization projects]

Modernization is indeed risky, lengthy, and costly, but incorporating AI and automation radically changes the economics and risk of these projects and can assist CIOs and CTOs in embarking on modernization projects that are controlled, measured, and have significantly higher chances of success.

Using Machine Learning to Measure and Manage Technical Debt

This post was originally featured on TheNewStack, sponsored by vFunction.

If you’re a software developer, then “technical debt” is probably a term you’re familiar with. Technical debt, in plain words, is an accumulation over time of lots of little compromises that hamper your coding efforts. Sometimes, you (or your manager) choose to handle these challenges “next time” because of the urgency of the current release.

This cycle continues for many organizations until a true breaking point or crisis occurs. When software teams finally decide to confront technical debt head-on, these brave software engineers may discover that the situation has become so complex that they do not know where to start.

The difficult part is that decisions regarding technical debt have to balance the short-term and long-term implications of accumulating such debt, which underscores the need to properly assess and address it when planning development cycles.

The real-world implications of this are seen in a recent survey of 250 senior IT professionals, in which 97% predicted organizational pushback to app modernization projects, with the primary concern of both executives and architects being “risk.” For architects, we can think of this as “technical risk” — the threat that changing one part of an application will have unpredictable and unwelcome downstream effects elsewhere.

The Science Behind Measuring Technical Debt

In their seminal article from 2012, “In Search of a Metric for Managing Architectural Technical Debt”, authors Robert L. Nord, Ipek Ozkaya, Philippe Kruchten and Marco Gonzalez-Rojas offer a metric to measure technical debt based on dependencies between architectural elements. They use this method to show how an organization should plan development cycles while taking into account the effect that accumulating technical debt will have on the overall resources required for each subsequent version released.

Though this article was published nearly 10 years ago, its relevance today is hard to overstate. This March, it received the “Most Influential Paper” award at the 19th IEEE International Conference on Software Architecture.

In this post, we will demonstrate that not only is technical debt key to making decisions regarding any specific application, it is also important when attempting to prioritize work between multiple applications — specifically, modernization work.

Moreover, we will show a method that can be used to not only compare the performance of different design paths for a single application, but also compare the technical debt levels of multiple applications at an arbitrary point in their development life cycle.

Accurately Measuring Systemwide Technical Debt

In the IEEE article mentioned above, technical debt is calculated using a formula that relies mainly on the dependencies between architectural elements in the given application. It is worth noting that the article does not define what constitutes an architectural element or how to identify such elements when approaching an application.

We took a similar approach and devised a method to measure the technical debt of an application based on the dependency graph between its classes. The dependency graph is a directed graph G = (V, E), in which V = {c1, c2, …} is the set of all classes in the application, and an edge e = ⟨c1, c2⟩ ∈ E exists between two vertices if class c1 depends on class c2 in the original code. We perform multifaceted analysis on the graph to eventually arrive at a score that describes the technical debt of the application. Here are some of the metrics we extract from the raw graph:

  1. Average/median outdegree of the vertices on the graph.
  2. Top N outdegree of any node in the graph.
  3. Longest paths between classes.
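To make these metrics concrete, here is a minimal sketch over a small, hypothetical class-dependency graph (the class names are invented for illustration, and the longest-path search assumes the graph is acyclic, as dependency graphs generally should be):

```python
from collections import defaultdict
from statistics import mean, median

# Hypothetical class-dependency graph: an edge c1 -> c2 means c1 depends on c2.
deps = {
    "OrderService":   ["OrderRepo", "PaymentGateway", "Logger"],
    "PaymentGateway": ["Logger"],
    "OrderRepo":      ["Logger"],
    "Logger":         [],
}

# Outdegree metrics: how many classes each class depends on.
outdegrees = {cls: len(targets) for cls, targets in deps.items()}
print("average outdegree:", mean(outdegrees.values()))                   # 1.25
print("median outdegree:",  median(outdegrees.values()))                 # 1.0
print("top outdegree:", max(outdegrees.items(), key=lambda kv: kv[1]))   # OrderService, 3

def longest_path(node, graph):
    """Length (in edges) of the longest dependency chain starting at `node`.
    Assumes an acyclic graph."""
    if not graph[node]:
        return 0
    return 1 + max(longest_path(t, graph) for t in graph[node])

# OrderService -> OrderRepo -> Logger is a chain of 2 edges.
print("longest path from OrderService:", longest_path("OrderService", deps))  # 2
```

A class with an unusually high outdegree (here, `OrderService`) is exactly the kind of “god class” candidate the next section describes.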

Using standard clustering algorithms on the graph, we can identify communities of classes within the graph and measure additional metrics on them, such as:

  1. Average outdegree of the identified communities.
  2. Longest paths between communities.

The hypothesis here is that by using these generic metrics on the dependency graphs, we can identify architectural issues that represent real technical debt in the original code base. Moreover, by analyzing dependencies on these two levels — class and community — we give a broad interpretation of what an architectural element is in practice without attempting to formally define it.

To test this method, we created a data set of over 50 applications from a variety of domains — financial services, eCommerce, automotive and others — and extracted the aforementioned metrics from them. We used this data set in two ways.

First, we correlated specific high-ranking occurrences of outdegrees and long paths with local issues in the code, for example, identifying god classes by their high outdegree. This proved effective and increased our confidence that the approach is valid for identifying local technical debt issues.

Second, we attempted to provide a high-level score that can be used not only to identify technical debt in a single application, but also to compare technical debt between applications and to use it to help prioritize which should be addressed and how. To do that, we introduced three indexes:

  1. Complexity — represents the effort required to add new features to the software.
  2. Risk — represents the potential risk that adding new features has on the stability of existing ones.
  3. Overall Debt — represents the overall amount of extra work required when attempting to add new features.

From Graph Theory to Actionable Insights

We manually analyzed the applications in our data set, employing the expert knowledge of the individual architects and developers in charge of product development, and scored each application’s complexity, risk and overall debt on a scale of 1 to 5, where a score of 1 represents little effort required and 5 represents high effort. We used these benchmarks to train a machine learning model that correlates the values of the extracted metrics with the indexes and normalizes them to a score of 0 to 100.
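As a simplified illustration of the score normalization step (the actual system uses a trained ML model, not a fixed linear mapping), an expert benchmark score on the 1-to-5 scale can be mapped onto 0-to-100 with min-max scaling:

```python
def normalize_index(score, lo=1, hi=5):
    """Map an expert benchmark score (1-5 scale) onto a 0-100 index.
    A minimal min-max sketch; the method described in this post trains
    an ML model against such benchmarks rather than applying a fixed formula."""
    return round(100 * (score - lo) / (hi - lo))

print(normalize_index(1))  # 0   (little effort required)
print(normalize_index(3))  # 50
print(normalize_index(5))  # 100 (high effort)
```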

This allows us to use this ML model to issue a score per index for any new application we encounter, enabling us to analyze entire portfolios of applications and compare them to one another and to our precalculated benchmarks. The following graph depicts a sample of 21 applications, demonstrating the relationship between the aforementioned metrics:

[Chart: relationship between the aforementioned metrics across a sample of 21 applications]

The overall debt levels were then converted into currency units, depicting the level of investment required to add new functionality into the system. For example, for each $1 invested in application development and innovation, how much goes specifically to maintaining architectural technical debt? This is intended to help organizations build a business case for handling and removing architectural technical debt from their applications.

We have shown a method to measure the technical debt of applications based on the dependencies between its classes. We have successfully used this method to both identify local issues that cause technical debt as well as to provide a global score that can be compared between applications. By employing this method, organizations can successfully assess the technical debt in their software, which can lead to improved decision-making around it.

Cloud Modernization Approaches: Choosing Between Rehost, Replatform, or Refactor

In an era when continual digital transformation is forcing marketplaces to evolve with lightning speed, companies can’t afford to be held back by functionally limited and inflexible legacy systems that don’t adapt well to today’s requirements. Software applications that are hard to maintain and support, and that cannot easily incorporate new features or integrate with other systems are a drag on any company’s marketplace agility and ability to innovate.

Yet, many legacy applications are still performing necessary and business-critical functions. Because they remain indispensable to the organization’s daily operations, they cannot simply be abandoned. As a result, companies face a very real imperative to modernize aging applications to meet the rapidly shifting requirements of the marketplace. And for a growing number of them, that means modernizing those applications for the cloud.

Why Companies are Modernizing for the Cloud

Today the cloud is where the action is—where the leading edge of technological innovation is taking place, and where there is an established ecosystem that software can tap into to make use of infrastructure hosting, scaling, and security capabilities that don’t have to be programmed into the application itself.

It’s that ability to leverage a wide-ranging and technically sophisticated ecosystem that makes the cloud the perfect avenue for modernizing a company’s legacy applications.

Gartner estimates that by 2025, 90% of current monolithic applications will still be in use, and that compounded technical debt will consume more than 40% of the current IT budget.

Because software that cannot interoperate in that environment will lose much of its utility, modernizing legacy applications is an urgent imperative for most companies today.

When legacy applications are moved to the cloud and modernized so that they become cloud-enabled, they gain improvements in scalability, flexibility, security, reliability, and availability. What’s more, they also gain the ability to tap into a multitude of already existing cloud-based services, so that developers don’t have to continually reinvent the wheel.

Related: Why Cloud Migration Is Important

Once a company decides that modernization of its legacy applications is a high priority, the next question is how to go about it.

Approaches to Cloud Modernization of Legacy Applications

Gartner has identified seven options that may be useful for modernizing legacy systems in the cloud: encapsulate, rehost, replatform, refactor, re-architect, rebuild, and replace. Experience has shown that for companies beginning their modernization journey, the most viable options are rehosting, replatforming, and refactoring. Let’s take a brief look at each of these.

1. Rehosting (“Lift and Shift”)

Rehosting is the most commonly used approach for bringing legacy applications to the cloud. It involves transferring the application as-is, without changing the code at all, from its original environment to a cloud hosting platform.

For example, one of the most frequently performed rehosting tasks is moving an application from a physical server in an on-premises data center to a virtual server hosted on a cloud service such as AWS or Azure.

Rehosting is the simplest, easiest, quickest, and least risky cloud migration method because there’s no new code to be written and tested. And the demands for technical expertise in the migration team are minimal.

The downside to rehosting is the flipside of its advantages—because no changes are made to the code or functionality of the application, even though it now runs in the cloud, it is no more able to take advantage of cloud-native capabilities than it was in its original environment.

On the other hand, simply by being hosted in the cloud the application gains some significant advantages:

Advantages of Rehosting

  • Enhanced security—cloud service providers (CSPs) provide superior data security because their business model depends on it.
  • Greater reliability—CSPs typically offer strong availability guarantees, up to “five 9’s” (99.999%), in their Service Level Agreements.
  • Global access—since the user interface (UI) of a web-hosted application is normally delivered through a browser (although the look and operation of the UI may be unchanged), users are no longer tied to dedicated terminals, but, with proper authorization, can access the system through any internet-enabled device anywhere in the world.
  • Minimum risk—because there are no changes to the codebase, there’s little chance of new bugs being introduced into the application during migration.
  • It’s a good starting point—having the application already hosted in the cloud is a good first step toward further modernization efforts.

Disadvantages of Rehosting

  • No improvements in functionality—the code runs exactly as it always has. There are no upgrades in functionality or in the ability to integrate with other cloud-based systems or take advantage of the unique capabilities available to cloud-enabled applications. For example, although cloud-native applications are inherently highly scalable, a legacy application rehosted to the cloud may lack the ability to scale by auto-provisioning additional resources as needed.
  • Potential latency and performance issues—when moving an application unchanged from an on-premises data center to the cloud, latency and performance issues may arise due to inherent cloud network communication delays.
  • Potentially higher costs—while running applications in the cloud that are not optimized for that environment may decrease CapEx spending (you don’t have to purchase or maintain hardware), it may actually increase monthly OpEx spending because of excessive cloud resource usage.

When to Use Rehosting

Rehosting may be the best choice for companies that:

  • are just beginning to migrate applications to the cloud, or
  • need to move the application to the cloud as quickly as possible, or
  • have a high level of concern that migration hiccups might disrupt the workflows served by the application

Because of its simplicity, rehosting is most commonly adopted by companies that are just beginning to move applications to the cloud.

2. Replatforming

As with rehosting, replatforming moves legacy applications to the cloud basically intact. Unlike rehosting, however, it makes minimal changes to the codebase so the application can take advantage of some of the advanced capabilities available to cloud-enabled software, such as containers, DevOps best practices, and automation, as well as improvements in functionality or in the ability to integrate with other cloud resources.

For example, changes might be instituted during replatforming to enable the application to access a modern cloud-based database management system or to increase application scalability through autoscaling.

Advantages of Replatforming

Because it’s basically “rehosting-plus,” replatforming shares the advantages associated with rehosting. Its greatest additional advantage is that it enables the application to be modestly more cloud-compatible, though still falling far short of cloud-native capabilities. But even relatively small improvements, such as the ability to automatically scale as needed, can have a significant impact on the performance and usability of the application.

Replatforming allows you to upgrade an application’s functionality or integration with other systems through a series of small, incremental changes that minimize risk.

Disadvantages of Replatforming

Changes to the codebase bring with them a risk of introducing new code that might disrupt operations. Avoiding such mistakes requires a higher level of expertise in the modernization team, with regard to both the original application and the cloud environment onto which it is being replatformed. It’s easy to get into trouble when inexperienced migration teams attempt to replace functions in the original codebase with supposedly equivalent cloud functions they don’t really understand.

When to use Replatforming

Replatforming is a good option for organizations that want to work toward increasing the cloud compatibility of their legacy applications on an incremental basis, and without the risks associated with more comprehensive changes.

3. Refactoring

According to Agile Alliance’s definition:

“Refactoring consists of improving the internal structure of an existing program’s source code, while preserving its external behavior.”
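That definition can be illustrated with a tiny, hypothetical example: the internal structure changes, but the external behavior (same inputs, same outputs) does not. The function names and logic below are invented purely for illustration:

```python
# Before: one tangled function mixing validation, calculation, and formatting.
def invoice_total_v1(items):
    total = 0
    for qty, price in items:
        if qty < 0 or price < 0:
            raise ValueError("negative quantity or price")
        total += qty * price
    return f"${total:.2f}"

# After: the same external behavior, restructured into focused helpers
# that are easier to understand, test, and modify independently.
def validate(items):
    for qty, price in items:
        if qty < 0 or price < 0:
            raise ValueError("negative quantity or price")

def subtotal(items):
    return sum(qty * price for qty, price in items)

def invoice_total_v2(items):
    validate(items)
    return f"${subtotal(items):.2f}"

# External behavior is preserved: both versions agree on every input.
items = [(2, 9.99), (1, 5.00)]
assert invoice_total_v1(items) == invoice_total_v2(items) == "$24.98"
```

At application scale, the same principle applies to extracting whole services from a monolith rather than helpers from a function, but the contract is identical: restructure the inside, preserve the outside.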

Whereas rehosting and replatforming shift an application to the cloud without changing its fundamental nature, refactoring goes much further. Its purpose is to transform the codebase to take full advantage of the cloud’s capabilities while maintaining the original external functionality and user interface.

Most legacy applications have serious defects caused by their monolithic architecture (a monolithic codebase is organized basically as a single unit). Because various functions and dependencies are interwoven throughout the code, it can be extremely difficult to upgrade or alter specific behaviors without triggering unintended and often unnoticed changes elsewhere in the application.

Refactoring eliminates that problem by helping software transition to a cloud-native microservices architecture. This produces a modern, fully cloud-native codebase that can now be adapted, upgraded, and integrated with other cloud resources far more easily than before the refactoring.

Advantages of Refactoring

  • Enhanced developer productivity—productivity rises when developers work in a cloud-native environment, with code that can be clearly understood, and with the ability to integrate their software with other cloud resources, thereby leveraging existing functions rather than coding them into their own applications.
  • Eliminated technical debt—by correcting all the quick fixes, coding shortcuts, compromises, and just plain bad programming that typically seep into legacy applications over the years, refactoring can eliminate technical debt.
  • Better maintenance—whereas a monolithic codebase can be extremely difficult to parse and understand, refactored code is far more understandable. That makes a huge difference in the application’s maintainability.
  • Simpler integrations—because a microservice architecture is fully cloud-enabled, refactored applications can easily integrate with other cloud-based resources.
  • Greater adaptability—in a microservice-based codebase each function can be addressed independently, allowing modifications to be made cleanly and iteratively, without fear that one change might ripple through the entire system.
  • High scalability—because the codebase has been reshaped into a cloud-native architecture, autoscaling can be easily implemented.
  • Improved performance—the refactoring process optimizes the code for the functions it performs. This usually results in fewer bottlenecks and greater throughput.

Disadvantages of Refactoring

The main disadvantage of the refactoring approach is that it is far more complex, time-consuming, resource-intensive, and risky than rehosting or replatforming. That’s because the code is extensively modified. Refactoring must be done extremely carefully, by experts who know what they are doing, to avoid introducing difficult-to-find bugs or behavioral anomalies into the code. And that increases costs in both time and money.

On the other hand, the automated, AI-driven refactoring tools available today can take much of the complexity, time, cost, and risk out of the refactoring process.

When to Use Refactoring

Companies that need maximum flexibility and agility to keep pace with the demands of customers and the challenges of competitors will typically find that refactoring is their best choice. Though the up-front costs of refactoring are the greatest of the options we’ve considered, the ability of microservices-based applications to use only the cloud resources needed at a particular time will keep long-term operating expenses much lower than can be achieved with the other options.

Choosing Your Modernization Approach

How can you determine the approach you should use for modernizing your legacy applications? Here are some steps you should take:

1. Understand Your Business Strategy and Goals

Why are you considering modernizing your legacy applications? What business interests will be served by doing so? The only way to determine which applications should be modernized and how is to examine how each serves the goals your business is trying to achieve.

2. Assess Your Applications

In light of your business goals, determine which applications are in greatest need of modernization, and what the end-product of that upgrade needs to be.

3. Decide Whether to Truly Modernize or Just Migrate

Rehosting and replatforming are not really about modernizing applications. Rather, their focus is on simply getting them moved to the cloud. That can be the first step in a modernization effort, but just migrating an application to the cloud pretty much as-is does little to enable it to become a full participant in the modern cloud ecosystem.

In general, migration is a short-term, tactical approach, while modernization is a more long-term solution.

4. Repeat Steps 1-3 Again, and Again, and…

Application modernization is not a one-and-done deal. As technology continues to evolve at a rapid pace, you’ll need to periodically revisit these assessments of how well your business-critical applications are contributing to current business objectives, and what improvements might be needed. Otherwise, the software you so carefully modernize today might become, after a few years, your new legacy applications.

Related: Preventing Monoliths: Why Cloud Modernization is a Continuum

Making the Choice

As we’ve seen, rehosting and replatforming are the quickest, easiest, and least costly ways to bring monolithic application services at least partially into the cloud. But those applications remain hamstrung when it comes to taking advantage of the cloud’s extensive capabilities.

Refactoring, on the other hand, is more expensive and time-consuming at the beginning, but positions applications to function as true cloud-native resources that can be much more easily adapted as requirements change. 

If you’ve got an executive mandate to move beyond just a “quick fix” approach to your legacy applications, you should strongly consider refactoring. And remember that by employing today’s sophisticated, AI-driven application modernization tools, the time and cost gaps between refactoring on the one hand, and rehosting or replatforming on the other, can be significantly narrowed.

A good example of such a tool is the vFunction Platform. It’s a state-of-the-art application modernization platform that can rapidly assess monolithic legacy applications and transform them into microservices. It also provides decision-makers with data-driven assessments of legacy code that allow them to determine how to proceed with their modernization efforts. To see how vFunction can help your company get started on its journey toward legacy application modernization, schedule a demo today.

Modernizing Legacy Code: Refactor, Rearchitect, or Rewrite?

If your company is like most, you have legacy monolithic applications that are indispensable for everyday operations. Valuable as they are, due to their traditional architecture those applications are almost certainly hindering your company’s ability to display the agility, flexibility, and responsiveness necessary to keep up with the rapidly shifting demands of today’s marketplace. That’s why refactoring legacy code should be high on your priority list.

Almost by definition, legacy apps lack the functionality and adaptability required for them to seamlessly integrate with the modern, cloud-based ecosystem that defines today’s technological landscape. In an era when marketplace requirements are constantly evolving, continued dependence on apps with such limitations is a recipe for eventual disaster. That’s why the pressure to modernize is growing by the day.

Why Refactoring Legacy Code is Critical

Most enterprises today realize that they must do something to modernize the legacy apps on which they still depend. In fact, in CIO Magazine’s 2022 State of the CIO survey, 40% of CIOs say modernizing infrastructure and applications is their focus. But what will it take to make legacy app modernization a reality?

The basic issue that makes most legacy applications so ill-suited to fully participate in today’s cloud ecosystem is that they have a monolithic architecture. That means that the code is organized essentially as a single unit, with various functions and dependencies interwoven throughout.

Such code is brittle, inflexible, and hard to understand; modifying its functionality to meet new requirements is typically an extremely difficult and risky process.

As long as an application retains its monolithic structure, there’s little hope of any significant modernization. So, the first step in most efforts to modernize legacy applications is to transform them from a monolithic structure to a cloud-native, microservices architecture. And the first step in accomplishing that transformation is refactoring.

Related: Migrating Monolithic Applications to Microservices Architecture

The refactoring process restructures and optimizes an application’s code to meet modern coding standards and allow full integration with other cloud-based applications and systems.

But why “cloud-based”?

The Importance of the Cloud

The cloud has become the focal point of intense and continuous technological innovation—most software advancements are birthed and deployed in the cloud. That’s why Gartner projects that by 2025, 95% of new digital workloads will be cloud-native. What’s more, according to Forbes, 77% of enterprises, and 73% of all organizations, already have at least some of their IT infrastructure in the cloud.

The cloud is critical to modernization because it provides a well-established software ecosystem that allows newly cloud-enabled legacy apps to tap into a wide range of existing functional capabilities that don’t have to be programmed into the app itself.

That’s why today’s norm for modernizing legacy apps is to start by moving them to the cloud. Once relocated to the cloud and adapted to interoperate in that environment, such applications gain some substantial advantages, including improvements in performance, scalability, security, agility, flexibility, and operating costs.

But the degree to which such benefits are realized depends on how that cloud transfer is accomplished—will the app be optimized for the cloud environment, or just shifted basically intact from its original environment?

Migration vs Modernization

Many companies begin their modernization journey by simply migrating legacy software to the cloud. An app is transferred, pretty much as-is, without altering its basic internal structure. Some minor changes may be made to meet specific needs, but for the most part, the app functions exactly as it did in its original environment.

Because the app retains its original structure and functionality, it also retains the defects that undermine its usefulness in the modern technological context. For example, if the codebase was monolithic before migration, it remains monolithic once it reaches the cloud. Such apps bring with them all the limitations that plague the monolithic architectural pattern, including an inability to integrate with other cloud-based systems.

Migration represents an essentially short-term, tactical approach that aims at alleviating immediate pain points without making fundamental changes to the codebase.

Modernization, on the other hand, is a more long-term, strategic approach to updating legacy apps. The application isn’t simply shifted to the cloud. Rather, as part of the migration process much of the original code is significantly altered to meet cloud-native technical standards.

That enables the app to fully interoperate with other applications and systems within the cloud ecosystem, and thereby reap all the benefits that cloud-native apps inherit.

Options for Modernizing Legacy Applications

Gartner identifies seven options for upgrading legacy systems. These may be grouped into two broad categories:

  • Migration options that simply transfer the software to the cloud essentially as-is
  • Modernization options that not only migrate the application to the cloud but which, as an essential part of the migration process, adapt it to function in that environment as cloud-native software 

Let’s examine Gartner’s list of options in light of that distinction:

Migration methods

  • Encapsulate: Connect the app to cloud-based resources by providing API access to its existing data and functions. Its internal structure and operations remain unchanged.
  • Rehost (“Lift and Shift”): Migrate the application to the cloud as-is, without significantly modifying its code.
  • Replatform: Migrate the application’s code to the cloud, incorporating small changes designed to enhance its functionality in the cloud environment, but without modifying its existing architecture or functionality.

Modernization methods

  • Refactor: Restructure and optimize the app’s code to meet modern standards without changing its external behavior.
  • Rearchitect: Create a new application architecture that enables improved performance and new capabilities.
  • Rewrite: Rewrite the application from scratch, retaining its original scope and specifications.
  • Replace: Completely eliminate the original application, and replace it with a new one. This option requires such an extreme investment, in terms of time, cost, and risk, that it is normally used only as a last resort.

Since our concern in this article is with truly modernizing legacy apps rather than just migrating them to the cloud or entirely replacing them, we’ll limit our consideration to the modernization options: refactoring, rearchitecting, and rewriting.

Related: Legacy Application Modernization Approaches: What Architects Need to Know

Refactoring vs Rearchitecting vs Rewriting

Let’s take a closer look at each of these modernization options.

Refactoring

As we’ve seen, refactoring legacy code is fundamental to the modernization process. According to the Agile Alliance, one of the major benefits of refactoring is that it

“improves objective attributes of code (length, duplication, coupling and cohesion, cyclomatic complexity) that correlate with ease of maintenance.”

As a result of those improvements, refactored legacy code is simpler and cleaner; it’s also easier to understand, update with new features, and integrate with other cloud-based resources. Plus, the app’s performance will typically improve.
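One of those objective attributes, cyclomatic complexity, can be lowered without changing behavior. The following hypothetical sketch replaces a chain of conditionals with a lookup table, which is one branch fewer to test each time a new case is added:

```python
# Before: each region adds another branch, raising cyclomatic complexity.
def shipping_cost_v1(region):
    if region == "US":
        return 5.0
    elif region == "EU":
        return 7.5
    elif region == "APAC":
        return 9.0
    else:
        return 12.0  # default rate for all other regions

# After: data-driven lookup; adding a region means adding a table entry,
# not a new branch. External behavior is unchanged.
RATES = {"US": 5.0, "EU": 7.5, "APAC": 9.0}

def shipping_cost_v2(region):
    return RATES.get(region, 12.0)

# Both versions agree on every input, including the unknown-region default.
for r in ("US", "EU", "APAC", "MARS"):
    assert shipping_cost_v1(r) == shipping_cost_v2(r)
```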

Because no functional changes are made to the app, the risk of new bugs being introduced during refactoring is low.

One significant advantage of refactoring is that it can (and should) be an incremental, iterative process that proceeds in small steps. Developers operate on small segments of the code and work to ensure that each is fully tested and functioning correctly before it is incorporated into the codebase.

As a result, when refactoring is done correctly, the operation of the overall system is never disrupted. This also eliminates the necessity of maintaining two separate codebases for the original and the updated code.

The fundamental purpose of refactoring legacy code is to convert it to a cloud-native structure that allows developers to easily adapt the application to meet changing requirements. A valuable byproduct of the process is the elimination of technical debt through the removal of the coding compromises, shortcuts, and ad hoc patches that often characterize legacy code.

Rearchitecting

Rearchitecting is used to restructure the application’s codebase to enable improvements in areas such as performance and scalability. It’s often employed when business requirements change and the application needs to add functionality that its current structure doesn’t support. Rearchitecting allows such changes to be incorporated without developers having to rewrite the app from scratch.

Because it goes beyond refactoring by making fundamental changes to the structure and operation of the code, rearchitecting is more complex and time-consuming, and it carries a higher risk of introducing bugs or business process errors into the code.

One of the major risk factors associated with rearchitecting (and with rewriting as well) is that for most legacy applications, documentation of not just the original requirements, but also of how and why the code has been modified along the way, is inadequate or missing entirely.

For that reason, any rearchitecting or rewriting effort must be preceded by a thorough assessment of the original code so that developers gain a deep level of understanding before making changes. Otherwise, there is a high risk that even if the new code is technically bug-free, important business processes may be omitted or inadvertently changed because developers overlooked their implementations in the original code.

Rewriting

Full rewrites most often occur with legacy applications that are specialized and proprietary. Usually, the intent is not to modify the functionality or user interface in major ways, but to move to a modern (usually microservices) architecture without having to deconstruct the existing code to understand how it works.

Rewriting allows developers to start with a clean slate and implement the application requirements using modern technologies and coding standards.

As with rearchitecting, rewriting brings with it a significant danger of overlooking business process workflows that are implicit in the legacy code because of ad hoc patches and modifications made over the years but never explicitly documented. Developers also shouldn’t forget that the legacy app is still in use because it works: it has been heavily debugged and patched over time, so that even low-probability or extreme operational conditions are handled, if not gracefully, at least adequately.

For these reasons, developers involved in a rewrite must be extremely careful to ensure that all of the application’s use scenarios, whether documented or not, are uncovered and explicitly implemented in the new code.

One of the greatest dangers with a rewrite is that until it is completed, it may be necessary to freeze the functionality of the original app—otherwise, the rewrite is chasing a moving target. And in today’s environment of ever-accelerating technological change, that can be a recipe for disaster.

Joel Spolsky, formerly a Program Manager at Microsoft, and now Chairman of the Board at Glitch, cites a case in point. Netscape was once the leader in the internet browser market, but it made a fatal mistake by attempting a full rewrite of its browser code.

That effort took three years, during which Netscape was unable to update the functionality of its product because the original codebase was frozen. Competitors forged ahead with innovations, and Netscape’s market share plummeted. The company never recovered. According to Spolsky,

Netscape made “the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.”

Doing a complete rewrite of a legacy application may be necessary in some cases, but such a project should not be undertaken without a full evaluation of the associated costs and risks. It’s tempting to just clear the decks and start over without all the complexities of dealing with inherited code. But, as experts like Spolsky are quick to say, doing so is usually a mistake.

Refactoring is Key

Refactoring, rearchitecting, and rewriting are not mutually exclusive options. They can be seen as points along a continuum in the process of modernizing legacy applications:

  1. Start by refactoring legacy code into microservices. This gives the app an essentially cloud-enabled codebase that can be easily integrated with other cloud-based resources and positions it for further updates and improvements.
  2. If the application needs new functionality or performance levels that can’t be achieved with its original structure, rearchitecting may be in order.
  3. If rearchitecting to achieve the required functionality appears to be too complex or risky, starting from scratch by completely rewriting the app may be the best option.

Whichever option is ultimately pursued, refactoring should be the starting point because it produces a codebase that’s far easier for developers to understand and work with than the original.

Plus, refactoring will unveil hidden dependencies and business process workflows buried in the code that may be missed if a development team goes straight to rearchitecting or rewriting as their initial step.

Note that all of these options require a substantial investment of time and expertise, especially if they are pursued through a mostly manual process using tools that were never designed for application modernization. But that need not, and should not, be the case.

Simplify Legacy App Modernization

The vFunction platform is specifically designed for AI-driven, cloud-native modernization of legacy applications. With it, architects and developers can rapidly and incrementally modernize their legacy apps and unlock the power of the cloud to innovate and scale.

The vFunction Assessment Hub uses its AI capabilities to automatically assess your legacy application estate to help you prioritize and make a business case for modernization of a particular app or set of applications. This analysis provides a data-driven assessment of the levels of complexity, risk, and technical debt associated with the application.

Once this assessment has been performed, the vFunction Modernization Hub can then, under the direction of architects and developers, automatically transform complex monolithic applications into microservices. Through the use of these industry-leading vFunction capabilities, the time, complexity, risk, and cost of a legacy app modernization project can be substantially reduced. To see how vFunction can smooth the road to legacy application modernization at your company, schedule a demo today.

The CIO Guide to Modernizing Monolithic Applications

As the pace of technological change continues to accelerate, companies are being put under more and more pressure to improve their ability to quickly react to marketplace changes. And that, in turn, is putting corporate CIOs on the hot seat.

In a recent McKinsey survey, 71% of responding CIOs said that the top priority of their CEO was “agility in reacting to changing customer needs and faster time to market.” Those CEOs are looking to digital technology to enable their companies to keep ahead of competitors in a constantly evolving market environment.

CIOs are tasked with providing the IT infrastructure and tools needed to drive the marketplace innovation and agility required to accomplish that goal.

But in many cases CIOs are facing a seemingly intractable problem—they’ve inherited a suite of legacy applications that are indispensable to the company’s daily operations, but which also have very limited capacity for the upgrades necessary for them to be effective in the cloud-native, open-source technological landscape of today.

As a recent report by Forrester puts it,

“Most legacy core software systems are too inflexible, outdated, and brittle to give businesses the flexibility they need to win, serve, and retain customers.”

But because such systems are still critical for day-to-day operations, CIOs can’t just get rid of them. Rather, a way must be found to provide them with the flexibility and adaptability that will enable them to be full participants in the modern technological age.

The Problem with Monoliths

The fundamental cause of the brittleness and inflexibility that characterize most legacy systems is their monolithic architecture. That is, the codebase (which may have millions of lines of code) is a single entity with functionalities and dependencies interwoven throughout. Such applications are extremely difficult to update because a change to any part of the code can ripple through the application, causing unintended operational changes or failures in seemingly unrelated parts of the codebase.

Because they are inflexible and brittle, such applications cannot be easily updated with new features or functions—they were not designed with that capability in mind. A much broader transformation is required, one in which the application’s codebase is restructured in ways that allow it to be upgraded while maintaining the original scope. That broad restructuring is referred to as application modernization.

Application Modernization and The Cloud

What, exactly, is application modernization? Gartner provides this description:

“Application modernization services address the migration of legacy to new applications or platforms, including the integration of new functionality to provide the latest functions to the business.”

There are two key aspects of this definition: migration and integration.

Because the cloud is where the technological action is today, most application modernization efforts involve, as a first step, migrating legacy apps from their original host setting to the cloud. As McKinsey says of this trend:

“CIOs see the cloud as a predominant enabler of IT architecture and its modernization. They are increasingly migrating workloads and redirecting a greater share of their infrastructure spending to the cloud.”

The report goes on to note that McKinsey expects that by 2022, 75% of corporate IT workloads will be housed in the cloud.

That leads to the second element of the Gartner definition: integration. If legacy applications are to be effective in the cloud environment, they must be integrated into the open services-based cloud ecosystem.

That means it’s not enough to simply migrate applications to the cloud. They must also be transformed or restructured so that integration with cloud-native resources is not just possible, but easy and natural.

The fundamental purpose of application modernization is to restructure legacy code so that it is easily understandable to developers, and can be quickly updated to meet new business requirements.

Transitioning From a Monolithic Architecture to Microservices

What does it take to transform legacy apps so that they are not only cloud-enabled, but they fit as naturally into the cloud landscape as do cloud-native systems?

As we’ve seen, the fundamental problem that causes the rigidity and inflexibility that must be overcome in transforming legacy apps is their monolithic architecture. Monolithic applications are self-contained and aren’t always easy to integrate with other applications or systems. The codebase is a single entity in which all the functions are tightly coupled and interdependent. Such an app is, in essence, a “black box” as far as the outside world is concerned—its inputs and outputs can be observed, but its internal processes are entirely opaque.

If an app is to be integrated into the cloud’s open-source ecosystem, its functions must somehow be separated out so that they can interoperate with other cloud services. The way that’s normally accomplished is by refactoring the legacy code into microservices.

Related: Migrating Monolithic Applications to Microservices Architecture

What are Microservices?

Microsoft provides a useful description of the microservices concept:

“A microservices architecture consists of a collection of small, autonomous services. Each service is self-contained and should implement a single business capability.”

The key terms here are “small” and “autonomous.” Microservices may or may not be literally small, but they should be independent and loosely coupled, each covering a specific piece of functionality. Each is a separate codebase that performs only a single task, and each can be deployed and updated independently of the others. Microservices communicate with one another and with other resources only through well-defined APIs—there is no external visibility into, or coupling with, their internal functions.

Advantages of the microservices architecture include:

  • Agility: Because each microservice is small and independent, it can be quickly updated to meet new requirements without impacting the entire application.
  • Scalability: To scale any feature of a monolithic application when demand increases, the entire application must be scaled. In contrast, each microservice can be scaled independently without scaling the application as a whole. In the cloud environment, not having to scale the entire app can yield substantial savings in operating costs.
  • Maintainability: Because each microservice is small and does only one thing, maintenance is far easier than with a monolithic codebase, and can be handled by a small team of developers.
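The principle behind these advantages can be sketched in a few lines of code. The order/inventory scenario and all names below are hypothetical: each capability sits behind a well-defined contract, and callers see only that contract, never the implementation. In a real deployment, each implementation would be a separately deployed service reached over the network (for example, via HTTP), not an in-process class.

```java
// The contract: the only thing other services know about inventory.
interface InventoryService {
    boolean reserve(String sku, int quantity);
}

// One self-contained implementation of the single "inventory" capability.
// It can be updated, scaled, or redeployed without touching its callers.
class InMemoryInventoryService implements InventoryService {
    private final java.util.Map<String, Integer> stock = new java.util.HashMap<>();

    InMemoryInventoryService() { stock.put("SKU-1", 5); }

    public boolean reserve(String sku, int quantity) {
        int available = stock.getOrDefault(sku, 0);
        if (available < quantity) return false;
        stock.put(sku, available - quantity);
        return true;
    }
}

// The order capability depends only on the InventoryService contract,
// so the two services can evolve and deploy independently.
class OrderService {
    private final InventoryService inventory;

    OrderService(InventoryService inventory) { this.inventory = inventory; }

    String placeOrder(String sku, int quantity) {
        return inventory.reserve(sku, quantity) ? "CONFIRMED" : "REJECTED";
    }
}
```

Contrast this with a monolith, where the ordering code would reach directly into inventory’s internal data structures, so any change to one ripples into the other.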

The key task of legacy application modernization is to decompose a monolithic codebase into a collection of microservices while maintaining the functionality of the original application.

But how is that to be accomplished with legacy code that is little understood and probably not well documented?

Options for Transforming Monolithic Code to Microservices

Gartner has identified seven options for migrating and upgrading legacy systems.

  1. Encapsulate: Connect the app to cloud-based resources by providing API access to its existing data and functions. Its internal structure and operations remain unchanged.
  2. Rehost (“Lift and Shift”): Migrate the application to the cloud as-is, without significantly modifying its code.
  3. Replatform: Migrate the application’s code to the cloud, incorporating small changes designed to enhance its functionality in the cloud environment, but without modifying its existing architecture or functionality.
  4. Refactor: Restructure and optimize the app’s code to a microservices architecture without changing its external behavior.
  5. Rearchitect: Create a new application architecture that enables improved performance and new capabilities.
  6. Rewrite: Rewrite the application from scratch, retaining its original scope and specifications.
  7. Replace: Completely eliminate the original application, and replace it with a new one.

All of these options are sometimes characterized as “modernization” methodologies. Actually, while encapsulating, rehosting, or replatforming do migrate an app (or in the case of encapsulation, its interfaces) to the cloud, no restructuring of the codebase takes place. If the app was monolithic in its original environment, it’s still monolithic once it’s housed in the cloud. So, these methods cannot rightly be called modernization options at all.

Neither does replacement qualify as a modernization option since rather than restructuring the legacy codebase, it throws it out completely and replaces it with something entirely new.

So, to truly modernize a legacy application from a monolith to microservices will involve the use of some combination of refactoring, rearchitecting, and rewriting. Let’s take a brief look at each of these:

  • Refactoring: Refactoring will be the first step in almost any process of modernizing monolithic legacy applications. By converting the codebase to a cloud-native, microservices structure, refactoring enables the app to be fully integrated into the cloud ecosystem. And once that’s accomplished, developers can easily update the app with new features to meet specific requirements.
  • Rearchitecting: Rearchitecting is usually employed to enable improvements in areas such as performance and scalability, or to add features that are not supported by the original design. Because rearchitecting makes fundamental changes to the structure and operation of the code, it is more complex, time-consuming, and risky than simply refactoring.
  • Rewriting: Completely rewriting the legacy code is the most complex, time-consuming, and risky of all the modernization options. It is usually resorted to when developers wish to avoid spending the time and effort required to deconstruct the existing code to understand how it works. Because a rewrite carries the highest risk of causing disruptions to a company’s business operations, it is normally used only as a last resort.

Although rearchitecting or rewriting may be appropriate for some cases, refactoring should always be the starting point because it produces a codebase that developers can easily upgrade with new features or functionality. As McKinsey puts it:

“It [is] critical for many applications to refactor for modern architecture.”

Challenges of Modernization

All of the modernization options (refactoring, rearchitecting, and rewriting) require extensive changes to the legacy application’s codebase. That’s not a task to be undertaken lightly. Legacy apps typically hold onto their secrets very tightly, for several common reasons:

  • The developers who wrote and maintained the original code, which in some cases is decades old, have by now retired or are otherwise unavailable.
  • Documentation, both of the original requirements and modifications made to the code through the years, is often incomplete, misleading, or missing entirely.
  • Patches to the code that handle rarely occurring exceptions or boundary conditions may not be documented at all, and can be understood only by minute examination of the code.
  • Similarly, changes to business process workflows may have been incorporated through code patches that were never adequately documented or covered by tests. If such workflows are not discovered and accounted for in a modernization effort, important functions of the application may be lost.

Any modernization approach will involve a high degree of complexity, time, and expertise. McKinsey quotes one technology leader as saying,

“We were surprised by the hidden complexity, dependencies and hard-coding of legacy applications, and slow migration speed.”

Building a Modernization Roadmap

If you’re trying to drive to someplace you’ve never been before, it’s very helpful to have a map. That’s especially the case if you’re driving toward modernization of your legacy applications. You need a roadmap.

The first stop on your modernization roadmap will be an assessment of the goals of your business, where you currently stand in relation to those goals, and what you need from your technology to enable you to achieve those goals.

Then you’ll want to develop an understanding of exactly what you want your modernization process to achieve. You’ll analyze your current application portfolio in light of your business and technology goals, and determine which apps must be modernized, what method should be used, and what priority each app should have.

To learn more about creating a modernization roadmap, take a look at the following resource:

Related: Succeed with an Application Modernization Roadmap

Why Automation is Required for Successful Modernization

Converting a monolithic legacy app to a microservices architecture is not a trivial exercise. It is, in fact, quite difficult, labor-intensive, time-consuming, and risky. At least, it is all of those things if you try to do it manually.

It’s not unusual for a legacy codebase to have millions of lines of code and thousands of classes, with embedded dependencies and hidden flows that are far from obvious to the human eye. That’s why using a tool that automates the process is a practical necessity.

By intelligently performing static and dynamic code analyses, a state-of-the-art, AI-driven automation tool can, in just a few hours, uncover functionalities, dependencies, and hidden business flows that might take a human team months or years to unravel by manual inspection of the code.

And not only can a good modernization tool analyze and parse the monolithic codebase, it can actually refactor and rearchitect the application automatically, saving the untold hours that a team of highly skilled developers would otherwise have to put into the project.

According to McKinsey, companies that display a high level of agility in their marketplaces have dramatically higher rates of automation than those characterized as the “laggards” in their industries.

The vFunction Application Modernization Platform

The vFunction platform was built from scratch to be exactly the kind of automation tool that’s needed for any practical application modernization effort. It has advanced AI capabilities that allow it to automatically analyze huge monolithic codebases, both statically and during the actual execution of the code.

As the vFunction Assessment Hub crawls through your code, it automatically builds a lightweight assessment of your application landscape that helps you prioritize and make a business case for modernization. Once you’ve selected the right application to modernize, the vFunction Modernization Hub takes over, analyzing and automatically converting complex monolithic applications into extracted microservices.

vFunction has been demonstrated to speed up the modernization process by a factor of 15 or more, which can reduce the time required by such projects from months or years to just a few weeks.

If you’d like to experience firsthand how vFunction can help your company modernize its monolithic legacy applications, schedule a demo today.

Survey: 79% of Application Modernization Projects Fail

In the recent report “Why App Modernization Projects Fail”, vFunction partnered with Wakefield Research to gather insights from 250 IT professionals at a director level or higher in companies with at least 5000 employees and one or more monolithic systems.

Application modernization is not a new concept: if a company develops software, at some point it will need to modernize it. As a codebase grows, it becomes more complex, and engineering velocity slows down.

So what is elevating app modernization to a top priority for so many companies now? We see two major trends that are driving forces in the market:

  1. Digital Transformation – Many companies expedited these initiatives in response to the COVID-19 pandemic
  2. Shift to the Cloud – The benefits of cloud platforms have driven more companies to institute an executive mandate to move to the cloud

We also see competitive pressures increasingly driving companies to embark on modernization projects. Digital natives, whose software was built for the cloud with modern (cloud-native) architectures and stacks, can respond to the market rapidly with innovative features and functionality, whereas established companies are struggling with scalability and reliability issues—which creates heavy competitive pressure in the fight for customer loyalty.

Today, companies spend years mired in complex, lengthy, and inefficient app modernization projects, manually trying to untangle monolithic code.

So, it is not surprising that 79% of app modernization projects fail, averaging a cost of $1.5 million and a 16-month timeline.

There are many reasons for this: CIOs are under immense pressure to meet business objectives, having evolved into one of the most strategic roles on the executive team. 

Undoubtedly, this role comes with changing priorities and limited resources. Additionally, architects are charged with modernizing monolithic apps, but often only have limited tools, teams, and time. Given the stakes, it is imperative that the C-Suite has a clear understanding of why modernization projects fail, and how investing in these modernization projects now benefits the company’s present and future. 

To help provide for this, we partnered with Wakefield Research to survey 250 technology professionals—leaders, architects and developers at a director level or above—who have the responsibility of maintaining at least one monolithic app in a company of at least 5,000 employees.

The insights we gleaned say as much about the changing definition of successful outcomes as they do about cultures and how teams are organized to support these projects. The long-held notion of “lift and shift” is no longer considered a successful modernization outcome, and successful projects require a change in organizational structure to support the targeted modernized architecture.

We hope that this report will not only serve as valuable insight for those responsible for app modernization initiatives—but also as a reminder that having the proper tools in use plays an invaluable role in the success (or failure) of every venture.

Legacy Application Challenges and How Architects Can Face Them

Legacy system architectures are quite a challenge when measured against today’s capabilities, and working with them can be frustrating. Software architects head the list of the frustrated because of the numerous struggles they experience with applications designed a decade or more ago.

Many of these problems stem from their monolithic architecture. In contrast to today’s architecture, legacy applications contain tight coupling between classes and complex interdependencies, leading to lengthy test and release cycles.

A lack of agility and slow engineering velocity make it onerous to meet customer requirements. Performance limitations imposed by the architecture result in a poor customer experience. Operational expenses are high.

All this leads to a competitive disadvantage for the company and its clients. 

Overall, legacy systems are hard to maintain and extend. Their design stifles digital modernization initiatives and hinders the adoption of new technologies like AI, ML, and IoT. Security is another crucial concern. There are no straightforward solutions to these problems. These are just a few reasons architects feel pain with legacy systems.

Let’s examine some issues that legacy systems have and see how modern applications fare in those same areas.

Scalability

The architecture of non-cloud-native applications makes them difficult and expensive to scale. A mature application is still deployed as a monolith, even if parts of it have been “lifted and shifted” to the cloud.

If some parts of the application experience load or performance issues, you cannot scale only those parts. You must scale up the entire application. This would require starting an additional large (or extra-large) compute instance, which can become expensive.

The situation is even more challenging for monolithic applications hosted on-premises or in data centers. It can take weeks to procure new hardware to scale up. There is no elasticity. Once provisioned, you are stuck with the hardware, even in times of low usage.

Organizations generate data at ever-increasing rates. They must store the data safely and securely in their servers. The cost of acquiring more storage is prohibitive.

Contrast this with a modern application built with microservices. If overall system performance needs a boost, it’s possible to scale only those microservices needed. Because the microservices are decoupled from the monolithic app, it’s possible to use compute instances more efficiently to keep costs from spiraling out of control. 

Modern applications hosted on the cloud can add capacity to handle spikes in demand in seconds. Cloud computing offers elasticity. You can automatically free up excess capacity in times of low usage. So, you trade fixed costs (on-premise data centers and servers) for variable expenses based on your consumption. The latter expenditure is low because cloud operators enjoy economies of scale.

Long Release Cycles

Software development on monolithic applications typically involves long release cycles. Teams tend to follow traditional development processes like Waterfall. The product team first documents all change requirements.

All concerned must review and sign off on the changes. The architecture is tightly-coupled, so many groups are involved. They need to exercise due diligence to avoid undesired side effects because of the lack of modularity. Subsequent changes in requirements result in redoing the entire process. 

After the developers have made the changes, the QA team tests extensively to ensure that there are no breakages. The release process itself is lengthy. You must deploy the entire application, even if you have changed only a minor part.

If you find any issues post-deployment, you must roll back the release, fix the problem, and repeat the release process. Technical debt makes integrating CI/CD into the legacy application workflow difficult. All this contributes to long release cycles.

Modern application developers, however, follow agile processes. Every team works only on one microservice, which they understand well. Microservices are autonomous and decoupled, enabling teams to work independently. Each microservice can be changed and deployed alone. A failure in one microservice does not impact the whole application.

Microservices can run in containers, making them easier to deploy, test, and port to different platforms. Many teams use DevOps techniques like Continuous Integration and Continuous Delivery (CI/CD). Consequently, developers make releases quickly, sometimes several times a day.

Most importantly, teams get out of the traditional mindset of long release cycles and into a mode where IT is aligned closely with business priorities. 

Accelerating the development process has many advantages. You can go to market faster, gaining a competitive advantage. Customers benefit because they get new features at a rapid clip. The workforce finds more satisfaction in their work.

Long Test Cycles

Legacy applications require a lot of testing effort. There are many reasons for this.

Mature monolithic applications are often poorly documented and understood by only a few employees. As the application ages, the team periodically adds new functionality, but this often happens in a silo, and updates to documentation and to the wider organization are not guaranteed. Hence, the domain knowledge – what functionality the application contains – is not shared by all the testers on the team.

This, together with tight coupling and dependencies between different classes in the application, means that testers cannot focus only on testing the changes. They must also verify the functionality far afield because unexpected side-effects could have occurred.

Software crews usually develop legacy applications without writing automated unit test cases, so they don’t have a safety net to rely on when making changes. They must proceed cautiously.

In general, there is no, or insufficient, test automation for monolithic applications. Any automated tests that do exist were typically written well after the code itself, and they cover only a small part of the functionality. Creating automated tests for a legacy application is a long-drawn-out process with an unclear ROI, so it’s rarely taken up. Therefore, testers must manually test everything. For large applications, this can take days or even weeks.

Another common issue with legacy applications is the existence of dead code. Dead code refers to code that is not used anymore but is still lurking in the system. Dead code is problematic on many levels.

It makes it difficult for newcomers to understand the application flow. The inactive code could inadvertently become live with catastrophic results. Dead code is also an indicator of poor development culture.
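A small hypothetical example shows the two most common forms of dead code described above: a branch that can never execute, and a method that nothing calls anymore. The shipping scenario and names are invented for illustration.

```java
// Hypothetical illustration of dead code in a legacy codebase.
class ShippingRules {

    static double shippingFee(double orderTotal) {
        if (orderTotal >= 0) {
            // Free shipping over 100, flat fee otherwise.
            return orderTotal > 100 ? 0.0 : 7.5;
        }
        // Unreachable in practice: order totals are validated upstream,
        // so this branch never runs -- yet it still sits in the codebase,
        // confusing newcomers and waiting to "go live" by accident if the
        // upstream validation ever changes.
        return -1.0;
    }

    // Dead method: a promotion retired years ago but never removed.
    // No caller remains, yet every reader must still puzzle over it,
    // and every tester must wonder whether it matters.
    static double legacyHolidayDiscount(double total) {
        return total * 0.85;
    }
}
```

Static analysis tools can flag the unused method, but the unreachable branch guarded by an upstream assumption is exactly the kind of residue that only careful code examination (or an automated analysis platform) uncovers.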

Hence, testing legacy applications is more of an art than a science – it’s a risky affair.

Testing microservices is a lot easier. Developers write unit tests alongside new features. The tests catch any breakages that result from code changes, and they are added to the CI/CD pipelines.

Hence, the process automatically tests all new builds. It is easy to plug gaps in testing, as they are few. The testing time is compressed into the build and deployment cycle and is usually short. Manual testers can focus on exploratory testing. Overall, the product quality is a lot better.
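As a sketch of what that safety net looks like, here is a small, independently testable service component with the kind of check a team would normally express as a JUnit test run by the CI/CD pipeline. The service and test names are invented, and plain assertions are used instead of a test framework to keep the example self-contained:

```java
// Hypothetical microservice component: small, focused, testable in isolation.
public class PriceService {
    public long priceInCents(long unitCents, int quantity) {
        if (quantity < 0) {
            throw new IllegalArgumentException("quantity must be non-negative");
        }
        return unitCents * quantity;
    }
}

// In practice this would be a JUnit test class executed on every build;
// any breakage introduced by a code change fails the pipeline immediately.
class PriceServiceTest {
    static void run() {
        PriceService service = new PriceService();
        if (service.priceInCents(250, 4) != 1000) {
            throw new AssertionError("expected 1000 cents for 4 items at 250");
        }
        try {
            service.priceInCents(250, -1);
            throw new AssertionError("negative quantity should be rejected");
        } catch (IllegalArgumentException expected) {
            // correct behavior: invalid input rejected before deployment
        }
    }
}
```

Because each microservice owns a narrow, well-defined contract like this one, its tests stay small and fast, which is what makes running the full suite on every build practical.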

Related: Four Advantages of Refactoring That Java Architects Love

Security and Compliance

Teams working with legacy applications may be unable to implement security best practices and consequently face more vulnerabilities and cybersecurity risks. The application design may make it incapable of supporting features like multi-factor authentication, audit trails, or encryption. 

Legacy applications can also pose a security threat because they run on outdated technology versions that their manufacturers or vendors no longer support. Security updates are vital to keeping systems secure, yet interdependencies may make it impossible to upgrade from unsupported older operating systems and other software. As a result, the IT team may be unable to address even known security issues.

There is extensive documentation on known security flaws in legacy applications and platforms. These systems are no longer supported, hence don’t receive patches and updates. They are particularly exposed. Hackers are well aware of their vulnerabilities and can attack them.

If a security breach happens, damage to a company’s reputation takes ages to repair. The public perception that your brand is unsafe never goes away completely. 

Many countries have introduced privacy regulations, such as the GDPR, to protect personal data. Non-compliance with these standards can result in hefty penalties and fines, so organizations must modernize their legacy systems to meet these requirements. They must change how they acquire, store, transmit, and use personal data, which is a hard task.

Modern applications live in the cloud. Cloud providers make substantial investments to offer state-of-the-art security. They comply with security requirements such as PCI DSS, SOC 1, and SOC 2, and they promptly apply the latest security patches to all their systems.

Related: “Java Monoliths” – Modernizing an Oxymoron

Inability to Meet Business Requirements

We have seen that legacy applications cannot meet customer demands on time because of long and unpredictable testing and release cycles. Additionally, the monolithic architecture does not scale efficiently when demand surges. Maintaining legacy applications is costly and laborious, and it can take a toll on team morale; it is not easy to find people who are interested or skilled in working with these aging technologies. Any new functionality you want to offer, such as analytics or IoT support, must be built in-house.

All this results in customer dissatisfaction, and clients look for more nimble and agile alternatives. Poor performance, reliability, and agility, combined with high costs, cause a loss of competitive advantage. They hurt productivity and profitability, both within your organization and in the products and services you deliver to your customers. You cannot meet your business goals.

The ability to respond swiftly to changing conditions is a key differentiator. IT leaders would like to increase agility and provide better quality service while reducing cost and risk.

Companies with cloud-native applications have a significant advantage because their deployment processes are automated end-to-end. They can release code into production hundreds or even thousands of times every day. So, they can rapidly offer their customers new features.

Cloud providers offer several built-in services that you can leverage. AWS, for instance, provides over 200 services like computing, storage, networking, security, and databases. You can start using them in minutes for a pay-as-you-use fee and quickly scale up your app’s functionality.

Poor Customer Experience

Today’s consumers expect a high-quality user experience. Exceptional customer experience allows a product to stand out in a crowded marketplace. It leads to higher brand awareness and increases customer retention, Average Order Value (AOV), and Customer Lifetime Value (CLTV).

Consumers expect immediacy and convenience in their interactions with customer service. Whether they are looking for information, reporting an issue, or seeking an update on an earlier request, they want fast and accurate answers. A long wait time leaves a negative impression.

Slow performance and high latency plague legacy applications even during simple transactions. Add to this buggy app experiences and inadequately addressed business requirements, and the result is a poor customer experience.

Legacy systems suffer from compatibility issues. They work with data formats that may be outdated or obsolete; for example, they may generate reports as text files instead of PDFs, and they may not integrate with monitoring, observability, and tracing technologies.

Customers are now used to accessing services online using a device of their choice at a time that suits them. But most legacy systems don’t support mobile applications. Modern applications can provide 24/7 customer service using AI-powered chatbots.

Modern applications can also use RPA (Robotic Process Automation) to automate the mechanical parts of a call center employee’s job. Businesses with legacy applications must instead invest in call centers staffed by expensive personnel or provide support only during business hours.

Legacy applications might serve their customers from on-premises servers located far away. Cloud-based applications can be deployed to servers close to customers anywhere in the world, reducing latency.

Mitigating Architects’ Pains with Legacy Systems

We have seen the pains architects have with legacy systems and how modern applications alleviate these problems. But migrating from a monolith to microservices is an arduous undertaking. 

Every organization’s needs are different, and a one-size-fits-all approach won’t work. Architects must start with an assessment of their existing systems. It may be possible to rehost some parts of the application, whereas others will first need refactoring. In addition, they must implement process- and tool-related modernization changes, such as adopting containers and CI/CD pipelines.

So, modernizing is a complicated process, especially when done manually and from the ground up. 

Instead, modernization is much more predictable and less risky when performed with an automated modernization platform. A platform-centric approach provides a framework of proven best practices that have worked across multiple organizations. vFunction is an application modernization platform built with intuitive algorithms and an AI engine. It is the first and only platform for developers and architects that automatically decomposes legacy monolithic Java applications into microservices. The platform helps overcome all the issues inherent in monolithic applications.

Engineering velocity increases, customer experience improves, and the business enjoys all the benefits of being cloud-native. vFunction enables the reuse of the resulting microservices across many projects. The platform approach leads to a scalable and repeatable factory model. To see how we can help with your modernization needs, request a demo today.