
Organizing and Managing Remote Dev Teams for Application Modernization

Many companies still depend on legacy applications for some of their most business-critical processing. But those apps typically don’t support the kind of technological agility that’s needed for continuing success in today’s ever-changing marketplace environment. That’s why modernizing legacy apps is an accelerating trend among leading companies.

But app modernization is not easy—it requires highly skilled developers who understand both the legacy app and the modern cloud ecosystem. Assembling such a team on-site can be difficult. It’s often easier to find the needed skills and organize the team for maximum effectiveness if team members can work remotely. According to Forbes, which declares that remote work is the new normal,

“When it comes to the tech workforce, it takes more than simply offering remote opportunities to get employees motivated. Employers must embrace flexibility and build (or reinforce) a strong, supportive remote work culture to ensure teams are engaged and high-performing.”

That’s why understanding how to assemble and manage remote legacy app modernization teams is vitally important.

The Goal of App Modernization: From Monoliths to Microservices

Legacy apps are often a severe drag on a company’s ability to innovate at the pace required in today’s fast-changing marketplace and technological environments. That’s because such apps are typically monolithic, meaning that the codebase is organized as a single unit.

Because various function implementations and dependencies are interwoven throughout the code, an attempt to change any specific behavior could impact the entire application in unexpected ways, potentially causing it to fail.
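To make that risk concrete, here's a toy Python sketch (all names hypothetical) of how shared state in a monolith couples features that look unrelated:

```python
# Toy illustration of monolithic coupling: a "small" change to shared
# state ripples into an unrelated feature. All names are hypothetical.

TAX_RATE = 0.07  # shared global, read by many unrelated modules

def invoice_total(subtotal):
    # billing feature
    return round(subtotal * (1 + TAX_RATE), 2)

def payroll_withholding(gross):
    # payroll feature, which quietly reuses the same global
    return round(gross * TAX_RATE, 2)

# Changing TAX_RATE to fix invoicing silently changes payroll too.
print(invoice_total(100.0), payroll_withholding(1000.0))  # 107.0 70.0
```

In a large codebase these couplings are rarely this visible, which is why a seemingly local change can surface as a failure somewhere else entirely.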

When legacy apps are used for a company’s most important operational processes, such failures cannot be tolerated. When they occur, development teams may be required to stop work on the innovations that are so necessary to a company’s continued marketplace success and take an all-hands-on-deck approach to fixing the problem as quickly as possible.

In contrast to the typical legacy app, software based on a cloud-native microservices architecture can be updated far more easily and safely from remote locations. Microservices are small units of code that perform a single task. Because they are designed to function independently of one another, changes made to one microservice can’t ripple through the rest of the application. That’s why restructuring a legacy app from a monolith to a microservices architecture gives it a much greater level of adaptability.

The goal of legacy application modernization is to substantially improve an app’s adaptability and maintainability by restructuring its codebase from a monolith to microservices.

Related: Application Modernization and Optimization: What Does It Mean?

How System Architecture Correlates With Organizational Structure

In 1967, Melvin Conway formulated what has come to be known as Conway’s Law:

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

Software engineer Alex Kondov highlights the practical implications of Conway’s Law:

Imagine a small company with a handful of engineers all sitting in the same room. They will probably end up with a more tightly coupled solution that relies on their constant communication. [In other words, a monolith].

A large company with teams in different time zones working on separate parts of the product will need to come up with a more distributed solution. That will allow them to work autonomously, build and deploy without interfering with each other… An organization in which teams need to operate in a fully autonomous manner would naturally come to an architecture like microservices.

The organizational structure of your legacy app modernization team will shape the kind of software it ultimately produces. Even if you aim to create a microservices-based application, a team organized in the tightly coupled manner of Kondov’s first example will be predisposed toward producing a monolithic architecture instead.

Optimizing Your Team’s Structure to Support Microservices

The principles of Domain-Driven Design (DDD) provide a useful framework for organizing legacy app modernization teams. As Tomas Fernandez explains,

“Domain-Driven Development allows us to plan a microservice architecture by decomposing the larger system into self-contained units, understanding the responsibilities of each, and identifying their relationships.”

According to María Gómez, Head of Technology for ThoughtWorks, examples of domains include broad business areas such as finance, health, or retail. Each domain may contain several sub-domains. The finance domain, for example, might have sub-domains of payments, statements, or credit card applications.

The DDD paradigm allows developers to identify the domains and subdomains that exist in a monolithic codebase, and draw the boundaries that delineate each service that will be implemented as a microservice. Every microservice embodies a single business goal and has a well-defined function and communications interface. This allows each microservice to run independently of any others.
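As a rough illustration of that idea, the sketch below (with hypothetical names, not a real DDD framework or vFunction API) models a payments sub-domain as a self-contained unit with a single public interface:

```python
from dataclasses import dataclass

# Hypothetical "payments" sub-domain, modeled as a self-contained unit.
# Other services interact with it only through its public interface;
# its internal ledger representation stays hidden.

@dataclass(frozen=True)
class PaymentRequest:
    account_id: str
    amount_cents: int

class PaymentsService:
    """Bounded context for payments: one business goal, one interface."""

    def __init__(self):
        self._ledger = []  # internal detail, invisible to other services

    def submit(self, request: PaymentRequest) -> str:
        if request.amount_cents <= 0:
            raise ValueError("amount must be positive")
        self._ledger.append(request)
        return f"payment-{len(self._ledger)}"

# A "statements" service would live in its own module or process and
# call PaymentsService only through submit(), never through _ledger.
svc = PaymentsService()
print(svc.submit(PaymentRequest("acct-1", 2500)))  # payment-1
```

Because the statements team only needs to know `submit()`'s contract, it can work on its own service without coordinating on payments internals.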

Each team is fully and solely responsible for the microservices that they develop, and functions, to a considerable degree, independently of other teams. Once a team is made aware of the communications interface defined for each microservice, it can work independently and asynchronously without needing to interact with other teams about how their microservices are implemented.

This organizational pattern favors remote development teams. Each team needs a high degree of internal asynchronous communication as they work out the design of the microservice for which they are responsible. But less communication is needed between teams.

For that reason, remote, loosely-coupled development teams are perfect for converting monolithic legacy apps to microservices. But there are some potential pitfalls tech leaders should be aware of.

Related: What is a Monolithic Application? Everything you need to know

Challenges of Remote Teams

With all of the advantages remote development teams provide for the application modernization process, there are some definite challenges that IT leaders will have to overcome to make their use of such teams effective. Let’s take a brief look.

1. Work-Life Balance

Working remotely can affect work-life balance both positively and negatively: while ZDNet reports that 64% of developers say that working remotely has improved their work-life balance, 27% say that it’s difficult for them to unplug from the job.

Leaders must proactively help remote developers in this area. A big part of doing that is ensuring that goals and deadlines are realistic and that workers aren’t encouraged to spend what should be family or personal time on job-related activities.

2. Assessing Productivity and Progress

Because personal contact with workers is more constrained, leaders have less visibility into the productivity or progress of remote teams. Gaining that visibility may require more formal reporting arrangements, such as daily check-ins where workers report progress on their KPIs. Project management tools such as Trello or Asana can also help.

3. Communications

In the office, developers naturally consult and collaborate informally. That’s more difficult when they are working remotely. In fact, in Buffer’s 2022 State of Remote Work survey, 56% of respondents identified “how I collaborate and communicate” as a top remote work issue, while 52% said they feel less connected to coworkers. Scheduling regular virtual team meetings using tools like Zoom or Google Chat can help.

4. Work Schedules Across Time Zones

It’s 9 am in San Francisco, and your West Coast team members are starting their workday, but team members in Europe are finishing theirs. How can a team collaborate across time zones? One approach is to focus on asynchronous communication methods, such as group chats and emails, that don’t require individuals to be online at the same time.
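As a quick illustration of the problem, Python's standard `zoneinfo` module can show what a 9 am San Francisco start time looks like for a teammate in, say, Berlin (the date and cities here are arbitrary examples):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Illustrative: what time is a 9 am San Francisco standup for a
# teammate in Berlin? (Fixed date chosen to avoid DST ambiguity.)
sf_start = datetime(2022, 6, 1, 9, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
berlin_time = sf_start.astimezone(ZoneInfo("Europe/Berlin"))
print(berlin_time.strftime("%H:%M"))  # 18:00, already evening in Europe
```

With a nine-hour offset, any synchronous meeting lands at the edge of someone's day, which is why asynchronous channels carry most of the load for globally distributed teams.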

5. Company Culture

Company culture is absorbed most easily through face-to-face interactions with leaders and peers. The isolation inherent in remote work makes instilling that culture among team members difficult. John Carter, Founder of TCGen, offers this suggestion:

“Make the unconscious cues in the company culture conscious. Refer to them often and reinforce them. Company culture can follow your team members home, but only if it is made explicit and constantly reinforced.”

Why Remote Teams are the Future

Not only is the distributed nature of remote teams ideal for implementing distributed, cloud-based microservices applications, but they represent a trend that may redefine the IT landscape well into the future.

The COVID-19 pandemic accelerated the use of remote software development teams. As companies learned how to onboard, train, and manage a remote workforce, they realized that disregarding geographical limitations in their hiring allowed them to lower costs and increase quality by tapping into a wider developer talent pool. Bjorn Lundberg, Senior Client Partner at 3Pillar Global, describes the trend this way:

“Contract workers, freelancers, and outsourced teams have been on the rise for a while now… As remote collaboration becomes a fixture of the modern workplace, American companies increasingly view outsourcing software development as an opportunity to extend their development talent without exhausting their budgets.”

Even full-time employees want to work remotely: according to the 2022 State of Remote Engineering Report, 75% of developers would prefer to work remotely most of the time.

How vFunction Can Empower Your App Modernization Teams

As we’ve seen, the ideal application modernization team should be relatively small. vFunction helps small teams maximize their effectiveness by providing an AI-enabled, automated platform that reduces the legacy app restructuring workload by orders of magnitude.

vFunction can automatically analyze complex monolithic applications, with perhaps millions of lines of code, to reveal hidden functionalities and dependencies. It can then automatically transform those apps into microservices. To see first-hand how vFunction helps remote application modernization teams maximize their effectiveness, request a demo today.

IT Leader Strategies for Effectively Managing Technical Debt

In a report on managing technical debt, Google researchers make a startling admission:

“With a large and rapidly changing codebase, Google software engineers are constantly paying interest on various forms of technical debt.”

What’s true of Google is very likely true of your company as well, especially if you have legacy applications you still depend on for important business functions. If you do, you’re almost certainly carrying a load of technical debt that is hindering your ability to innovate as quickly and as nimbly as you need to in today’s fast-changing marketplace and technological environments.

Technical debt is an issue you cannot afford to ignore. As an article in CIO Magazine explains,

“CIOs say reducing technical debt needs increasing focus. It isn’t wasting money. It’s about replacing brittle, monolithic systems with more secure, fluid, customizable systems. CIOs stress there is ROI in less maintenance labor, fewer incursions, and easier change.”

But what, exactly, is technical debt, and why is managing it so vital for companies today?

Why Managing Technical Debt is Critical

What is technical debt? According to Ori Saporta: “Technical debt, in plain words, is an accumulation over time of lots of little compromises that hamper your coding efforts.”

In other words, technical debt is what happens when developers prioritize speed over quality. The problem is that, just as with financial debt, you must eventually pay off your technical debt, and until you do, you’ll pay interest on the principal.

  • The “interest” on technical debt consists of the ongoing charges you incur in trying to keep flawed, inflexible, and outmoded applications running as the technological context for which they were designed recedes further into the past. Software developers spend, on average, about a third of their workweek addressing technical debt. There’s also the opportunity cost of time that’s not being spent to develop the innovations that can help propel a company ahead in its marketplace.
  • The “principal” on technical debt is what it costs to clean up (or replace) the original messy code and bring the application into the modern world. Companies typically incur $361,000 of technical debt for every 100,000 lines of code.
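To see how those two costs add up, here is a back-of-the-envelope calculation using the figures cited above; the codebase size, team size, and salary are purely hypothetical assumptions:

```python
# Back-of-the-envelope estimate using the figures cited above.
# Codebase size, headcount, and loaded salary are hypothetical.

DEBT_PER_100K_LOC = 361_000      # USD of "principal" per 100,000 lines
loc = 500_000                    # hypothetical codebase size
principal = loc / 100_000 * DEBT_PER_100K_LOC

devs, loaded_cost = 10, 150_000  # hypothetical team size and cost per dev
annual_interest = devs * loaded_cost / 3  # ~1/3 of each workweek

print(f"principal: ${principal:,.0f}")            # principal: $1,805,000
print(f"yearly interest: ${annual_interest:,.0f}")  # yearly interest: $500,000
```

Even under these modest assumptions, the yearly "interest" alone rivals the cost of several full-time developers, which is the core argument for paying the debt down.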

Managing your technical debt is critical because the price you’ll pay for not doing so, in terms of time, money, focus, and lost market opportunities, will grow at an ever-accelerating pace until you do.

Managing Technical Debt: Getting Started

A report from McKinsey highlights how a company can begin dealing with its technical debt:

“[A] degree of tech debt is an unavoidable cost of doing business, and it needs to be managed appropriately to ensure an organization’s long-term viability. That could include ‘paying down’ debt through carefully targeted, high-impact interventions, such as modernizing systems to align with target architecture.”

The place to start in managing technical debt is with modernizing legacy applications to align with a target architecture, which today is usually the cloud. Legacy applications weren’t designed to work in the cloud context, and it’s very difficult to upgrade them to do so. That’s because such apps often have a monolithic system architecture.

Monolithic code is organized as a single unit with various functionalities and dependencies interwoven throughout the code. The coding shortcuts, ad hoc patches, and documentation inadequacies that are typical sources of technical debt in legacy applications are embedded in the code in ways that are extremely difficult for humans to unravel. Worse, because of hidden dependencies in the code, any changes aimed at upgrading functions or adding features may ripple throughout the codebase in unexpected ways, potentially causing the entire application to fail.

From Monoliths to Microservices

Because a monolithic architecture makes upgrading an application for new features or for integration into the cloud ecosystem so difficult, the first step of legacy app modernization is usually to restructure the code from a monolith to a cloud-native, microservices architecture.

Microservices are small chunks of code that perform a single task. Each can be deployed and updated independently of any others. This allows developers to change a specific function in an application by updating the associated microservice without the risk of unintentionally impacting the codebase as a whole.

The process of restructuring a codebase from a monolith to microservices will expose the hidden dependencies and coding shortcuts that are the source of technical debt.

Related: Migrating Monolithic Applications to Microservices Architecture

Options for Modernizing Legacy Apps

Gartner lists seven options for modernizing legacy applications:

  1. Encapsulate: Connect the app to cloud resources by providing API access to its existing data and functions, but without changing its internal structure and operations.
  2. Rehost (“Lift and Shift”): Migrate the application to the cloud as-is, without significantly modifying its code.
  3. Replatform: Migrate the application’s code to the cloud, incorporating small changes designed to enhance its functionality in the cloud environment, but without modifying its existing architecture or functionality.
  4. Refactor: Restructure the app’s code to a microservices architecture without changing its external behavior.
  5. Rearchitect: Create a new application architecture that enables improved performance and new capabilities.
  6. Rewrite: Rewrite the application from scratch, retaining its original scope and specifications.
  7. Replace: Throw out the original application, and replace it with a new one.

The first three options, encapsulation, rehosting, and replatforming, simply migrate an app to the cloud with minimal changes. They offer some improvements in terms of operating costs, performance, and integration with the cloud. However, they do little to reduce technical debt because there’s no restructuring of the legacy application’s codebase—if it was monolithic before being migrated to the cloud, it remains monolithic once there.

The last option, replacing the original application, can certainly impact technical debt, but because it’s the most extreme in terms of time, cost, and risk, it’s usually considered only as a last resort.

The most viable options, then, for removing technical debt are refactoring, rearchitecting, or rewriting the code.

Assessing Your Monolithic Application Landscape

The ideal solution for managing technical debt would be to immediately convert all of your debt-laden legacy applications to microservices. But because any restructuring project involves significant costs in terms of money, time, and risk, trying to modernize every app in your portfolio is not a practical strategy for most companies.

That’s why your first step on the road to effectively managing technical debt should be surveying your portfolio of monolithic legacy applications to assess each in terms of its complexity, degree of technical debt, and the level of risk associated with upgrading it. With that information in hand, you can then prioritize each app based on the degree to which its value to the business justifies the amount of effort required to modernize it.

  • For apps with high levels of technical debt and great value to the business, consider full modernization through refactoring, rearchitecting, or rewriting.
  • Apps with lower levels of technical debt (meaning that they function acceptably as they are) or that have a lesser business value should be considered for simple migration through encapsulation, rehosting, or replatforming.

Refactoring is Key

Refactoring is fundamental to managing technical debt for at least two reasons:

  1. It exposes the elements of technical debt, such as hidden dependencies and undocumented functionalities, that are inherent in an app’s original monolithic code. These must be well understood before any rearchitecting or rewriting efforts can be safely initiated.
  2. By converting an app to a cloud-native microservices architecture, refactoring positions it for full integration into the cloud ecosystem, making further upgrades and functional extensions relatively easy.

That’s why refactoring is normally the first stage in modernizing a monolithic legacy app. Then, if new capabilities or performance improvements are required that the original code structure does not support, rearchitecting may be in order. Or, if the development team wishes to avoid the complexities of rearchitecting existing code, they may opt to rewrite the application instead.

In any case, refactoring will normally be the initial step because it produces a codebase that developers can easily understand and work with.

Implement “Continuous Modernization”

Technical debt is unavoidable. As the pace of technological change continues to accelerate, even your most recently written or upgraded apps will slide relentlessly toward increased technical debt over time. That means you should plan to deal with your technical debt on a continuous basis, an approach known as continuous modernization. As John Kodumal, CTO and cofounder of LaunchDarkly, has said,

“Technical debt is inevitable in software development, but you can combat it by being proactive… This is much healthier than stopping other work and trying to dig out from a mountain of debt.”

You need to monitor and clean up your technical debt continuously as you go, rather than waiting until some application or system reaches a crisis point that demands an immediate all-out modernization effort. Continuous modernization, which removes technical debt as it accrues, should be a fundamental element of your CI/CD pipeline.

Related: Preventing Monoliths: Why Cloud Modernization is a Continuum

vFunction Can Help You Manage Your Technical Debt

As we’ve seen, the first step toward effectively managing your technical debt is to assess your suite of legacy apps to understand just how large the problem is. That has historically been a very complex and time-consuming task when pursued manually. But now the AI-driven vFunction platform can substantially simplify and speed up the process.

The vFunction Architectural Observability Platform will automatically evaluate your applications and generate quantitative measures of code complexity and risk due to interdependencies. It produces a number that represents the amount of technical debt associated with each app, providing you with just the information you need to prioritize your modernization efforts.

And once you’ve determined your modernization priorities, the vFunction Code Copy automates the process of actually transforming complex monolithic applications into microservices, which can result in immense savings of time and money. If you’d like a first-hand view of how vFunction can help your company effectively manage its technical debt, schedule a demo today.

Ten AWS Products for Modernizing Your Monolithic Applications

In today’s rapidly changing marketplace environment, companies face an imperative to modernize their business-critical legacy applications. That’s why, as the State of the CIO Study 2022 notes, modernizing legacy systems and applications is currently among the top priorities of corporate CIOs.

In most instances such modernization involves transferring legacy apps to the cloud, which is now the seedbed of technological innovation. Once housed in the cloud, and adapted to conform to the technical norms of that environment, legacy apps can improve their functionality, performance, flexibility, security, and overall usefulness by tapping into a sophisticated software ecosystem that offers a wide variety of preexisting services.

Amazon Web Services (AWS), with a 33% share of the market, is the most widely used cloud service platform. AWS provides users with a wide range of fully managed cloud services that can make modernizing legacy apps far easier than it otherwise would be. These include container management services, Kubernetes services, database and DB migration services, application migration services, API and Security management services, support for serverless functions, and more.

In this article, we want to take a brief look at ten of these key AWS services that companies should research and test to determine how they can best be used in modernizing the organization’s suite of legacy apps. But before looking at the AWS services themselves, we need to understand exactly what modernization aims to achieve.

What Application Modernization is All About: Transforming Monoliths into Microservices

Gartner describes application modernization this way:

“Application modernization services address the migration of legacy to new applications or platforms, including the integration of new functionality to provide the latest functions to the business.”

The major problem with most legacy applications is that the way they are architected makes “the integration of new functionality” extremely difficult. That’s because such apps are typically monolithic, meaning that the codebase is basically a single unit with functions and dependencies interwoven throughout.

Any single functional change could ripple through the code in unexpected ways, which makes adapting the app to add new functions or to integrate with other systems very difficult and risky.

A microservices architecture, on the other hand, is expressly designed to make updating the application easy. Each microservice is a separate piece of code that performs a single task; it is deployed and changed independently of any others. This approach allows individual functions to be quickly and easily updated to meet new requirements without impacting other portions of the application.

The fundamental purpose of legacy application modernization, then, is to restructure the application’s codebase from a monolith to microservices.

Related: Migrating Monolithic Applications to Microservices Architecture

The Importance of Refactoring

How does that restructuring take place? In most instances it begins with refactoring. The Agile Alliance defines refactoring this way:

“Refactoring consists of improving the internal structure of an existing program’s source code, while preserving its external behavior.”

Refactoring allows developers to transform a legacy codebase into a cloud-native microservices architecture while not altering its external functionality or user interface. But because the refactored application can fully interoperate with other resources in the cloud ecosystem, updates that were previously almost impossible now become easy. For that reason, refactoring will normally be a key element of any legacy application modernization process.
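A toy example makes the definition concrete. Both functions below (with hypothetical names) compute the same result; only the internal structure differs, which is exactly the property refactoring preserves:

```python
# A toy illustration of refactoring: the internal structure changes,
# the external behavior does not. Names here are hypothetical.

def total_legacy(orders):
    # tangled original: filtering, converting, and summing in one pass
    t = 0
    for o in orders:
        if o["status"] == "paid":
            t = t + o["cents"] / 100.0
    return t

def total_refactored(orders):
    # same observable behavior, clearer internal structure
    paid = (o for o in orders if o["status"] == "paid")
    return sum(o["cents"] for o in paid) / 100.0

orders = [{"status": "paid", "cents": 250}, {"status": "void", "cents": 99}]
assert total_legacy(orders) == total_refactored(orders) == 2.5
```

At application scale the same principle applies: a comprehensive test suite verifies that the refactored services produce exactly the results the monolith did.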

The Migration “Lift and Shift” Trap

A report from McKinsey highlights a disturbing reality:

“Thus far, modernization efforts have largely failed to generate the expected benefits. Despite migrating a portion of workloads to the cloud, around 80 percent of CIOs report that they have not attained the level of agility and business benefits that they sought through modernization.”

To a significant degree this failure can be attributed to organizations confusing migration with modernization. Far too often companies have focused on simply getting their legacy applications moved to the cloud, as if that in itself constituted a significant level of modernization. That is most emphatically not the case.

The problem is that just removing an application from a data center and rehosting it in the cloud (often called a “lift and shift”) does nothing to change the fundamental nature of the codebase. If it was a monolith before being migrated, it remains a monolith once it gets to the cloud, and retains all the disadvantages of that architecture.

It’s only when a legacy application is not only migrated to the cloud but is refactored from a monolith to a microservices architecture that true modernization can begin. That’s why the modernization services provided by AWS must be evaluated in light of how they facilitate not just the migration, but more importantly the transformation of legacy applications.

Related: Accelerate AWS Migration for Java Applications

Key Modernization Services from AWS

For each of these important AWS services, we’ll provide a brief description along with a link for further information.

1. Amazon EC2 (Elastic Compute Cloud)

Amazon EC2 provides on-demand virtual servers, at virtually any scale, to run your apps. If, for example, you’ve had a particular application running on a physical server in your data center, you can migrate that application to the cloud by launching an EC2 server instance to run it. Rather than having to purchase and maintain your own server hardware, you pay Amazon by the second for each server instance you invoke.

2. Amazon ECS (Elastic Container Service)

Amazon ECS is a container orchestration service that allows you to run containerized apps in the cloud without having to configure an environment for the code to run in. It can be particularly helpful in running microservices apps by facilitating integration with other AWS services. Although container management is normally complex and error-prone, the distinguishing feature of ECS is its “powerful simplicity” that allows users to easily deploy, manage, and scale containerized workloads in the AWS environment.

3. Amazon EKS (Elastic Kubernetes Service)

Kubernetes is an open-source container-orchestration system with which you can automate your containerized application deployments. Amazon EKS allows you to run Kubernetes on AWS without having to install, operate, or maintain your own Kubernetes infrastructure. Applications running in other Kubernetes environments, whether in an on-premises data center or the cloud, can be directly migrated to EKS with no modifications to the code.

4. Amazon VPC (Virtual Private Cloud)

Amazon VPC allows you to define a virtual network (similar to a traditional network you might run out of your data center) within an isolated section of the AWS cloud. Other AWS resources, such as EC2 instances, can be enabled within the network, and you can optionally connect your VPC network with other networks or the internet. All AWS accounts created after December 4, 2013 come with a default VPC that has a default subnet (range of IP addresses) in each Availability Zone. You can also create your own VPC and define your own subnet IP address ranges.
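As an illustration of how subnets partition an address range, Python's standard `ipaddress` module can carve a VPC-style CIDR block into per-Availability-Zone subnets (the CIDR blocks here are hypothetical examples, not AWS defaults):

```python
import ipaddress

# Illustrative only: carving a VPC-style address range into smaller
# subnets, e.g. one per Availability Zone. CIDR blocks are hypothetical.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))[:3]  # first three /24 blocks
for net in subnets:
    print(net, "-", net.num_addresses, "addresses")
# 10.0.0.0/24 - 256 addresses, 10.0.1.0/24 - 256 addresses, ...
```

The same arithmetic underlies the subnet ranges you define in a VPC: each /24 carved from the /16 is an isolated slice of the network's address space.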

5. AWS Database Migration Service (DMS)

AWS DMS allows you to migrate your databases quickly and securely to AWS. Both homogeneous (e.g. Oracle to Oracle) and heterogeneous (e.g. Oracle to MySQL) migrations are supported. You can set DMS up for either a one-time migration or for continuing replication in which changes to the source DB are continuously applied in real time to the target DB.

6. Amazon S3 / Aurora / DynamoDB / RDS

AWS provides a range of database and data storage services that can simplify the process of migrating data to the cloud:

Amazon S3 (Simple Storage Service) is a high-speed, highly scalable data storage service designed for online backup and archiving in AWS.

Amazon Aurora is “a fully managed relational database engine that’s compatible with MySQL and PostgreSQL.”

Amazon DynamoDB is “a fully managed, serverless, key-value NoSQL database” that provides low latency and high scalability.

Amazon RDS (Relational Database Service) is a managed SQL database service that supports the deployment, operation, and maintenance of seven relational database engines: Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server.

7. Amazon API Gateway

Amazon API Gateway enables developers to securely create, publish, and manage APIs to connect non-AWS software to AWS-native applications and resources. That kind of integration, which can substantially enhance the functionality of legacy applications, is a fundamental element of the application modernization process.

8. AWS IAM (Identity and Access Management)

AWS IAM allows you to securely manage AWS access permissions for both users and workloads. You can use IAM policies to specify who (or what workloads) can access specific services and resources, and under what conditions. IAM is a feature of your AWS account, and there is no charge to use it.

9. AWS Lambda

AWS Lambda is an event-driven compute service that lets you run code as stateless functions without provisioning or managing servers or storage–also known as Function as a Service (FaaS). With those tasks performed automatically, developers can focus on their application code. Lambda supports several popular programming languages, including C#, Python, Java, and Node.js. Lambda runs a function only when triggered by an appropriate event, and can automatically scale to handle anything from a few requests per day to thousands of requests per second.
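A minimal Lambda-style handler in Python looks like the sketch below; the event shape is a hypothetical example, and in AWS the function would be invoked by a trigger such as API Gateway rather than called directly:

```python
import json

# Minimal Lambda-style handler (Python). The event shape is hypothetical;
# in AWS this function would be invoked by a configured trigger.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation for illustration; AWS supplies event and context
# objects at runtime.
print(lambda_handler({"name": "dev"}, None))
```

Because the function is stateless, Lambda can run as many concurrent copies as incoming events require, which is what makes the automatic scaling described above possible.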

10. AWS Migration Hub Refactor Spaces (MHRS)

Amazon describes Migration Hub Refactor Spaces as “the starting point for customers looking to incrementally refactor applications to microservices.” MHRS orchestrates AWS services to create an environment optimized for refactoring, allowing modernization teams to easily set up and manage an infrastructure that supports the testing, staging, deployment, and management of refactored legacy applications.

How vFunction Works with MHRS

vFunction and MHRS work together to refactor monolithic legacy applications into microservices and to safely stage, migrate, and deploy those microservice applications to AWS. Developers use MHRS to set up and manage the environment in which the refactoring process is carried out, while the vFunction Platform uses its AI capabilities to substantially automate both the analysis and refactoring of legacy applications. The result of this collaboration is a significant acceleration of the process of modernizing legacy apps and safely deploying them to the AWS cloud. To experience first-hand how vFunction and AWS can work together to help you modernize your legacy applications, schedule a demo today.

Five Insights for CIOs to Understand Application Modernization

This webinar from ADTMag with guest speakers Moti Rafalin, CEO of vFunction, and KellyAnn Fitzpatrick, Senior Analyst at RedMonk, reveals some critical insights with supporting survey data that can help CIOs and architects better understand, plan, prioritize, and succeed with application modernization projects.   


Insight #1 – Application Modernization Must Be A Dynamic, Not Static, Imperative


Kelly’s first insight emphasized that modernizing applications is not a “one-and-done” project. It’s an ever-moving target–if you modernized a 15-year-old application in 2018, then you can expect to need further modernization initiatives to catch up to the expectations, technologies, and platforms of 2022 and beyond. 

The term “Continuous Modernization”–the ability to maintain fast innovation and development cycles, rapidly detect and eliminate technical debt, and avoid poor architectural decisions and coding patterns–refers to a highly valuable set of capabilities that elite software organizations have internalized.

Insight #2 – In the Cloud ≠ Cloud Native


Kelly then dug deeper into why it’s not enough to simply have workloads running in the cloud. Migrating existing applications to the cloud, often called “lift and shift,” is a short-term tactical action that doesn’t solve the major challenges of development velocity, technical debt accumulation, and speed of innovation. Migrating to the cloud solves certain problems, like hosting, baked-in security, and cost controls, but it also introduces new ones if the application is still a monolith, now simply running in the cloud.

The adoption of containers, managed services, serverless workloads, and new paradigms of building, integrating, and deploying applications means something more substantial is needed: actual modernization (refactoring, rewriting, rearchitecting) of logical domains, business logic, APIs, and more is where strategic value can be achieved.

Insight #3 – Microservices Have Trade-offs That Are Worth It


Next up, Kelly focused on why modernizing a monolith into a microservices architecture isn’t easy–it requires a major mental shift in how development, testing, and deployment are done. The benefits of microservices have been discussed ad nauseam for many years now–increased velocity, better flexibility, and faster development and deployment.

However, these benefits come with a new set of challenges that IT organizations didn’t have to worry about as much before: overall application complexity, API management, event-driven communication and distributed data management are just some of these examples. Despite these trade-offs, elite technology organizations have made it a priority to succeed with a microservices architecture (when it makes sense).

Insight #4 – Technology Is Great, But Have You Tried Talented People?


If the decision to modernize didn’t directly impact the development team around it, things might be simpler, Kelly shared. However, we cannot ignore the human aspect–in fact, 97% of survey respondents expressed that organizational pushback against modernizing is to be expected. Whether it’s the cost, risk, fear of change, or fear of failure, it’s often difficult to get full team support to begin a large-scale modernization project.  

How does the monolithic structure of your business applications and organization influence the hiring and onboarding of new employees? Are they excited to spend their first 6 months on the job trying to understand a 15-year-old monolith with 10 million lines of code? More importantly, what does this mean for retaining valuable staff in the days of the Great Resignation?  

Insight #5 – Java (and .NET) Are Still Vital


Finally, Kelly reminded us all that Java is still vital and evolving, adapting to modern cloud architectures in new and innovative ways. Newer programming languages like Scala, Kotlin, and Go may be viewed as popular for greenfield projects and indeed have been used at some of the world’s most well-known companies–Twitter, Airbnb, Google, and many others have embraced alternative languages to deal with specific challenges.

Yet, as RedMonk’s Programming Language research continues to show, the newer investments and advancements in Java, combined with the fact that the majority of monolithic enterprise systems were written in languages like Java (plus .NET and C#), make Java very relevant for everyone today. These are vital programming languages to many leaders in the finance, healthcare, automotive, and manufacturing industries. When you’re a financial services provider processing $1 billion of transactions every day, you cannot simply turn everything off and adopt a new set of technologies. This is where application modernization and the strategic, long-term impact of refactoring and re-architecting pay off.

Next Steps

To learn more, we invite you to learn from the 2022 research report Why App Modernization Projects Fail, and check out vFunction Architectural Observability Platform, a purpose-built tool to analyze, calculate, and identify technical debt in order to prioritize your efforts and make a business case for modernization.

Risk-Adjusted ROI Model for Modernization Projects

Over the past few years, we at vFunction have been focusing on the most significant problem inhibiting the third wave of cloud adoption: app modernization.

The first wave of cloud adoption included new apps written for the cloud. The second wave focused on lifting and shifting the low-hanging fruit–apps that could be migrated to the cloud relatively easily, without code or architectural changes. The third wave, which we are experiencing today, includes the modernization of massive legacy IT to take advantage of modern cloud services.

When we say modernization, we refer to refactoring or rewriting applications to transform them from a monolithic architecture to microservices, allowing organizations to eliminate technical debt, increase engineering velocity, onboard new developers faster, and increase the scalability of applications.

In recent research we conducted, we found out what many executives know firsthand: over 70% (!!) of application modernization projects fail, the average project lasts at least 16 months (with 30% lasting longer than 24 months), and the average cost exceeds $1.5m.

No wonder executives are reluctant to put their careers on the line and embark on these projects. The problem is that they are stuck between a rock and a hard place. If they don’t modernize, they may lose their jobs because they can’t address business needs, their development isn’t agile, and they are not supporting their business’s vital need to be competitive. If they do embark on these modernization projects and fail, they may lose their jobs as well.

We believe that modernization that is assisted by AI and automation dramatically disrupts the above convention, and saves executives from the difficult dilemma of “modernize or die.” We’ve created a model that we believe supports this claim. 

The research reveals that executives struggle with the length of projects, as well as the cost of projects. We find that using AI and automation to power modernization reduces the cost by 50%-66% and accelerates time to market by 10x-15x. We see this with our customers and have case studies to show this to be true (see our case studies). However, the research also reveals that risk is a very real obstacle for modernization projects, and this ROI model doesn’t address the significant risk reduction that comes with AI-assisted modernization. 

One could argue that even without incorporating the risk factor, the savings and acceleration from AI and automation justify the project–and I would agree–but when the risk factor is incorporated, it becomes a no-brainer.

Let’s use some numbers to substantiate this claim (see the chart below for the calculation). 

Let’s say a medium-sized modernization project of an app with 7,000 classes (medium complexity–the number of classes is a very good proxy for application complexity and possible technical debt) would cost about $1.8m, based on 6 FTEs for 2 years–which falls within the average modernization cost and length based on the Wakefield research.

The same project, when using modern AI and automation tools, takes only 1 year and requires only 2 FTEs (⅓ of the resources in half the time), based on our experience at vFunction.

When comparing the total cost of the projects, ignoring the risk factor, the AI- and automation-powered project costs less than half as much. That seems compelling; however, if we incorporate the risk factor to calculate the risk-adjusted cost, we get very different numbers.

The $1.8m manual project has only a 30% success rate (conservatively, based on the research) which means we need to divide the $1.8m by 0.3 to get the true risk-adjusted cost which yields a $6m project cost.

The intuitive meaning of this higher cost is that the project is most likely not going to end within 2 years or within the planned 12 FTE-years of effort, but rather take double the time with far more resources, arriving at a true cost of $6m.

When calculating the AI and automation-powered project cost, we should assume a 90% success rate and therefore the actual cost would be $765,000 divided by 0.9 yielding a true project cost of $850,000.

Now, comparing $6m to $850K yields a massive return: roughly a 7x difference in risk-adjusted cost, or an ROI of about 600%.
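The arithmetic behind the model is simple enough to sketch in a few lines of Python (the figures are the ones used above):

```python
# Risk-adjusted cost: divide the nominal project cost by its probability
# of success to estimate the expected true cost.
def risk_adjusted_cost(nominal_cost, success_rate):
    return nominal_cost / success_rate

manual   = risk_adjusted_cost(1_800_000, 0.30)  # manual project, 30% success rate
assisted = risk_adjusted_cost(765_000, 0.90)    # AI-assisted project, 90% success rate
ratio    = manual / assisted                    # roughly a 7x difference
```

Changing the success-rate assumptions is the point of the model: even if the manual project’s odds were far better than the research suggests, the gap in risk-adjusted cost remains large.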


Modernization is indeed risky, lengthy, and costly, but incorporating AI and automation radically changes the economics and risk of these projects and can assist CIOs and CTOs in embarking on modernization projects that are controlled, measured, and have significantly higher chances of success.

Using Machine Learning to Measure and Manage Technical Debt

This post was originally featured on TheNewStack, sponsored by vFunction.

If you’re a software developer, then “technical debt” is probably a term you’re familiar with. Technical debt, in plain words, is an accumulation over time of lots of little compromises that hamper your coding efforts. Sometimes, you (or your manager) choose to handle these challenges “next time” because of the urgency of the current release.

This is a cycle that continues for many organizations until a true breaking point or crisis occurs. If software teams decide to confront technical debt head on, these brave software engineers may discover that the situation has become so complex that they do not know where to start.

The difficult part is that decisions we make regarding technical debt have to balance between short-term and long-term implications of accumulating such debt, emphasizing the need to properly assess and address it when planning development cycles.

The real-world implications of this are seen in a recent survey of 250 senior IT professionals, in which 97% predicted organizational pushback to app modernization projects, with the primary concern of both executives and architects being “risk.” For architects, we can think of this as “technical risk”–the threat that making changes to one part of an application will have unpredictable and unwelcome downstream effects elsewhere.

The Science Behind Measuring Technical Debt

In their seminal article from 2012, “In Search of a Metric for Managing Architectural Technical Debt”, authors Robert L. Nord, Ipek Ozkaya, Philippe Kruchten and Marco Gonzalez-Rojas offer a metric to measure technical debt based on dependencies between architectural elements. They use this method to show how an organization should plan development cycles while taking into account the effect that accumulating technical debt will have on the overall resources required for each subsequent version released.

Though this article was published nearly 10 years ago, its relevance today is hard to overstate. Earlier this March, it received the “Most Influential Paper” award at the 19th IEEE International Conference on Software Architecture.

In this post, we will demonstrate that not only is technical debt key to making decisions regarding any specific application, it is also important when attempting to prioritize work between multiple applications — specifically, modernization work.

Moreover, we will show a method that can be used to not only compare the performance of different design paths for a single application, but also compare the technical debt levels of multiple applications at an arbitrary point in their development life cycle.

Accurately Measuring Systemwide Technical Debt

In the IEEE article mentioned above, calculating technical debt is done using a formula that mainly relies on the dependencies between architectural elements in the given application. It is worth noting that the article does not define what constitutes an architectural element or how to identify these elements when approaching an application.

We took a similar approach and devised a method to measure the technical debt of an application based on the dependency graph between its classes. The dependency graph is a directed graph G = (V, E), in which V = {c1, c2, …} is the set of all classes in the application, and an edge e = ⟨c1, c2⟩ ∈ E exists between two vertices if class c1 depends on class c2 in the original code. We perform multifaceted analysis on the graph to eventually come up with a score that describes the technical debt of the application. Here are some of the metrics we extract from the raw graph:

  1. Average/median outdegree of the vertices on the graph.
  2. Top N outdegree of any node in the graph.
  3. Longest paths between classes.
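As a minimal sketch of how those raw-graph metrics can be computed (the class names and dependencies are invented; a real analysis would extract them from source or bytecode):

```python
from statistics import mean, median

# Hypothetical class-dependency graph: class -> set of classes it depends on.
deps = {
    "OrderService":  {"OrderRepo", "PaymentClient", "AuditLog"},
    "OrderRepo":     {"AuditLog"},
    "PaymentClient": {"AuditLog"},
    "AuditLog":      set(),
}

outdegrees = {cls: len(targets) for cls, targets in deps.items()}

avg_out = mean(outdegrees.values())    # metric 1: average outdegree
med_out = median(outdegrees.values())  # metric 1: median outdegree
top_n = sorted(outdegrees.items(), key=lambda kv: kv[1], reverse=True)[:2]  # metric 2

def longest_path_from(cls, seen=frozenset()):
    """Length (in edges) of the longest dependency chain starting at cls."""
    seen = seen | {cls}
    nxt = [longest_path_from(t, seen) for t in deps[cls] if t not in seen]
    return 1 + max(nxt) if nxt else 0

longest = max(longest_path_from(c) for c in deps)  # metric 3
```

The longest-path search here is exponential in the worst case; it is fine for a sketch, but real tooling would prune cycles and memoize intermediate results.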

Using standard clustering algorithms on the graph, we can identify communities of classes within the graph and measure additional metrics on them, such as:

  1. Average outdegree of the identified communities.
  2. Longest paths between communities.

The hypothesis here is that by using these generic metrics on the dependency graphs, we can identify architectural issues that represent real technical debt in the original code base. Moreover, by analyzing dependencies on these two levels — class and community — we give a broad interpretation of what an architectural element is in practice without attempting to formally define it.

To test this method, we created a data set of over 50 applications from a variety of domains — financial services, eCommerce, automotive and others — and extracted the aforementioned metrics from them. We used this data set in two ways.

First, we correlated specific instances of high-ranking occurrences of outdegrees and long paths with local issues in the code. For example, identifying god classes by their high outdegree. This proved efficient and increased our confidence level that this approach is valid in identifying local technical debt issues.

Second, we attempted to provide a high-level score that can be used not only to identify technical debt in a single application, but also to compare technical debt between applications and to use it to help prioritize which should be addressed and how. To do that, we introduced three indexes:

  1. Complexity — represents the effort required to add new features to the software.
  2. Risk — represents the potential risk that adding new features has on the stability of existing ones.
  3. Overall Debt — represents the overall amount of extra work required when attempting to add new features.

From Graph Theory to Actionable Insights

We manually analyzed the applications in our data set, employing the expert knowledge of the individual architects and developers in charge of product development, and scored each application’s complexity, risk and overall debt on a scale of 1 to 5, where a score of 1 represents little effort required and 5 represents high effort. We used these benchmarks to train a machine learning model that correlates the values of the extracted metrics with the indexes and normalizes them to a score of 0 to 100.

This allows us to use the ML model to issue a score per index for any new application we encounter, enabling us to analyze entire portfolios of applications and compare them to one another and to our precalculated benchmarks. The following graph depicts a sample of 21 applications demonstrating the relationship between the aforementioned metrics:


The overall debt levels were then converted into currency units, depicting the level of investment required to add new functionality into the system. For example, for each $1 invested in application development and innovation, how much goes specifically to maintaining architectural technical debt? This is intended to help organizations build a business case for handling and removing architectural technical debt from their applications.

We have shown a method to measure the technical debt of applications based on the dependencies between its classes. We have successfully used this method to both identify local issues that cause technical debt as well as to provide a global score that can be compared between applications. By employing this method, organizations can successfully assess the technical debt in their software, which can lead to improved decision-making around it.

Cloud Modernization Approaches: Choosing Between Rehost, Replatform, or Refactor

In an era when continual digital transformation is forcing marketplaces to evolve with lightning speed, companies can’t afford to be held back by functionally limited and inflexible legacy systems that don’t adapt well to today’s requirements. Software applications that are hard to maintain and support, and that cannot easily incorporate new features or integrate with other systems are a drag on any company’s marketplace agility and ability to innovate.

Yet, many legacy applications are still performing necessary and business-critical functions. Because they remain indispensable to the organization’s daily operations, they cannot simply be abandoned. As a result, companies face a very real imperative to modernize aging applications to meet the rapidly shifting requirements of the marketplace. And for a growing number of them, that means modernizing those applications for the cloud.

Why Companies are Modernizing for the Cloud

Today the cloud is where the action is—where the leading edge of technological innovation is taking place, and where there is an established ecosystem that software can tap into to make use of infrastructure hosting, scaling, and security capabilities that don’t have to be programmed into the application itself.

It’s that ability to leverage a wide-ranging and technically sophisticated ecosystem that makes the cloud the perfect avenue for modernizing a company’s legacy applications.

Gartner estimates that by 2025, 90% of current monolithic applications will still be in use, and that compounded technical debt will consume more than 40% of the current IT budget.

Because software that cannot interoperate in that environment will lose much of its utility, modernizing legacy applications is an urgent imperative for most companies today.

When legacy applications are moved to the cloud and modernized so that they become cloud-enabled, they gain improvements in scalability, flexibility, security, reliability, and availability. What’s more, they also gain the ability to tap into a multitude of already existing cloud-based services, so that developers don’t have to continually reinvent the wheel.

Related: Why Cloud Migration Is Important

Once a company decides that modernization of its legacy applications is a high priority, the next question is how to go about it.

Approaches to Cloud Modernization of Legacy Applications

Gartner has identified seven options that may be useful for modernizing legacy systems in the cloud: encapsulate, rehost, replatform, refactor, re-architect, rebuild, and replace. Experience has shown that for companies beginning their modernization journey, the most viable options are rehosting, replatforming, and refactoring. Let’s take a brief look at each of these.

1. Rehosting (“Lift and Shift”)

Rehosting is the most commonly used approach for bringing legacy applications to the cloud. It involves transferring the application as-is, without changing the code at all, from its original environment to a cloud hosting platform.

For example, one of the most frequently performed rehosting tasks is moving an application from a physical server in an on-premises data center to a virtual server hosted on a cloud service such as AWS or Azure.

Rehosting is the simplest, easiest, quickest, and least risky cloud migration method because there’s no new code to be written and tested. And the demands for technical expertise in the migration team are minimal.

The downside to rehosting is the flipside of its advantages–because no changes are made to the code or functionality of the application, it is no more able to take advantage of cloud-native capabilities than it was in its original environment, even though it now runs in the cloud.

On the other hand, simply by being hosted in the cloud the application gains some significant advantages:

Advantages of Rehosting

  • Enhanced security—cloud service providers (CSPs) provide superior data security because their business model depends on it.
  • Greater reliability—CSPs typically back their services with Service Level Agreements guaranteeing high availability, in some cases up to “five 9’s” (99.999%).
  • Global access—since the user interface (UI) of a web-hosted application is normally delivered through a browser (although the look and operation of the UI may be unchanged), users are no longer tied to dedicated terminals, but, with proper authorization, can access the system through any internet-enabled device anywhere in the world.
  • Minimum risk—because there are no changes to the codebase, there’s little chance of new bugs being introduced into the application during migration.
  • It’s a good starting point—having the application already hosted in the cloud is a good first step toward further modernization efforts.

Disadvantages of Rehosting

  • No improvements in functionality—the code runs exactly as it always has. There are no upgrades in functionality or in the ability to integrate with other cloud-based systems or take advantage of the unique capabilities available to cloud-enabled applications. For example, although cloud-native applications are inherently highly scalable, a legacy application rehosted to the cloud may lack the ability to scale by auto-provisioning additional resources as needed.
  • Potential latency and performance issues—when moving an application unchanged from an on-premises data center to the cloud, latency and performance issues may arise due to inherent cloud network communication delays.
  • Potentially higher costs—while running applications in the cloud that are not optimized for that environment may decrease capital expenditures (CapEx), since you don’t have to purchase or maintain hardware, it may actually increase monthly operating expenditures (OpEx) because of excessive cloud resource usage.

When to Use Rehosting

Rehosting may be the best choice for companies that:

  • are just beginning to migrate applications to the cloud, or
  • need to move the application to the cloud as quickly as possible, or
  • have a high level of concern that migration hiccups might disrupt the workflows served by the application

Because of its simplicity, rehosting is most commonly adopted by companies that are just beginning to move applications to the cloud.

2. Replatforming

As with rehosting, replatforming moves legacy applications to the cloud basically intact. But unlike rehosting, minimal changes are made to the codebase so the application can take advantage of some of the advanced capabilities available to cloud-enabled software–such as containers, DevOps best practices, and automation–as well as gain improvements in functionality or in its ability to integrate with other cloud resources.

For example, changes might be instituted during replatforming to enable the application to access a modern cloud-based database management system or to increase application scalability through autoscaling.
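The autoscaling idea can be sketched independently of any cloud API: a target-tracking rule picks a desired instance count so that average utilization stays near a target, similar in spirit to what managed autoscalers do. The numbers and bounds here are purely illustrative.

```python
import math

def desired_capacity(current_instances, avg_utilization, target=0.6,
                     min_cap=1, max_cap=10):
    """Target-tracking scaling rule: size the fleet so that average
    utilization lands near `target`, clamped to configured bounds."""
    if avg_utilization == 0:
        return min_cap
    ideal = current_instances * avg_utilization / target
    return max(min_cap, min(max_cap, math.ceil(ideal)))
```

For instance, a 4-instance fleet running at 90% utilization against a 60% target would be scaled out to 6 instances, while the same fleet at 30% would be scaled in to 2.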

Advantages of Replatforming

Because it’s basically “rehosting-plus,” replatforming shares the advantages associated with rehosting. Its greatest additional advantage is that it enables the application to be modestly more cloud-compatible, though still falling far short of cloud-native capabilities. But even relatively small improvements, such as the ability to automatically scale as needed, can have a significant impact on the performance and usability of the application.

Replatforming allows you to upgrade an application’s functionality or integration with other systems through a series of small, incremental changes that minimize risk.

Disadvantages of Replatforming

Changes to the codebase bring with them a risk of introducing new code that might disrupt operations. Avoiding such mistakes requires a higher level of expertise in the modernization team, with regard to both the original application and the cloud environment onto which it is being replatformed. It’s easy to get into trouble when inexperienced migration teams attempt to replace functions in the original codebase with supposedly equivalent cloud functions they don’t really understand.

When to use Replatforming

Replatforming is a good option for organizations that want to work toward increasing the cloud compatibility of their legacy applications on an incremental basis, and without the risks associated with more comprehensive changes.

3. Refactoring

According to Agile Alliance’s definition:

“Refactoring consists of improving the internal structure of an existing program’s source code, while preserving its external behavior.”

Whereas rehosting and replatforming shift an application to the cloud without changing its fundamental nature, refactoring goes much further. Its purpose is to transform the codebase to take full advantage of the cloud’s capabilities while maintaining the original external functionality and user interface.

Most legacy applications have serious defects caused by their monolithic architecture (a monolithic codebase is organized basically as a single unit). Because various functions and dependencies are interwoven throughout the code, it can be extremely difficult to upgrade or alter specific behaviors without triggering unintended and often unnoticed changes elsewhere in the application.

Refactoring eliminates that problem by helping software transition to a cloud-native microservices architecture. This produces a modern, fully cloud-native codebase that can now be adapted, upgraded, and integrated with other cloud resources far more easily than before the refactoring.
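The Agile Alliance definition can be illustrated with a small, self-contained example: the two entry points below produce identical output, but the refactored version separates validation, pricing, and rendering into focused units that could later become independent services. The domain and names are invented for illustration.

```python
# Before: a monolithic function mixing validation, pricing, and formatting.
def checkout_legacy(items):
    total = 0.0
    for name, qty, price_each in items:
        if qty <= 0 or price_each < 0:
            raise ValueError(f"bad line item: {name}")
        total += qty * price_each
    return f"total: ${total:.2f}"

# After: the same external behavior, factored into focused units.
def validate(items):
    for name, qty, price_each in items:
        if qty <= 0 or price_each < 0:
            raise ValueError(f"bad line item: {name}")

def price(items):
    return sum(qty * unit for _, qty, unit in items)

def render(total):
    return f"total: ${total:.2f}"

def checkout(items):
    validate(items)
    return render(price(items))
```

Because external behavior is preserved, the legacy and refactored versions can be run side by side against the same inputs, which is exactly the safety net modernization teams rely on when splitting a monolith.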

Advantages of Refactoring

  • Enhanced developer productivity—productivity rises when developers work in a cloud-native environment, with code that can be clearly understood, and with the ability to integrate their software with other cloud resources, thereby leveraging existing functions rather than coding them into their own applications.
  • Eliminated technical debt—by correcting all the quick fixes, coding shortcuts, compromises, and just plain bad programming that typically seep into legacy applications over the years, refactoring can eliminate technical debt.
  • Better maintenance—whereas a monolithic codebase can be extremely difficult to parse and understand, refactored code is far more understandable. That makes a huge difference in the application’s maintainability.
  • Simpler integrations—because a microservice architecture is fully cloud-enabled, refactored applications can easily integrate with other cloud-based resources.
  • Greater adaptability—in a microservice-based codebase each function can be addressed independently, allowing modifications to be made cleanly and iteratively, without fear that one change might ripple through the entire system.
  • High scalability—because the codebase has been reshaped into a cloud-native architecture, autoscaling can be easily implemented.
  • Improved performance—the refactoring process optimizes the code for the functions it performs. This usually results in fewer bottlenecks and greater throughput.

Disadvantages of Refactoring

The main disadvantage of the refactoring approach is that it is far more complex, time-consuming, resource-intensive, and risky than rehosting or replatforming. That’s because the code is extensively modified. Refactoring must be done extremely carefully, by experts who know what they are doing, to avoid introducing difficult-to-find bugs or behavioral anomalies into the code. And that increases costs in both time and money.

On the other hand, the automated, AI-driven refactoring tools available today can take much of the complexity, time, cost, and risk out of the refactoring process.

When to Use Refactoring

Companies that need maximum flexibility and agility to keep pace with the demands of customers and the challenges of competitors will typically find that refactoring is their best choice. Though the up-front costs of refactoring are the greatest of the options we’ve considered, the ability of microservices-based applications to use only the cloud resources needed at a particular time will keep long-term operating expenses much lower than can be achieved with the other options.

Choosing Your Modernization Approach

How can you determine the approach you should use for modernizing your legacy applications? Here are some steps you should take:

1. Understand Your Business Strategy and Goals

Why are you considering modernizing your legacy applications? What business interests will be served by doing so? The only way to determine which applications should be modernized and how is to examine how each serves the goals your business is trying to achieve.

2. Assess Your Applications

In light of your business goals, determine which applications are in greatest need of modernization, and what the end-product of that upgrade needs to be.

3. Decide Whether to Truly Modernize or Just Migrate

Rehosting and replatforming are not really about modernizing applications. Rather, their focus is on simply getting them moved to the cloud. That can be the first step in a modernization effort, but just migrating an application to the cloud pretty much as-is does little to enable it to become a full participant in the modern cloud ecosystem.

In general, migration is a short-term, tactical approach, while modernization is a more long-term solution.

4. Repeat Steps 1-3 Again, and Again, and…

Application modernization is not a one-and-done deal. As technology continues to evolve at a rapid pace, you’ll need to periodically revisit these assessments of how well your business-critical applications are contributing to current business objectives, and what improvements might be needed. Otherwise, the software you so carefully modernize today might become, after a few years, your new legacy applications.

Related: Preventing Monoliths: Why Cloud Modernization is a Continuum

Making the Choice

As we’ve seen, rehosting or replatforming are the quickest, easiest, and least costly ways to bring monolithic application services at least partially into the cloud. But those applications remain hamstrung when it comes to taking advantage of the cloud’s extensive capabilities.

Refactoring, on the other hand, is more expensive and time-consuming at the beginning, but positions applications to function as true cloud-native resources that can be much more easily adapted as requirements change. 

If you’ve got an executive mandate to move beyond just a “quick fix” approach to your legacy applications, you should strongly consider refactoring. And remember that by employing today’s sophisticated, AI-driven application modernization tools, the time and cost gaps between refactoring on the one hand, and rehosting or replatforming on the other, can be significantly narrowed.

A good example of such a tool is the vFunction Platform. It’s a state-of-the-art application modernization platform that can rapidly assess monolithic legacy applications and transform them into microservices. It also provides decision-makers with data-driven assessments of legacy code that allow them to determine how to proceed with their modernization efforts. To see how vFunction can help your company get started on its journey toward legacy application modernization, schedule a demo today.

Modernizing Legacy Code: Refactor, Rearchitect, or Rewrite?

If your company is like most, you have legacy monolithic applications that are indispensable for everyday operations. Valuable as they are, due to their traditional architecture, those applications are almost certainly hindering your company’s ability to achieve the agility, flexibility, and responsiveness necessary to keep up with the rapidly shifting demands of today’s marketplace. That’s why refactoring legacy code should be high on your priority list.

Almost by definition, legacy apps lack the functionality and adaptability required for them to seamlessly integrate with the modern, cloud-based ecosystem that defines today’s technological landscape. In an era when marketplace requirements are constantly evolving, continued dependence on apps with such limitations is a recipe for eventual disaster. That’s why the pressure to modernize is growing by the day.

Why Refactoring Legacy Code is Critical

Most enterprises today realize that they must do something to modernize the legacy apps on which they still depend. In fact, in CIO Magazine’s 2022 State of the CIO survey, 40% of CIOs said that modernizing infrastructure and applications was their focus. But what will it take to make legacy app modernization a reality?

The basic issue that makes most legacy applications so ill-suited to fully participate in today’s cloud ecosystem is that they have a monolithic architecture. That means that the code is organized essentially as a single unit, with various functions and dependencies interwoven throughout.

Such code is brittle, inflexible, and hard to understand; modifying its functionality to meet new requirements is typically an extremely difficult and risky process.

As long as an application retains its monolithic structure, there’s little hope of any significant modernization. So, the first step in most efforts to modernize legacy applications is to transform them from a monolithic structure to a cloud-native, microservices architecture. And the first step in accomplishing that transformation is refactoring.

Related: Migrating Monolithic Applications to Microservices Architecture

The refactoring process restructures and optimizes an application’s code to meet modern coding standards and allow full integration with other cloud-based applications and systems.

But why “cloud-based”?

The Importance of the Cloud

The cloud has become the focal point of intense and continuous technological innovation—most software advancements are birthed and deployed in the cloud. That’s why Gartner projects that by 2025, 95% of new digital workloads will be cloud-native. What’s more, according to Forbes, 77% of enterprises, and 73% of all organizations, already have at least some of their IT infrastructure in the cloud.

The cloud is critical to modernization because it provides a well-established software ecosystem that allows newly cloud-enabled legacy apps to tap into a wide range of existing functional capabilities that don’t have to be programmed into the app itself.

That’s why today’s norm for modernizing legacy apps is to start by moving them to the cloud. Once relocated to the cloud and adapted to interoperate in that environment, such applications gain some substantial advantages, including improvements in performance, scalability, security, agility, flexibility, and operating costs.

But the degree to which such benefits are realized depends on how that cloud transfer is accomplished: will the app be optimized for the cloud environment, or just shifted basically intact from its original one?

Migration vs Modernization

Many companies begin their modernization journey by simply migrating legacy software to the cloud. An app is transferred, pretty much as-is, without altering its basic internal structure. Some minor changes may be made to meet specific needs, but for the most part, the app functions exactly as it did in its original environment.

Because the app retains its original structure and functionality, it also retains the defects that undermine its usefulness in the modern technological context. For example, if the codebase was monolithic before migration, it remains monolithic once it reaches the cloud. Such apps bring with them all the limitations that plague the monolithic architectural pattern, including an inability to integrate with other cloud-based systems.

Migration represents an essentially short-term, tactical approach that aims at alleviating immediate pain points without making fundamental changes to the codebase.

Modernization, on the other hand, is a more long-term, strategic approach to updating legacy apps. The application isn’t simply shifted to the cloud. Rather, as part of the migration process much of the original code is significantly altered to meet cloud-native technical standards.

That enables the app to fully interoperate with other applications and systems within the cloud ecosystem, and thereby reap all the benefits that cloud-native apps inherit.

Options for Modernizing Legacy Applications

Gartner identifies seven options for upgrading legacy systems. These may be grouped into two broad categories:

  • Migration options that simply transfer the software to the cloud essentially as-is
  • Modernization options that not only migrate the application to the cloud but which, as an essential part of the migration process, adapt it to function in that environment as cloud-native software 

Let’s examine Gartner’s list of options in light of that distinction:

Migration methods

  • Encapsulate: Connect the app to cloud-based resources by providing API access to its existing data and functions. Its internal structure and operations remain unchanged.
  • Rehost (“Lift and Shift”): Migrate the application to the cloud as-is, without significantly modifying its code.
  • Replatform: Migrate the application’s code to the cloud, incorporating small changes designed to enhance its functionality in the cloud environment, but without modifying its existing architecture or functionality.
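Of the migration methods, encapsulation is the only one that touches the application’s interfaces at all. The idea can be sketched with a thin wrapper that exposes a legacy routine through a clean, well-defined API while leaving the original code untouched. (The legacy function, its signature, and the price data below are all invented for illustration.)

```python
# Hypothetical legacy routine: left untouched, exactly as it runs today.
def legacy_price_lookup(sku, region_code):
    """Legacy function with a positional, poorly documented signature."""
    table = {("A100", 1): 19.99, ("A100", 2): 21.49}
    return table.get((sku, region_code), 0.0)

# Encapsulation layer: a documented, JSON-friendly API over the legacy code.
# Cloud services talk only to this interface; the legacy internals stay opaque.
def get_price(request: dict) -> dict:
    sku = request["sku"]
    region = request.get("region", 1)   # sensible default for new callers
    return {"sku": sku, "region": region,
            "price": legacy_price_lookup(sku, region)}

print(get_price({"sku": "A100", "region": 2}))
# prints: {'sku': 'A100', 'region': 2, 'price': 21.49}
```

The legacy behavior, including its quirks, is preserved; only the way other systems reach it changes.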

Modernization methods

  • Refactor: Restructure and optimize the app’s code to meet modern standards without changing its external behavior.
  • Rearchitect: Create a new application architecture that enables improved performance and new capabilities.
  • Rewrite: Rewrite the application from scratch, retaining its original scope and specifications.
  • Replace: Completely eliminate the original application, and replace it with a new one. This option requires such an extreme investment, in terms of time, cost, and risk, that it is normally used only as a last resort.

Since our concern in this article is with truly modernizing legacy apps rather than just migrating them to the cloud or entirely replacing them, we’ll limit our consideration to the modernization options: refactoring, rearchitecting, and rewriting.

Related: Legacy Application Modernization Approaches: What Architects Need to Know

Refactoring vs Rearchitecting vs Rewriting

Let’s take a closer look at each of these modernization options.

Refactoring

As we’ve seen, refactoring legacy code is fundamental to the modernization process. According to the Agile Alliance, one of the major benefits of refactoring is that it

“improves objective attributes of code (length, duplication, coupling and cohesion, cyclomatic complexity) that correlate with ease of maintenance.”

As a result of those improvements, refactored legacy code is simpler and cleaner; it’s also easier to understand, update with new features, and integrate with other cloud-based resources. Plus, the app’s performance will typically improve.

Because no functional changes are made to the app, the risk of new bugs being introduced during refactoring is low.

One significant advantage of refactoring is that it can (and should) be an incremental, iterative process that proceeds in small steps. Developers operate on small segments of the code and work to ensure that each is fully tested and functioning correctly before it is incorporated into the codebase.

As a result, when refactoring is done correctly, the operation of the overall system is never disrupted. This also eliminates the necessity of maintaining two separate codebases for the original and the updated code.

The fundamental purpose of refactoring legacy code is to convert it to a cloud-native structure that allows developers to easily adapt the application to meet changing requirements. A valuable byproduct of the process is the elimination of technical debt through the removal of the coding compromises, shortcuts, and ad hoc patches that often characterize legacy code.
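At a tiny scale, the principle can be sketched as follows: external behavior (inputs and outputs) is preserved while tangled logic is split into small, single-purpose units. The pricing rules here are invented purely for illustration.

```python
# Before: one tangled function mixing validation, business rules, and math.
def price_before(qty, unit):
    if qty <= 0:
        return 0.0
    total = qty * unit
    if qty >= 100:
        total = total * 0.9
    elif qty >= 10:
        total = total * 0.95
    return round(total, 2)

# After: identical behavior, refactored into small single-purpose functions.
def discount_rate(qty):
    """Isolated business rule: volume discounts."""
    if qty >= 100:
        return 0.9
    if qty >= 10:
        return 0.95
    return 1.0

def price_after(qty, unit):
    if qty <= 0:          # validation kept separate from pricing rules
        return 0.0
    return round(qty * unit * discount_rate(qty), 2)

# External behavior is unchanged across the whole input range checked here.
assert all(price_before(q, 3.5) == price_after(q, 3.5) for q in range(-5, 200))
```

Each extracted function can now be tested, documented, and modified on its own, which is exactly what the monolithic original made difficult.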

Rearchitecting

Rearchitecting is used to restructure the application’s codebase to enable improvements in areas such as performance and scalability. It’s often employed when business requirements change and the application needs to add functionality that its current structure doesn’t support. Rearchitecting allows such changes to be incorporated without developers having to rewrite the app from scratch.

Because it goes beyond refactoring by making fundamental changes to the structure and operation of the code, rearchitecting is more complex and time-consuming, and it carries a higher risk of introducing bugs or business process errors into the code.

One of the major risk factors associated with rearchitecting (and with rewriting as well) is that with most legacy applications, documentation about not just the original requirements, but also of how the code has been modified along the way (and for what reasons) is inadequate or entirely missing.

For that reason, any rearchitecting or rewriting effort must be preceded by a thorough assessment of the original code so that developers gain a deep level of understanding before making changes. Otherwise, there is a high risk that even if the new code is technically bug-free, important business processes may be omitted or inadvertently changed because developers overlooked their implementations in the original code.

Rewriting

Full rewrites most often occur with legacy applications that are specialized and proprietary. Usually, the intent is not to modify the functionality or user interface in major ways, but to move to a modern (usually microservices) architecture without having to deconstruct the existing code to understand how it works.

Rewriting allows developers to start with a clean slate and implement the application requirements using modern technologies and coding standards.

As with rearchitecting, rewriting brings with it a significant danger of overlooking business process workflows that are implicit in the legacy code because of ad hoc patches and modifications made over the years, but which were never explicitly documented. Developers also shouldn’t forget that the legacy app is still in use because it works—it will have been heavily debugged and patched through time so that even low probability or extreme operational conditions are handled, if not gracefully, at least adequately.

For these reasons, developers involved in a rewrite must be extremely careful to ensure that all of the application’s use scenarios, whether documented or not, are uncovered and explicitly implemented in the new code.

One of the greatest dangers with a rewrite is that until it is completed, it may be necessary to freeze the functionality of the original app—otherwise, the rewrite is chasing a moving target. And in today’s environment of ever-accelerating technological change, that can be a recipe for disaster.

Joel Spolsky, formerly a Program Manager at Microsoft, and now Chairman of the Board at Glitch, cites a case in point. Netscape was once the leader in the internet browser market, but it made a fatal mistake by attempting a full rewrite of its browser code.

That effort took three years, during which Netscape was unable to update the functionality of its product because the original codebase was frozen. Competitors forged ahead with innovations, and Netscape’s market share plummeted. The company never recovered. According to Spolsky,

Netscape made “the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.”

Doing a complete rewrite of a legacy application may be necessary in some cases, but such a project should not be undertaken without a full evaluation of the associated costs and risks. It’s tempting to just clear the decks and start over without all the complexities of dealing with inherited code. But, as experts like Spolsky are quick to say, doing so is usually a mistake.

Refactoring is Key

Refactoring, rearchitecting, and rewriting are not mutually exclusive options. They can be seen as points along a continuum in the process of modernizing legacy applications:

  1. Start by refactoring legacy code into microservices. This gives the app an essentially cloud-enabled codebase that can be easily integrated with other cloud-based resources and positions it for further updates and improvements.
  2. If the application needs new functionality or performance levels that can’t be achieved with its original structure, rearchitecting may be in order.
  3. If rearchitecting to achieve the required functionality appears to be too complex or risky, starting from scratch by completely rewriting the app may be the best option.

Whichever option is ultimately pursued, refactoring should be the starting point because it produces a codebase that’s far easier for developers to understand and work with than was the original.

Plus, refactoring will unveil hidden dependencies and business process workflows buried in the code that may be missed if a development team goes straight to rearchitecting or rewriting as their initial step.

Note that all of these options require a substantial investment of time and expertise, especially if they are pursued through a mostly manual process using tools that were never designed for application modernization. But that need not, and should not, be the case.

Simplify Legacy App Modernization

The vFunction platform is specifically designed for AI-driven, cloud-native modernization of legacy applications. With it, designers can rapidly and incrementally modernize their legacy apps, and unlock the power of the cloud to innovate and scale.

The vFunction Assessment Hub uses its AI capabilities to automatically assess your legacy applications estate to help you prioritize and make a business case for modernization of a particular app or set of applications. This analysis provides a data-driven assessment of the levels of complexity, risk, and technical debt associated with the application.

Once this assessment has been performed, the vFunction Modernization Hub can then, under the direction of architects and developers, automatically transform complex monolithic applications into microservices. Through the use of these industry-leading vFunction capabilities, the time, complexity, risk, and cost of a legacy app modernization project can be substantially reduced. To see how vFunction can smooth the road to legacy application modernization at your company, schedule a demo today.

The CIO Guide to Modernizing Monolithic Applications

As the pace of technological change continues to accelerate, companies are being put under more and more pressure to improve their ability to quickly react to marketplace changes. And that, in turn, is putting corporate CIOs on the hot seat.

In a recent McKinsey survey, 71% of responding CIOs said that the top priority of their CEO was “agility in reacting to changing customer needs and faster time to market.” Those CEOs are looking to digital technology to enable their companies to keep ahead of competitors in a constantly evolving market environment.

CIOs are tasked with providing the IT infrastructure and tools needed to drive the marketplace innovation and agility required to accomplish that goal.

But in many cases CIOs are facing a seemingly intractable problem—they’ve inherited a suite of legacy applications that are indispensable to the company’s daily operations, but which also have very limited capacity for the upgrades necessary for them to be effective in the cloud-native, open-source technological landscape of today.

As a recent report by Forrester puts it,

“Most legacy core software systems are too inflexible, outdated, and brittle to give businesses the flexibility they need to win, serve, and retain customers.”

But because such systems are still critical for day-to-day operations, CIOs can’t just get rid of them. Rather, a way must be found to provide them with the flexibility and adaptability that will enable them to be full participants in the modern technological age.

The Problem with Monoliths

The fundamental cause of the brittleness and inflexibility that characterize most legacy systems is their monolithic architecture. That is, the codebase (which may have millions of lines of code) is a single entity with functionalities and dependencies interwoven throughout. Such applications are extremely difficult to update because a change to any part of the code can ripple through the application, causing unintended operational changes or failures in seemingly unrelated parts of the codebase.

Because they are inflexible and brittle, such applications cannot be easily updated with new features or functions—they were not designed with that capability in mind. A much broader transformation is required, one in which the application’s codebase is restructured in ways that allow it to be upgraded while maintaining the original scope. That broad restructuring is referred to as application modernization.

Application Modernization and The Cloud

What, exactly, is application modernization? Gartner provides this description:

“Application modernization services address the migration of legacy to new applications or platforms, including the integration of new functionality to provide the latest functions to the business.”

There are two key aspects of this definition: migration and integration.

Because the cloud is where the technological action is today, most application modernization efforts involve, as a first step, migrating legacy apps from their original host setting to the cloud. As McKinsey says of this trend:

“CIOs see the cloud as a predominant enabler of IT architecture and its modernization. They are increasingly migrating workloads and redirecting a greater share of their infrastructure spending to the cloud.”

The report goes on to note that McKinsey expects that by 2022, 75% of corporate IT workloads will be housed in the cloud.

That leads to the second element of the Gartner definition: integration. If legacy applications are to be effective in the cloud environment, they must be integrated into the open services-based cloud ecosystem.

That means it’s not enough to simply migrate applications to the cloud. They must also be transformed or restructured so that integration with cloud-native resources is not just possible, but easy and natural.

The fundamental purpose of application modernization is to restructure legacy code so that it is easily understandable to developers, and can be quickly updated to meet new business requirements.

Transitioning From a Monolithic Architecture to Microservices

What does it take to transform legacy apps so that they are not only cloud-enabled, but they fit as naturally into the cloud landscape as do cloud-native systems?

As we’ve seen, the fundamental problem that causes the rigidity and inflexibility that must be overcome in transforming legacy apps is their monolithic architecture. Monolithic applications are self-contained and aren’t always easy to integrate with other applications or systems. The codebase is a single entity in which all the functions are tightly coupled and interdependent. Such an app is, in essence, a “black box” as far as the outside world is concerned: its inputs and outputs can be observed, but its internal processes are entirely opaque.

If an app is to be integrated into the cloud’s open-source ecosystem, its functions must somehow be separated out so that they can interoperate with other cloud services. The way that’s normally accomplished is by refactoring the legacy code into microservices.

Related: Migrating Monolithic Applications to Microservices Architecture

What are Microservices?

Microsoft provides a useful description of the microservices concept:

“A microservices architecture consists of a collection of small, autonomous services. Each service is self-contained and should implement a single business capability.”

The key terms here are “small” and “autonomous.” Microservices may or may not be literally small, but they should be independent and loosely coupled, each with a specific piece of functionality to cover. Each is a separate codebase that performs only a single task, and each can be deployed and updated independently of the others. Microservices communicate with one another and with other resources only through well-defined APIs; there is no external visibility into, or coupling with, their internal functions.
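A toy sketch of the pattern: each “service” below is self-contained, owns its own data, and is reachable only through a narrow request/response interface. The service names and message shapes are illustrative only, not a real microservices framework.

```python
# Each service owns its data and exposes one narrow entry point.
class InventoryService:
    def __init__(self):
        self._stock = {"A100": 3}        # private state: nothing reads it directly

    def handle(self, request: dict) -> dict:
        # Well-defined API: the only way in or out of this service.
        return {"in_stock": self._stock.get(request["sku"], 0) > 0}

class OrderService:
    def __init__(self, inventory_api):
        self._inventory = inventory_api  # depends on an API, not on internals

    def handle(self, request: dict) -> dict:
        check = self._inventory({"sku": request["sku"]})
        return {"accepted": check["in_stock"]}

inventory = InventoryService()
orders = OrderService(inventory.handle)  # services wired together only via APIs
print(orders.handle({"sku": "A100"}))    # prints: {'accepted': True}
```

Because `OrderService` knows nothing about how inventory is stored, either service can be rewritten, redeployed, or scaled without touching the other, which is the property a monolith lacks.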

Advantages of the microservices architecture include:

  • Agility: Because each microservice is small and independent, it can be quickly updated to meet new requirements without impacting the entire application.
  • Scalability: To scale any feature of a monolithic application when demand increases, the entire application must be scaled. In contrast, each microservice can be scaled independently without scaling the application as a whole. In the cloud environment, not having to scale the entire app can yield substantial savings in operating costs.
  • Maintainability: Because each microservice is small and does only one thing, maintenance is far easier than with a monolithic codebase, and can be handled by a small team of developers.
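The scalability point above can be made concrete with some back-of-the-envelope arithmetic. The resource figures are invented for illustration: if only one feature is under load, a monolith must replicate everything, while a microservices deployment replicates just the hot service.

```python
# Hypothetical resource cost (arbitrary units) of one instance of each part.
SERVICES = {"search": 2, "checkout": 2, "reports": 2, "auth": 2, "catalog": 2}
MONOLITH_COST = sum(SERVICES.values())   # one monolith instance carries everything

def cost_to_scale(hot_service, replicas):
    # Monolith: every replica duplicates the whole application.
    monolith = MONOLITH_COST * replicas
    # Microservices: one baseline instance of each, plus extra copies of the hot one.
    micro = sum(SERVICES.values()) + SERVICES[hot_service] * (replicas - 1)
    return monolith, micro

mono, micro = cost_to_scale("checkout", replicas=5)
print(mono, micro)   # prints: 50 18
```

Under these made-up numbers, handling a five-fold spike in checkout traffic costs 50 units for the monolith but only 18 for the microservices deployment; the gap widens as the application grows.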

The key task of legacy application modernization is to decompose a monolithic codebase into a collection of microservices while maintaining the functionality of the original application.

But how is that to be accomplished with legacy code that is little understood and probably not well documented?

Options for Transforming Monolithic Code to Microservices

Gartner has identified seven options for migrating and upgrading legacy systems.

  1. Encapsulate: Connect the app to cloud-based resources by providing API access to its existing data and functions. Its internal structure and operations remain unchanged.
  2. Rehost (“Lift and Shift”): Migrate the application to the cloud as-is, without significantly modifying its code.
  3. Replatform: Migrate the application’s code to the cloud, incorporating small changes designed to enhance its functionality in the cloud environment, but without modifying its existing architecture or functionality.
  4. Refactor: Restructure and optimize the app’s code to a microservices architecture without changing its external behavior.
  5. Rearchitect: Create a new application architecture that enables improved performance and new capabilities.
  6. Rewrite: Rewrite the application from scratch, retaining its original scope and specifications.
  7. Replace: Completely eliminate the original application, and replace it with a new one.

All of these options are sometimes characterized as “modernization” methodologies. Actually, while encapsulating, rehosting, or replatforming do migrate an app (or in the case of encapsulation, its interfaces) to the cloud, no restructuring of the codebase takes place. If the app was monolithic in its original environment, it’s still monolithic once it’s housed in the cloud. So, these methods cannot rightly be called modernization options at all.

Neither does replacement qualify as a modernization option since rather than restructuring the legacy codebase, it throws it out completely and replaces it with something entirely new.

So, to truly modernize a legacy application from a monolith to microservices will involve the use of some combination of refactoring, rearchitecting, and rewriting. Let’s take a brief look at each of these:

  • Refactoring: Refactoring will be the first step in almost any process of modernizing monolithic legacy applications. By converting the codebase to a cloud-native, microservices structure, refactoring enables the app to be fully integrated into the cloud ecosystem. And once that’s accomplished, developers can easily update the app with new features to meet specific requirements.
  • Rearchitecting: Rearchitecting is usually employed to enable improvements in areas such as performance and scalability, or to add features that are not supported by the original design. Because rearchitecting makes fundamental changes to the structure and operation of the code, it is more complex, time-consuming, and risky than simply refactoring.
  • Rewriting: Completely rewriting the legacy code is the most complex, time-consuming, and risky of all the modernization options. It is usually resorted to when developers wish to avoid spending the time and effort required to deconstruct the existing code to understand how it works. Because a rewrite carries the highest risk of causing disruptions to a company’s business operations, it is normally used only as a last resort.

Although rearchitecting or rewriting may be appropriate for some cases, refactoring should always be the starting point because it produces a codebase that developers can easily upgrade with new features or functionality. As McKinsey puts it:

“It [is] critical for many applications to refactor for modern architecture.”

Challenges of Modernization

All of the modernization options (refactoring, rearchitecting, and rewriting) require extensive changes to the legacy application’s codebase. That’s not a task to be undertaken lightly. Legacy apps typically hold onto their secrets very tightly, due to several common realities:

  • The developers who wrote and maintained the original code, which in some cases is decades old, have by now retired or are otherwise unavailable.
  • Documentation, both of the original requirements and modifications made to the code through the years, is often incomplete, misleading, or missing entirely.
  • Patches to the code that handle rare exceptions or boundary conditions may not be documented at all, and can be understood only through minute examination of the code.
  • Similarly, changes to business process workflows may have been incorporated through code patches that were never adequately documented or covered by tests. If such workflows are not discovered and accounted for in a modernization effort, important functions of the application may be lost.

Any modernization approach will involve a high degree of complexity, time, and expertise. McKinsey quotes one technology leader as saying,

“We were surprised by the hidden complexity, dependencies and hard-coding of legacy applications, and slow migration speed.”

Building a Modernization Roadmap

If you’re trying to drive to someplace you’ve never been before, it’s very helpful to have a map. That’s especially the case if you’re driving toward modernization of your legacy applications. You need a roadmap.

The first stop on your modernization roadmap will be an assessment of the goals of your business, where you currently stand in relation to those goals, and what you need from your technology to enable you to achieve those goals.

Then you’ll want to develop an understanding of exactly what you want your modernization process to achieve. You’ll analyze your current application portfolio in light of your business and technology goals, and determine which apps must be modernized, what method should be used, and what priority each app should have.

To learn more about creating a modernization roadmap, take a look at the following resource:

Related: Succeed with an Application Modernization Roadmap

Why Automation is Required for Successful Modernization

Converting a monolithic legacy app to a microservices architecture is not a trivial exercise. It is, in fact, quite difficult, labor-intensive, time-consuming, and risky. At least, it is if you try to do it manually.

It’s not unusual for a legacy codebase to have millions of lines of code and thousands of classes, with embedded dependencies and hidden flows that are far from obvious to the human eye. That’s why using a tool that automates the process is a practical necessity.

By intelligently performing static and dynamic code analyses, a state-of-the-art, AI-driven automation tool can, in just a few hours, uncover functionalities, dependencies, and hidden business flows that might take a human team months or years to unravel by manual inspection of the code.
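At a vastly smaller scale than a commercial tool, the static half of that idea can be illustrated with Python’s standard `ast` module: parse source text and record which functions call which, the raw material of a dependency graph. The sample source below is invented.

```python
import ast

# Invented sample source standing in for a (much larger) legacy codebase.
SOURCE = """
def load_order(oid):
    return fetch(oid)

def bill(oid):
    order = load_order(oid)
    return charge(order)
"""

def call_graph(source: str) -> dict:
    """Map each top-level function to the names it calls (static analysis)."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = {c.func.id for c in ast.walk(node)
                     if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)}
            graph[node.name] = sorted(calls)
    return graph

print(call_graph(SOURCE))
# prints: {'load_order': ['fetch'], 'bill': ['charge', 'load_order']}
```

Real modernization tools go far beyond this sketch by adding dynamic analysis, observing the code as it actually executes, to catch dependencies and flows that never appear explicitly in the source.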

And not only can a good modernization tool analyze and parse the monolithic codebase, it can actually refactor and rearchitect the application automatically, saving the untold hours that a team of highly skilled developers would otherwise have to put into the project.

According to McKinsey, companies that display a high level of agility in their marketplaces have dramatically higher rates of automation than those characterized as the “laggards” in their industries.

The vFunction Application Modernization Platform

The vFunction platform was built from scratch to be exactly the kind of automation tool that’s needed for any practical application modernization effort. It has advanced AI capabilities that allow it to automatically analyze huge monolithic codebases, both statically and during the actual execution of the code.

As the vFunction Assessment Hub crawls through your code, it automatically builds a lightweight assessment of your application landscape that helps you prioritize and make a business case for modernization. Once you’ve selected the right application to modernize, the vFunction Modernization Hub takes over, analyzing and automatically converting complex monolithic applications into extracted microservices.

vFunction has been demonstrated to speed up the modernization process by a factor of 15 or more, which can reduce the time required by such projects from months or years to just a few weeks.

If you’d like to experience firsthand how vFunction can help your company modernize its monolithic legacy applications, schedule a demo today.

Survey: 79% of Application Modernization Projects Fail

In the recent report “Why App Modernization Projects Fail”, vFunction partnered with Wakefield Research to gather insights from 250 IT professionals at a director level or higher in companies with at least 5000 employees and one or more monolithic systems.

Application modernization is not a new concept: any company that develops software will, at some point, need to modernize it. As a codebase grows, it becomes more complex, and engineering velocity slows down.

So what is elevating app modernization to a top priority for so many companies now? We see two major trends that are driving forces in the market:

  1. Digital Transformation – Many companies expedited these initiatives in response to the COVID-19 pandemic
  2. Shift to the Cloud – The benefits of cloud platforms have driven more companies to institute an executive mandate to move to the cloud

We also see competitive pressures increasingly driving companies to embark on modernization projects. Digital natives, whose software was built for the cloud with modern (cloud-native) architectures and stacks, can respond rapidly to the market with innovative features and functionality, while established companies struggle with scalability and reliability issues. The result is heavy competitive pressure in the fight for customer loyalty.

Today, companies spend years mired in complex, lengthy, and inefficient app modernization projects, manually trying to untangle monolithic code.

So, it is not surprising that 79% of app modernization projects fail, averaging a cost of $1.5 million and a 16-month timeline.

There are many reasons for this. CIOs, whose position has evolved into one of the most strategic roles on the executive team, are under immense pressure to meet business objectives.

Undoubtedly, this role comes with changing priorities and limited resources. Additionally, architects are charged with modernizing monolithic apps, but often only have limited tools, teams, and time. Given the stakes, it is imperative that the C-Suite has a clear understanding of why modernization projects fail, and how investing in these modernization projects now benefits the company’s present and future. 

To provide that understanding, we partnered with Wakefield Research to survey 250 technology professionals—leaders, architects, and developers at a director level or above—responsible for maintaining at least one monolithic app in a company of at least 5,000 employees.

The insights we gleaned say as much about the changing definition of successful outcomes as they do about cultures and how teams are organized to support these projects. The long-held notion of “lift and shift” is no longer considered a successful modernization outcome, and successful projects require a change in organizational structure to support the targeted modernized architecture.

We hope that this report will not only serve as valuable insight for those responsible for app modernization initiatives, but also as a reminder that the right tools play an invaluable role in the success (or failure) of every such venture.