Five Reasons Tech Debt Accumulates in Business Applications

Almost any organization that develops software for internal or external use will sooner or later have to face the issue of technical debt. That’s especially the case if the organization depends on legacy apps for some of its critical business processing. Because technical debt makes it extremely difficult for a company to maintain and update its apps to meet changing requirements, it can significantly diminish the organization’s ability to achieve its business goals. A report from McKinsey puts it this way:

“Poor management of tech debt hamstrings companies’ ability to compete.”

But before a company can deal effectively with its technical debt, there are several questions it needs to answer: what, exactly, is technical debt, why is it so problematic, where does it come from, and what can be done about it? Let’s take a brief look at these issues.

What is Technical Debt?

TechTarget explains technical debt this way:

“Software development and IT infrastructure projects sometimes require leaders to cut corners, delay features or functionality, or live with suboptimal performance to move a project forward. It’s the notion of ‘build now and fix later.’ Technical debt describes the financial and material costs that come with fixing it later.”

And the financial costs can be substantial. The cost to companies of “fixing” their technical debt is about $361,000 for every 100,000 lines of code, and that number is even higher for Java applications.

As with financial debt, an organization can carry its technical debt load for a while. But because it severely limits the ability of apps to integrate into today’s cloud-based technological ecosystem, sooner or later the debt must be dealt with. Otherwise, the company won’t be able to keep up with the rapidly evolving demands of its marketplace.

Where Technical Debt Comes From

In a recent Ph.D. dissertation, Maheshwar Boodraj of Georgia State University determined that there are five major sources of technical debt in software development projects. Let’s look at each of them.

1. External Factors

This technical debt arises from outside the organization due to conditions or events that are beyond its control. For example, developers may inadvertently introduce technical debt into their applications by using or integrating with external technologies the team doesn’t control and may not fully understand. Technical debt can also be introduced by contractors who work to standards that are different from those adopted by your organization.

2. Organizational Factors

Deficiencies in a company’s organizational structure or practices often generate technical debt. Here are some factors that may cause that to happen:

  • Misalignment between business and technical stakeholders. Business representatives, who may not fully understand the technology, sometimes exert undue influence over technical decisions. Requirements may be incomplete, constantly changing, or worse, missing altogether. Additionally, developers may feel they don’t have the freedom to push back against requirements that violate budget, schedule, or coding-practice constraints.
  • Inadequate resources. Technical debt can result when developers lack the financial, human, or technological resources they need. An inadequate budget can limit the acquisition of needed talent or technical tools, leading developers to take short-sighted shortcuts.
  • Inadequate leadership. Technical debt may result when leaders don’t provide a clear vision, careful planning, and a long-term rather than short-term focus. Such conditions often lead to high staff turnover, resulting in a loss of institutional knowledge that will inevitably be reflected in the codebase.
  • Not prioritizing technical debt. Technical debt will persist if the organization fails to devote enough time and resources to managing it.
  • Unrealistic schedules. When the pressure of meeting unrealistic delivery schedules causes developers to take shortcuts, the incorporation of significant amounts of technical debt is inevitable.

3. People Factors

Software development teams function most effectively when members understand and abide by appropriate coding standards and best practices. To consistently implement those standards, team members need the requisite skills, experience, training, and commitment. 

If any of these are missing due to inexperience, inadequate leadership, bad team dynamics, self-serving attitudes among team members, or low morale due to stressful conditions, the team won’t have the cohesion necessary to successfully manage technical debt.

4. Process Factors

Effective development teams consistently follow a process that enforces appropriate standards and practices. Minimizing technical debt requires a process that enables:

  • Adequate focus on both business requirements and non-functional requirements such as usability, reliability, scalability, performance, and security
  • Appropriate coding standards that ensure proper code reviews, adequate documentation, comprehensive testing and QA, and ongoing refactoring as necessary
  • Proper definition of the minimum viable product (MVP) for each release
  • Adherence to the Continuous Integration and Continuous Delivery (CI/CD) paradigm
  • Avoidance of morale-draining team dynamics, such as too-frequent or over-extended meetings, poor communication within and between teams, and inflexible procedures that make responding to unanticipated conditions difficult or frustrating for team members

5. Product Factors

An application that has a substantial amount of technical debt will usually be marked by one or more of these characteristics:

  • Complex Code that’s difficult to understand, maintain, and upgrade
  • Duplicated Code that appears in multiple places in the codebase (a small sketch of this follows the list below)
  • Bad Code Structure resulting from the use of inappropriate development frameworks, tools, or abstractions
  • Undisciplined Reuse of Existing Code (including code from open-source libraries) that may be carrying its own load of technical debt
  • Monolithic Architecture that is by nature difficult to understand, adapt, or refactor
  • Ad Hoc Code Fixes added over time (and often inadequately documented) as bug fixes or new feature implementations
  • Poor Architectural Design resulting in a codebase that’s fragile, complex, opaque, not easily scalable, difficult to maintain, and too inflexible to integrate into the modern technological ecosystem
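
To make the duplicated-code item above concrete, here is a small, purely illustrative Java sketch (all class and method names are invented) showing the same validation rule copy-pasted into two flows and then consolidated into a single shared helper.

```java
// Purely illustrative (all names invented): the same validation rule is copy-pasted
// into two flows, then consolidated into one shared helper.
class CheckoutFlow {
    boolean isValidEmail(String email) {                  // copy #1
        return email != null && email.contains("@") && email.length() <= 254;
    }
}

class RegistrationFlow {
    boolean isValidEmail(String email) {                  // copy #2, already drifting
        return email != null && email.contains("@");      // the length check was never copied
    }
}

// After consolidation: one implementation, one place to fix bugs.
final class EmailValidator {
    private EmailValidator() {}
    static boolean isValid(String email) {
        return email != null && email.contains("@") && email.length() <= 254;
    }
}
```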

Related: How to Measure Technical Debt for Effective App Modernization Planning

Techniques for Managing Technical Debt

The place to start in managing your technical debt is to first understand what you want to accomplish. Then you must ensure that you have the organizational structure, people resources, and proper coding standards and practices to get you there. Finally, you’ll need to implement a process for refactoring your existing apps. Let’s take a brief look at each of these issues.

What Needs to be Accomplished

In essence, eliminating technical debt in existing apps is about refactoring those apps from the monolithic structure that typically characterizes legacy code into a cloud-native microservices architecture, thereby producing a codebase that can easily be adapted, upgraded, and integrated with other cloud resources.

How Your Organization and Teams Need to Change

In 1967 computer scientist Melvin Conway articulated what’s come to be known as Conway’s Law:

“Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.”

Since the microservices your teams will implement consist of small units of code that do a single task and operate independently of one another, you’ll want to structure your software development organization as a group of small teams, each loosely coupled to other teams, and each having full responsibility for one or more microservices.

The Refactoring Project

Start by analyzing your application portfolio to determine the architectural complexity and technical debt load of each app. You can then prioritize which apps should be refactored (modernized) and in what order. 

That analysis will also provide the information you need to determine the budget, schedule, and required team member skill sets and experience levels for the project. And that information will, in turn, give you the ammunition you need to secure the required budget and resources.

Related: Using Machine Learning to Measure and Manage Technical Debt

Expect Pushback Against Dealing With Technical Debt

Don’t be surprised if you receive some pushback from other stakeholders in your organization. In a recent survey of senior IT professionals, 97% expected organizational pushback to app modernization proposals. As the survey report declares,

“Organizational pushback can hamstring projects before they start.”

The survey respondents were primarily concerned about cost, risk, and complexity. And that concern is warranted—modernization projects cost an average of $1.5 million, require, on average, 16 months to complete, and 79% of them fail to meet expectations.

That’s why being able to present comprehensive and accurate information about those cost, risk, and complexity factors to senior management is so important—it’s the best way to give them confidence that the benefits of dealing with the technical debt in your application portfolio will far outweigh the costs and risks of the project.

You Need the Right Analysis Tool

Trying to manually assess a legacy application portfolio that may contain multiple applications, some with perhaps millions of lines of code, is a losing proposition. That process is in itself so time-consuming and potentially inaccurate as to inspire little confidence in the estimates produced. 

What’s needed is an automated, AI-enabled analysis platform that can perform the required static and dynamic analyses of your applications in a fraction of the time required to do the job manually.

And that’s exactly what vFunction provides. The vFunction Assessment Hub can quickly and automatically analyze complexity and unravel dependencies in your apps to assess the level of technical debt and refactoring risk associated with each app and the portfolio as a whole. Then, the vFunction Modernization Hub can automatically refactor your complex monolithic apps into microservices, substantially reducing the timeframe, risks, and costs associated with that endeavor.

To see first-hand how vFunction can help you manage your technical debt, request a demo today.

The Strangler Architecture Pattern for Modernization

For companies that depend on legacy applications for critical business processing, modernizing those apps to make them compatible with today’s technologically sophisticated cloud ecosystem is crucial. But because most legacy apps are monolithic, updating them can be a difficult, time-consuming, and risky process. 

A monolithic codebase is organized as a single unit that has function implementations and dependencies interwoven throughout. Because a change to one part of the code can generate unexpected side-effects in other parts of the codebase, any update has the potential to cause the app to fail in unpredictable ways.

Yet, if these legacy apps are to continue fulfilling their business-critical missions, they must have the flexibility and adaptability necessary for keeping pace with the ever-evolving requirements of a fast-changing marketplace and technological environment. What’s needed is a means of encapsulating any changes to the legacy code so that only the targeted function is affected.

The Strangler Fig Architecture pattern meets that need. It allows legacy apps to be safely updated by replacing each function with an independent microservice. This enables developers to incrementally modernize specific functions without impacting the operation of other portions of the app.

What is the Strangler Fig Pattern?

Martin Fowler, Chief Scientist at Thoughtworks, coined the term in 2004. He noticed that strangler fig seeds, which germinate in the upper branches of other trees, send down roots that surround and eventually strangle their host tree. In effect, the strangler fig kills the original tree and takes its place.

Fowler saw this as a metaphor for how a large, monolithic software application could be modernized by surrounding it with a new superstructure of microservices that, over time, strangles and replaces the original app. A microservice is a small, self-contained codebase that performs only one task and replaces a single function or service in the legacy app. It can be updated without affecting other parts of the app. 

As new microservices are added over time, they take over the functions of the original codebase one by one until the functionality of the legacy app is entirely replaced by microservices. At that point, the original app has been fully “strangled” and can be decommissioned.

Related: Migrating Monolithic Applications to Microservices Architecture

Why the Strangler Fig Pattern is Ideal for Application Modernization

Faced with the daunting prospect of replacing or rewriting their portfolio of legacy apps, some companies settle for simply migrating them, pretty much as-is, to the cloud. But though that approach may yield some benefits, it falls far short of true modernization. That’s because a monolithic codebase in the cloud is still monolithic, and retains all the detrimental characteristics of that architecture.

The Strangler Fig Pattern enables true legacy app modernization by allowing you to replace the functions of the original app one at a time without having to rewrite the entire app all at once. As key functions are re-implemented one by one as microservices, the app continues to function and can be fully transformed without ever going offline.

A key element of the strangler paradigm is the use of an interface layer, called a façade, between the original app and its microservices superstructure. All communications to and from the legacy app go through the façade, which includes feature flags that you can set to dynamically control whether the original code for a function or its microservice replacement is live.
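
Below is a minimal Java sketch of how such a façade might look; the OrderService interface, class names, and the "orders.use-microservice" flag are hypothetical, not part of any specific product. In a real system the façade is often an API gateway or routing layer, and the flags come from a configuration service rather than an in-memory map.

```java
// Minimal sketch of a strangler facade; all names are hypothetical.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface OrderService {
    String placeOrder(String customerId, String item);
}

class LegacyOrderModule implements OrderService {        // existing code path in the monolith
    public String placeOrder(String customerId, String item) {
        return "legacy-order:" + customerId + ":" + item;
    }
}

class OrderMicroservice implements OrderService {        // new, independently deployed service
    public String placeOrder(String customerId, String item) {
        return "ms-order:" + customerId + ":" + item;     // would normally be a remote call
    }
}

class FeatureFlags {
    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();
    void set(String name, boolean enabled) { flags.put(name, enabled); }
    boolean isEnabled(String name) { return flags.getOrDefault(name, false); }
}

// The facade: every caller goes through here, never to an implementation directly.
class OrderFacade implements OrderService {
    private final OrderService legacy = new LegacyOrderModule();
    private final OrderService modern = new OrderMicroservice();
    private final FeatureFlags flags;

    OrderFacade(FeatureFlags flags) { this.flags = flags; }

    public String placeOrder(String customerId, String item) {
        // Flip the flag to cut traffic over to the microservice; flip it back to roll back.
        OrderService target = flags.isEnabled("orders.use-microservice") ? modern : legacy;
        return target.placeOrder(customerId, item);
    }
}
```

Rolling back a problematic microservice (see advantage 5 below) is then just a matter of setting the flag back to false; the legacy code path is still there, untouched.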

This approach provides some major advantages:

1. Allows incremental updating

If you elect to entirely rewrite a legacy app, you can’t use the new system until the rewrite (and all testing) is complete. The strangler approach allows you to incrementally add features and capabilities without disrupting the operation of the app or taking it offline.

2. Enables quicker modernization

As each new microservice is added, the benefits it provides, such as increased adaptability, flexibility, scalability, and performance, take effect immediately. As IBM notes,

“The great thing about applying this pattern is that it creates incremental value in a much faster timeframe than if you tried a ‘big bang’ migration in which you update all the code of your application before you release any of the new functionality.”

3. Minimizes risk

Any attempt to replace or upgrade a large monolithic app all at once will almost certainly introduce new bugs that can cause significant downtime once you bring the new codebase online. But, as Bob Reselman of Red Hat explains,

“Small failures are easier to remedy than large ones, hence the essential benefit of the Strangler pattern.”

Because the strangler approach incorporates changes in small steps, with each new microservice being thoroughly tested before going live with the app, downtime due to new bugs can be almost totally eliminated.

4. Allows you to choose the pace of modernization

Since the app is never taken offline, you can implement the modernization project at a pace that’s comfortable for your team (and budget).

5. Allows easy and seamless rollbacks

Rolling back a change that isn’t working correctly is easy. Each new microservice deployment can be quickly and cleanly reversed simply by setting feature flags appropriately.

6. Eliminates the need to maintain two separate codebases

New functions are implemented as microservices that surround the legacy codebase; the original app is never changed. Since any needed changes (including those that correct bugs in the legacy code) are made only to the microservices superstructure, the original codebase need not be separately maintained.

7. Enhances QA

Because microservices can be run in parallel with the original code for QA purposes, each change can be comprehensively tested in the app’s production environment before it goes live with the app.

How Refactoring With Strangler Fig Helps You Avoid Destructive Coding Anti-Patterns

As we’ve seen, the Strangler Fig Pattern provides an ideal template for modernizing legacy apps. However, some other widely used software design patterns yield far more negative results. These are called, appropriately, anti-patterns. Martin Fowler notes that the term was coined in 1995 by programmer Andrew Koenig, who described it this way:

“An antipattern is just like a pattern, except that instead of a solution it gives something that looks superficially like a solution but isn’t one.”

According to Fowler, anti-patterns are extraordinarily dangerous because they initially fool developers into thinking they are appropriate solutions to common software coding problems, only to reveal their detrimental consequences later when the damage has been done. As software engineer Kealan Parr declares:

“In software, anti-pattern is a term that describes how NOT to solve recurring problems in your code. Anti-patterns are considered bad software design… They generally also add “technical debt” – which is code you have to come back and fix properly later.”

Parr lists some of the more common anti-patterns. They include:

Spaghetti Code

This anti-pattern is often encountered in monolithic legacy apps. The term describes a codebase that has little or no structure. There’s no modularization, and function implementations and dependencies are intermingled throughout the code, just like strands of spaghetti on a plate. As a result, the logical flow of the application is extremely difficult to understand. Parr calls it a maintenance nightmare:

“You will constantly break things, not understand the scope of your changes, or give any accurate estimates for your work as it’s impossible to foresee the countless issues that crop up when doing such archaeology/guesswork.”

Because of these characteristics, updating or adding features to spaghetti code is an extraordinarily difficult and risky process.

Golden Hammer

This anti-pattern derives its name from a quote attributed to Abraham Maslow:

“I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”

That’s a human tendency to which software developers are as prone as anyone else. When they have a high level of competence with, love for, or comfort with particular coding tools, languages, or architectures, they naturally seek to apply them across the board, even in situations where they are not the best options. The result can be highly detrimental—as Parr says, “Your whole program could end up taking a serious performance hit because you are trying to ram a square into a circle shape.”

Boat Anchor

Boat anchors do nothing but retard the progress of the vessel to which they are attached. And that’s exactly what the boat anchor anti-pattern does with software. The term describes code modules that developers insert into the codebase to implement functions not currently needed or used. They do so because they think those functions might be needed later.

This too, says Parr, is a maintenance nightmare. Developers who are new to the codebase, or who have not worked with it for some time, will have a hard time identifying boat anchor modules and figuring out whether they impact the logical flow of the program or are entirely superfluous. There’s a real possibility of your developers spending significant amounts of time and effort on understanding and debugging modules that literally do nothing.

Dead Code

This term describes code that, unlike boat anchors, implements functions that are not only used in the application but which may be called frequently from many different places in the codebase.

The problem is that it’s not clear what this code is doing or why it’s needed. Perhaps it had an important function at some point, but now the issues it was created to solve no longer exist. On the other hand, it could be crucial for handling infrequent edge or boundary conditions that current developers haven’t yet run into but eventually will. Because you can’t be sure why it’s there, you don’t dare to remove it. So, it remains in the codebase as a time-waster and generator of confusion for the developers who have to deal with it.

Proliferation of Code

This anti-pattern occurs when there are objects in the code that seem to exist only to invoke other more strategic objects. These are essentially useless “middleman” objects that provide no additional value, but only add an unnecessary level of abstraction, and therefore confusion, to the code. Such objects should be bypassed and removed to make the code more easily understandable.
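
Here is a hedged Java sketch of that “middleman” shape (all names are invented): ReportRequester adds nothing of its own, so callers are better served by using ReportGenerator directly and deleting the wrapper.

```java
// Hypothetical names throughout. ReportRequester adds no behavior of its own:
// every call is a pure pass-through to ReportGenerator.
class ReportGenerator {
    String generate(String reportId) { return "report:" + reportId; }
}

class ReportRequester {                        // the useless "middleman" object
    private final ReportGenerator generator = new ReportGenerator();
    String requestReport(String reportId) {
        return generator.generate(reportId);   // no added value, just indirection
    }
}

// Cleanup: callers use ReportGenerator directly and ReportRequester is deleted.
class ReportingClient {
    private final ReportGenerator generator = new ReportGenerator();
    String monthlyReport() { return generator.generate("monthly"); }
}
```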

God Object

This is sometimes called the “Swiss Army Knife” anti-pattern. It describes objects that are accessed by many other objects in the codebase for a multitude of different and often unrelated purposes. Such objects are problematic because they violate the Single Responsibility principle of coding, which says that every class, module, or function should do only one thing. According to software architect Thanh Le, they are “hard to unit test, debug and document” and can be a “maintenance nightmare.”
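
For illustration only (the class names are invented), here is what a god object might look like in Java, alongside the same duties split into single-purpose classes that honor the Single Responsibility principle.

```java
// Invented example of a "god object" that mixes persistence, pricing, and
// notification concerns, followed by the same duties split into single-purpose classes.
class OrderManagerGodObject {
    void saveOrder(String orderId)                        { /* talks to the database */ }
    double applyDiscount(String customerId, double total) { return total * 0.95; }
    void emailInvoice(String customerId, String orderId)  { /* sends email */ }
    void exportAuditLog()                                 { /* ...and a dozen other unrelated duties */ }
}

// Single Responsibility version: each class does one thing and can be tested in isolation.
class OrderRepository { void save(String orderId)                     { /* persistence only */ } }
class DiscountPolicy  { double apply(String customerId, double total) { return total * 0.95; } }
class InvoiceNotifier { void email(String customerId, String orderId) { /* notification only */ } }
```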

Strangler Fig to the Rescue!

Refactoring a legacy app according to the Strangler Fig Pattern will remove anti-patterns such as these from the codebase almost automatically. For example, Strangler Fig refactoring eliminates spaghetti code by re-implementing legacy app functions as a set of independent, single-task microservices that can be easily understood, maintained, and upgraded. 

Similarly, code that does too much is replaced by individual microservices with precisely specified, single-purpose functionality, while unneeded modules are not re-implemented at all. And Golden Hammer technology choices can be eliminated by implementing new microservices using a carefully chosen modern technology stack.

Best Practices to Implement the Strangler Fig Pattern

How can you maximize the benefits of the strangler fig paradigm in modernizing your legacy apps? Here are some best practices:

1. Automate the process using an AI-based modernization platform

As industry insider Oliver White has said,

“Large monolithic applications need an automated, data-driven way to identify potential service boundaries.”

Manual analysis of a monolithic codebase with millions of lines of code is a time-consuming, error-prone process. An automated, AI-based analysis platform can perform that task quickly, comprehensively, and at scale. Using static and dynamic analyses, it can assess the monolithic codebase for technical debt, complexity, and risk; reveal functional flows and dependencies; identify service domain boundaries; and quantify the amount of effort that will be needed to refactor the app.

That information will allow you to determine:

  • the negative impact of your legacy apps’ technical debt on your ability to innovate
  • the ROI that can be realized from modernizing some or all of your legacy apps
  • which applications should be modernized and in what order
  • which legacy app services should be re-implemented as microservices and which should not
  • the functional scope of each microservice
  • which functions are so similar or overlapping that they can be consolidated into a single microservice

Once the analysis phase is complete, a state-of-the-art modernization platform will be able to substantially automate the process of refactoring the monolithic code into microservices.

2. Pick the right starting point

For most companies, it’s not feasible—nor desirable—to modernize all their legacy apps at once. Instead, it’s best to start with those apps that have the greatest business value and which also have a high degree of technical debt. Then, for each app choose functions that have the highest impact on your business operations as the first to be re-implemented as microservices.

3. Pick the right ending point

It’s natural to want to replace all your legacy apps with microservices. But the costs of refactoring an entire legacy suite may exceed the benefits. In such cases, it might be best to continue using the original app for specific functions that are isolated, stable, and don’t require upgrading, while re-implementing as microservices any functions that must be easily upgradeable, or that interact directly with other systems or resources.

4. Follow an incremental, step-by-step process

The Strangler Fig Pattern provides its greatest benefits when it is applied incrementally, one microservice at a time. Avoid trying to modernize entire apps all at once. As one research paper succinctly advises:

“Start small and gradually evolve the system (baby steps).”

5. Implement new functionality only in microservices

When you begin the modernization process, you should freeze the legacy codebase and implement any new functionalities only through microservices. If you continue to make updates to the original app, you create two simultaneously evolving codebases, both of which must be supported, tested, and synchronized.

Related: Simplify Refactoring Monoliths to Microservices with AWS and vFunction

How AWS Migration Hub Refactor Spaces and vFunction Work Together

As an AWS Partner, vFunction provides an automated, AI-driven modernization platform that closely integrates with AWS Migration Hub Refactor Spaces to enable developers to quickly and safely transform complex monolithic Java applications into microservices and deploy them into AWS environments.

Refactor Spaces establishes, maintains, and manages the modernization environment, and orchestrates AWS services across accounts to facilitate the refactoring of legacy apps from monoliths to microservices. Refactor Spaces implements the Strangler Fig Pattern for the target application and allows developers to easily manage communication between services throughout the environment.

Developers begin the refactoring process by using vFunction to generate an automated, AI-based analysis that quantifies the complexity of monolithic legacy apps. Using both static and dynamic analyses, vFunction provides the detailed information regarding technical debt, complexity, and risk that’s required for developing a comprehensive refactoring plan that prioritizes which apps and services will be converted and in what order.

vFunction then automatically decomposes the monolithic apps into microservices. Using sophisticated, AI-driven static analysis, the vFunction platform analyzes architectural flows, classes, usage, memory, and resources to detect and expose critical business domain functions buried in the code and untangle complex dependency relationships.

See vFunction For Yourself

The vFunction platform is unique in its ability to make refactoring monolithic legacy apps into microservices as quick, easy, painless, and safe as possible. It easily handles codebases with tens of millions of lines of code, and can accelerate the modernization process by at least a factor of 15. If you’d like to see for yourself what it can do, please schedule a demo today.

How Much Does it Cost to Maintain Legacy Software Systems?

Many companies depend on legacy software for some of their most business-critical processing. But useful as they are, those applications can hold companies back from being able to keep pace with rapidly changing marketplace demands. The culprit is the technical debt of their legacy apps. 

Technical debt makes software difficult and risky to change, which increases the cost of maintaining legacy software systems. Dealing with technical debt in legacy applications can eat up substantial portions of a company’s IT budget and schedule, diminishing the organization’s ability to create new features and capabilities. In one survey of C-level corporate executives, 70% of respondents said that technical debt severely limits their IT operation’s ability to innovate.

Yet, many companies hesitate to commit themselves to modernizing their legacy apps. They often take the attitude that since their legacy systems are still functioning and doing the job they were designed to do, there’s no need to invest the time, money, and organizational effort that would be required to update them. But is that really the case? What are the true costs of maintaining legacy systems that may be approaching or beyond their technological expiration dates?

How Technical Debt Impacts the Cost of Maintaining Legacy Software Systems

TechTarget explains the concept of technical debt this way:

“Software development and IT infrastructure projects sometimes require leaders to cut corners, delay features or functionality, or live with suboptimal performance to move a project forward. It’s the notion of ‘build now and fix later.’ Technical debt describes the financial and material costs that come with fixing it later.”

As with financial debt, technical debt consists of two distinct components: principal and interest. Both must be paid off before the debt can be retired. In the financial sphere, the concepts of principal and interest are well understood. But how do those terms apply to technical debt?

The Principal on Technical Debt

The principal on your technical debt is the amount you’ll pay to clean up (or replace) the original substandard code and bring that application into the modern world. According to one research report, companies typically incur $361,000 of technical debt for every 100,000 lines of code in their software.
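
To put that figure in perspective, $361,000 per 100,000 lines works out to roughly $3.61 of principal per line of code; as a purely illustrative calculation, a one-million-line application would be carrying about $3.6 million in technical debt principal before any interest is counted.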

Just as with financial debt, you must eventually pay off the principal on your technical debt, and until you do, you’ll pay interest on it.

The Interest on Technical Debt

The interest on technical debt consists of the ongoing charges you incur in trying to keep flawed, inflexible, and outmoded legacy applications running as the technological context for which they were designed recedes further and further into the past. 

It’s an unavoidable cost of maintaining legacy systems. And those interest charges can be substantial—according to InformationWeek, U.S. companies are spending $85 billion every year on maintaining bad technology.

Related: How to Measure Technical Debt for Effective App Modernization Planning

Specific Legacy Software Maintenance Costs

According to Gartner, by 2025 companies will be spending 40% of their IT budgets on simply maintaining technical debt. But that’s not the worst of it. The direct financial cost of maintaining legacy systems is just the tip of the iceberg. There are other impacts on your company and its IT organization that may be even more significant. Let’s take a look at some of them.

Wasted Time

According to a survey by Stripe, out of a 41.1-hour average work week, the typical software developer spends 13.5 hours, or almost a third of their time, addressing technical debt. When developers were asked how many hours per week they “waste” on the maintenance of bad legacy code, the average of their answers was 17.3 hours. That means that developers typically believe they are “wasting” more than 42% of their work week on legacy software.

Lowered Morale

The fact is, most of today’s developers just don’t like working on legacy code or dealing with technical debt. They’re usually far more interested in working with modern programming languages, architectures, and frameworks. 

For many of them, spending significant amounts of time dealing with older, technically obsolescent applications can seem mind-numbing, unproductive, and frustrating. When Stripe asked about issues that negatively impact developers’ morale:

  • 78% named “Spending too much time on legacy systems”
  • 76% cited “Paying down technical debt”

The natural result of low morale on a development team is decreased productivity and increased turnover. With the U.S. currently experiencing a shortage of more than a million software developers, the costs for finding, hiring, and training replacements for unhappy employees can be significant.

Opportunity Costs

Not only does technical debt impose a direct financial cost on companies, but there is a very real opportunity cost as well—the time devoted to maintaining legacy applications is time that’s not being spent to develop the innovations that can propel a company forward in its marketplace. A recent Deloitte report highlights the importance of this issue:

“The accumulation of technical debt adversely affects an organization’s ability to innovate and employ new technologies … which makes it harder for the organization to retain its market share, secure clients, and stay on track with market trends.”

Other Indirect Costs

There are other costs of maintaining legacy software systems that, while perhaps not easy to quantify, are nevertheless quite real. These include:

  • Slow test and release cycles: Technical debt makes legacy apps brittle (easy to break) and opaque (hard to understand), which lengthens upgrade/test/release cycle times.
  • Inability to meet business goals: The inability to quickly release and deploy innovative new applications or features can cripple a company’s ability to meet its marketplace goals.
  • Security exposures: Legacy apps were not designed to modern security standards, and neither were the quick fixes, patches, and ad hoc workarounds that typically have been incorporated over time.

A report from McKinsey sums up the negative impact of technical debt this way:

“Poor management of tech debt hamstrings companies’ ability to compete.”

Overcoming the Challenges of Maintaining Legacy Applications

Continuing to spend your company’s time and resources on keeping “venerable” applications running is a losing proposition—the cost of maintaining legacy software systems will only increase over time. Instead, the key to maintaining the value these applications have for the organization is to modernize them to bring them into today’s technological universe. 

That means transforming the typically monolithic structure of these apps into a cloud-native microservices architecture. The result will be a codebase that has minimal technical debt, and that can easily be adapted, upgraded, and integrated with other cloud resources.

But the modernization process is not without its own challenges. The average app modernization project costs $1.5 million and takes about 16 months to complete. And after all that investment of time and resources, 79% of those projects fail. Speaking of companies that have not had the success they hoped for with their application modernization efforts, McKinsey reports that:

“In our experience, this poor performance is a result of companies’ uncertainty about where to start or how to prioritize their tech-debt efforts. They spend significant amounts of money on modernizing applications that aren’t major contributors to tech debt, for example, or try to modernize applications in ways that won’t actually reduce tech debt.”

This assessment points toward two critical elements of a successful legacy app modernization program:

  1. You must choose the right modernization strategy – it’s not enough to simply migrate legacy apps to the cloud. Instead, true modernization involves refactoring monolithic legacy codebases into microservices.
  2. To know where to start and how to prioritize your modernization efforts, you need comprehensive, quantifiable data concerning the complexity, risk, and technical debt of your legacy app portfolio.

Let’s look at this data requirement in a little more detail.

Related: Succeed with an Application Modernization Roadmap

Getting the Data You Need for Legacy App Modernization Success

As the McKinsey report indicates, without good data that allows you to assess which of your legacy apps need to be modernized, and in what order, your modernization efforts are likely to fall short. But asking developers to manually assess a legacy application portfolio that may contain multiple applications, some with perhaps tens of millions of lines of code and thousands of classes, is rarely a viable approach. 

The task of unraveling the functionalities and dependencies of a large, non-modularized, monolithic codebase is simply too complex for humans to perform effectively in any reasonable timeframe. As one IT leader told McKinsey,

“We were surprised by the hidden complexity, dependencies and hard-coding of legacy applications, and slow migration speed.”

Instead of a manual approach, what would serve you best is the use of an automated, AI-enabled analysis platform that can perform the required static and dynamic analyses of your legacy apps in a fraction of the time your developers would require. Such a solution will also provide the information you need to quantify the expected ROI of your modernization program.

The vFunction platform offers all those features and more.

To see first-hand how vFunction can help you modernize your legacy apps, schedule a demo today.

Key Roles to Hire for an App Modernization Dream Team

For many companies today, modernizing their legacy apps is a top priority. They recognize that although those apps are critical to their business operations, they’re also major hindrances to the organization’s ability to keep pace in its marketplace. 

That’s because legacy apps, which are typically structured as monolithic architectures, are very difficult to update to meet the rapidly changing market and technological demands that characterize today’s commercial landscape.

In response to that challenge, companies are mounting projects to modernize their legacy software. But modernizing a suite of legacy apps is not a quick and easy process, and finding workers with the requisite skills and expertise to staff such projects can be difficult. 

In fact, two recent surveys found that a shortfall of skills and expertise was one of the top reasons why application modernization had failed at their organization. What are the skills required for building an effective app modernization team? To answer that question, let’s start by looking at the application modernization process.

What Application Modernization is Really All About

The goal of application modernization is to restructure a monolithic legacy app from an isolated, stand-alone codebase that doesn’t interact easily, if at all, with the modern cloud-centric ecosystem, into a microservices architecture that is cloud-native in its capabilities. How is that restructuring accomplished? A report from Capgemini describes the process this way:

“Modernizing means transforming existing software, taking an agile approach to development across people, processes, and technology, and embracing cloud-native development, including microservices and containerization, hybrid cloud integration, and agile and DevOps methodologies.”

The key phrase in this description is “taking an agile approach to development across people, processes, and technology.” A legacy app modernization team should be organized according to the Agile/DevOps methodology that characterizes the modern approach to software development.

Deciding Your App Modernization Objective

Application modernization involves refactoring or rearchitecting the monolith into a set of small, autonomous microservices.

Companies have sometimes attempted to gain the benefits of modernization by simply migrating their legacy apps, with minimal changes, to the cloud. That approach is migration, not modernization, and it has proven to be ineffective. As a report from Google explains,

“Although CIOs have successfully migrated some applications to the cloud, according to a McKinsey study, around 80 percent of them report that they have not achieved the agility or business outcomes they sought from application modernization.”

Migrating an app to the cloud can, by itself, provide some benefits, such as DevOps improvements in deployability and security, along with the ability to shut down data center resources. The problem is that the migrated application remains monolithic and continues to suffer from all the deficiencies of that architecture. Migration alone does little to deliver cloud benefits such as increased scalability, greater development velocity, and reduced technical debt.

That’s why full modernization is accomplished by refactoring the app into microservices. The app is thereby transformed into a cloud-native architecture that’s easy to update and that can integrate with and take full advantage of the technologically advanced resources available in the cloud.

Related: Cloud Modernization Approaches: Rehost, Replatform, or Refactor?

App Modernization Team Requirements

The fact that microservices are designed to operate independently of one another has a major impact on how application modernization teams are structured. That’s because of Conway’s Law, which was first articulated by software expert Melvin Conway in 1967:

“Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.”

The implication of Conway’s Law for app modernization is that an organization that aims at producing small, autonomous, independent microservices ought to be structured as small, autonomous, independent teams. 

If the organization adopts a different structure (for example, the large integrated teams normally used for monolithic apps), the product it produces will reflect the way the teams are organized, whatever the intent for that product may have been. As software engineer Alex Kondov emphatically warns,

“You can’t fight Conway’s Law… Time and time again, when a company decides that it doesn’t apply to them they learn a hard lesson… If the company’s structure doesn’t change the software will slowly evolve into something that mirrors it.”

Domain-Driven Design (DDD) in App Modernization

Application modernization teams formed in compliance with Conway’s Law will each take full ownership of one or more microservices. But what should be the scope of each of those microservices, and therefore of the team that supports it? Tomas Fernandez of Semaphore provides a useful answer to that question:

“Domain-Driven Development allows us to plan a microservice architecture by decomposing the larger system into self-contained units, understanding the responsibilities of each, and identifying their relationships.”

DDD helps you to identify the functional domains and subdomains in a monolithic codebase and draw boundaries around each function that will be implemented as a microservice. It also allows you to employ key modernization best practices such as the Strangler Fig Pattern to carefully shut off old functions and domains in the monolith as new microservices replace them.
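
As a minimal sketch (the Billing domain and all names here are hypothetical, not taken from the article), a bounded context in Java can be expressed as a package whose only public surface is an interface; the implementation behind it can live in the monolith today and be swapped for a microservice later via the strangler façade.

```java
// Hypothetical bounded context: the interface is the boundary, and nothing outside
// the "billing" package depends on how it is implemented.
package billing;

public interface BillingService {
    Invoice createInvoice(String orderId);

    // Simple value type owned by this context (records require Java 16+).
    record Invoice(String invoiceId, String orderId, double amount) {}
}
```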

Related: Organizing and Managing Remote Dev Teams for Application Modernization

Key Roles in Modernization

Jason Smith, Chief User Experience Officer at dotCMS, observes that,

“Agile Development, which some refer to as ‘DevOps’, means having a number of small teams working on individual, smaller projects so that those projects are able to get a team’s undivided attention. This enables the individual projects to be completed quicker. Microservice architecture fits into this development model perfectly, as each small team can own and focus on one service.”

Because the process of restructuring a monolithic codebase into microservices fits “perfectly” into the Agile model, we’ll look to that model to identify the roles you’ll need to fill for your app modernization dream team.

  1. Corporate leadership – May include an Executive Sponsor who will provide both the budget and overall guidance for the project, as well as other leaders such as the CIO.
  2. Product Owner – Sets requirements for the project from the viewpoint of the customer. This role, which typically is filled in-house, assigns team roles and tasks and takes full responsibility for the success of the project.
  3. Project Manager – Responsible for the day-to-day operation of teams. Provides leadership for team members, defines goals, oversees progress, and ensures that budget and schedule milestones are met.
  4. Modernization Architect – Translates business requirements into technical deliverables. Responsible for defining the microservice architecture the team will implement and for ensuring effective integration of the modernized app into the cloud environment.
  5. Legacy App Expert – Has the skills needed to analyze the app to understand its functionality and the internal and external dependencies that must be accounted for when it is restructured into microservices.
  6. Senior Developers – Individuals who have the normal range of software development skills that allow them to function as, for example, front-end or back-end developers or UI/UX designers.
  7. QA Engineer – Responsible for defining comprehensive test coverage to ensure that each microservice and the restructured app as a whole function as intended.

Each microservice team won’t necessarily have all of these roles—some, such as corporate leaders or developers who analyze legacy apps, may work with several or all of the teams. Also, on small microservice teams, several roles may be filled by one person.

How Do You Find the Talent You Need?

The first question is, to what degree can you tap into your existing staff to fill the roles required for your modernization project? To answer that question, you’ll need to assess what skillsets are available in-house and whether any that are lacking can be supplied through training.

If you do have to go outside to find the skills or expertise you need, be prepared to face some challenges. The U.S. is currently experiencing a shortage of more than a million software developers, and even if you can find them, they’re hard to keep. One recruiter, DaedTech founder Erik Dietrich, declares that,

“The only way to find software developers is to go prying them loose from other firms.”

And there’s an additional challenge when you’re staffing an app modernization project: many developers just don’t want to work on legacy code. According to one survey, 78% of developers say that “Spending too much time on legacy systems” has a “negative impact” on their personal morale.

Automation Can Minimize Your Need to Hire

Legacy monoliths may contain tens of millions of lines of code and thousands of classes. Having developers manually conduct the comprehensive static and dynamic analyses necessary to uncover all of the app’s functions and dependencies can be extremely time-consuming, error-prone, costly, and from the developer’s point of view, boring. And the process of decomposing the codebase into services and reimplementing them as microservices is similarly fraught.

But an AI-enabled, automated modernization platform can perform those chores accurately and comprehensively in a fraction of the time a human would require. With that kind of assistance, not only will you need fewer new hires for your modernization efforts, but you’ll also boost the morale of your existing workers by releasing them to spend more of their time on exciting innovations rather than on decidedly unexciting legacy code.

The vFunction platform employs sophisticated AI capabilities to quickly and comprehensively analyze complex monolithic applications to uncover hidden functionalities and dependencies. Plus, it can automate about 90% of the process of restructuring a monolithic codebase into microservices.

To see first-hand how vFunction can help you build an effective modernization team while minimizing the need to hire new development talent, request a demo.

Eliminating Technical Debt: Where to Start?

This article is the first in a four-part series. In part two, we’ll explore the challenges in navigating the cultural change of modernization. Part three will delve into sustaining the resulting transformation. We’ll wrap up in part four, as we discuss evolving toward modern-day technology goals.

At its most basic, technical debt represents some kind of technology mess that someone has to clean up. In many cases, technical debt results from poorly written code, but more often than not it is the result of evolving requirements that the existing technology simply cannot keep up with.

Technical debt can accrue across the technology landscape, and one could even argue that it extends beyond technology altogether into other areas of the business, for example, process debt.

Within the technology arena, technical debt breaks down into two basic categories: infrastructure and applications.

Compared to application technical debt, infrastructure technical debt is the more straightforward to reduce. Application technical debt, in contrast, is a knottier problem, because there are so many places technical debt can hide in existing applications.

Simply identifying this debt is challenge enough. Prioritizing its reduction is also difficult. Eliminating the debt once and for all can be a task of Herculean proportions.

Here’s how to start on this journey.

Assessing Your Current State

The first step in an effective assessment of technical debt is to ensure you start with the big picture. Application technical debt may be where the most urgent problems lie, but even so, it’s important to place these issues into the proper context.

The first consideration here is the business and its requirements. Where is existing technical debt causing the business or your customers the most pain?

It’s also important to include operational and cost factors in any assessment. In many cases, for example, older applications require older hardware – and thus the plan for infrastructure technical debt and the corresponding application plan are interdependent.

A recent survey by Wakefield Research of enterprise software architects and developers indicated that the most difficult step in application modernization was securing the budget and resources for the project, followed by “knowing what to modernize” and “building a business case.” The only way to address these challenges is to accurately calculate, up front, the current technical debt in those applications as part of a data-driven plan.

Cost is also an important consideration in any technical debt reduction plan. Some debt reduction projects will inevitably be more expensive than others but won’t necessarily deliver more value. You’re looking for the projects with the most ‘bang for the buck.’

Rationalize Your Applications

Once you’ve assessed the technical debt across your application landscape, it’s time to sort your application portfolio into four main buckets:

Refactoring: Applications in this category are more or less meeting their requirements, but some internal issue with the code is bogging them down. In this situation, refactoring is the best approach – reworking the code in place without necessarily changing the overall functionality of the application.
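
As a small, purely illustrative example of what “reworking the code in place” can mean (the shipping-cost logic below is invented), a nested conditional is replaced with guard clauses while the observable behavior stays exactly the same.

```java
// Invented shipping-cost logic, shown before and after an in-place refactoring.
// The observable behavior is identical; only the structure of the code changes.
class ShippingCostBefore {
    double cost(double weightKg, boolean express) {
        double c;
        if (weightKg <= 0) {
            c = 0;
        } else {
            if (express) {
                c = 10 + weightKg * 2.5;
            } else {
                c = 5 + weightKg * 1.0;
            }
        }
        return c;
    }
}

class ShippingCostAfter {
    double cost(double weightKg, boolean express) {
        if (weightKg <= 0) return 0;              // guard clause replaces the outer branch
        double base  = express ? 10  : 5;
        double perKg = express ? 2.5 : 1.0;
        return base + weightKg * perKg;
    }
}
```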

Deprecating: Sometimes it’s simply not worth the cost and trouble of dealing with particular instances of technical debt. While it may provide value to clean up these situations, your assessment has determined that your time and money are best spent elsewhere.

It’s important, however, to flag such code as something you’ve intentionally decided to leave alone (at least for the time being). Hence the notion of deprecation – an application you can still use, with the understanding that if you have a choice, use an alternative instead.

Replatforming: In some situations, the technical debt has more to do with the platform than the application itself. For example, an application might be running on an older version of .NET or Java EE.

The main goal of replatforming is typically to move an application to the cloud – without changing anything about the application that isn’t necessary to achieve this goal.

While replatforming is occasionally the best approach to dealing with technical debt, many organizations overuse it – putting debt-laden applications in the cloud without resolving that debt.

Rearchitecting: When replatforming falls short (as it often does), rearchitecting is typically the approach that best resolves application technical debt issues, though it also tends to be the most expensive option.

While it’s possible to replatform an application in the cloud without rearchitecting it, such rearchitecture is necessary in order to take advantage of many of the core benefits of the cloud, including scalability, elasticity, and automated provisioning.

When moving to a cloud native platform (typically Kubernetes), rearchitecture is an absolute must.

Rearchitecting to Reduce Technical Risk

While cloud native computing is all the rage today, it’s important to note that taking a cloud native approach may include a variety of architecture and technology options depending upon the business need.

Rearchitecting, therefore, requires a careful consideration of such needs, as well as available resources (human as well as technology), realistic timeframes for modernization, and the overall cost of the initiative.

Rearchitecture also never happens in a vacuum. Any rearchitecture project must take into account the existing application and management landscape in order to address issues of security, governance, and whatever integration is necessary to connect the rearchitected applications to other applications and services.

Because rearchitecture initiatives also include replatforming, it’s essential to plan ahead for any cloud migration requirements as part of the overall effort.

At some point in every rearchitecture effort, of course, it will be necessary to modernize the code as well as the architecture. Fortunately, there are approaches to modernizing code without the onerous responsibility of rewriting it line by line – even in situations where the architecture of the software is changing.

We’ll be covering application modernization in more depth in the rest of this four-part article series. Stay tuned!

The Intellyx Take

In many cases, organizations tackle modernization projects without thinking specifically about resolving technical debt. However, framing such initiatives in terms of technical debt is a good way to improve their chances of success.

Today, it’s possible to measure technical debt with a reasonable amount of accuracy using solutions like the vFunction Architectural Observability Platform. Such measurements give you a heat map across your application landscape, pointing out the areas with particularly knotty messes to clean up. They can even pinpoint where to start refactoring or rearchitecting within those apps, including the top component contributors to technical debt and the resulting ROI.

Without such a heat map, modernization efforts tend to go off the rails. Therefore, while technical debt is generally a bad thing, it does serve certain purposes – including directing the modernization team to the highest priority projects.

vFunction Enables Rapid Technical Debt Analysis

vFunction has launched Assessment Hub Express to help architects and developers quickly calculate the technical debt of their monolithic Java applications. Assessment Hub Express is a cloud-based version of the recently announced vFunction Assessment Hub and provides a rapid, self-service technical debt assessment solution that is free for up to 3 applications for one year.

Assessment Hub Express provides key diagnostic measurements so architects can measure technical debt based on critical architectural complexity, risk, and dependency scores. Based on the industry leading vFunction Assessment Hub, this lightweight SaaS tool quickly scans your Java app binaries and details the total cost of ownership (TCO) for your app in relation to technical debt, detecting the top classes that contribute to that debt and aging frameworks to address.

How It Works: the Mathematics Behind Assessment Hub Express

In a recent technical blog, Ori Saporta, vFunction co-founder and systems architect, outlined how vFunction is Using Machine Learning to Measure and Manage Technical Debt. This same science drives Assessment Hub Express and aligns with the approach outlined in leading academic and IEEE studies.  The common conclusion from vFunction engineers and the industry is that the most accurate way to measure technical debt is to focus on the dependencies between architectural components in the given application.

Using this approach, vFunction Assessment Hub Express measures the technical debt of a monolithic application based on the dependency graph between its classes. The details are described in the technical blog above, but in essence the algorithms perform a multifaceted analysis of the graph to arrive at a score that describes the application’s technical debt.

Based on the metrics of the dependency graphs, vFunction can identify architectural issues that represent real technical debt in the original architecture. Moreover, analyzing dependencies on two levels — class and community — produces the architectural measurements required to calculate a high-level score that can be used not only to identify technical debt in a single application, but also to compare technical debt between applications and to prioritize which apps should be modernized and how. To do that, Assessment Hub measures and displays three key indexes:

  • Complexity Index — the effort required to add new features to the software based on the degrees of entanglement between classes
  • Risk Index — the potential risk that adding new features has on the stability of existing ones based on the length of dependency chains
  • Overall Technical Debt Index — a synthesis of the above complexity and risk scores that represents the overall amount of extra work required when attempting to add new features.

Finally, vFunction applied these graph theory-derived metrics across hundreds of monolithic applications and used those benchmarks to train a machine learning model that correlates the values of the extracted metrics with the indexes and normalizes them to a score of 0 to 100.

The overall debt levels were then converted into currency units, depicting the level of investment required to add new functionality into the system. For example, for each $1 invested in application development and innovation, how much goes specifically to maintaining technical debt? This is intended to help organizations build a business case for handling and removing architectural technical debt from their applications.
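To make the idea concrete, here is a minimal, hypothetical sketch of how dependency-graph metrics along these lines could be computed and normalized into indexes from 0 to 100. It is not vFunction’s actual algorithm; the class names, weights, and benchmark bounds are assumptions for illustration only.

```python
# Hypothetical sketch: deriving complexity/risk-style indexes from a class
# dependency graph. This is NOT vFunction's algorithm, just an illustration
# of the general idea (entanglement + dependency-chain length, normalized).

from collections import defaultdict

# Toy class-level dependency graph: class -> classes it depends on (acyclic).
DEPENDENCIES = {
    "PatientController": ["PatientService", "AuthFilter"],
    "PatientService":    ["PatientRepository", "ChatGateway"],
    "PhysicianService":  ["PatientRepository", "ChatGateway"],
    "ChatGateway":       ["MessageBus"],
    "PatientRepository": [],
    "AuthFilter":        [],
    "MessageBus":        [],
}

def entanglement(graph):
    """Average fan-in + fan-out per class: a crude 'complexity' signal."""
    fan_in = defaultdict(int)
    for src, targets in graph.items():
        for dst in targets:
            fan_in[dst] += 1
    degrees = [len(targets) + fan_in[cls] for cls, targets in graph.items()]
    return sum(degrees) / len(degrees)

def longest_chain(graph):
    """Length of the longest dependency chain: a crude 'risk' signal."""
    memo = {}
    def depth(cls):
        if cls not in memo:
            memo[cls] = 1 + max((depth(d) for d in graph.get(cls, [])), default=0)
        return memo[cls]
    return max(depth(cls) for cls in graph)

def normalize(value, benchmark_min, benchmark_max):
    """Scale a raw metric to 0-100 against benchmark bounds (assumed here;
    vFunction describes learning such benchmarks across many applications)."""
    span = benchmark_max - benchmark_min or 1
    return round(100 * (value - benchmark_min) / span)

complexity_index = normalize(entanglement(DEPENDENCIES), 0.5, 6.0)
risk_index = normalize(longest_chain(DEPENDENCIES), 1, 12)
overall_debt_index = round(0.5 * complexity_index + 0.5 * risk_index)

# Illustrative "cost of innovation" framing: of each $1 spent on development,
# how much effectively goes to servicing technical debt?
debt_share = overall_debt_index / 100
print(f"Complexity index: {complexity_index}")
print(f"Risk index: {risk_index}")
print(f"Overall technical debt index: {overall_debt_index}")
print(f"Per $1.00 of development spend, ~${debt_share:.2f} services debt")
```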


Rapidly Assess the Technical Debt of Monolithic Java Applications

Assessment Hub Express provides a rapid, self-service, cloud-based technical debt assessment solution that allows architects and developers to:

  • Measure technical debt based on critical architectural complexity, risk, and dependency scores
  • Automatically evaluate the innovation-to-technical-debt cost ratio and the total cost of ownership (TCO) improvement that modernization can deliver
  • Identify and recommend the top 10 classes contributing to technical debt
  • Share an exportable PDF report to build the business case for modernization
  • Report on aging software frameworks, compile versions, and JARs, classified into aging, modern, or unknown categories

Monoliths to Microservices: 4 Modernization Best Practices

This post was originally featured on TheNewStack, sponsored by vFunction.

When it comes to refactoring monolithic applications into microservices, most engineering teams have no idea where to start. Additionally, a recent survey revealed that 79% of modernization projects fail, at an average cost of $1.5 million and 16 months of work.

In other articles, we discussed the necessity of developing competencies for assessing your application landscape in a data-driven way to help you prioritize your first big steps. Factors like technical debt accumulation, cost of innovation and ownership, complexity and risk are important to understand before blindly embarking on a modernization project.

Event storming exercises, domain-driven design (DDD), the Strangler Fig Pattern and others are all helpful concepts to follow here, but what do you as an architect or developer actually do to refactor a monolithic application into microservices?

There is a large spectrum of best practices for getting the job done, and in this post, we look at some specific actions for intelligently decomposing your monolith into microservices.

These actions include identifying service domains, merging two services into one, renaming services to something more accurate and removing services or classes as candidates for microservice extraction. The best part: Instead of trying to do any of this manually, we’ll be using artificial intelligence (AI) plus automation to achieve our objectives.

Best Practice #1: Automate the Identification of Services and Domains

Surveys have shown that manually analyzing a monolith using sticky notes on whiteboards takes too long, costs too much and rarely ends in success. Which architect or developer on your team has the time and ability to stop what they’re doing to review millions of lines of code and tens of thousands of classes by hand? Large monolithic applications need an automated, data-driven way to identify potential service boundaries.

The Real-World Approach

Let’s select a readily available, real-world application as the platform in which we’ll explore these best practices. As a tutorial example for Java developers, Oracle offers a medical records (MedRec) application, also known as the Avitek Medical Records application, which is a traditional monolith using WebLogic and Java EE.

Using vFunction, we will initiate a “learning” phase using dynamic analysis, static analysis and machine learning based on the call tree and system flows to identify ideal service domains.
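As a rough illustration of the general idea (and not the learning algorithm vFunction actually uses), the sketch below groups classes into candidate service domains purely by call-graph connectivity, after setting aside shared utility classes. The MedRec-style class names are assumptions.

```python
# Rough illustration of clustering a call graph into candidate service
# domains. vFunction combines dynamic analysis, static analysis and machine
# learning; this sketch just groups classes by call-graph connectivity after
# setting aside shared utility classes. Class names are hypothetical.

CALLS = [
    ("PatientRegistrationController", "PatientService"),
    ("PatientService", "PatientRepository"),
    ("PhysicianController", "PhysicianService"),
    ("PhysicianService", "RecordRepository"),
    ("AdminController", "AdminService"),
]
SHARED = {"Logger", "SessionUtil"}  # utilities not specific to any domain

def service_domains(calls, shared):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:          # path halving for efficiency
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Connect classes that call each other, ignoring shared utilities.
    for caller, callee in calls:
        if caller not in shared and callee not in shared:
            union(caller, callee)

    # Each connected component becomes a candidate service domain.
    domains = {}
    for cls in parent:
        domains.setdefault(find(cls), set()).add(cls)
    return list(domains.values())

for i, domain in enumerate(service_domains(CALLS, SHARED), start=1):
    print(f"Candidate service {i}: {sorted(domain)}")
```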

Image 1: This services graph displays individual services identified for extraction

In Image 1, we see a services graph in which services are shown as spheres of different sizes and colors, as well as lines (edges) connecting them. Each sphere represents a service that vFunction has automatically identified as related to a specific domain. These services are named and detailed on the right side of the screen.

The size of the sphere represents the number of classes contained within the service. The colors represent the level of class “exclusivity” within each service, referring to the percentage of classes that exist only within that service, as opposed to classes shared across multiple services.

Red represents low exclusivity, blue medium exclusivity and green high exclusivity. Higher class exclusivity indicates better boundaries between services, fewer interdependencies and less code duplication. Taken together, these traits indicate that it will be less complex to refactor highly-exclusive services into microservices.
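As a simplified sketch of how such an exclusivity percentage could be computed, consider the following; the service-to-class assignments and the color thresholds are made-up assumptions rather than vFunction’s exact calculation.

```python
# Simplified sketch of "class exclusivity": the percentage of a service's
# classes that appear only in that service. Service/class assignments and
# color thresholds below are assumptions for illustration.

SERVICE_CLASSES = {
    "PatientChatWebSocket":   {"PatientChatEndpoint", "ChatSession", "ChatMessage"},
    "PhysicianChatWebSocket": {"ChatSession", "ChatMessage"},
    "RecordBroker":           {"RecordFacade", "RecordMapper"},
}

def exclusivity(service, all_services):
    own = all_services[service]
    elsewhere = set().union(
        *(classes for name, classes in all_services.items() if name != service)
    )
    exclusive = own - elsewhere          # classes found in this service only
    return round(100 * len(exclusive) / len(own))

def color(pct):
    """Assumed thresholds: red = low, blue = medium, green = high exclusivity."""
    if pct < 20:
        return "red"
    if pct < 67:
        return "blue"
    return "green"

for name in SERVICE_CLASSES:
    pct = exclusivity(name, SERVICE_CLASSES)
    print(f"{name}: {pct}% exclusive -> {color(pct)}")
```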

Images 2 and 3: Solid and dashed lines represent different relationships between services

The solid lines here represent common resources that are shared across the services (Image 2). Common resources include things like beans, synchronization objects, read-only DB transactions and tables, read-write DB transactions and tables, websockets, files and embedded files. The dashed lines represent method calls between the services (Image 3).

The black sphere in the middle represents the remaining monolith, which contains classes and resources that are not specific to any particular domain and thus have not been selected as candidates for extraction.

By using automation and AI to analyze and expose service boundaries previously hidden inside the black box of the monolith, you can now begin manipulating services inside a suggested reference architecture, which clears the way for better decisions based on data-driven analysis.

Best Practice #2: Consolidate Functionality and Avoid Duplication

When everything was in the monolith, your visibility was somewhat limited. Once you expose the suggested service boundaries, you can begin to make decisions and test design concepts — for example, identifying overlapping functionality in multiple services.

The Real-World Approach

When does it make sense to consolidate disparate services with similar functionality into a single microservice? The most basic example is that, as an architect, you may see an opportunity to combine two services that appear to overlap — and we can identify these services based on the class names and level of class exclusivity.

Image 4: Two similar services have been identified to be merged

In the services graph (Image 4), we see two similar chat services outlined with a white ring: PatientChatWebSocket and PhysicianChatWebSocket. We can see that the physician chat service (red) has 0% dynamic exclusivity and that the patient chat service (blue) has slightly higher exclusivity at 33%.

Neither of these services is using any shared resources, which indicates that we can merge these into a single service without entangling anything by our actions.

Image 5: Confirming the decision to merge services can be rolled back immediately with the push of a button

By merging two similar services, you are able to consolidate duplicate functionality as well as increase the exclusivity of classes in the newly merged service (Image 5). As we’re using vFunction Platform in this example, everything needed to logically bind these services is taken care of — classes, entry points and resources are intelligently updated.

Image 6: A newly merged single service now represents two previous chat services

Merging services is as simple as dragging and dropping one service onto the other, and after vFunction Platform recalculates the analysis of this action, we see that the sphere is now green, with a dynamic exclusivity of 75% (Image 6). This indicates that the newly-merged service is less interconnected at the class level and gives us the opportunity to extract this service with less complexity.
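Conceptually, merging two services and recalculating exclusivity looks something like the sketch below; the class assignments are hypothetical, and vFunction performs this recalculation automatically when you merge services in the graph.

```python
# Conceptual sketch of merging two services and recalculating exclusivity.
# Class assignments are hypothetical; this only illustrates why a merge of
# overlapping services tends to raise exclusivity.

SERVICES = {
    "PatientChatWebSocket":   {"PatientChatEndpoint", "ChatSession", "ChatMessage"},
    "PhysicianChatWebSocket": {"PhysicianChatEndpoint", "ChatSession", "ChatMessage"},
    "RecordBroker":           {"RecordFacade", "RecordMapper"},
}

def exclusivity(service, services):
    own = services[service]
    elsewhere = set().union(*(c for s, c in services.items() if s != service))
    return round(100 * len(own - elsewhere) / len(own))

def merge(services, a, b, new_name):
    """Return a new service map with services a and b combined."""
    merged = dict(services)
    merged[new_name] = merged.pop(a) | merged.pop(b)
    return merged

before = {s: exclusivity(s, SERVICES) for s in SERVICES}
after_merge = merge(SERVICES, "PatientChatWebSocket",
                    "PhysicianChatWebSocket", "ChatService")
after = {s: exclusivity(s, after_merge) for s in after_merge}

print("Before:", before)   # the two chat services share most of their classes
print("After: ", after)    # ChatService now owns its classes exclusively
```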

Best Practice #3: Create Accurate and Meaningful Names for Services

We all know that naming things is hard. When dealing with monolithic services, we can really only use the class names to figure out what is going on. With this information alone, it’s difficult to accurately identify which classes and functionality may belong to a particular domain.

The Real-World Approach

In our example, vFunction has automatically derived service domain names from the class names shown on the right side of the screen in Image 7. As an architect, you need to be able to rename services according to your preferences and requirements.

Image 7: Rename a merged service to something more accurate

Let’s now go back to the two chat services we merged in the last section. Whereas previously we had a service for both the patient and physician chat, we now have a single service that represents both profiles, so the name PatientChatWebSocket is no longer accurate, and may cause misunderstandings for other developers working on this service in the future. We can decide to select a better name, such as ChatService (Image 7).

Image 8: Rename an automatically identified service to something more meaningful

In Image 8, we can see another service named JaxRSRecordFacadeBroker (+2). The (+2) part here indicates that we have entry points belonging to multiple classes. You may find this name unnecessarily descriptive, so you can change it simply to RecordBroker.

By renaming services in a more accurate and meaningful way, you can ensure that your engineering team can quickly identify and work with future microservices in a straightforward way.

Best Practice #4: Identify Functionality That Shouldn’t Be a Separate Microservice

What qualities suggest that functionality previously contained in a monolith deserves to be a microservice? Not everything should become a microservice, so when would you want to remove a service as a candidate for separation and extraction?

Well, you may decide that some services don’t actually belong in a separate domain, for example, a filter class that simply filters messages. Because this isn’t exclusive to any particular service, you can decide to move it to a common library or another service in the future.

The Real-World Approach

When removing functionality as a candidate for future extraction as a microservice, you are deciding not to treat this class as an individual entry point for receiving traffic. Let’s look at the AuthenticatingAdministrationController service (Image 9), which is a simple controller class.

Image 9: Removing a very simple, non-specific service

In Image 9, we can see from its red color that the selected class has low exclusivity, and also that it is a very small service, containing only one dynamic class, one static class and no resources. You can decide that this should not be a separate service by itself and remove it by dragging and dropping it onto the black sphere in the middle (Image 10).

Image 10: Relocating the class back to the monolith

By relocating this class back to the monolith, we have decided that this particular functionality does not meet the requirements to become an individual microservice.

In this post, we demonstrated some of the best practices that architects and developers can follow to refactor a monolithic application into bounded contexts and accurate domains for future microservice extraction.

By using the vFunction Platform, much of the heavy lifting and manual efforts have been automated using AI and data-driven analysis. This ensures that architects and development teams can spend time focusing on refining a reference architecture based on intelligent suggestions, instead of spending thousands of hours manually analyzing small chunks of code without the appropriate “big picture” context to be successful.

App Modernization Challenges That Keep CIOs Up at Night

Companies today must be highly agile to meet ever-evolving marketplace demands. But the legacy applications many still depend on for much of their business-critical processing are ill-equipped to support that kind of agility. That’s why identifying and overcoming app modernization challenges is critically important.

The Consortium For Information & Software Quality (CISQ) highlights the necessity for app modernization in this way:

After decades of operation, they [legacy apps] may have become less efficient, less secure, unstable, incompatible with newer technologies and systems, and more difficult to support due to loss of knowledge and/or increased complexity or loss of vendor support. In many cases, they represent a single point of failure risk to the business.

CISQ notes that companies now spend 75% of their IT budgets on their legacy systems. That’s money that’s not being spent to provide the innovations that are critical for success in today’s market environment.

For that reason, modernizing legacy systems and applications so that they can fully participate in today’s cloud-centric ecosystem is critically important. Yet according to a 2022 survey of senior IT professionals, 79% of app modernization efforts fail, and those failures represent an enormous waste of time and resources. In this article, we want to examine the particular app modernization challenges that may give CIOs sleepless nights as they contemplate upgrading their legacy applications.

Why Overcoming App Modernization Challenges is Critical

More than half (56%) of the respondents to a survey of corporate CIOs say that their legacy applications are significant obstacles to their efforts toward digital transformation. But because those applications still perform functions that are critical for day-to-day business operations, they can’t simply be eliminated.

Rather, to gain the technological agility that’s necessary for meeting ever-changing marketplace demands, companies must find ways to integrate their legacy apps into today’s cloud-centric ecosystem.

That’s what application modernization is all about. Industry analyst David Weldon puts it this way:

“Application modernization is the process of taking old applications and the platforms they run on and making them “new” again by replacing or updating each with modern features and capabilities that better align with current business needs.”

When done well, application modernization not only provides the flexibility and agility that enables companies to react quickly to changing marketplace conditions, but it also yields better application performance, increased efficiency for developers, and reduced overall costs.

Challenges of Application Modernization

The app modernization challenges most likely to keep CIOs awake at night fall into three major categories: costs, time, and risk. Let’s take a closer look at each.

1. Costs

There’s no question that any significant legacy app modernization effort will require a substantial investment of both money and time. According to one survey, the average application modernization project costs $1.5 million and takes 16 months, and still, 79% of these initiatives fail.

Obtaining the necessary budget commitment is often the biggest obstacle CIOs face in initiating a legacy app modernization project. That’s especially the case when a company’s legacy apps seem to still be functioning as intended and providing value to the organization. That perception may cause senior management to take an attitude of “if it ain’t broke, we can’t afford to fix it.”

Yet, the risks and costs of doing nothing are substantial. Using a standardized metric it calls the Cost of Poor Software Quality, CISQ calculated that in 2020 it cost businesses $520 billion to maintain legacy software.

In addition, as the technological context for which they were designed recedes further and further into the past, legacy apps become more brittle (easily breakable) and may constitute, as CISQ notes, a “single point of failure risk” that could trigger huge unexpected costs if a breakdown should occur.

How can a CIO deal with the inevitable budget limitations that could seriously curtail any effort at legacy app modernization? First, costs must be contained; second, senior management must be shown the positive ROI that legacy app modernization can yield.

Containing Legacy App Modernization Costs

Legacy application modernization, which normally involves refactoring monolithic legacy codebases to convert them to a cloud-native microservices architecture, requires a high level of expertise within the development team. Not only must developers understand the legacy app itself (and its technological framework), but they must also be skilled in navigating the open-source environment of the cloud.

That presents a two-fold problem. First, the developers who were most familiar with the legacy app are usually no longer available, and finding equally knowledgeable replacements is nearly impossible. Second, while developers who are at home with cloud technologies are available, they come at a cost that is often beyond what companies can pay to staff a modernization project.

The answer is not to try to do everything yourself. Instead, you’ll want to work with partners that already have the skills and experience required for efficiently modernizing monolithic codebases.

Related: The CIO Guide to Modernizing Monolithic Applications

Making the ROI Case for Legacy App Modernization

Getting the budget commitments needed for an app modernization effort requires getting buy-in from the executive team. You’ll need to present quantitative information that establishes the ROI that can be expected from such a project. That data can be acquired by use of an AI-enabled automated tool that can help you assess the technical effort required, the associated costs, and the savings that will ultimately be gained.

2. Time

Refactoring legacy apps to give them an essentially cloud-native architecture is a highly complex task. A monolithic codebase is basically a single unit that has function implementations and dependencies woven throughout the code in non-obvious ways.

Manually unraveling millions of lines of code to expose those functions and dependencies requires both a high degree of technical expertise and the investment of many months or even years of developers’ time. A report by McKinsey quotes one IT leader as saying,

“We were surprised by the hidden complexity, dependencies and hard-coding of legacy applications, and slow migration speed.”

But the key word in this scenario is “manually.” The degree of complexity, and the time required to manage it, can be substantially reduced by employing the kind of AI-enabled, automated analysis tool we mentioned before. The use of such a tool can allow developers to accomplish in just weeks what would take many months if done manually.

3. Risk

Perhaps the scariest of the app modernization challenges that may keep a CIO up at night is the potential that, after a significant investment of time and money, the project might fail to deliver the expected benefits. Even worse is the risk that faults might be introduced into a business-critical legacy app, causing significant disruptions to company operations.

Those risks are real. As we’ve seen, almost 80% of legacy app modernization efforts end in failure. And there are risk factors inherent in the process of refactoring monolithic codebases into microservices that can rarely be avoided.

For example, there’s often a knowledge gap caused by the fact that the original developers of a legacy app are no longer available, and written documentation is often incomplete or outdated. In its many years of service, legacy code will usually have been modified, adapted, and patched to meet specific needs of the moment, but such changes are rarely adequately documented. As a result, developers engaged in a modernization effort might overlook important functions that are hidden in the code.

Minimizing the Risk Factor

How can such risks be mitigated? The key is good planning and taking a step-by-step approach that allows you to fully test changes before they are incorporated into the application.

Start by assessing your legacy app portfolio to identify and prioritize those apps that need immediate modernization to meet current or future business goals versus those that can be left alone for now or simply migrated to the cloud (via rehosting or replatforming) with minimal changes. This is another area in which having an automated, AI-based analysis tool is critical.

In a microservices architecture, each service embodies a single function that can be implemented and thoroughly tested without impacting the application as a whole. Your modernization plan should be based on replacing pieces of functionality one at a time with their microservice implementations. The plan should also include a process for quickly and seamlessly backing a microservice out if problems arise when it’s incorporated into the application.
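One common way to implement that kind of incremental, reversible cutover is a strangler-fig-style router with a per-function toggle. The sketch below is a generic illustration rather than a prescribed implementation; the function names and the toggle store are assumptions.

```python
# Generic sketch of an incremental cutover with rollback: route each piece of
# functionality either to the legacy monolith or to its new microservice,
# controlled by a per-function toggle that can be flipped back instantly.
# Function names and the toggle store are illustrative assumptions.

ROUTES = {
    # function name -> (legacy handler, microservice handler, use_microservice)
    "get_patient_record": ("monolith.records.get", "records-svc.get", True),
    "send_chat_message":  ("monolith.chat.send",   "chat-svc.send",   False),
}

def dispatch(function_name, routes=ROUTES):
    """Send a request to the microservice if its toggle is on, else to the monolith."""
    legacy, microservice, use_new = routes[function_name]
    target = microservice if use_new else legacy
    print(f"{function_name} -> {target}")
    return target

def roll_back(function_name, routes=ROUTES):
    """Back a microservice out by flipping its toggle to the legacy path."""
    legacy, microservice, _ = routes[function_name]
    routes[function_name] = (legacy, microservice, False)

dispatch("get_patient_record")   # served by the new microservice
roll_back("get_patient_record")  # problem found: revert with one switch
dispatch("get_patient_record")   # traffic flows back to the monolith
```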

Related: Succeed with an Application Modernization Roadmap

How vFunction Helps CIOs Overcome App Modernization Challenges

We’ve seen how having an AI-based, automated tool can make a huge difference in minimizing the cost, time, and risk factors of legacy app modernization. And that’s exactly what the vFunction Platform provides.

The vFunction Assessment Hub can automatically analyze legacy apps and generate quantitative measures of code complexity and interdependence. It produces hard numbers that quantify the amount of effort required to modernize each app, providing the information you need to prioritize your modernization efforts and estimate ROI for the project.

The vFunction Modernization Hub automatically transforms complex monolithic applications into microservices. It uncovers hidden process flows and dependencies in applications, thereby minimizing the chance that important functions will be overlooked or incorrectly implemented. By making use of such automated features, vFunction accelerates modernization projects by 10-15x, saving hundreds of thousands of dollars per application. To see first-hand how vFunction can help you overcome app modernization challenges and sleep well at night, schedule a demo today.

Organizing and Managing Remote Dev Teams for Application Modernization

Many companies still depend on legacy applications for some of their most business-critical processing. But those apps typically don’t support the kind of technological agility that’s needed for continuing success in today’s ever-changing marketplace environment. That’s why modernizing legacy apps is an accelerating trend among leading companies.

But app modernization is not easy—it requires highly skilled developers who understand both the legacy app and the modern cloud ecosystem. Assembling such a team on-site can be difficult. It’s often easier to find the needed skills and organize the team for maximum effectiveness if team members can work remotely. According to Forbes, which declares that remote work is the new normal,

“When it comes to the tech workforce, it takes more than simply offering remote opportunities to get employees motivated. Employers must embrace flexibility and build (or reinforce) a strong, supportive remote work culture to ensure teams are engaged and high-performing.”

That’s why understanding how to assemble and manage remote legacy app modernization teams is vitally important.

The Goal of App Modernization: From Monoliths to Microservices

Legacy apps are often a severe drag on a company’s ability to innovate at the pace required in today’s fast-changing marketplace and technological environments. That’s because such apps are typically monolithic, meaning that the codebase is organized as a single unit.

Because various function implementations and dependencies are interwoven throughout the code, an attempt to change any specific behavior could impact the entire application in unexpected ways, potentially causing it to fail.

When legacy apps are used for a company’s most important operational processes, such failures cannot be tolerated. When they occur, development teams may be required to stop work on the innovations that are so necessary to a company’s continued marketplace success and take an all-hands-on-deck approach to fixing the problem as quickly as possible.

In contrast to the typical legacy app, software based on a cloud-native microservices architecture can be updated far more easily and safely from remote locations. Microservices are small units of code that perform a single task. Because they are designed to function independently of one another, changes made to one microservice can’t ripple through the rest of the application. That’s why restructuring a legacy app from a monolith to a microservices architecture gives it a much greater level of adaptability.

The goal of legacy application modernization is to substantially improve an app’s adaptability and maintainability by restructuring its codebase from a monolith to microservices.

Related: Application Modernization and Optimization: What Does It Mean?

How System Architecture Correlates With Organizational Structure

In 1967 Melvyn Conway formulated what’s come to be known as Conway’s Law:

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

Software engineer Alex Kondov highlights the practical implications of Conway’s Law:

Imagine a small company with a handful of engineers all sitting in the same room. They will probably end up with a more tightly coupled solution that relies on their constant communication. [In other words, a monolith].

A large company with teams in different time zones working on separate parts of the product will need to come up with a more distributed solution. That will allow them to work autonomously, build and deploy without interfering with each other… An organization in which teams need to operate in a fully autonomous manner would naturally come to an architecture like microservices.

The organizational structure of your legacy app modernization team will determine the kind of software it ultimately produces. Although you may aim at creating a microservices-based application, if your team is organized in the tightly coupled manner of Kondov’s first example, it will be predisposed toward a monolithic architecture rather than a microservices one.

Optimizing Your Team’s Structure to Support Microservices

The principles of Domain-Driven Design (DDD) provide a useful framework for organizing legacy app modernization teams. As Tomas Fernandez explains,

“Domain-Driven Development allows us to plan a microservice architecture by decomposing the larger system into self-contained units, understanding the responsibilities of each, and identifying their relationships.”

According to María Gómez, Head of Technology for ThoughtWorks, examples of domains include broad business areas such as finance, health, or retail. Each domain may contain several sub-domains. The finance domain, for example, might have sub-domains of payments, statements, or credit card applications.

The DDD paradigm allows developers to identify the domains and subdomains that exist in a monolithic codebase, and draw the boundaries that delineate each service that will be implemented as a microservice. Every microservice embodies a single business goal and has a well-defined function and communications interface. This allows each microservice to run independently of any others.
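For example, a bounded context carved out of a finance monolith might expose nothing more than a narrow, well-defined contract like the hypothetical one sketched below; the names and fields are purely illustrative, not drawn from any particular system.

```python
# Hypothetical sketch of a bounded context with a narrow, well-defined
# interface. Other teams depend only on this contract, never on the
# internals, which is what lets each microservice evolve independently.

from dataclasses import dataclass
from typing import Protocol

@dataclass
class PaymentRequest:
    account_id: str
    amount_cents: int
    currency: str = "USD"

@dataclass
class PaymentResult:
    payment_id: str
    accepted: bool

class PaymentsService(Protocol):
    """The only surface other teams are allowed to call."""
    def submit_payment(self, request: PaymentRequest) -> PaymentResult: ...
    def get_status(self, payment_id: str) -> str: ...
```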

Each team is fully and solely responsible for the microservices that they develop, and functions, to a considerable degree, independently of other teams. Once a team is made aware of the communications interface defined for each microservice, it can work independently and asynchronously without needing to interact with other teams about how their microservices are implemented.

This organizational pattern favors remote development teams. Each team needs a high degree of internal asynchronous communication as they work out the design of the microservice for which they are responsible. But less communication is needed between teams.

For that reason, remote, loosely-coupled development teams are perfect for converting monolithic legacy apps to microservices. But there are some potential pitfalls tech leaders should be aware of.

Related: What is a Monolithic Application? Everything you need to know

Challenges of Remote Teams

With all of the advantages remote development teams provide for the application modernization process, there are some definite challenges that IT leaders will have to overcome to make their use of such teams effective. Let’s take a brief look.

1. Work-Life Balance

Working remotely can affect work-life balance both positively and negatively: while ZDNet reports that 64% of developers say that working remotely has improved their work-life balance, 27% say that it’s difficult for them to unplug from the job.

Leaders must proactively help remote developers in this area. A big part of doing that is ensuring that goals and deadlines are realistic and that workers aren’t encouraged to spend what should be family or personal time on job-related activities.

2. Assessing Productivity and Progress

Because personal contact with workers is more constrained, leaders have less visibility into the productivity or progress of remote teams. Gaining that visibility may require more formal reporting arrangements, such as daily check-ins where workers report progress on their KPIs. Project management tools such as Trello or Asana can also help.

3. Communications

In the office, developers naturally consult and collaborate informally. That’s more difficult when they are working remotely. In fact, in Buffer’s 2022 State Of Remote Work survey, 56% of respondents identify “how I collaborate and communicate” as a top remote work issue, while 52% say they feel less connected to coworkers. Scheduling regular virtual team meetings using tools like Zoom or Google Chat can help.

4. Work Schedules Across Time Zones

It’s 9 am in San Francisco, and your West Coast team members are starting their workday, but team members in Europe are finishing theirs. How can a team collaborate across time zones? One approach is to focus on asynchronous communication methods, such as group chats and emails, that don’t require individuals to be online at the same time.

5. Company Culture

Company culture is absorbed most easily through face-to-face interactions with leaders and peers. The isolation inherent in remote work makes instilling that culture among team members difficult. John Carter, Founder of TCGen, offers this suggestion:

“Make the unconscious cues in the company culture conscious. Refer to them often and reinforce them. Company culture can follow your team members home, but only if it is made explicit and constantly reinforced.”

Why Remote Teams are the Future

Not only are remote teams, by their distributed nature, ideal for implementing distributed, cloud-based microservices applications, but they also represent a trend that may redefine the IT landscape well into the future.

The COVID-19 pandemic accelerated the use of remote software development teams. As companies learned how to onboard, train, and manage a remote workforce, they realized that disregarding geographical limitations in their hiring allowed them to lower costs and increase quality by tapping into a wider developer talent pool. Bjorn Lundberg, Senior Client Partner at 3Pillar Global describes the trend this way:

“Contract workers, freelancers, and outsourced teams have been on the rise for a while now… As remote collaboration becomes a fixture of the modern workplace, American companies increasingly view outsourcing software development as an opportunity to extend their development talent without exhausting their budgets.”

Even full-time employees want to work remotely: according to the 2022 State of Remote Engineering Report, 75% of developers would prefer to work remotely most of the time.

How vFunction Can Empower Your App Modernization Teams

As we’ve seen, the ideal application modernization team should be relatively small. vFunction helps small teams maximize their effectiveness by providing an AI-enabled, automated platform that reduces the legacy app restructuring workload by orders of magnitude.

vFunction can automatically analyze complex monolithic applications, with perhaps millions of lines of code, to reveal hidden functionalities and dependencies. It can then automatically transform those apps into microservices. To see first-hand how vFunction helps remote application modernization teams maximize their effectiveness, request a demo today.

IT Leader Strategies for Effectively Managing Technical Debt

In a report on managing technical debt, Google researchers make a startling admission:

“With a large and rapidly changing codebase, Google software engineers are constantly paying interest on various forms of technical debt.”

What’s true of Google is very likely true of your company as well, especially if you have legacy applications you still depend on for important business functions. If you do, you’re almost certainly carrying a load of technical debt that is hindering your ability to innovate as quickly and as nimbly as you need to in today’s fast-changing marketplace and technological environments.

Technical debt is an issue you cannot afford to ignore. As an article in CIO Magazine explains,

“CIOs say reducing technical debt needs increasing focus. It isn’t wasting money. It’s about replacing brittle, monolithic systems with more secure, fluid, customizable systems. CIOs stress there is ROI in less maintenance labor, fewer incursions, and easier change.”

But what, exactly, is technical debt, and why is managing it so vital for companies today?

Why Managing Technical Debt is Critical

What is technical debt? According to Ori Saporta: “Technical debt, in plain words, is an accumulation over time of lots of little compromises that hamper your coding efforts.”

In other words, technical debt is what happens when developers prioritize speed over quality. The problem is that, just as with financial debt, you must eventually pay off your technical debt, and until you do, you’ll pay interest on the principal.

  • The “interest” on technical debt consists of the ongoing charges you incur in trying to keep flawed, inflexible, and outmoded applications running as the technological context for which they were designed recedes further and further into the past. Software developers spend, on average, about a third of their workweek addressing technical debt. Plus, there’s also the opportunity cost of time that’s not being spent to develop the innovations that can help propel a company ahead in its marketplace.
  • The “principal” on technical debt is what it costs to clean up (or replace) the original messy code and bring the application into the modern world. Companies typically incur $361,000 of technical debt for every 100,000 lines of code.

Managing your technical debt is critical because the price you’ll pay for not doing so, in terms of time, money, focus, and lost market opportunities, will grow at an ever-accelerating pace until you do.
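To see how quickly those figures add up, here is a back-of-the-envelope calculation using the interest and principal figures cited above; the team size, salary, and codebase size are illustrative assumptions.

```python
# Back-of-the-envelope cost of carrying technical debt, using the figures
# cited above ($361,000 of debt per 100,000 lines of code; roughly a third
# of developer time spent servicing debt). Team size, salary, and codebase
# size are illustrative assumptions.

DEBT_PER_100K_LOC = 361_000        # "principal", from the figure cited above
DEBT_TIME_SHARE = 1 / 3            # "interest": share of the workweek spent on debt

developers = 20                    # assumption
avg_fully_loaded_salary = 150_000  # assumption, USD per year
lines_of_code = 1_500_000          # assumption

principal = DEBT_PER_100K_LOC * (lines_of_code / 100_000)
annual_interest = developers * avg_fully_loaded_salary * DEBT_TIME_SHARE

print(f"Estimated principal to pay down: ${principal:,.0f}")
print(f"Estimated annual 'interest' in developer time: ${annual_interest:,.0f}")
```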

Managing Technical Debt: Getting Started

A report from McKinsey highlights how a company can begin dealing with its technical debt:

“[A] degree of tech debt is an unavoidable cost of doing business, and it needs to be managed appropriately to ensure an organization’s long-term viability. That could include ‘paying down’ debt through carefully targeted, high-impact interventions, such as modernizing systems to align with target architecture.”

The place to start in managing technical debt is with modernizing legacy applications to align with a target architecture, which today is usually the cloud. Legacy applications weren’t designed to work in the cloud context, and it’s very difficult to upgrade them to do so. That’s because such apps often have a monolithic system architecture.

Monolithic code is organized as a single unit with various functionalities and dependencies interwoven throughout the code. The coding shortcuts, ad hoc patches, and documentation inadequacies that are typical sources of technical debt in legacy applications are embedded in the code in ways that are extremely difficult for humans to unravel. Worse, because of hidden dependencies in the code, any changes aimed at upgrading functions or adding features may ripple throughout the codebase in unexpected ways, potentially causing the entire application to fail.

From Monoliths to Microservices

Because a monolithic architecture makes upgrading an application for new features or for integration into the cloud ecosystem so difficult, the first step of legacy app modernization is usually to restructure the code from a monolith to a cloud-native, microservices architecture.

Microservices are small chunks of code that perform a single task. Each can be deployed and updated independently of any others. This allows developers to change a specific function in an application by updating the associated microservice without the risk of unintentionally impacting the codebase as a whole.

The process of restructuring a codebase from a monolith to microservices will expose the hidden dependencies and coding shortcuts that are the source of technical debt.

Related: Migrating Monolithic Applications to Microservices Architecture

Options for Modernizing Legacy Apps

Gartner lists seven options for modernizing legacy applications:

  1. Encapsulate: Connect the app to cloud resources by providing API access to its existing data and functions, but without changing its internal structure and operations.
  2. Rehost (“Lift and Shift”): Migrate the application to the cloud as-is, without significantly modifying its code.
  3. Replatform: Migrate the application’s code to the cloud, incorporating small changes designed to enhance its functionality in the cloud environment, but without modifying its existing architecture or functionality.
  4. Refactor: Restructure the app’s code to a microservices architecture without changing its external behavior.
  5. Rearchitect: Create a new application architecture that enables improved performance and new capabilities.
  6. Rewrite: Rewrite the application from scratch, retaining its original scope and specifications.
  7. Replace: Throw out the original application, and replace it with a new one.

The first three options, encapsulation, rehosting, and replatforming, simply migrate an app to the cloud with minimal changes. They offer some improvements in terms of operating costs, performance, and integration with the cloud. However, they do little to reduce technical debt because there’s no restructuring of the legacy application’s codebase—if it was monolithic before being migrated to the cloud, it remains monolithic once there.

The last option, replacing the original application, can certainly impact technical debt, but because it’s the most extreme in terms of time, cost, and risk, it’s usually considered only as a last resort.

The most viable options, then, for removing technical debt are refactoring, rearchitecting, or rewriting the code.

Assessing Your Monolithic Application Landscape

The ideal solution for managing technical debt would be to immediately convert all of your legacy applications to microservices. But because any restructuring project involves significant costs in terms of money, time, and risk, trying to modernize every app in your portfolio is not a practical strategy for most companies.

That’s why your first step on the road to effectively managing technical debt should be surveying your portfolio of monolithic legacy applications to assess each in terms of its complexity, degree of technical debt, and the level of risk associated with upgrading it. With that information in hand, you can then prioritize each app based on the degree to which its value to the business justifies the amount of effort required to modernize it.

  • For apps with high levels of technical debt and great value to the business, consider full modernization through refactoring, rearchitecting, or rewriting.
  • Apps with lower levels of technical debt (meaning that they function acceptably as they are) or that have a lesser business value should be considered for simple migration through encapsulation, rehosting, or replatforming.
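That triage logic can be expressed as a simple scoring pass, sketched below with purely illustrative app names, scores, and thresholds rather than output from any tool.

```python
# Illustrative triage of an application portfolio: pair each app's technical
# debt score with its business value and recommend a modernization path.
# Scores, thresholds, and app names are assumptions for the example.

PORTFOLIO = [
    # (app name, technical debt score 0-100, business value 0-100)
    ("OrderManagement",  82, 90),
    ("ClaimsProcessing", 75, 40),
    ("InternalWiki",     30, 20),
]

def recommend(debt, value, debt_threshold=60, value_threshold=70):
    if debt >= debt_threshold and value >= value_threshold:
        return "Refactor / rearchitect (full modernization)"
    if debt >= debt_threshold:
        return "Schedule for later modernization"
    return "Encapsulate, rehost, or replatform (migrate as-is)"

# Review the highest-debt, highest-value apps first.
for app, debt, value in sorted(PORTFOLIO, key=lambda x: -(x[1] + x[2])):
    print(f"{app}: debt={debt}, value={value} -> {recommend(debt, value)}")
```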

Refactoring is Key

Refactoring is fundamental to managing technical debt for at least two reasons:

  1. It exposes the elements of technical debt, such as hidden dependencies and undocumented functionalities, that are inherent in an app’s original monolithic code. These must be well understood before any rearchitecting or rewriting efforts can be safely initiated.
  2. By converting an app to a cloud-native microservices architecture, refactoring positions it for full integration into the cloud ecosystem, making further upgrades and functional extensions relatively easy.

That’s why refactoring is normally the first stage in modernizing a monolithic legacy app. Then, if new capabilities or performance improvements are required that the original code structure does not support, rearchitecting may be in order. Or, if the development team wishes to avoid the complexities of rearchitecting existing code, they may opt to rewrite the application instead.

In any case, refactoring will normally be the initial step because it produces a codebase that developers can easily understand and work with.

Implement “Continuous Modernization”

Technical debt is unavoidable. As the pace of technological change continues to accelerate, even your most recently written or upgraded apps will slide relentlessly over time toward increased technical debt. That means you should plan to deal with your technical debt on a continuous basis, an approach known as continuous modernization. As John Kodumal, CTO and cofounder of LaunchDarkly, has said,

“Technical debt is inevitable in software development, but you can combat it by being proactive… This is much healthier than stopping other work and trying to dig out from a mountain of debt.”

You need to constantly monitor and clean up your technical debt as you go, rather than waiting until some application or system reaches a crisis point that requires an immediate all-out effort at modernization. In fact, continuous modernization leads to technical debt removal and should be a fundamental element of your CI/CD pipeline.

Related: Preventing Monoliths: Why Cloud Modernization is a Continuum

vFunction Can Help You Manage Your Technical Debt

As we’ve seen, the first step toward effectively managing your technical debt is to assess your suite of legacy apps to understand just how large the problem is. That has historically been a very complex and time-consuming task when pursued manually. But now the AI-driven vFunction platform can substantially simplify and speed up the process.

The vFunction Architectural Observability Platform will automatically evaluate your applications and generate quantitative measures of code complexity and risk due to interdependencies. It produces a number that represents the amount of technical debt associated with each app, providing you with just the information you need to prioritize your modernization efforts.

And once you’ve determined your modernization priorities, the vFunction Code Copy automates the process of actually transforming complex monolithic applications into microservices, which can result in immense savings of time and money. If you’d like a first-hand view of how vFunction can help your company effectively manage its technical debt, schedule a demo today.